\section{Introduction} \label{sec:intr} This paper is concerned with the inverse spectral theory for operators generated by the differential expression \begin{align} \nonumber \ell_n(y) := & y^{(n)} + \sum_{k = 0}^{\lfloor n/2\rfloor - 1} (\tau_{2k}(x) y^{(k)})^{(k)} \\ \label{defl} + & \sum_{k = 0}^{\lfloor (n-1)/2\rfloor - 1} \bigl((\tau_{2k+1}(x) y^{(k)})^{(k+1)} + (\tau_{2k+1}(x) y^{(k+1)})^{(k)}\bigr), \: x \in (0,1), \end{align} where $\lfloor a \rfloor$ denotes rounding down and the functions $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$ may be either integrable or distributional. Various aspects of spectral theory for such operators and related issues have been intensively studied in recent years (see, e.g., \cite{MS16, SS20, KM19, BH13, Pap16, BSHZ19, PB20, GN21, Bond21}). However, the general theory of inverse spectral problems for \eqref{defl} with arbitrary $n > 2$ has not been created yet. This paper aims to develop an approach to the reconstruction of the coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$ from the spectral data for a wide class of differential operators. \subsection{Historical background} Inverse problems of spectral analysis consist in the recovery of differential operators from their spectral information. Such problems arise in practice when one needs to determine certain physical parameters of a system from some measured data or to construct a model with desired properties. The majority of physical applications are concerned with linear differential operators of form \eqref{defl} with $n = 2, 3, 4$. For $n = 2$, expression \eqref{defl} turns into the Sturm-Liouville (Schr\"odinger) operator \begin{equation} \label{StL} -\ell_2(y) = -y'' + q(x) y, \end{equation} which models string vibrations in classical mechanics, electron motion in quantum mechanics, and is widely used in other branches of science and engineering.
Third-order linear differential operators arise in the inverse problem method for integration of the nonlinear Boussinesq equation (see \cite{DTT82, McK81}) and in mechanical problems of modeling thin-membrane flows of viscous liquid and elastic beam vibrations (see \cite{BP19} and references therein). Inverse spectral problems for fourth-order linear differential operators have attracted much attention because of applications in mechanics and geophysics (see \cite{Barc74, McL76, PK97, CPS98, Glad05, Mor15, BK15, JLX22} and references therein). The classical results of the inverse problem theory were obtained for the Sturm-Liouville operator \eqref{StL} with integrable potential $q(x)$ in the 1950s by Marchenko, Levitan, and their followers (see \cite{Mar77, Lev84}). They developed the transformation operator method, which reduces the nonlinear inverse Sturm-Liouville spectral problem to a linear Fredholm integral equation of the second kind. However, the transformation operator method proved to be ineffective for the higher-order differential operators \begin{equation} \label{ho} y^{(n)} + \sum_{k = 0}^{n-2} p_k(x) y^{(k)}, \quad n > 2. \end{equation} Note that the differential expression \eqref{defl} can be transformed into \eqref{ho} in the case of sufficiently smooth coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$. Thus, the development of inverse spectral theory for the higher-order operators \eqref{ho} required new approaches. Relying on the ideas of Leibenson \cite{Leib66, Leib71}, Yurko created the method of spectral mappings. This method allowed him to construct inverse problem solutions for the higher-order differential operators \eqref{ho} with regular (integrable) coefficients on the half-line $x > 0$ and on a finite interval $x \in (0, T)$ (see \cite{Yur92, Yur02}). The case of Bessel-type singularities was also considered \cite{Yur93, Yur95}.
Later on, the ideas of the method of spectral mappings were applied to a wide range of inverse spectral problems, e.g., to inverse problems for first-order differential systems \cite{Yur05}, for differential operators on graphs \cite{Yur16}, and for quadratic differential pencils \cite{BY12}. This method is based on the theory of analytic functions, mainly on contour integration in the complex plane of the spectral parameter. The method of spectral mappings reduces a nonlinear inverse problem to a linear equation in a suitable Banach space. This space is constructed in different ways for different operator classes. In particular, for differential operators on a finite interval, the main equation is usually derived in the space $m$ of infinite bounded sequences. It is also worth mentioning that an approach to inverse scattering problems for higher-order differential operators \eqref{ho} on the full line was developed by Beals et al. \cite{Beals85, Beals88}. Over the last 20 years, inverse problems have been actively investigated for second-order differential operators with distributional potentials (see, e.g., \cite{HM-sd, HM-2sp, HM-half, FIY08, MT09, SS10, HP12, Eckh14, Gul19, Bond21-AMP}). In particular, Hryniv and Mykytyuk \cite{HM-sd, HM-2sp, HM-half} transferred the transformation operator method to the Sturm-Liouville operators \eqref{StL} with potential $q(x)$ of class $W_2^{-1}(0,1)$ and so generalized the basic results of inverse problem theory to this class of operators. Note that the space $W_2^{-1}$ contains the Dirac $\delta$-function and the Coulomb potential $\frac{1}{x}$, which are used for modeling particle interactions in quantum mechanics \cite{Alb05}. The method of spectral mappings has been extended to the Sturm-Liouville operators with potentials of class $W_2^{-1}$ in \cite{FIY08, Bond21-AMP, Bond21-tamkang}.
This opens the possibility of constructing the inverse spectral theory for higher-order differential operators with distribution coefficients. However, till now, only the first steps have been taken in this direction. In \cite{Bond21, Bond22}, the uniqueness of recovering the higher-order differential operators with distribution coefficients on a finite interval and on the half-line has been studied. The goals of this paper are to derive the linear main equation of the inverse problem, to prove its unique solvability, and to obtain reconstruction formulas for the coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$ of various classes. \subsection{Problem statement and methods} Our treatment of the differential expression \eqref{defl} is based on \textit{the regularization approach}. Namely, we will assume that the differential equation \begin{equation} \label{eqv} \ell_n(y) = \lambda y, \quad x \in (0, 1), \end{equation} where $\lambda$ is the spectral parameter, can be equivalently transformed into the first-order system \begin{equation} \label{sys} Y'(x) = (F(x) + \Lambda) Y(x), \quad x \in (0, 1), \end{equation} where $Y(x)$ is a column vector function of size $n$, $\Lambda$ is the $(n \times n)$-matrix whose entry at the position $(n,1)$ equals $\lambda$ and all the other entries are zero, and $F(x) = [f_{k,j}(x)]_{k,j = 1}^n$ is a matrix function with the following properties: \begin{equation} \label{propF} \begin{array}{llll} f_{k,j}(x) \equiv 0, \quad & k + 1 < j, \qquad & f_{k,k+1}(x) \equiv 1, \quad & k = \overline{1,n-1}, \\ f_{k,k} \in L_2(0,1), \quad & k = \overline{1,n}, \qquad & f_{k,j} \in L_1(0,1), \quad & k > j, \quad \mbox{trace}(F(x)) = 0. \end{array} \end{equation} We denote the class of $(n \times n)$ matrix functions satisfying \eqref{propF} by $\mathfrak F_n$. 
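As a quick numerical illustration (not part of the theory), one can check the equivalence of \eqref{eqv} and \eqref{sys} for the simplest member of $\mathfrak F_2$, namely $F$ with all entries zero except $f_{1,2} = 1$; a sketch in Python, assuming NumPy and SciPy, with a purely illustrative choice of $\lambda$ and initial values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# For n = 2 with the trivial matrix F = [[0, 1], [0, 0]] (all tau_nu = 0),
# the system Y' = (F + Lambda)Y encodes y'' = lambda * y via Y = (y, y').
n = 2
lam = -np.pi**2                        # illustrative spectral parameter
F = np.array([[0.0, 1.0], [0.0, 0.0]])
Lam = np.zeros((n, n))
Lam[n - 1, 0] = lam                    # lambda at position (n, 1), zeros elsewhere

sol = solve_ivp(lambda x, Y: (F + Lam) @ Y, (0.0, 1.0), [0.0, np.pi],
                rtol=1e-10, atol=1e-12)
# With y(0) = 0, y'(0) = pi, the solution is y(x) = sin(pi*x), so y(1) = 0.
assert abs(sol.y[0, -1]) < 1e-6
```

For a nontrivial $F \in \mathfrak F_n$ the same code applies with the entries of $F$ replaced by (sampled) coefficient functions.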
By using any matrix $F \in \mathfrak F_n$, one can define the quasi-derivatives \begin{equation} \label{quasi} y^{[0]} := y, \quad y^{[k]} = (y^{[k-1]})' - \sum_{j = 1}^k f_{k,j} y^{[j-1]}, \quad k = \overline{1,n}, \end{equation} and the domain $$ \mathcal D_F = \{ y \colon y^{[k]} \in AC[0,1], \, k = \overline{0, n-1} \}. $$ \begin{df} \label{def:F} A matrix function $F(x) \in \mathfrak F_n$ is called \textit{an associated matrix} of the differential expression $\ell_n(y)$ if $\ell_n(y) = y^{[n]}$ for any $y \in \mathcal D_F$. We call a function $y$ \textit{a solution} of equation \eqref{eqv} if $y \in \mathcal D_F$ and $y^{[n]} = \lambda y$, $x \in (0,1)$. \end{df} For a function $y \in \mathcal D_F$, introduce the notation $\vec y(x) = \mbox{col} ( y^{[0]}(x), y^{[1]}(x), \ldots, y^{[n-1]}(x))$. Obviously, $y$ is a solution of equation \eqref{eqv} if and only if $Y = \vec y$ satisfies \eqref{sys}. The associated matrices for various classes of differential expressions $\ell_n(y)$ have been constructed, e.g., in \cite{MS16, MS19, KM19, VNS21, Bond22} (see also Subsections~\ref{sec:3}-\ref{sec:evenW} of this paper). For example, for the differential expression $\ell_2(y) = y'' - \tau_0 y$, $\tau_0 \in W_2^{-1}(0,1)$, that is, $\tau_0 = \sigma'_0$, $\sigma_0 \in L_2(0,1)$, the associated matrix has the form (see \cite{SS99}): $$ F(x) = \begin{bmatrix} \sigma_0(x) & 1 \\ -\sigma_0^2(x) & -\sigma_0(x) \end{bmatrix}. $$ For the regular case $\tau_{\nu} \in L_1(0,1)$, $\nu = \overline{0,n-2}$, the construction of the associated matrix $F(x)$ is well known (see \cite{EM99} and Subsection~\ref{sec:evenL} of this paper). The regularization of even-order ($n = 2m$) differential operators \eqref{defl} with distribution coefficients $\tau_{2k+j} \in W_2^{-(m-k-j)}(0,1)$, $k = \overline{0,m-1}$, $j = 0, 1$, was obtained by Mirzoev and Shkalikov \cite{MS16}. Later on, the case of odd order $n$ was considered in \cite{MS19}.
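For the $2 \times 2$ associated matrix above, the defining property $\ell_2(y) = y^{[2]}$ can be verified symbolically; a minimal sketch, assuming SymPy and a smooth stand-in for $\sigma_0$ so that all derivatives exist classically:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
s0 = sp.Function('sigma_0')(x)    # smooth stand-in for sigma_0 in L_2(0,1)

# Quasi-derivatives (quasi) for F = [[s0, 1], [-s0**2, -s0]]:
y0 = y
y1 = sp.diff(y0, x) - s0 * y0                 # y^[1] = y' - f_{1,1} y^[0]
y2 = sp.diff(y1, x) + s0**2 * y0 + s0 * y1    # y^[2] = (y^[1])' - f_{2,1} y^[0] - f_{2,2} y^[1]

# y^[2] should equal ell_2(y) = y'' - tau_0 y with tau_0 = sigma_0':
assert sp.simplify(y2 - (sp.diff(y, x, 2) - sp.diff(s0, x) * y)) == 0
```

Note how the terms involving $\sigma_0'$ cancel pairwise, so that $y^{[2]}$ requires only $\sigma_0$, not its derivative; this is precisely the point of the regularization.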
Vladimirov \cite{Vlad17} suggested a more general construction which, in particular, includes both cases \cite{MS16, MS19}. It is worth mentioning that in \cite{MS16, MS19, Vlad17} differential expressions of a more general form than \eqref{defl} were studied, with the coefficients of $y^{(n)}$ and $y^{(n-1)}$ not necessarily equal to $1$ and $0$, respectively. However, in this paper, we confine ourselves to the form \eqref{defl}, which is natural for studying the inverse problems \cite{Bond21, Bond22}. In what follows, we assume that $\ell_n(y)$ is any differential expression that has an associated matrix in terms of Definition~\ref{def:F}. We do not impose any additional restrictions on $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$, since we are interested in formulating abstract results which can be applied to various classes of differential operators. Certain restrictions on $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$ will be imposed below when necessary. Let us proceed to the inverse problem formulation. Suppose that we have a differential expression of form \eqref{defl} and an associated matrix $F(x) = [f_{k,j}]_{k,j = 1}^n$. By using the corresponding quasi-derivatives \eqref{quasi}, define the linear forms \begin{equation} \label{defU} \mathcal U_{s,a}(y) := y^{[p_{s,a}]}(a) + \sum_{j = 1}^{p_{s,a}} u_{s,j,a} y^{[j-1]}(a), \quad s = \overline{1, n}, \quad a = 0, 1, \end{equation} where $p_{s,a} \in \{ 0, \ldots, n-1 \}$, $p_{s,a} \ne p_{k,a}$ for $s \ne k$, and $u_{s,j,a}$ are some complex numbers. In addition, introduce the matrices $U_a = [u_{s,j,a}]_{s,j = 1}^n$, $u_{s,j,a} := \delta_{j,p_{s,a} + 1}$ for $j > p_{s,a}$, $a = 0, 1$. Here and below, $\delta_{j,k}$ is the Kronecker delta. We call the triple $(F(x), U_0, U_1)$ the problem $\mathcal L$. Below we introduce various characteristics related to the problem $\mathcal L$.
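A toy $n = 2$ instance may clarify the forms \eqref{defU}; the values of $p_{s,0}$ and of the free coefficient below are illustrative, not taken from any particular problem:

```python
import numpy as np

# Toy n = 2 instance of the linear forms (defU): choose p_{1,0} = 0, p_{2,0} = 1
# and a single free coefficient u_{2,1,0} = 5.  The constraint u_{s,j,a} =
# delta_{j, p_{s,a}+1} for j > p_{s,a} fixes the remaining entries.  Row s of
# U_0 lists the coefficients of (y^[0](0), y^[1](0)) in U_{s,0}(y).
U0 = np.array([[1.0, 0.0],    # U_{1,0}(y) = y(0)
               [5.0, 1.0]])   # U_{2,0}(y) = y^[1](0) + 5 y(0)
yvec0 = np.array([2.0, -1.0])           # stand-in values of (y^[0](0), y^[1](0))
forms = U0 @ yvec0                      # (U_{1,0}(y), U_{2,0}(y))
assert forms[0] == 2.0 and forms[1] == 9.0
```

In matrix form, the vector of values $(\mathcal U_{1,0}(y), \ldots, \mathcal U_{n,0}(y))$ is simply $U_0 \vec y(0)$, which is how $U_0$ enters the initial condition for $C(x, \lambda)$ below.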
Denote by $\{ C_k(x,\lambda) \}_{k = 1}^n$ the solutions of equation~\eqref{eqv} satisfying the initial conditions \begin{equation} \label{initC1} \mathcal U_{s,0} (C_k) = \delta_{s,k}, \quad s = \overline{1, n}. \end{equation} Equivalently, the $(n \times n)$-matrix function $C(x, \lambda) := [\vec C_k(x, \lambda)]_{k = 1}^n$ is the solution of the system \eqref{sys} with the initial condition $C(0, \lambda) = U_0^{-1}$. Therefore, the solutions $\{ C_k(x,\lambda) \}_{k = 1}^n$ are uniquely defined. Moreover, their quasi-derivatives $C_k^{[j]}(x, \lambda)$ are entire in $\lambda$ for each fixed $x \in [0,1]$, $k = \overline{1,n}$, $j = \overline{0,n-1}$. It has been proved in \cite[Section 4]{Bond21} that, for all $\lambda \in \mathbb C$ except for a countable set, equation \eqref{eqv} has the so-called \textit{Weyl solutions} $\{ \Phi_k(x,\lambda) \}_{k = 1}^n$ satisfying the boundary conditions \begin{equation} \label{bcPhi} \mathcal U_{s,0}(\Phi_k) = \delta_{s,k}, \quad s = \overline{1, k}, \qquad \mathcal U_{s,1}(\Phi_k) = 0, \quad s = \overline{k+1,n}. \end{equation} Define the matrix function $\Phi(x, \lambda) = [\vec \Phi_k(x, \lambda)]_{k = 1}^n$. The columns of the matrices $C(x, \lambda)$ and $\Phi(x, \lambda)$ form fundamental solution systems of \eqref{sys}. Consequently, the following relation holds: \begin{equation} \label{relM} \Phi(x, \lambda) = C(x, \lambda) M(\lambda), \end{equation} where the matrix function $M(\lambda)$ is called \textit{the Weyl matrix} of the problem $\mathcal L$ (see \cite{Bond21}). The notion of the Weyl matrix generalizes the notion of the Weyl function for second-order operators (see \cite{Mar77, Yur02}). Weyl functions and their generalizations play an important role in the inverse spectral theory for various classes of differential operators.
In particular, Yurko \cite{Yur92, Yur93, Yur95, Yur02} used the Weyl matrix as the main spectral characteristic for the reconstruction of the higher-order differential operators \eqref{ho} with regular coefficients. The analogous inverse problem for the differential expression of form \eqref{defl} can be formulated as follows. \begin{prob} \label{prob:Weyl} Given the Weyl matrix $M(\lambda)$, find the coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$. \end{prob} The uniqueness of the solution of Problem~\ref{prob:Weyl} has been proved in \cite{Bond21} for the Mirzoev-Shkalikov case: $n = 2m$, $\tau_{2k+j} \in W_2^{-(m-k-j)}(0,1)$ and $n = 2m+1$, $\tau_{2k+j} \in W_1^{-(m-k-j)}(0,1)$, $j = 0,1$. In \cite{Bond22}, the uniqueness of recovering the boundary condition coefficients from the Weyl matrix has been studied. It has been shown in \cite[Section~4]{Bond21} that the Weyl matrix $M(\lambda) = [M_{j,k}(\lambda)]_{j,k = 1}^n$ is unit lower-triangular, and its non-trivial entries have the form \begin{equation} \label{Mjk} M_{j,k}(\lambda) = -\frac{\Delta_{j,k}(\lambda)}{\Delta_{k,k}(\lambda)}, \quad 1 \le k < j \le n, \end{equation} where $\Delta_{k,k}(\lambda) := \det[\mathcal U_{s,1}(C_r)]_{s,r = k + 1}^n$ and $\Delta_{j,k}(\lambda)$ is obtained from $\Delta_{k,k}(\lambda)$ by the replacement of $C_j$ by $C_k$. The functions $C_r^{[s]}(1, \lambda)$, $r = \overline{1, n}$, $s = \overline{0,n-1}$, are entire in $\lambda$, and so are the functions $\Delta_{j,k}(\lambda)$, $1 \le k \le j \le n$. Hence, $M(\lambda)$ is meromorphic in $\lambda$, and the poles of the $k$-th column of $M(\lambda)$ coincide with the zeros of $\Delta_{k,k}(\lambda)$.
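The Laurent coefficients of a meromorphic function with a simple pole, such as an entry of $M(\lambda)$, can be extracted by contour integration around the pole; a numerical sketch for a toy scalar function, assuming NumPy:

```python
import numpy as np

lam0 = 2.0
def M(z):
    # toy scalar stand-in for an entry of the Weyl matrix: simple pole at lam0
    return 1.0 / (z - lam0) + 3.0 + 0.5 * (z - lam0)

# Contour integrals over a small circle around lam0 recover the Laurent
# coefficients: M_<-1> is the residue, M_<0> the next coefficient.
t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
z = lam0 + 0.1 * np.exp(1j * t)
dz = 0.1j * np.exp(1j * t) * (t[1] - t[0])
M_m1 = np.sum(M(z) * dz) / (2j * np.pi)
M_0 = np.sum((M(z) - M_m1 / (z - lam0)) / (z - lam0) * dz) / (2j * np.pi)
assert abs(M_m1 - 1.0) < 1e-10 and abs(M_0 - 3.0) < 1e-10
```

This is only an illustration of the notation $M_{\langle k \rangle}(\lambda_0)$ used below; in the paper these coefficients are objects of analysis, not of numerical computation.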
At the same time, the zeros of the entire functions $\Delta_{j,k}(\lambda)$, $1 \le k \le j \le n$, coincide with the eigenvalues of certain boundary value problems for equation \eqref{eqv}, and the inverse problem from the Weyl matrix (Problem~\ref{prob:Weyl}) is related to the inverse problem from $\frac{n(n+1)}{2}$ spectra (see \cite{Bond21} for details). We will say that the problem $\mathcal L$ belongs to the class $W$ if all the zeros of $\Delta_{k,k}(\lambda)$ are simple for $k = \overline{1,n-1}$. Then, in view of \eqref{Mjk}, the poles of $M(\lambda)$ are simple. In general, the function $\Delta_{k,k}(\lambda)$ can have at most a finite number of multiple zeros. The latter case can be treated by developing the methods of Buterin et al. \cite{But07, BSY13}, who considered non-self-adjoint Sturm-Liouville operators ($n = 2$) with regular potentials. However, the case of multiple zeros is much more technically complicated, so, in this paper, we always assume that $\mathcal L \in W$. Denote by $\Lambda$ the set of poles of the Weyl matrix. Consider the Laurent series $$ M(\lambda) = \frac{M_{\langle -1 \rangle}(\lambda_0)}{\lambda - \lambda_0} + M_{\langle 0 \rangle}(\lambda_0) + M_{\langle 1 \rangle}(\lambda_0)(\lambda - \lambda_0) + \dots, \quad \lambda_0 \in \Lambda. $$ Denote \begin{equation} \label{defN} \mathcal N(\lambda_0) := [M_{\langle 0 \rangle}(\lambda_0)]^{-1} M_{\langle -1\rangle}(\lambda_0), \quad \lambda_0 \in \Lambda. \end{equation} We call the collection $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ \textit{the spectral data} of the problem $\mathcal L$. Obviously, the spectral data are uniquely specified by the Weyl matrix $M(\lambda)$, so Problem~\ref{prob:Weyl} can be reduced to the following problem. \begin{prob} \label{prob:sd-coef} Given the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$, find the coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$.
\end{prob} It is more convenient to study the reconstruction question for Problem~\ref{prob:sd-coef}. It is worth mentioning that, in fact, the Weyl matrix and the spectral data can be constructed according to the above definitions for any matrix function $F(x)$ of class $\mathfrak F_n$, not necessarily associated with any differential expression of form \eqref{defl}. But, in general, the matrix $F(x)$ is not uniquely specified by the Weyl matrix (see Example~4.5 in \cite{Bond22}). Therefore, in this paper, the solution of Problem~\ref{prob:sd-coef} is divided into two steps: $$ \{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda} \: \stackrel{(1)}{\to} \: \{ \Phi_k(x, \lambda) \}_{k = 1}^n \: \stackrel{(2)}{\to} \: \{ \tau_{\nu} \}_{\nu = 0}^{n-2}. $$ The recovery of the Weyl solutions $\{ \Phi_k(x, \lambda) \}_{k = 1}^n$ from the spectral data is studied for a matrix $F(x)$ of general form, and then reconstruction formulas are derived for $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$ of certain classes. For a fixed $F \in \mathfrak F_n$, we define the quasi-derivatives \eqref{quasi}, the expression $\ell_n(y) := y^{[n]}$, the problem $\mathcal L = (F(x), U_0, U_1)$, and its spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ as above, and focus on the following auxiliary problem. \begin{prob} \label{prob:sd} Given the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$, find the Weyl solutions $\{ \Phi_k(x, \lambda) \}_{k = 1}^n$. \end{prob} Let us briefly describe the method of solution. Along with $\mathcal L$, we consider another problem $\tilde {\mathcal L} = (\tilde F(x), \tilde U_0, \tilde U_1)$ of the same form but with different coefficients. Similarly to $\Phi(x, \lambda)$, define $\tilde \Phi(x, \lambda)$ for $\tilde {\mathcal L}$. An important role in our analysis is played by \textit{the matrix of spectral mappings}: $$ \mathcal P(x, \lambda) = \Phi(x, \lambda) [\tilde \Phi(x, \lambda)]^{-1}.
$$ For each fixed $x \in [0,1]$, the matrix function $\mathcal P(x, \lambda)$ is meromorphic in $\lambda$ with poles at the eigenvalues $\Lambda \cup \tilde \Lambda$. The method is based on the integration of certain functions over a special family of contours enclosing these eigenvalues. Applying the residue theorem, we derive an infinite system of linear equations. Further, that system is transformed into a linear equation in the Banach space $m$ of infinite bounded sequences. The main equation of the inverse problem has the form $$ (\mathbf{I} - \tilde R(x)) \psi(x) = \tilde \psi(x), \quad x \in [0,1], $$ where, for each fixed $x \in [0,1]$, $\psi(x)$ and $\tilde \psi(x)$ are elements of $m$, $\tilde R(x)$ is a linear compact operator in $m$, and $\mathbf{I}$ is the unit operator. The element $\tilde \psi(x)$ and the operator $\tilde R(x)$ are constructed from the model problem $\tilde {\mathcal L}$ and from the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$, $\{ \tilde \lambda_0, \tilde{\mathcal N}(\tilde \lambda_0) \}_{\tilde \lambda_0 \in \tilde \Lambda}$ of the two problems $\mathcal L$, $\tilde {\mathcal L}$, respectively, while the unknown element $\psi(x)$ is related to the desired functions $\{ \Phi_k(x, \lambda) \}_{k = 1}^n$. We prove that the operator $(\mathbf{I} - \tilde R(x))$ has a bounded inverse, and so the main equation is uniquely solvable (see Theorem~\ref{thm:main}). This implies the uniqueness of the solution of Problem~\ref{prob:sd}. Using the main equation, we obtain a constructive procedure for solving Problem~\ref{prob:sd} (see Algorithm~\ref{alg:1}). These results can be applied to a wide range of differential operators \eqref{defl} with associated matrices of class $\mathfrak F_n$. Further, by using the solution of the main equation, we derive reconstruction formulas for $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$.
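A finite-section analogue of the main equation may clarify its structure; the matrix below is a random small-norm stand-in for a truncation of the compact operator $\tilde R(x)$ at one fixed $x$ (a sketch assuming NumPy, not the paper's construction):

```python
import numpy as np

# Finite-section analogue of (I - R)psi = psi_tilde: truncate the sequence
# space m to N coordinates.  R has small norm, so I - R is invertible and the
# truncated equation is uniquely solvable, mirroring Theorem thm:main.
N = 50
rng = np.random.default_rng(0)
R = rng.standard_normal((N, N)) / (4 * N)   # spectral norm well below 1
psi_tilde = rng.standard_normal(N)

psi = np.linalg.solve(np.eye(N) - R, psi_tilde)
assert np.allclose((np.eye(N) - R) @ psi, psi_tilde)
```

In the actual method, invertibility of $\mathbf{I} - \tilde R(x)$ follows not from a norm bound but from the compactness of $\tilde R(x)$ together with the uniqueness theorem; the toy computation only illustrates the algebraic shape of the equation.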
We describe the general idea and then apply it to certain classes of operators: \medskip (i) $n = 3$, $\tau_1 \in L_2(0,1)$, $\tau_0 \in W_2^{-1}(0,1)$. \smallskip (ii) $n$ is even, $\tau_{\nu} \in L_2(0,1)$, $\nu = \overline{0,n-2}$. \smallskip (iii) $n$ is even, $\tau_{\nu} \in W_2^{-1}(0,1)$, $\nu = \overline{0,n-2}$. \medskip We obtain the uniqueness theorems and constructive algorithms for solving Problem~\ref{prob:sd-coef} in the cases (i)-(iii). Note that, although the functions $\tau_{\nu}$ in the case (ii) are regular, this case assumes less smoothness than the one considered by Yurko \cite{Yur02}. The reconstruction formulas have the form of series, and the main difficulties in our analysis are related to studying the convergence of those series. These difficulties increase for the case of non-smooth and/or distribution coefficients. In order to prove the series convergence, we use the Birkhoff-type solutions constructed by Savchuk and Shkalikov \cite{SS20} and the precise asymptotic formulas for the spectral data obtained in \cite{Bond22-asympt}. For the cases (ii) and (iii), we reconstruct the functions $\tau_{\nu}$ step-by-step for $\nu = n-2, n-3, \ldots, 1, 0$. A similar approach can be used in the case of odd $n$, with technical modifications. By using the reconstruction formulas, one can develop numerical methods for solving inverse spectral problems (see \cite{IY08} for the second-order case). However, this issue requires additional work. In this paper, we obtain theoretical algorithms, which in the future can be used for investigating the existence and stability of the inverse problem solution. It is worth mentioning that our method of inverse problem solution is the first one for higher-order differential operators with distribution coefficients. The obtained main equation and reconstruction formulas generalize the results of \cite{Bond21-tamkang} for Sturm-Liouville operators with distribution potentials.
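In the simplest case $n = 2$ with zero coefficients and Dirichlet-type conditions, the spectrum is known explicitly ($y'' = \lambda y$, $y(0) = y(1) = 0$ gives $\lambda_l = -(\pi l)^2$), which provides a cheap sanity check for any numerical implementation of the direct problem; a finite-difference sketch, assuming NumPy:

```python
import numpy as np

# Second-order finite differences for y'' = lambda*y, y(0) = y(1) = 0:
# the discrete eigenvalues approximate lambda_l = -(pi*l)^2.
N = 500
h = 1.0 / N
A = (np.diag(-2.0 * np.ones(N - 1)) + np.diag(np.ones(N - 2), 1)
     + np.diag(np.ones(N - 2), -1)) / h**2
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]   # closest to zero first
for l in range(1, 4):
    assert abs(eigs[l - 1] + (np.pi * l)**2) < 1e-2
```

Of course, for distribution coefficients a naive discretization like this is not adequate; it merely anchors the explicit model spectrum used throughout as a reference point.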
The other methods applied to the second-order operators (see, e.g., \cite{HM-sd, SS10}), to the best of the author's knowledge, appear to be ineffective for higher orders. \medskip The paper is organized as follows. In Section~\ref{sec:prelim}, we provide preliminaries and study the properties of the spectral data. Section~\ref{sec:main} is devoted to the contour integration and to the derivation of the main equation of the inverse problem in a Banach space. The unique solvability of the main equation is also proved. As a result, an algorithm for solving the auxiliary Problem~\ref{prob:sd} is obtained for arbitrary $F \in \mathfrak F_n$. In Section~\ref{sec:rec}, we derive the reconstruction formulas for the coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$ and study the convergence of the obtained series. Section~\ref{sec:concl} contains a brief summary of the main results. \section{Preliminaries} \label{sec:prelim} Throughout the paper, we use the following \textbf{notations}. \begin{enumerate} \item $I$ is the $(n \times n)$ unit matrix, $e_k$ is the $k$-th column of $I$, $k = \overline{1, n}$. \item The superscript $T$ denotes the matrix transpose. \item $\delta_{k,j} = \begin{cases} 1, \quad k = j, \\ 0, \quad k \ne j. \end{cases}$ \item $J := [(-1)^{k+1} \delta_{k,n-j+1}]_{k,j = 1}^n$, $J_a := [(-1)^{p_{k,a}^{\star}}\delta_{k,n-j+1}]_{k,j = 1}^n$, where $p_{k,a}^{\star} := n-1-p_{k,a}$, $a = 0, 1$. \item If, as $\lambda \to \lambda_0$, $$ A(\lambda) = \sum_{k = -q}^p a_k(\lambda - \lambda_0)^k + o((\lambda-\lambda_0)^p), $$ then $$ [A(\lambda)]_{|\lambda = \lambda_0}^{\langle k \rangle} = A_{\langle k \rangle}(\lambda_0) := a_k. $$ \item The notations $\lfloor x \rfloor$ and $\lceil x \rceil$ are used for rounding a real number $x$ down and up, respectively. \item The binomial coefficients are denoted by $C_n^k = \dfrac{n!}{k!(n-k)!}$.
\item Along with $\mathcal L$, we will consider the problems $\tilde {\mathcal L}$, $\mathcal L^{\star}$, $\tilde {\mathcal L}^{\star}$ of the same form but with different coefficients. We agree that, if a symbol $\gamma$ denotes an object related to $\mathcal L$, then the symbols $\tilde \gamma$, $\gamma^{\star}$, $\tilde \gamma^{\star}$ will denote the analogous objects related to $\tilde {\mathcal L}$, $\mathcal L^{\star}$, $\tilde {\mathcal L}^{\star}$, respectively. Note that the quasi-derivatives for the problems $\tilde {\mathcal L}$, $\mathcal L^{\star}$, $\tilde {\mathcal L}^{\star}$ are defined by using the matrices $\tilde F(x)$, $F^{\star}(x)$, $\tilde F^{\star}(x)$, respectively, which may be different from $F(x)$. \item The notation $y^{[k]}$ is used for quasi-derivatives defined by \eqref{quasi} (or analogously by using the entries of $\tilde F(x)$, $F^{\star}(x)$, or $\tilde F^{\star}(x)$). The notation $\vec y(x)$ is used for the column vector of the quasi-derivatives $y^{[0]}(x)$, $y^{[1]}(x)$, \dots, $y^{[n-1]}(x)$. \item In estimates, the symbol $C$ is used for various positive constants independent of $x$, $l$, $k$, etc. \item $a \stackrel{if \: (condition)}{\times} b = \begin{cases} a b, \quad \text{if (condition) holds}, \\ a, \quad \text{otherwise}. \end{cases}$. \end{enumerate} In Subsection~\ref{sec:star}, we define an auxiliary problem $\mathcal L^{\star} = (F^{\star}(x), U_0^{\star}, U_1^{\star})$ and study its properties. In Subsection~\ref{sec:sd}, the properties of the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ are investigated. \subsection{Problems $\mathcal L$ and $\mathcal L^{\star}$} \label{sec:star} For a matrix $F \in \mathfrak F_n$, define the matrix $F^{\star}(x) = [f_{k,j}^{\star}(x)]_{k,j = 1}^n$ as follows: \begin{equation} \label{fstar} f_{k,j}^{\star}(x) := (-1)^{k+j+1} f_{n-j+1, n-k+1}(x). \end{equation} Obviously, $F^{\star} \in \mathfrak F_n$. 
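Two structural properties of \eqref{fstar} are easy to check numerically: the map $F \mapsto F^{\star}$ is an involution, and $\mbox{trace}(F^{\star}(x)) = -\mbox{trace}(F(x))$, so $F^{\star}$ inherits the zero-trace requirement of the class $\mathfrak F_n$. A sketch, assuming NumPy:

```python
import numpy as np

def star(F):
    # entrywise (fstar): f*_{k,j} = (-1)^(k+j+1) * f_{n-j+1, n-k+1}, 1-based
    n = F.shape[0]
    G = np.empty_like(F)
    for k in range(1, n + 1):
        for j in range(1, n + 1):
            G[k - 1, j - 1] = (-1) ** (k + j + 1) * F[n - j, n - k]
    return G

rng = np.random.default_rng(1)
F = rng.standard_normal((4, 4))
assert np.allclose(star(star(F)), F)                  # involution
assert np.isclose(np.trace(star(F)), -np.trace(F))    # trace flips sign
```

(The random $F$ here ignores the remaining requirements of $\mathfrak F_n$, such as the unit superdiagonal; both identities are purely algebraic and hold entrywise for any square matrix.)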
Let $F(x)$ be a fixed matrix function of class $\mathfrak F_n$. Suppose that $y \in \mathcal D_F$ and $z \in \mathcal D_{F^{\star}}$, where the quasi-derivatives for $y$ are defined via \eqref{quasi} by using the entries of $F(x)$ and the quasi-derivatives for $z$ are defined as \begin{equation} \label{quasiz} z^{[0]} := z, \quad z^{[k]} = (z^{[k-1]})' - \sum_{j = 1}^k f^{\star}_{k,j} z^{[j-1]}, \quad k = \overline{1,n}, \end{equation} and $$ \mathcal D_{F^{\star}} := \{ z \colon z^{[k]} \in AC[0,1], \, k = \overline{0, n-1} \}. $$ Define \begin{gather*} \ell_n(y) := y^{[n]}, \quad \ell_n^{\star}(z) := (-1)^n z^{[n]}, \quad \langle z, y \rangle := \sum_{j = 0}^{n-1} (-1)^j z^{[j]} y^{[n-j-1]}. \end{gather*} \begin{lem} The following relation holds: \begin{equation} \label{wron1} \frac{d}{dx} \langle z, y \rangle = z \ell_n(y) - y \ell_n^{\star}(z). \end{equation} \end{lem} \begin{proof} Differentiation implies \begin{equation} \label{sm1} \frac{d}{dx} \langle z, y \rangle = \sum_{j = 0}^{n-1} (-1)^j (z^{[j]})' y^{[n-j-1]} + \sum_{j = 0}^{n-1} (-1)^j z^{[j]} (y^{[n-j-1]})'. \end{equation} From \eqref{quasiz} and \eqref{quasi}, we obtain $$ (z^{[j]})' = z^{[j+1]} + \sum_{s = 1}^{j+1} f_{j+1, s}^{\star} z^{[s-1]}, \quad (y^{[n-j-1]})' = y^{[n-j]} + \sum_{s = 1}^{n-j} f_{n-j,s} y^{[s-1]}. $$ Substituting the latter relations into \eqref{sm1}, we get \begin{align*} \frac{d}{dx} \langle z, y \rangle = & \sum_{j = 0}^{n-1} (-1)^j y^{[n-j]} z^{[j]} + \sum_{j = 0}^{n-1} (-1)^j \sum_{s = 1}^{n-j} f_{n-j,s} y^{[s-1]} z^{[j]} \\ & + \sum_{j = 0}^{n-1} (-1)^j y^{[n-j-1]} z^{[j+1]} + \sum_{j = 0}^{n-1} (-1)^j \sum_{s = 1}^{j+1} f_{j+1,s}^{\star} y^{[n-j-1]} z^{[s-1]}.
\end{align*} Note that \begin{align*} \sum_{j = 0}^{n-1} (-1)^j y^{[n-j]} z^{[j]} + \sum_{j = 0}^{n-1} (-1)^j y^{[n-j-1]} z^{[j+1]} & = y^{[n]} z + (-1)^{n-1} y z^{[n]}, \\ \sum_{j = 0}^{n-1} (-1)^j \sum_{s = 1}^{n-j} f_{n-j,s} y^{[s-1]} z^{[j]} & = \sum_{1 \le s \le j \le n} (-1)^{s+1} f_{n-s+1, n-j+1}y^{[n-j]} z^{[s-1]}, \\ \sum_{j = 0}^{n-1} (-1)^j \sum_{s = 1}^{j+1} f_{j+1,s}^{\star} y^{[n-j-1]} z^{[s-1]} & = \sum_{1 \le s \le j \le n} (-1)^{j+1} f_{j,s}^{\star} y^{[n-j]} z^{[s-1]}. \end{align*} Taking \eqref{fstar} into account, we arrive at \eqref{wron1}. \end{proof} If $y$ and $z$ satisfy the relations $\ell_n(y) = \lambda y$ and $\ell_n^{\star}(z) = \mu z$, respectively, then \eqref{wron1} readily implies \begin{equation} \label{wron2} \frac{d}{dx} \langle z, y \rangle = (\lambda - \mu) y z. \end{equation} Define $\vec y(x) = \mbox{col} ( y^{[0]}(x), y^{[1]}(x), \ldots, y^{[n-1]}(x))$ and $\vec z(x) = \mbox{col} ( z^{[0]}(x), z^{[1]}(x), \ldots, z^{[n-1]}(x))$ by using the corresponding quasi-derivatives \eqref{quasi} and \eqref{quasiz}, and the matrix $J := [(-1)^{k+1}\delta_{k,n-j+1}]_{k,j = 1}^n$. Then \begin{equation} \label{wrona1} \langle z, y \rangle_{|x = a} = [\vec z(a)]^T J \vec y(a). \end{equation} For $a = 0, 1$, let $U_a = [u_{s,j,a}]_{s,j = 1}^n$ be an $(n \times n)$ matrix such that $u_{s,j,a} = \delta_{j, p_{s,a} + 1}$ for $j > p_{s,a}$, where $p_{s,a} \in \{ 0, \ldots, n-1\}$, $p_{s,a} \ne p_{k,a}$ for $s \ne k$. The matrices $U_a$ define the linear forms $\mathcal U_{s,a}$ via \eqref{defU}. Along with $U_a$, consider the matrices \begin{equation} \label{defUs} U_a^{\star} := [J_a^{-1} U_a^{-1} J]^T, \quad a = 0, 1, \end{equation} where $J_a = [(-1)^{p_{k,a}^{\star}}\delta_{k,n-j+1}]_{k,j = 1}^n$, $p_{k,a}^{\star} := n - 1 - p_{n-k+1, a}$. 
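For $n = 2$ with the associated matrix from the Introduction, formula \eqref{fstar} gives $F^{\star} = F$, and the Lagrange identity \eqref{wron1} reduces to $\frac{d}{dx}(z y' - z' y) = z y'' - y z''$; a symbolic check, assuming SymPy and a smooth stand-in for $\sigma_0$:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
z = sp.Function('z')(x)
s0 = sp.Function('sigma_0')(x)   # smooth stand-in for sigma_0

# For n = 2 and F = [[s0, 1], [-s0**2, -s0]], (fstar) gives F* = F, so the
# quasi-derivatives of y and z have the same form.
def qd(u):                       # returns (u^[0], u^[1], u^[2]) via (quasi)
    u1 = sp.diff(u, x) - s0 * u
    u2 = sp.diff(u1, x) + s0**2 * u + s0 * u1
    return u, u1, u2

y0, y1, y2 = qd(y)
z0, z1, z2 = qd(z)
bracket = z0 * y1 - z1 * y0      # <z, y> = sum_j (-1)^j z^[j] y^[n-j-1], n = 2
lhs = sp.diff(bracket, x)
rhs = z * y2 - y * z2            # z*ell_2(y) - y*ell_2*(z), ell_2*(z) = z^[2]
assert sp.simplify(lhs - rhs) == 0
```

The cancellation of all $\sigma_0$-terms in $\langle z, y \rangle = z y' - z' y$ mirrors the general proof above, where the cross terms involving $f_{k,j}$ and $f^{\star}_{k,j}$ cancel by \eqref{fstar}.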
The matrices $U_a^{\star}$, $a = 0, 1$, generate the linear forms $$ \mathcal U_{s,a}^{\star}(z) = z^{[p_{s,a}^{\star}]}(a) + \sum_{j = 1}^{p_{s,a}^{\star}} u_{s,j,a}^{\star} z^{[j-1]}(a), \quad s = \overline{1, n}, \quad a = 0,1. $$ The matrices $U_a^{\star}$ are chosen in such a way that the following relation holds: \begin{equation} \label{wrona2} \langle z, y \rangle_{|x = a} = \sum_{s = 1}^n (-1)^{p_{s,a}^{\star}} \mathcal U_{s,a}^{\star}(z) \mathcal U_{n-s+1,a}(y) \end{equation} for any $y \in \mathcal D_F$, $z \in \mathcal D_{F^{\star}}$. Indeed, the right-hand side of \eqref{wrona2} can be represented in the matrix form $$ [U_a^{\star} \vec z(a)]^T J_a U_a \vec y(a). $$ Taking \eqref{wrona1} and \eqref{defUs} into account, we arrive at \eqref{wrona2}. Consider the problems $\mathcal L = (F(x), U_0, U_1)$ and $\mathcal L^{\star} = (F^{\star}(x), U_0^{\star}, U_1^{\star})$. For $\mathcal L$, the matrix functions $C(x, \lambda)$, $\Phi(x, \lambda)$, and $M(\lambda)$ were defined in the Introduction. For $\mathcal L^{\star}$, similarly denote by $\{ C_k^{\star}(x, \lambda) \}_{k = 1}^n$ and $\{ \Phi_k^{\star}(x, \lambda) \}_{k = 1}^n$ the solutions of equation $\ell_n^{\star}(z) = \lambda z$, $x \in (0, 1)$, satisfying the conditions \begin{gather} \nonumber \mathcal U^{\star}_{s,0} (C_k^{\star}) = \delta_{s,k}, \quad s = \overline{1, n}, \\ \label{bcPhis} \mathcal U_{s,0}^{\star}(\Phi_k^{\star}) = \delta_{s,k}, \quad s = \overline{1, k}, \qquad \mathcal U_{s,1}^{\star}(\Phi_k^{\star}) = 0, \quad s = \overline{k+1,n}. \end{gather} Put $C^{\star}(x, \lambda) := [\vec C_k^{\star}(x, \lambda)]_{k = 1}^n$, $\Phi^{\star}(x, \lambda) := [\vec \Phi_k^{\star}(x, \lambda)]_{k = 1}^n$. Then, the relation \begin{equation} \label{relMs} \Phi^{\star}(x, \lambda) = C^{\star}(x, \lambda) M^{\star}(\lambda) \end{equation} holds, where $M^{\star}(\lambda)$ is the Weyl matrix of the problem $\mathcal L^{\star}$.
\begin{lem} \label{lem:M} The following relations hold: \begin{gather} \label{MJM} [M^{\star}(\lambda)]^T J_0 M(\lambda) = J_0, \\ \label{PJP} [\Phi^{\star}(x, \lambda)]^T J \Phi(x, \lambda) = J_0. \end{gather} \end{lem} \begin{proof} The initial conditions \eqref{initC1} are equivalent to $U_0 C(0, \lambda) = I$. Using \eqref{relM}, we get $M(\lambda) = U_0 \Phi(0, \lambda)$. Similarly, $M^{\star}(\lambda) = U_0^{\star} \Phi^{\star}(0,\lambda)$. Hence \begin{gather} \nonumber A(\lambda) := [M^{\star}(\lambda)]^T J_0 M(\lambda) = [U_0^{\star} \Phi^{\star}(0,\lambda)]^T J_0 U_0 \Phi(0,\lambda), \quad A(\lambda) = [A_{k,j}(\lambda)]_{k,j = 1}^n,\\ \label{Akj} A_{k,j}(\lambda) = [U_0^{\star} \vec \Phi^{\star}_k(0,\lambda)]^T J_0 U_0 \vec \Phi_j(0, \lambda) = \sum_{s = 1}^n (-1)^{p_{s,0}^{\star}} \mathcal U_{s,0}^{\star}(\Phi_k^{\star}) \mathcal U_{n-s+1,0}(\Phi_j). \end{gather} On the one hand, using \eqref{Akj}, \eqref{bcPhi}, and \eqref{bcPhis}, we get $A_{k,j}(\lambda) = 0$ if $k + j > n + 1$ and $A_{k,j}(\lambda) = (-1)^{p_{k,0}^{\star}}$ if $k + j = n + 1$. On the other hand, \eqref{wrona2} and \eqref{Akj} imply $A_{k,j}(\lambda) = \langle \Phi_k^{\star}, \Phi_j \rangle_{|x = 0}$. It follows from \eqref{wron2} that $\langle \Phi_k^{\star}, \Phi_j \rangle$ does not depend on $x$. Consequently, $$ \langle \Phi_k^{\star}, \Phi_j \rangle_{|x = 0} = \langle \Phi_k^{\star}, \Phi_j \rangle_{|x = 1} = \sum_{s = 1}^n (-1)^{p_{s,1}^{\star}} \mathcal U_{s,1}^{\star}(\Phi_k^{\star}) \mathcal U_{n-s+1,1}(\Phi_j). $$ Using the boundary conditions \eqref{bcPhi} and \eqref{bcPhis} at $x = 1$, we conclude that $A_{k,j}(\lambda) = 0$ if $k + j < n + 1$. Thus, $A(\lambda) = J_0$ and \eqref{MJM} is proved. Using the relation $A_{k,j}(\lambda) = \langle \Phi_k^{\star}, \Phi_j\rangle$ for $k,j = \overline{1,n}$ and \eqref{wrona1}, we obtain $$ A(\lambda) = [\Phi^{\star}(x,\lambda)]^T J \Phi(x, \lambda). $$ This implies \eqref{PJP}. 
\end{proof} \subsection{Spectral data} \label{sec:sd} Consider the Weyl matrix $M(\lambda)$ of the problem $\mathcal L = (F(x), U_0, U_1)$, where $F \in \mathfrak F_n$. Recall that the poles of the $k$-th column of $M(\lambda)$ coincide with the zeros of $\Delta_{k,k}(\lambda) = \det[\mathcal U_{s,1}(C_r)]_{s,r = k+1}^n$. One can easily show that the zeros of $\Delta_{k,k}(\lambda)$ coincide with the eigenvalues of the following boundary value problem $\mathcal L_k$: $$ \ell_n(y) = \lambda y, \quad x \in (0,1), \qquad \mathcal U_{s,0}(y) = 0, \quad s = \overline{1,k}, \qquad \mathcal U_{s,1}(y) = 0, \quad s = \overline{k+1,n}. $$ By virtue of Theorem~1.1 in \cite{Bond22-asympt}, the spectrum of $\mathcal L_k$ is a countable set of eigenvalues $\Lambda_k := \{ \lambda_{l,k} \}_{l \ge 1}$ having the following asymptotics (counting with multiplicities): \begin{equation} \label{asymptla} \lambda_{l,k} = (-1)^{n-k} \left( \frac{\pi}{\sin\tfrac{\pi k}{n}} (l + \chi_k + \varkappa_{l,k}) \right)^n, \end{equation} where $\{ \varkappa_{l,k} \} \in l_2$ and $\chi_k$ are constants which depend only on $n$, $k$, and $\{ p_{s,a} \}$. Hence, for a fixed $k \in \{ 1, \ldots, n-1 \}$ and sufficiently large $l$, the eigenvalues $\lambda_{l,k}$ are simple. Assume that $\mathcal L \in W$, that is, all the zeros of $\Delta_{k,k}(\lambda)$ are simple for $k = \overline{1,n-1}$. Then, in view of \eqref{Mjk} and \eqref{MJM}, the poles of $M(\lambda)$ and $M^{\star}(\lambda)$ are simple. It follows from \eqref{relM} and \eqref{relMs} that the matrix functions $\Phi(x, \lambda)$ and $\Phi^{\star}(x, \lambda)$ for each fixed $x \in [0,1]$ also have only simple poles. Denote $\Lambda := \bigcup_{k = 1}^{n-1} \Lambda_k$. Similarly to $\mathcal N(\lambda_0)$, denote \begin{equation} \label{defNs} \mathcal N^{\star}(\lambda_0) := [M^{\star}_{\langle 0 \rangle}(\lambda_0)]^{-1} M^{\star}_{\langle -1\rangle}(\lambda_0), \quad \lambda_0 \in \Lambda. 
\end{equation} For $\lambda_0 \not\in \Lambda$, we set $\mathcal N(\lambda_0) = \mathcal N^{\star}(\lambda_0) := 0$. Let us study some properties of the matrices $\mathcal N(\lambda_0)$ and $\mathcal N^{\star}(\lambda_0)$. Denote by $\phi(x, \lambda)$ the first row of the matrix function $\Phi(x, \lambda)$: $\phi(x, \lambda) = e_1^T \Phi(x, \lambda) = [\Phi_k(x, \lambda)]_{k = 1}^n$. \begin{lem} \label{lem:N1} The following relations hold for each $\lambda_0 \in \Lambda$: $\mathcal N^2(\lambda_0) = 0$, \begin{gather} \label{relN1} [\mathcal N^{\star}(\lambda_0)]^T = - J_0 \mathcal N(\lambda_0) J_0^{-1}, \\ \label{relNPhi} \Phi_{\langle -1 \rangle}(x, \lambda_0) = \Phi_{\langle 0 \rangle}(x, \lambda_0) \mathcal N(\lambda_0), \quad \Phi^{\star}_{\langle -1 \rangle}(x, \lambda_0) = \Phi^{\star}_{\langle 0 \rangle}(x, \lambda_0) \mathcal N^{\star}(\lambda_0), \\ \label{lnphi} \ell_n(\phi_{\langle 0 \rangle}(x, \lambda_0)) = \lambda_0 \phi_{\langle 0 \rangle}(x, \lambda_0) + \phi_{\langle 0 \rangle}(x, \lambda_0) \mathcal N(\lambda_0). \end{gather} \end{lem} \begin{proof} The relation \eqref{MJM} implies \begin{gather} \label{smM1} [M(\lambda)]^{-1} = J_0^{-1} [M^{\star}(\lambda)]^T J_0, \\ \label{smM2} M(\lambda) J_0^{-1} [M^{\star}(\lambda)]^T = J_0^{-1}. \end{gather} It follows from \eqref{smM2} that \begin{gather} \label{smM3} M_{\langle -1 \rangle}(\lambda_0) J_0^{-1} [M_{\langle -1 \rangle}^{\star}(\lambda_0)]^T = 0, \\ \label{smM4} M_{\langle 0 \rangle}(\lambda_0) J_0^{-1} [M_{\langle -1 \rangle}^{\star}(\lambda_0)]^T + M_{\langle -1 \rangle}(\lambda_0) J_0^{-1} [M_{\langle 0 \rangle}^{\star}(\lambda_0)]^T = 0. \end{gather} Using \eqref{defN}, \eqref{defNs}, and \eqref{smM4}, we obtain \eqref{relN1}. Multiplying \eqref{relN1} by $\mathcal N(\lambda_0)$ and using \eqref{smM3}, we derive $$ \mathcal N(\lambda_0) J_0^{-1} [\mathcal N^{\star}(\lambda_0)]^T = -\mathcal N^2(\lambda_0) J_0^{-1} = 0. $$ Hence $\mathcal N^2(\lambda_0) = 0$. 
Using \eqref{relM} and \eqref{smM1}, we obtain $$ C(x, \lambda) = \Phi(x, \lambda) [M(\lambda)]^{-1} = \Phi(x, \lambda) J_0^{-1} [M^{\star}(\lambda)]^T J_0. $$ Since $C(x, \lambda)$ is entire in $\lambda$ for each fixed $x \in [0,1]$, we get \begin{equation} \label{smM5} \Phi_{\langle 0 \rangle}(x, \lambda_0) J_0^{-1} [M^{\star}_{\langle -1\rangle}(\lambda_0)]^T J_0 + \Phi_{\langle -1 \rangle}(x, \lambda_0) J_0^{-1} [M^{\star}_{\langle 0\rangle}(\lambda_0)]^T J_0 = 0, \quad \lambda_0 \in \Lambda. \end{equation} Using \eqref{smM5} and \eqref{defNs}, we derive $$ \Phi_{\langle 0 \rangle}(x, \lambda_0) J_0^{-1} [\mathcal N^{\star}(\lambda_0)]^T J_0 + \Phi_{\langle -1 \rangle}(x, \lambda_0) = 0. $$ Taking \eqref{relN1} into account, we arrive at the first relation in \eqref{relNPhi}. The second relation is proved similarly. It follows from the relation $\ell_n(\phi(x, \lambda)) = \lambda \phi(x, \lambda)$ that \begin{align*} & \ell_n(\phi_{\langle -1 \rangle}(x, \lambda_0)) = \lambda_0 \phi_{\langle -1 \rangle}(x, \lambda_0), \\ & \ell_n(\phi_{\langle 0 \rangle}(x, \lambda_0)) = \lambda_0 \phi_{\langle 0 \rangle}(x, \lambda_0) + \phi_{\langle -1 \rangle}(x, \lambda_0). \end{align*} Using \eqref{relNPhi}, we arrive at \eqref{lnphi}. \end{proof} Consider the entries of the matrix $\mathcal N(\lambda_0) = [\mathcal N_{k,j}(\lambda_0)]_{k,j = 1}^n$. Since $M(\lambda)$ is unit lower-triangular, we have $\mathcal N_{k,j}(\lambda_0) = 0$ for all $k \le j$, $\lambda_0 \in \Lambda$. The structural properties of $\mathcal N(\lambda_0)$ are described by the following lemma. \begin{lem} \label{lem:N2} (i) If $\lambda_0 \not\in \Lambda_k$, then $\mathcal N_{s, j}(\lambda_0) = 0$, $s = \overline{k+1,n}$, $j = \overline{1,k}$. (ii) If $\lambda_0 \in \Lambda_s$ for $s = \overline{\nu + 1, k-1}$, $\lambda_0 \not\in \Lambda_{\nu}$, $\lambda_0 \not\in \Lambda_k$, $1 \le \nu + 1 < k \le n$, then $\mathcal N_{k, \nu + 1}(\lambda_0) \ne 0$. (Here $\Lambda_0 = \Lambda_n = \varnothing$). 
\end{lem} \begin{proof} This lemma is proved similarly to Lemma~2.3.1 in \cite{Yur02}, so we outline the proof briefly. If $\lambda_0 \not\in \Lambda_k$, then $\Phi_{k,\langle-1\rangle}(x, \lambda_0) = 0$. On the other hand, it follows from \eqref{relNPhi} that $$ \Phi_{k,\langle-1\rangle}(x, \lambda_0) = \sum_{s = k+1}^n \mathcal N_{s,k}(\lambda_0) \Phi_{s,\langle 0 \rangle}(x, \lambda_0). $$ Applying the linear forms $\mathcal U_{s,0}$ to this relation for $s = \overline{k+1,n}$, we conclude that $\mathcal N_{s,k}(\lambda_0) = 0$, $s = \overline{k+1,n}$. Thus, the assertion (i) is proved for $j = k$. The proof for $j = k-1, \ldots, 2, 1$ can be obtained by induction. In order to prove (ii), we suppose that $\Delta_{\nu,\nu}(\lambda_0) \ne 0$, $\Delta_{s,s}(\lambda_0) = 0$ for $s = \overline{\nu+1,k-1}$. Then, it can be shown that $\mathcal U_{s,1}(\Phi_{s,\langle 0 \rangle}(x, \lambda_0)) \ne 0$, $s = \overline{\nu + 2, k-1}$ and $\Phi_{\nu + 1, \langle -1 \rangle}(x, \lambda_0) \not\equiv 0$. Suppose, on the contrary, that $\mathcal N_{k,\nu+1}(\lambda_0) = 0$. Then \eqref{relNPhi} implies $$ \Phi_{\nu+1,\langle-1\rangle}(x, \lambda_0) = \sum_{s = \nu+2}^{k-1} \mathcal N_{s,\nu+1}(\lambda_0) \Phi_{s,\langle 0 \rangle}(x, \lambda_0). $$ Applying the linear forms $\mathcal U_{s,1}$ for $s = \overline{\nu+2,k-1}$, we conclude that $\mathcal N_{s,\nu+1}(\lambda_0) = 0$, $s = \overline{\nu+2,k-1}$, and so $\Phi_{\nu+1,\langle-1\rangle}(x, \lambda_0) \equiv 0$. This contradiction yields (ii). \end{proof} In view of the asymptotics \eqref{asymptla}, we have $\lambda_{l,k} \ne \lambda_{r,k + 1}$ for sufficiently large $l$ and $r$. Therefore, Lemma~\ref{lem:N2} implies the following corollary. \begin{cor} \label{cor:N} For sufficiently large $|\lambda_0|$, $\lambda_0 \in \Lambda$, all the entries of $\mathcal N(\lambda_0)$ equal zero except $\mathcal N_{k+1,k}(\lambda_0)$, $k = \overline{1,n-1}$. 
\end{cor} Define \textit{the weight numbers} $\beta_{l,k} := \mathcal N_{k+1,k}(\lambda_{l,k})$. It is worth considering $\beta_{l,k}$ only for sufficiently large $l$. It follows from \eqref{defN} and \eqref{Mjk} that $$ \beta_{l,k} = M_{k+1,k,\langle -1\rangle}(\lambda_{l,k}) = -\frac{\Delta_{k+1,k}(\lambda_{l,k})}{\tfrac{d}{d\lambda} \Delta_{k,k}(\lambda_{l,k})}. $$ Consequently, Theorem~6.2 from \cite{Bond22-asympt} yields the asymptotics \begin{equation} \label{asymptbe} \beta_{l,k} = l^{n-1 + p_{k+1,0} - p_{k,0}} (\beta^0_k + \varkappa_{l,k}^0), \quad \{ \varkappa_{l,k}^0 \} \in l_2, \quad k = \overline{1,n-1}, \end{equation} where the constants $\beta^0_k$ depend only on $n$, $k$, and $\{ p_{s,a} \}$. \section{Main equation} \label{sec:main} This section is devoted to the constructive solution of the auxiliary Problem~\ref{prob:sd}, that is, to the recovery of the Weyl solutions $\{ \Phi_k(x, \lambda) \}_{k = 1}^n$ from the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$. We consider this problem for $\mathcal L = (F(x), U_0, U_1) \in W$ with an arbitrary $F \in \mathfrak F_n$. Thus, the results of this section can be applied to a wide class of differential expressions \eqref{defl} with associated matrices belonging to $\mathfrak F_n$. Along with $\mathcal L$, we consider another problem $\tilde {\mathcal L} = (\tilde F(x), \tilde U_0, \tilde U_1)$ of the same form but with different coefficients. Assume that $\tilde F \in \mathfrak F_n$, $p_{s,a} = \tilde p_{s,a}$, $s = \overline{1, n}$, $a = 0, 1$. The quasi-derivatives for $\tilde {\mathcal L}$ are defined by the matrix $\tilde F(x)$, so, in general, they differ from the quasi-derivatives of the problem $\mathcal L$. The problem $\tilde {\mathcal L}^{\star}$ is defined similarly to $\mathcal L^{\star}$. For simplicity, we assume that $\tilde {\mathcal L} \in W$. The case $\tilde {\mathcal L} \not\in W$ requires technical modifications (see Remark~\ref{rem:mult}). 
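For orientation, let us check the notions introduced above in the simplest case $n = 2$, assuming Dirichlet-type forms $\mathcal U_{1,a}(y) = y(a)$ (this special choice is made only for illustration). Taking the model problem with zero coefficient $\tilde \tau_0 \equiv 0$, we get $\tilde \Lambda_1 = \{ -(\pi l)^2 \}_{l \ge 1}$, in accordance with \eqref{asymptla}: for $n = 2$, $k = 1$, one has $\sin\tfrac{\pi k}{n} = 1$ and $(-1)^{n-k} = -1$, so
\begin{equation*}
\lambda_{l,1} = -\bigl( \pi (l + \chi_1 + \varkappa_{l,1}) \bigr)^2, \qquad \{ \varkappa_{l,1} \} \in l_2.
\end{equation*}
After the sign change $\mu = -\lambda$ relating \eqref{defl} to the Sturm-Liouville form \eqref{StL}, this is consistent with the classical asymptotics $\sqrt{\mu_l} = \pi l + o(1)$ of the Dirichlet eigenvalues.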
Denote $\mathcal I := \Lambda \cup \tilde \Lambda$. In Subsection~\ref{sec:cont}, we reduce the studied problem to the infinite system \eqref{infphi} of linear equations with respect to some entries of $\phi_{\langle 0 \rangle}(x,\lambda_0)$, $\lambda_0 \in \mathcal I$. Our technique is based on contour integration in the $\lambda$-plane and on the Residue theorem. In Subsection~\ref{sec:Banach}, the system \eqref{infphi} is transformed into the main equation \eqref{main} in the Banach space $m$ of infinite bounded sequences. The unique solvability of the main equation is proved. Finally, we arrive at the constructive Algorithm~\ref{alg:1} for finding $\{ \Phi_k(x, \lambda) \}_{k = 1}^n$ from the spectral data. This algorithm will be used in the next section for solving the inverse spectral problem. \subsection{Contour integration} \label{sec:cont} In order to formulate and prove the main lemma of this subsection (Lemma~\ref{lem:cont}), we first need some preliminaries. Introduce the notations \begin{gather} \label{defD} D(x, \mu, \lambda) := (\lambda - \mu)^{-1} [\Phi(x, \mu)]^{-1} \Phi(x, \lambda), \quad \tilde D(x, \mu, \lambda) := (\lambda - \mu)^{-1} [\tilde \Phi(x, \mu)]^{-1} \tilde \Phi(x, \lambda), \\ \label{defDa} D_{\langle \alpha \rangle}(x, \lambda_0, \lambda) := [D(x, \mu, \lambda)]_{|\mu = \lambda_0}^{\langle \alpha \rangle}, \quad \alpha \in \mathbb Z, \end{gather} and similarly define $\tilde D_{\langle \alpha \rangle}(x, \lambda_0, \lambda)$. 
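Recall that the symbol $\langle \alpha \rangle$ extracts Laurent coefficients, in the same convention as for $M_{\langle -1 \rangle}(\lambda_0)$ and $\Phi_{\langle 0 \rangle}(x, \lambda_0)$ above (we repeat it here for the reader's convenience): if $f(\mu)$ is meromorphic near $\mu = \lambda_0$, then
\begin{equation*}
f(\mu) = \sum_{\alpha} f_{\langle \alpha \rangle}(\lambda_0) (\mu - \lambda_0)^{\alpha}, \qquad f_{\langle \alpha \rangle}(\lambda_0) = [f(\mu)]_{|\mu = \lambda_0}^{\langle \alpha \rangle}.
\end{equation*}
In particular, in the case of a simple pole, $f_{\langle -1 \rangle}(\lambda_0) = \Res\limits_{\mu = \lambda_0} f(\mu)$.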
\begin{lem} \label{lem:D} The following relations hold: \begin{align} \label{DN1} & D_{\langle -1 \rangle}(x, \lambda_0, \lambda) = - \mathcal N(\lambda_0) D_{\langle 0 \rangle}(x, \lambda_0, \lambda), \\ \label{DN2} & [D(x, \mu, \lambda)]_{|\lambda = \lambda_0}^{\langle -1 \rangle} = [D(x, \mu, \lambda)]_{|\lambda = \lambda_0}^{\langle 0 \rangle} \mathcal N(\lambda_0), \\ \label{DN3} & [(\lambda - \lambda_0) I + \mathcal N(\lambda_0)] D_{\langle 0 \rangle}(x, \lambda_0, \lambda) = J_0^{-1} \langle [\phi^{\star}_{\langle 0 \rangle}(x, \lambda_0)]^T, \phi(x, \lambda) \rangle, \\ \label{Ddx} & D'(x, \mu, \lambda) = J_0^{-1} [\phi^{\star}(x, \mu)]^T \phi(x, \lambda). \end{align} \end{lem} \begin{proof} Using \eqref{PJP} and \eqref{defD}, we obtain \begin{equation} \label{relD} D(x, \mu, \lambda) = (\lambda - \mu)^{-1} J_0^{-1} [\Phi^{\star}(x, \mu)]^T J \Phi(x, \lambda). \end{equation} It follows from \eqref{relD} and \eqref{defDa} that \begin{align} \label{Dm1} & D_{\langle -1 \rangle}(x, \lambda_0, \lambda) = (\lambda - \lambda_0)^{-1} J_0^{-1} [\Phi^{\star}_{\langle -1 \rangle}(x, \lambda_0)]^T J \Phi(x, \lambda), \\ \label{D0} & D_{\langle 0 \rangle}(x, \lambda_0, \lambda) = (\lambda - \lambda_0)^{-1} J_0^{-1} [\Phi^{\star}_{\langle 0 \rangle}(x, \lambda_0)]^T J \Phi(x, \lambda) + (\lambda - \lambda_0)^{-2} J_0^{-1} [ \Phi^{\star}_{\langle -1 \rangle}(x, \lambda_0)]^T J \Phi(x, \lambda). \end{align} Using \eqref{Dm1},\eqref{D0} together with Lemma~\ref{lem:N1}, we derive \eqref{DN1}. The relation \eqref{DN2} is proved similarly. It follows from \eqref{wrona1} that \begin{equation} \label{wronphi} [\Phi^{\star}(x, \mu)]^T J \Phi(x, \lambda) = \langle [\phi^{\star}(x, \mu)]^T, \phi(x, \lambda) \rangle. 
\end{equation} Using \eqref{Dm1}, \eqref{D0}, and \eqref{wronphi}, we obtain $$ (\lambda - \lambda_0) D_{\langle 0 \rangle}(x, \lambda_0, \lambda) = J_0^{-1} \langle [\phi_{\langle 0 \rangle}^{\star}(x, \lambda_0)]^T, \phi(x, \lambda) \rangle + D_{\langle -1 \rangle}(x, \lambda_0, \lambda). $$ Taking \eqref{DN1} into account, we arrive at \eqref{DN3}. In order to prove \eqref{Ddx}, we combine \eqref{relD}, \eqref{wronphi}, and \eqref{wron2}: $$ D'(x, \mu, \lambda) = (\lambda - \mu)^{-1} J_0^{-1} \frac{d}{dx} \langle [\phi^{\star}(x, \mu)]^T, \phi(x, \lambda) \rangle = J_0^{-1} [\phi^{\star}(x, \mu)]^T \phi(x, \lambda). $$ \end{proof} Put $\hat {\mathcal N}(\lambda_0) := \mathcal N(\lambda_0) - \tilde {\mathcal N}(\lambda_0)$. Below in this section, we suppose that $x \in [0, 1]$ is fixed. \begin{lem} \label{lem:cont} The following relations hold: \begin{gather} \label{contphi} \phi(x, \lambda) = \tilde \phi(x, \lambda) + \sum_{\lambda_0 \in \mathcal I} \phi_{\langle 0 \rangle} (x, \lambda_0) \hat {\mathcal N}(\lambda_0) \tilde D_{\langle 0\rangle}(x, \lambda_0, \lambda), \\ \label{contD} D(x, \mu, \lambda) - \tilde D(x, \mu, \lambda) = \sum_{\lambda_0 \in \mathcal I} [D(x, \mu, \xi)]_{\xi = \lambda_0}^{\langle 0 \rangle} \hat {\mathcal N}(\lambda_0) \tilde D_{\langle 0 \rangle}(x, \lambda_0, \lambda), \end{gather} where the series converge in the sense $$ \sum_{\lambda_0 \in \mathcal I} = \lim_{R \to \infty} \sum_{\lambda_0 \in \mathcal I_R}, \quad \mathcal I_R := \{ \lambda \in \mathcal I \colon |\lambda| < R \}, $$ uniformly with respect to $\lambda$ and $\mu$ on compact subsets of $\mathbb C \setminus \mathcal I$. \end{lem} \begin{proof} In this proof, a crucial role is played by the matrix of spectral mappings \begin{equation} \label{defP} \mathcal P(x, \lambda) = \Phi(x, \lambda) [\tilde \Phi(x, \lambda)]^{-1}. 
\end{equation} It follows from \eqref{PJP} and \eqref{defP} that \begin{equation} \label{relP} \mathcal P(x, \lambda) = \Phi(x, \lambda) J_0^{-1} [\tilde \Phi^{\star}(x,\lambda)]^T J. \end{equation} The proof consists of three steps. \smallskip \textsc{Step 1. Regions and contours.} Choose a disc $\mathcal C_* := \{ \lambda \in \mathbb C \colon |\lambda| < \lambda_* \}$ of sufficiently large radius $\lambda_*$. Choose the $\sqrt[n]{\lambda}$ branch so that $\arg(\sqrt[n]{\lambda}) \in \left( -\tfrac{\pi}{2n}, \tfrac{3\pi}{2n} \right)$. Then, it follows from the asymptotics \eqref{asymptla} that the roots $\rho_0 := \sqrt[n]{\lambda_0}$ of the eigenvalues $\lambda_0 \in (\mathcal I \setminus \mathcal C_*)$ lie in the two strips \begin{equation} \label{defSj} \mathcal S_j := \{ \rho \colon \mbox{Re}\, (\epsilon_j \rho) > 0, \, |\mbox{Im} (\epsilon_j \rho)| < c \}, \quad \epsilon_j := \exp(-2\pi \mathrm{i} j/n), \quad j = 0, 1, \end{equation} for an appropriate choice of the constant $c$. More precisely, $\sqrt[n]{\lambda_{l,k}} \in \mathcal S_0$ if $(n-k)$ is even and $\sqrt[n]{\lambda_{l,k}} \in \mathcal S_1$ otherwise. For $j = 0, 1$, denote by $\Xi_j$ the image of $\mathcal S_j$ in the $\lambda$-plane under the mapping $\lambda = \rho^n$. Put $\Xi := \Xi_0 \cup \Xi_1 \cup \mathcal C_*$. Clearly, $\mathcal I \subset \Xi$. Further, fix a sufficiently small $\delta > 0$ and define the regions $$ \mathcal S_{j,\delta} := \{ \rho \in \mathcal S_j \colon \exists \lambda_0 \in \mathcal I \:\text{s.t.}\: |\rho - \sqrt[n]{\lambda_0}| < \delta \}, \quad j = 0, 1. $$ For $j = 0, 1$, denote by $\Xi_{j,\delta}$ the image of $\mathcal S_{j,\delta}$ in the $\lambda$-plane under the mapping $\lambda = \rho^n$. Put $$ \mathcal H_{\delta} := \mathbb C \setminus (\Xi_{0,\delta} \cup \Xi_{1,\delta} \cup \mathcal C_*). $$ Let $\lambda = \rho^n$, $\Theta(\rho) := \diag \{ 1, \rho, \ldots, \rho^{n-1} \}$. 
It can be shown in the standard way (see, e.g., the relation (2.1.37) in \cite{Yur02} and the proof of Theorem~2 in \cite{Bond21}) that \begin{equation} \label{asymptP} \mathcal P(x, \lambda) = \Theta(\rho)(I + o(1))[\Theta(\rho)]^{-1}, \quad |\lambda| \to \infty, \end{equation} uniformly with respect to $\lambda \in \mathcal H_{\delta}$. For sufficiently large values of $R > 0$, define the regions (see Fig.~\ref{img:contours}): $$ \Xi_R := \{ \lambda \in \Xi \colon |\lambda| < R \}, \quad \Xi_R^{\pm} := \{ \lambda \colon |\lambda| < R, \, \lambda \not\in \Xi, \, \pm \mbox{Im}\lambda > 0 \}, $$ and their boundaries $\gamma_R := \partial \Xi_R$, $\gamma_R^{\pm} := \partial \Xi_R^{\pm}$ with the counter-clockwise circuit. Below we consider only radii $R$ such that $\gamma_R \subset \mathcal H_{\delta}$. \begin{figure}[h!] \centering \begin{tikzpicture}[scale = 0.7] \draw (1,1) arc(45:135:1.41); \draw (1,-1) arc(-45:-135:1.41); \draw (1,1) .. controls (2, 1.8) .. (5,3); \draw (1,-1) .. controls (2,-1.8) .. (5,-3); \draw (-1,1) .. controls (-2, 1.8) .. (-5,3); \draw (-1,-1) .. controls (-2, -1.8) .. (-5,-3); \draw (0, 0) circle (4); \draw (0,0) node{$\Xi_R$}; \draw (0,3) node{$\Xi_R^+$}; \draw (0,-3) node{$\Xi_R^-$}; \filldraw (3,0) circle (1pt); \draw[dashed] (3,0) circle (0.7); \filldraw (5.3,0) circle (1pt); \draw[dashed] (5.3,0) circle (1); \filldraw (-3,0) circle (1pt); \draw[dashed] (-3,0) circle (0.7); \filldraw (-5.3,0) circle (1pt); \draw[dashed] (-5.3,0) circle (1); \end{tikzpicture} \caption{Contours} \label{img:contours} \end{figure} \smallskip \textsc{Step 2. Contour integration.} In view of \eqref{relP}, the matrix function $\mathcal P(x, \lambda)$ is meromorphic in $\lambda$ with poles in the set $\mathcal I$. Hence, $\mathcal P(x, \lambda)$ is analytic in $\Xi_R^{\pm}$. Let $\mathcal P_1(x, \lambda)$ be the first row of $\mathcal P(x, \lambda)$. 
The Cauchy formula implies \begin{gather*} \mathcal P_1(x, \lambda) - e_1^T = -\frac{1}{2\pi \mathrm{i}} \oint\limits_{\gamma_R^{\pm}} \frac{\mathcal P_1(x, \xi) - e_1^T}{\lambda - \xi} \, d\xi, \quad \lambda \in \Xi_R^{\pm}, \\ \frac{\mathcal P(x, \lambda) - \mathcal P(x, \mu)}{\lambda - \mu} = -\frac{1}{2\pi \mathrm{i}} \oint\limits_{\gamma_R^{\pm}} \frac{\mathcal P(x, \xi)}{(\lambda - \xi)(\xi - \mu)} \, d\xi, \quad \lambda, \mu \in \Xi_R^{\pm}. \end{gather*} Consequently, \begin{gather} \label{smP1} \mathcal P_1(x, \lambda) = e_1^T + \frac{1}{2 \pi \mathrm{i}} \oint\limits_{\gamma_R} \frac{\mathcal P_1(x, \xi)}{\lambda - \xi}\, d\xi - \frac{1}{2 \pi \mathrm{i}} \oint\limits_{|\xi| = R} \frac{\mathcal P_1(x, \xi) - e_1^T}{\lambda - \xi}\, d\xi, \\ \label{smP2} \frac{\mathcal P(x, \lambda) - \mathcal P(x, \mu)}{\lambda - \mu} = \frac{1}{2 \pi \mathrm{i}} \oint\limits_{\gamma_R} \frac{\mathcal P(x, \xi)}{(\lambda - \xi)(\xi - \mu)} \, d\xi - \frac{1}{2 \pi \mathrm{i}} \oint\limits_{|\xi| = R} \frac{\mathcal P(x, \xi)}{(\lambda - \xi)(\xi - \mu)} \, d\xi. 
\end{gather} Using \eqref{defP}, \eqref{defD}, \eqref{smP1}, and \eqref{smP2}, we derive \begin{align} \label{smP3} \phi(x, \lambda) = \mathcal P_1(x, \lambda) \tilde \Phi(x, \lambda) & = \tilde \phi(x, \lambda) + \frac{1}{2 \pi \mathrm{i}} \oint\limits_{\gamma_R} \frac{\mathcal P_1(x, \xi) \tilde \Phi(x, \lambda)}{\lambda - \xi}\, d\xi + \varepsilon_R^1(x, \lambda), \\ \nonumber D(x, \mu, \lambda) - \tilde D(x, \mu, \lambda) & = \frac{[\Phi(x, \mu)]^{-1} (\mathcal P(x, \lambda) - \mathcal P(x, \mu)) \tilde \Phi(x, \lambda)}{\lambda - \mu} \\ \nonumber & = \frac{1}{2 \pi \mathrm{i}} \oint\limits_{\gamma_R} \frac{[\Phi(x, \mu)]^{-1} \Phi(x, \xi)}{\xi - \mu} \frac{[\tilde \Phi(x, \xi)]^{-1} \tilde \Phi(x, \lambda)}{\lambda - \xi} \, d\xi + \varepsilon_R^2(x, \mu, \lambda) \\ \label{smP4} & = \frac{1}{2 \pi \mathrm{i}} \oint\limits_{\gamma_R} D(x, \mu, \xi) \tilde D(x, \xi, \lambda) \, d\xi + \varepsilon_R^2(x, \mu, \lambda), \end{align} where \begin{align*} \varepsilon_R^1(x, \lambda) := & - \frac{1}{2 \pi \mathrm{i}} \oint\limits_{|\xi| = R} \frac{(\mathcal P_1(x, \xi) - e_1^T) \tilde \Phi(x, \lambda)}{\lambda - \xi}\, d\xi, \\ \varepsilon_R^2(x, \mu, \lambda) := & - \frac{1}{2 \pi \mathrm{i}} \oint\limits_{|\xi| = R} \frac{[\Phi(x, \mu)]^{-1} \mathcal P(x, \xi) \tilde \Phi(x, \lambda)}{(\lambda - \xi) (\xi - \mu)} \, d\xi. \end{align*} It follows from \eqref{asymptP} that \begin{equation} \label{limeps} \lim_{\substack{R \to \infty \\ \gamma_R \subset \mathcal H_{\delta}}} \varepsilon_R^1(x, \lambda) = 0, \quad \lim_{\substack{R \to \infty \\ \gamma_R \subset \mathcal H_{\delta}}} \varepsilon_R^2(x, \mu, \lambda) = 0. \end{equation} \smallskip \textsc{Step 3. Residues}. 
Using the first row of \eqref{relP}: $$ \mathcal P_1(x, \lambda) = \phi(x, \lambda) J_0^{-1} [\tilde \Phi^{\star}(x,\lambda)]^T J $$ and the Residue theorem, we obtain \begin{equation} \label{PRes} \frac{1}{2 \pi \mathrm{i}} \oint\limits_{\gamma_R} \frac{\mathcal P_1(x, \xi) \tilde \Phi(x, \lambda)}{\lambda - \xi}\, d\xi = \sum_{\lambda_0 \in \mathcal I_R} \Res_{\xi = \lambda_0} \phi(x, \xi) \tilde D(x, \xi, \lambda). \end{equation} Using \eqref{smP3}, \eqref{limeps}, and \eqref{PRes}, we get \begin{equation} \label{smphi1} \phi(x, \lambda) = \tilde \phi(x, \lambda) + \sum_{\lambda_0 \in \mathcal I} (\phi_{\langle -1 \rangle}(x, \lambda_0) \tilde D_{\langle 0 \rangle}(x, \lambda_0, \lambda) + \phi_{\langle 0 \rangle}(x, \lambda_0) \tilde D_{\langle -1 \rangle}(x, \lambda_0, \lambda)). \end{equation} It follows from \eqref{relNPhi} that \begin{equation} \label{smphi2} \phi_{\langle -1 \rangle}(x, \lambda_0) = \phi_{\langle 0 \rangle}(x, \lambda_0) \mathcal N(\lambda_0). \end{equation} Substituting \eqref{DN1} for $\tilde D_{\langle -1\rangle}(x, \lambda_0, \lambda)$ and \eqref{smphi2} into \eqref{smphi1}, we derive the relation \eqref{contphi}. It remains to prove \eqref{contD}. Using Lemma~\ref{lem:D}, we derive \begin{align} \nonumber \Res_{\xi = \lambda_0} D(x, \mu, \xi) \tilde D(x, \xi, \lambda) & = [D(x, \mu, \xi)]_{|\xi = \lambda_0}^{\langle -1 \rangle} \tilde D_{\langle 0 \rangle}(x, \lambda_0, \lambda) + [D(x, \mu, \xi)]_{|\xi = \lambda_0}^{\langle 0 \rangle} \tilde D_{\langle -1 \rangle}(x, \lambda_0, \lambda) \\ \label{ResD} & = [D(x, \mu, \xi)]_{|\xi = \lambda_0}^{\langle 0 \rangle} \hat {\mathcal N}(\lambda_0) \tilde D_{\langle 0 \rangle}(x, \lambda_0, \lambda). \end{align} Combining \eqref{smP4}, \eqref{limeps}, \eqref{ResD} all together and applying the Residue theorem, we arrive at \eqref{contD}. Now \eqref{contphi} and \eqref{contD} are proved only for $\lambda, \mu \in (\mathbb C \setminus \Xi)$. 
Using analytic continuation, we conclude that these relations hold for $\lambda, \mu \in (\mathbb C \setminus \mathcal I)$. \end{proof} Our next goal is to obtain an infinite system of linear equations with respect to some entries of $\phi_{\langle 0 \rangle}(x, \lambda_0)$, $\lambda_0 \in \mathcal I$. Introduce the ordered set $$ V := \{ (l,k,\varepsilon) \colon l \ge 1, \, k \in \{ 1, \ldots, n-1 \}, \, \varepsilon \in \{ 0, 1 \} \}. $$ For $v = (l,k,\varepsilon)$, $v_0 = (l_0,k_0,\varepsilon_0)$, $v,v_0 \in V$, we say that $v < v_0$ if $l < l_0$ or $(l = l_0 \: \text{and} \: k < k_0)$ or $(l = l_0, \, k= k_0 \: \text{and} \: \varepsilon < \varepsilon_0)$. Denote \begin{gather} \label{lalk} \lambda_{l,k,0} := \lambda_{l,k}, \quad \lambda_{l,k,1} := \tilde \lambda_{l,k}, \quad \mathcal N_0(\lambda_0) := \mathcal N(\lambda_0), \quad \mathcal N_1(\lambda_0) := \tilde {\mathcal N}(\lambda_0), \\ \label{defphilk} \varphi_{l,k,\varepsilon}(x) := \Phi_{k+1,\langle 0 \rangle}(x, \lambda_{l,k,\varepsilon}), \quad \tilde \varphi_{l,k,\varepsilon}(x) := \tilde \Phi_{k+1,\langle 0 \rangle}(x, \lambda_{l,k,\varepsilon}), \\ \label{defPlk} \tilde P_{l,k,\varepsilon}(x, \lambda) := e_{k+1}^T \mathcal N_{\varepsilon}(\lambda_{l,k,\varepsilon}) \tilde D_{\langle 0 \rangle}(x, \lambda_{l,k,\varepsilon}, \lambda), \\ \label{defG} \tilde G_{(l,k,\varepsilon), (l_0,k_0,\varepsilon_0)} (x) := [\tilde P_{l,k,\varepsilon}(x,\lambda)]_{\lambda = \lambda_{l_0,k_0,\varepsilon_0}}^{\langle 0 \rangle} e_{k_0+1}, \end{gather} and similarly define $P_{l,k,\varepsilon}(x, \lambda)$, $G_{(l,k,\varepsilon),(l_0,k_0,\varepsilon_0)}(x)$. Using these notations, we obtain the following corollary of Lemma~\ref{lem:cont}. 
\begin{cor} \label{cor:inf} The following relations hold: \begin{gather} \label{findphi} \phi(x, \lambda) = \tilde \phi(x, \lambda) + \sum_{(l,k,\varepsilon) \in V} (-1)^{\varepsilon} \varphi_{l,k,\varepsilon}(x) \tilde P_{l,k,\varepsilon}(x, \lambda), \\ \label{infphi} \varphi_{l_0, k_0,\varepsilon_0}(x) = \tilde \varphi_{l_0,k_0,\varepsilon_0}(x) + \sum_{(l,k,\varepsilon) \in V}(-1)^{\varepsilon} \varphi_{l,k,\varepsilon}(x) \tilde G_{(l,k,\varepsilon), (l_0, k_0,\varepsilon_0)}(x), \\ \label{infG} G_{(l_0,k_0,\varepsilon_0), (l_1,k_1,\varepsilon_1)}(x) - \tilde G_{(l_0,k_0,\varepsilon_0), (l_1,k_1,\varepsilon_1)}(x) = \sum_{(l,k,\varepsilon) \in V} (-1)^{\varepsilon} G_{(l_0,k_0,\varepsilon_0),(l,k,\varepsilon)}(x) \tilde G_{(l, k,\varepsilon), (l_1, k_1,\varepsilon_1)}(x), \end{gather} where $x \in [0, 1]$, $(l_0,k_0,\varepsilon_0), (l_1,k_1,\varepsilon_1) \in V$. \end{cor} \begin{proof} Taking Lemma~\ref{lem:N2} on the structure of $\mathcal N(\lambda_0)$ and $\tilde{\mathcal N}(\lambda_0)$ into account, we rewrite \eqref{contphi} in the form \begin{equation*} \phi(x, \lambda) = \tilde \phi(x, \lambda) + \sum_{(l,k,\varepsilon) \in V} (-1)^{\varepsilon} \Phi_{k+1, \langle 0 \rangle}(x, \lambda_{l, k,\varepsilon}) e_{k+1}^T \mathcal N_\varepsilon(\lambda_{l,k,\varepsilon}) \tilde D_{\langle 0 \rangle }(x, \lambda_{l,k,\varepsilon}, \lambda). \end{equation*} Using \eqref{defphilk} and \eqref{defPlk}, we arrive at \eqref{findphi}. Taking the $(k_0+1)$-th entry in the relation \eqref{findphi}, putting $\lambda = \lambda_{l_0,k_0,\varepsilon_0}$ and using \eqref{defphilk}, \eqref{defG}, we readily obtain \eqref{infphi}. 
Analogously, we represent \eqref{contD} as follows: $$ D(x, \mu, \lambda) - \tilde D(x, \mu, \lambda) = \sum_{(l,k,\varepsilon) \in V} (-1)^{\varepsilon} [D(x, \mu,\xi)]_{\xi = \lambda_{l,k,\varepsilon}}^{\langle 0 \rangle} e_{k+1} e_{k+1}^T \mathcal N_{\varepsilon}(\lambda_{l,k,\varepsilon}) \tilde D_{\langle 0 \rangle}(x, \lambda_{l,k,\varepsilon}, \lambda). $$ Passing from $D(x, \mu,\lambda)$ and $\tilde D(x, \mu, \lambda)$ to $P_{l_0,k_0,\varepsilon_0}(x, \lambda)$ and $\tilde P_{l_0,k_0,\varepsilon_0}(x, \lambda)$, respectively, we derive $$ P_{l_0,k_0,\varepsilon_0}(x, \lambda) - \tilde P_{l_0,k_0,\varepsilon_0}(x, \lambda) = \sum_{(l,k,\varepsilon) \in V} (-1)^{\varepsilon} [P_{l_0,k_0,\varepsilon_0}(x, \xi)]_{\xi = \lambda_{l,k,\varepsilon}}^{\langle 0 \rangle} e_{k+1} \tilde P_{l,k,\varepsilon}(x, \lambda). $$ Using \eqref{defG} and the analogous relation for $G_{(l,k,\varepsilon), (l_0,k_0,\varepsilon_0)}(x)$, we finally arrive at \eqref{infG}. \end{proof} The relations \eqref{infphi} can be considered as an infinite linear system with respect to $\varphi_{l,k,\varepsilon}(x)$, $(l,k,\varepsilon) \in V$. However, it is inconvenient to use \eqref{infphi} as the main equation system for the inverse problem, because the series in \eqref{infphi} converges only ``with brackets'': $$ \sum_{(l,k,\varepsilon) \in V} = \sum_{(l,k)} \left( \sum_{\varepsilon = 0, 1} (\dots)\right). $$ Therefore, in the next section, we transform the system \eqref{infphi} to a linear equation in a suitable Banach space. The relation \eqref{infG} will be used to prove the unique solvability of the main equation. \begin{remark} \label{rem:mult} If $\tilde {\mathcal L} \not\in W$, that is, the poles of $\tilde M(\lambda)$ are not necessarily simple, then this affects the calculation of the residues in \eqref{PRes}. 
Consequently, we obtain the following relation instead of \eqref{contphi}: \begin{align} \nonumber \phi(x, \lambda) = \tilde \phi(x, \lambda) + \sum_{\lambda_0 \in \mathcal I} \biggl[ & \phi_{\langle 0 \rangle}(x, \lambda_0) (\mathcal N(\lambda_0) \tilde D_{\langle 0 \rangle}(x, \lambda_0, \lambda) + \tilde D_{\langle -1 \rangle}(x, \lambda_0, \lambda)) \\ \label{contphi1} & + \sum_{k = 1}^{m_{\lambda_0}-1} \phi_{\langle k \rangle}(x, \lambda_0) \tilde D_{\langle -(k+1) \rangle}(x, \lambda_0, \lambda)\biggr], \end{align} where $m_{\lambda_0}$ is the multiplicity of $\lambda_0 \in \tilde \Lambda$. Using \eqref{contphi1}, one can derive an infinite system analogous to \eqref{infphi}, containing not only entries of the vectors $\phi_{\langle 0 \rangle}(x, \lambda_0)$ but also of $\phi_{\langle k \rangle}(x, \lambda_0)$ for $k = \overline{1,m_{\lambda_0}-1}$. \end{remark} \subsection{Linear equation in a Banach space} \label{sec:Banach} Define the numbers $\{ \xi_l \}$, which characterize ``the difference'' of the two spectral data sets $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ and $\{ \tilde \lambda_0, \tilde {\mathcal N}(\tilde \lambda_0) \}_{\tilde \lambda_0 \in \tilde \Lambda}$: \begin{equation} \label{defxi} \xi_l := \sum_{k = 1}^{n-1} \left( |\lambda_{l,k} - \tilde \lambda_{l,k}| + \sum_{j = k+1}^n |\mathcal N_{j,k}(\lambda_{l,k}) - \tilde{\mathcal N}_{j,k}(\tilde \lambda_{l,k})| l^{p_{k,0} - p_{k+1,0}}\right) l^{1-n}, \quad l \ge 1. \end{equation} Taking Corollary~\ref{cor:N} into account, we reduce \eqref{defxi} to the following form for all sufficiently large values of $l$: \begin{equation} \label{relxi} \xi_l = \sum_{k = 1}^{n-1} \left( |\lambda_{l,k} - \tilde \lambda_{l,k}| + |\beta_{l,k} - \tilde \beta_{l,k}| l^{p_{k,0} - p_{k+1,0}}\right) l^{1-n}. \end{equation} Relation~\eqref{relxi} together with the asymptotics \eqref{asymptla} and \eqref{asymptbe} implies $\{ \xi_l \} \in l_2$. 
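Indeed, since $p_{s,a} = \tilde p_{s,a}$, the constants in \eqref{asymptla} and \eqref{asymptbe} coincide for $\mathcal L$ and $\tilde {\mathcal L}$: $\chi_k = \tilde \chi_k$, $\beta^0_k = \tilde \beta^0_k$. Hence, a sketch of the calculation:
\begin{align*}
|\lambda_{l,k} - \tilde \lambda_{l,k}| & = \Bigl( \frac{\pi}{\sin\tfrac{\pi k}{n}} \Bigr)^n \bigl| (l + \chi_k + \varkappa_{l,k})^n - (l + \chi_k + \tilde \varkappa_{l,k})^n \bigr| = O\bigl( l^{n-1} |\varkappa_{l,k} - \tilde \varkappa_{l,k}| \bigr), \\
|\beta_{l,k} - \tilde \beta_{l,k}| & = l^{n-1+p_{k+1,0}-p_{k,0}} |\varkappa_{l,k}^0 - \tilde \varkappa_{l,k}^0|,
\end{align*}
so each summand in \eqref{relxi} is $O\bigl( |\varkappa_{l,k} - \tilde \varkappa_{l,k}| + |\varkappa_{l,k}^0 - \tilde \varkappa_{l,k}^0| \bigr)$, which is an $l_2$-sequence.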
\begin{lem} \label{lem:est} The following estimates hold for $(l,k,\varepsilon), (l_0,k_0,\varepsilon_0) \in V$: \begin{gather*} |\varphi_{l,k,\varepsilon}(x)| \le C w_{l,k}(x), \quad |\varphi_{l,k,0}(x) - \varphi_{l,k,1}(x)| \le C w_{l,k}(x) \xi_l, \\ |G_{(l,k,\varepsilon),(l_0,k_0,\varepsilon_0)}(x)| \le \frac{C}{|l - l_0| + 1} \cdot \frac{w_{l_0,k_0}(x)}{w_{l,k}(x)}, \\ |G_{(l,k,0),(l_0,k_0,\varepsilon_0)}(x) - G_{(l,k,1),(l_0,k_0,\varepsilon_0)}(x)| \le \frac{C \xi_l}{|l - l_0| + 1} \cdot \frac{w_{l_0,k_0}(x)}{w_{l,k}(x)}, \\ |G_{(l,k,\varepsilon),(l_0,k_0,0)}(x) - G_{(l,k,\varepsilon),(l_0,k_0,1)}(x)| \le \frac{C \xi_{l_0}}{|l - l_0| + 1} \cdot \frac{w_{l_0,k_0}(x)}{w_{l,k}(x)}, \\ |G_{(l,k,0),(l_0,k_0,0)}(x) - G_{(l,k,0),(l_0,k_0,1)}(x) - G_{(l,k,1),(l_0,k_0,0)}(x) + G_{(l,k,1),(l_0,k_0,1)}(x)| \le \\ \frac{C \xi_l \xi_{l_0}}{|l-l_0| + 1} \cdot \frac{w_{l_0,k_0}(x)}{w_{l,k}(x)}, \end{gather*} where $$ w_{l,k}(x) := l^{-p_{k+1,0}} \exp(-xl \cot (k\pi/n)), $$ and the constant $C$ does not depend on $x, l,\varepsilon,k, l_0,\varepsilon_0,k_0$. \end{lem} The proof of Lemma~\ref{lem:est} repeats the technique of \cite[Section 2.3.3]{Yur02}, so we omit it. Similar estimates are valid for $\tilde \varphi_{l,k,\varepsilon}(x)$ and $\tilde G_{(l_0,k_0,\varepsilon_0),(l,k,\varepsilon)}(x)$. Put $\theta_l := \xi_l^{-1}$ if $\xi_l \ne 0$ and $\theta_l = 0$ otherwise. 
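The role of the numbers $\theta_l$ is to compensate for the possibly small differences between $\varphi_{l,k,0}(x)$ and $\varphi_{l,k,1}(x)$. Indeed, the second estimate of Lemma~\ref{lem:est} yields
\begin{equation*}
\theta_l \, |\varphi_{l,k,0}(x) - \varphi_{l,k,1}(x)| \le C \theta_l \xi_l \, w_{l,k}(x) \le C w_{l,k}(x),
\end{equation*}
since $\theta_l \xi_l \in \{ 0, 1 \}$ by construction.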
Introduce the notations \begin{equation} \label{defpsi} \begin{bmatrix} \psi_{l,k,0}(x) \\ \psi_{l,k,1}(x) \end{bmatrix} := w_{l,k}^{-1}(x) \begin{bmatrix} \theta_l & -\theta_l \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \varphi_{l,k,0}(x) \\ \varphi_{l,k,1}(x) \end{bmatrix}, \end{equation} \begin{multline} \label{defR} \begin{bmatrix} R_{(l_0,k_0,0),(l,k,0)}(x) & R_{(l_0,k_0,0),(l,k,1)}(x) \\ R_{(l_0,k_0,1),(l,k,0)}(x) & R_{(l_0,k_0,1),(l,k,1)}(x) \end{bmatrix} := \\ \frac{w_{l,k}(x)}{w_{l_0,k_0}(x)} \begin{bmatrix} \theta_{l_0} & -\theta_{l_0} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} G_{(l,k,0),(l_0,k_0,0)}(x) & G_{(l,k,1),(l_0,k_0,0)}(x) \\ G_{(l,k,0),(l_0,k_0,1)}(x) & G_{(l,k,1),(l_0,k_0,1)}(x) \end{bmatrix} \begin{bmatrix} \xi_{l} & 1 \\ 0 & -1 \end{bmatrix}. \end{multline} For brevity, put $\psi_v(x) := \psi_{l,k,\varepsilon}(x)$, $R_{v_0,v}(x) := R_{(l_0,k_0,\varepsilon_0),(l,k,\varepsilon)}(x)$, $v = (l,k,\varepsilon)$, $v_0 = (l_0,k_0,\varepsilon_0)$, $v,v_0 \in V$. The functions $\tilde \psi_v(x)$ and $\tilde R_{v_0,v}(x)$ are defined analogously. Using \eqref{infphi}, \eqref{infG}, and the above notations, we obtain \begin{gather} \label{sumpsi} \psi_{v_0}(x) = \tilde \psi_{v_0}(x) + \sum_{v \in V} \tilde R_{v_0,v}(x) \psi_v(x), \quad v_0 \in V, \\ \label{sumR} R_{v_1,v_0}(x) - \tilde R_{v_1,v_0}(x) = \sum_{v \in V} \tilde R_{v_1,v}(x) R_{v,v_0}(x), \quad v_1, v_0 \in V. \end{gather} Lemma~\ref{lem:est} yields the estimates \begin{equation} \label{estpsiR} |\psi_v(x)| \le C, \quad |R_{v_0,v}(x)| \le \frac{C\xi_l}{|l-l_0| + 1}, \quad v,v_0 \in V, \end{equation} and the similar estimates for $\tilde \psi_v(x)$, $\tilde R_{v_0,v}(x)$. Consequently, the Cauchy-Bunyakovsky-Schwarz inequality \begin{equation} \label{CBS} \sum_{l} \frac{\xi_l}{|l-l_0| + 1} \le \left(\sum_{l} \xi_l^2 \right)^{1/2} \left(\sum_l \frac{1}{(|l-l_0| + 1)^2}\right)^{1/2} < \infty, \end{equation} implies the absolute convergence of the series in \eqref{sumpsi} and \eqref{sumR}. 
Consider the Banach space $m$ of bounded infinite sequences $\alpha = [\alpha_v]_{v \in V}$ with the norm $\| \alpha \|_m = \sup\limits_{v \in V} |\alpha_v|$. Obviously, $\psi(x), \tilde \psi(x) \in m$ for each fixed $x \in [0,1]$. Define the linear operator $R(x) = [R_{v_0,v}(x)]_{v_0, v \in V}$ acting on an element $\alpha = [\alpha_v]_{v \in V} \in m$ by the following rule: \begin{equation} \label{Ral} [R(x) \alpha]_{v_0} = \sum_{v \in V} R_{v_0,v}(x) \alpha_v, \quad v_0 \in V. \end{equation} The operator $\tilde R(x) = [\tilde R_{v_0,v}(x)]_{v_0,v \in V}$ is defined similarly. It follows from \eqref{estpsiR} and \eqref{CBS} that the operators $R(x)$, $\tilde R(x)$ are bounded from $m$ to $m$ for each fixed $x \in [0,1]$. Denote by $\mathbf{I}$ the identity operator in $m$. Using the introduced notations, we obtain the following theorem on the main equation and its unique solvability. \begin{thm} \label{thm:main} For each fixed $x\in[0,1]$, the linear operator $R(x)$ is compact in $m$ and can be approximated by finite-rank operators: $R(x) = \lim\limits_{N \to \infty} R^N(x)$. The same properties are valid for $\tilde R(x)$. Furthermore, the following relation holds \begin{equation} \label{main} (\mathbf{I} - \tilde R(x)) \psi(x) = \tilde \psi(x), \quad x \in [0,1], \end{equation} which is called \textit{the main equation} of the inverse problem. The operator $(\mathbf{I} - \tilde R(x))$ has a bounded inverse of the form \begin{equation} \label{inv} (\mathbf{I} - \tilde R(x))^{-1} = \mathbf{I} + R(x). \end{equation} Thus, the main equation~\eqref{main} is uniquely solvable in $m$ for each fixed $x \in [0,1]$. \end{thm} \begin{proof} For $N \in \mathbb N$, define the index set $V^N := \{ v = (l,k,\varepsilon) \in V \colon l \le N \}$ and the finite-rank operator $R^N(x)$: \begin{equation} \label{RNal} [R^N(x) \alpha]_{v_0} = \sum_{v \in V^N} R_{v_0,v}(x) \alpha_v. 
\end{equation} Using \eqref{estpsiR}--\eqref{RNal}, we show that $$ \| R(x) - R^N(x) \|_{m \to m} = \sup_{v_0 \in V} \sum_{v \in (V\setminus V^N)} |R_{v_0,v}(x)| \le \sup_{l_0}\sum_{l \ge N} \frac{C\xi_l}{|l-l_0|+1} \to 0, \quad N \to \infty. $$ Hence, the operator $R(x)$ is compact. According to our notations, the relations \eqref{sumpsi} and \eqref{sumR} take the form \eqref{main} and $$ R(x) - \tilde R(x) = \tilde R(x) R(x), $$ respectively. The latter relation implies \eqref{inv}, which completes the proof. \end{proof} Thus, we arrive at the following algorithm for solving Problem~\ref{prob:sd}. \begin{alg} \label{alg:1} Suppose that the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ of the problem $\mathcal L \in W$ are given. We have to find the Weyl solutions $\{ \Phi_k(x,\lambda) \}_{k = 1}^n$. \begin{enumerate} \item Choose an arbitrary model problem $\tilde {\mathcal L} \in W$ with $\tilde p_{s,a} = p_{s,a}$, $s = \overline{1,n}$, $a = 0, 1$. In particular, one can take $\tilde F(x) = [\delta_{k+1,j}]_{k,j = 1}^n$, $\tilde U_a = [\delta_{j,p_{s,a}+1}]_{s,j = 1}^n$. \item For the problem $\tilde {\mathcal L}$, find the matrix function $\tilde \Phi(x, \lambda)$ and then $\tilde D(x, \mu,\lambda)$ by \eqref{defD}. \item Using $\tilde \Phi(x, \lambda)$, $\tilde D(x, \mu, \lambda)$, the spectral data $\{ \lambda_0, \mathcal N(\lambda_0)\}_{\lambda_0 \in \Lambda}$, $\{ \tilde \lambda_0, \tilde{\mathcal N}(\tilde \lambda_0)\}_{\tilde \lambda_0 \in \tilde \Lambda}$, and the notations \eqref{lalk}, find $\tilde \varphi_{l,k,\varepsilon}(x)$, $\tilde P_{l,k,\varepsilon}(x,\lambda)$, and $\tilde G_{(l,k,\varepsilon),(l_0,k_0,\varepsilon_0)}$ for $(l,k,\varepsilon), (l_0,k_0,\varepsilon_0) \in V$ via \eqref{defphilk}, \eqref{defPlk}, and \eqref{defG}, respectively. \item Construct the infinite sequence $\tilde \psi(x)$ and the operator $\tilde R(x)$ by using \eqref{defpsi} and \eqref{defR} (with tilde), respectively. 
\item Find $\psi(x)$ by solving the main equation~\eqref{main}. \item Find $\{ \varphi_{l,k,\varepsilon}(x) \}_{(l,k,\varepsilon) \in V}$ from \eqref{defpsi}: $$ \begin{bmatrix} \varphi_{l,k,0}(x) \\ \varphi_{l,k,1}(x) \end{bmatrix} = w_{l,k}(x) \begin{bmatrix} \xi_l & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \psi_{l,k,0}(x) \\ \psi_{l,k,1}(x) \end{bmatrix}. $$ \item Construct $\phi(x,\lambda) = [\Phi_k(x, \lambda)]_{k = 1}^n$ by \eqref{findphi}. \end{enumerate} \end{alg} \section{Reconstruction formulas} \label{sec:rec} In this section, we use the solution $\psi(x)$ of the main equation \eqref{main} to obtain the solution of Problem~\ref{prob:sd-coef} for some classes of differential operators. We derive the reconstruction formulas in the form of series for the coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$ of the differential expression \eqref{defl}. In Subsection~\ref{sec:gen}, the general approach to obtaining reconstruction formulas is described. However, for certain classes of the coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$, the convergence of the obtained series has to be studied in the corresponding spaces. Therefore, in Subsection~\ref{sec:series}, we prove an auxiliary lemma on the series convergence. In Subsections~\ref{sec:3}--\ref{sec:evenW}, we study the three classes of operators: \medskip (i) $n = 3$, $\tau_0 \in W_2^{-1}(0,1)$, $\tau_1 \in L_2(0,1)$; \smallskip (ii) $n$ is even, $\tau_{\nu} \in L_2(0,1)$, $\nu = \overline{0,n-2}$; \smallskip (iii) $n$ is even, $\tau_{\nu} \in W_2^{-1}(0,1)$, $\nu = \overline{0,n-2}$. \medskip For each case, we provide an appropriate uniqueness theorem for the inverse problem solution, obtain reconstruction formulas, prove the convergence of the series, and thus arrive at constructive algorithms for solving Problem~\ref{prob:sd-coef}. 
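Before deriving the reconstruction formulas, we note how step 5 of Algorithm~\ref{alg:1} looks in practice: one replaces $\tilde R(x)$ by its finite-rank truncation from the proof of Theorem~\ref{thm:main} and solves a finite linear system. The sketch below uses purely synthetic data (the random kernel entries, the decay profile, and the truncation level are illustrative assumptions) to demonstrate the truncated solve and the algebraic identity behind \eqref{inv}.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40                               # truncation level (illustrative)
l = np.arange(1, N + 1)
xi = 1.0 / l ** 2                    # model l_2 sequence

# Synthetic kernel obeying the decay |R_{v0,v}| <= C xi_l / (|l - l0| + 1).
decay = xi[None, :] / (np.abs(l[:, None] - l[None, :]) + 1.0)
R_tilde = 0.1 * rng.standard_normal((N, N)) * decay
psi_tilde = rng.standard_normal(N)

# Truncated main equation (I - R_tilde) psi = psi_tilde.
A = np.eye(N) - R_tilde
psi = np.linalg.solve(A, psi_tilde)

# The inverse has the form I + R with R = (I - R_tilde)^{-1} - I, and then
# R - R_tilde = R_tilde @ R, which mirrors the relation used in the proof.
R = np.linalg.inv(A) - np.eye(N)
assert np.allclose(A @ psi, psi_tilde)
assert np.allclose(R - R_tilde, R_tilde @ R)
```

The final identity holds exactly for any invertible $\mathbf{I} - \tilde R$; convergence of the truncated solutions to $\psi(x)$ is guaranteed by the finite-rank approximation argument in the proof of Theorem~\ref{thm:main}.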
For the cases (ii) and (iii), we recover the coefficients $\tau_{n-2}$, $\tau_{n-3}$, \ldots, $\tau_1$, $\tau_0$ one-by-one in order to achieve the convergence estimates for the corresponding series. The even order in (ii) and (iii) is considered for definiteness. Similar ideas can be applied to the odd-order differential operators. For simplicity, in all three cases, we choose such boundary conditions that their coefficients cannot be uniquely recovered from the spectral data, and so we do not consider their reconstruction. However, for other types of boundary conditions, the recovery of their coefficients can also be studied similarly to the regular case (see Lemma~2.3.7 in \cite{Yur02}). Let us introduce some notations used throughout this section. Note that the collection $\{ \lambda_{l,k,\varepsilon} \}_{(l,k,\varepsilon) \in V}$ may contain multiple eigenvalues for a fixed $\varepsilon \in \{ 0, 1\}$: $\lambda_{l,k,\varepsilon} = \lambda_{l_0,k_0,\varepsilon}$, $(l,k) \ne (l_0,k_0)$. In order to exclude such values, we define the set $$ V' := \{ (l,k,\varepsilon) \in V \colon \not\exists (l_0,k_0,\varepsilon) \in V \: \text{s.t.} \: (l_0,k_0) < (l,k) \: \text{and} \: \lambda_{l_0,k_0,\varepsilon} = \lambda_{l,k,\varepsilon} \}. $$ In this section, we use the following notations for an index $v = (l,k,\varepsilon) \in V'$: \begin{gather} \label{defPv} \lambda_v := \lambda_{l,k,\varepsilon}, \quad \phi_v(x) := \phi_{\langle 0\rangle}(x, \lambda_v), \quad \tilde P_v(x, \lambda) := (-1)^{\varepsilon} \mathcal N_{\varepsilon}(\lambda_v) \tilde D_{\langle 0 \rangle}(x, \lambda_v, \lambda), \\ \label{defcg} c_v := (-1)^{\varepsilon} \mathcal N_{\varepsilon}(\lambda_v) J_0^{-1}, \quad \tilde g_v(x) := [\tilde \phi_{\langle 0\rangle}^{\star}(x,\lambda_v)]^T. 
\end{gather} Additionally, define the scalar functions \begin{equation} \label{defeta} \tilde \eta_{l,k,\varepsilon}(x) := (-1)^{\varepsilon} e_{k+1}^T \mathcal N_{\varepsilon}(\lambda_{l,k,\varepsilon})J_0^{-1} [\tilde \phi^{\star}_{\langle 0 \rangle}(x, \lambda_{l,k,\varepsilon})]^T, \quad (l,k,\varepsilon) \in V. \end{equation} \subsection{General approach} \label{sec:gen} In terms of the notations \eqref{defPv}, the relation \eqref{contphi} can be rewritten as $$ \phi(x, \lambda) = \tilde \phi(x, \lambda) + \sum_{v \in V'} \phi_v(x) \tilde P_v(x, \lambda). $$ Formal calculations show that $$ \ell_n(\phi(x, \lambda)) = \ell_n(\tilde \phi(x, \lambda)) + \sum_{v \in V'} \ell_n(\phi_v(x) \tilde P_v(x, \lambda)). $$ Recall that $$ \ell_n(\phi(x, \lambda)) = \lambda \phi(x, \lambda), \quad \tilde \ell_n(\tilde \phi(x, \lambda)) = \lambda \tilde \phi(x, \lambda), $$ and, by virtue of \eqref{lnphi}, $$ \ell_n(\phi_v(x)) = \lambda_v \phi_v(x) + \phi_v(x) \mathcal N_0(\lambda_v). $$ Define $\hat \ell_n(y) := \ell_n(y) - \tilde \ell_n(y)$. Consequently, \begin{multline} \label{sml4} \lambda (\phi(x, \lambda) - \tilde \phi(x, \lambda)) - \sum_{v \in V'} \ell_n(\phi_v(x)) \tilde P_v(x, \lambda) = \sum_{v \in V'} \phi_v(x) [(\lambda - \lambda_v) I - \mathcal N_0(\lambda_v)] \tilde P_v(x, \lambda) \\ = \hat \ell_n(\tilde \phi(x, \lambda)) + \sum_{v \in V'} \ell_n(\phi_v(x) \tilde P_v(x, \lambda)) - \sum_{v \in V'} \ell_n(\phi_v(x)) \tilde P_v(x, \lambda). \end{multline} Using \eqref{defPv} and \eqref{DN3}, we derive \begin{align*} [(\lambda - \lambda_v) I - \mathcal N_0(\lambda_v)] \tilde P_v(x,\lambda) = & (-1)^{\varepsilon} \mathcal N_{\varepsilon}(\lambda_v) J_0^{-1} \langle [\tilde \phi_v^{\star}(x)]^T, \tilde \phi(x,\lambda)\rangle \\ & + (-1)^{\varepsilon+1}[\mathcal N_{\varepsilon}(\lambda_v) \mathcal N_1(\lambda_v) + \mathcal N_0(\lambda_v) \mathcal N_{\varepsilon}(\lambda_v)] \tilde D_{\langle 0 \rangle}(x,\lambda_v,\lambda). 
\end{align*} The summation yields \begin{equation} \label{sml2} \sum_{v \in V'} \phi_v(x) [(\lambda - \lambda_v) I - \mathcal N_0(\lambda_v)] \tilde P_v(x, \lambda) = \sum_{v \in V'} \phi_v(x) c_v \langle \tilde g_v(x), \tilde \phi(x,\lambda)\rangle, \end{equation} where $c_v$ and $\tilde g_v(x)$ are defined by \eqref{defcg}. Combining \eqref{sml4} and \eqref{sml2} together, we obtain \begin{equation} \label{sml5} \sum_{v \in V'} \phi_v(x) c_v \langle \tilde g_v(x), \tilde \phi(x,\lambda)\rangle = \hat \ell_n(\tilde \phi(x, \lambda)) + \sum_{v \in V'} \ell_n(\phi_v(x) \tilde P_v(x, \lambda)) - \sum_{v \in V'} \ell_n(\phi_v(x)) \tilde P_v(x, \lambda). \end{equation} Suppose that the differential expression $y^{[n]} = \ell_n(y)$ has the form \eqref{defl}. Then, $\ell_n(y)$ can be formally represented as \begin{equation} \label{deflp} \ell_n(y) = y^{(n)} + \sum_{s = 0}^{n-2} p_s(x) y^{(s)}, \end{equation} where \begin{equation} \label{defps} p_s = \sum_{k = \lceil s/2\rceil}^{\min \{s, \lfloor n/2\rfloor - 1\}} C_k^{s-k} [\tau_{2k}^{(2k-s)} + \tau_{2k+1}^{(2k-s+1)}] + \sum_{k = \lceil (s-1)/2\rceil}^{\min \{ s,\lfloor (n-1)/2 \rfloor\}-1} 2 C_k^{s-k-1} \tau_{2k+1}^{(2k+1-s)}. \end{equation} (We assume that $\tau_{n-1}(x) \equiv 0$.) Suppose that $\tilde \ell_n(y)$ has a form similar to \eqref{deflp} with the coefficients $\tilde p_s(x)$, so \begin{equation} \label{deflh} \hat \ell_n(y) := \sum_{s = 0}^{n-2} \hat p_s(x) y^{(s)}, \quad \hat p_s := p_s - \tilde p_s. \end{equation} Using \eqref{deflp}, we derive \begin{equation} \label{sml1} \sum_{v \in V'} \ell_n(\phi_v \tilde P_v) = \sum_{v \in V'} \ell_n(\phi_v) \tilde P_v + \sum_{k = 1}^n C_n^k \sum_{v \in V'} \phi_v^{(n-k)} \tilde P_v^{(k)} + \sum_{k = 1}^{n-2} p_k \sum_{r = 1}^k C_k^r \sum_{v \in V'} \phi_v^{(k-r)} \tilde P_v^{(r)}. \end{equation} The relations \eqref{defPv} and \eqref{Ddx} imply \begin{equation} \label{Pvdx} \tilde P_v'(x, \lambda) = c_v \tilde g_v(x) \tilde \phi(x,\lambda). 
\end{equation} Substituting \eqref{Pvdx} into \eqref{sml1} and grouping the terms at $\tilde \phi^{(s)}(x, \lambda)$, we obtain \begin{equation} \label{sml3} \sum_{v \in V'} \bigl( \ell_n(\phi_v \tilde P_v) - \ell_n(\phi_v) \tilde P_v \bigr) = \sum_{s = 0}^{n-1} t_{n,s} \tilde \phi^{(s)} + \sum_{s = 0}^{n-3} \sum_{k = s+1}^{n-2} p_k t_{k,s} \tilde \phi^{(s)}, \end{equation} where \begin{equation} \label{deftT} t_{k,s}(x) := \sum_{r = s}^{k-1} C_k^{r+1} C_r^s T_{k-r-1,r-s}(x), \qquad T_{j_1, j_2}(x) := \sum_{v \in V'} \phi_v^{(j_1)}(x) c_v \tilde g_v^{(j_2)}(x). \end{equation} Combining \eqref{sml5}, \eqref{deflh}, and \eqref{sml3} together, we arrive at the relation \begin{multline} \label{sml6} \sum_{v \in V'} \phi_v(x) c_v \langle \tilde g_v(x), \tilde \phi(x, \lambda) \rangle = \sum_{s = 0}^{n-2} \hat p_s(x) \tilde \phi^{(s)}(x, \lambda) + \sum_{s = 0}^{n-1} t_{n,s}(x) \tilde \phi^{(s)}(x, \lambda) \\ + \sum_{s = 0}^{n-3} \sum_{k = s+1}^{n-2} p_k(x) t_{k,s}(x) \tilde \phi^{(s)}(x, \lambda). \end{multline} For definiteness, suppose that $\tilde p_s(x) = 0$, $s = \overline{0,n-2}$. Then $y^{[s]} = y^{(s)}$, $s = \overline{0, n}$, for the problem $\tilde{\mathcal L}$, and so $$ \langle \tilde g_v(x), \tilde \phi(x, \lambda) \rangle = \sum_{s = 0}^{n-1} (-1)^{n-s-1} \tilde g_v^{(n-s-1)}(x) \tilde \phi^{(s)}(x, \lambda). $$ Therefore, combining the terms at $\tilde \phi^{(s)}(x, \lambda)$, we obtain the formulas for finding the coefficients \begin{equation} \label{findp} p_s = (-1)^{n-s-1} \sum_{v \in V'} \phi_v(x) c_v \tilde g_v^{(n-s-1)}(x) - t_{n,s}(x) - \sum_{k = s+1}^{n-2} p_k(x) t_{k,s}(x), \end{equation} where $s = n-2,n-3,\ldots,1,0$. These formulas coincide with the ones for the regular case (see \cite[Lemma~2.3.7]{Yur02}). Using the relations \eqref{findp} and \eqref{defps}, one can find $\tau_{\nu}$ for $\nu = n-2, n-3, \ldots, 1, 0$. However, the formulas \eqref{findp} have been obtained by formal calculations. 
They can be used for reconstruction if the coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$ are so smooth that the series in \eqref{findp} and \eqref{deftT} converge. If the coefficients $\{ \tau_{\nu}\}_{\nu = 0}^{n-2}$ are non-smooth or even distributional, then the convergence of the series is a non-trivial question, which should be investigated separately for different classes of operators. For some classes, this question is considered in Subsections~\ref{sec:3}--\ref{sec:evenW}. \subsection{Series convergence} \label{sec:series} In this subsection, we prove the following auxiliary lemma. \begin{lem} \label{lem:series} Suppose that $j_1, j_2 \in \{ 0, 1, \ldots, n-1 \}$ and $\{ l^{(j_1 + j_2)} \xi_l \} \in l_2$. Then, there exist constants $\{ A_v \}_{v \in V'}$ such that the series \begin{equation} \label{ser} \sum_{v \in V'} (\phi_v^{[j_1]}(x) c_v \tilde g_v^{[j_2]}(x) - A_v) \end{equation} converges in $L_2(0,1)$. Moreover, if $\{ l^{(j_1 + j_2)} \xi_l\} \in l_1$, then the series \begin{equation} \label{defT} \sum_{v \in V'} \phi_v^{[j_1]}(x) c_v \tilde g_v^{[j_2]}(x) \end{equation} converges absolutely and uniformly on $[0,1]$. \end{lem} Here and below, the quasi-derivatives for $\phi_v(x)$ are generated by the matrix $F(x)$ and for $\tilde g_v(x)$, by $\tilde F^{\star}(x)$. In order to prove Lemma~\ref{lem:series}, we need to formulate preliminary propositions. Consider the sector $\Gamma_1 = \left\{ \rho \in \mathbb C \colon 0 < \arg \rho < \frac{\pi}{n} \right\}$. Denote by $\{ \omega_k \}_{k = 1}^n$ the roots of the equation $\omega^n = 1$ numbered so that \begin{equation*} \mbox{Re} \, (\rho \omega_1) < \mbox{Re} \, (\rho \omega_2) < \dots < \mbox{Re} \, (\rho \omega_n), \quad \rho \in \Gamma_1. \end{equation*} In addition, define the extended sector \begin{equation*} \Gamma_{1, h} := \left\{ \rho \in \mathbb C \colon \rho + h \exp\bigl( \tfrac{\mathrm{i} \pi}{2 n}\bigr) \in \Gamma_1 \right\}, \quad h > 0. 
\end{equation*} In the proof of Lemma~\ref{lem:series}, we need the following proposition on the Birkhoff-type solutions of equation \eqref{eqv} with certain asymptotic behavior as $|\rho| \to \infty$. \begin{prop}[\cite{SS20}] \label{prop:y} For some $\rho^* > 0$, equation \eqref{eqv} has a fundamental system of solutions $\{ y_k(x, \rho) \}_{k = 1}^n$ whose quasi-derivatives $y_k^{[j]}(x, \rho)$, $k = \overline{1,n}$, $j = \overline{0, n-1}$, are continuous for $x \in [0, 1]$, $\rho \in \overline{\Gamma}_{1,h}$, $|\rho| \ge \rho^*$, analytic in $\rho \in \Gamma_{1,h}$, $|\rho| > \rho^*$ for each fixed $x \in [0, 1]$, and satisfy the relation $$ y_k^{[j]}(x, \rho) = (\rho \omega_k)^j \exp(\rho \omega_k x) (1 + \zeta_{jk}(x, \rho)), $$ where $$ \max_{j,k,x}|\zeta_{jk}(x, \rho)| \le C(\Upsilon(\rho) + |\rho|^{-1}), \quad \rho \in \overline{\Gamma}_{1,h}, \, |\rho| \ge \rho^*, $$ and $\Upsilon(\rho)$ fulfills the condition $\{ \Upsilon(\rho_l) \} \in l_2$ for any non-condensing sequence $\{ \rho_l \} \subset \Gamma_{1,h}$. \end{prop} Consider the strip $\mathcal S_0$ defined by \eqref{defSj}. Clearly, for a suitable choice of $h$ and $c$, we have $\mathcal S_0 \subset \Gamma_{1,h}$ and $\lambda_{l,k,\varepsilon} = \rho_{l,k,\varepsilon}^n$, $\rho_{l,k,\varepsilon} \in \mathcal S_0$ for even $(n-k)$ and for sufficiently large $l$. Further in this section, we confine ourselves to considering even $(n-k)$, since the case of odd $(n-k)$ is similar. \begin{prop} \label{prop:Phi} Suppose that $k \in \{ 1, 2, \ldots, n-1 \}$ and $(n-k)$ is even. 
Then the Weyl solution can be expanded as $$ \Phi_{k+1}(x, \lambda) = \sum_{s = 1}^n b_{s,k+1}(\rho) y_s(x, \rho), \quad \lambda = \rho^n, \quad \rho \in \mathcal S_0, $$ where the coefficients $b_{s,k+1}(\rho)$ are analytic in $\rho \in \mathcal S_0$, $|\rho| \ge \rho^*$, and fulfill the estimate \begin{equation} \label{estb} b_{s,k+1}(\rho) = O\left(\rho^{-p_{k+1,0}} \stackrel{if \: s > k+1}{\times} \exp(\rho(\omega_{k+1} - \omega_s)) \right). \end{equation} \end{prop} \begin{proof} The properties of the coefficients $b_{s,k+1}(\rho)$ follow from certain formulas for these coefficients obtained in the proof of Lemma~3 in \cite{Bond21}. \end{proof} \begin{prop} \label{prop:ser} Let $z$ be a non-zero complex number with $\mbox{Re}\, z \le 0$, and let $\{ \varkappa_l \}_{l\ge 1} \in l_2$. Then the series $\sum\limits_{l\ge 1} \varkappa_l \exp(zlx)$ converges in $L_2(0,1)$. \end{prop} \begin{proof}[Proof of Lemma~\ref{lem:series}] Let $j_1, j_2 \in \{ 0,1,\ldots,n-1\}$ be fixed. In order to prove the convergence of the series \eqref{ser} and \eqref{defT}, it is sufficient to consider their terms for $v = (l,k,\varepsilon)$ with sufficiently large $l$. For technical simplicity, let us assume that $\lambda_{l_1,k_1,\varepsilon} \ne \lambda_{l_2,k_2,\varepsilon}$ for any sufficiently large $l_1, l_2$ such that $l_1 \ne l_2$. 
In view of Corollary~\ref{cor:N}, we have \begin{gather} \label{smser} \sum_{v: \: l \: \text{is fixed}} \phi_v^{[j_1]}(x) c_v \tilde g_v^{[j_2]}(x) = \sum_{k = 1}^{n-1} (-1)^{n-1-p_{k,0}} \mathscr Z_{l,k}(x), \\ \nonumber \mathscr Z_{l,k}(x) := \sum_{\varepsilon = 0,1} (-1)^{\varepsilon} \beta_{l,k,\varepsilon}\varphi_{l,k,\varepsilon}^{[j_1]}(x) \tilde \varphi^{\star [j_2]}_{l,n-k,\varepsilon}(x), \end{gather} where \begin{gather*} \varphi_{l,k,\varepsilon}^{[j_1]}(x) = \Phi^{[j_1]}_{k+1}(x, \lambda_{l,k,\varepsilon}), \quad \tilde \varphi_{l,n-k,\varepsilon}^{\star [j_2]}(x) = \tilde\Phi^{\star [j_2]}_{n-k+1}(x, \lambda_{l,k,\varepsilon}), \quad \beta_{l,k,0} := \beta_{l,k}, \quad \beta_{l,k,1} := \tilde \beta_{l,k}, \end{gather*} Fix $k \in \{ 1, 2, \ldots, n-1\}$ such that $(n-k)$ is even. Then, by Proposition~\ref{prop:Phi}, we have \begin{align} \nonumber \Phi_{k+1}^{[j_1]}(x, \lambda_{l,k,\varepsilon}) & = \sum_{s_1 = 1}^n b_{s_1,k+1}(\rho_{l,k,\varepsilon}) y_{s_1}^{[j_1]}(x, \rho_{l,k,\varepsilon}), \\ \label{defal} \tilde\Phi^{\star [j_2]}_{n-k+1}(x, \lambda_{l,k,\varepsilon}) & = \sum_{s_2 = 1}^n \tilde b_{n-s_2+1,n-k+1}^{\star}(\rho_{l,k,\varepsilon}) \tilde y_{n-s_2+1}^{\star [j_2]}(x, \rho_{l,k,\varepsilon}). \end{align} Using the above relations and Proposition~\ref{prop:y}, we obtain \begin{align} \nonumber \mathscr Z_{l,k}(x) & = \sum_{s_1 = 1}^n \sum_{s_2 = 1}^n Z_{l,k,s_1,s_2}(x), \\ \nonumber Z_{l,k,s_1,s_2}(x) & = \sum_{\varepsilon =0,1} \alpha_{l,k,s_1,s_2,\varepsilon} \exp(\rho_{l,k,\varepsilon}(\omega_{s_1} - \omega_{s_2})x)(1 + \zeta_{s_1,j_1}(x,\rho_{l,k,\varepsilon})) (1 + \tilde \zeta^{\star}_{n-s_2+1,j_2}(x, \rho_{l,k,\varepsilon})), \\ \label{defalpha} \alpha_{l,k,s_1,s_2,\varepsilon} & := \beta_{l,k,\varepsilon} b_{s_1,k+1}(\rho_{l,k,\varepsilon}) b_{n-s_2+1,n-k+1}^{\star}(\rho_{l,k,\varepsilon}) (\omega_{s_1})^{j_1} (-\omega_{s_2})^{j_2} \rho_{l,k,\varepsilon}^{j_1 + j_2}. 
\end{align} Consider the sums \begin{align*} Z_{l,k,s_1,s_2}(x) & = Z^1_{l,k,s_1,s_2}(x) + Z^2_{l,k,s_1,s_2}(x) + Z^3_{l,k,s_1,s_2}(x) + Z^4_{l,k,s_1,s_2}(x), \\ Z^1_{l,k,s_1,s_2}(x) & := \sum_{\varepsilon = 0,1} \alpha_{l,k,s_1,s_2,\varepsilon} \exp(\rho_{l,k,\varepsilon}(\omega_{s_1} - \omega_{s_2})x), \\ Z^2_{l,k,s_1,s_2}(x) & := \sum_{\varepsilon = 0,1} \alpha_{l,k,s_1,s_2,\varepsilon} \exp(\rho_{l,k,\varepsilon}(\omega_{s_1} - \omega_{s_2})x) \zeta_{s_1,j_1}(x, \rho_{l,k,\varepsilon}), \\ Z^3_{l,k,s_1,s_2}(x) & := \sum_{\varepsilon = 0,1} \alpha_{l,k,s_1,s_2,\varepsilon} \exp(\rho_{l,k,\varepsilon}(\omega_{s_1} - \omega_{s_2})x) \tilde \zeta^{\star}_{n-s_2 + 1,j_2}(x, \rho_{l,k,\varepsilon}), \\ Z^4_{l,k,s_1,s_2}(x) & := \sum_{\varepsilon = 0,1} \alpha_{l,k,s_1,s_2,\varepsilon} \exp(\rho_{l,k,\varepsilon}(\omega_{s_1} - \omega_{s_2})x) \zeta_{s_1,j_1}(x, \rho_{l,k,\varepsilon}) \tilde \zeta^{\star}_{n-s_2 + 1,j_2}(x, \rho_{l,k,\varepsilon}). \end{align*} Thus, it is sufficient to study the convergence of the series $\sum\limits_{l \ge l_0} Z^{\nu}_{l,k,s_1,s_2}(x)$ for fixed $k$, $s_1$, $s_2$, and $\nu = \overline{1,4}$. The asymptotics \eqref{asymptla} and \eqref{asymptbe} imply \begin{equation} \label{smest1} |\rho_{l,k,\varepsilon}| \le C l, \quad |\beta_{l,k,\varepsilon}| \le C l^{n-1+p_{k+1,0}-p_{k,0}}. \end{equation} Using \eqref{defalpha} together with the estimates \eqref{smest1} and \eqref{estb}, we obtain $$ |\alpha_{l,k,s_1,s_2,\varepsilon}| \le C l^{j_1 + j_2} \stackrel{if \: s_1 > k+1}{\times} \exp(\mbox{Re}\,(\omega_{k+1} - \omega_{s_1})r_k l) \stackrel{if \: s_2 < k}{\times} \exp(\mbox{Re}\,(\omega_{s_2} - \omega_k) r_k l), $$ where $r_k := \frac{\pi}{\sin \tfrac{\pi k}{n}}$. The relation \eqref{relxi} yields $$ |\rho_{l,k,0} - \rho_{l,k,1}| \le C \xi_l, \quad |\beta_{l,k,0} - \beta_{l,k,1}| \le C \xi_l l^{n-1+p_{k+1,0}-p_{k,0}}. 
$$ Since the functions $b_{s,k+1}(\rho)$ are analytic and satisfy \eqref{estb}, we obtain $$ |b_{s,k+1}(\rho_{l,k,0}) - b_{s,k+1}(\rho_{l,k,1})| \le C \xi_l l^{-p_{k+1,0}} \stackrel{if \: s > k+1}{\times} \exp(\mbox{Re}\,(\omega_s - \omega_k) r_k l). $$ It follows from \eqref{defalpha} that \begin{align*} \alpha_{l,k,s_1,s_2,0} & - \alpha_{l,k,s_1,s_2,1} = (\beta_{l,k,0} - \beta_{l,k,1}) b_{s_1,k+1}(\rho_{l,k,0}) b_{n-s_2+1,n-k+1}^{\star}(\rho_{l,k,0}) (\omega_{s_1})^{j_1} (-\omega_{s_2})^{j_2} \rho_{l,k,0}^{j_1 + j_2} \\ & + \beta_{l,k,1} (b_{s_1,k+1}(\rho_{l,k,0}) - b_{s_1,k+1}(\rho_{l,k,1})) b_{n-s_2+1,n-k+1}^{\star}(\rho_{l,k,0}) (\omega_{s_1})^{j_1} (-\omega_{s_2})^{j_2} \rho_{l,k,0}^{j_1 + j_2} \\ & + \beta_{l,k,1} b_{s_1,k+1}(\rho_{l,k,1}) (b_{n-s_2+1,n-k+1}^{\star}(\rho_{l,k,0}) - b_{n-s_2+1,n-k+1}^{\star}(\rho_{l,k,1})) (\omega_{s_1})^{j_1} (-\omega_{s_2})^{j_2} \rho_{l,k,0}^{j_1 + j_2} \\ & + \beta_{l,k,1} b_{s_1,k+1}(\rho_{l,k,1}) b_{n-s_2+1,n-k+1}^{\star}(\rho_{l,k,1}) (\omega_{s_1})^{j_1} (-\omega_{s_2})^{j_2} (\rho_{l,k,0}^{j_1 + j_2} - \rho_{l,k,1}^{j_1 + j_2}). \end{align*} Consequently, we estimate \begin{gather*} |\alpha_{l,k,s_1,s_2,0} - \alpha_{l,k,s_1,s_2,1}| \le C l^{j_1 + j_2} \xi_l \stackrel{if \: s_1 > k+1}{\times} \exp(\mbox{Re}\,(\omega_{k+1} - \omega_{s_1})r_k l) \\ \stackrel{if \: s_2 < k}{\times} \exp(\mbox{Re}\,(\omega_{s_2} - \omega_k) r_k l). \end{gather*} Suppose that $\{ l^{j_1 + j_2} \xi_l \} \in l_2$. Consider the cases: \begin{enumerate} \item If $s_1 = s_2 \not\in \{ k, k+1 \}$, then the terms of the series $\sum\limits_{l \ge l_0} Z^1_{l,k,s_1,s_2}(x)$ decay exponentially, so the series converges absolutely. \item If $s_1 = s_2 \in \{ k, k+1 \}$, then the series $\sum\limits_{l \ge l_0} (\alpha_{l,k,s_1,s_2,0} - \alpha_{l,k,s_1,s_2,1})$ does not necessarily converge. 
\item If $s_1 \ne s_2$, then \begin{align*} Z^1_{l,k,s_1,s_2}(x) = & ((\alpha_{l,k,s_1,s_2,0} - \alpha_{l,k,s_1,s_2,1}) \\ & + \alpha_{l,k,s_1,s_2,1} [(\rho_{l,k,0} - \rho_{l,k,1})(\omega_{s_1} - \omega_{s_2}) x + O(\xi_l^2)])\exp(\rho_{l,k,0}(\omega_{s_1} - \omega_{s_2})x). \end{align*} Consequently, the series $\sum\limits_{l \ge l_0} Z^1_{l,k,s_1,s_2}(x)$ converges in $L_2(0,1)$ by virtue of Proposition~\ref{prop:ser}. \end{enumerate} Using Proposition~\ref{prop:y}, we show that \begin{align*} & |\zeta_{s_1,j_1}(x, \rho_{l,k,\varepsilon})| \le C (\Upsilon(\rho_{l,k,\varepsilon}) + l^{-1}), \\ & |\zeta_{s_1,j_1}(x, \rho_{l,k,0}) - \zeta_{s_1,j_1}(x, \rho_{l,k,1})| \le C \xi_l (\Upsilon(\rho_{l,k,0}^*) + l^{-1}), \end{align*} where $\Upsilon(\rho_{l,k,0}^*) = \max\limits_{|\rho - \rho_{l,k,0}|\le \delta} \Upsilon(\rho)$. Note that $\{ \Upsilon(\rho^*_{l,k,0}) \} \in l_2$. Consequently, the series $\sum\limits_{l \ge l_0} Z^2_{l,k,s_1,s_2}(x)$ converges absolutely and uniformly on $[0,1]$. The proof for $Z^3$ and $Z^4$ is analogous. Thus, the regularized series $\sum\limits_{l \ge l_0} (\mathscr Z_{l,k}(x) - A_{l,k})$ converges in $L_2(0,1)$ with the constants $$ A_{l,k} = \sum_{s = k,k+1} (\alpha_{l,k,s,s,0} - \alpha_{l,k,s,s,1}). $$ Using the arguments above, we obtain the estimate $$ |\mathscr Z_{l,k}(x)| \le C l^{j_1 + j_2} \xi_l. $$ Hence, in the case $\{ l^{j_1 +j_2} \xi_l \} \in l_1$, the series $\sum\limits_{l \ge l_0} \mathscr Z_{l,k}(x)$ converges absolutely and uniformly with respect to $x \in [0,1]$. Taking \eqref{smser} into account, we arrive at the assertion of the lemma. \end{proof} \subsection{Case $n = 3$.} \label{sec:3} Consider the differential expression $$ \ell_3(y) = y^{(3)} + (\tau_1(x) y)' + \tau_1(x) y' + \tau_0(x) y, \quad x \in (0,1), $$ where $\tau_1 \in L_2(0,1)$ and $\tau_0 \in W_2^{-1}(0,1)$, that is, $\tau_0 = \sigma_0'$, $\sigma_0 \in L_2(0,1)$. 
The associated matrix has the form (see, e.g., \cite{MS19}): \begin{equation} \label{F3} F(x) = \begin{bmatrix} 0 & 1 & 0 \\ -(\sigma_0 + \tau_1) & 0 & 1 \\ 0 & (\sigma_0 - \tau_1) & 0 \end{bmatrix}, \end{equation} so $y^{[1]} = y'$, $y^{[2]} = y'' + (\sigma_0 + \tau_1) y$, $y^{[3]} = \ell_3(y)$. Suppose that $p_{s,0} = s-1$, $p_{s,1} = 3-s$, $s = \overline{1,3}$, in the linear forms \eqref{defU}. Using the technique of \cite{Bond22-asympt}, we obtain the eigenvalue asymptotics \begin{equation} \label{asymptla3} \lambda_{l,k} = (-1)^{k+1}\left( \frac{2 \pi}{\sqrt 3} \Bigl( l + \frac{1}{6} + \frac{(-1)^k}{\pi^2 l} \int_0^1 \tau_1(t) \, dt + \frac{\varkappa_{l,k}}{l}\Bigr)\right)^3, \quad \{ \varkappa_{l,k} \} \in l_2, \quad l \ge 1, \: k = 1,2. \end{equation} Assume that $\mathcal L \in W$. It can be easily shown that, if $\Lambda_1 \cap \Lambda_2 = \varnothing$, then the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ do not depend on the boundary condition coefficients $u_{s,j,a}$. Therefore, let us assume that $U_0 = I$, $U_1 = [\delta_{k,4-j}]_{k,j = 1}^3$. Consider the problems $\mathcal L = (F(x), U_0, U_1) \in W$ and $\tilde {\mathcal L} = (\tilde F(x), U_0, U_1) \in W$, where $\tilde F(x)$ is the matrix function associated with the differential expression $\tilde \ell_3(y)$ having the coefficients $\tilde \tau_1 \in L_2(0,1)$ and $\tilde \tau_0 = \tilde \sigma_0' \in W_2^{-1}(0,1)$. Under the above assumptions, the following uniqueness theorem for the solution of Problem~\ref{prob:sd-coef} is valid. \begin{thm} \label{thm:uniq3} If $\Lambda = \tilde \Lambda$ and $\mathcal N(\lambda_0) = \tilde{\mathcal N}(\lambda_0)$ for all $\lambda_0 \in \Lambda$, then $\tau_1(x) = \tilde \tau_1(x)$ and $\sigma_0(x) = \tilde \sigma_0(x) + const$ a.e. on $(0,1)$. 
Thus, the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ uniquely specify $\tau_1 \in L_2(0,1)$ and $\tau_0 \in W_2^{-1}(0,1)$. \end{thm} In order to prove Theorem~\ref{thm:uniq3}, we need the following auxiliary lemma, which is valid for $n$ not necessarily equal to $3$. \begin{lem} \label{lem:Pconst} If $\mathcal L, \tilde{\mathcal L} \in W$, $\Lambda = \tilde \Lambda$ and $\mathcal N(\lambda_0) = \tilde{\mathcal N}(\lambda_0)$ for all $\lambda_0 \in \Lambda$, then the matrix of spectral mappings $\mathcal P(x, \lambda)$ defined by \eqref{defP} does not depend on $\lambda$. \end{lem} \begin{proof} It follows from \eqref{defP} and \eqref{PJP} that $$ \mathcal P(x, \lambda) = \Phi(x, \lambda) J_0^{-1} [\tilde \Phi^{\star}(x, \lambda)]^T J. $$ Using \eqref{relN1} and \eqref{relNPhi}, we derive for $\lambda_0 \in \Lambda$: \begin{align*} \mathcal P_{\langle -2 \rangle}(x, \lambda_0) J^{-1} & = \Phi_{\langle -1 \rangle}(x, \lambda_0) J_0^{-1} [\tilde \Phi_{\langle -1 \rangle}^{\star}(x, \lambda_0)]^T \\ & = \Phi_{\langle 0 \rangle}(x, \lambda_0) \mathcal N(\lambda_0) J_0^{-1} [\mathcal N^{\star}(\lambda_0)]^T[\tilde \Phi_{\langle 0 \rangle}^{\star}(x, \lambda_0)]^T = 0, \\ \mathcal P_{\langle -1 \rangle}(x, \lambda_0) J^{-1} & = \Phi_{\langle -1 \rangle}(x, \lambda_0) J_0^{-1} [\tilde \Phi_{\langle 0 \rangle}^{\star}(x, \lambda_0)]^T + \Phi_{\langle 0 \rangle}(x, \lambda_0) J_0^{-1} [\tilde \Phi_{\langle -1 \rangle}^{\star}(x, \lambda_0)]^T \\ & = \Phi_{\langle 0 \rangle}(x, \lambda_0) (\mathcal N(\lambda_0) J_0^{-1} + J_0^{-1} [\mathcal N^{\star}(\lambda_0)]^T) [\tilde \Phi_{\langle 0 \rangle}^{\star}(x, \lambda_0)]^T = 0. \end{align*} Hence, $\mathcal P(x, \lambda)$ is entire in $\lambda$. Using the asymptotics \eqref{asymptP} and Liouville's theorem, we conclude that $\mathcal P(x,\lambda) \equiv \mathcal P(x)$, $x \in [0,1]$. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:uniq3}] This proof is similar to the proof of Theorem~2 in \cite{Bond21}, so we outline it briefly. By Lemma~\ref{lem:Pconst}, $\mathcal P(x, \lambda) \equiv \mathcal P(x)$. Furthermore, $\mathcal P(x)$ is a unit lower-triangular matrix. One can easily show that \begin{equation} \label{PF} \mathcal P'(x) + \mathcal P(x) \tilde F(x) = F(x) \mathcal P(x), \quad x\in (0,1), \end{equation} where the matrix functions $F(x)$ and $\tilde F(x)$ have the form \eqref{F3}. In the element-wise form, \eqref{PF} implies $\mathcal P_{2,1} = \mathcal P_{3,2} = \mathcal P_{3,1}' = 0$, $\mathcal P_{3,1} = \hat \sigma_0 \pm \hat \tau_1$. Hence, $\hat \tau_1 = 0$, $\hat \sigma_0 = const$ in $L_2(0,1)$, which concludes the proof. \end{proof} Now suppose that the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ of the problem $\mathcal L = (F(x), U_0, U_1)$ are given. Using the asymptotics \eqref{asymptla3}, one can find the number $\tilde \tau_1 := \int_0^1 \tau_1(t) \, dt$. Put \begin{equation} \label{F3t} \tilde F(x) = \begin{bmatrix} 0 & 1 & 0 \\ -\tilde \tau_1 & 0 & 1 \\ 0 & - \tilde \tau_1 & 0 \end{bmatrix}, \end{equation} and $\tilde{\mathcal L} := (\tilde F(x), U_0, U_1)$. Clearly, $\tilde F^{\star}(x) = \tilde F(x)$. Consequently, in our case, $$ \langle \tilde g_v, \tilde \phi \rangle = \tilde g''_v \tilde \phi - \tilde g_v' \tilde \phi' + \tilde g_v \tilde \phi'' + 2 \tilde \tau_1 \tilde g_v \tilde \phi. $$ Hence, the relation \eqref{sml5} takes the form \begin{multline*} T_{0,0} \tilde \phi'' - T_{0,1} \tilde \phi' + (T_{0,2} + 2 \tilde \tau_1 T_{0,0}) \tilde \phi \\ = 2 \hat \tau_1 \tilde \phi' + (\hat \tau_1' + \hat \tau_0) \tilde \phi + T_{0,0} \tilde \phi'' + (3 T_{1,0} + 2 T_{0,1}) \tilde \phi' + (3 T_{2,0} + 3 T_{1,1} + T_{0,2} + 2 \tau_1 T_{0,0}) \tilde \phi, \end{multline*} where $T_{j_1,j_2}$ were defined in \eqref{deftT}. 
Grouping the terms at $\tilde \phi'(x, \lambda)$ and $\tilde \phi(x, \lambda)$, we derive the formulas \begin{align*} \tau_1 & = \tilde \tau_1 -\frac{3}{2} \sum_{v \in V'} (\phi_v' c_v \tilde g_v + \phi_v c_v \tilde g_v'),\\ \tau_0 & = -\hat \tau_1' - 3 \frac{d}{dx}\left( \sum_{v \in V'} \phi_v' c_v \tilde g_v \right) - 2 \hat \tau_1 \sum_{v \in V'} \phi_v c_v \tilde g_v. \end{align*} By virtue of Corollary~1.3 and Theorem~6.4 from \cite{Bond22-asympt} and \eqref{relxi}, we have $\{ l \xi_l \} \in l_2$. Applying Lemma~\ref{lem:series} to prove the series convergence in suitable spaces and using the notations \eqref{defeta}, we arrive at the following reconstruction formulas for $\tau_1$ and $\tau_0$. \begin{thm} Let $\mathcal L$ and $\tilde{\mathcal L}$ be the problems defined above in this section. The following relations hold: \begin{align} \label{rectau1} \tau_1 & = \tilde \tau_1 -\frac{3}{2} \sum_{(l,k,\varepsilon) \in V} (\varphi'_{l,k,\varepsilon} \tilde \eta_{l,k,\varepsilon} + \varphi_{l,k,\varepsilon} \tilde \eta'_{l,k,\varepsilon}), \\ \label{rectau0} \tau_0 & = -\hat \tau_1' - 3 \frac{d}{dx}\left( \sum_{(l,k,\varepsilon)\in V} \varphi'_{l,k,\varepsilon} \tilde \eta_{l,k,\varepsilon} \right) - 2 \hat \tau_1 \sum_{(l,k,\varepsilon)\in V} \varphi_{l,k,\varepsilon} \tilde \eta_{l,k,\varepsilon}. \end{align} The series in \eqref{rectau1} converges in $L_2(0,1)$. In \eqref{rectau0}, the series in brackets converges in $L_2(0,1)$ with regularization, and the second series converges absolutely and uniformly with respect to $x \in [0,1]$, so the right-hand side of \eqref{rectau0} belongs to $W_2^{-1}(0,1)$. \end{thm} Following the proof of Lemma~\ref{lem:series}, one can easily show that the regularization constants $A_v$ for the series in \eqref{rectau1} equal zero. The regularization constants in \eqref{rectau0} are omitted because of the differentiation. Finally, we arrive at the following algorithm for solving Problem~\ref{prob:sd-coef}. 
\begin{alg} Suppose that the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ of the problem $\mathcal L = \mathcal L(F(x), U_0, U_1) \in W$ are given. Here $F(x)$ is defined by \eqref{F3}, $U_0 = I$, $U_1 = [\delta_{k,4-j}]_{k,j = 1}^3$. We have to find $\tau_1$ and $\tau_0$. \begin{enumerate} \item Find $\tilde \tau_1 = \int_0^1 \tau_1(x) \,dx$ from the eigenvalue asymptotics \eqref{asymptla3}. \item Take the model problem $\tilde {\mathcal L} = \mathcal L(\tilde F(x), U_0, U_1)$, where $\tilde F(x)$ is defined by \eqref{F3t}. \item Implement the steps 2--6 of Algorithm~\ref{alg:1} to obtain $\{ \varphi_{l,k,\varepsilon}(x) \}_{(l,k,\varepsilon) \in V}$. \item Using the problem $\tilde {\mathcal L}$ and the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$, $\{ \tilde \lambda_0, \tilde{\mathcal N}(\tilde \lambda_0) \}_{\tilde \lambda_0 \in \tilde \Lambda}$, construct the functions $\{ \tilde \eta_{l,k,\varepsilon}(x) \}_{(l,k,\varepsilon) \in V}$ by \eqref{defeta}. \item Construct $\tau_1(x)$ and $\tau_0(x)$ by \eqref{rectau1} and \eqref{rectau0}, respectively. \end{enumerate} \end{alg} \subsection{Case of even $n$, $\tau_{\nu} \in L_2(0,1)$.} \label{sec:evenL} Consider the differential expression \eqref{defl} with even $n$ and $\tau_{\nu} \in L_2(0,1)$, $\nu = \overline{0, n-2}$. The associated matrix $F(x) = [f_{k,j}(x)]_{k,j = 1}^n$ is given by the relations \begin{align*} & f_{n-k, k+1} = -\tau_{2k}, \quad k = \overline{0,\lfloor n/2\rfloor-1}, \\ & f_{n-k-1,k+1} = f_{n-k,k+2} = -\tau_{2k+1}, \quad k = \overline{0,\lfloor n/2 \rfloor - 2}, \end{align*} and all the other elements are defined by $f_{k,j} = \delta_{k,j-1}$. 
For instance, $$ \ell_6(y) = y^{(6)} + (\tau_4 y'')'' + [(\tau_3 y'')' + (\tau_3 y')''] + (\tau_2 y')' + [(\tau_1 y)' + \tau_1 y'] + \tau_0 y, $$ and the corresponding associated matrix is $$ F(x) = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & -\tau_3 & -\tau_4 & 0 & 1 & 0 \\ -\tau_1 & -\tau_2 & -\tau_3 & 0 & 0 & 1 \\ -\tau_0 & -\tau_1 & 0 & 0 & 0 & 0 \end{bmatrix}. $$ Suppose that $U_0 = I$, $U_1 = [\delta_{k,n-j+1}]_{k,j = 1}^n$, $\mathcal L = (F(x), U_0, U_1) \in W$ and $\tilde{\mathcal L} = (\tilde F(x), U_0, U_1)$, where $\tilde F(x)$ is constructed in the same way as $F(x)$ by different coefficients $\tilde \tau_{\nu} \in L_2(0,1)$, $\nu = \overline{0, n-2}$. The following uniqueness theorem is proved similarly to Theorem~\ref{thm:uniq3}. \begin{thm} If $\Lambda = \tilde \Lambda$ and $\mathcal N(\lambda_0) = \tilde {\mathcal N}(\lambda_0)$ for all $\lambda_0 \in \Lambda$, then $\tau_{\nu}(x) = \tilde \tau_{\nu}(x)$ a.e. on $(0,1)$, $\nu = \overline{0,n-2}$. Thus, the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ uniquely specify $\tau_{\nu} \in L_2(0,1)$, $\nu = \overline{0,n-2}$. \end{thm} Further, we need the following proposition, which is an immediate corollary of Theorems~1.2 and~6.4 from \cite{Bond22-asympt} for the problems $\mathcal L$, $\tilde{\mathcal L}$ defined above in this subsection and the sequence $\{ \xi_l \}$ defined by \eqref{defxi} (see also Example~5.2 in \cite{Bond22-asympt}). \begin{prop}[\cite{Bond22-asympt}] \label{prop:difL} Suppose that $\nu_0 \in \{ 1, 2, \ldots, n-1 \}$, $\tau_{\nu}(x) = \tilde \tau_{\nu}(x)$ a.e. on $(0,1)$ for $\nu = \overline{\nu_0, n-2}$, and $\int_0^1 \hat \tau_{\nu_0-1}(x) \, dx = 0$. Then $\{ l^{n - \nu_0} \xi_l \} \in l_2$. \end{prop} We will construct the solution of Problem~\ref{prob:sd-coef} step-by-step. 
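The entry rules above are mechanical and can be verified by direct computation. The following sketch (a hypothetical Python helper, not part of the paper) builds the associated matrix for even $n$ from sample numeric coefficient values and reproduces the $n = 6$ matrix displayed above:

```python
# Sketch: instantiate the entry rules for the associated matrix of ell_n(y), n even:
#   f_{n-k, k+1} = -tau_{2k},                      k = 0, ..., n/2 - 1,
#   f_{n-k-1, k+1} = f_{n-k, k+2} = -tau_{2k+1},   k = 0, ..., n/2 - 2,
#   f_{k,j} = delta_{k, j-1}                       otherwise.
# Indices below are 0-based, so the paper's f_{k,j} is F[k-1][j-1].

def associated_matrix(n, tau):
    """Associated matrix F for even n from coefficient values tau[0], ..., tau[n-2]."""
    m = n // 2
    # default part: superdiagonal of ones, f_{k,j} = delta_{k, j-1}
    F = [[1.0 if j == k + 1 else 0.0 for j in range(n)] for k in range(n)]
    for k in range(m):                       # f_{n-k, k+1} = -tau_{2k}
        F[n - k - 1][k] = -tau[2 * k]
    for k in range(m - 1):                   # f_{n-k-1, k+1} = f_{n-k, k+2} = -tau_{2k+1}
        F[n - k - 2][k] = -tau[2 * k + 1]
        F[n - k - 1][k + 1] = -tau[2 * k + 1]
    return F

# sample values tau_0 = 1, ..., tau_4 = 5 for the n = 6 example
F6 = associated_matrix(6, [1.0, 2.0, 3.0, 4.0, 5.0])
assert F6[3] == [0.0, -4.0, -5.0, 0.0, 1.0, 0.0]    # [0, -tau_3, -tau_4, 0, 1, 0]
assert F6[4] == [-2.0, -3.0, -4.0, 0.0, 0.0, 1.0]   # [-tau_1, -tau_2, -tau_3, 0, 0, 1]
assert F6[5] == [-1.0, -2.0, 0.0, 0.0, 0.0, 0.0]    # [-tau_0, -tau_1, 0, 0, 0, 0]
```

The first three rows come out as the pure superdiagonal part, in agreement with the displayed matrix.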
\medskip \textsc{Step 1.} Take the model problem $\tilde {\mathcal L} = \tilde {\mathcal L}^{(1)} := (\tilde F^{(1)}(x), U_0, U_1)$, where $\tilde F^{(1)}(x)$ is the associated matrix for the differential expression $\tilde l_n^{(1)}(y)$ with the coefficients $\tilde \tau_{n-2} := \int_0^1 \tau_{n-2}(x) \, dx$, $\tilde \tau_\nu := 0$, $\nu = \overline{0,n-3}$. The coefficient $\int_0^1 \tau_{n-2}(x) \, dx$ can be found from the eigenvalue asymptotics similarly to the case of Subsection~\ref{sec:3}. Using the terms of \eqref{sml5} at $\tilde \phi^{(n-2)}(x, \lambda)$, we derive the reconstruction formula $$ \tau_{n-2} = \tilde \tau_{n-2} - t_{n,n-2} - T_{0,1} = \tilde \tau_{n-2} - n \sum_{v \in V'} (\phi_v' c_v \tilde g_v + \phi_v c_v \tilde g_v'). $$ By virtue of Proposition~\ref{prop:difL}, $\{ l \xi_l \} \in l_2$. Therefore, Lemma~\ref{lem:series} implies that the obtained series converges in $L_2(0,1)$ with the regularization constants $A_v = 0$. \smallskip \textsc{Step 2.} Take the model problem $\tilde {\mathcal L} = \tilde {\mathcal L}^{(2)} := (\tilde F^{(2)}(x), U_0, U_1)$, where $\tilde F^{(2)}(x)$ is the associated matrix for the differential expression $\tilde l_n^{(2)}(y)$ with the coefficients $\tilde \tau_{n-2} := \tau_{n-2}$, $\tilde \tau_{n-3} := \int_0^1 \tau_{n-3}(x) \, dx$, $\tilde \tau_\nu := 0$, $\nu = \overline{0,n-4}$. The coefficient $\int_0^1 \tau_{n-3}(x) \, dx$ can be found from the eigenvalue asymptotics. Using the terms of \eqref{sml5} at $\tilde \phi^{(n-2)}(x, \lambda)$, we show that $T_{0,0}'(x) = 0$. One can easily show that $T_{0,0}(0) = 0$, so $T_{0,0}(x) \equiv 0$. Consequently, grouping the terms of \eqref{sml5} at $\tilde \phi^{(n-3)}(x, \lambda)$, we obtain \begin{align*} 2 \tau_{n-3} & = 2 \tilde \tau_{n-3} - t_{n,n-3} + T_{0,2} \\ & = 2 \tilde \tau_{n-3} - \sum_{v \in V'} \left( \tfrac{n(n-1)}{2} \phi''_v c_v \tilde g_v + n(n-2) \phi'_v c_v \tilde g_v' + [\tfrac{(n-1)(n-2)}{2}-1] \phi_v c_v \tilde g_v''\right).
\end{align*} By virtue of Proposition~\ref{prop:difL}, $\{ l^2 \xi_l \} \in l_2$. Lemma~\ref{lem:series} implies that the series converges in $L_2(0,1)$ with the zero regularization constants. \smallskip \textsc{Step $s$}. Take the model problem $\tilde{\mathcal L} = \tilde {\mathcal L}^{(s)} := (\tilde F^{(s)}(x), U_0, U_1)$, where $\tilde F^{(s)}(x)$ is the associated matrix for the differential expression $\tilde l^{(s)}_n(y)$ with \begin{equation} \label{tL} \tilde \tau_{\nu} := \tau_{\nu}, \: \nu = \overline{n-s,n-2}, \quad \tilde \tau_{n-s-1} := \int_0^1 \tau_{n-s-1}(x) \, dx, \quad \tilde \tau_{\nu} := 0, \: \nu = \overline{0, n-s-2}. \end{equation} For this model problem, we have $T_{j_1, j_2}(x) \equiv 0$ for all $j_1 + j_2 \le s-2$. Grouping the terms of \eqref{sml5} at $\tilde \phi^{(n-s-1)}(x,\lambda)$, we obtain \begin{align} \nonumber \tau_{n-s-1} & = \tilde \tau_{n-s-1} - (t_{n,n-s-1} + (-1)^{s+1} T_{0,s}) \stackrel{if \: s \: is \: even} {\times} \tfrac{1}{2} \\ \nonumber & = \tilde \tau_{n-s-1} - \sum_{v \in V'} \left(\sum_{r = n-s}^n C_n^r C_{r-1}^{n-s-1} \phi_v^{(n-r)} c_v \tilde g_v^{(r-n+s)} + (-1)^{s+1} \phi_v c_v \tilde g_v^{(s)} \right) \stackrel{if \: s \: is \: even} {\times} \tfrac{1}{2} \\ \label{rectau} & = \tilde \tau_{n-s-1} - \sum_{v \in V'} \left(\sum_{r = n-s}^n C_n^r C_{r-1}^{n-s-1} \phi_v^{[n-r]} c_v \tilde g_v^{[r-n+s]} + (-1)^{s+1} \phi_v c_v \tilde g_v^{[s]} \right) \stackrel{if \: s \: is \: even} {\times} \tfrac{1}{2} \end{align} Proposition~\ref{prop:difL} implies that $\{ l^s \xi_l \} \in l_2$. Therefore, it follows from Lemma~\ref{lem:series} that the series in \eqref{rectau} converges in $L_2(0,1)$. The regularization constants equal zero because $$ \sum_{r = n-s}^n C_n^r C_{r-1}^{n-s-1} (-1)^r + (-1)^{s+1} = 0. 
$$ Note that all functions $\{ \tau_{\nu} \}$ necessary for computation of the quasi-derivatives $\phi_v^{[n-r]}$ in \eqref{rectau} are computed at the previous steps, so the formula \eqref{rectau} can be used for finding $\tau_{n-s-1}$. In terms of the notations \eqref{defeta}, the relation \eqref{rectau} can be written as follows: \begin{equation} \label{rec} \tau_{n-s-1} = \tilde \tau_{n-s-1} - \sum_{(l,k,\varepsilon) \in V} \left(\sum_{r = n-s}^n C_n^r C_{r-1}^{n-s-1} \varphi_{l,k,\varepsilon}^{[n-r]} \tilde \eta_{l,k,\varepsilon}^{[r-n+s]} + (-1)^{s+1} \varphi_{l,k,\varepsilon} \tilde \eta_{l,k,\varepsilon}^{[s]} \right) \stackrel{if \: s \: is \: even} {\times} \tfrac{1}{2} \end{equation} Thus, we obtain the following algorithm for solving Problem~\ref{prob:sd-coef} in the considered case. \begin{alg} \label{alg:even} Suppose that the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ of the problem $\mathcal L = (F(x),U_0,U_1) \in W$ are given. Here $F(x)$ is the matrix associated with the differential expression $\ell_n(y)$, $n$ is even, $\tau_{\nu} \in L_2(0,1)$, $\nu = \overline{0,n-2}$, $U_0 = I$, $U_1 = [\delta_{k,n-j+1}]_{k,j=1}^n$. We have to find $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$. For simplicity, assume that the values $\int_0^1 \tau_{\nu}(x) \, dx$ are known. In fact, they can be found from the eigenvalue asymptotics. For $s = 1, 2, \ldots, n-1$, we find $\tau_{n-s-1}$ implementing the following steps: \begin{enumerate} \item Take the model problem $\tilde {\mathcal L} = \tilde {\mathcal L}^{(s)} = (\tilde F^{(s)}, U_0, U_1)$ induced by the differential expression $\tilde \ell^{(s)}_n(y)$ with the coefficients \eqref{tL}. \item Implement steps 2--6 of Algorithm~\ref{alg:1} to find $\{ \varphi_{l,k,\varepsilon}(x) \}_{(l,k,\varepsilon) \in V}$. 
\item Using the problem $\tilde {\mathcal L}$ and the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$, $\{ \tilde \lambda_0, \tilde{\mathcal N}(\tilde \lambda_0) \}_{\tilde \lambda_0 \in \tilde \Lambda}$, construct the functions $\{ \tilde \eta_{l,k,\varepsilon}(x) \}_{(l,k,\varepsilon) \in V}$ by \eqref{defeta}. \item Construct $\tau_{n-s-1}(x)$ by \eqref{rec}. \end{enumerate} \end{alg} \subsection{Case of even $n$, $\tau_{\nu} \in W_2^{-1}(0,1)$} \label{sec:evenW} Suppose that $n$ is even and $\tau_{\nu} \in W_2^{-1}(0,1)$ in \eqref{defl} for $\nu = \overline{0, n-2}$, that is, $\tau_{\nu} = \sigma_{\nu}'$, where $\sigma_{\nu} \in L_2(0,1)$ and the derivative is understood in the sense of distributions. Put $m := \lfloor n/2 \rfloor$ and define the matrix function $$ Q(x) = [q_{r,j}(x)]_{r,j = 0}^m := \sum_{\nu = 0}^{n-2} (-1)^{\lfloor (\nu - 1)/2\rfloor} \chi_{\nu} \sigma_{\nu}(x), $$ where $\chi_{\nu} := [\chi_{\nu;r,j}]_{r,j = 0}^m$, $$ \chi_{2k;k,k+1} = \chi_{2k;k+1,k} = 1, \quad \chi_{2k+1;k,k+2} = -\chi_{2k+1;k+2,k} = 1, $$ and all the other entries $\chi_{\nu;r,j}$ equal zero. The associated matrix $F(x) = [f_{k,j}(x)]_{k,j = 1}^n$ for $\ell_n(y)$ is defined as follows (see \cite{Bond22} for details): \begin{gather*} f_{m,j} := (-1)^{m+1} q_{j-1,m}, \: j = \overline{1, m}, \qquad f_{k,m+1} := (-1)^{k+1} q_{m,2m-k}, \: k = \overline{m+1, 2m}, \\ f_{k,j} := (-1)^{k+1} q_{j-1,2m-k} + (-1)^{m+k} q_{j-1,m} q_{m,2m-k}, \quad k = \overline{m+1,2m}, \, j = \overline{1,m}, \end{gather*} and $f_{k,j} = \delta_{k,j-1}$ for all the other indices. Clearly, $F \in \mathfrak F_n$. 
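Since the entries of $F(x)$ are more involved in this distributional case, a direct numerical instantiation is a useful sanity check. The sketch below (a hypothetical Python helper, not part of the paper) builds $Q$ and $F$ for even $n$ from sample values of $\sigma_0, \ldots, \sigma_{n-2}$ and tests the case $n = 4$, $m = 2$:

```python
# Sketch: build Q = sum_nu (-1)^{floor((nu-1)/2)} chi_nu sigma_nu and the
# associated matrix F for even n in the distributional case tau_nu = sigma_nu'.
# 0-based indexing: the paper's q_{r,j} is Q[r][j] and f_{k,j} is F[k-1][j-1].

def build_Q_F(sigma, n):
    m = n // 2
    Q = [[0.0] * (m + 1) for _ in range(m + 1)]
    for nu in range(n - 1):
        c = (-1) ** ((nu - 1) // 2)      # floor division matches floor((nu-1)/2)
        k = nu // 2
        if nu % 2 == 0:                  # chi_{2k; k, k+1} = chi_{2k; k+1, k} = 1
            Q[k][k + 1] += c * sigma[nu]
            Q[k + 1][k] += c * sigma[nu]
        else:                            # chi_{2k+1; k, k+2} = -chi_{2k+1; k+2, k} = 1
            Q[k][k + 2] += c * sigma[nu]
            Q[k + 2][k] -= c * sigma[nu]
    # default part f_{k,j} = delta_{k, j-1}, then the three groups of entries
    F = [[1.0 if j == k + 1 else 0.0 for j in range(n)] for k in range(n)]
    for j in range(1, m + 1):            # f_{m,j} = (-1)^{m+1} q_{j-1, m}
        F[m - 1][j - 1] = (-1) ** (m + 1) * Q[j - 1][m]
    for k in range(m + 1, 2 * m + 1):    # f_{k, m+1} = (-1)^{k+1} q_{m, 2m-k}
        F[k - 1][m] = (-1) ** (k + 1) * Q[m][2 * m - k]
        for j in range(1, m + 1):        # mixed entries, k = m+1..2m, j = 1..m
            F[k - 1][j - 1] = ((-1) ** (k + 1) * Q[j - 1][2 * m - k]
                               + (-1) ** (m + k) * Q[j - 1][m] * Q[m][2 * m - k])
    return Q, F

# n = 4 with sample values sigma_0 = 1, sigma_1 = 2, sigma_2 = 3
Q, F = build_Q_F([1.0, 2.0, 3.0], 4)
assert Q == [[0.0, -1.0, 2.0], [-1.0, 0.0, 3.0], [-2.0, 3.0, 0.0]]
assert F[2] == [-7.0, -9.0, 3.0, 1.0]   # [-s0 - s1*s2, -s2^2, s2, 1]
assert F[3] == [-4.0, -5.0, 2.0, 0.0]   # [-s1^2, s0 - s1*s2, s1, 0]
```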
For example, if $n = 4$, then $$ Q(x) = \begin{bmatrix} 0 & -\sigma_0 & \sigma_1 \\ -\sigma_0 & 0 & \sigma_2 \\ -\sigma_1 & \sigma_2 & 0 \end{bmatrix}, \quad F(x) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\sigma_1 & -\sigma_2 & 1 & 0 \\ -\sigma_0 - \sigma_1 \sigma_2 & -\sigma_2^2 & \sigma_2 & 1 \\ -\sigma_1^2 & \sigma_0 - \sigma_1 \sigma_2 & \sigma_1 & 0 \end{bmatrix} $$ Consider Problem~\ref{prob:sd-coef} for $\mathcal L = (F(x), U_0, U_1)$, $U_0 = I$, $U_1 = [\delta_{k,n-j+1}]_{k,j=1}^n$. Let $\tilde{\mathcal L} = (\tilde F(x), U_0, U_1)$, where $\tilde F(x)$ is the associated matrix for the differential expression $\tilde l_n(y)$ with the coefficients $\tilde \tau_{\nu} = \tilde \sigma_{\nu}' \in W_2^{-1}(0,1)$, $\nu = \overline{0,n-2}$. The following uniqueness theorem is proved analogously to Theorem~\ref{thm:uniq3}. \begin{thm} If $\Lambda = \tilde \Lambda$ and $\mathcal N(\lambda_0) = \tilde{\mathcal N}(\lambda_0)$ for all $\lambda_0 \in \Lambda$, then $\sigma_{\nu}(x) = \tilde \sigma_{\nu}(x) + const$ a.e. on $(0,1)$ for $\nu = \overline{0,n-2}$. Thus, the spectral data $\{ \lambda_0, \mathcal N(\lambda_0) \}_{\lambda_0 \in \Lambda}$ uniquely specify $\tau_{\nu} \in W_2^{-1}(0,1)$, $\nu = \overline{0,n-2}$. \end{thm} The functions $\{ \sigma_{\nu} \}_{\nu = 0}^{n-2}$ are specified uniquely up to a constant, so for simplicity we assume that $\int_0^1 \sigma_{\nu}(x) \, dx = 0$, $\nu = \overline{0,n-2}$. Theorems~1.2 and~6.4 of \cite{Bond22-asympt} (see also Example~5.3 in \cite{Bond22-asympt}) readily imply the following proposition for the problems $\mathcal L$ and $\tilde{\mathcal L}$ of the considered form and the sequence $\{ \xi_l \}$ defined by \eqref{defxi}. \begin{prop}[\cite{Bond22-asympt}] \label{prop:difW} Suppose that $\nu_0 \in \{ 1, 2, \ldots, n-1 \}$ and $\sigma_{\nu}(x) = \tilde \sigma_{\nu}(x)$ a.e. on $(0,1)$ for $\nu = \overline{\nu_0, n-2}$. Then $\{ l^{n - \nu_0 - 1} \xi_l \} \in l_2$. 
\end{prop} The algorithm for recovering the coefficients $\{ \tau_{\nu} \}_{\nu = 0}^{n-2}$ from the spectral data is similar to Algorithm~\ref{alg:even}. At \textsc{Step $s$}, we take the model problem $\tilde {\mathcal L} = \tilde {\mathcal L}^{(s)}$ induced by the coefficients $\tilde \sigma_{\nu} := \sigma_{\nu}$, $\nu = \overline{n-s,n-2}$, and $\tilde \sigma_{\nu} := 0$, $\nu = \overline{0, n-s-1}$. Note that the series in \eqref{rectau} has the form $$ a_0 T_{s,0} + a_1 T_{s-1,1} + \ldots + a_s T_{0,s} = (b_0 T_{s-1,0} + b_1 T_{s-2,1} + \ldots + b_{s-1} T_{0,s-1})', $$ where \begin{gather*} a_j := C_n^{s-j} C_{n-s+j-1}^j \stackrel{if \: j = s}{+} (-1)^{s+1}, \quad \sum_{j = 0}^s a_j = 0, \\ b_j := \sum_{i = 0}^j (-1)^{j-i} a_i, \quad j = \overline{0,s-1}. \end{gather*} Using this idea, we derive \begin{equation} \label{rec2} \tau_{n-s-1} = -\frac{d}{dx} \sum_{v \in V'} \left( \sum_{j = 0}^{s-1} b_j \phi_v^{[s-j-1]} c_v \tilde g_v^{[j]}\right) \stackrel{if \: s \: is \: even}{\times} \tfrac{1}{2}. \end{equation} In view of Proposition~\ref{prop:difW}, we have $\{ l^{s-1} \xi_l \} \in l_2$. Hence, by virtue of Lemma~\ref{lem:series}, the series in \eqref{rec2} converges in $L_2(0,1)$ with some regularization constants. Because of the differentiation, we omit these constants. Thus, formula \eqref{rec2} yields a function of the class $W_2^{-1}(0,1)$, and $\sigma_{n-s-1}$ can be found uniquely up to an additive constant. This constant is chosen so that $\int_0^1 \sigma_{n-s-1}(x) \, dx = 0$. Taking $s = 1, 2, \ldots, n-1$, we construct step by step all the coefficients $\tau_{n-2}$, $\tau_{n-3}$, \ldots, $\tau_1$, $\tau_0$. Note that the algorithms of this section are valid for $\tilde{\mathcal L} \in W$. However, the case $\tilde{\mathcal L} \not\in W$ requires only technical modifications due to Remark~\ref{rem:mult}, which do not affect the convergence of the series. \section{Conclusion} \label{sec:concl} Let us briefly summarize the results of this paper.
We have studied the inverse spectral problem which consists in recovering the coefficients $\{ \tau_{\nu}\}_{\nu = 0}^{n-2}$ of the differential expression \eqref{defl} from the spectral data $\{ \lambda_0, \mathcal N(\lambda_0)\}_{\lambda_0 \in \Lambda}$. An approach to the constructive solution of the inverse problem is developed. Our approach can be applied to a wide class of differential expressions $\ell_n(y)$, which admit a regularization in terms of an associated matrix. The inverse problem solution consists of two steps. First, we consider the auxiliary problem of finding the Weyl solutions $\{ \Phi_k(x, \lambda)\}_{k = 1}^n$ by using the spectral data. This problem is reduced to the linear equation \eqref{main} in the Banach space $m$ of bounded infinite sequences. Theorem~\ref{thm:main} on the unique solvability of the main equation \eqref{main} is proved. Second, by using the solution of the main equation, we derive reconstruction formulas for the coefficients $\{ \tau_{\nu}\}_{\nu = 0}^{n-2}$ and investigate the convergence of the resulting series. \smallskip Let us mention the most important \textbf{advantages} of our approach: \begin{enumerate} \item The obtained results can be applied to a wide range of differential operators of arbitrary order with either integrable or distributional coefficients of various classes. \item Our approach does not require self-adjointness. \item Our method is constructive. \item The results of this paper can be used for studying the existence and stability of the inverse problem solution, as well as for developing numerical methods. \end{enumerate} \medskip {\bf Funding.} This work was supported by Grant 21-71-10001 of the Russian Science Foundation, https://rscf.ru/en/project/21-71-10001/. \medskip
\section{Introduction} \subsection{Background and motivation} Renewable energy, as a clean and sustainable energy source, is playing an increasingly important role in power systems \cite{renewstatus}. For example, from the year 2007 to 2017, the global installed capacity of solar panels has increased from 8 Gigawatts to 402 Gigawatts, and the wind power capacity has increased from 94 Gigawatts to 539 Gigawatts\cite{renewstatus}. Compared with traditional larger-scale generators, renewable energy sources can be more spatially distributed across the power system, e.g., at the distribution level near residential consumers\cite{renewstatus}. Due to the distributed nature of renewable energy generations, there has been growing interest in forming local energy markets for renewable energy suppliers and consumers to trade electricity at the distribution level \cite{dismarket}. Such local energy markets will allow consumers to purchase electricity from the least costly sources locally\cite{localmarketover}, and allow suppliers to compete in selling electricity directly to consumers (instead of dealing with the utility companies). However, many types of renewable energy are inherently random, due to factors such as weather conditions that are difficult to predict and control. Under current multi-settlement energy market structures with day-ahead and real-time bidding rules (which are mostly designed for controllable generations)\cite{fundamentals}, renewable energy suppliers face a severe disadvantage in the competition by making forward commitment (in the day-ahead market) that they may not be able to deliver in real time. For example, suppliers are often subject to a penalty cost if their real-time delivery deviates from the commitment in the day-ahead market\cite{dt2014bid_renwable_deviation}. Energy storage has been considered as an important type of flexible resources for renewable energy suppliers to stabilize their outputs\cite{overview1}. 
Investing in storage can potentially improve the renewable energy suppliers' position in these energy markets. However, investing in storage incurs substantial investment costs. Furthermore, the return of storage investment depends on the outcome of the market, which in turn depends on how suppliers with or without storage compete for the demand. Therefore, it remains an open problem regarding \textit{whether competing renewable energy suppliers should invest in energy storage in the market competition and what economic benefits the storage can bring to the suppliers.} \subsection{Main results and contributions} In this paper, we formulate a three-stage game-theoretic model to study the market equilibrium for both storage investment as well as price and quantity bidding of competing renewable energy suppliers. In Stage \uppercase\expandafter{\romannumeral1}, at the beginning of the investment horizon, each supplier decides whether to invest in storage. We formulate a storage-investment game between two suppliers in Stage \uppercase\expandafter{\romannumeral1}, which is based on a bimatrix game to model suppliers' storage-investment decisions for maximizing profits\cite{mangasarian1964equilibrium}. Given the storage-investment decisions in Stage \uppercase\expandafter{\romannumeral1}, competing suppliers decide the bidding price and bidding quantity in the (daily) local energy market in Stage \uppercase\expandafter{\romannumeral2}. We formulate a price-quantity competition game between suppliers using the Bertrand-Edgeworth model \cite{betrand3} (which models price competition with capacity constraints) in Stage \uppercase\expandafter{\romannumeral2}. Given suppliers' bidding strategies, consumers decide the electricity quantity purchased from each supplier in Stage \uppercase\expandafter{\romannumeral3}. 
To the best of our knowledge, our work is the first to study the storage-investment equilibrium between competing renewable energy suppliers in the two-settlement energy market. This problem is quite nontrivial due to the penalty cost on the random generations of a general probability distribution. By studying this three-stage model, we reveal a number of new and surprising insights that are against the prevailing wisdom in the literature on the renewable energy suppliers' revenues in such a two-settlement market \cite{dt2014bid_renwable_deviation,bringwind} and on the economic benefits of storage supplementing in renewable energy sources \cite{renewsto3,connolly2012technical}. \begin{itemize} \item First, \emph{the uncertainty of the renewable generation can be favorable to suppliers}. Note that the prevailing wisdom is that storage investment (especially when the storage cost is low) will improve suppliers' revenue by stabilizing their outputs \cite{renewsto3,connolly2012technical}. In contrast, we find that the opposite may be true when considering market competition. Specifically, without storage, suppliers with random generations always have strictly positive revenues when facing any positive consumer demand. However, if both suppliers invest in storage and stabilize their renewable outputs, their revenues reduce to zero once the consumer demand is below a threshold, which is due to the increased market competition after storage investment. \item Second, \textit{a higher penalty and a higher storage cost can also be favorable to the suppliers}. Note that the common wisdom is that a higher penalty\cite{bringwind} and a higher storage cost\cite{renewsto3} will decrease suppliers' profit. However, when considering market competition, the opposite may be true. With a higher penalty for not meeting the commitment, renewable energy suppliers become more conservative in their bidding quantities, which can decrease market competition and increase their profits. 
Furthermore, a higher storage cost may change one supplier's storage-investment decision, which can benefit the other supplier. \item Third, \emph{the first-mover supplier who invests in energy storage can be at a disadvantage in terms of profit increase}, which is contrary to the first-mover advantage gained by early investment in resources or new technologies \cite{grant2016contemporary}. We find that although investing in storage can increase one supplier's profit, it may benefit him less than his competitor (who does not invest in storage). This is because the later mover becomes a free rider, who may benefit from the changed price equilibrium in the energy market (due to the storage investment of the other supplier) but does not need to bear the investment cost. \end{itemize} In addition to these new and surprising insights, a key technical contribution of our work is the solution to the game-theoretic model for the price-quantity competition, which involves a general penalty cost due to random generations of a general probability distribution. Note that such a price-quantity competition with the Bertrand-Edgeworth model has been studied in the literature under quite different conditions from ours. The works in \cite{betrand1,betrand2,betrand4} studied a general competition between suppliers with strictly convex production costs. They focused on the analysis of pure strategy equilibrium without characterizing the mixed strategy equilibrium. The study in \cite{capacityprice} characterized both pure and mixed strategy equilibrium between suppliers with deterministic supply. However, this work considered zero cost related to the production (i.e., no production cost or possible penalty cost). In electricity markets, the works in \cite{capacitypricerenew} and \cite{bertrandrenew} also used the Bertrand-Edgeworth model to analyze the competition among renewable energy suppliers with random generations.
However, both \cite{capacitypricerenew} and \cite{bertrandrenew} considered the suppliers' electricity-selling competition in a single-settlement energy market, and suppliers deliver random generations in real time. These studies did not consider day-ahead bidding strategies and any deviation penalty cost. In particular, the two-settlement markets with deviation penalty have been essential for ensuring the reliable operation of power systems. Our work is the first to consider the two-settlement energy market, characterizing both pure and mixed strategy equilibrium based on the Bertrand-Edgeworth model. Such a setting is nontrivial due to the penalty cost caused by the suppliers' random production of a general probability distribution. The remainder of the paper is organized as follows. First, we introduce the system model in Section \ref{section:model}, as well as the three-stage game-theoretic formulation between suppliers and consumers in Section \ref{section:stage}. Then, we solve the three-stage problem through backward induction. We first characterize the consumers' optimal purchase decision of Stage \uppercase\expandafter{\romannumeral3} in Section \ref{section:stage3}. Then, we characterize the price-quantity equilibrium of Stage \uppercase\expandafter{\romannumeral2} and the storage-investment equilibrium of Stage \uppercase\expandafter{\romannumeral1} in Sections \ref{section:stage2} and \ref{section:stage1}, respectively. We propose a probability-based method to compute the storage capacity in Section \ref{section:capacity}. Furthermore, in Section \ref{section:extenstion}, we extend some of the theoretical results and insights from the duopoly case to the oligopoly case. Finally, we present the simulation results in Section \ref{section:sim} and conclude this paper in Section \ref{section:con}. \section{System Model}\label{section:model} We consider a local energy market at the distribution level as shown in Figure \ref{fig_sim}. 
Consumers can purchase energy from both the main grid and local renewable energy suppliers. To achieve a positive revenue, the renewable energy suppliers (simply called suppliers in the rest of the paper) need to set their prices no greater than the grid price, and they will compete for the market share. Furthermore, suppliers can choose to invest in energy storage to stabilize their renewable outputs and reduce the uncertainty in their delivery. Next, we will introduce the detailed models of timescales, suppliers and consumers, and characterize their interactions in the two-settlement local energy market. \begin{figure}[ht] \centering \includegraphics[width=2.6in]{./figure/system} \vspace{-1mm} \caption{System structure.} \label{fig_sim} \vspace{-3mm} \end{figure} \subsection{Timescale} We consider two timescales of decision-making. One is the investment horizon $\mathcal{D}\hspace{-1mm}=\hspace{-1mm}\{1,2,...,D_s\}$ of $D_s$ days (e.g., $D_s$ corresponding to the total number of days for the storage investment horizon). Suppliers can decide (once) whether to invest in energy storage at the beginning of the investment horizon. The investment horizon is divided into many operational horizons (many days), and each $d\in \mathcal{D}$ corresponds to the daily operation of the energy market, consisting of many time slots $\mathcal{T}\hspace{-1mm}=\hspace{-1mm}\{1,2,...,T\}$ (e.g., 24 hours of each day). In the day-ahead market on day $d-1$, suppliers decide the electricity price and quantity to consumers for each hour $t\in \mathcal{T}$ of the next day $d\in \mathcal{D}$. We will introduce the market structure in detail later in Section \ref{section:model}.D. \subsection{Suppliers} In Sections \ref{section:stage3}-\ref{section:stage1}, we focus on the duopoly case of two suppliers in our analysis. Later in Section \ref{section:extenstion}, we further generalize to the oligopoly case with more than two suppliers. The reason for focusing on the duopoly case is twofold. 
First, our work focuses on a local energy market that is much smaller than a traditional wholesale energy market. The number of suppliers serving one local area is also expected to be limited \cite{locallimit}, compared with thousands of suppliers in the wholesale energy market \cite{pjmdata}. In such a small local energy market, a few large suppliers may dominate the market\cite{ilas2018renewable}. Second, we consider two suppliers for analytical tractability, which preserves the key insights and effectively captures the impact of competition among suppliers considering the storage investment. For example, we show that in the duopoly case, the uncertainty of renewable generation can be beneficial to suppliers. Such an insight is still valid in the oligopoly case. We denote $\mathcal{I}=\{1,2\}$ as the set of two suppliers. For hour $t$ of day $d$, the renewable output of supplier $i\in \mathcal {I}$ is denoted as a random variable $X_i^{d,t}$, which is bounded in $[0,\bar{X}_i^{d,t}]$. We assume that the random generation $X_i^{d,t}$ has a continuous cumulative distribution function (CDF) $F_i^{d,t}$ with the probability density function (PDF) $f_i^{d,t}$. {The distribution of wind or solar power can be characterized using historical data, which is known to the renewable energy suppliers.\footnote{In the simulations of Section \ref{section:sim}, we use historical data to model the empirical CDF of renewable generations, which is explained in detail in Appendix~\ref{appendix:sim}. }} As renewables usually have extremely low marginal production costs compared with traditional generators, we assume zero marginal production costs for the suppliers \cite{capacitypricerenew,bertrandrenew}. \subsection{Consumers} We consider the aggregate consumer population, and we denote the total consumer demand at hour $t$ of day $d$ as $D^{d,t}>0$. Note that consumers in one local area usually face the same electricity price from the same utility.
Thus, if the local market's electricity price is lower than the grid price, all the consumers will first purchase electricity from local suppliers. From the perspective of suppliers, they only care about the total demand of consumers and how much electricity they can sell to consumers. Furthermore, our work conforms to the current energy market practice that suppliers make decisions in the day-ahead market based on the predicted demand. Thus, for the demand $D^{d,t}$, we consider it as a deterministic (predicted) demand in our model.\footnote{The day-ahead prediction of consumers' aggregated demand can be fairly accurate\cite{sevlian2014loadforecast}. We assume that the demand and supply mismatch due to the demand forecast error will be regulated by the operator in the real-time market. } Since the electricity demand is usually inelastic \cite{fundamentals}, we also assume the following. \begin{ass} Consumers' demand is perfectly inelastic in the electricity price. \end{ass} \noindent Consumers must purchase their demand $D^{d,t}$ either from the main grid (at a fixed unit price $P_g$) or from the local renewable suppliers (with prices to be discussed later).\footnote{We do not consider demand response for the consumers. } \subsection{Two-settlement local energy market} We consider a two-settlement local energy market, which consists of a day-ahead market and a real-time market\cite{fundamentals}. In such an energy market, suppliers have market power and can strategically decide their selling prices.\footnote{This price model is different from the usual practice of the wholesale energy market, where the market usually sets a uniform clearing price for all the suppliers through market clearing \cite{fundamentals}.} Consumers have the flexibility to choose suppliers by comparing prices \cite{localmarketover}. We explain the two-settlement energy market in detail as follows. 
\begin{itemize} \item In the day-ahead market on day $d-1$ (e.g., suppliers' bids are cleared around 12:30pm of day $d-1$, one day ahead of the delivery day $d$ \cite{nord}), supplier $i\in \mathcal{I}$ decides the bidding price $p_i^{d,t}$ and the bidding quantity $y_i^{d,t}$ for each future hour $t\in \mathcal{T}$ of the delivery day $d$. Based on suppliers' bidding strategies, consumers decide the electricity quantity $x_i^{d,t}~(\leq y_i^{d,t})$ purchased from supplier $i$. Supplier $i$ will get the revenue of $p_i^{d,t} x_i^{d,t}$ in the day-ahead market by committing the delivery quantity $x_i^{d,t}$ to consumers. Thus, the day-ahead market is cleared through matching supply and demand. Any excessive demand from the consumers will be satisfied through energy purchase from the main grid. \item In the real-time market at each hour on the next day $d$, if supplier $i$'s actual generation falls short of the committed quantity $x_i^{d,t}$ (i.e., $x_i^{d,t}>X_i^{d,t}$), he needs to pay the penalty $\lambda( x_i^{d,t}-X_i^{d,t})$ in the real-time market, which is proportional to the shortfall with a unit penalty price $\lambda$. For the consumers, although suppliers may not deliver the committed electricity to them, the shortage part can still be satisfied by the system operator using reserve resources. The cost of reserve resources can be covered by the penalty cost on the suppliers. \end{itemize} {Note that the suppliers and consumers make decisions only in the day-ahead market. No active decisions are made in the real-time market, but there may be a penalty cost on the delivery shortage.} To facilitate the analysis, we further make several assumptions about this local energy market as follows. First, for the excessive amount of generation (i.e., $x_i^{d,t}<X_i^{d,t}$), we assume the following. \begin{ass} Suppliers can curtail any excessive renewable energy generation (beyond any specific given level).
\end{ass} \noindent Assumption 2 implies that we do not need to consider the possible penalty or reward on excessive renewable generation in real time.\footnote{There are different policies to deal with the surplus feed-in energy of renewables. In some European countries, the energy markets give rewards to the surplus energy \cite{Chakraborty2018renew}. In the US, some markets deal with the surplus energy using the real-time imbalance price that can be either penalties or rewards \cite{bringwind}. } Second, the local energy market is much smaller than the wholesale energy market. Thus, the suppliers are usually small and hence may focus on serving local consumers. It is less likely for them to trade in the wholesale energy market. This is summarized in the following assumption. \begin{ass} Suppliers only participate in the local energy market and serve local consumers. They do not participate in the wholesale energy market. \end{ass} \noindent Third, for the bidding price ${p_i}$ and penalty price $\lambda$, we impose the following bounds. \begin{ass}\label{a:cap} Each supplier $i$'s bidding price $p_i$ has a cap $\bar{p}$ that satisfies $p_i\leq \bar{p} < P_g$. \end{ass} \begin{ass} \label{a:penalty} The penalty price satisfies $\lambda>\bar{p}$. \end{ass} \noindent Assumption \ref{a:cap} is without loss of generality, since no supplier will bid a price higher than $P_g$; otherwise, consumers will purchase from the main grid.\footnote{We avoid the case $\bar{p}= P_g$ as it may bring ambiguity to the local energy market if the bidding price is equal to the main grid price $P_g$, in which case it is not clear whether consumers purchase energy from the local energy market or from the main grid.} Assumption \ref{a:penalty} ensures that the penalty is high enough to discourage suppliers from bidding quantities beyond their capability. Note that the price cap $\bar{p}$ and the penalty price $\lambda$ are exogenous, fixed parameters in our model.
Next, we introduce how suppliers invest in the energy storage to stabilize their outputs. \subsection{Storage investment} Each supplier decides whether to invest in storage at the beginning of the investment horizon. We denote supplier $i$'s storage-investment decision variable as $\varphi_i$, where $\varphi_i=1$ means investing in storage and $\varphi_i=0$ means not investing. If supplier $i$ invests in storage, we assume the following. \begin{ass} \label{A:storage} The with-storage supplier will utilize the storage to completely smooth out his power output at the mean value of renewable generations. \end{ass} \noindent Thus, supplier $i$ with the renewable generation $X_i^{d,t}$ will charge and discharge his storage\footnote{There can be different ways to deal with the randomness of renewable generations, including the curtailment of renewable energy and the use of additional fossil generators to provide additional energy. It is interesting to combine energy storage with other mechanisms (such as renewable energy curtailment), which we will explore in the future work.} to stabilize the power output at the mean value $\mathbb{ E }[X_i^{d,t}]$. The charge and discharge power $CD_i^{d,t}$ is as follows. \vspace{-2mm} \begin{align} CD_i^{d,t}=X_i^{d,t}-\mathbb{ E }[X_i^{d,t}],\label{eq:chdis} \end{align}\par \vspace{-1mm} \noindent where $CD_i^{d,t}>0$ means charging the storage and $CD_i^{d,t}<0$ means discharging the storage. Note that $\mathbb{ E }_{X_i^{d,t}}[CD_i^{d,t}]=0$, which implies that the long-term average power that a supplier needs to charge into or discharge from his storage is zero. Next, we introduce how to characterize the storage capacity and the storage cost. First, based on the charge and discharge random variable $CD_i^{d,t}$, we propose a simple yet effective probability-based method to characterize the storage capacity $S_i$ using historical data of renewable generation $X_i^{d,t}$.
In particular, we set a probability threshold, and then aim to find a minimum storage capacity $S_i$ such that the energy level in the storage exceeds the capacity with a probability no greater than the probability threshold. We will explain this methodology in Section \ref{section:capacity}. Second, we calculate the storage cost of suppliers over the investment horizon (scaled into one hour) as $C_i=c_i \kappa_i S_i$, where $c_i$ is the unit capacity cost over the investment horizon and $\kappa_i$ is the scaling factor that scales the investment cost over years to one hour. The factor $\kappa_i$ is calculated as follows. We first calculate the present value of an annuity (a series of equal annual cash flows) with the annual interest rate $r_i$ (e.g., $r_i=5\%$), and then we divide the annuity equally to each hour. This leads to the formulation of the factor $\kappa_i$ as follows \cite{NearSitSizeSto}. \begin{align} \kappa_i=\frac{r_i(1+r_i)^{y_i}}{(1+r_i)^{y_i}-1}\cdot \frac{1}{Y_d},\label{eq:factor} \end{align} where $y_i$ is the number of years over the investment horizon (e.g., $y_i=15$ for a Li-ion battery that can last for 15 years), and $Y_d$ is the total number of hours in one year (e.g., $Y_d= 365\times 24$). Therefore, given the parameters $c_i$ and $\kappa_i$ as well as the probability distribution of random generation, the storage capacity and storage cost can be regarded as fixed values for the supplier who invests in storage. Note that a higher storage capacity leads to a higher storage investment cost, which can further affect the storage-investment decisions in the suppliers' competition. Next, in Section \ref{section:stage}, we introduce the three-stage model between suppliers and consumers in detail. \section{Three-stage game-theoretic model}\label{section:stage} We build a three-stage model between suppliers and consumers.
In Stage \uppercase\expandafter{\romannumeral1}, at the beginning of the investment horizon, each supplier decides whether to invest in storage. In the day-ahead energy market, for each hour of the next day, suppliers decide the bidding prices and quantities in Stage \uppercase\expandafter{\romannumeral2}, and consumers make the purchase decision in Stage \uppercase\expandafter{\romannumeral3}. Next, we first introduce the types of renewable-generation distributions used for computing suppliers' electricity-selling revenues over the investment horizon, and then we explain the three stages in detail. \vspace{-2mm} \subsection{Type of renewable-generation distributions} {We cluster the distribution of renewable generation into several types. Note that suppliers' revenues depend on the distribution of renewable generations. We use historical data of renewable energy to model the generation distribution. Specifically, for the renewable generations at hour $t$ of all the days over the investment horizon, we cluster the empirical distribution into $M$ types, e.g., $M=12$ for 12 months considering the seasonal effect. In this case, each type $m\in\mathcal{M}=\{1,2,\ldots,M\}$ occurs with probability $\rho^m=\frac{1}{12}$ in the case of 12 months.\footnote{There can be other types of clustering with unequal probabilities.} We use the data of renewable energy of all days in month $m$ at hour $t$ to approximate the distribution of renewable generation at hour $t$ for all the days in this month $m$. Then, to study the interactions between consumers and suppliers in the local energy market, we will assume that the renewable generation of day $d$ follows a random type (month) $m$, uniformly chosen from $\mathcal{M}$. For notation convenience, we replace all the superscripts $d,t$ with $m,t$. \subsection{The three-stage model} We illustrate the three-stage model between suppliers and consumers in Figure \ref{fig_stage}.
\begin{itemize} \item Stage \uppercase\expandafter{\romannumeral1}: at the beginning of the investment horizon, each supplier $i\in\{1,2\}$ makes the storage-investment decision $\varphi_{i}\in\{0,1\}$. \item Stage \uppercase\expandafter{\romannumeral2}: in the day-ahead market, for each hour $t$ of the next day, each supplier $i$ decides his bidding price $p_i^{m,t}$ and bidding quantity $y_i^{m,t}$ based on suppliers' storage-investment decisions, assuming that the renewable-generation distribution is of month $m$. \item Stage \uppercase\expandafter{\romannumeral3}: in the day-ahead market, for each hour $t$ of the next day, consumers decide the electricity quantity $x_i^{m,t}$ purchased from each supplier $i$ based on each supplier's bidding price and quantity, assuming that the renewable-generation distribution is of month $m$. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=2.9in]{./figure/stagecx} \caption{Three-stage model.} \label{fig_stage} \end{figure} This three-stage problem is a dynamic game. The solution concept of a dynamic game is the Subgame Perfect Equilibrium, which can be derived through backward induction \cite{gamex}. Therefore, in the following, we will explain the three stages in detail in the order of Stage \uppercase\expandafter{\romannumeral3}, Stage \uppercase\expandafter{\romannumeral2}, and Stage \uppercase\expandafter{\romannumeral1}, respectively. \subsubsection{Stage \uppercase\expandafter{\romannumeral3}} At hour $t$ of month $m$, given the bidding price $(p_1^{m,t}, p_2^{m,t})$ and bidding quantity $(y_1^{m,t}, y_2^{m,t})$ of both suppliers in Stage \uppercase\expandafter{\romannumeral2}, consumers decide the electricity quantity $(x_1^{m,t},x_2^{m,t})$ purchased from supplier 1 and supplier 2, respectively. The objective of consumers is to maximize the cost saving of purchasing energy from local suppliers compared with purchasing from the main grid only.
We denote such cost saving as follows: \vspace{-2mm} \begin{align} \pi_c^{m,t}(x_1^{m,t}, x_2^{m,t})=(P_g-p_1^{m,t})x_1^{m,t}+(P_g-p_2^{m,t})x_2^{m,t}. \end{align} \par \vspace{-1mm} \noindent {Recall that we model the collective purchase decision of the entire consumer population together. Consumers must satisfy their demand either from the local energy market or from the main grid (at the fixed price $P_g$). The total cost of satisfying the entire demand by the main grid is fixed. Therefore, minimizing the total energy cost is equivalent to maximizing the cost savings in the local energy market.} We present consumers' {optimal purchase problem} as follows. \noindent \textbf{Stage \uppercase\expandafter{\romannumeral3}: Consumers' Cost Saving Maximization Problem} \vspace{-1mm} \begin{subequations}\label{eq:consumer} \vspace{-1mm} \begin{align} \max_{x_1^{m,t},x_2^{m,t}}~ & (P_g-p_1^{m,t})x_1^{m,t}+(P_g-p_2^{m,t})x_2^{m,t}, \label{sg2:ob}\\ \text{s.t.} ~~&x_1^{m,t}+x_2^{m,t}\leq D^{m,t}, \label{sg2:c1}\\ ~~&0\leq x_i^{m,t} \leq y_i^{m,t},i=1,2. ~\label{sg2:c2} \end{align} \end{subequations}\par \vspace{-1mm} \noindent Constraint \eqref{sg2:c1} states that the total purchased quantity $x_1^{m,t}+x_2^{m,t}$ is no greater than the demand $D^{m,t}$. Constraint \eqref{sg2:c2} states that the quantity purchased from supplier $i$ is no greater than his bidding quantity $y_i^{m,t}$. This problem is a linear program and can be easily solved, which we show in Section \ref{section:stage3}. We denote the optimal solution to Problem \eqref{eq:consumer} as a function of suppliers' bidding prices and quantities $(\bm{p}^{m,t},\bm{y}^{m,t})$, i.e., $x_i^{m,t*}(\bm{p}^{m,t},\bm{y}^{m,t}),~\forall i=1,2$, where $\bm{p}^{m,t}=(p_1^{m,t},p_2^{m,t})$ and $\bm{y}^{m,t}=(y_1^{m,t},y_2^{m,t})$.
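Because the objective weights $P_g-p_i^{m,t}$ are positive and decreasing in the price, the linear program above admits a simple greedy solution: allocate demand to the cheaper supplier first, up to his bid quantity. The following Python sketch illustrates this (the function names and the numbers in the usage note are ours, purely for illustration):

```python
def optimal_purchase(p, y, D):
    """Greedy solution of the consumers' cost-saving LP:
    buy from the cheapest supplier first, up to his bid quantity,
    then from the next supplier; leftover demand goes to the main grid.
    p: bidding prices, y: bidding quantities, D: total demand."""
    x = [0.0] * len(p)
    remaining = D
    # visit suppliers in ascending order of bidding price
    for i in sorted(range(len(p)), key=lambda i: p[i]):
        x[i] = min(remaining, y[i])
        remaining -= x[i]
    return x

def cost_saving(P_g, p, x):
    """Objective value: saving relative to buying everything from the grid."""
    return sum((P_g - p_i) * x_i for p_i, x_i in zip(p, x))
```

For instance, with $P_g=10$, prices $(5,7)$, quantities $(3,4)$, and $D=5$, the cheaper supplier sells his full bid of $3$ and the other sells the remaining $2$, giving a saving of $21$.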
\subsubsection{Stage \uppercase\expandafter{\romannumeral2}} Given the storage-investment decision $\bm{\varphi}=(\varphi_1,\varphi_2)$ in Stage \uppercase\expandafter{\romannumeral1}, both suppliers decide the bidding price $\bm{p}^{m,t}$ and bidding quantity $\bm{y}^{m,t}$ to maximize their revenues in Stage \uppercase\expandafter{\romannumeral2}. We denote supplier $i$'s electricity-selling revenue as $\pi_i^{R,m,t}$, which consists of two parts: the commitment revenue $p_i^{m,t}x_i^{m,t*}(\bm{p}^{m,t},\bm{y}^{m,t})$ from committing the delivery quantity in the day-ahead market, and the penalty cost in the real-time market. Supplier $i$ who invests in storage (i.e., $\varphi_{i}=1$) will be penalized if the committed quantity $x_i^{m,t*}(\bm{p}^{m,t},\bm{y}^{m,t})$ is larger than his stable generation $\mathbb{ E }[{X}_i^{m,t}]$. Supplier $i$ who does not invest in storage (i.e., $\varphi_{i}=0$) will be penalized if the commitment $x_i^{m,t*}(\bm{p}^{m,t},\bm{y}^{m,t})$ is larger than his actual random generation ${X}_i^{m,t}$. Note that the decisions of the two suppliers are coupled with each other. If one supplier bids a lower quantity or a higher price, consumers will likely purchase more electricity from the other supplier. We formulate a price-quantity competition game between suppliers given storage-investment decisions $\bm{\varphi}$ as follows. \textbf{Stage \uppercase\expandafter{\romannumeral2}: Price-quantity competition game} \begin{itemize} \item Players: supplier $i\in\{1,2\}$. \item Strategies: bidding quantity $ y_i^{m,t}\geq 0$ and bidding price $p_i^{m,t}\in [0,\bar{p}]$ of each supplier $i$.
\item Payoffs: supplier $i$'s revenue at hour $t$ of month $m$ is \ \begin{equation} \begin{aligned} &\hspace{-4mm}\pi_i^{R,m,t}\left({p}_i^{m,t},x_i^{m,t*}(\bm{p}^{m,t},\bm{y}^{m,t}),\bm{\varphi}\right)\\&\hspace{-7mm}=\left \{ \begin{aligned} &\hspace{-0mm}p_i^{m,t} x_i^{m,t*}(\bm{p}^{m,t},\bm{y}^{m,t})-\lambda (x_i^{m,t*}(\bm{p}^{m,t},\bm{y}^{m,t})-\mathbb{E}[{X}_i^{m,t}])^+,\\ &\hspace{60mm}~\text{if}~\varphi_i=1;\\ &\hspace{-0mm}p_i^{m,t} x_i^{m,t*}(\bm{p}^{m,t},\bm{y}^{m,t})-\lambda \mathbb{E}_{X_i^{m,t}}\left[(x_i^{m,t*}(\bm{p}^{m,t},\bm{y}^{m,t})- X_i^{m,t})^+\right],\\&\hspace{60mm}~\text{if}~\varphi_i=0,\\ \end{aligned} \right. \end{aligned}\label{eq:revenue} \end{equation} \text{where we define} $(g)^+=\max (g,0).$ \end{itemize} If both suppliers invest in storage (i.e., $\sum_i\varphi_i=2$), the equilibrium has been characterized in \cite{capacityprice}. However, if there is at least one supplier who does not invest in storage (i.e., $\sum_i\varphi_i\leq 1$), characterizing the equilibrium is quite non-trivial due to the penalty cost on the random generation of a general probability distribution. We will discuss how to characterize the equilibrium in detail in Section \ref{section:stage2}. We denote the equilibrium revenue of supplier $i$ as $\pi_i^{RE,m,t}(\bm{\varphi})$. \subsubsection{Stage \uppercase\expandafter{\romannumeral1}} At the beginning of the investment horizon, each supplier decides whether to invest in storage to maximize his expected profit. We denote supplier $i$'s expected profit as $\Pi_i$, which incorporates the expected revenue in the local energy market and the possible storage investment cost. As one supplier varies his storage-investment decisions, it leads to a different price-quantity subgame, which will affect both suppliers' profits. Thus, suppliers' storage-investment decisions are coupled and we formulate a storage-investment game between suppliers as follows. 
\textbf{Stage \uppercase\expandafter{\romannumeral1}: Storage-investment game} \begin{itemize} \item Players: supplier $i\in\{1,2\}$. \item Strategies: whether to invest in storage, $\varphi_i\in \{0,1\}$. \item Payoffs: supplier $i$'s expected profit (scaled into one hour) is \vspace{-2mm} \begin{align} & \Pi_i\left(\bm{\varphi}\right)=\mathbb{ E }_{m,t}[\pi_i^{RE,m,t}(\bm{\varphi}) ]-\varphi_i C_i. \end{align}\par \vspace{-1mm} \end{itemize} This storage-investment game is a $2\times 2$ bimatrix game where each supplier has two strategies. Although the Nash equilibrium of a $2\times 2$ bimatrix game can be easily computed numerically, a closed-form equilibrium does not exist for all subgames of Stage \uppercase\expandafter{\romannumeral2}. It is challenging to analyze the storage-investment equilibrium with respect to the parameters, e.g., the demand and the storage cost, and we discuss it in detail in Section \ref{section:stage1}. We solve this three-stage problem through backward induction. We first analyze the solution in Stage III given the bidding prices and bidding quantities in Stage II. Then, we incorporate the solution in Stage III to analyze the price and quantity equilibrium in Stage II, given (arbitrary) storage-investment decisions in Stage I. Finally, we incorporate the equilibrium of Stage II into Stage I to solve the storage-investment equilibrium. In the following three sections (Sections \ref{section:stage3}, \ref{section:stage2}, and \ref{section:stage1}), we analyze the three stages in the order of Stage \uppercase\expandafter{\romannumeral3}, Stage \uppercase\expandafter{\romannumeral2}, and Stage \uppercase\expandafter{\romannumeral1}, respectively. \section{Solution of Stage \uppercase\expandafter{\romannumeral3}}\label{section:stage3} In this section, we characterize consumers' optimal purchase solution to Problem \eqref{eq:consumer} in Stage \uppercase\expandafter{\romannumeral3}.
We use subscript $i\in \{1,2\} $ to denote supplier $i$ and we use $-i$ to denote the other supplier. Note that in Stage \uppercase\expandafter{\romannumeral3}, the decisions are made independently for each hour of each day. For notation simplicity, we omit the superscript $m,t$ in the corresponding variables and parameters. Given the bidding price $\bm{p}$ and bidding quantity $\bm{y}$ of suppliers, we characterize in Proposition \ref{prop:stage3} consumers' optimal decision $\boldsymbol{x}^*(\boldsymbol{p},\boldsymbol{y}) = (x_i^*(\boldsymbol{p},\boldsymbol{y}),~ i=1,2)$ in Stage \uppercase\expandafter{\romannumeral3}. Recall that we assume that the bidding price in the local energy market is lower than the main grid price (i.e., $\bar{p}< P_g$). \begin{prop}[{optimal purchase $\boldsymbol{x}^*(\boldsymbol{p},\boldsymbol{y})$ in Stage \uppercase\expandafter{\romannumeral3}}]\mbox{}\label{prop:stage3} \begin{itemize} \item If $p_i<p_{-i}$ for some $i\in\{1,2\}$, then ${x}_i^*(\boldsymbol{p},\boldsymbol{y})= \min \left(D, y_i\right)$ and $ {x}_{-i}^*(\boldsymbol{p},\boldsymbol{y})=\min \left(D-\min \left(D, y_i\right),y_{-i}\right).$ \item {If $p_1=p_2$}, then the optimal purchase solution can be any element in the following set. \vspace{-1.5mm} \begin{align*} \mathcal{X}^{opt}\hspace{-0.5mm}=\hspace{-0.5mm}\{\boldsymbol{x}^*(\boldsymbol{p},\boldsymbol{y})\hspace{-0.5mm}: \hspace{-0.5mm}\sum_{i=1}^2 x_i^*(\boldsymbol{p},\boldsymbol{y}) = \min(D, \sum_{i=1}^{2} y_i),\\ 0\leq x_i \leq y_i,~ i=1,2\}. \end{align*} We assume that the demand will be allocated to the suppliers according to the condition either $p_1<p_2$ or $p_2<p_1$. 
Between the two conditions $p_1<p_2$ and $p_2<p_1$, the one that maximizes the two suppliers' total revenue is selected.\footnote{If there is no difference between $p_1<p_2$ and $p_2<p_1$, the demand will be allocated by either $p_1 < p_2$ or $p_2 < p_1$ with equal probabilities.} \end{itemize} \end{prop} Proposition \ref{prop:stage3} shows that consumers will first purchase electricity from the supplier who sets the lower price. If there is remaining demand, they will then purchase from the other supplier. Furthermore, if consumers' demand cannot be fully satisfied by the local suppliers, they will purchase the remaining demand from the main grid. We show the proof of Proposition \ref{prop:stage3} in Appendix.\ref{appendix:proofstage3}. Next, we analyze the strategic bidding of suppliers in Stage \uppercase\expandafter{\romannumeral2}} by incorporating consumers' optimal purchase decisions $\bm{x}^*(\boldsymbol{p},\boldsymbol{y})$. \section{Equilibrium analysis of Stage \uppercase\expandafter{\romannumeral2}}\label{section:stage2} In this section, we characterize the bidding strategies of suppliers in the price-quantity competition subgame of Stage \uppercase\expandafter{\romannumeral2}, given the storage-investment decision in Stage \uppercase\expandafter{\romannumeral1}. Note that, depending on the storage-investment decisions in Stage \uppercase\expandafter{\romannumeral1}, there are three types of subgames: (i) the both-investing-storage ({$\text{S}_1\text{S}_1$}) case, (ii) the mixed-investing-storage ($\text{S}_1\text{S}_0$) case, where one supplier invests in storage and the other does not, and (iii) the neither-investing-storage ($\text{S}_0\text{S}_0$) case. The competition-equilibrium characterization between suppliers is highly non-trivial, due to the general distribution of renewable generations and the penalty cost. In particular, a pure price equilibrium may not exist, which requires the characterization of the mixed price equilibrium.
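To make the penalty term in \eqref{eq:revenue} concrete, the sketch below evaluates a without-storage supplier's expected revenue $p\,x - \lambda\,\mathbb{E}[(x-X)^+]$ for a generation $X$ uniform on $[0,\bar{X}]$ (an illustrative distribution chosen by us, not one assumed by the model); for $x\le\bar{X}$ the expected shortfall is $x^2/(2\bar{X})$, and a Monte Carlo estimate confirms the closed form:

```python
import random

def expected_revenue_no_storage(p, lam, x, X_bar, n=200_000, seed=1):
    """Monte Carlo estimate of p*x - lam*E[(x - X)^+]
    for X ~ Uniform[0, X_bar], where x is the committed quantity."""
    rng = random.Random(seed)
    shortfall = sum(max(x - rng.uniform(0.0, X_bar), 0.0) for _ in range(n)) / n
    return p * x - lam * shortfall

def expected_revenue_closed_form(p, lam, x, X_bar):
    """Same quantity in closed form: E[(x - X)^+] = x^2 / (2 X_bar) for x <= X_bar."""
    return p * x - lam * x * x / (2.0 * X_bar)
```

For example, with $p=6$, $\lambda=12$, $x=5$, and $\bar{X}=10$, the expected penalty is $12\cdot 25/20=15$, so the expected revenue is $30-15=15$.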
Next, we first show that each supplier's equilibrium bidding quantity is actually a weakly dominant strategy that does not depend on the other supplier's decision, based on which we further derive the suppliers' equilibrium bidding prices for each subgame. Note that in Stage \uppercase\expandafter{\romannumeral2}, the decisions are made independently for each hour of each day. For notation simplicity, we omit the superscript $m,t$ in the corresponding variables and parameters. \subsection{Weakly-dominant strategy for bidding quantity} We show that given the bidding price $\boldsymbol{p}$, each supplier has a weakly dominant strategy for the bidding quantity that does not depend on the other supplier's quantity or price choice. This is rather surprising, and it helps reduce the two-dimensional bidding process (involving both quantity and price) to a one-dimensional bidding process (involving only price). Deriving the weakly dominant strategy is nontrivial due to the penalty cost on the renewable generation of a general probability distribution faced by the without-storage supplier. We first define the weakly dominant strategy for the bidding quantity $y_i^*$ in Definition \ref{def:quantity}, which enables a supplier to obtain a revenue at least as high as under any other bidding quantity $y_i$, regardless of the other supplier's decision. \begin{defi}[weakly dominant strategy]\label{def:quantity} Given price $\bm{p}$ and storage-investment decision $\bm{\varphi}$, a bidding quantity ${y}_i^*$ is a weakly dominant strategy for supplier $i$ if \vspace{-2mm} \begin{align*} \pi_i^R(p_i, x_i^*(\bm{p},({y}_i^*,y_{-i})),\bm{\varphi})\geq \pi_i^R(p_i, x_i^*(\bm{p},({y}_i,y_{-i})),\bm{\varphi}), \end{align*}\par \vspace{-2.5mm} \noindent \text{for any} $y_{-i}$ and $y_i\neq y_i^*$. \end{defi} We then characterize suppliers' weakly dominant strategy $\bm{y}^*(\bm{p},\bm{\varphi})$ for the bidding quantity in Theorem \ref{thm:quantity}.
\begin{thm}[weakly dominant strategy for the bidding quantity]\label{thm:quantity} The weakly dominant strategy $\bm{y}^*(\bm{p},\bm{\varphi})$ is given by \vspace{-1mm} \begin{equation} y_i^*(p_i,\varphi_i)=\left \{ \begin{aligned} &\mathbb{ E }[{X}_i],~\text{if}~\varphi_i=1,\\ &F_i^{-1}\left(\frac{p_i}{\lambda}\right),~\text{if}~\varphi_i=0, \end{aligned} \right. \end{equation}\par \vspace{-0.5mm} \noindent where $F_i^{-1}$ is the inverse function of the CDF $F_i$ of supplier $i$'s random generation. \end{thm} Theorem \ref{thm:quantity} shows that a with-storage supplier $i$ (i.e., $\varphi_i=1$) should bid the quantity at the stable production level $\mathbb{ E }[{X}_i]$ (independent of price $\boldsymbol{p}$) so that he can attract the most demand but does not face any penalty risk in the real-time market. For a without-storage supplier $i$ (i.e., $\varphi_i=0$), however, he has to trade off between his bidding quantity and the penalty cost incurred by the random generation. His weakly dominant strategy $y_i^*(p_i,\varphi_i)$ depends on his own bidding price $p_i$, but does not depend on the other supplier $-i$'s bidding price $p_{-i}$ or bidding quantity $y_{-i}$. Note that when price $p_i\hspace{-0.5mm}=\hspace{-0.5mm}0$, the bidding quantity $y_i^*(0,\varphi_i)\hspace{-1mm}=\hspace{-1mm}F_i^{-1}\left(0\right)\hspace{-1mm}=\hspace{-1mm}0$. Furthermore, the bidding quantity $y_i^*(p_i,\varphi_i)$ increases in price $p_i$, which shows that the without-storage supplier $i$ should bid a larger quantity when he bids a higher price. When price $p_i\hspace{-0.7mm}=\hspace{-0.6mm}\bar{p}$, the bidding quantity satisfies $y_i^*(\bar{p},\varphi_i )\hspace{-0.6mm}=\hspace{-0.6mm}F_i^{-1}\left(\frac{\bar{p}}{\lambda}\right)\hspace{-0.6mm}<\hspace{-0.6mm}\bar{X}_i$ (i.e., the maximum generation amount) since we assume $\bar{p}\hspace{-0.5mm}<\hspace{-0.5mm}\lambda$.
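As a numerical illustration of Theorem \ref{thm:quantity}, suppose (purely for this example) that $X_i$ is uniform on $[0,\bar X_i]$, so that $F_i^{-1}(u)=u\,\bar X_i$ and $\mathbb{E}[X_i]=\bar X_i/2$:

```python
def dominant_quantity(p_i, lam, phi_i, mean_X, F_inv):
    """Weakly dominant bidding quantity of Theorem 1:
    the stable output E[X_i] with storage (phi_i = 1),
    and F_i^{-1}(p_i / lam) without storage (phi_i = 0)."""
    return mean_X if phi_i == 1 else F_inv(p_i / lam)

# Illustrative uniform generation on [0, X_bar]: F(x) = x / X_bar
X_bar = 10.0
F_inv = lambda u: u * X_bar

y_with = dominant_quantity(6.0, 12.0, 1, X_bar / 2, F_inv)     # E[X] = 5
y_without = dominant_quantity(6.0, 12.0, 0, X_bar / 2, F_inv)  # (6/12) * 10 = 5
```

Consistent with the remarks above, the without-storage bid vanishes at $p_i=0$ and stays strictly below $\bar X_i$ at $p_i=\bar{p}$ whenever $\bar{p}<\lambda$.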
\subsection{Equilibrium price-bidding strategy: pure equilibrium } We will further analyze the price equilibrium between suppliers based on the weakly dominant strategies for the bidding quantities in Theorem \ref{thm:quantity}. We characterize the price equilibrium with respect to the demand, which affects the competition level between suppliers. For the $\text{S}_1\text{S}_0$ and $\text{S}_0\text{S}_0$ cases, we show that a pure price equilibrium exists when the demand $D$ is higher than a threshold (characterized in the later analysis). However, when the demand $D$ is lower than the threshold, there exists no pure price equilibrium due to the competition for the limited demand. For the $\text{S}_1\text{S}_1$ case, the equilibrium structure is characterized by two thresholds of the demand (characterized in the later analysis). A pure price equilibrium will exist when the demand $D$ is higher than the larger threshold or lower than the smaller threshold. However, when the demand $D$ lies between the two thresholds, there exists no pure price equilibrium. We first define the pure price equilibrium of suppliers in Definition \ref{def:pureprice}, where no supplier can increase his revenue through unilateral price deviation. \begin{defi}[pure price equilibrium]\label{def:pureprice} Given the storage-investment decision $\bm{\varphi}$, a price vector $\bm{p}^*$ is a pure price equilibrium if for both $i=1,2$, \vspace{-2mm} \begin{align} &\pi_i^R\left(p_i^*, x_i^*(\bm{p}^*,\bm{y}^*(\bm{p}^*,\bm{\varphi})),\bm{\varphi}\right)\notag\\&~~~~~~~~~~~~~~\geq \pi_i^R\left(p_i, x_i^*\left((p_i,p_{-i}^*),\bm{y}^*((p_i,p_{-i}^*),\bm{\varphi})\right),\bm{\varphi}\right), \end{align}\par\vspace{-2mm} \noindent \text{for all} ~$0\leq p_i\leq \bar{p}$, where $\bm{y}^*$ denotes the weakly dominant strategies derived in Theorem \ref{thm:quantity}. \end{defi} Then, we show the existence of the pure price equilibrium in Proposition \ref{prop:pureprice}.
\begin{prop}[existence of the pure price equilibrium]\mbox{}\label{prop:pureprice} \begin{itemize} \item Subgames of type $\text{S}_1\text{S}_0$ and type $\text{S}_0\text{S}_0$ (i.e., when $\sum_i\varphi_i<2$): \begin{itemize} \item If $D \geq \sum_i y_i^*(\bar{p},\varphi_i)$, there exists a pure price equilibrium $p_i^*=\bar{p}$, with equilibrium revenue $\pi_i^{RE}=\lambda \int_{0}^{F_i^{-1}(\bar{p}/\lambda)}xf_i(x)dx$, for any $i=1,2$. \item If $0<D<\sum_i y_i^*(\bar{p},\varphi_i)$, there is no pure price equilibrium. \end{itemize} \item Subgame of type $\text{S}_1\text{S}_1$ (i.e., $\sum_i\varphi_i=2$): \begin{itemize} \item If $D \geq \sum_i y_i^*(\bar{p},\varphi_i)$, there exists a pure price equilibrium $p_i^*=\bar{p}$, with equilibrium revenue $\pi_i^{RE}=\bar{p}\mathbb{ E }[X_i]$, for any $i=1,2$. \item If $D\leq \min_i y_i^*(\bar{p},\varphi_i)$, there exists a pure price equilibrium $p_i^*=0$, with equilibrium revenue $\pi_i^{RE}=0$, for any $i=1,2$. \item If $\min_i y_i^*(\bar{p},\varphi_i)<D< \sum_i y_i^*(\bar{p},\varphi_i)$, there is no pure price equilibrium. \end{itemize} \end{itemize} \end{prop} We summarize the existence of pure price equilibrium and the weakly dominant strategy of bidding quantity in Table \ref{table:price}.
\begin{table*}[ht] \normalsize \centering \renewcommand\arraystretch{1.1} \begin{tabular}{|p{1.8cm}<{\centering}|p{3cm}<{\centering}|p{5.5cm}<{\centering}|p{5cm}<{\centering}|} \hline Subgame & Weakly dominant strategy of bidding quantity & Existence of pure price equilibrium & Non-existence of pure price equilibrium\\ \hline $\text{S}_1\text{S}_1$ & $y_i^*(p_i,\varphi_i)$, $\forall i=1,2$ & (a) $D \geq \sum_i y_i^*(\bar{p},\varphi_i)$: $p_i^*=\bar{p}$, $\forall i=1,2$ (b) $D\leq \min_i y_i^*(\bar{p},\varphi_i)$: $p_i^*=0$, $\forall i=1,2$ & $\min_i y_i^*(\bar{p},\varphi_i)<D< \sum_i y_i^*(\bar{p},\varphi_i)$: no pure price equilibrium\\ \hline $\text{S}_1\text{S}_0$, $\text{S}_0\text{S}_0$ & $y_i^*(p_i,\varphi_i)$, $\forall i=1,2$ & $D \geq \sum_i y_i^*(\bar{p},\varphi_i)$: $p_i^*=\bar{p}$, $\forall i=1,2$. & $0<D<\sum_i y_i^*(\bar{p},\varphi_i)$: no pure price equilibrium\\ \hline \end{tabular} \caption{Weakly dominant strategy of bidding quantity as well as the conditions for the existence of pure price equilibrium.} \label{table:price} \vspace{-1mm} \end{table*} According to Proposition \ref{prop:pureprice}, for all the types of subgames, when the demand $D$ is higher than the summation of the suppliers' maximum bidding quantities (i.e., $D\geq \sum_iy_i^*(\bar{p},\varphi_i)$), both suppliers will bid the highest price $\bar{p}$. The reason is that both suppliers' bidding quantities will be fully sold out in this case, and the highest price will give the highest revenue to each supplier. Market competition thus has essentially no impact in this case. However, for the $\text{S}_1\text{S}_0$ and $\text{S}_0\text{S}_0$ subgames, if the demand $D$ is lower than the threshold $\sum_i y_i^*(\bar{p},\varphi_i)$, there exists no pure price equilibrium.
In contrast, for the $\text{S}_1\text{S}_1$ subgame, it is also possible that when the demand $D$ is smaller than a threshold (i.e., $D<\min_i y_i^*(\bar{p},\varphi_i)$), both suppliers have to bid zero price and get zero revenue. The intuition is that the competition level of the $\text{S}_1\text{S}_1$ subgame is higher than that of the $\text{S}_1\text{S}_0$ and $\text{S}_0\text{S}_0$ subgames due to both suppliers' stable outputs, which leads to zero bidding prices if the demand is low. The result of the subgame $\text{S}_1\text{S}_1$ has been proved in \cite{capacityprice}. We present the proofs of subgames of type $\text{S}_1\text{S}_0$ and type $\text{S}_0\text{S}_0$ in Appendix.\ref{appendix:proofstage2}. \subsection{Equilibrium price-bidding strategy: mixed equilibrium } When the demand is at a level where there is no pure price equilibrium, as shown in Proposition \ref{prop:pureprice}, we characterize the mixed price equilibrium between suppliers. First, we define the mixed price equilibrium under the weakly dominant strategy $\bm{y}^*(\bm{p},\bm{\varphi})$ in Definition \ref{def:mix}, where $\mu$ denotes a probability measure\footnote{A probability measure is a real-valued function that assigns a probability to each event in a probability space.} of the price over $[0,\bar{p}]$ \cite{capacityprice}. \begin{defi}[mixed price equilibrium]\label{def:mix} A vector of probability measures $(\mu_1^*, \mu_2^*)$ is a mixed price equilibrium if, for both $i=1,2$, \vspace{-2mm} \begin{align*} &\int_{{[0,\bar{p}]}^{2}}\pi_i^R \left(p_i, {x}_i^*\big((p_i, {p}_{-i}),\bm{y}^*(p_i, p_{-i})\big),\bm{\varphi}\right) d \left(\mu_i^{*}(p_i) \times {\mu}_{-i}^{*}({p}_{-i}) \right)\\ \geq &\int_{{[0,\bar{p}]}^{2}} \pi_i^R \left(p_i, {x}_i^*\big((p_i, {p}_{-i}),\bm{y}^*(p_i, p_{-i})\big),\bm{\varphi}\right) d \left(\mu_i(p_i) \times {\mu}_{-i}^{*}({p}_{-i}) \right), \end{align*} for any measure $\mu_i$.
\end{defi} Definition \ref{def:mix} states that the expected revenue of supplier $i$ cannot be increased if he unilaterally deviates from the mixed equilibrium price strategy $\mu_i^{*}$. Let $F_i^e$ denote the CDF of $\mu_i^*$, i.e., $F_i^e(p_i)=\mu_i^*(\{p\leq p_i\})$. Let $u_i$ and $l_i$ denote the upper support and lower support of the mixed price equilibrium $\mu_i^*$, respectively, i.e., $u_i=\inf\{{p}_i: F_i^e(p_i)=1\}$ and $l_i=\sup\{{p}_i: F_i^e(p_i)=0\}$. To characterize the mixed price equilibrium, we need to fully characterize the CDF $F_i^e$ (including $u_i$ and $l_i$) for each $i \in\{1,2\}$. Then, we show that the mixed price equilibrium exists for each type of subgame and characterize some of its properties in Lemma \ref{lem:mix}, which can be derived following the same method as for the $\text{S}_1\text{S}_1$ case in \cite{capacityprice}. Later, we discuss how to compute the mixed price equilibrium of the $\text{S}_1\text{S}_1$, $\text{S}_1\text{S}_0$, and $\text{S}_0\text{S}_0$ cases, respectively. \begin{lemma}[characterization of the mixed price equilibrium]\label{lem:mix} For any pair $(\varphi_i,\varphi_{-i})$, when the demand $D$ falls in the range where no pure price equilibrium exists as shown in Proposition \ref{prop:pureprice}, the mixed price equilibrium exists and has the following properties.
(i) Both suppliers have the same lower support and the same upper support: \vspace{-2mm} \begin{align} &l_1=l_2=l>0,~u_1=u_2= \bar{p}.\label{eq:4b} \end{align}\par\vspace{-2mm} (ii) The equilibrium electricity-selling revenues $\pi_i^{RE}$ satisfy: \begin{align} &\pi_i^{RE}(\bm{\varphi})=\pi_i^R(l,\min(D,y_i^*(l,\varphi_{i})),\bm{\varphi}).\label{eq:5b} \end{align}\par\vspace{-1mm} (iii) For any $i=1,2$, $F_i ^e$ is strictly increasing over $[l,\bar{p}]$ and has no atoms\footnote{An atom at $p$ means that the left limit of the CDF at $p$ satisfies $F_i^e(p^-)\triangleq \lim_{p'\uparrow p}F_i^e(p')<F_i^e(p)$.} over $[l, \bar{p})$. Also, $F_1^e$ and $F_2^e$ cannot both have an atom at $\bar{p}$. \end{lemma} Lemma \ref{lem:mix} shows that both suppliers' mixed-price-equilibrium strategies have the same support and have continuous CDFs over $[l,\bar{p})$. Based on Lemma \ref{lem:mix}, we next characterize the mixed price equilibrium for the subgames of each type $\text{S}_1\text{S}_1$, $\text{S}_1\text{S}_0$, and $\text{S}_0\text{S}_0$. \subsubsection{$\text{S}_1\text{S}_1$ subgame (i.e., $\sum_i \varphi_i=2$)} As shown in Proposition \ref{prop:pureprice}, when the demand satisfies $\min_i y_i^*(\bar{p},\varphi_i)<D< \sum_i y_i^*(\bar{p},\varphi_i)$, there is no pure price equilibrium. We can characterize a closed-form equilibrium revenue for each supplier at the mixed price equilibrium, which has been proved in \cite{capacityprice}. Furthermore, under the mixed price equilibrium, both suppliers get strictly positive revenues, whereas they may get zero revenues under the pure price equilibrium as shown in Proposition \ref{prop:pureprice}. We show the closed-form equilibrium revenue in Appendix.\ref{appendix:s1s1}. \subsubsection{$\text{S}_1\text{S}_0$ subgame (i.e., $\sum_i\varphi_i=1$)} In the $\text{S}_1\text{S}_0$ subgame, a mixed price equilibrium arises when $0<D<\sum_i y_i^*(\bar{p},\varphi_i)$.
However, unlike in the $\text{S}_1\text{S}_1$ case, we cannot characterize a closed-form equilibrium revenue, due to the penalty cost on the random renewable generation of the without-storage supplier. Instead, we first characterize the CDF of the mixed price equilibrium assuming the lower support $l$ is known in Theorem \ref{thm:mscdf}, and then show how to compute the lower support $l$ in Proposition \ref{thm:mscomp}. We present the proofs in Appendix.\ref{appendix:proofstage2}. \vspace{-1mm} \begin{thm}[$\text{S}_1\text{S}_0$: CDF of the mixed price equilibrium]\label{thm:mscdf} In the $\text{S}_1\text{S}_0$ subgame (i.e., $\sum_i\varphi_i=1$), when $0<D<\sum_i y_i^*(\bar{p},\varphi_i)$, suppose that the common lower support $l_1=l_2=l$ of the mixed price equilibrium is known. Then, the suppliers' mixed equilibrium price strategies are characterized by the following CDFs $F_i^e$, for any $l \leq p< \bar{p}$: \begin{itemize} \item If $\varphi_i=1$, we have \begin{align} &\hspace{-9mm}F_i^e(p)= \frac{ \pi_{-i}^R\left(p,\min\left(y_{-i}^*(p,\varphi_{-i}),D\right),\bm{\varphi}\right)-\pi_{-i}^{RE}(\bm{\varphi})}{\pi_{-i}^R(p,\min\left(y_{-i}^*(p,\varphi_{-i}),D\right),\bm{\varphi})-\pi_{-i}^R(p, (D-\mathbb{ E }[X_i])^+,\bm{\varphi})}.\label{F1} \end{align} \item If $\varphi_i=0$, we have \begin{align} &\hspace{-3mm}F_i^e(p)=\int_l^{p} \frac{\pi_{-i}^{RE}(\bm{\varphi})}{\tilde{p}^2\cdot \min\left(y_i^*(\tilde{p},\varphi_{i}),D\right)-\tilde{p}^2\cdot (D-\mathbb{ E }[X_{-i}])^+}d\tilde{p}.\label{F2} \end{align} \end{itemize} \end{thm} As shown in Theorem \ref{thm:mscdf}, supplier $i$'s mixed strategy $F_i^e$ is coupled with the other supplier's equilibrium revenue $\pi_{-i}^{RE}$. Next, we explain how to compute the lower support $l$. Toward this end, in \eqref{F1} and \eqref{F2}, we replace the equilibrium lower support $l$ by a variable $l_i^\dagger$, and replace $F_i^e({p})$ by $F_i^e({p}\mid l_i^\dagger)$ to emphasize that $F_i^e(p\mid l_i^\dagger)$ is a function of $l_i^\dagger$.
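Since $F_i^e(\bar{p}^-\mid l_i^\dagger)$ is continuous and monotone in the trial lower support $l_i^\dagger$, solving $F_i^e(\bar{p}^-\mid l_i^\dagger)=1$ amounts to a one-dimensional root search. The Python sketch below is a hypothetical illustration of such a search by bisection: \texttt{F\_bar} stands in for $F_i^e(\bar{p}^-\mid \cdot)$ and is assumed continuous and strictly decreasing.

```python
def lower_support(F_bar, p_bar, tol=1e-8):
    """Find the lower support l by solving F_bar(l) = 1 with bisection.

    F_bar(l) is a stand-in for F_i^e(p_bar^- | l): the candidate CDF
    evaluated just below the price cap, viewed as a function of the
    trial lower support l.  It is assumed continuous and strictly
    decreasing in l; if F_bar never reaches 1, no solution exists for
    this supplier and None is returned.
    """
    lo, hi = 0.0, p_bar
    if F_bar(lo) < 1.0:
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F_bar(mid) >= 1.0:
            lo = mid          # F_bar still reaches 1: the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical illustration: F_bar(l) = 2 (1 - l / p_bar) crosses 1 at l = p_bar / 2.
l = lower_support(lambda l: 2.0 * (1.0 - l / 10.0), p_bar=10.0)
```

The two return cases of the sketch mirror the two cases distinguished in the lower-support computation: a supplier for whom the equation has a solution, and one for whom it has none.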
Lemma \ref{lem:mix} (iii) implies that there exists a solution $l_i^\dagger$ to the equation $F_i^e(\bar{p}^-\mid l_i^\dagger)=1$ for at least one of the suppliers. Furthermore, we can prove that $F_i^e(\bar{p}^-\mid l_i^\dagger)$ decreases in $l_i^\dagger$, and hence the solution (in $l_i^\dagger$) to $F_i^e(\bar{p}^-\mid l_i^\dagger)=1$ is unique. Then, we can compute the lower support $l$ as in Proposition \ref{thm:mscomp}. \begin{prop}[{$\text{S}_1\text{S}_0$: computing the lower support $l$}]\label{thm:mscomp} Based on the solutions $l_i^\dagger$ of $F_i^e(\bar{p}^-\mid l_i^\dagger)=1$, $i=1,2$, we consider two cases and compute the lower support $l$ as follows. \begin{enumerate} \item If $F_i^e(\bar{p}^-\mid l_i^\dagger)=1$ has a solution $l_i^\dagger$ for both suppliers, then the equilibrium lower support is $l=\max_i (l_i^\dagger)$. \item If $F_i^e(\bar{p}^-\mid l_i^\dagger)=1$ has a solution $l_i^\dagger$ for only one supplier $i$, then this unique solution $l_i^\dagger$ is the equilibrium lower support $l$. \end{enumerate} \end{prop} Through Theorem \ref{thm:mscdf} and Proposition \ref{thm:mscomp}, we can compute the lower support and the suppliers' equilibrium revenues. Although we cannot obtain a closed-form equilibrium revenue, we can show in Theorem \ref{prop:comparison} that in the $\text{S}_1\text{S}_0$ subgame, if the two suppliers' random generations have the same mean value, then the with-storage supplier's equilibrium revenue is always strictly higher than that of the without-storage supplier. \begin{thm}[$\text{S}_1\text{S}_0$: revenue comparison]\label{prop:comparison} If $\varphi_i=1$, $\varphi_{-i}=0$ and $\mathbb{E}[X_i]=\mathbb{E}[X_{-i}]$, then $\pi_i^{RE}(\bm{\varphi})>\pi_{-i}^{RE}(\bm{\varphi})$ at both the pure and the mixed price equilibrium.
Particularly, if $X_{-i}$ follows a uniform distribution over $[0,\bar{X}_{-i}]$, we have \begin{equation} \frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}\geq \left \{ \begin{aligned} &2,~\text{if}~0<D < \mathbb{E}[X_i],\\ &4,~\text{if}~D = \mathbb{E}[X_i],\\ &\frac{\lambda}{\bar{p}},~\text{if}~D > \mathbb{E}[X_i].\\ \end{aligned} \right. \end{equation} \end{thm} Theorem \ref{prop:comparison} shows the dominance of the with-storage supplier in the $\text{S}_1\text{S}_0$ subgame, whose electricity-selling revenue can be much higher than that of the without-storage supplier. The intuition is that the random generation puts the without-storage supplier at a disadvantage in the market (due to the penalty cost). This suggests potential economic benefits of storage investment for the supplier.\footnote{Note that Theorem \ref{prop:comparison} only compares the revenues of the two suppliers. When considering the storage investment cost in Stage \uppercase\expandafter{\romannumeral1} and comparing the suppliers' profits, we will have some surprising results shown in Section \ref{section:stage1} and Section \ref{section:sim}.} However, investing in storage does not always bring benefits. If both suppliers invest in storage, it may reduce both suppliers' revenues compared with the case where at least one supplier does not invest in storage. We discuss this later in Proposition \ref{prop:positiverev}. \subsubsection{$\text{S}_0\text{S}_0$ subgame (i.e., $\sum_i\varphi_i=0$)} In the $\text{S}_0\text{S}_0$ case, neither supplier invests in storage, and both face the penalty cost. When $0<D<\sum_i y_i^*(\bar{p},\varphi_i)$, for the mixed price equilibrium, we can neither obtain the closed-form equilibrium revenue as in the $\text{S}_1\text{S}_1$ case nor obtain the equilibrium strategy CDF as in Theorem \ref{thm:mscdf} of the $\text{S}_1\text{S}_0$ case.
Note that in the $\text{S}_1\text{S}_1$ and $\text{S}_1\text{S}_0$ subgames, at least one supplier is not subject to the penalty cost, which makes it possible to characterize the equilibrium strategy CDF or even a closed-form equilibrium revenue. In this $\text{S}_0\text{S}_0$ subgame, we will characterize a range for the lower support $l$ in Proposition \ref{prop:ns}. \begin{prop}[$\text{S}_0\text{S}_0$: lower support]\label{prop:ns} In the $\text{S}_0\text{S}_0$ subgame (i.e., $\sum_i\varphi_i=0$), when $0<D<\sum_i y_i^*(\bar{p},\varphi_i)$, the lower support $l$ of the mixed price equilibrium satisfies \vspace{-1mm} \begin{align} \min_i ~y_i^*(l,\varphi_i)<D\leq \sum_i y_i^*(l,\varphi_i) ~\text{and} ~l <\bar{p}. \label{eq:nslower} \end{align} \end{prop} The bidding quantity $y_i^*(l,\varphi_i)$ is the minimal bidding quantity of supplier $i$ when he uses the mixed price strategy. Proposition \ref{prop:ns} shows that this minimal bidding quantity can be neither too low nor too high for either supplier. Note that the mixed price equilibrium has a continuous CDF over $[l,\bar{p})$, as shown in Lemma \ref{lem:mix}, but we cannot derive it in closed form. To gain a better understanding of the CDF, we discretize the price to approximate the original continuous price set, and compute the mixed equilibrium for the discrete price set. The details are shown in Appendix.\ref{appendix:s0s0}. \subsection{Strictly positive revenue in the $\text{S}_1\text{S}_0$ and $\text{S}_0\text{S}_0$ subgames} Analyzing the equilibrium revenues of the three types of subgames, we show in Proposition \ref{prop:positiverev} that in the $\text{S}_1\text{S}_0$ and $\text{S}_0\text{S}_0$ subgames, both suppliers always get strictly positive revenues.
\begin{prop}[strictly positive revenue with randomness]\label{prop:positiverev} In the $\text{S}_1\text{S}_0$ and $\text{S}_0\text{S}_0$ subgames, each supplier $i$ always gets a strictly positive revenue at the (pure or mixed) equilibrium, i.e., $\pi_i^{RE}>0$. \end{prop} This result is counter-intuitive for the following reason. Recall that in the $\text{S}_1\text{S}_1$ subgame, both suppliers can get zero revenue if the demand is below a threshold, as shown in Proposition \ref{prop:pureprice}. The common wisdom is that when the generation is random, the revenues of suppliers tend to be low due to the penalty cost. In contrast, Proposition \ref{prop:positiverev} shows that the suppliers' revenues are always strictly positive when the generation is random. Thus, the randomness can in fact be beneficial. The underlying reason lies in market competition: the randomness makes suppliers bid more conservatively in their bidding quantities, which leads to less fierce market competition and thus increases their revenues. \section{Equilibrium analysis of Stage \uppercase\expandafter{\romannumeral1}} \label{section:stage1} In Stage \uppercase\expandafter{\romannumeral1}, each supplier $i$ has two strategies: (i) investing in storage, i.e., $\varphi_i=1$, and (ii) not investing in storage, i.e., $\varphi_i=0$, which leads to a bimatrix game. For this bimatrix game, we can analyze the equilibrium strategy by simply comparing the profits for each strategy pair of the two suppliers. Note that while the electricity-selling revenue is given by the results of Section \ref{section:stage2}, the profit also depends on the storage cost. To calculate the storage investment cost, we also propose a probability-based method using real data to characterize the storage capacity of the with-storage supplier in Section \ref{section:capacity}.
Each supplier's profit can be calculated by taking the expectation of the equilibrium revenue in the local energy market at each hour, and subtracting the storage investment cost over the investment horizon (scaled to one hour). Note that the suppliers' storage-investment strategy pairs $\bm{\varphi}=(\varphi_1,\varphi_2)$ lead to four possible strategy pairs and three types of subgames: the $\text{S}_1\text{S}_1$ subgame (i.e., $\sum_i\varphi_i=2$), the $\text{S}_1\text{S}_0$ subgame (i.e., $\sum_i\varphi_i=1$, covering the two cases $(\varphi_1,\varphi_2)=(1,0)$ and $(\varphi_1,\varphi_2)=(0,1)$), and the $\text{S}_0\text{S}_0$ subgame (i.e., $\sum_i\varphi_i=0$). Taking the expectation of the equilibrium revenue over all the hours in the investment horizon, we denote supplier $i$'s equilibrium revenue in the $\text{S}_1\text{S}_1$ and $\text{S}_0\text{S}_0$ subgames as $\pi_i^{\text{S}_1\text{S}_1}$ and $\pi_i^{\text{S}_0\text{S}_0}$, respectively. For the $\text{S}_1\text{S}_0$ subgame, we denote the with-storage and without-storage supplier $i$'s equilibrium revenue as $\pi_i^{\text{S}_1\text{S}_0|\text{Y}}$ and $\pi_i^{\text{S}_1\text{S}_0|\text{N}}$, respectively. For illustration, we list the profits for all four strategy pairs in Table \ref{tab:profit}.
\begin{table*}[ht] \normalsize \centering \renewcommand\arraystretch{1.1} \begin{tabular}{|p{4cm}<{\centering}|p{4cm}<{\centering}|p{4cm}<{\centering}|} \hline & Supplier 2: invest & Supplier 2: not invest\\ \hline Supplier 1: invest& $(\pi_1^{\text{S}_1\text{S}_1}-C_1, \pi_2^{\text{S}_1\text{S}_1}-C_2)$&$(\pi_1^{\text{S}_1\text{S}_0|\text{Y}}-C_1, \pi_2^{\text{S}_1\text{S}_0|\text{N}})$\\ \hline Supplier 1: not invest& $(\pi_1^{\text{S}_1\text{S}_0|\text{N}}, \pi_2^{\text{S}_1\text{S}_0|\text{Y}}-C_2)$&$(\pi_1^{\text{S}_0\text{S}_0}, \pi_2^{\text{S}_0\text{S}_0})$\\ \hline \end{tabular} \vspace{-1mm} \caption{Suppliers' profits under different $\bm{\varphi}$.} \label{tab:profit} \end{table*} Next, we will first derive the conditions for each storage-investment strategy pair to be an equilibrium. Then, we analyze the equilibrium with respect to the parameters of storage cost and demand. Finally, we show that both suppliers can get strictly positive profits in this storage-investment game. \subsection{Conditions of pure storage-investment equilibrium} We will characterize the conditions on the storage cost and the subgame equilibrium revenues under which each strategy pair becomes an equilibrium. First, we define the pure storage-investment equilibrium in Definition \ref{defi:stoeq}, which states that neither supplier has an incentive to deviate from his storage-investment decision at the equilibrium. \begin{defi}[pure storage-investment equilibrium]\label{defi:stoeq} A storage-investment vector $\bm{\varphi}^*$ is a pure storage-investment equilibrium if the profit satisfies $\Pi_i\left(\varphi_i^*,\varphi_{-i}^* \right)\geq ~\Pi_i\left(\varphi_i,\varphi_{-i}^* \right),$ \text{for any} ~$\varphi_{i}\neq \varphi_i^*$, and any $i=1,2$.
\end{defi} Based on Definition \ref{defi:stoeq}, we characterize the conditions on the storage cost and the subgame equilibrium revenues for the pure storage-investment equilibrium in Theorem \ref{thm:stoeq}, the proof of which is presented in Appendix.\ref{appendix:proofstage1}. \begin{thm}[conditions of pure storage-investment equilibrium]\mbox{}\label{thm:stoeq} \begin{itemize} \item $\text{S}_0\text{S}_0$ case is an equilibrium if $C_i\in [\pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0},+\infty)$, for both $i=1,2$. \item $\text{S}_1\text{S}_0$ case is an equilibrium (where $\varphi_i=1$ and $\varphi_{-i}=0$ ) if $C_i\in [0, \pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0}]$ and $C_{-i}\in [\pi_{-i}^{\text{S}_1\text{S}_1}-\pi_{-i}^{\text{S}_1\text{S}_0|\text{N}},+\infty)$. \item $\text{S}_1\text{S}_1$ case is an equilibrium if $C_i\in [0, \pi_i^{\text{S}_1\text{S}_1}-\pi_i^{\text{S}_1\text{S}_0|\text{N}}]$, for both $i=1,2$. \end{itemize} If $C_i$ satisfies none of the conditions above, there exists no pure storage-investment equilibrium.\footnote{Note that if $\pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0}<0$ or $\pi_i^{\text{S}_1\text{S}_1}-\pi_i^{\text{S}_1\text{S}_0|\text{N}}<0$, then the set $ [0, \pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0}]=\emptyset$ or $[0, \pi_i^{\text{S}_1\text{S}_1}-\pi_i^{\text{S}_1\text{S}_0|\text{N}}]=\emptyset$. This means that the condition $C_i\in [0, \pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0}]$ or $C_i\in [0, \pi_i^{\text{S}_1\text{S}_1}-\pi_i^{\text{S}_1\text{S}_0|\text{N}}]$ cannot be satisfied. } \end{thm} Theorem \ref{thm:stoeq} shows that the storage-investment equilibrium depends on the comparison between the storage cost and the revenue difference between the cases $\text{S}_1\text{S}_0$ and $\text{S}_0\text{S}_0$, or the cases $\text{S}_1\text{S}_0$ and $\text{S}_1\text{S}_1$.
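The case analysis of Theorem \ref{thm:stoeq} reduces to comparing each supplier's storage cost with two revenue gaps. A minimal Python sketch (hypothetical helper names; the revenue inputs would come from the Stage \uppercase\expandafter{\romannumeral2} analysis) that enumerates the pure equilibria:

```python
def storage_equilibria(C, r):
    """Enumerate the pure storage-investment equilibria of Theorem 4.

    C : dict {1: C_1, 2: C_2} of storage investment costs
    r : dict of expected subgame revenues with keys
        'S1S1', 'S0S0'       -> {i: revenue of supplier i}
        'S1S0_Y', 'S1S0_N'   -> {i: revenue with / without storage}

    Returns a list of equilibrium profiles (phi_1, phi_2); an empty
    list means no pure storage-investment equilibrium exists.
    """
    eq = []
    # S0S0: investing must not pay off for either supplier
    if all(C[i] >= r['S1S0_Y'][i] - r['S0S0'][i] for i in (1, 2)):
        eq.append((0, 0))
    # S1S1: dropping storage must not pay off for either supplier
    if all(C[i] <= r['S1S1'][i] - r['S1S0_N'][i] for i in (1, 2)):
        eq.append((1, 1))
    # S1S0: supplier i keeps storage, supplier j stays out
    for i, j in ((1, 2), (2, 1)):
        if (C[i] <= r['S1S0_Y'][i] - r['S0S0'][i]
                and C[j] >= r['S1S1'][j] - r['S1S0_N'][j]):
            eq.append((1, 0) if i == 1 else (0, 1))
    return eq

# Hypothetical symmetric revenues: investing gains 3 versus S0S0, 1 within S1S1.
r = {'S1S1': {1: 5, 2: 5}, 'S0S0': {1: 3, 2: 3},
     'S1S0_Y': {1: 6, 2: 6}, 'S1S0_N': {1: 4, 2: 4}}
```

With these hypothetical numbers, a low cost yields $\text{S}_1\text{S}_1$, a high cost yields $\text{S}_0\text{S}_0$, and an intermediate cost yields the two asymmetric $\text{S}_1\text{S}_0$ equilibria, matching the three regions discussed below.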
Also, Theorem \ref{thm:stoeq} implies that a lower storage cost will incentivize the supplier to invest in storage. According to Theorem \ref{thm:stoeq}, given the storage cost and the expected equilibrium revenue of each subgame, we can characterize the pure equilibrium for nearly all values of $C_i$. However, if the storage cost $C_i$ satisfies none of the conditions in Theorem \ref{thm:stoeq}, there will be no pure storage-investment equilibrium. Note that when there is no pure storage-investment equilibrium, we can always characterize the mixed equilibrium, as the game in Stage \uppercase\expandafter{\romannumeral1} is a finite game \cite{gamex}. We show how to compute the mixed equilibrium in Appendix.\ref{appendix:proofstage1}. Since we cannot characterize closed-form equilibrium revenues for the $\text{S}_1\text{S}_0$ and $\text{S}_0\text{S}_0$ subgames, it remains challenging to characterize the storage-investment equilibrium with respect to the system parameters, e.g., the storage cost and demand. In the next subsection, we will focus on deriving insights into the storage-investment equilibrium in some special and practically interesting cases. \begin{figure}[t] \centering {\includegraphics[width=2.4in]{./figure/split_lamb2p}} \vspace{-1mm} \caption{\small Equilibrium split with storage cost and demand at $\lambda=1.5$ HKD/kWh.} \vspace{-3mm} \label{fig:subfig:lam1} \end{figure} \vspace{-1mm} \subsection{Impact of storage cost and demand on storage-investment equilibrium} We analyze the impact of storage cost and demand on the storage-investment equilibrium and obtain analytical results for the cases where: (i) the storage cost $C_i$ is sufficiently large; (ii) the demand $D^{m,t}$ is sufficiently large or small. We present all the proofs in Appendix.\ref{appendix:proofstage1}.
To better illustrate the storage-investment equilibrium, we show one simulation result of the equilibrium split (i.e., the storage-investment equilibrium with respect to parameters such as the demand and the storage cost) in Figure \ref{fig:subfig:lam1}; the details of the simulation setup are presented in Section \ref{section:sim}. In this simulation, for illustration purposes, we consider the same demand $D$ for any hour $t$ of any month $m$. We also consider two homogeneous suppliers (with the same storage cost, the same renewable energy capacity, and the same renewable energy distribution) to reveal the impact on storage-investment decisions.\footnote{We can prove that a pure Nash equilibrium of storage investment always exists in this homogeneous case. However, for the heterogeneous case, we cannot theoretically prove that the pure Nash equilibrium always exists. In the Appendix, we simulate an example with two heterogeneous suppliers (with different capacities of renewables) and show the storage-investment equilibrium in such a heterogeneous case.} In Figure \ref{fig:subfig:lam1} (where the penalty price is $\lambda=1.5$ Hong Kong dollars (HKD) per kWh), with respect to the demand and storage cost, the storage-investment equilibrium is divided into three regions: Region \uppercase\expandafter{\romannumeral1} of $\text{S}_1\text{S}_1$ (the left side of the red curve), Region \uppercase\expandafter{\romannumeral2} of $\text{S}_1\text{S}_0$ (between the red curve and the blue curve), and Region \uppercase\expandafter{\romannumeral3} of $\text{S}_0\text{S}_0$ (the right side of the blue curve). First, for the impact of the storage cost, a higher storage cost will discourage suppliers from investing in storage, as implied by Theorem \ref{thm:stoeq}. We will further show that when the storage cost is higher than a threshold, neither supplier will invest in storage, regardless of the demand or the penalty price.
However, counter-intuitively, we also find that even with a zero storage cost, it is not an equilibrium for both suppliers to invest in storage once the demand is lower than a certain threshold. As shown in Figure \ref{fig:subfig:lam1}, when the storage cost is larger than a threshold, i.e., $C>0.86\times 10^3$ HKD, the $\text{S}_0\text{S}_0$ case will be the only equilibrium (independent of the demand $D$) and neither supplier invests in storage. We show this property in Proposition \ref{prop:stocost}. The reason is that the benefit from investing in storage is bounded; when the storage cost exceeds a threshold corresponding to this bounded benefit, neither supplier will choose to invest in storage. \begin{prop}\label{prop:stocost} There exists a threshold $C_i^{\text{S}_0\text{S}_0}$ such that if the storage cost satisfies $C_i>C_i^{\text{S}_0\text{S}_0}$ for both $i=1,2$, the $\text{S}_0\text{S}_0$ case will be the unique pure storage-investment equilibrium. \end{prop} However, as shown in Figure \ref{fig:subfig:lam1}, when the demand is smaller than a certain threshold, i.e., $D<2.8$ MW, the $\text{S}_1\text{S}_1$ case cannot be a pure equilibrium even when the storage cost is $C=0$. We show this property in Proposition \ref{prop:stodemandl}. The reason is that when the demand is smaller than a certain threshold, in the $\text{S}_1\text{S}_1$ case, both suppliers can only get zero revenues (as shown in Proposition \ref{prop:pureprice}) due to the competition. Thus, if the $\text{S}_1\text{S}_1$ case were the storage-investment state where both suppliers invest in storage, one supplier could always deviate to not investing in storage, which brings him a strictly positive profit as implied by Proposition \ref{prop:positiverev}. \begin{prop}\label{prop:stodemandl} If the demand satisfies $0<D^{m,t}\leq \min_i \mathbb{ E }[X_i^{m,t}]$ for any $t$ and $m$, the $\text{S}_1\text{S}_1$ case cannot be the equilibrium.
\end{prop} \vspace{-1mm} Second, for the impact of demand, we have already shown in Proposition \ref{prop:stodemandl} that at a sufficiently low demand, the $\text{S}_1\text{S}_1$ case cannot be the equilibrium. We will further show that if the demand is higher than a certain threshold, each supplier has a dominant strategy of whether to invest in storage based on his storage cost, which does not depend on the other supplier's decision. For example, at $D>11$ MW in Figure \ref{fig:subfig:lam1}, for these two homogeneous suppliers, if the storage cost is higher than a threshold, i.e., $C>0.63\times 10^3$ HKD, each supplier will not invest in storage (i.e., $\text{S}_0\text{S}_0$); otherwise, each supplier will invest (i.e., $\text{S}_1\text{S}_1$). We show this property in Proposition \ref{prop:stodemandh}. The reason is that if the demand is large enough, both suppliers can bid the highest price and sell out the maximum bidding quantity. Thus, there is no competition between the suppliers, and they make storage-investment decisions based on their own storage costs. \vspace{-1mm} \begin{prop}\label{prop:stodemandh} There exist $D^{m,t,th}>0$ and $C_i^\text{th}>0$ such that when the demand satisfies $D^{m,t}\geq D^{m,t,th}$ for any $t$ and $m$, supplier $i$ has the dominant strategy $\varphi_{i}^*$ as follows.\footnote{ We characterize the closed-form thresholds $D^{m,t,th}>0$ and $C_i^\text{th}>0$ in Appendix.\ref{appendix:proofstage1}.} \begin{equation} \varphi_{i}^*= \left \{ \begin{aligned} &1,~\text{if}~\text{the~storage~cost}~ C_i\leq C_i^\text{th},\\ &0, ~\text{if}~\text{the~storage~cost}~ C_i> C_i^\text{th}. \end{aligned} \right. \end{equation} \end{prop} \subsection{Strictly positive profits of suppliers} We show that in the suppliers' competition, both suppliers obtain strictly positive profits even when facing the cost of storage investment.
\vspace{-2mm} \begin{prop}[strictly positive profit]\label{prop:stoprofit} Both suppliers will get strictly positive profits at the storage-investment equilibrium. \end{prop} This proposition again shows the benefit of the uncertainty of renewable generation, similar to Proposition \ref{prop:positiverev}. Recall that if both suppliers have stable outputs, they may get zero revenue (as shown in Proposition \ref{prop:pureprice}) and thus a negative profit once the storage cost is taken into account. However, with random generation, both suppliers get strictly positive profits at the storage-investment equilibrium even when facing the storage cost. The reasoning is as follows. In the $\text{S}_0\text{S}_0$ case or the $\text{S}_1\text{S}_0$ case, the without-storage supplier always gets a strictly positive revenue (as shown in Proposition \ref{prop:positiverev}) while incurring zero storage cost. In the $\text{S}_1\text{S}_0$ case or the $\text{S}_1\text{S}_1$ case, if the with-storage supplier got a non-positive profit, he could always deviate to not investing in storage; this deviation provides him with a strictly positive profit, which implies that each supplier always gets a strictly positive profit. \section{Characterization of storage capacity}\label{section:capacity} We propose a probability-based method using historical data of renewable generation to compute the storage capacity. Note that each supplier charges and discharges the storage to maintain his output at the mean value of the random renewable generation, as shown in \eqref{eq:chdis}.\footnote{It would be interesting to size the storage capacity while allowing the renewable output not to be completely smoothed out.
However, it is quite challenging to characterize such an equilibrium storage capacity in closed form, which we leave for future work.} Therefore, the charge and discharge amounts are also random variables, and we choose the storage capacity such that the energy level stays within the capacity limits with a targeted probability. In this part, we focus on storage with 100\% charge and discharge efficiency and no degradation cost. In Appendix.\ref{appendix:stotage}, we show that a lower charge/discharge efficiency and the consideration of degradation cost increase the total storage cost of a supplier, which further affects the storage-investment equilibrium. To begin with, we set a probability target $\alpha$ and aim to find a storage capacity $S_i$ such that the energy level in the storage violates the capacity limits with probability no greater than $\alpha$. Specifically, the with-storage supplier $i$ charges and discharges the storage by the amount $CD_i^{m,t}$ at hour $t$ of month $m$, as shown in \eqref{eq:chdis}. We assume that the initial energy level of the storage is fixed for all months and denote it by $S^l_i$. Note that the energy level of the storage is the running sum of the charges and discharges over time, and is constrained by the storage capacity. Starting from the initial energy level $S^l_i$, the probabilities that the energy level falls below the minimum level (i.e., zero) or exceeds the maximum capacity (i.e., $S_i$) of the storage in a day of month $m$ are $\max_{t'\in\mathcal{T}}\text{Pr}(\sum_{t=1}^{t'} CD_i^{m,t}+S^l_i<0)$ and $\max_{t'\in\mathcal{T}} \text{Pr}(\sum_{t=1}^{t'} CD_i^{m,t}+S^l_i> S_i)$, respectively.
Considering all months $m$, we aim to choose the storage capacity $S_i$ so that the following hold: \begin{align} &\mathbb{ E }_m\big[\max_{t'\in\mathcal{T}}\text{Pr}(\sum_{t=1}^{t'} CD_i^{m,t}+S^l_i<0)\big]\leq \alpha,\label{eq:pl}\\ &\mathbb{ E }_m\big[\max_{t'\in\mathcal{T}} \text{Pr}(\sum_{t=1}^{t'} CD_i^{m,t}+S^l_i> S_i)\big]\leq \alpha.\label{eq:pu} \end{align} Then, we describe how to use the historical data \cite{hkob} to compute a storage capacity that satisfies the probability thresholds in \eqref{eq:pl} and \eqref{eq:pu}. We first characterize an upper bound on the probability that the energy level exceeds the given storage capacity in terms of the random variables $CD_i^{m,t}$, and then propose Algorithm \ref{algorithm:sapacity} to compute the storage capacity required to satisfy \eqref{eq:pl} and \eqref{eq:pu}. First, given the underflow capacity $S_i^l>0$ and the overflow capacity $S_i^u\triangleq S_i-S_i^l>0$, we characterize an upper bound $Pr^{l,m}(S_i^l)$ for $ \max_{t'}\text{Pr}(\sum_{t=1}^{t'} CD_i^{m,t}+S_i^l<0)$ and an upper bound $Pr^{u,m}(S_i^u)$ for $\max_{t'}\text{Pr}(\sum_{t=1}^{t'} CD_i^{m,t}+S_i^l> S_i)$, respectively. We derive these upper bounds by applying the Markov inequality \cite{concentration} to the exponentiated sums (i.e., in Chernoff form), as shown in Proposition \ref{prop:bound}. \begin{prop}[Markov-inequality-based upper bound] \label{prop:bound} Given $S_i^l>0$ and $S_i^u>0$, the Markov-inequality-based upper bounds are as follows. \begin{itemize} \item For the upper bound $Pr^{l,m}(S_i^l)$: \begin{align} Pr^{l,m}(S_i^l)\triangleq\max_{t'} \min_{s>0} B^l(s), \label{eq:sl} \end{align} where $B^l(s)\triangleq e ^ { - s S_i^l} \cdot \mathbb { E } \left[ e ^ { s \sum_{t=1}^{t'} -CD_i^{m,t} }\right] $. \item For the upper bound $Pr^{u,m}(S_i^u)$: \begin{align} Pr^{u,m}(S_i^u)\triangleq~\max_{t'}\min_{s>0} B^u(s),\label{eq:su} \end{align} where $B^u(s)\triangleq e ^ { - s S_i^u} \cdot \mathbb { E } \left[ e ^ { s \sum_{t=1}^{t'} CD_i^{m,t} }\right] $.
\end{itemize} \end{prop} Note that $Pr^{l,m}(S_i^l)$ and $Pr^{u,m}(S_i^u)$ are decreasing in $S_i^l$ and $S_i^u$, respectively. Also, $Pr^{l,m}(S_i^l)\rightarrow 0$ as $S_i^l\rightarrow +\infty$, and $Pr^{u,m}(S_i^u)\rightarrow 0$ as $S_i^u\rightarrow +\infty$. These properties show that a larger capacity decreases the probability that the energy level exceeds the capacity limits. Also, for any probability threshold $\alpha>0$, we can always find a capacity such that the probability that the energy level exceeds the capacity is below $\alpha$. Second, we propose Algorithm \ref{algorithm:sapacity} to determine the storage capacity $S_i$ based on the historical data of $CD_i^{m,t}$ (derived from the renewable generation data $X_i^{m,t}$). We use the underflow capacity $S_i^l$ of supplier $i$ as an example for illustration; the overflow capacity $S_i^u$ follows the same procedure. Specifically, for the underflow capacity $S_i^l$, we search in increasing order from zero as in Step 4. Given $S_i^l$, for each month $m$, we calculate the exceeding probability $Pr^{l,m}(S_i^l)$ according to \eqref{eq:sl} as in Steps 5-7. Note that based on the data samples of $\sum_{t=1}^{t'} -CD_i^{m,t}$, $B^l(s)$ is strictly convex in $s$. Thus, for any $S_i^l>0$, the value of $ \min_{s>0} B^l(s)$ can be efficiently computed using Newton's method \cite{newtonmethod}. Further, we conduct an exhaustive search over $t'\in\mathcal{T}$ to obtain $Pr^{l,m}(S_i^l)$. We calculate the expected exceeding probability $\mathbb{ E }_m[Pr^{l,m}(S_i^l)]$ over the months as in Step 8. We obtain the minimal underflow capacity $S_i^l$ once the exceeding probability satisfies $\mathbb{ E }_m[Pr^{l,m}(S_i^l)]\leq \alpha$ as in Step 9. Similarly, we obtain the minimal overflow capacity $S_i^u$. The required storage capacity is then calculated as in Step 11.
\begin{algorithm} \caption{Computation of storage capacity $S_i$} \label{alg:B} \begin{algorithmic}[1] \label{algorithm:sapacity} \STATE {\textbf{initialization}: set $S_i^l=S_i^u=0$ and the step size $\Delta S$;} \FOR {each $k\in\{l,u\}$} \REPEAT \STATE $S_i^k:=S_i^k+\Delta S;$ \FOR {each $m\in\mathcal{M}$} \STATE Supplier $i$ calculates $Pr^{k,m}(S_i^k)$ according to \eqref{eq:sl} or \eqref{eq:su}; \ENDFOR \STATE Supplier $i$ calculates $\mathbb{ E }_m[Pr^{k,m}(S_i^k)]$; \UNTIL{ $$\mathbb{ E }_m[Pr^{k,m}(S_i^k)]\leq \alpha;$$ } \ENDFOR \STATE Each supplier $i$ computes $$S_i=S_i^l+S_i^u;$$ \STATE {\textbf{output}}: $S_i$. \end{algorithmic} \end{algorithm} \begin{figure}[ht] \centering {\includegraphics[width=2.7in]{./figure/prcapacitypp}} \caption{ Characterization of storage capacity. } \label{fig:subfig:capacity2} \end{figure} As an illustration, we calculate and show the underflow probability $\mathbb{ E }_m[Pr^{l,m}(S_i^l)]$ and the overflow probability $\mathbb{ E }_m[Pr^{u,m}(S_i^u)]$ as the blue solid curve and the red dashed curve, respectively, in Figure \ref{fig:subfig:capacity2}. The probability $\mathbb{ E }_m[Pr^{l,m}(S_i^l)]$ ($\mathbb{ E }_m[Pr^{u,m}(S_i^u)]$, respectively) decreases with respect to the capacity $S_i^l$ ($S_i^u$, respectively). If the capacity $S_i^l$ ($S_i^u$, respectively) is small and close to zero, the exceeding probability $\mathbb{ E }_m[Pr^{l,m}(S_i^l)]$ ($\mathbb{ E }_m[Pr^{u,m}(S_i^u)]$, respectively) approaches one. However, when the capacity is large and close to a certain value (e.g., 6 in Figure \ref{fig:subfig:capacity2}), the corresponding exceeding probability is close to zero. We choose the probability threshold $\alpha=5\%$ and obtain the corresponding minimal capacities $S_i^{l*}$ and $S_i^{u*}$ as marked in Figure \ref{fig:subfig:capacity2}.
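To make the procedure concrete, the following Python sketch implements the empirical version of the bound in \eqref{eq:sl} and the capacity search of Algorithm \ref{algorithm:sapacity}. The synthetic charge/discharge samples, the grid search over $s$ (used here in place of Newton's method), and all function names are our own illustrative choices, not part of the paper.

```python
import math
import random

def chernoff_bound(paths, capacity, s_grid):
    """Empirical version of the bound: max over horizons t' of
    min over s > 0 of exp(-s*capacity) * E[exp(s * sum_{t<=t'} (-CD))].
    `paths` is a list of sample paths of -CD_i^{m,t}."""
    worst = 0.0
    horizon = len(paths[0])
    for t_prime in range(1, horizon + 1):
        partial_sums = [sum(p[:t_prime]) for p in paths]
        best = min(
            math.exp(-s * capacity)
            * sum(math.exp(s * x) for x in partial_sums) / len(partial_sums)
            for s in s_grid
        )
        worst = max(worst, best)
    return min(worst, 1.0)  # a probability bound above 1 is vacuous

def min_capacity(monthly_paths, alpha, step=0.5, cap_max=20.0):
    """Steps 1-9 of the algorithm for one capacity (underflow or overflow):
    grow the capacity by `step` until the bound, averaged over months,
    drops below the threshold alpha."""
    s_grid = [0.05 * k for k in range(1, 80)]  # grid search instead of Newton
    cap = 0.0
    while cap < cap_max:
        cap += step
        bounds = [chernoff_bound(m, cap, s_grid) for m in monthly_paths]
        if sum(bounds) / len(bounds) <= alpha:
            break
    return cap

# Toy stand-in for the historical data: 12 "months", each with 10 sample
# paths of 12 hourly net discharges -CD_i^{m,t}.
random.seed(0)
months = [[[random.gauss(0.0, 1.0) for _ in range(12)] for _ in range(10)]
          for _ in range(12)]
S_l = min_capacity(months, alpha=0.05)
```

Because $e^{-sS_i^l}$ is decreasing in $S_i^l$ for every $s>0$, the computed bound is monotonically decreasing in the capacity, which is what makes the incremental search terminate.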
\section{Simulation}\label{section:sim} In simulations, in addition to the analytical properties of the storage-investment equilibrium shown in Section \ref{section:stage1}, we further investigate the impact of the penalty, storage cost, and demand on suppliers' profits. We show some counter-intuitive results due to the competition between suppliers. For example, a higher penalty, a higher storage cost, or a lower demand can even increase a supplier's profit at the storage-investment equilibrium. Furthermore, the first supplier who invests in storage may benefit less than the competitor who does not invest in storage. We illustrate the detailed results in the following. \subsection{Simulation setup} In simulations, we consider two homogeneous suppliers (with the same renewable capacity, generation distribution, and storage cost) to show the storage-investment equilibrium. We also consider a fixed demand $D$ for all hours and months for illustration. Next, we explain the empirical distribution of renewable generation as well as the parameter configurations of the penalty price $\lambda$, demand $D$, and storage cost $C$. \subsubsection{Empirical distribution of renewable generation} We use the historical data of solar energy generation in Hong Kong from 1993 to 2012 \cite{hkob} to approximate the continuous CDF of suppliers' renewable generation. Specifically, we cluster the renewable generation at hour $t$ of all days into $M=12$ types (months) to account for the seasonal effect. We use the daily data (from 1993 to 2012) of renewable energy in month $m$ at hour $t$ to approximate the distribution of renewable generation at hour $t$ of month $m$. Based on the discrete data, we characterize a continuous empirical CDF to model the distribution of renewable power. We present the details of the characterization of the empirical CDF in Appendix~\ref{appendix:sim}.
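As an illustration of this kind of construction (with synthetic Gaussian samples in place of the Hong Kong data; the helper names and the piecewise-linear interpolation are our own simple stand-in, not the paper's exact appendix construction), a continuous empirical CDF and the two-sample Kolmogorov-Smirnov reliability check discussed next can be sketched in Python:

```python
import bisect
import random

def empirical_cdf(samples):
    """Continuous CDF via linear interpolation between order statistics --
    one simple stand-in for the construction detailed in the appendix."""
    xs = sorted(samples)
    n = len(xs)
    def F(x):
        if x <= xs[0]:
            return 0.0
        if x >= xs[-1]:
            return 1.0
        j = bisect.bisect_right(xs, x)          # x lies in [xs[j-1], xs[j])
        frac = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
        return (j - 1 + frac) / (n - 1)
    return F

def ks_two_sample(a, b, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test (the decision rule behind Matlab's
    kstest2): reject 'same distribution' when the largest gap between the
    two empirical CDFs exceeds the asymptotic critical value."""
    a, b = sorted(a), sorted(b)
    d = max(abs(bisect.bisect_right(a, x) / len(a)
                - bisect.bisect_right(b, x) / len(b)) for x in a + b)
    c_alpha = 1.358  # asymptotic constant for alpha = 0.05
    crit = c_alpha * ((len(a) + len(b)) / (len(a) * len(b))) ** 0.5
    return d, d > crit

# Toy "month m, hour t" samples: 600 historical points vs. 100 new points.
random.seed(1)
hist = [random.gauss(5.0, 2.0) for _ in range(600)]
same_year = [random.gauss(5.0, 2.0) for _ in range(100)]
shifted = [random.gauss(9.0, 2.0) for _ in range(100)]
F = empirical_cdf(hist)
d_same, _ = ks_two_sample(hist, same_year)
d_shift, reject_shift = ks_two_sample(hist, shifted)
```

A sample drawn from the same distribution produces a small KS statistic, while a shifted sample is rejected, mirroring the reliability check performed with \textit{kstest2}.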
Furthermore, to check the reliability of the empirical distribution, we consider two sample data sets: one consists of all the data samples from 1993 to 2012, and the other consists of the data samples from another specific year (e.g., 2013). We conduct a Kolmogorov-Smirnov test \cite{massey1951kolmogorov} using the Matlab function \textit{kstest2} to test whether these two data sets come from the same continuous distribution \cite{kstest2}. The result shows that most hours of a month pass the test. Also, our model is general and applies to any continuous distribution of renewable generation. Interested readers can also use other data or other distributions of renewable energy to test the results. \begin{figure}[t] \centering \includegraphics[width=2.4in]{./figure/solar_m5c} \vspace{-2mm} \caption{\small Average solar energy of different hours in May.} \label{fig:solar} \end{figure} \subsubsection{Parameter configuration} We explain the configuration of the penalty price $\lambda$, demand $D$, and storage cost $C$, respectively. We set the parameters to reflect real-world practice, and study their impact on the market equilibrium. \begin{itemize} \item The penalty $\lambda$: We choose the price cap $\bar{p}=1$ HKD/kWh, since the electricity price for residential users in Hong Kong is around 1 HKD/kWh \cite{hkhome}. Recall that the penalty price satisfies $\lambda>\bar{p}$. In Figure \ref{fig:payoff}(a), we consider a wide range of the ratio $\frac{\lambda}{\bar{p}}\in [1.2,20]$ to demonstrate the impact of the penalty. In Figures \ref{fig:payoff}(b), (c), and (d), we fix the penalty price at $\lambda=1.5$ HKD/kWh and focus on illustrating the impact of the other parameters. \item The demand $D$: In Figure \ref{fig:payoff}(d), we discuss a wide range of demand from 0 MW to 15 MW to show the impact of the demand. As a comparison, in Figure \ref{fig:solar}, we show the average renewable power across hours in May.
In Figures \ref{fig:payoff}(a) and (b), we fix the demand at $D=1$ MW to show the impact of the other parameters ($\lambda$ and $C$). In Figure \ref{fig:payoff}(c), we choose a larger demand $D=12$ MW and a smaller demand $D=6$ MW to show the impact of demand on the equilibrium profit. \item The storage cost $C_i$: Recall that the storage investment cost is $C_i=c_i \kappa_i S_i$. There are different types of storage technologies with diverse capital costs and lifespans. For example, pumped hydroelectric storage is usually cheap and can last for 30 years with a capital cost of $c_i=40\sim 800$ HKD/kWh, while the Li-ion battery can last 15 years with a capital cost of about $c_i=1600\sim 9000$ HKD/kWh \cite{storagecost2017}. We choose the annual interest rate $ r_i=5\%$, and the storage capacity for the with-storage supplier is characterized as 43 MWh by Algorithm \ref{algorithm:sapacity}. We capture the impact of the parameters $c_i$ and $\kappa_i$ through the storage cost $C_i$. According to the formula $C_i=c_i \kappa_i S_i$, the (hourly) investment cost $C_i$ of pumped hydroelectric storage is $0.012\times 10^3$ to $0.255\times 10^3$ HKD, and that of the Li-ion battery is $0.76\times 10^3$ to $4.36\times 10^3$ HKD. This shows that the storage cost can have a wide range.\footnote{Note that we only consider the investment cost in the storage cost. In practice, there are also other costs that need to be included, such as the maintenance cost.} Then, in Figure \ref{fig:payoff}(c), we consider a wide range of storage costs from 0 to $2\times 10^3$ HKD. Although a zero storage cost is not very practical, we use it to represent a low storage cost and capture the entire range of the impact of the storage cost. In Figures \ref{fig:payoff}(a), (b), and (d), we choose lower storage costs ($0.1\times 10^3$ and $0.15 \times 10^3$ HKD) and higher storage costs ($1\times 10^3$ and $1.5\times 10^3$ HKD) to show the different results under different storage costs.
\end{itemize} \subsection{Simulation results} We discuss the impact of the penalty, storage cost, and demand on suppliers' profits, and show some counter-intuitive results due to the competition between suppliers. \begin{figure*}[t] \centering \subfigure[]{ \label{fig:subfig:payoff1} \raisebox{-2mm}{\includegraphics[width=2.32in]{./figure/payoff_penalty_new1p}}} \hspace{-2mm} \subfigure[]{ \label{fig:subfig:price} \raisebox{-2mm}{\includegraphics[width=2.32in]{./figure/payoff_pricep}}} \hspace{-2mm} \subfigure[]{ \label{fig:subfig:payoff2} \raisebox{-2mm}{\includegraphics[width=2.352in]{./figure/payoff_storage_costnew2p}}} \hspace{-2mm} \subfigure[]{ \label{fig:subfig:payoff3} \raisebox{-2mm}{\includegraphics[width=2.36in]{./figure/payoff_demand_new1p}}} \vspace{-2mm} \caption{(a) Profit of suppliers with penalty ($D=1$ MW); (b) Expected bidding price of suppliers with penalty ($D=1$ MW); (c) Profit of suppliers with storage cost ($\lambda=1.5$ HKD/kWh); (d) Profit of suppliers with demand ($\lambda=1.5$ HKD/kWh). } \vspace{-2mm} \label{fig:payoff} \end{figure*} \subsubsection{The impact of penalty on suppliers' profits} \textit{Although a higher penalty $\lambda$ increases the penalty cost on the without-storage supplier, surprisingly, we find that a higher penalty can also increase this supplier's profit, due to the reduced market competition in the energy market.} We show how suppliers' profits and expected bidding prices at the storage-investment equilibrium change with the penalty (at demand $D=1$ MW) in Figures \ref{fig:subfig:payoff1} and \ref{fig:subfig:price}, respectively. Different colors represent different storage costs. The diamond marker indicates that $\text{S}_0\text{S}_0$ is the storage-investment equilibrium, and the circle marker indicates that $\text{S}_1\text{S}_0$ is the equilibrium.
Also, when $\text{S}_1\text{S}_0$ is the equilibrium, the solid lines and dashed lines distinguish the with-storage supplier and the without-storage supplier, respectively. First, we show that at the equilibrium where both suppliers do not invest in storage (i.e., $\text{S}_0\text{S}_0$), a higher penalty $\lambda$ can increase both suppliers' profits. As shown in Figure \ref{fig:subfig:payoff1}, when the storage cost is high at $C=1.5\times 10^3$ HKD, both suppliers will not invest in storage for any value of the penalty $\lambda$ from $1.2$ HKD/kWh to $20$ HKD/kWh (in blue curve with diamond marker). In this case, both suppliers' profits first increase (at $\lambda<11$ HKD/kWh) and then decrease (at $\lambda>11$ HKD/kWh) with $\lambda$ (in blue curve). The intuition for the increase of profit at $\lambda<11$ HKD/kWh is that a higher penalty decreases both suppliers' bidding quantities if the bidding prices remain the same. This reduces the market competition and enables both suppliers to bid a higher price in the local energy market, as shown in Figure \ref{fig:subfig:price} (in blue curve). However, the increased penalty also increases the penalty cost on the suppliers, so the suppliers' profits will decrease if the penalty is too high (at $\lambda>11$ HKD/kWh). Second, we show that at the equilibrium where one supplier invests in storage and one does not (i.e., $\text{S}_1\text{S}_0$), a higher penalty $\lambda$ can also increase both suppliers' profits. We consider a low storage cost $C=0.15 \times 10^3$ HKD, as in the red curves in Figures \ref{fig:subfig:payoff1} and \ref{fig:subfig:price}. We see that if $\lambda$ is low (at $\lambda<\lambda_a$), both suppliers will not invest in storage (i.e., $\text{S}_0\text{S}_0$), and their profits increase with the penalty, as shown in Figure \ref{fig:subfig:payoff1} (at $\lambda<\lambda_a$ in red curve with diamond marker, which overlaps with the blue curve).
As the penalty increases (at $\lambda>\lambda_a$), the equilibrium changes from $\text{S}_0\text{S}_0$ to $\text{S}_1\text{S}_0$, since a higher penalty and a lower storage cost enable a supplier to enjoy more benefits by investing in storage. We discuss the profits of the with-storage supplier and the without-storage supplier, respectively, as follows. \begin{itemize} \item For the with-storage supplier, as shown in Figure \ref{fig:subfig:payoff1}, when $\lambda>\lambda_a$, his profit increases as the penalty increases (in red solid curve), and can be much higher than that of the without-storage supplier (in red dashed curve). The reason is that in the $\text{S}_1\text{S}_0$ case, the penalty cost makes the with-storage supplier dominate over the without-storage one. The with-storage supplier can bid higher prices than the without-storage supplier, as shown in Figure \ref{fig:subfig:price} (in red solid curve and red dashed curve), and he also does not need to pay the penalty cost. \item However, for the without-storage supplier, as shown in Figure \ref{fig:subfig:payoff1}, his profit also slightly increases as the penalty increases over $\lambda_a<\lambda<10$ HKD/kWh (in red dashed curve). The intuition is that a higher penalty gives an advantage to the with-storage supplier, which reduces the market competition and increases both suppliers' bidding prices, as shown in Figure \ref{fig:subfig:price} (in red curves). Thus, it can also benefit the without-storage supplier. However, as shown in Figure \ref{fig:subfig:payoff1}, if the penalty further increases to $\lambda>10$ HKD/kWh (in red dashed curve), the without-storage supplier's profit decreases due to the increased penalty cost. \end{itemize} \subsubsection{The impact of storage cost on suppliers' profits} \textit{Intuitively, a higher storage cost will discourage a supplier from investing in storage, which generally decreases a supplier's profit.
However, we find that it may also increase a supplier's profit if the other supplier changes his strategy due to the increased storage cost.} We show how suppliers' profits at the storage-investment equilibrium change with the storage cost in Figure \ref{fig:subfig:payoff2}. Different colors represent different demands. The diamond marker, circle marker, and star marker correspond to the different storage-investment equilibria $\text{S}_0\text{S}_0$, $\text{S}_1\text{S}_0$, and $\text{S}_1\text{S}_1$, respectively. For the $\text{S}_1\text{S}_0$ case, the solid lines and dashed lines distinguish the with-storage supplier and the without-storage supplier, respectively. As shown in Figure \ref{fig:subfig:payoff2} (in both red curve and blue curve), a higher storage cost generally decreases suppliers' profits. However, we show that the opposite may be true using the example of $D=6$ MW (in red curve). When the demand is $D=6$ MW (in red curve), as the storage cost increases, the equilibrium changes from $\text{S}_1\text{S}_1$ (when $C<C_a$), to $\text{S}_1\text{S}_0$ (when $C_a<C<C_b$), and finally to $\text{S}_0\text{S}_0$ (when $C>C_b$). When the equilibrium changes from $\text{S}_1\text{S}_1$ to $\text{S}_1\text{S}_0$ at the threshold $C=C_a$, the remaining with-storage supplier from the original $\text{S}_1\text{S}_1$ case has a higher (upward-jumping) profit, after the other supplier chooses not to invest in storage due to the high storage cost. This change from $\text{S}_1\text{S}_1$ to $\text{S}_1\text{S}_0$ reduces the competition and gives more advantages to the with-storage supplier. \subsubsection{The impact of demand on suppliers' profits} \textit{Intuitively, a higher demand will increase a supplier's profit.
However, we show that a higher demand may also decrease a supplier's profit if the other supplier changes his strategy due to the increased demand.} We show how suppliers' profits at the storage-investment equilibrium change with the demand in Figure \ref{fig:subfig:payoff3}. Different colors represent different storage costs. The diamond marker, circle marker, and star marker correspond to the different storage-investment equilibria $\text{S}_0\text{S}_0$, $\text{S}_1\text{S}_0$, and $\text{S}_1\text{S}_1$, respectively. For the $\text{S}_1\text{S}_0$ case, the solid lines and dashed lines distinguish the with-storage supplier and the without-storage supplier, respectively. As shown in Figure \ref{fig:subfig:payoff3} (in both red curve and blue curve), a higher demand generally increases a supplier's profit. However, we show that the opposite may be true using the example of $C=0.1\times 10^3$ HKD (in red curve). When the storage cost is low at $C=0.1\times 10^3$ HKD (in red curve), as the demand increases, the equilibrium changes from $\text{S}_0\text{S}_0$ (when $D<D_a$), to $\text{S}_1\text{S}_0$ (when $D_a<D<D_b$), and finally to $\text{S}_1\text{S}_1$ (when $D>D_b$). When the equilibrium changes from $\text{S}_1\text{S}_0$ to $\text{S}_1\text{S}_1$ at the threshold $D=D_b$, the with-storage supplier in the original $\text{S}_1\text{S}_0$ case has a smaller (downward-jumping) profit, after the other supplier also chooses to invest in storage due to the high demand. This change from $\text{S}_1\text{S}_0$ to $\text{S}_1\text{S}_1$ increases the market competition and weakens the advantage of the with-storage supplier in the original $\text{S}_1\text{S}_0$ case. Furthermore, when the storage cost is high at $C=1.5\times 10^3$ HKD (in blue curve with diamond marker), both suppliers will not invest in storage regardless of the demand.
\subsubsection{First-mover disadvantage and advantage}Intuitively, the first supplier who invests in storage should benefit more than the without-storage competitor. \textit{However, we find that if the storage cost is high, the first mover in storage investment can benefit less than the free-rider competitor who does not invest in storage. } As shown in Figure \ref{fig:subfig:payoff2} at $D=6$ MW (in red curve), the $\text{S}_1\text{S}_0$ case is the equilibrium when the storage cost is in the range $C_a<C<C_b$. If the storage cost is low at $C_a<C<0.7\times 10^3 \text{ HKD}$, the with-storage supplier's profit is higher than the without-storage supplier's profit. However, if the storage cost is high at $0.7\times 10^3 \text{ HKD}<C<C_b$, the with-storage supplier's profit is lower than the without-storage supplier's. This shows both the advantage and the disadvantage of the first mover. Although in some situations investing in storage increases a supplier's profit, he can get a higher profit by waiting for the other to invest first when the storage cost is high. However, if the storage cost is low, he should be the first to invest in storage in order to get a higher profit. \section{Extensions: A more general oligopoly model}\label{section:extenstion} We build a more general oligopoly model and extend some of the theoretical results and insights from the duopoly case to the oligopoly case. Compared with the duopoly model, the only difference in the oligopoly model is that the number of suppliers can be more than two, i.e., $|\mathcal{ I}|\geq 2$. Following the analysis of the duopoly model, we analyze the equilibrium in Stage II and Stage I in the oligopoly case and derive some insights. Specifically, in Stage II, we extend the theoretical results on the price-quantity competition equilibrium. In Stage I, we generalize the analytical results on the impact of the storage cost and demand on the storage-investment equilibrium.
Furthermore, we show that some of the key insights from the duopoly case, e.g., that the uncertainty of renewable generation can be beneficial to suppliers, still hold in the oligopoly case. Next, we discuss the extensions of Stage II and Stage I in detail, respectively. We include all the proofs of the propositions in Appendix~\ref{appendix:proofoligopoly}. \subsection{ Stage II Analysis} For Stage II, the weakly dominant strategy of bidding quantities still holds for the case of more than two suppliers. We generalize the conditions for the existence of the pure price equilibrium and show that a mixed price equilibrium also exists in the oligopoly case. Furthermore, we show that suppliers get positive revenues at the mixed price equilibrium. We present the extended analysis in detail as follows. \subsubsection{Weakly dominant strategy for bidding quantities} The weakly dominant strategies for bidding quantities still hold as in Theorem \ref{thm:quantity}. \subsubsection{Existence of the pure price equilibrium} We derive the conditions for the existence of the pure price equilibrium among suppliers, generalizing Proposition \ref{prop:pureprice}. Specifically, we consider a general subgame in Stage II denoted as ${S}^{{\mathcal{U}|\mathcal{V}}}$, where suppliers in the set $\mathcal{U}$ invest in storage and suppliers in the set $\mathcal{V}$ do not. Recall that we denote the set of all the suppliers as $\mathcal{ I}$, and we have $\mathcal{U}\cup \mathcal{V}=\mathcal{I}$. The case $\mathcal{U}=\mathcal{I}$ means that all the suppliers invest in storage, and the case $\mathcal{V}=\mathcal{I}$ means that no supplier invests in storage. We show the existence of the pure price equilibrium in Proposition \ref{prop:purepricennn}.
\begin{prop}[existence of the pure price equilibrium in the oligopoly case]\mbox{}\label{prop:purepricennn} Considering a subgame $\text{S}^{{\mathcal{U}|\mathcal{V}}}$ of storage investment among suppliers in Stage II, the existence of the pure price equilibrium depends on the demand $D$ as follows: \begin{itemize} \item If $D \geq \sum_{i\in \mathcal{I}} y_i^*(\bar{p},\varphi_i)$, there exists a pure price equilibrium $p_i^*=\bar{p}$, with an equilibrium revenue $\pi_i^{RE}=\lambda \int_{0}^{F_i^{-1}(\bar{p}/\lambda)}xf_i(x)dx$ for any $i\in \mathcal{V}$ and $\pi_i^{RE}=\bar{p} \mathbb{E}[X_i]$ for any $i\in \mathcal{U}$. \item If $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)$ for any $j\in\mathcal{ U}$, there exists a pure price equilibrium $p_i^*=0$, with an equilibrium revenue $\pi_i^{RE}=0$, for any $i\in \mathcal{ I}$. \item If there exists $j\in\mathcal{ U}$ such that $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)<D < \sum_{i\in \mathcal{I}} y_i^*(\bar{p},\varphi_i)$, there is no pure price equilibrium. \end{itemize} \end{prop} Similar to the duopoly case, this proposition can be interpreted as follows. If the demand is higher than the threshold $\sum_{i\in \mathcal{I}} y_i^*(\bar{p},\varphi_i)$, all the suppliers can bid the price cap and sell their maximum quantities. If the demand is very low such that $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)$ for any $j\in\mathcal{ U}$, the competition is fierce and all the suppliers bid a zero price. However, if the demand is in between, there is no pure price equilibrium. Note that if the number of with-storage suppliers is no greater than one, i.e., $|\mathcal{U}|\leq 1$, the condition that $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)$ for every $j\in\mathcal{ U}$ cannot be satisfied.
It means that there is no pure price equilibrium of $p_i^*=0$ for any demand $D>0$. \subsubsection{Existence of the mixed price equilibrium} For the case in Proposition \ref{prop:purepricennn} where there exists no pure price equilibrium, we show that there exists a mixed price equilibrium. However, the characterization of the mixed strategy is highly non-trivial in the oligopoly case, and it is difficult to completely generalize Lemma \ref{lem:mix}. We generalize it partially as Proposition \ref{lem:mixnnn}, showing that a mixed price equilibrium exists and that all the suppliers get positive revenues at the mixed price equilibrium. \begin{prop}[mixed price equilibrium in the oligopoly case]\label{lem:mixnnn} For any $\bm{\varphi}$, when there is no pure price equilibrium, a mixed price equilibrium exists and the equilibrium electricity-selling revenues $\pi_i^{RE}$ satisfy $\pi_i^{RE}(\bm{\varphi})>0, \text{~for any}~ i\in \mathcal{ I} $. \end{prop} The equilibrium revenue for the case where all the suppliers invest in storage (i.e., $\mathcal{U}=\mathcal{I}$) has been characterized in \cite{capacityprice}. When there are two suppliers, we can also characterize the cumulative distribution function (CDF) of the mixed price strategy for the case of one supplier investing in storage and the other not, as in Theorem \ref{thm:mscdf}. However, when $I>2$, for any case where $|\mathcal{U}|<I$, it is highly non-trivial to characterize the corresponding CDF analytically. \subsection{ Stage I Analysis} For Stage I in the general oligopoly case, we show that a mixed storage-investment equilibrium always exists. We can also generalize the analytical results on the impact of the storage cost and demand on the storage-investment equilibrium for the settings where (i) the storage cost is sufficiently large; and (ii) the demand is sufficiently large or small.
Furthermore, some of the key insights, e.g., that the uncertainty of renewable generation can be beneficial to suppliers, still hold in the oligopoly case. We discuss the extensions in detail in the following. \subsubsection{Existence of the storage-investment equilibrium} A mixed equilibrium of storage investment always exists. Note that each supplier has two strategies: investing in storage and not investing in storage. Numerically, we can check for a pure storage-investment equilibrium by the definition of the Nash equilibrium. Also, a mixed equilibrium of storage investment always exists due to the finite number of storage-investment strategies \cite{gamex}. \subsubsection{Impacts of the storage cost and demand on the storage-investment equilibrium} Some analysis of the impact of the storage cost and demand on the storage-investment equilibrium in the duopoly case can also be extended. Specifically, we can extend Propositions \ref{prop:stocost}, \ref{prop:stodemandl}, and \ref{prop:stodemandh} to the oligopoly case, which generalizes the analytical results for the settings where (i) the storage cost is sufficiently large; and (ii) the demand is sufficiently large or small. First, since the benefit from investing in storage is bounded, we can show that when the storage cost is greater than a threshold, no supplier will choose to invest in storage. \begin{prop}\label{prop:stocostnnn} There exists a threshold $C_i^{\text{no}}$ such that if the storage cost satisfies $C_i>C_i^{\text{no}}$ for any $i\in \mathcal{ I}$, the $S^{\emptyset| \mathcal{ I}}$ case (i.e., no supplier investing in storage) is the unique pure storage-investment equilibrium. \end{prop} Second, in the subgame $S^{\mathcal{ U}|\mathcal{ V}}$ where $|\mathcal{U}| \geq 2$, if the demand is too low, all the suppliers may get zero revenue in the energy market, as implied by Proposition \ref{prop:purepricennn}. This makes the with-storage suppliers deviate to not investing in storage.
Thus, we have the following proposition. \begin{prop}\label{prop:stodemandlnnn} In the subgame $S^{\mathcal{ U}|\mathcal{ V}}$, if the demand satisfies $0<D^{m,t}\leq \min_{j\in \mathcal{ U}} (\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j))$ for any $t$ and $m$, the case $S^{\mathcal{ U}|\mathcal{ V}}$ (i.e., suppliers in set $\mathcal{ U}$ invest in storage and suppliers in set $\mathcal{V}$ do not) cannot be a pure storage-investment equilibrium. \end{prop} Third, as in Proposition \ref{prop:purepricennn}, when the demand is higher than a certain threshold, every supplier can bid the price cap and sell his entire bidding quantity. In this case, there is no competition between suppliers, and they make storage-investment decisions independently based on their own storage costs. We show this in the following proposition. \begin{prop}\label{prop:stodemandhnnn} There exist $D^{m,t}_{\text{th}}>0$ and $C_i^\text{th}>0$, such that when the demand satisfies $D^{m,t}\geq D^{m,t}_{\text{th}}$ for any $t$ and $m$, supplier $i$ has the dominant strategy $\varphi_{i}^*$ as follows \begin{equation} \varphi_{i}^*= \left \{ \begin{aligned} &1,~\text{if}~\text{the~storage~cost}~ C_i\leq C_i^\text{th},\\ &0, ~\text{if}~\text{the~storage~cost}~ C_i> C_i^\text{th}. \end{aligned} \right. \end{equation} \end{prop} \subsubsection{Positive profits at the storage-investment equilibrium} We can further extend Proposition \ref{prop:stoprofit} to show the benefit of the uncertainty to the equilibrium profit. We show that under suppliers' competition (even with the potential cost of the storage investment), all the suppliers get strictly positive profits at the equilibrium. \begin{prop}[strictly positive profit]\label{prop:stoprofitnnn} All the suppliers get strictly positive profits at the storage-investment equilibrium. \end{prop} This proposition shows the benefit of the renewable generation randomness.
If all the suppliers had stable outputs, they might get zero revenue, as implied by Proposition \ref{prop:purepricennn}, and thus a negative profit once the storage cost is taken into account. However, with random generation, all the suppliers get strictly positive profits at the storage-investment equilibrium even after accounting for the storage cost. The intuition is that if a supplier invests in storage and gets a non-positive profit, he can always choose not to invest in storage. This at least saves him the cost of the storage investment, which increases his profit. Also, note that when no supplier invests in storage, all the suppliers can get positive profits. Therefore, only a state where all the suppliers get positive profits can be an equilibrium. In summary, we can extend some of our major theoretical results and insights to the oligopoly case of more than two suppliers. Some of the key insights from the duopoly case, e.g., that the uncertainty of renewable generation can be beneficial to suppliers, still hold in the oligopoly case. However, we are not able to analytically extend all insights to the oligopoly case due to the complexity of the analysis. We would like to explore this in our future work. \section{Conclusion}\label{section:con} We study a duopoly two-settlement local energy market where renewable energy suppliers compete to sell electricity to consumers with or without energy storage. We formulate the interactions between suppliers and consumers as a three-stage game-theoretic model. We characterize a price-quantity competition equilibrium in the local energy market, and further characterize a storage-investment equilibrium between suppliers at the beginning of the investment horizon. Surprisingly, we find that the uncertainty of renewable generation can increase suppliers' profits compared with the case where both suppliers invest in storage and stabilize their outputs. In simulations, we show more counter-intuitive results due to the market competition.
For example, a higher penalty, a higher storage cost, and a lower demand may increase a supplier's profit. We also show that the first mover in storage investment may benefit less than the free-rider competitor who does not invest in storage. In future work, we will study sizing the storage capacity when the renewable output is not required to be completely smoothed out. \newpage \section*{Appendix}\label{appendix:a} This appendix is organized as follows: \begin{itemize} \item Section \ref{appendix:s1s1}: We show the equilibrium revenue of the suppliers in the $\text{S}_1\text{S}_1$ case, when the demand satisfies $\min_i y_i^*(\bar{p},\varphi_i)<D< \sum_i y_i^*(\bar{p},\varphi_i)$ and there is no pure price equilibrium but a mixed price equilibrium. \item Section \ref{appendix:s0s0}: We show how we discretize the continuous price set to approximate the mixed price equilibrium in the $\text{S}_0\text{S}_0$ case. \item Section \ref{appendix:stotage}: For the storage capacity characterization, we first prove the propositions in Section \ref{section:capacity}, and then present the model of imperfect storage. \item Section \ref{appendix:sim}: For the simulations, we first show the characterization of the continuous CDF for the renewable-generation distribution using historical data, and then simulate an example of two heterogeneous suppliers. \item Section \ref{appendix:proofstage3}: We prove the theorems and propositions of Stage III. \item Section \ref{appendix:proofstage2}: We prove the theorems and propositions of Stage II. \item Section \ref{appendix:proofstage1}: We prove the theorems and propositions of Stage I. \item Section \ref{appendix:proofoligopoly}: We prove the propositions in the oligopoly model.
\end{itemize} \vspace{5mm} \section{Appendix: Mixed price equilibrium of $\text{S}_1\text{S}_1$ subgame}\label{appendix:s1s1} As shown in Proposition \ref{prop:pureprice}, when the demand satisfies $\min_i y_i^*(\bar{p},\varphi_i)<D< \sum_i y_i^*(\bar{p},\varphi_i)$, there is no pure price equilibrium. We can characterize a closed-form equilibrium revenue for each supplier at the mixed price equilibrium in Proposition \ref{thm:bothpricen}, which has been proved in \cite{capacityprice}. \begin{prop}[$\text{S}_1\text{S}_1$: mixed-equilibrium revenue]\label{thm:bothpricen} In the $\text{S}_1\text{S}_1$ case (i.e., $\sum_{i}\varphi_i=2$), if $\min_i y_i^*<D< \sum_i y_i^*$, there exists no pure price equilibrium but there exists a mixed price equilibrium, with the equilibrium revenue as follows: \begin{equation} \pi_i^{RE}(\bm{\varphi})=\left \{ \begin{aligned} &\bar{p}(D-y_{-i}^*),~\text{if}~y_i^*> y_{-i}^*,\\ &\frac{\bar{p}(D-y_i^*) y_i^*}{\min(y_{-i}^*,D)},~\text{otherwise},\notag\\ \end{aligned} \right. \end{equation} where $y_i^*=\mathbb{ E }[X_i]$ and $y_{-i}^*=\mathbb{ E }[X_{-i}]$ as characterized in Theorem \ref{thm:quantity}. \end{prop} According to Proposition \ref{thm:bothpricen}, one supplier's equilibrium revenue is related to the other supplier's bidding quantity (i.e., the mean value of his generations). Specifically, one supplier's equilibrium revenue decreases if the other supplier's bidding quantity increases. Furthermore, under the mixed price equilibrium, both suppliers get strictly positive revenues, while under the pure price equilibrium they may get zero revenues when the demand is below the threshold $\min_i y_i^*$, as shown in Proposition \ref{prop:pureprice}. \vspace{5mm} \section{Appendix: Mixed price equilibrium of $\text{S}_0\text{S}_0$ subgame}\label{appendix:s0s0} In the $\text{S}_0\text{S}_0$ case, neither supplier invests in storage, and both face the general penalty cost.
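The piecewise revenue formula of Proposition \ref{thm:bothpricen} above can be evaluated directly. The following Python sketch restates it numerically; the helper name \texttt{mixed\_eq\_revenue} and the example numbers are ours, not part of the model.

```python
def mixed_eq_revenue(p_bar, D, y):
    """Equilibrium revenues in the S1S1 subgame under the mixed price
    equilibrium; y = (y_1, y_2) are the bidding quantities, i.e., the
    mean values of the two suppliers' generations."""
    assert min(y) < D < sum(y), "no mixed equilibrium outside this demand range"
    rev = []
    for i in (0, 1):
        yi, ymi = y[i], y[1 - i]
        if yi > ymi:
            rev.append(p_bar * (D - ymi))                     # larger supplier
        else:
            rev.append(p_bar * (D - yi) * yi / min(ymi, D))   # smaller (or tied) supplier
    return rev

# illustrative numbers: p_bar = 10, D = 5, (y_1*, y_2*) = (4, 3)
r1, r2 = mixed_eq_revenue(10.0, 5.0, (4.0, 3.0))
```

With these numbers, the larger supplier earns $10\cdot(5-3)=20$ and the smaller one $10\cdot(5-3)\cdot 3/4=15$; both are strictly positive, as the proposition claims.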
When $0<D<\sum_i y_i^*(\bar{p},\varphi_i)$, the mixed price equilibrium has a continuous CDF over $[l,\bar{p})$ as shown in Lemma \ref{lem:mix}, but we cannot derive it in closed form. To better understand the CDF, we discretize the price set to approximate the original continuous one, and compute the mixed equilibrium for the discrete price set. Specifically, we discretize the prices in $(0,\bar{p}]$ into $\{\Delta p,2\Delta p,3\Delta p,\ldots, \bar{p}-\Delta p,\bar{p}\}$ with a small $\Delta p>0$. We search for the lower support in the range given in \eqref{eq:nslower} in the following way. Given a lower support $l'$, the mixed strategy of each supplier has the support $\{l',l'+\Delta p, l'+2\Delta p,\ldots, \bar{p}\}$ that approximates the original continuous support $[l,\bar{p}]$. For each supplier, each price strategy in the support yields the same expected revenue, which can be used to construct a set of linear equations and calculate the mixed equilibrium. If the probability of each price for each supplier lies in $(0,1)$, then the lower support $l'$ is feasible; otherwise, there exists a price that should be excluded from the support $\{l',l'+\Delta p, l'+2\Delta p,\ldots, \bar{p}\}$ and the lower support $l'$ is not feasible. We calculate the equilibrium revenue according to Lemma \ref{lem:mix} (ii). \section{Appendix: Characterization of storage capacity}\label{appendix:stotage} We will first prove Proposition \ref{prop:bound} and show some properties of the upper bounds $Pr^{l,m}(S_i^l)$ and $Pr^{u,m}(S_i^u)$. Then, we discuss the imperfect storage model and show how it affects the storage cost. \subsection{Proof of Proposition \ref{prop:bound}} \textbf{Proof}: Below, we derive the upper bound $Pr^{l,m}(S_i^l)$. The upper bound $Pr^{u,m}(S_i^u)$ can be derived analogously.
Given $t'\in \mathcal{T}$, we have \vspace{-2mm} \begin{align} \text{Pr}\Big(\sum_{t=1}^{t'} -CD_i^{m,t}> S_i^l\Big)&=\text{Pr} \left( e ^ { s \sum_{t=1}^{t'} -CD_i^{m,t} } \geq e ^ { s S_i^l} \right)\leq e ^ { -s S_i^l } \cdot \mathbb { E } \left[ e ^ { s \sum_{t=1}^{t'} -CD_i^{m,t} } \right]\triangleq B^l(s),\label{eq:capacity} \end{align} for any $s>0$. The inequality in \eqref{eq:capacity} is due to the Markov inequality.\footnote{This inequality is also known as the Chernoff bound, which can achieve a tight probability bound \cite{mousavi2010tight}. } Given $S_i^l>0$, we can find a tight upper bound for the probability $\text{Pr}(\sum_{t=1}^{t'} -CD_i^{m,t}> S_i^l)$ by minimizing the RHS of \eqref{eq:capacity} over $s$. Therefore, $Pr^{l,m}(S_i^l)=\max_{t'} \min_{s>0} B^l(s)$. \qed \subsection{Properties of the upper bounds $Pr^{l,m}(S_i^l)$ and $Pr^{u,m}(S_i^u)$} The upper bounds $Pr^{l,m}(S_i^l)$ and $Pr^{u,m}(S_i^u)$ have the following properties. \begin{prop}[properties of the upper bounds] \label{prop:boundpro} Given $S_i^l>0$ and $S_i^u>0$, the Markov-inequality-based upper bounds have the following properties. \begin{enumerate} \item $Pr^{l,m}(S_i^l)\leq 1$ and $Pr^{u,m}(S_i^u)\leq 1$. \item $Pr^{l,m}(S_i^l)$ and $Pr^{u,m}(S_i^u)$ are decreasing in $S_i^l$ and $S_i^u$, respectively. \item $Pr^{l,m}(S_i^l)\rightarrow 0$ as $S_i^l\rightarrow +\infty$, and $Pr^{u,m}(S_i^u)\rightarrow 0$ as $S_i^u\rightarrow +\infty$. \end{enumerate} \end{prop} \textbf{Proof}: The first property holds because $\min_{s>0} B^l(s)\leq \lim_{s\rightarrow 0^+}B^l(s)=1$ and $\min_{s>0} B^u(s)\leq \lim_{s\rightarrow 0^+}B^u(s)=1$. The second property is straightforward from the form of the functions $B^l(s)$ and $B^u(s)$. The third property holds because $CD_i^{m,t}$ is bounded, so that for any fixed $s>0$, $B^l(s)\rightarrow 0$ as $S_i^l\rightarrow +\infty$, and $B^u(s)\rightarrow 0$ as $S_i^u\rightarrow +\infty$. \qed Proposition \ref{prop:boundpro} shows that a larger capacity decreases the charge/discharge exceeding probability.
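Numerically, the bound $Pr^{l,m}(S_i^l)=\max_{t'}\min_{s>0}B^l(s)$ can be estimated from sampled charge/discharge trajectories by a grid search over $s$. The sketch below uses synthetic Gaussian samples as a stand-in for the paper's historical data; the function name and the grid are ours.

```python
import numpy as np

def chernoff_bound(cd_samples, S, s_grid=None):
    """Estimate max over t' of min over s > 0 of
    B(s) = exp(-s S) * E[exp(s * sum_{t<=t'} -CD_t)]
    from Monte Carlo draws cd_samples of shape (num_scenarios, T)."""
    if s_grid is None:
        s_grid = np.linspace(1e-3, 5.0, 500)
    partial = np.cumsum(-cd_samples, axis=1)   # sum_{t=1}^{t'} -CD_t per scenario
    best = 0.0
    for tp in range(partial.shape[1]):
        mgf = np.exp(np.outer(partial[:, tp], s_grid)).mean(axis=0)  # empirical E[e^{sX}]
        B = np.exp(-s_grid * S) * mgf
        best = max(best, B.min())              # outer max over horizons t'
    return min(best, 1.0)                      # a probability bound never exceeds 1

rng = np.random.default_rng(0)
cd = rng.normal(0.0, 1.0, size=(200, 24))      # synthetic CD_i^{m,t} draws (assumption)
bound_small = chernoff_bound(cd, S=1.0)
bound_large = chernoff_bound(cd, S=5.0)
```

Because $e^{-sS_i^l}$ decreases in $S_i^l$ for every $s$, the computed bound is non-increasing in the capacity, matching the second property of Proposition \ref{prop:boundpro}.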
Also, for any positive probability threshold $\alpha$, we can always find a sufficiently large capacity to keep the exceeding probability below $\alpha$. This lays the foundation for Algorithm 1. \subsection{Generalization to the imperfect storage model} We consider imperfect energy storage in two aspects: (i) less-than-100\% charge and discharge efficiency and (ii) the degradation cost incurred by charging and discharging. Next, we will explain how the storage charge and discharge are determined in our work, and then further discuss how the imperfect storage impacts the total storage cost and the investment equilibrium. To begin with, we explain the model of the storage charge and discharge as well as the energy level of the storage in our work. Specifically, the with-storage supplier charges and discharges the energy storage to stabilize his renewable output at its mean value. Thus, the charge and discharge power depends only on the random renewable generation. At hour $t$ of renewable-generation-type (month) $m$, we denote the charge amount as $CD_i^{m,t+}\geq 0$ and the discharge amount as $CD_i^{m,t-}\geq 0$. These values are characterized based on the random generation $X_i^{m,t}$ as follows: \begin{align} &CD_i^{m,t+}=(X_i^{m,t}-\mathbb{ E }[X_i^{m,t}])^+,\label{eq:newc1n}\\ &CD_i^{m,t-}=(X_i^{m,t}-\mathbb{ E }[X_i^{m,t}])^-,\label{eq:newc2n} \end{align} where $g^+\triangleq \max(g,0)$ and $g^-\triangleq\max(-g,0)$. Furthermore, we denote the charge efficiency as $\eta_i^c$ and the discharge efficiency as $\eta_i^d$. The energy level in the storage can be calculated by accumulating the charge and discharge over time in month $m$ as follows: \begin{align} e_i^{m,t}=e_i^{m,t-1}+\eta_i^c CD_i^{m,t+}- CD_i^{m,t-}/\eta_i^d. \label{eq:cd} \end{align} Next, we discuss how the degradation cost and the less-than-100\% charge and discharge efficiency impact the total storage cost.
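To make the bookkeeping of \eqref{eq:newc1n}--\eqref{eq:cd} concrete, the following is a minimal Python sketch; the toy generation trace and the helper name are ours, not part of the model.

```python
import numpy as np

def energy_levels(x, eta_c=1.0, eta_d=1.0, e0=0.0):
    """Energy level e_i^{m,t} of eq. (cd) when the supplier stabilizes a
    generation trace x at its sample mean."""
    mean = x.mean()
    cd_plus = np.maximum(x - mean, 0.0)    # CD^{m,t+}: charge when above the mean
    cd_minus = np.maximum(mean - x, 0.0)   # CD^{m,t-}: discharge when below the mean
    return e0 + np.cumsum(eta_c * cd_plus - cd_minus / eta_d)

x = np.array([1.0, 3.0, 2.0, 0.0, 4.0, 2.0])    # toy hourly generation (assumption)
perfect = energy_levels(x)                      # eta_c = eta_d = 100%
lossy = energy_levels(x, eta_c=0.9, eta_d=0.9)  # round-trip losses
```

With $\eta_i^c=\eta_i^d=100\%$ the trajectory ends at its starting level, since the deviations above and below the sample mean cancel over the trace; with losses the storage ends with a net energy deficit.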
\subsubsection{Degradation cost} We show that the degradation cost increases the total cost of deploying the storage for the with-storage supplier. The degradation cost is caused by the charging and discharging of the storage. In the ideal case, we do not include the degradation cost as part of the storage cost. With degradation, the total cost of deploying the storage will be higher. One widely used model in the literature for the degradation cost is a linear model \cite{hao2}\cite{datasto}. We denote the unit cost of charge and discharge as $c_i^o$. Thus, the expected degradation cost $C_i^o$ (in each hour) is \begin{align} &C_i^o=\mathbb{ E }_{m,t}[c_i^o CD_i^{m,t+}+c_i^o CD_i^{m,t-}], \label{eq:degradation} \end{align} which can be calculated based on the historical data of $X_i^{m,t}$. Therefore, we can simply add \eqref{eq:degradation} to the original storage cost. We calculate the total storage cost as $ C_i'=C_i+C_i^o$, which includes both the investment cost and the degradation cost. \subsubsection{Charge and discharge efficiency} A lower charge and discharge efficiency increases the required storage capacity and thus the total storage cost. Our goal is to characterize a minimum storage capacity such that the energy level $e_i^{m,t}$ exceeds the storage capacity with a probability no greater than $\alpha$. As shown in \eqref{eq:cd}, the charge and discharge efficiency ($\eta_i^c,~ \eta_i^d$) affects the energy level $e_i^{m,t}$. Compared with the perfect storage model with $\eta_i^c=\eta_i^d=100\%$, the difference in the imperfect storage model is that $\eta_i^c<100\%$ and $\eta_i^d<100\%$. With the charge and discharge efficiency, we modify \eqref{eq:pl} and \eqref{eq:pu} in Section \ref{appendix:stotage}.A into the following.
\begin{align} &\mathbb{ E }_m\big[\max_{t'\in\mathcal{T}}\text{Pr}(\sum_{t=1}^{t'} \eta_i^c CD_i^{m,t+}- CD_i^{m,t-}/\eta_i^d+S^l_i<0)\big]\leq \alpha,\label{eq:pla}\\ &\mathbb{ E }_m\big[\max_{t'\in\mathcal{T}} \text{Pr}(\sum_{t=1}^{t'} \eta_i^c CD_i^{m,t+}- CD_i^{m,t-}/\eta_i^d+S^l_i> S_i)\big]\leq \alpha.\label{eq:pua} \end{align} Similarly, we can follow Algorithm \ref{algorithm:sapacity} in Section \ref{appendix:stotage}.A to compute $S_i$ given the probability threshold $\alpha$. Using Algorithm 1, which computes the storage capacity, we show how the charge/discharge efficiency impacts the storage capacity in Figure \ref{fig:eff}. The blue curve shows the case where the probability that the energy level exceeds the capacity is smaller than 5\%, and the red curve shows the case where the probability that the energy level exceeds the capacity is smaller than 10\%. We see that as the efficiency decreases, the required storage capacity increases (which further increases the storage investment cost). \begin{figure}[ht] \centering \includegraphics[width=3.4in]{./figure/eff_capacityp} \vspace{-2mm} \caption{\small Storage capacity with charge/discharge efficiency.} \label{fig:eff} \end{figure} In summary, compared with the case of perfect storage, a lower charge/discharge efficiency together with the degradation cost will increase the total storage cost of a supplier. In Section VI, we present some analytical results on the storage cost's impact on the storage-investment equilibrium. In Section VIII, we also show the simulation results of the impact of the storage cost on the suppliers' profits. These discussions can capture the impact of imperfect storage. \vspace{5mm} \section{Appendix: Simulations}\label{appendix:sim} We will first show the details of how we approximate the continuous CDF for the renewable-generation distribution using historical data. Then, we show a simulation result for two heterogeneous suppliers.
\subsection{Empirical distribution of renewable generations} We use the historical data of solar energy in Hong Kong from 1993 to 2012 \cite{hkob} to approximate the continuous CDF of the suppliers' renewable generations. Specifically, we cluster the renewable generations at hour $t$ of all days into $M=12$ types (months) considering the seasonal effect. We use daily data (from 1993 to 2012) of renewable energy in month $m$ at hour $t$ to approximate the distribution of renewable generation at hour $t$ of month $m$. Based on the discrete data, we first use an \emph{empirical cumulative distribution function} (ECDF) to model the renewable power distribution.\footnote{Given a sample of real-world data $X_1,X_2,\ldots,X_m$, the standard ECDF $\widehat { F } ( x ):~\mathbb{R} \rightarrow [0,1]$ is defined as $\widehat { F } ( x ) = \frac { 1 } { m } \sum _ { i = 1 } ^ { m } I \left( X _ { i } \leq x \right)$, where $I(\cdot)$ is the indicator function \cite{ecdf}.} Note that our model is built on the continuous CDF of the suppliers' renewable generations. Thus, we further use linear interpolation to construct the continuous ECDF from the stepwise ECDF \cite{simulation}. We illustrate the ECDF and linearly-interpolated ECDF in Figure \ref{fig:subfig:ecdf_ill}, where the stepwise blue solid curve represents the ECDF and the red dotted curve represents the linearly-interpolated ECDF. For the illustration of the renewable generation distribution, we show the ECDF and linearly-interpolated ECDF of hour $t=9$ of month $m=5$ (May) in Figure \ref{fig:subfig:ecdf}. Through the linearly-interpolated ECDF $F_i$, we can also compute the value $F_i^{-1}(\cdot)$ efficiently.
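A minimal Python sketch of the linearly-interpolated ECDF and its inverse follows; the helper name is ours, and np.interp stands in for the interpolation routine of \cite{simulation}.

```python
import numpy as np

def interp_ecdf(samples):
    """Build the linearly-interpolated ECDF F and its inverse F^{-1}
    from a sample of historical generations."""
    xs = np.sort(np.asarray(samples, dtype=float))
    n = len(xs)
    ps = np.arange(1, n + 1) / n   # ECDF value i/n at the i-th order statistic
    F = lambda x: np.interp(x, xs, ps, left=0.0, right=1.0)
    F_inv = lambda p: np.interp(p, ps, xs)   # swap the axes to invert
    return F, F_inv

F, F_inv = interp_ecdf([2.0, 1.0, 4.0, 3.0])   # toy sample (assumption)
```

The interpolated $F_i$ rises linearly between the order statistics, and $F_i^{-1}(\cdot)$ is obtained by interpolating with the axes swapped, which is what makes evaluating $F_i^{-1}$ efficient.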
\begin{figure}[ht] \centering \subfigure[]{ \label{fig:subfig:ecdf_ill} \raisebox{-4mm}{\includegraphics[width=2.2in]{./figure/ecdf_ill}}} \hspace{-3mm} \subfigure[]{ \label{fig:subfig:ecdf} \raisebox{-4mm}{\includegraphics[width=2.2in]{./figure/ecdf}}} \vspace{-4mm} \caption{(a) Illustration of ECDF and linearly-interpolated ECDF; (b) ECDF and linearly-interpolated ECDF at hour 9 of May. } \vspace{-3mm} \label{fig:ecdf} \vspace{-2mm} \end{figure} \subsection{Simulations of two heterogeneous suppliers} We simulate an example with two heterogeneous suppliers. Note that we can prove that a pure Nash equilibrium of storage investment will always exist in the homogeneous case (with the same storage cost, the same renewable energy capacity and the same renewable energy distribution). However, for the general heterogeneous case, we cannot theoretically prove that the pure Nash equilibrium always exists. In our following example of heterogeneous suppliers, the pure Nash equilibrium of storage investment still exists. Specifically, we consider that supplier 2's renewable generation capacity is twice as much as the capacity of supplier 1, where both suppliers have the same distribution of renewable energy. For comparison, we consider the homogeneous case as in the simulation of the main text where each supplier's renewable generation capacity is equal to supplier 1's capacity of the heterogeneous case. In the following, we first assume that the storage investment cost is the same across the two suppliers, and study the storage-investment equilibrium with respect to the storage cost and demand in the homogeneous (capacity) case and heterogeneous (capacity) case, respectively. Then, we allow the storage investment cost to also differ across the two suppliers in the heterogeneous case, and study the storage-investment equilibrium with respect to the two suppliers' different storage costs. 
We first consider the case where the two suppliers bear the same investment cost of storage, so as to focus on showing the impact of different capacities of renewables.\footnote{Note that the two suppliers have different storage capacities due to the different capacities of renewables. We choose different unit costs of storage capacity and let the two suppliers have the same storage investment cost.} Figure \ref{fig:subfig:cd1} shows the equilibrium split in terms of demand and storage cost under the homogeneous case. Note that this figure has been shown as Figure \ref{fig:subfig:lam1} of the main text. Figure \ref{fig:subfig:cd2} shows the equilibrium split in terms of demand and storage cost under the heterogeneous case. \begin{itemize} \item In Figure \ref{fig:subfig:cd1}, in Region I, both-investing-storage is one equilibrium; in Region III, neither-investing-storage is one equilibrium; in Region II, one supplier investing in storage and the other not investing is an equilibrium. \item In Figure \ref{fig:subfig:cd2}, in the solid-grid region, both-investing-storage is one equilibrium; in the dash-grid region, neither-investing-storage is one equilibrium; in the region bounded by the red curve, supplier 1 does not invest in storage while supplier 2 should invest in storage; and in the region bounded by the blue curve, supplier 1 invests in storage while supplier 2 does not invest in storage.
\end{itemize} \begin{figure}[t] \centering \subfigure[]{ \label{fig:subfig:cd1} \raisebox{-2mm}{\includegraphics[width=2.52in]{./figure/split_lamb2p}}} \hspace{-1mm} \subfigure[]{ \label{fig:subfig:cd2} \raisebox{-2mm}{\includegraphics[width=2.52in]{./figure/split_asy_cost_demandp}}} \vspace{-5mm} \caption{(a) Equilibrium split in the homogeneous case; (b) Equilibrium split in the heterogeneous case.} \vspace{-2mm} \label{fig:heter} \end{figure} \begin{figure}[t] \centering \includegraphics[width=2.7in]{./figure/split_asy_costp} \vspace{-2mm} \caption{\small Equilibrium split with storage cost.} \label{fig:cost} \end{figure} Generally, in the heterogeneous case, we see that the region where supplier 2 should invest in storage is larger than the region of supplier 1. The intuition is that supplier 2 has a larger capacity of renewables, which gives her an advantage in the competition. When both suppliers face the same high storage cost greater than 1000 HKD as in Figure \ref{fig:subfig:cd2}, supplier 1 will not invest in storage at the equilibrium for any demand $D$, but supplier 2 may still invest in storage when the demand is high. Also, the regions of only supplier 1 investing in storage and of only supplier 2 investing in storage can overlap in the heterogeneous case, which means that only supplier 1 investing in storage and only supplier 2 investing in storage are both equilibria. Next, we consider the case where the two heterogeneous suppliers bear different storage investment costs. We choose a certain demand ($D=4$ MW) and show the equilibrium split with respect to the storage costs of the two suppliers in Figure \ref{fig:cost}. In Figure \ref{fig:cost}, if the storage costs of supplier 1 and supplier 2 lie in Region A, neither supplier will invest in storage. In Region B, both suppliers will invest in storage. In Region D, only supplier 2 invests in storage and supplier 1 will not invest in storage.
In Region C, only supplier 1 invests in storage and supplier 2 will not invest in storage. However, in Region E, only supplier 1 investing in storage and only supplier 2 investing in storage are both equilibria. \vspace{5mm} \section{Appendix: Proofs of Stage \uppercase\expandafter{\romannumeral3}}\label{appendix:proofstage3} To prove Proposition \ref{prop:stage3}, we will discuss the following two cases and analyze the objective function of Problem \eqref{eq:consumer} based on linear functions. For notation simplicity, we omit the superscript $m,t$ in the corresponding variables and parameters. \begin{itemize} \item If $p_1=p_2=p$, we rewrite the objective function \eqref{sg2:ob} as \begin{align} (P_g-p)(x_1+x_2). \label{sg2:obn} \end{align} Since $P_g-p>0$, the optimal value is achieved at the maximum value of $x_1+x_2$, i.e., $\min(D, y_1+y_2)$ according to the constraints \eqref{sg2:c1} and \eqref{sg2:c2}. \item If $p_1\neq p_2$, we assume $p_1<p_2$ without loss of generality. We rewrite the objective function \eqref{sg2:ob} as \begin{align} (P_g-p_2)(x_1+x_2)+(p_2-p_1)x_1. \label{sg2:obn2} \end{align} Since $P_g-p_2>0$ and $p_2-p_1>0$, the optimal value is achieved at the maximum value of $x_1+x_2$ together with the maximum value of $x_1$ (the cheaper supplier is served first) as follows: \begin{align} &x_1^*+x_2^*=\min(D, y_1+y_2),\\ &x_1^*=\min(y_1,D). \end{align} Then, we obtain the optimal solution $x_2^*=\min(D, y_1+y_2)-\min(y_1,D)$, which is equivalent to $x_2^*=\min(D-\min(y_1,D),y_2)$. \end{itemize} Combining the above two cases, we have Proposition \ref{prop:stage3} proved. \qed \textit{Remark 1}: Proposition \ref{prop:stage3} can be easily extended to the oligopoly case with more than 2 suppliers. \textit{Remark 2}: Given the other supplier $-i$'s bidding price $p_{-i}$ and bidding quantity $y_{-i}$, supplier $i$'s payoff function generally is not continuous in the price $p_i$ at $p_i=p_{-i}$ due to the discontinuous change of the optimal allocated quantity $x_i^*$.
That is, given the other supplier $-i$'s decisions, supplier $i$'s payoff function is generally discontinuous. \section{Appendix: Proofs of Stage \uppercase\expandafter{\romannumeral2}}\label{appendix:proofstage2} \subsection{Proof of Theorem \ref{thm:quantity}} To prove Theorem \ref{thm:quantity}, the key step is to show that, given the price $p_i$, the revenue function $\pi_i^R\hspace{-1mm}\left(p_i, x_i^*(\bm{p},\bm{y}),\bm{\varphi}\right)$ of supplier $i$ with respect to $x_i^*(\bm{p},\bm{y})$ is increasing on the interval $(0,y_i^*)$ and decreasing on the interval $ (y_i^*,+\infty)$. Then, combined with Proposition 1, we can prove that $y_i^*$ is the weakly dominant strategy for the bidding quantity. We discuss the weakly dominant strategy for supplier $i$ with $\varphi_{i}=1$ and $\varphi_{i}=0$, respectively. \subsubsection{Case of $\varphi_{i}=1$} We will prove that the weakly dominant strategy of bidding quantity for the with-storage supplier $i$ (i.e., $\varphi_i=1$) is $y_i^*\left(p_i, \varphi_i\right)=\mathbb{ E }[X_i]$. Given any price $p_i\leq \bar{p}<\lambda$, the function $\pi_i^R\hspace{-1mm}\left(p_i, x_i^*(\bm{p},\bm{y}),\bm{\varphi}\right)$ with respect to $x_i^*(\bm{p},\bm{y})$ is linearly increasing on the interval $ (0,\mathbb{ E }[X_i])$ and linearly decreasing on the interval $ (\mathbb{ E }[X_i],+\infty)$. Thus, given any price $p_i$, we always have \begin{align} \pi_i^R\hspace{-1mm}\left(p_i, x_i^*(\bm{p},\bm{y}),\bm{\varphi}\right)\leq \pi_i^R\hspace{-1mm}\left(p_i, \mathbb{ E }[X_i],\bm{\varphi} \right).\label{eq:proof_quantity 1} \end{align} Then, we discuss a total of three cases to show that the with-storage supplier's revenue cannot be better off if he chooses a strategy $y_i$ other than $y_i^*\left(p_i, \varphi_i\right)=\mathbb{ E }[X_i]$. For notation simplicity, we use $y_i^*$ to represent $y_i^*\left(p_i, \varphi_i\right)$ in the later discussion.
(a) If $y_i<y_i^*=\mathbb{ E }[X_i]$, according to Proposition 1, we have \begin{align} x_i^*(\bm{p},({y}_i,y_{-i})) \leq x_i^*(\bm{p},({y}_i^*,y_{-i}))\leq \mathbb{ E }[X_i],~ \text{for~any~} y_{-i}, \end{align} which (since $\pi_i^R$ is increasing on $(0,\mathbb{ E }[X_i])$) implies \begin{align} \pi_i^R\left(p_i, x_i^*(\bm{p},({y}_i,y_{-i})),\bm{\varphi}\right)\leq \pi_i^R\left(p_i, x_i^*(\bm{p},({y}_i^*,y_{-i})),\bm{\varphi}\right). \end{align} (b) If $y_i >y_i^*=\mathbb{ E }[X_i]$ and $x_i^*(\bm{p},({y}_i,y_{-i}))>\mathbb{ E }[X_i]$, according to Proposition 1, we have $$x_i^*(\bm{p},({y}_i^*,y_{-i}))=\mathbb{ E }[X_i],$$ which (according to \eqref{eq:proof_quantity 1}) implies $$\pi_i^R(p_i, x_i^*(\bm{p},({y}_i,y_{-i})),\bm{\varphi}) \leq \pi_i^R(p_i,\mathbb{ E }[X_i],\bm{\varphi})=\pi_i^R(p_i, x_i^*(\bm{p},({y}_i^*,y_{-i})),\bm{\varphi}).$$ (c) If $y_i>y_i^*=\mathbb{ E }[X_i]$ and $x_i^*(\bm{p},({y}_i,y_{-i}))\leq \mathbb{ E }[X_i]$, according to Proposition 1, we have $$x_i^*(\bm{p},({y}_i,y_{-i}))= x_i^*(\bm{p},({y}_i^*,y_{-i})),$$ which implies $$\pi_i^R(p_i, x_i^*(\bm{p},({y}_i,y_{-i})),\bm{\varphi})= \pi_i^R(p_i, x_i^*(\bm{p},({y}_i^*,y_{-i})),\bm{\varphi}).$$ Combining the above three cases (a)-(c), we complete the proof that $y_i^*=\mathbb{ E }[X_i]$ if $\varphi_i=1$. \subsubsection{Case of $\varphi_i=0$} We prove that the weakly dominant strategy of bidding quantity for the without-storage supplier $i$ (i.e., $\varphi_i=0$) is $y_i^*=F_i^{-1}(\frac{p_i}{\lambda})$. Taking the derivative of $\pi_i^R\hspace{-1mm}\left(p_i, x_i^*(\bm{p},\bm{y}),\bm{\varphi}\right)$ with respect to $x_i^*(\bm{p},\bm{y})$, for any given $p_i>0$ it is easy to show that the function $\pi_i^R\left(p_i, x_i^*(\bm{p},\bm{y}),\bm{\varphi}\right)$ is increasing on the interval $ (0,F_i^{-1}(\frac{p_i}{\lambda}))$ and decreasing on the interval $ (F_i^{-1}(\frac{p_i}{\lambda}),+\infty)$.
Thus, given any price $p_i$, we always have \begin{align} \pi_i^R\hspace{-1mm}\left(p_i, x_i^*(\bm{p},\bm{y}),\bm{\varphi}\right)\leq \pi_i^R\hspace{-1mm}\left(p_i, F_i^{-1}(\frac{p_i}{\lambda}),\bm{\varphi} \right).\label{eq:proof_quantity 0} \end{align} Then, we can follow the proof step for $y_i^*$ in the case of $\varphi_{i}=1$ and prove that $y_i^*=F_i^{-1}(\frac{p_i}{\lambda})$ for supplier $i$ with $\varphi_i=0$. \qed \subsection{Proof of Proposition \ref{prop:pureprice}} We verify the pure price equilibrium according to Definition \ref{def:pureprice} that the supplier cannot be better off if he deviates unilaterally. Towards this end, note that for supplier $i$ with or without storage, the revenue function $\pi_i^R\left(p_i, x_i^*(\bm{p},\bm{y}),\bm{\varphi}\right)$ is strictly increasing with respect to both the price $p_i$ and the selling quantity $x_i^*(\bm{p},\bm{y})$ that is in the range $[0, y_i^*(p_i,\varphi_i)]$ (without considering the other supplier's coupled decisions). We will discuss the three types of subgames respectively. \subsubsection{The type $\text{S}_0\text{S}_0$ (i.e., $\sum_i\varphi_i=0$)} We first prove that when $D \geq \sum_i y_i^*(\bar{p},{\varphi}_i)$, $p_1=p_2=\bar{p}$ is a pure price equilibrium and show that this pure price equilibrium is unique. Then, we show that when $D < \sum_i y_i^*(\bar{p},{\varphi}_i)$, there exists no pure price equilibrium. (a) The case of $D \geq \sum_i y_i^*(\bar{p},{\varphi}_i)$. We first prove that when $D \geq \sum_i y_i^*(\bar{p},{\varphi}_i)$, $p_1=p_2=\bar{p}$ is a pure price equilibrium. 
When $p_1=p_2=\bar{p}$, according to Proposition 1, the total selling energy quantities of supplier 1 and supplier 2 satisfy \begin{align} \sum_i x_i^*\left((\bar{p},\bar{p}),({y}_1^*(\bar{p},{\varphi}_1), {y}_2^*(\bar{p},{\varphi}_2)) \right)&= \min(D, {y}_1^*(\bar{p},{\varphi}_1)+ {y}_2^*(\bar{p},{\varphi}_2))\\&={y}_1^*(\bar{p},{\varphi}_1)+ {y}_2^*(\bar{p},{\varphi}_2).\label{eq:proof_pureprice_1} \end{align} Since $x_i^*(\bm{p},\bm{y})\leq y_i^*(p_i,\varphi_i)$ always holds for any $i=1,2$, based on \eqref{eq:proof_pureprice_1}, we have \begin{align} &x_i\triangleq x_i^*((\bar{p},\bar{p}),({y}_1^*(\bar{p},{\varphi}_1), {y}_2^*(\bar{p},{\varphi}_2)) )= {y}_i^*(\bar{p},{\varphi}_i). \end{align} We will show that both suppliers cannot be better off if they deviate from such a bidding strategy. Without loss of generality, if supplier $1$ bids a price $p_1'< \bar{p}$ unilaterally, according to Proposition 1, we have \begin{align} x_1'\triangleq{x}_1^*(({p}_1',\bar{p}),({y}_1^*(p_1',{\varphi}_1), {y}_2^*(\bar{p},{\varphi}_2)))&= \min \left\{D, {y}_1^*(p_1',{\varphi}_1)\right\}\\&={y}_1^*(p_1',{\varphi}_1)\\&<x_1. \end{align} Since the revenue function $\pi_i^R\left(p_i, x_i^*(\bm{p},\bm{y}),\bm{\varphi}\right)$ is strictly increasing with respect to the price $p_i$ and the selling quantity $x_i^*(\bm{p},\bm{y})$ in the range $[0, y_i^*(p_i,\varphi_i)]$, we have \begin{align} \pi_1^R\left(p_1', x_1',\bm{\varphi}\right)<\pi_1^R\left(\bar{p}, x_1,\bm{\varphi}\right), \end{align} which shows that supplier 1's revenue decreases if he deviates from the price $\bar{p}$. This proves that $p_1=p_2=\bar{p}$ is a pure price equilibrium. Next, we show that this equilibrium is unique. Without loss of generality, suppose that supplier 1 bids a price $p_1'<\bar{p}$ while the other supplier bids a price $p_{2}'\leq \bar{p}$.
Since $D \geq \sum_i y_i^*(\bar{p},{\varphi}_i)$, according to Proposition 1, each supplier's maximum bidding quantity will be sold out and we have \begin{align} x_1^*(\bm{p}',\bm{y}^*(\bm{p}',\bm{\varphi}))= y_1^*(p_1',\varphi_1)\leq y_1^*(\bar{p},\varphi_1). \end{align} Therefore, supplier 1 can always increase his price $p_1'$ to $\bar{p}$, which will increase his revenue due to the increased price and the non-decreasing selling quantity. Thus, any price pair $(p_1,p_2) \neq (\bar{p},\bar{p})$ cannot be an equilibrium. (b) {Case of $0<D< \sum_i y_i^*(\bar{p},{\varphi}_i)$ }. {We will assume that both suppliers bid pure prices and will discuss a total of three cases in the following to show that no pure price strategy can be an equilibrium. First, suppose the suppliers' bidding prices are not equal, and assume $p_i<p_{-i}$ without loss of generality. The lower-price supplier can always increase the price by a small $\varepsilon>0$ such that $p_i'=p_i+\varepsilon<p_{-i}$. Then, the bidding price satisfies $p_i'>p_i$, and the selling quantity at $p_i'$, denoted as ${x}_i'$, satisfies ${x}_i'= \min \left\{D, y_i^*(p_i+\varepsilon,\varphi_i)\right\}\geq x_i=\min \left\{D, y_i^*(p_i,\varphi_i)\right\}$. In this case, we denote the revenue at the original price $p_i$ as $\pi_i$, and the revenue at the price $p_i'$ as $\pi_i'$. We have $\pi_i'>\pi_i$ since $p_i'>p_i$ and ${x}_i'\geq x_i$. Thus, unequal bidding prices cannot form an equilibrium. Second, suppose the two suppliers bid the same positive price, i.e., $p_1=p_2=p>0$. Based on Proposition 1, the selling quantities of the two suppliers satisfy the following condition: \begin{align} \sum_i x_i^*(\bm{p},\bm{y}^*(\bm{p},\bm{\varphi}) )= \min(D, y_1^*({p},\varphi_1)+y_2^*({p},\varphi_2)). \end{align} For simplicity, we denote the original selling quantities of supplier 1 and supplier 2 as $x_1$ and $x_2$, respectively, when $p_1=p_2=p>0$. Then we discuss two cases (i) and (ii).
\begin{itemize} \item (i) When $D< y_1^*({p},\varphi_1)+y_2^*({p},\varphi_2)$, we have \begin{align} x_1+x_2= D.\label{eq:proof_ndev} \end{align} In this case, if supplier 1 reduces the price by a small $\varepsilon_1>0$ to a price $p_1'=p-\varepsilon_1$ unilaterally, we have \begin{align} &x_1'\triangleq {x}_1^*(({p}-\varepsilon_1,{p}),(y_1^*({p-\varepsilon_1},\varphi_1),y_2^*({p},\varphi_2)))= \min \left\{D, y_1^*({p-\varepsilon_1},\varphi_1)\right\}. \end{align} If supplier $2$ reduces the price by a small $\varepsilon_2>0$ to a price $p_2'=p-\varepsilon_2$ unilaterally, we have \begin{align} x_2'\triangleq{x}_2^*(({p},{p}-\varepsilon_2),(y_1^*({p},\varphi_1),y_2^*({p-\varepsilon_2},\varphi_2)))=\min \left\{D, y_2^*(p-\varepsilon_2,\varphi_2)\right\}. \end{align} We choose small $\varepsilon_1$ and $\varepsilon_2$ such that $D< y_1^*({p}-\varepsilon_1,\varphi_1)+y_2^*({p}-\varepsilon_2,\varphi_2)$ holds. Then, we have \begin{align} x_1'+x_2'=\min \left\{D, y_1^*({p-\varepsilon_1},\varphi_1)\right\}+\min \left\{D, y_2^*(p-\varepsilon_2,\varphi_2)\right\}.\label{eq:proof_dev1} \end{align} Combining \eqref{eq:proof_ndev} and \eqref{eq:proof_dev1}, we see that at least one supplier $i$ can always reduce the price by a small $\varepsilon_i>0$ unilaterally such that the selling quantity increases by $$x_i'-x_i>\frac{1}{2}\min(D, y_1^*({p-\varepsilon_1},\varphi_1),y_2^*({p-\varepsilon_2},\varphi_2) ).$$ Since we can choose a sufficiently small $\varepsilon_i, \forall i=1,2$, the revenue $\pi_i$ will increase due to the increased selling quantity $x_i'-x_i$ (with an upward jump). \item (ii) When $ y_1^*({p},\varphi_1)+y_2^*({p},\varphi_2)\leq D< y_1^*(\bar{p},\varphi_1)+y_2^*(\bar{p},\varphi_2)$, we have \begin{align} x_1+x_2=y_1^*({p},\varphi_1)+y_2^*({p},\varphi_2).\label{eq:ndev2} \end{align} Both suppliers can sell out their bidding quantities completely as follows.
\begin{align} x_1=y_1^*({p},\varphi_1),~ x_2=y_2^*({p},\varphi_2).\label{eq:ndev3} \end{align} Note that $D-y_2^*({p},\varphi_2)\geq y_1^*({p},\varphi_1)=x_1$. Supplier $1$ can always increase his price $p$ to $p'=\bar{p}>p$ unilaterally, and $x_1'=\min(y_1^*({p'},\varphi_1), D-y_2^*({p},\varphi_2))$. Since we also have $y_1^*({p'},\varphi_1)\geq y_1^*({p},\varphi_1)=x_1$, supplier $1$'s obtained demand $x_1'$ at $p'$ will not decrease, i.e., $x_1'\geq x_1$. Thus, the revenue of supplier $1$ after increasing the price will also increase. \end{itemize} In summary, when the two suppliers bid the same positive price, one supplier can always deviate so as to obtain a higher revenue, which shows that equal positive bidding prices cannot form a pure price equilibrium. Third, suppose both suppliers bid the price of zero: $p_1=p_2=0$. In this case, both suppliers have zero revenues: $\pi_1^R=\pi_2^R=0$. Note that both without-storage suppliers will also bid the zero quantity $y_i^*(p_i,\varphi_i)=0$, as shown in Theorem \ref{thm:quantity}. Thus, any supplier $i$ can always set a positive price $p_i'>0$ to obtain a positive demand, since the other supplier bids zero quantity. This makes his revenue $\pi_i^{R'}>0$ after increasing the price. Therefore, the pure price strategy $p_1=p_2=0$ cannot be an equilibrium.} So far, for the {case of $0<D< \sum_i y_i^*(\bar{p},{\varphi}_i)$ }, we have discussed all three cases of pure price strategies, but none of them is an equilibrium. Thus, there exists no pure price equilibrium when $0<D< \sum_i y_i^*(\bar{p},{\varphi}_i)$. \subsubsection{The type $\text{S}_1\text{S}_0$ (i.e., $\sum_i\varphi_i=1$)} Following the same arguments as in the type $\text{S}_0\text{S}_0$, we can first prove that when $D \geq \sum_i y_i^*(\bar{p},{\varphi}_i)$, $p_1=p_2=\bar{p}$ is a pure price equilibrium, and show that this pure price equilibrium is unique.
Furthermore, we can show that when $D < \sum_i y_i^*(\bar{p},{\varphi}_i)$, there exists no pure price equilibrium. \subsubsection{The type $\text{S}_1\text{S}_1$ (i.e., $\sum_i\varphi_i=2$)} The results have been proved in \cite{capacityprice}. In conclusion, Proposition \ref{prop:pureprice} is proved. \subsection{Proof of Theorem \ref{thm:mscdf}} We prove Theorem \ref{thm:mscdf} based on Lemma \ref{lem:mix}, which has been shown in \cite{capacityprice}. However, even with Lemma \ref{lem:mix}, deriving the mixed price equilibrium in our model is not as straightforward as in \cite{capacityprice}. That is because, in \cite{capacityprice}, a supplier's bidding quantity is upper-bounded by his deterministic production quantity, while in our model, a without-storage supplier's bidding quantity is upper-bounded by a function of the price. This difference significantly increases the complexity of the analysis in our work. To prove Theorem \ref{thm:mscdf}, we will utilize a basic property of mixed-strategy equilibria, stated in Lemma \ref{lemma:mix2} below, which follows from \cite{mic}. In Lemma \ref{lemma:mix2}, we use $\pi_i^{RM}(\mu_i,\mu_{-i},\bm{\varphi})$ to denote the expected revenue of supplier $i$ at an arbitrary mixed price strategy $(\mu_1,\mu_2)$, defined as follows. \begin{align*} \pi_i^{RM}(\mu_i,\mu_{-i},\bm{\varphi})\hspace{-0.9mm}=\int_{{[0,\bar{p}]}^{2}}\hspace{-0.6mm}\pi_i^R (p_i, {x}_i^*((p_i, {p}_{-i}),\bm{y}^*(p_i, p_{-i})),\bm{\varphi}) d (\mu_i(p_i) \hspace{-0.6mm}\times\hspace{-0.6mm} {\mu}_{-i}({p}_{-i}) ) \end{align*} \begin{lemma}\label{lemma:mix2} $\pi_i^{RM}(p_i,\mu_{-i}^*,\boldsymbol{\varphi}) = \pi_i^{RE}(\boldsymbol{\varphi})$, for all $p_i\in[l,\bar{p}]$, where $\pi_i^{RE}$ is the equilibrium revenue \cite{mic}.
\end{lemma} Lemma \ref{lemma:mix2} shows that the equilibrium revenue $\pi_i^{RE}$ of supplier $i$ is equal to his expected revenue when he plays any pure strategy $p_i$ in the support, i.e., $p_i\in[l,\bar{p}]$, against the mixed strategy $\mu_{-i}^*$ of the other supplier at the equilibrium. Based on Lemma \ref{lem:mix} and Lemma \ref{lemma:mix2}, we will characterize the equilibrium revenue $\pi_i^{RE}$ as well as the CDF of the mixed price equilibrium $F_i^e(p)$ using the lower support $l$ over $p\in[l,\bar{p})$.\footnote{Note that $F_2^e(p)$ may not be continuous at $p=\bar{p}$, as indicated in Lemma \ref{lem:mix}. } We analyze the with-storage supplier (i.e., $\varphi_i=1$) and the without-storage supplier (i.e., $\varphi_i=0$) separately. \subsubsection{With-storage supplier $i$ (i.e., $\varphi_i=1$)} For supplier $i$, based on Lemma \ref{lemma:mix2}, the equilibrium revenue $\pi_i^{RE}$ can be characterized by the expected revenue when he plays any pure strategy $p_i \in [l,\bar{p})$ against the mixed strategy of supplier $-i$ (with CDF $F_{-i}^e$ and PDF $f_{-i}^e$) at the equilibrium, as follows: \begin{align} \pi_i^{RE}(\bm{\varphi})&=\pi_i^{RM}(p_i,\mu_{-i}^*,\bm{\varphi})\notag \\&=\underbrace{p_i\min(D,\mathbb{ E }[X_i])\cdot(1-F_{-i}^e(p_i))}_{p_i\leq p_{-i}} \notag\\&~~~~+ \underbrace{p_i\int_{l}^{p_i}\min \left(D-\min(y_{-i}^*(p_{-i},\varphi_{-i}),D),\mathbb{ E }[X_i]\right)\cdot f_{-i}^e(p_{-i})dp_{-i}}_{p_i> p_{-i}}\label{eq:par1o}. \end{align} Note that in \eqref{eq:par1o}, $D-\min(y_{-i}^*(p_{-i},\varphi_{-i}),D)\leq \mathbb{ E }[X_i]$ will always hold for any $ p_{-i}\in [l,\bar{p}]$, i.e., $D-\min(y_{-i}^*(l,\varphi_{-i}),D)\leq \mathbb{ E }[X_i]$, or \begin{align} D\leq \mathbb{ E }[X_i]+y_{-i}^*(l,\varphi_{-i}). \end{align} \noindent This helps us simplify the second part ``$p_i> p_{-i}$" in \eqref{eq:par1o}. We can prove this by contradiction as follows.
If $D-\min(y_{-i}^*(l,\varphi_{-i}),D)> \mathbb{ E }[X_i]$, there exists a small $\varepsilon>0$ such that $D-\min(y_{-i}^*(l+\varepsilon,\varphi_{-i}),D)> \mathbb{ E }[X_i]$ still holds. Based on \eqref{eq:par1o}, we have \begin{align} &\pi_i^{RE}=\pi_i^{RM}(l,\mu_{-i}^*,\bm{\varphi})={l\cdot \min(D,\mathbb{ E }[X_i])},\label {eq:pif2} \end{align} and for any $\varepsilon>0$, \begin{align} \pi_i^{RE}(\bm{\varphi})&=\pi_i^{RM}(l+\varepsilon,\mu_{-i}^*,\boldsymbol{\varphi})\notag \\&={(l+\varepsilon)\cdot \min(D,\mathbb{ E }[X_i]) (1-F_{-i}^e(l+\varepsilon))}+{(l+\varepsilon)\int_{l}^{l+\varepsilon} \mathbb{ E }[X_i]\cdot f_{-i}^e(p_{-i})dp_{-i}}\notag \\ &={(l+\varepsilon)\cdot \min(D,\mathbb{ E }[X_i]) (1-F_{-i}^e(l+\varepsilon))}+{(l+\varepsilon)\cdot \mathbb{ E }[X_i]\cdot F_{-i}^e(l+\varepsilon)}\notag \\ &\geq (l+\varepsilon)\cdot \min(D,\mathbb{ E }[X_i])\label{eq:pif1}. \end{align} Then, we can see that $\eqref {eq:pif2}$ and $\eqref {eq:pif1}$ contradict each other, and thus $D-\min(y_{-i}^*(p_{-i},\varphi_{-i}),D)\leq \mathbb{ E }[X_i]$ will always hold for $p_{-i}\in [l,\bar{p}]$, which enables us to simplify \eqref{eq:par1o}. Since $\pi_i^{RE}(\bm{\varphi})=\pi_i^{RM}(p_i,\mu_{-i}^*,\bm{\varphi})$ is constant over $p_i\in [l,\bar{p})$, the derivative of $\pi_i^{RM}(p_i,\mu_{-i}^*,\bm{\varphi})$ with respect to $p_i$ is zero over $p_i\in [l,\bar{p})$, i.e., \begin{align} \frac{\partial \pi_i^{RM}(p_i,\mu_{-i}^*,\bm{\varphi})}{\partial p_i}=&{\min(D,\mathbb{ E }[X_i])(1-F_{-i}^e(p_i))}+p_i\min(D,\mathbb{ E }[X_i])(-f_{-i}^e(p_i))\notag\\ +&{\int_{l}^{p_i} \left(D-\min(y_{-i}^*(p_{-i},\varphi_{-i}),D)\right)f_{-i}^e(p_{-i})dp_{-i}} +{{p_i} \left(D-\min(y_{-i}^*(p_i,\varphi_{-i}),D)\right)f_{-i}^e(p_i)} \notag\\ =&0. \label{eq:par1} \end{align} Combining \eqref{eq:par1} with \eqref{eq:par1o}, we have the PDF of the mixed price strategy at the equilibrium for the without-storage supplier $-i$ as follows.
\begin{align} &f_{-i}^e(p)= \frac{\pi_i^{RE}(\bm{\varphi})}{p^2\cdot \min\{y_{-i}^*(p,\varphi_{-i}),D\}-p^2\cdot [D-\mathbb{E}{[X_i]}]^+},\label{eq: mf2} \end{align} which is characterized by the equilibrium revenue $\pi_i^{RE}$ of supplier $i$. \subsubsection{Without-storage supplier $i$ (i.e., $\varphi_i=0$)} For supplier $i$ without storage, similarly, based on Lemma \ref{lemma:mix2}, the equilibrium revenue $\pi_i^{RE}(\bm{\varphi})$ can be characterized by the expected revenue when he plays any pure strategy $p_i \in [l,\bar{p})$ against the mixed strategy of supplier ${-i}$ (with CDF $F_{-i}^e$) at the equilibrium, as follows: \begin{align} \pi_i^{RE}(\bm{\varphi})\hspace{-0.5mm}=&\hspace{-0.5mm}\pi_i^{RM}(p_i,\mu_{-i}^*,\bm{\varphi})\hspace{-0.5mm}\notag\\=&\hspace{-0.5mm}\underbrace{\pi_i^R\left(p_i,\min \left(D, y_i^*(p_i,\varphi_i)\right),\bm{\varphi}\right)\cdot (1-F_{-i}^e(p_i))}_{p_i\leq p_{-i}} \notag\\&\hspace{-4mm}+ \underbrace{\pi_i^R\left (p_i, \min(D-\min(\mathbb{ E }[X_{-i}],D),y_i^*(p_i,\varphi_i)),\bm{\varphi}\right) \cdot F_{-i}^e(p_i)}_{p_i>p_{-i}}. \label{eq: mpi2} \end{align} Similarly, we have that $D-\min(\mathbb{ E }[X_{-i}],D)\leq y_i^*(p_i,\varphi_i)$ always holds for any $ p_i\in [l,\bar{p}]$. Then, according to \eqref{eq: mpi2}, we have the CDF of the mixed price strategy at the equilibrium for the with-storage supplier $-i$ as follows. \begin{align} &F_{-i}^e(p)= \frac{ \pi_i^R\left(p,\min\{y_i^*(p,\varphi_i),D\},\bm{\varphi}\right)-\pi_i^{RE}(\bm{\varphi})}{\pi_i^R\left(p,\min\{y_i^*(p,\varphi_i),D\},\bm{\varphi}\right)-\pi_i^R\left(p, [D-\mathbb{ E }[X_{-i}]]^+,\bm{\varphi}\right)}\label{eq: mf1}, \end{align} which is characterized by the equilibrium revenue $\pi_{i}^{RE}$ of supplier $i$.
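The indifference condition behind this characterization can be checked numerically. The sketch below is a toy example with assumed parameters ($\lambda=2$, $\bar{p}=1$, $\bar{X}_{-i}=1$, $D=0.3$, $\mathbb{E}[X_i]=0.5$, and a uniform $X_{-i}$ so that $y_{-i}^*(p,\varphi_{-i})=p\bar{X}_{-i}/\lambda$); it is not part of the proof. It solves $F_{-i}^e(\bar{p})=1$ for the lower support $l$ by bisection, compares it with the closed form derived later in this appendix for this regime, and verifies that the expected revenue $\pi_i^{RM}(p_i,\mu_{-i}^*,\bm{\varphi})$ stays constant at $l\cdot\min(D,\mathbb{E}[X_i])$ over the support:

```python
import math

# Assumed toy parameters (illustration only): lam > pbar and D < EXi.
lam, pbar, Xbar, D, EXi = 2.0, 1.0, 1.0, 0.3, 0.5

def y_star(p):                      # without-storage bid under uniform X_{-i}
    return p * Xbar / lam

def integrate(g, a, b, n=4000):     # composite midpoint rule
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

def f_e(p, l):                      # candidate equilibrium PDF, eq. (mf2)
    piRE = l * min(D, EXi)
    return piRE / (p**2 * (min(y_star(p), D) - max(D - EXi, 0.0)))

def F_e(p, l):                      # CDF of supplier -i's mixed strategy
    return integrate(lambda q: f_e(q, l), l, p)

# Bisection on the lower support: F_e(pbar; l) is decreasing in l.
lo, hi = 0.01, pbar
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if F_e(pbar, mid) > 1 else (lo, mid)
l = (lo + hi) / 2
piRE = l * min(D, EXi)

def pi_RM(p, l):                    # expected revenue of supplier i, eq. (par1o)
    tail = p * min(D, EXi) * (1 - F_e(p, l))
    undercut = p * integrate(lambda q: (D - min(y_star(q), D)) * f_e(q, l), l, p)
    return tail + undercut

# Closed form for l in this regime (D < y_star(pbar), uniform X_{-i}):
l_closed = D * lam / (Xbar * (1 + math.sqrt(2 * D * lam / (pbar * Xbar))))
assert abs(l - l_closed) < 1e-3
for p in (l, 0.5, 0.7, 0.9, pbar):  # indifference over the whole support
    assert abs(pi_RM(p, l) - piRE) < 1e-3
```

With these parameters the bisection and the closed form agree, and the candidate PDF indeed makes the with-storage supplier indifferent among all pure prices in the support.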
In conclusion, if $\varphi_i=1$, we have \begin{align} &F_i^e(p)= \frac{ \pi_{-i}^R\left(p,\min\{y_{-i}^*(p,\varphi_{-i}),D\},\bm{\varphi}\right)-\pi_{-i}^{RE}(\bm{\varphi})}{\pi_{-i}^R(p,\min\{y_{-i}^*(p,\varphi_{-i}),D\},\bm{\varphi})-\pi_{-i}^R(p, (D-\mathbb{ E }[X_i])^+,\bm{\varphi})}. \end{align} If $\varphi_i=0$, we have \begin{align} &F_i^e(p)=\int_l^{p} \frac{\pi_{-i}^{RE}(\bm{\varphi})}{q^2\cdot \min\{y_i^*(q,\varphi_{i}),D\}-q^2\cdot (D-\mathbb{ E }[X_{-i}])^+}dq, \end{align} for any $l \leq p< \bar{p}$. \qed \subsection{Proof of Proposition \ref{thm:mscomp}} To prove Proposition \ref{thm:mscomp}, we first show that $F_i^e(\bar{p}^-\mid l_i^\dagger)$ is always decreasing in $l_i^\dagger,\forall i$, based on which we prove Proposition \ref{thm:mscomp} (1) by contradiction. Proposition \ref{thm:mscomp} (2) then follows directly from Lemma \ref{lem:mix} (iii). We now prove that $F_i^e(\bar{p}^-\mid l_i^\dagger)$ is always decreasing in $l_i^\dagger$, for both $\varphi_i=1$ and $\varphi_i=0$. \subsubsection{With-storage supplier $i$ (i.e., $\varphi_i=1$)} For the with-storage supplier $i$, according to \eqref{F1}, we have \begin{align} &F_i^e(\bar{p}^-\mid l_i^\dagger)= \frac{ \pi_{-i}^R\left(\bar{p},\min\{y_{-i}^*(\bar{p},\varphi_{-i}),D\},\bm{\varphi}\right)-\pi_{-i}^{RE}(\bm{\varphi})}{\pi_{-i}^R(\bar{p},\min\{y_{-i}^*(\bar{p},\varphi_{-i}),D\},\bm{\varphi})-\pi_{-i}^R(\bar{p}, (D-\mathbb{ E }[X_i])^+,\bm{\varphi})}.\label{FF1} \end{align} Note that the equilibrium revenue function $\pi_{-i}^{RE}(\bm{\varphi})$ (shown in Lemma \ref{lem:mix} (iii)) is increasing in the lower support $l_i^\dagger$, and thus $F_i^e(\bar{p}^-\mid l_i^\dagger)$ is decreasing in $l_i^\dagger$.
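The same monotonicity can be illustrated numerically for the integral form of the terminal CDF mass of a without-storage supplier. This is a toy check under assumed parameters with a uniform renewable distribution (so that $y_i^*(p,\varphi_i)=p\bar{X}_i/\lambda$), not part of the proof:

```python
# Assumed toy parameters (illustration only): lam > pbar and D < EX.
lam, pbar, Xbar, D, EX = 2.0, 1.0, 1.0, 0.3, 0.5

def F_end(l, n=20000):
    # F(pbar^- | l) = int_l^pbar  l*min(D,EX) / (p^2*(min(p*Xbar/lam, D) - max(D-EX,0))) dp,
    # evaluated with a composite midpoint rule.
    h = (pbar - l) / n
    total = 0.0
    for k in range(n):
        p = l + (k + 0.5) * h
        total += l * min(D, EX) / (p**2 * (min(p * Xbar / lam, D) - max(D - EX, 0.0)))
    return total * h

# Increasing the lower support strictly shrinks the terminal CDF mass.
values = [F_end(l) for l in (0.2, 0.3, 0.4, 0.5)]
assert all(a > b for a, b in zip(values, values[1:]))
```

The monotone decrease is what makes the solution $l_i^{\dagger*}$ of $F_i^e(\bar{p}^-\mid l_i^\dagger)=1$ unique in the later contradiction argument.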
\subsubsection{Without-storage supplier $i$ (i.e., $\varphi_i=0$)} For the without-storage supplier $i$, according to \eqref{F2}, we have \begin{align} &F_i^e(\bar{p}^-\mid l_i^\dagger)=\int_{l_i^\dagger}^{\bar{p}} \frac{{l_i^\dagger}\cdot\min(D,\mathbb{ E }[X_{-i}])}{p^2\cdot \min\{y_i^*(p,\varphi_i),D\}-p^2\cdot (D-\mathbb{ E }[X_{-i}])^+}dp. \end{align} We take the first-order derivative of $F_i^e(\bar{p}^-\mid l_i^\dagger)$ with respect to $l_i^\dagger$ and obtain \begin{align} \frac{\partial F_i^e(\bar{p}^-\mid l_i^\dagger)}{\partial l_i^\dagger}&= \int_{l_i^\dagger}^{\bar{p}} \frac{\min(D,\mathbb{ E }[X_{-i}])}{p^2\cdot \min\{y_i^*(p,\varphi_i),D\}-p^2\cdot (D-\mathbb{ E }[X_{-i}])^+}dp\notag \\ &~~~~~~~-\frac{\min(D,\mathbb{ E }[X_{-i}])}{{l_i^\dagger}\cdot \min\{y_i^*(l_i^\dagger,\varphi_i),D\}-{l_i^\dagger}\cdot (D-\mathbb{ E }[X_{-i}])^+}.\label{ss} \end{align} Further, we take the derivative of \eqref{ss} with respect to $l_i^\dagger$ again and have \begin{align} \frac{\partial^2 F_i^e(\bar{p}^-\mid l_i^\dagger)}{\partial {l_i^\dagger}^2}=- \frac{1}{{l_i^\dagger}}\cdot \frac{\partial\frac{\min(D,\mathbb{ E }[X_{-i}])}{\min\{y_i^*(l_i^\dagger,\varphi_i),D\}- (D-\mathbb{ E }[X_{-i}])^+}}{\partial l_i^\dagger}. \end{align} Note that $\frac{\min(D,\mathbb{ E }[X_{-i}])}{\min\{y_i^*(l_i^\dagger,\varphi_i),D\}- (D-\mathbb{ E }[X_{-i}])^+}$ decreases in $l_i^\dagger$ because $y_i^*(l_i^\dagger,\varphi_i)$ increases in $l_i^\dagger$. Thus, we always have \begin{align} \frac{\partial^2 F_i^e(\bar{p}^-\mid l_i^\dagger)}{\partial {l_i^\dagger}^2}\geq 0, \end{align} which shows that $\frac{\partial F_i^e(\bar{p}^-\mid l_i^\dagger)}{\partial l_i^\dagger}$ is non-decreasing in $l_i^\dagger$.
Then, we choose $l_i^\dagger=\bar{p}$ and have \begin{align} \frac{\partial F_i^e(\bar{p}^-\mid l_i^\dagger)}{\partial l_i^\dagger}&=-\frac{\min(D,\mathbb{ E }[X_{-i}])}{{\bar{p}}\cdot \min\{y_i^*(\bar{p},\varphi_i),D\}-{\bar{p}}\cdot (D-\mathbb{ E }[X_{-i}])^+}\notag \\&<0.\notag \end{align} Since $\frac{\partial F_i^e(\bar{p}^-\mid l_i^\dagger)}{\partial l_i^\dagger}$ is non-decreasing in $l_i^\dagger$, the derivative is negative for all $l_i^\dagger\leq \bar{p}$. The denominator above is positive because $D<\mathbb{ E }[X_{-i}]+y_i^*(\bar{p},\varphi_i)$ in the subgame $\text{S}_1\text{S}_0$ without the pure price equilibrium. Therefore, we have that ${ F_i^e(\bar{p}^-\mid l_i^\dagger)}$ decreases in $l_i^\dagger$. So far, we have shown that $F_i^e(\bar{p}^-\mid l_i^\dagger)$ is always decreasing in $l_i^\dagger$ for both $\varphi_i=1$ and $\varphi_i=0$. Then, we can prove Proposition \ref{thm:mscomp} (1) by contradiction. According to Lemma \ref{lem:mix} (iii), if $F_i^e(\bar{p}^-\mid l_i^\dagger)=1$ has a solution $l_i^{\dagger*}$ for both suppliers $i=1,2$, either $l=\max (l_1^{\dagger*},l_2^{\dagger*})$ or $l=\min (l_1^{\dagger*},l_2^{\dagger*})$ will hold. If $l=\min (l_1^{\dagger*},l_2^{\dagger*})$, without loss of generality, we assume $l_1^{\dagger*}< l_2^{\dagger*}$ and $l=l_1^{\dagger*}$. By assumption, $F_2^e(\bar{p}^-\mid l_2^{\dagger*})=1$. Since $F_2^e(\bar{p}^-\mid l_2^\dagger)$ is decreasing in $l_2^\dagger$, we would have $F_2^e(\bar{p}^-\mid l_1^{\dagger*})>1$, which contradicts the definition of a CDF. Therefore, we can only choose $l=\max (l_1^{\dagger*},l_2^{\dagger*})$, which proves Proposition \ref{thm:mscomp} (1). Furthermore, according to Lemma \ref{lem:mix} (iii), $F_i^e(\bar{p}^-)=1$ is true for at least one of the suppliers. Thus, if only one of $i=1$ and $i=2$ yields a solution $l_i^{\dagger}$, it must be the equilibrium lower support, which proves Proposition \ref{thm:mscomp} (2).
\qed \subsection{Proof of Theorem \ref{prop:comparison}} We first prove that $\pi_i^{RE}>\pi_{-i}^{RE}$ always holds for a general distribution of the renewable generation if $\varphi_i=1$, $\varphi_{-i}=0$ and $\mathbb{E}[X_{-i}]\leq\mathbb{E}{[X_{i}]}$. Then, we consider the case that $X_{-i}$ follows a uniform distribution. \subsubsection{A general distribution for $X_i$} We consider the cases of pure price equilibrium and mixed price equilibrium respectively, and characterize suppliers' revenues as follows. (a) The case with pure price equilibrium: According to Proposition \ref{prop:pureprice} and Lemma \ref{lem:mix} (ii), we have \begin{equation} \pi_i^{RE}(\bm{\varphi})= \left \{ \begin{aligned} &\bar{p}\min(\mathbb{ E }[X_i],D),~\text{if}~\varphi_i=1,\\ &\pi_i^R(\bar{p},\min(D,y_i^*(\bar{p},\varphi_{i})),\bm{\varphi}),~\text{if}~\varphi_i=0. \end{aligned} \right. \end{equation} Note that $D\geq \mathbb{ E }[X_i]+y_{-i}^*(\bar{p},\varphi_{-i})$ when there is the pure price equilibrium. Therefore, if $\varphi_i=1$ and $\varphi_{-i}=0$, we have \begin{align} \pi_i^{RE}(\bm{\varphi})&=\bar{p}\mathbb{ E }[X_i]. \label{eq:proof_c1}\\ \pi_{-i}^{RE}(\bm{\varphi})&= \pi_{-i}^R(\bar{p},y_{-i}^*(\bar{p},\varphi_{-i}),\bm{\varphi})\\ &=\lambda \int_0^{F_{-i}^{-1}(\frac{\bar{p}}{\lambda})}xf_{-i}(x)dx\\ &=\lambda \int_0^{F_{-i}^{-1}(\frac{\bar{p}}{\lambda})}xdF_{-i}(x)\\ &=\bar{p}F_{-i}^{-1}(\frac{\bar{p}}{\lambda})-\lambda \int_0^{F_{-i}^{-1}(\frac{\bar{p}}{\lambda})}F_{-i}(x)dx\label{eq:proof_mcom1}\\ &< \bar{p}F_{-i}^{-1}(\frac{\bar{p}}{\lambda})-\bar{p} \int_0^{F_{-i}^{-1}(\frac{\bar{p}}{\lambda})}F_{-i}(x)dx. \label{eq:proof_mcom} \end{align} Based on \eqref{eq:proof_mcom}, we consider the following function $h(x)$ for any $p>0$ and $0\leq x<\bar{X}_{-i}$. Note that $F_{-i}^{-1}(\frac{\bar{p}}{\lambda})<\bar{X}_{-i}$ since $\bar{p}<\lambda$. \begin{align} h(x)={p}x-p \int_0^{x}F_{-i}(t)dt.
\label{eq:proof_pro3} \end{align} Then, we have \begin{align} h'(x)={p}-p F_{-i}(x)>0, \end{align} which shows that $h(x)$ increases in $x$. Since $F_{-i}^{-1}(\frac{\bar{p}}{\lambda})< \bar{X}_{-i}$, according to \eqref{eq:proof_mcom}, we have \begin{align} \pi_{-i}^{RE}(\bm{\varphi}) &<\bar{p}\bar{X}_{-i}-\bar{p} \int_0^{\bar{X}_{-i}}F_{-i}(x)dx\\ &=\bar{p}\mathbb{ E }[X_{-i}]\leq \pi_i^{RE}(\bm{\varphi}) \label{eq:proof_c2}. \end{align} Based on \eqref{eq:proof_c1} and \eqref{eq:proof_c2}, if $\mathbb{ E }[X_{-i}] \leq \mathbb{ E }[X_{i}]$, then we always have \begin{align} \pi_{-i}^{RE}(\bm{\varphi})< \pi_i^{RE}(\bm{\varphi}). \label{eq:proof_rev} \end{align} (b) The case without pure price equilibrium: The proof procedure is similar to case (a) with the pure price equilibrium. The difference is to replace $\bar{p}$ with the lower support $l$, i.e., \begin{equation} \pi_i^{RE}(\bm{\varphi})= \left \{ \begin{aligned} &l\cdot \min(\mathbb{ E }[X_i],D),~\text{if}~\varphi_i=1,\\ &\pi_i^R(l,\min(D,y_i^*(l,\varphi_{i})),\bm{\varphi}),~\text{if}~\varphi_i=0. \end{aligned} \right. \end{equation} We will discuss the following two cases. \begin{itemize} \item $\mathbb{ E }[X_i]\leq D$: If $\varphi_i=1$ and $\varphi_{-i}=0$, we have \begin{align} \pi_i^{RE}(\bm{\varphi})&= l\cdot \mathbb{ E }[X_i].\\ \pi_{-i}^{RE}(\bm{\varphi})&=\pi_{-i}^R(l,\min(D,y_{-i}^*(l,\varphi_{-i})),\bm{\varphi})\\&\leq \pi_{-i}^R(l,y_{-i}^*(l,\varphi_{{-i}}),\bm{\varphi}). \end{align} We can follow the same argument as in case (a) with the pure price equilibrium to show that $\pi_{i}^{RE}> \pi_{-i}^{RE}$ if $\mathbb{ E }[X_i]\geq \mathbb{ E }[X_{-i}]$. The only difference is to replace $\bar{p}$ by $l$. \item $\mathbb{ E }[X_i]> D$: If $\varphi_i=1$ and $\varphi_{-i}=0$, we have \begin{align} \pi_i^{RE}(\bm{\varphi})&= l\cdot D.\\ \pi_{-i}^{RE}(\bm{\varphi})&=\pi_{-i}^R(l,\min(D,y_{-i}^*(l,\varphi_{-i})),\bm{\varphi}).
\end{align} \begin{itemize} \item If $y_{-i}^*(l,\varphi_{{-i}})\leq D$, we have \begin{align} \pi_{-i}^{RE}(\bm{\varphi})&=\pi_{-i}^R(l,y_{-i}^*(l,\varphi_{-i}),\bm{\varphi})\\ &\leq ly_{-i}^*(l,\varphi_{{-i}})-l \int_0^{y_{-i}^*(l,\varphi_{{-i}})}F_{-i}(x)dx~(\text{as in}~ \eqref{eq:proof_mcom})\\ &<lD\\ &= \pi_i^{RE}(\bm{\varphi}). \end{align} \item If $y_{-i}^*(l,\varphi_{{-i}})> D$, we have \begin{align} \pi_{-i}^{RE}(\bm{\varphi})&=\pi_{-i}^R(l,D,\bm{\varphi})\\ &= lD-\lambda \int_0^{D}(D-x)f_{-i}(x)dx\\ &< lD\\ &= \pi_i^{RE}(\bm{\varphi}). \end{align} \end{itemize} \end{itemize} Therefore, for case (b) without the pure price equilibrium, we also have $\pi_{i}^{RE}> \pi_{-i}^{RE}$ if $\mathbb{ E }[X_i]\geq \mathbb{ E }[X_{-i}]$. Combining with case (a) with the pure price equilibrium, we have proved that, for a general distribution, $\pi_{i}^{RE}> \pi_{-i}^{RE}$ if $\mathbb{ E }[X_i]\geq \mathbb{ E }[X_{-i}]$. \subsubsection{Uniform distribution of $X_{-i}$} We will derive the revenues of the suppliers (at both the pure and the mixed price equilibrium) under the uniform renewable-generation distribution. For the pure price equilibrium, it is straightforward to calculate the equilibrium revenue at $p_1=p_2=\bar{p}$, which exists when $D\geq \sum_i y_i^*(\bar{p},\varphi_i)$. For the case without pure price equilibrium, i.e., $D< \sum_i y_i^*(\bar{p},\varphi_i)$, we will characterize the lower support for the mixed price equilibrium and characterize the equilibrium revenue based on Theorem \ref{thm:mscdf} and Proposition \ref{thm:mscomp}. We consider $\varphi_i=1$ and $\varphi_{-i}=0$. We have the PDF and CDF of the uniform distribution of $X_{-i}$ as follows: \begin{align} f_{-i}(x)= \frac{1}{\bar{X}_{-i}}, ~F_{-i}(x)= \frac{x}{\bar{X}_{-i}}.
\end{align} According to Theorem \ref{thm:quantity}, the weakly dominant bidding quantity strategy is \begin{align} &y_i^*(p_i,\varphi_{i})=\mathbb{ E }[{X}_i],\\& y_{-i}^*(p_{-i},\varphi_{{-i}})=F_{-i}^{-1}\left(\frac{p_{-i}}{\lambda}\right)=\frac{p_{-i}}{\lambda}\bar{X}_{-i}. \end{align} Next, we discuss the case (a) with pure price equilibrium and the case (b) without pure price equilibrium, respectively. (a) The case with pure price equilibrium: When $D \geq \sum_i y_i^*(\bar{p},\varphi_i)$, both suppliers bid the price $\bar{p}$ and we have \begin{align} &\pi_i^{RE}(\bm{\varphi})=\bar{p}\mathbb{ E }[X_i],\\ &\pi_{-i}^{RE}(\bm{\varphi})=\pi_{-i}^R(\bar{p},y_{-i}^*(\bar{p},\varphi_{-i}),\bm{\varphi})=\frac{\bar{X}_{-i}}{2\lambda}\bar{p}^2, \end{align} which leads to the revenue ratio: \begin{align} &\frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}=\frac{\lambda \mathbb{ E }[X_i]}{\mathbb{ E }[X_{-i}] \bar{p}}. \end{align} If $\mathbb{E}[X_i] \geq \mathbb{E}[X_{-i}]$, then \begin{align} &\frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}\geq \frac{\lambda }{ \bar{p}}. \end{align} (b) The case without pure price equilibrium: When $D < \sum_i y_i^*(\bar{p},\varphi_i)$, based on the characterization of the CDF in Theorem \ref{thm:mscdf}, we discuss the following cases respectively. \begin{itemize} \item Case of $0<D \leq \mathbb{ E }[X_i]$: According to Theorem \ref{thm:mscdf}, we have the CDF of the mixed equilibrium price over $p\in [l,\bar{p})$ as follows: \begin{align} &F_i^e(p)= \frac{ \pi_{-i}^R\left(p,\min\{y_{-i}^*(p,\varphi_{-i}),D\},\bm{\varphi}\right)-\pi_{-i}^{RE}(\bm{\varphi})}{\pi_{-i}^R\left(p,\min\{y_{-i}^*(p,\varphi_{-i}),D\},\bm{\varphi}\right)},\label{eq:F1c1}\\ &F_{-i}^e(p)=\int_l^{p} \frac{\pi_{i}^{RE}(\bm{\varphi})}{q^2\cdot \min\{y_{-i}^*(q,\varphi_{-i}),D\}}dq.
\end{align} We can see that $F_i^e(p)<1 $ over $p\in[l,\bar{p})$ since $\pi_{-i}^{RE}(\bm{\varphi})>0$.\footnote{Note that $\pi_{-i}^{RE}(\bm{\varphi})>0$ since the lower support $l>0$.} According to Proposition \ref{thm:mscomp}, we solve the following equation to derive the equilibrium lower support $l$: \begin{align} F_{-i}^e(\bar{p})=\int_l^{\bar{p}} \frac{\pi_i^{RE}(\bm{\varphi})}{p^2\cdot \min\{y_{-i}^*(p,\varphi_{-i}),D\}}dp=1. \end{align} We discuss the following two cases. \begin{itemize} \item 1) If $D\geq y_{-i}^*(\bar{p},\varphi_{-i})$, we have \begin{align} &l=\frac{\bar{p}^2\bar{X}_{-i}}{D\lambda}\left(-1+\sqrt{1+\frac{D^2\lambda^2}{\bar{p}^2\bar{X}_{-i}^2}}\right). \end{align} \item 2) If $D< y_{-i}^*(\bar{p},\varphi_{-i})$, we have \begin{align} &l=\frac{D\lambda}{\bar{X}_{-i}\left(1+\sqrt{2\frac{D\lambda}{\bar{p}\bar{X}_{-i}}}\right)}. \end{align} \end{itemize} We verify that in both cases (1) and (2), $\min(D,y_{-i}^*(l,\varphi_{-i}))=y_{-i}^*(l,\varphi_{-i})$. According to Lemma \ref{lem:mix}, the equilibrium revenue of both suppliers will be \begin{align} &\pi_i^{RE}(\bm{\varphi})=l\cdot \min(D,\mathbb{ E }[X_i])=l\cdot D,\\ &\pi_{-i}^{RE}(\bm{\varphi})=\pi_{-i}^{R}(l,\min(D,y_{-i}^*(l,\varphi_{-i})),\bm{\varphi})=\pi_{-i}^{R}(l,y_{-i}^*(l,\varphi_{-i}),\bm{\varphi})=\frac{\bar{X}_{-i}}{2\lambda}l^2, \end{align} which leads to the revenue ratio: \begin{align} &\frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}=\frac{2\lambda D}{l \bar{X}_{-i} }. \end{align} In summary, we have \begin{align} &\frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}=\left \{ \begin{aligned} &2\sqrt{2\frac{D\lambda}{\bar{p}\bar{X}_{-i}}}+2~, ~~~~~~~~\text{if}~\frac{D\lambda}{\bar{p}\bar{X}_{-i}} < 1,\\ &2\sqrt{1+\frac{D^2\lambda^2}{\bar{p}^2\bar{X}_{-i}^2}}+2 ,~~~~\text{if}~\frac{D\lambda}{\bar{p}\bar{X}_{-i}} \geq 1. \end{aligned} \right.
\end{align} Therefore, when $0<D \leq \mathbb{ E }[X_i]$, we have \begin{itemize} \item $\frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}\geq 2$; \item moreover, when $D = \mathbb{ E }[X_i]$ and $\mathbb{E}[X_{-i}] =\frac{\bar{X}_{-i}}{2}\leq \mathbb{ E }[X_i]$, $\frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}\geq 4$ (due to $\lambda/\bar{p}>1$). \end{itemize} \item {Case of $\mathbb{ E }[X_i]<D< \sum_i y_i^*(\bar{p},\varphi_i) $}: We characterize the revenue ratio between the two suppliers according to Lemma \ref{lem:mix} as follows. \begin{align} &\pi_i^{RE}(\bm{\varphi})=l \cdot \min (D,\mathbb{ E }[X_i])=l \cdot \mathbb{ E }[X_i],\\ &\pi_{-i}^{RE}(\bm{\varphi})=\pi_{-i}^{R}(l,\min(D,y_{-i}^*(l,\varphi_{-i})),\bm{\varphi})\leq \pi_{-i}^{R}(l,y_{-i}^*(l,\varphi_{-i}),\bm{\varphi})=\frac{\bar{X}_{-i}}{2\lambda}l^2. \end{align} Then, we have \begin{align} &\frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}\geq \frac{l \cdot \mathbb{ E }[X_i]}{ \pi_{-i}^{R}(l,y_{-i}^*(l,\varphi_{-i}),\bm{\varphi})}=\frac{2\lambda \mathbb{ E }[X_i]}{l \bar{X}_{-i} }. \end{align} If $\mathbb{E}[X_{-i}] \leq \mathbb{ E }[X_i]$, then \begin{align} \frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}\geq \frac{\lambda }{l}> \frac{\lambda }{ \bar{p}}. \end{align} Therefore, combining with case (a), when $D > \mathbb{ E }[X_i]$ and $\mathbb{E}[X_{-i}] \leq \mathbb{ E }[X_i]$, we have $\frac{\pi_i^{RE}(\bm{\varphi})}{\pi_{-i}^{RE}(\bm{\varphi})}\geq \frac{\lambda }{ \bar{p}}.$ \end{itemize} Finally, combining cases (a) and (b), Theorem \ref{prop:comparison} is proved. \qed \subsection{Proof of Proposition \ref{prop:positiverev}} We will discuss the equilibrium revenue with and without the pure price equilibrium, respectively.
\subsubsection{With the pure price equilibrium (i.e., $D\geq \sum_iy_i^*(\bar{p},\varphi_{i})$)} If $\varphi_i=1$, we have \begin{align} \pi_i^{RE}(\bm{\varphi})&=\bar{p}\mathbb{ E }[X_i]>0. \end{align} If $\varphi_i=0$, we have \begin{align} \pi_{i}^{RE}(\bm{\varphi})&= \pi_{i}^R(\bar{p},y_{i}^*(\bar{p},\varphi_{i}),\bm{\varphi})\\ &=\lambda \int_0^{F_{i}^{-1}(\frac{\bar{p}}{\lambda})}xf_i(x)dx\\ &>0. \end{align} \subsubsection{Without the pure price equilibrium (i.e., $D<\sum_iy_i^*(\bar{p},\varphi_{i})$)} If $\varphi_i=1$, due to the lower support $l>0$, we have \begin{align} \pi_i^{RE}(\bm{\varphi})&=l \min(D,\mathbb{ E }[X_i]) >0. \end{align} If $\varphi_i=0$, due to the lower support $l>0$, we have \begin{align} \pi_{i}^{RE}(\bm{\varphi})&= \pi_{i}^R(l,\min(D,y_{i}^*(l,\varphi_{i})),\bm{\varphi})\\ &>\pi_{i}^R(0,\min(D,y_{i}^*(0,\varphi_{i})),\bm{\varphi})\\ &=0. \end{align} In conclusion, Proposition \ref{prop:positiverev} is proved. \subsection{Proof of Proposition \ref{prop:ns}} We prove Proposition \ref{prop:ns} by contradiction. First, we prove $\min_i ~y_i^*(l,\varphi_i)<D$ by contradiction. Suppose that $y_i^*(l,\varphi_i)\geq D$ for both $i=1,2$. By Lemma \ref{lem:mix} (iii), we can take a supplier $-i$ whose mixed strategy $F_{-i}^e$ has no atom at $\bar{p}$. Then, against supplier $-i$'s bidding price $p\in[l,\bar{p})$, according to Proposition \ref{prop:stage3}, supplier $i$'s selling quantity at the price $\bar{p}$ is \begin{align} {x}_{i}^*(\boldsymbol{p},\boldsymbol{y})=&\min \left\{D-\min \left\{D, y_{-i}^*(p,\varphi_{-i})\right\},y_{i}^*(\bar{p},\varphi_{i})\right\}\\ =&0. \end{align} Thus, the equilibrium revenue of supplier $i$ can be characterized as follows \begin{align} \pi_i^{RE}(\bm{\varphi})&=\pi_i^{RM}(\bar{p},\mu_{-i}^*,\bm{\varphi})\notag \\&=\bar{p} \int_l^{\bar{p}} {x}_{i}^*(\boldsymbol{p},\boldsymbol{y})\cdot f_{-i}^e(p_{-i})dp_{-i}\\ &=0.
\label{eq:proof_p4} \end{align} However, in the case of the mixed price equilibrium, both suppliers' equilibrium revenues are strictly positive as shown in Proposition \ref{prop:positiverev}, i.e., $\pi_i^{RE}(\bm{\varphi})>0$, which contradicts \eqref{eq:proof_p4}. Therefore, we have $\min_i ~y_i^*(l,\varphi_i)<D$. Second, we prove $D\leq \sum_i y_i^*(l,\varphi_i)$ by contradiction. Suppose that $D> \sum_i y_i^*(l,\varphi_i)$. Thus, there exists a small $\varepsilon>0$ such that $D> \sum_i y_i^*(l+\varepsilon,\varphi_i)$ still holds. Note that $\min_i ~y_i^*(l,\varphi_i)<D$, and we assume that $ ~y_{-i}^*(l,\varphi_{-i})<D$ without loss of generality. We also let this small $\varepsilon$ satisfy $ ~y_{-i}^*(l+\varepsilon,\varphi_{-i})<D$. We can characterize supplier $i$'s equilibrium revenue using $l$ and $l+\varepsilon$, respectively, as follows. (a) With $l$: \begin{align} \pi_i^{RE}(\boldsymbol{\varphi})&=\pi_i^{RM}(l,\mu_{-i}^*,\boldsymbol{\varphi})={l\int_{l}^{\bar{p}} \min(D,y_i^*(l,\varphi_i)) f_{-i}^e(p_{-i})dp_{-i}}.\label{eq:proof_p41} \end{align} (b) With $l+\varepsilon$: \begin{align} \pi_i^{RE}(\boldsymbol{\varphi})&=\pi_i^{RM}(l+\varepsilon,\mu_{-i}^*,\boldsymbol{\varphi})\notag \\&={(l+\varepsilon)\cdot\int_{l+\varepsilon}^{\bar{p}} \min(D,y_i^*(l+\varepsilon,\varphi_i)) f_{-i}^e(p_{-i})dp_{-i}}\notag\\&~~~~~~~~~+{(l+\varepsilon)\int_{l}^{l+\varepsilon} \min \left\{D-\min \left\{D, y_{-i}^*(p,\varphi_{-i})\right\},y_{i}^*(l+\varepsilon,\varphi_{i})\right\} \cdot f_{-i}^e(p_{-i})dp_{-i}}\notag \\ &= {(l+\varepsilon)\int_{l+\varepsilon}^{\bar{p}} \min(D,y_i^*(l+\varepsilon,\varphi_i)) f_{-i}^e(p_{-i})dp_{-i}}+{(l+\varepsilon)\int_{l}^{l+\varepsilon} y_{i}^*(l+\varepsilon,\varphi_{i}) \cdot f_{-i}^e(p_{-i})dp_{-i}} \notag \\ &>{l\int_{l+\varepsilon}^{\bar{p}} \min(D,y_i^*(l,\varphi_i)) f_{-i}^e(p_{-i})dp_{-i}}+{l\int_{l}^{l+\varepsilon} \min(D,y_i^*(l,\varphi_i))\cdot f_{-i}^e(p_{-i})dp_{-i}} \notag \\ &={l\int_{l}^{\bar{p}} \min(D,y_i^*(l,\varphi_i))
f_{-i}^e(p_{-i})dp_{-i}}.\label{eq:proof_p42} \end{align} We see that \eqref{eq:proof_p41} and \eqref{eq:proof_p42} contradict each other. Therefore, we have $D\leq \sum_i y_i^*(l,\varphi_i)$. In conclusion, Proposition \ref{prop:ns} is proved.\qed \vspace{5mm} \section{Appendix: Proofs of Stage \uppercase\expandafter{\romannumeral1}}\label{appendix:proofstage1} \subsection{Proof of Theorem \ref{thm:stoeq}} We prove Theorem \ref{thm:stoeq} based on Definition \ref{defi:stoeq} for the storage-investment equilibrium. We first discuss the pure storage-investment equilibrium and then discuss the mixed storage-investment equilibrium. First, for the pure storage-investment equilibrium, we use the example of the $\text{S}_0\text{S}_0$ case. If the $\text{S}_0\text{S}_0$ case is an equilibrium, each supplier will not be better off if he deviates to investing in storage, i.e., \begin{align} \pi_i^{\text{S}_1\text{S}_0|\text{Y}}-C_i\leq \pi_i^{\text{S}_0\text{S}_0},\forall i=1,2. \end{align} Therefore, $C_i\in [\pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0},+\infty)$, for both $i=1,2$. Similarly, we can derive the conditions for the $\text{S}_1\text{S}_0$ case and the $\text{S}_1\text{S}_1$ case to be the equilibrium, respectively. Second, if there is no pure storage-investment equilibrium, we can always compute the mixed storage-investment equilibrium \cite{gamex}. Supplier $i$ invests in the storage with probability $pr_i^s$ and does not invest in storage with probability $pr_i^n$, where $pr_i^s+pr_i^n=1$. We construct the following set of linear equations to compute $pr_i^s$ and $pr_i^n$ \cite{gamex}. \begin{equation} \left \{ \begin{aligned} &pr_i^s+pr_i^n=1,\forall i=1,2,\\ &pr_{-i}^s\cdot(\pi_i^{\text{S}_1\text{S}_1}-C_i)+pr_{-i}^n\cdot(\pi_i^{\text{S}_1\text{S}_0|\text{Y}}-C_i)=pr_{-i}^s\cdot\pi_i^{\text{S}_1\text{S}_0|\text{N}}+pr_{-i}^n \cdot \pi_i^{\text{S}_0\text{S}_0},\forall i=1,2. \end{aligned} \label{eq:proof_thmstage1} \right.
\end{equation} By solving \eqref{eq:proof_thmstage1}, we can obtain $pr_i^s$ and $pr_i^n$ for both $i=1,2$, which gives the mixed storage-investment equilibrium. \qed \subsection{Proof of Proposition \ref{prop:stocost} } We prove Proposition \ref{prop:stocost} based on Theorem \ref{thm:stoeq}. Note that $ \pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0}$ is bounded for both $i=1,2$. Thus, there always exists $C_i^{\text{S}_0\text{S}_0}$ such that $C_i^{\text{S}_0\text{S}_0}>\pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0}$ for each $i=1,2$. According to Theorem \ref{thm:stoeq}, the $\text{S}_0\text{S}_0$ case will be the storage-investment equilibrium, which is also unique. \qed \subsection{Proof of Proposition \ref{prop:stodemandl} } We prove Proposition \ref{prop:stodemandl} based on the storage-investment equilibrium shown in Theorem \ref{thm:stoeq} and suppliers' equilibrium revenue in the case $\text{S}_1\text{S}_1$ shown in Proposition \ref{prop:pureprice}. We will show that if the demand $D^{m,t}\leq \min_i \mathbb{ E }[X_i^{m,t}]$, the condition $C_i\in [0, \pi_i^{\text{S}_1\text{S}_1}-\pi_i^{\text{S}_1\text{S}_0|\text{N}}]$, for both $i=1,2$, cannot be satisfied. According to Proposition \ref{prop:pureprice}, in the $\text{S}_1\text{S}_1$ case, if the demand $D^{m,t}\leq \min_i \mathbb{ E }[X_i^{m,t}]$, then both suppliers' revenues are zero. Therefore, if the demand $0<D^{m,t}\leq \min_i \mathbb{ E }[X_i^{m,t}]$ for any $m$ and $t$, we have \begin{align} \pi_i^{\text{S}_1\text{S}_1}=0,\forall i=1,2. \end{align} However, according to Proposition \ref{prop:positiverev}, we have that $\pi_i^{\text{S}_1\text{S}_0|\text{N}}>0$ always holds. Therefore, if the demand $0<D^{m,t}\leq \min_i \mathbb{ E }[X_i^{m,t}]$ for any $m$ and $t$, we have \begin{align} \pi_i^{\text{S}_1\text{S}_1}-\pi_i^{\text{S}_1\text{S}_0|\text{N}}<0,\forall i=1,2.
\end{align} Based on the condition of $\text{S}_1\text{S}_1$ being the equilibrium in Theorem \ref{thm:stoeq}, the $\text{S}_1\text{S}_1$ case cannot be a pure equilibrium if $\pi_i^{\text{S}_1\text{S}_1}-\pi_i^{\text{S}_1\text{S}_0|\text{N}}<0,\forall i=1,2$.\qed \subsection{Proof of Proposition \ref{prop:stodemandh} } We will prove Proposition \ref{prop:stodemandh} based on Theorem \ref{thm:stoeq}. The key is to show $\pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0}=\pi_i^{\text{S}_1\text{S}_1}-\pi_i^{\text{S}_1\text{S}_0|\text{N}}>0$ for both $i=1,2$. When $D^{m,t}\geq D^{m,t,th}=\max( \sum_i y_i^{m,t*}(\bar{p},1),$ $ \sum_i y_i^{m,t*}(\bar{p},0))$, there exists the pure price equilibrium $p_1=p_2=\bar{p}$ for each type of subgame in Stage \uppercase\expandafter{\romannumeral2} according to Proposition \ref{prop:pureprice}. Therefore, for both $i=1,2$, \begin{align} \pi_i^{\text{S}_0\text{S}_0}=\pi_i^{\text{S}_1\text{S}_0|\text{N}}&=\mathbb{ E }_{m,t}[\pi_{i}^{R,m,t}(\bar{p},y_{i}^*(\bar{p},\varphi_{i}),\bm{\varphi})], ~\text{where}~\sum_i\varphi_i=0\\ &=\mathbb{ E }_{m,t}[\lambda \int_0^{y_i^{m,t*}(\bar{p},0)}xf_i^{m,t}(x)dx]\label{eq:proof_p81}\\ &=\mathbb{ E }_{m,t}[\bar{p}y_i^{m,t*}(\bar{p},0)-\lambda \int_0^{y_i^{m,t*}(\bar{p},0)}F_{i}^{m,t}(x)dx], \end{align} which has been shown in \eqref{eq:proof_mcom1}. Furthermore, we also have \begin{align} \pi_i^{\text{S}_1\text{S}_1}=\pi_i^{\text{S}_1\text{S}_0|\text{Y}}&=\mathbb{ E }_{m,t}[\pi_{i}^{R,m,t}(\bar{p},y_{i}^*(\bar{p},\varphi_{i}),\bm{\varphi})], ~\text{where}~\sum_i\varphi_i=2\\ &=\mathbb{ E }_{m,t}\left[\bar{p} y_i^{m,t*}(\bar{p},1)\right]\label{eq:proof_p82}\\ &=\mathbb{ E }_{m,t}[\bar{p}\bar{X}_i-\bar{p} \int_0^{\bar{X}_i}F_{i}^{m,t}(x)dx]. 
\end{align} Thus, we have \begin{align} \pi_i^{\text{S}_1\text{S}_0|\text{Y}}-\pi_i^{\text{S}_0\text{S}_0}=\pi_i^{\text{S}_1\text{S}_1}-\pi_i^{\text{S}_1\text{S}_0|\text{N}}&=\mathbb{E}_{m,t}[\bar{p} y_i^{m,t*}(\bar{p},1)-\lambda \int_{0}^{y_i^{m,t*}(\bar{p},0)}xf_i^{m,t}(x)dx] \label{eq:proof_p83}\\ &\triangleq C_i^{th}, \end{align} which follows from \eqref{eq:proof_p81} and \eqref{eq:proof_p82}. Note that $C_i^{th}>0$ always holds, as implied in \eqref{eq:proof_rev}. According to Theorem \ref{thm:stoeq}, if $C_i\leq C_i^{th}$, then supplier $i$ will invest in storage (i.e., $\varphi_{i}^*=1$), while if $C_i> C_i^{th}$, then supplier $i$ will not invest in storage (i.e., $\varphi_{i}^*=0$). \qed \subsection{Proof of Proposition \ref{prop:stoprofit} } Suppliers always have strictly positive profits at the storage-investment equilibrium because a without-storage supplier always obtains positive revenue in the cases $\text{S}_1\text{S}_0$ and $\text{S}_0\text{S}_0$ according to Proposition \ref{prop:positiverev}. We show this as follows. \begin{itemize} \item If the $\text{S}_0\text{S}_0$ case is the equilibrium, both suppliers get strictly positive profit (with zero storage investment cost) according to Proposition \ref{prop:positiverev}. \item If the $\text{S}_1\text{S}_0$ case is the equilibrium, the without-storage supplier gets strictly positive profit (with zero storage investment cost) according to Proposition \ref{prop:positiverev}. If the with-storage supplier gets non-positive profit, he can always deviate to not investing in storage, which leads to the case $\text{S}_0\text{S}_0$ and brings him strictly positive profit. \item If the $\text{S}_1\text{S}_1$ case is the equilibrium and one supplier gets non-positive profit, he can always deviate to not investing in storage, which leads to the case $\text{S}_1\text{S}_0$ and brings him strictly positive profit.
\end{itemize} In summary, suppliers always have strictly positive profits at the storage-investment equilibrium.\qed \vspace{5mm} \section{Appendix: Proofs of oligopoly model} \label{appendix:proofoligopoly} \subsection{Proof of Proposition \ref{prop:purepricennn}} This proof follows the same procedure as the proof of Proposition \ref{prop:pureprice}, by verifying the pure price equilibrium according to the definition of the Nash equilibrium. Towards this end, note that for supplier $i$ with or without storage, the revenue function $\pi_i^R\left(p_i, x_i^*(\bm{p},\bm{y}),\bm{\varphi}\right)$ is strictly increasing with respect to both the price $p_i$ and the selling quantity $x_i^*(\bm{p},\bm{y})$ in the range $[0, y_i^*(p_i,\varphi_i)]$ (without considering the other suppliers' coupled decisions). We discuss the three cases, respectively. \subsubsection{ The case of $D \geq \sum_{i\in \mathcal{I}} y_i^*(\bar{p},\varphi_i)$} We can prove that when $D \geq \sum_i y_i^*(\bar{p},{\varphi}_i)$, $p_i=\bar{p}$ is a pure price equilibrium. Moreover, this pure price equilibrium is unique. The proof follows the same procedure as Section \ref{appendix:proofstage2}.B.1.a of the proof of Proposition \ref{prop:pureprice}. The intuition is that when the demand is larger than the maximum bidding quantity, if any supplier deviates to a lower price, his selling quantity cannot be increased, which leads to a lower revenue. \subsubsection{ The case of $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)$ for any $j\in\mathcal{ U}$} We first prove by the definition of the Nash equilibrium that when $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)$ for any $j\in\mathcal{ U}$, there exists a pure price equilibrium $p_i^*=0$ with an equilibrium revenue $\pi_i^{RE}=0$, for any $i\in \mathcal{ I}$.
Then, note that this equilibrium is not unique, but we show that suppliers always get zero revenue at any equilibrium. First, we prove the pure price equilibrium $p_i^*=0$. We assume that $p_i^*=0,\forall i \in \mathcal{ I}$. We discuss the cases of a with-storage supplier and a without-storage supplier, respectively. (a) For a supplier $j\in \mathcal{U}$ who invests in storage, if he deviates to a higher price $p_j'>0$, the demand that he gets is the following: \begin{align} \min\left(D-\min(D,\sum_{i\in\mathcal{I}\backslash j} y_i^*(0,\varphi_i)),y_j^*(p_j',\varphi_j)\right ),~j\in \mathcal{ U}.\label{eq:proofget} \end{align} Note that according to Theorem \ref{thm:quantity}, we have $y_k^*(0,\varphi_k)=0,\forall k\in \mathcal{ V}$. Also, we have $y_k^*(p_k,\varphi_k)=\mathbb{ E }[X_k]$ for any price $p_k$, $\forall k\in \mathcal{ U}$. Therefore, \begin{align} \eqref{eq:proofget}= \min\left(D-\min(D,\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)),y_j^*(p_j',\varphi_j)\right ),~j\in \mathcal{ U}, \end{align} which is zero since $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j),\forall j\in\mathcal{ U}$. Therefore, if this supplier deviates to a higher price, his revenue will still be zero. (b) For a supplier $j\in \mathcal{ V}$ who does not invest in storage, if he deviates to a higher price $p_j'>0$, the demand that he gets is \begin{align} &\min\left(D-\min(D,\sum_{i\in\mathcal{I}\backslash j} y_i^*(0,\varphi_i)),y_j^*(p_j',\varphi_j)\right ), ~j\in \mathcal{ V},\\ =&\min\left(D-\min(D,\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)),y_j^*(p_j',\varphi_j)\right ), ~j\in \mathcal{ V}, \end{align} which is still zero since $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)$. Therefore, if this supplier deviates to a higher price, his revenue will still be zero. In conclusion, the bidding price $p_i^*=0,\forall i \in \mathcal{ I}$ is an equilibrium from which no supplier will deviate.
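The deviation computation above reduces to a simple residual-demand formula. The following Python sketch (purely illustrative: the number of suppliers, their quantities, and the demand level are assumptions, not model data from the paper) evaluates \eqref{eq:proofget} and confirms that a deviating storage supplier obtains zero demand under the low-demand condition.

```python
# Residual-demand computation from (eq:proofget). Assumptions
# (illustrative only): three storage suppliers, each with quantity
# E[X_i] = 4 at any positive price, and low demand D = 7 <= 8.

def residual_demand(D, others_qty, own_qty):
    """Demand left for a supplier who bids strictly above his rivals:
    min(D - min(D, sum of rivals' quantities), own quantity)."""
    return min(D - min(D, sum(others_qty)), own_qty)

caps = [4.0, 4.0, 4.0]
D = 7.0  # low-demand condition: D <= sum of any two rivals' quantities = 8

# A deviating supplier faces the rivals' aggregate quantity 8 >= D,
# so his residual demand is zero, matching the argument above.
for j in range(len(caps)):
    others = [caps[i] for i in range(len(caps)) if i != j]
    assert residual_demand(D, others, caps[j]) == 0.0
print("deviation yields zero demand")
```

With a larger demand, say $D=10$, the same formula would leave the deviator a positive residual of $10-8=2$, which is exactly the situation exploited in the intermediate-demand case later.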
Second, note that the equilibrium here is not unique; however, each supplier always gets zero revenue at any equilibrium. We show this by contradiction as follows. If supplier $k$ gets positive revenue, it means that his bidding price and his obtained demand are both positive. We assume that the set of suppliers $\mathcal{P}$ (including supplier $k$) bid the same price $p>0$. We denote the set of suppliers whose prices are lower than $p$ as $\mathcal{PL}$ and the set of suppliers whose prices are higher than $p$ as $\mathcal{PH}$.\footnote{Note that $\mathcal{PL}$ and $\mathcal{PH}$ can both be empty sets.} Since this supplier gets positive demand, it means \begin{align} \sum_{i\in \mathcal{P}} y_i^*({p_i,\varphi_i}) \leq D-\sum_{i\in \mathcal{PL}} y_i^*({p_i,\varphi_i}), \label{eq:poofol} \end{align} or \begin{align} 0< D-\sum_{i\in \mathcal{PL}} y_i^*({p_i,\varphi_i})<\sum_{i\in \mathcal{P}} y_i^*({p_i,\varphi_i}). \label{eq:poofoh} \end{align} \begin{itemize} \item Case \eqref{eq:poofoh} and $|\mathcal{P}|\geq 2$: At least one of the suppliers in $\mathcal{P}$ can decrease his price by a sufficiently small positive value, which can increase his obtained demand and increase his revenue. This shows that this case cannot be an equilibrium. \item Case \eqref{eq:poofoh}, $|\mathcal{P}|=1$, and $p<\bar{p}$: This supplier can increase his price by a small positive value (which keeps the bidding price smaller than the lowest bidding price in the set $\mathcal{ PH}\cup \{\bar{p}\}$), which will not decrease his obtained demand. Thus, this deviation increases his revenue and this case cannot be an equilibrium. \item Case \eqref{eq:poofoh}, $|\mathcal{P}|=1$, and $p=\bar{p}$: Due to \eqref{eq:poofoh}, we have $\sum_{i\in \mathcal{PL}} y_i^*({p_i,\varphi_i})< D$. Note that the set $\mathcal{PL}$ contains all the suppliers except the single supplier $k$.
Thus, there always exists $j\in \mathcal{ U }$ such that $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)<\sum_{i\in \mathcal{PL}} y_i^*({p_i,\varphi_i})< D$, which contradicts the condition $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j),\forall j\in\mathcal{ U}$. This case is impossible. \item Case \eqref{eq:poofol} and $p<\bar{p}$: Any supplier in $\mathcal{P}$ can always increase his price by a small positive value (which keeps the bidding price smaller than the price cap $\bar{p}$) without decreasing his obtained demand, which increases his revenue. This shows that this case cannot be an equilibrium. \item Case \eqref{eq:poofol} and $p=\bar{p}$: Due to \eqref{eq:poofol}, we have $\sum_{i\in \mathcal{ I}} y_i^*({p_i,\varphi_i}) \leq D$, which contradicts the condition $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j),\forall j\in\mathcal{ U}$. Thus, this case is impossible. \end{itemize} Therefore, we can draw the conclusion that at any equilibrium, suppliers get zero revenue. \subsubsection{The case that there exists $j\in\mathcal{ U}$ such that $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)<D < \sum_{i\in \mathcal{I}} y_i^*(\bar{p},\varphi_i)$} In this case, there is no pure price equilibrium. The proof follows a procedure similar to Section \ref{appendix:proofstage2}.B.1.b of the proof of Proposition \ref{prop:pureprice}. We discuss three cases: (i) all the suppliers bid zero prices; (ii) suppliers' bidding prices are all equal and positive; (iii) suppliers' bidding prices are not all equal. We show that none of these cases can be a pure price equilibrium. First, for case (i), at least one supplier $j$ (i.e., the $j$ satisfying $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)<D$) who invests in storage can increase his price, and he will get positive demand.
This increases his revenue and shows that case (i) cannot be an equilibrium. Second, for case (ii), we can discuss two conditions, $\sum_{i\in \mathcal{ I}}y_i^*(p_i,\varphi_i) \leq D$ and $\sum_{i\in \mathcal{ I}}y_i^*(p_i,\varphi_i) > D$, in the same way as in Section \ref{appendix:proofstage2}.B.1.b. For $\sum_{i\in \mathcal{ I}}y_i^*(p_i,\varphi_i) \leq D$, any supplier can always increase his price without decreasing his obtained demand, which increases his revenue. For $\sum_{i\in \mathcal{ I}}y_i^*(p_i,\varphi_i) > D$, at least one supplier can always reduce his price by a sufficiently small positive value, which can increase his demand and increase his revenue. Thus, case (ii) cannot be an equilibrium. Third, for case (iii), we denote the set of suppliers bidding the lowest price $p$ among all the suppliers as $\mathcal{ L }$. Similarly, we discuss two conditions, $\sum_{i\in \mathcal{ L}}y_i^*(p_i,\varphi_i) \leq D$ and $\sum_{i\in \mathcal{ L}}y_i^*(p_i,\varphi_i) > D$. For $\sum_{i\in \mathcal{ L}}y_i^*(p_i,\varphi_i) \leq D$, any supplier can always increase his price by a small positive value (which keeps the bidding price smaller than the second lowest price) without decreasing his obtained demand, which increases his revenue. Thus, this case cannot be an equilibrium. For $\sum_{i\in \mathcal{ L}}y_i^*(p_i,\varphi_i) > D$, there are three possibilities. \begin{itemize} \item The lowest price $p>0$ and $|\mathcal{ L }|=1$: This supplier can increase his price by a small positive value (which keeps the bidding price smaller than the second lowest bidding price), which will not decrease his obtained demand. Thus, it increases his revenue and this case cannot be an equilibrium. \item The lowest price $p>0$ and $|\mathcal{ L }| \geq 2$: At least one of the suppliers in $\mathcal{L}$ can decrease his price by a sufficiently small positive value, which can increase his obtained demand and increase his revenue.
This shows that this case cannot be an equilibrium. \item The lowest price $p=0$: In this case, all the suppliers have zero revenue, and $\sum_{i\in \mathcal{ L}}y_i^*(0,\varphi_i) > D$. Note that the demand $D$ also satisfies $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-\max_{j\in\mathcal{ U }} y_j^*(\bar{p},\varphi_j)<D$. We denote $\arg \max_{j \in \mathcal{ U }} y_j^*(\bar{p},\varphi_j)=j^*$. Thus, there are two possibilities: \begin{itemize} \item $j^*\in \mathcal{ L }$: The supplier $j^*$ can increase his zero price to a positive price (which is smaller than the second lowest price) and get positive demand since $\sum_{i\in\mathcal{L}\setminus j^*} y_i^*(0,\varphi_i)\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)- y_{j^*}^*(\bar{p},\varphi_{j^*})<D$. This increases supplier $j^*$'s revenue. \item $j^*\notin \mathcal{ L }$: Any supplier $k$ in $\mathcal{ L }$ can increase his zero price to a positive price (which is smaller than the second lowest price) and get positive demand since $\sum_{i\in\mathcal{L}\setminus k} y_i^*(0,\varphi_i)<\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)- y_{j^*}^*(\bar{p},\varphi_{j^*})<D$. This increases supplier $k$'s revenue. \end{itemize} Therefore, the case in which the lowest price is $p=0$ cannot be an equilibrium. Combining the cases $p=0$ and $p>0$, the condition $\sum_{i\in \mathcal{ L}}y_i^*(p_i,\varphi_i) > D$ does not admit an equilibrium. \end{itemize} Combining cases (i)-(iii), we have shown that none of these cases can be an equilibrium. Thus, there is no pure price equilibrium if there exists $j\in\mathcal{ U}$ such that $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)<D < \sum_{i\in \mathcal{I}} y_i^*(\bar{p},\varphi_i)$. Finally, we have Proposition \ref{prop:purepricennn} proved. \qed \subsection{Proof of Proposition \ref{lem:mixnnn}} We first show the existence of a mixed price equilibrium and then prove the positive revenues for all the suppliers in the mixed price equilibrium.
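The undercutting and price-raising deviations used repeatedly in the proof above can be illustrated numerically. The sketch below assumes fixed, price-independent bidding quantities and proportional rationing among suppliers bidding the same price; all numbers, and the rationing rule itself, are illustrative assumptions made only for this sketch, not elements of the paper's model.

```python
# Two deviation checks in the intermediate-demand case 8 < D < 12:
# raising from all-zero prices and undercutting from all-cap prices
# are both profitable, so neither profile is a pure equilibrium.

def allocate(prices, caps, D):
    """Merit-order allocation: lower prices served first; ties share
    the remaining demand proportionally to quantity (an assumption)."""
    alloc = [0.0] * len(prices)
    remaining = D
    for p in sorted(set(prices)):
        group = [i for i, pi in enumerate(prices) if pi == p]
        total = sum(caps[i] for i in group)
        served = min(remaining, total)
        for i in group:
            alloc[i] = served * caps[i] / total
        remaining -= served
    return alloc

def revenue(prices, caps, D, i):
    return prices[i] * allocate(prices, caps, D)[i]

caps, D, p_cap = [4.0, 4.0, 4.0], 10.0, 10.0  # illustrative values

# (i) All-zero prices: raising leaves residual demand D - 8 = 2 > 0.
gain_raise = revenue([5.0, 0.0, 0.0], caps, D, 0) - revenue([0.0] * 3, caps, D, 0)

# (ii) All prices at the cap: a slight undercut sells full quantity.
gain_undercut = revenue([9.9, p_cap, p_cap], caps, D, 0) - revenue([p_cap] * 3, caps, D, 0)

print(gain_raise > 0, gain_undercut > 0)  # True True
```

Both deviation gains are strictly positive, mirroring the two contradictions derived for cases (i) and (ii).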
\subsubsection{Existence of mixed price equilibrium} This result can be derived from Theorem 5 of \cite{dasgupta1986existence}. \subsubsection{Positive revenue} Note that the case that there exists $j\in\mathcal{ U}$ such that $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j)<D < \sum_{i\in \mathcal{I}} y_i^*(\bar{p},\varphi_i)$ is equivalent to the case $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-\max_{j \in \mathcal{ U }}y_j^*(\bar{p},\varphi_j)<D < \sum_{i\in \mathcal{I}} y_i^*(\bar{p},\varphi_i)$. We will first prove by contradiction that for supplier $n$ with $n=\arg \max_{i\in \mathcal{ U }} y_i^*(\bar{p},\varphi_i)$, his equilibrium revenue is positive. Then, we prove that the suppliers other than supplier $n$ also have positive revenues. We denote the support of supplier $i$'s mixed price strategy as $\mathcal{SP}_i$. First, we will prove that for supplier $n$, his equilibrium revenue $\pi_n^{RE}>0$. We prove this by contradiction. We assume that supplier $n$'s equilibrium revenue $\pi_n^{RE}=0$, and discuss two cases. \begin{itemize} \item For each supplier $j\neq n$, the support $\mathcal{SP}_j$ only contains 0, which means each supplier $j\neq n$ has the pure price strategy $p_j=0$: Then, supplier $n$ can always set a pure price $p_n>0$ to achieve positive demand and get positive revenue since $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_n^*(\bar{p},\varphi_n)<D $, which contradicts the assumption that $\pi_n^{RE}=0$. \item Among all the suppliers except $n$, there exists at least one supplier $k$ such that $\mathcal{SP}_k$ contains a positive price $p_k>0$: For all the suppliers whose supports contain positive prices (except $n$), we denote the set of those suppliers as $\mathcal{PS}$. For any supplier $k\in\mathcal{PS}$, we choose one positive price $p_k \in \mathcal{SP}_k$.
Thus, supplier $n$ can always choose a pure price strategy $0<p_n<\min_{k\in\mathcal{PS}} p_k$, such that he can get positive demand and positive revenue with a positive probability. This contradicts the assumption that $\pi_n^{RE}=0$. \end{itemize} Thus, we conclude that at the equilibrium, supplier $n$'s revenue $\pi_n^{RE}> 0$. This also implies that for supplier $n$, his support $\mathcal{SP}_n$ does not contain zero. Second, we will prove that for any supplier $j \neq n$, his equilibrium revenue is positive. We assume that supplier $j$'s equilibrium revenue $\pi_j^{RE}=0$. Note that among the suppliers except $j$, there exists at least one supplier $n$ such that $\mathcal{SP}_n$ contains a positive price $p_n>0$. For all the suppliers (except $j$) whose supports contain positive prices, we denote the set of those suppliers as $\mathcal{PS}'$. For any supplier $k\in\mathcal{PS}'$, we choose one positive price $p_k \in \mathcal{SP}_k$. Thus, supplier $j$ can always choose a pure price strategy $0<p_j<\min_{k\in\mathcal{PS}'} p_k$, such that he can get positive demand and positive revenue with a positive probability. Therefore, at the equilibrium, supplier $j$'s revenue cannot be zero. Based on the above discussion, all the suppliers have positive revenues in the case of $\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-\max_{j\in\mathcal{ U}}y_j^*(\bar{p},\varphi_j)<D < \sum_{i\in \mathcal{I}} y_i^*(\bar{p},\varphi_i)$. \qed \subsection{Proof of Proposition \ref{prop:stocostnnn}} The proof follows the definition of the Nash equilibrium. It is straightforward that the benefit brought by investing in storage is bounded for supplier $i$. In the case $\mathcal{S}^{\mathcal{ U}|\mathcal{ V}}$, we denote the equilibrium profit of supplier $i$ as $\Pi_i^*(\mathcal{S}^{\mathcal{ U}|\mathcal{ V}})$ and the expected equilibrium revenue (scaled to one hour) over the investment horizon as $\pi_i^{REE}(\mathcal{S}^{\mathcal{ U}|\mathcal{ V}})$.
For any case $\mathcal{S}^{\mathcal{ U}|\mathcal{ V}}$, a without-storage supplier $i$ has the profit $\Pi_i^*(\mathcal{S}^{\mathcal{ U}|\mathcal{ V}})=\pi_i^{REE}(\mathcal{S}^{\mathcal{ U}|\mathcal{ V}}),i\in\mathcal{ V}$ at the equilibrium. However, if he deviates to investing in storage, he has the profit $\Pi_i^*(\mathcal{S}^{\mathcal{ U}\bigcup i|\mathcal{ V}\setminus i})=\pi_i^{REE}(\mathcal{S}^{\mathcal{ U }\bigcup i|\mathcal{ V}\setminus i})-C_i$. Thus, for $i\in \mathcal{V}$, we have \begin{align} &\Pi_i^*(\mathcal{S}^{\mathcal{ U}\bigcup i|\mathcal{ V}\setminus i})-\Pi_i^*(\mathcal{S}^{\mathcal{ U}|\mathcal{ V}})\\=&\pi_i^{REE}(\mathcal{S}^{\mathcal{ U }\bigcup i|\mathcal{ V}\setminus i})-\pi_i^{REE}(\mathcal{S}^{\mathcal{ U}|\mathcal{ V}})-C_i. \end{align} Note that $\pi_i^{REE}(\mathcal{S}^{\mathcal{ U }\bigcup i|\mathcal{ V}\setminus i})-\pi_i^{REE}(\mathcal{S}^{\mathcal{ U}|\mathcal{ V}})$ is bounded for any $\mathcal{S}^{\mathcal{ U}|\mathcal{ V}}$. If the storage cost $C_i>C_i^{no}$, where $C_i^{no}$ is the maximum value of $\pi_i^{REE}(\mathcal{S}^{\mathcal{ U }\bigcup i|\mathcal{ V}\setminus i})-\pi_i^{REE}(\mathcal{S}^{\mathcal{ U}|\mathcal{ V}})$ over all the cases $\mathcal{S}^{\mathcal{ U}|\mathcal{ V}}$, then supplier $i\in \mathcal{V}$ will not deviate to investing in storage in any case of $\mathcal{S}^{\mathcal{ U}|\mathcal{ V}}$. Thus, the outcome in which no supplier invests in storage is the unique equilibrium. \qed \subsection{Proof of Proposition \ref{prop:stodemandlnnn}} The proof follows the definition of the Nash equilibrium. Note that in the subgame $S^{\mathcal{ U}|\mathcal{ V}}$, when $0<D^{m,t}\leq \min_{j\in \mathcal{ U}} (\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j))$ for any $t$ and $m$, each supplier has zero revenue for any $t$ and $m$, as shown in Proposition \ref{prop:purepricennn}. Thus, for each supplier $i\in \mathcal{ I}$, his expected equilibrium revenue $\pi_i^{REE}(S^{\mathcal{ U}|\mathcal{ V}})=0$.
Then, for a supplier $i\in\mathcal{ U}$ who invests in storage, his profit is $\pi_i^{REE}(S^{\mathcal{ U}|\mathcal{ V}})-C_i<0$ since $C_i>0$. Therefore, this supplier $i$ can always deviate to not investing in storage, which leads to a nonnegative profit. This shows that when $0<D^{m,t}\leq \min_{j\in \mathcal{ U}} (\sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j))$, the case $S^{\mathcal{ U}|\mathcal{ V}}$ (i.e., suppliers of set $\mathcal{ U}$ investing in storage and suppliers of set $\mathcal{V}$ not investing in storage) cannot be a pure storage-investment equilibrium. \qed \subsection{Proof of Proposition \ref{prop:stodemandhnnn}} The intuition of this proposition is that when the demand $D$ is sufficiently large, there is no competition between suppliers and they make storage-investment decisions independently. As implied in Proposition \ref{prop:purepricennn}, when the demand $D^{m,t}\geq\sum_{i\in \mathcal{I}}y_i^{m,t*}(\bar{p},\varphi_i)$ in subgame $\mathcal{S}^{\mathcal{ U }|\mathcal{ V}}$, each supplier $i$ can bid the price cap $\bar{p}$ to sell his bidding quantity $y_i^{m,t*}(\bar{p},\varphi_i)$. For convenience, at hour $t$ of month $m$, we denote the bidding quantity of supplier $i$ at the price cap $\bar{p}$ in subgame $\mathcal{S}^{\mathcal{ U }|\mathcal{ V}}$ as $y_i^{m,t*}(\bar{p},\varphi_i| \mathcal{S}^{\mathcal{ U }|\mathcal{ V}})$. We also denote the set of all the subgames as $\mathcal{ S}^{\Omega}$. Thus, if the demand $D^{m,t}\geq\max_{\mathcal{S}^{\mathcal{ U }|\mathcal{ V}} \in \mathcal{ S}^{\Omega}}\sum_{i\in \mathcal{I}}y_i^{m,t*}(\bar{p},\varphi_i| \mathcal{S}^{\mathcal{ U }|\mathcal{ V}})\triangleq D^{m,t,th'}$ for any $t$ and $m$, then each supplier $i$ can bid the price cap $\bar{p}$ to sell his bidding quantity $y_i^{m,t*}(\bar{p},\varphi_i)$ in any subgame for any $t$ and $m$.
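In this high-demand regime, the two candidate equilibrium revenues take the closed forms of \eqref{eq:proof_p82} and \eqref{eq:proof_p81}, and the investment threshold is their difference. The sketch below evaluates both and the resulting threshold under illustrative assumptions: a uniform availability distribution, and assumed values for the price cap, $\lambda$, and the optimal bidding quantities (none of these numbers come from the paper).

```python
# Hedged sketch of the threshold C^{th'}: with-storage revenue vs.
# without-storage revenue at the price cap. Assumptions (illustrative
# only): X ~ Uniform[0, 1], price cap 10, lambda = 8, and bidding
# quantities y*(p_cap, 1) = 0.5, y*(p_cap, 0) = 0.6.
p_cap, lam = 10.0, 8.0
f = lambda x: 1.0                 # density of X ~ Uniform[0, 1]
y_with, y_without = 0.5, 0.6      # assumed optimal bids

def integral(g, a, b, n=100_000):
    """Midpoint-rule quadrature, sufficient for this smooth integrand."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

rev_with = p_cap * y_with                                         # cf. (eq:proof_p82)
rev_without = lam * integral(lambda x: x * f(x), 0.0, y_without)  # cf. (eq:proof_p81)
C_th = rev_with - rev_without
print(round(C_th, 3))  # 3.56
```

With these illustrative numbers the threshold is positive, so a supplier facing cost $C_i$ below it would invest, matching the threshold logic of the proposition.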
Bidding the price cap in this way leads to the revenue $\pi_{i}^{R,m,t}(\bar{p},y_{i}^{m,t*}(\bar{p},\varphi_{i}),\bm{\varphi})$, which can be directly calculated based on supplier $i$'s parameters. In this case, we have the following. \begin{itemize} \item If supplier $i$ invests in storage, i.e., $\varphi_{i}=1$, his equilibrium revenue is \begin{align} \mathbb{ E }_{m,t}[\pi_{i}^{R,m,t}(\bar{p},y_{i}^*(\bar{p},1),\bm{\varphi})]&=\mathbb{ E }_{m,t}\left[\bar{p} y_i^{m,t*}(\bar{p},1)\right], \label{eq:nearf1} \end{align} which has been shown in \eqref{eq:proof_p82}. \item If supplier $i$ does not invest in storage, i.e., $\varphi_{i}=0$, his equilibrium revenue is \begin{align} \mathbb{ E }_{m,t}[\pi_{i}^{R,m,t}(\bar{p},y_{i}^*(\bar{p},0),\bm{\varphi})]=\mathbb{ E }_{m,t}[\lambda \int_0^{y_i^{m,t*}(\bar{p},0)}xf_i^{m,t}(x)dx], \label{eq:nearf2} \end{align} which has been shown in \eqref{eq:proof_p81}. \end{itemize} Comparing \eqref{eq:nearf1} and \eqref{eq:nearf2}, we characterize $C_i^{th'}$ in the same way as \eqref{eq:proof_p83}: \begin{align} \mathbb{E}_{m,t}[\bar{p} y_i^{m,t*}(\bar{p},1)-\lambda \int_{0}^{y_i^{m,t*}(\bar{p},0)}xf_i^{m,t}(x)dx] \triangleq C_i^{th'}. \end{align} \qed \subsection{Proof of Proposition \ref{prop:stoprofitnnn}} We prove this by contradiction and discuss a total of three cases. \begin{itemize} \item If one supplier does not invest in storage and gets zero profit (note that a without-storage supplier always has nonnegative profit), it means the demand satisfies the condition $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j),\forall j\in \mathcal{ U }$, as shown in Proposition \ref{prop:purepricennn} and Proposition \ref{lem:mixnnn}, where all the suppliers get zero revenues in the local energy market. This state is not stable because the with-storage supplier gets negative profit and he can always choose not to invest in storage, which increases his profit.
\item If one supplier invests in storage and gets negative profit, he can always choose not to invest in storage, which increases his profit. Thus, this case cannot be an equilibrium. \item If one supplier invests in storage and gets zero profit, it means the demand cannot satisfy the condition $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j),\forall j\in \mathcal{ U }$ (otherwise, this supplier would get negative profit), as shown in Proposition \ref{prop:purepricennn} and Proposition \ref{lem:mixnnn}. This state is not stable since this supplier can further choose not to invest in storage, after which the demand still cannot satisfy $D\leq \sum_{i\in\mathcal{U}} y_i^*(\bar{p},\varphi_i)-y_j^*(\bar{p},\varphi_j),\forall j \in \mathcal{ U }$. This leads to a positive revenue, i.e., a positive profit for this supplier. \end{itemize} In summary, any supplier always has strictly positive profits at the storage-investment equilibrium.\qed \bibliographystyle{IEEEtran}
\section*{Introduction} Let $G$ be a group with a $G$-conjugacy class $X$ of involutions, let $\omega(G)$ denote the set of element orders of $G$, and let $\pi\subseteq\omega(G)$. Denote by $gF_{\pi}(G,X)$ the graph on $X$, in which involutions $x$ and $y$ are adjacent if and only if $x\ne y$ and the order of the product $xy$ is contained in $\pi$; we call $gF_{\pi}(G,X)$ a \textit{$\pi$-product involution graph} of $G$. If the set $\pi$ consists only of odd numbers, then this graph is called a \textit{$\pi$-local fusion graph} of $G$ and denoted by $F_{\pi}(G,X)$. Local fusion graphs of a finite group reflect the structure of its conjugacy classes of involutions, since each path between given vertices in such a graph allows one to determine an element that conjugates its end vertices. A study of their various properties is of interest both for abstract group theory and for computational group theory (in particular, when solving the problem of constructing an involution centralizer). In this regard, investigating local fusion graphs of finite simple groups becomes an important step. For more background and motivation, we refer the reader to \cite{B11}. To date, attention has mainly been paid to local fusion graphs of symmetric groups, sporadic simple groups, and finite simple groups of Lie type. These graphs were studied in \cite{B11,BGR13,RW16,BR16}, where their basic properties, such as diameter and connectivity, were determined. In this paper, we are concerned with the further structure of $\pi$-local fusion graphs of finite simple groups of Lie type. We indicate a strong connection between such graphs and intricate combinatorial objects, such as antipodal covers and Deza graphs. In particular, we find several infinite families of $\pi$-local fusion graphs of finite simple groups of Lie type of even characteristic that are strictly Deza graphs. It is worth noting that the proofs proposed in the present paper are of a mainly combinatorial nature.
\section{Preliminaries} Next we list some terminology and notation that are used in this paper. Throughout the paper we consider undirected graphs without loops or multiple edges. The distance between vertices $a$ and $b$ of a graph $\Gamma$ is denoted by $\partial_\Gamma(a,b)$. For a vertex $a$ of a graph $\Gamma$, we denote by $\Gamma_i(a)$ the \textit{$i$-neighborhood} of $a$, that is the subgraph of $\Gamma$ induced by the set $\{b\in \Gamma |\partial_\Gamma(a,b)=i\}$, and the size of $\Gamma_1(a)$ is said to be the \textit{valency} of $a$ in $\Gamma$. For a connected graph $\Gamma$ of diameter $d$ and a subset of indices $I\subseteq\{0,...,d\}$ we denote by $\Gamma_{I}$ the graph on the vertex set of $\Gamma$, whose edges are the pairs of vertices $a$ and $b$ such that $\partial_\Gamma(a,b)\in I$. A graph is said to be \textit{regular}, if all its vertices have the same valency. A connected graph $\Gamma$ of diameter $d$ is called \textit{distance-regular}, if there are constants $c_i,a_i$ and $b_i$ such that for all $i\in \{0,1,\ldots,d\}$ and for each pair of vertices $x$ and $y$ such that $\partial_{\Gamma}(x,y)=i$, the following equalities hold: $c_i=|\Gamma_{i-1} (x)\cap \Gamma_1(y)|, a_i=|\Gamma_{i} (x)\cap \Gamma_1(y)|$, and $b_i=|\Gamma_{i+1} (x)\cap \Gamma_1(y)|$ (it is assumed that $b_d=c_0=0$), and, in particular, $|\Gamma_1(x)|=b_0=c_i+a_i+b_i$ (implying $\Gamma$ is regular of valency $b_0$). The sequence $\{b_0, b_1,\ldots, b_{d-1} ; c_1,\ldots, c_d \}$ is called the \textit{intersection array} of $\Gamma$. If the binary relation ``to be at distance 0 or d'' on the set of vertices of a connected graph $\Gamma$ of diameter $d$ is an equivalence relation, then the graph $\Gamma$ is called \textit{antipodal} and classes of this relation are called \textit{antipodal classes} of $\Gamma$. An important subclass of the so-called imprimitive distance-regular graphs is formed by antipodal distance-regular graphs of diameter 3.
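As a concrete illustration of the definition of distance-regularity, the following Python sketch (illustrative only, not part of the argument) computes the numbers $c_i$ and $b_i$ over all vertex pairs of the Petersen graph and recovers its intersection array $\{3,2;1,1\}$.

```python
# Verify distance-regularity of the Petersen graph: vertices are the
# 2-subsets of {0,...,4}, with disjoint pairs adjacent.
from itertools import combinations
from collections import deque

V = [frozenset(c) for c in combinations(range(5), 2)]
adj = {v: {u for u in V if not u & v} for v in V}

def distances(a):
    """Breadth-first search distances from vertex a."""
    d, q = {a: 0}, deque([a])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in d:
                d[u] = d[v] + 1
                q.append(u)
    return d

# For each distance i, collect the pairs (c_i, b_i) observed over all
# vertex pairs at distance i; distance-regularity means one pair per i.
params = {}
for a in V:
    d = distances(a)
    for b in V:
        i = d[b]
        c_i = sum(1 for u in adj[b] if d[u] == i - 1)
        b_i = sum(1 for u in adj[b] if d[u] == i + 1)
        params.setdefault(i, set()).add((c_i, b_i))

print(sorted(params.items()))  # [(0, {(0, 3)}), (1, {(1, 2)}), (2, {(1, 0)})]
```

Each distance class contributes a single pair $(c_i,b_i)$, so the graph is distance-regular with intersection array $\{3,2;1,1\}$.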
Antipodal distance-regular graphs of diameter 3 are precisely antipodal covers of complete graphs, which emerge in various geometric and combinatorial objects \cite{BCN,GH}. A regular graph $\Gamma$ of valency $k$ on $n$ vertices is called a \textit{Deza graph} with parameters $(n,k,b,a)$ if the number of common neighbours of two distinct vertices takes on only two values $a$ or $b$ (it is assumed that $a\leq b$). Deza graphs were originally introduced as a generalization of \textit{strongly regular graphs} (distance-regular graphs of diameter 2). A Deza graph is called a \textit{strictly Deza graph} if it has diameter $2$ and $a\ne b$ (the last condition implies it cannot be strongly regular). Apart from other important classes of graphs, Deza graphs also include the so-called \textit{divisible design graphs}: graphs that are regular and admit a partition of the vertex set into classes of the same size such that the number of common neighbors of any two distinct vertices depends only on whether these vertices belong to the same partition class or not \cite{HKM}. For a divisible design graph, such a partition of the vertex set is called \textit{canonical}. Our group-theoretic terminology and notation are mostly standard and follow \cite{Asch,B11}. The next result shows that there are infinite families of Deza graphs which are related to antipodal distance-regular graphs of diameter 3. \begin{lem}\label{l1} Let $\Gamma$ be an antipodal distance-regular graph of diameter $3$ with intersection array $\{k,(r-1)\mu,1;1,\mu,k\}$, where $r>2$ and $k=r\mu+1$. Then $\Gamma$ is a Deza graph with parameters $$(r(k+1), k, \mu, 0),$$ and $\Gamma_2$ is a Deza graph of diameter $2$ with parameters $$(r(k+1), (r-1)k, b, a),$$ where $\{a, b\}=\{(r-1)^2\mu, k(r-2)\}$. In particular, $\Gamma_2$ is a divisible design graph whose canonical partition coincides with the antipodal partition of $\Gamma$.
\end{lem} \begin{proof} First, let us compute the values of the intersection numbers $p^1_{22}, p^2_{22}$ and $p^3_{22}$ of $\Gamma$ (recall that $p_{ij}^t=|\{x\in \Gamma| \partial_\Gamma(a,x)=i, \partial_\Gamma(x,b)=j\}|$ does not depend on the choice of the pair of vertices $(a,b)$ with $\partial_\Gamma(a,b)=t$). By \cite[Lemma 4.1.7]{BCN}, we have \[ p^0_{22}=(r-1)k, p^1_{22}=\frac{b_1^2}{c_2}=(r-1)^2\mu, p^2_{22}=b_1+\frac{a_2(a_2-a_1)}{c_2}=(r-1)^2\mu, \] \[ p^3_{22}=\frac{c_3(a_2+a_3-a_1)}{c_2}=k(r-2). \] It follows that $\Gamma_2$ is a regular graph of valency $p^0_{22}$. Moreover, observe that $\Gamma_2$ has diameter 2. Indeed, for any distinct non-adjacent vertices $a$ and $b$ of $\Gamma_2$ we have $\partial_\Gamma(a,b)=1,3$. Besides, since $p^1_{22}$ and $p^3_{22}$ are both non-zero, there is a vertex $x$ such that $\partial_\Gamma(a,x)=\partial_\Gamma(b,x)=2$, and thus $\partial_{\Gamma_2}(a,b)\le 2$. It is also clear that $\Gamma_2$ cannot be a complete graph, which proves the required claim. Hence any two distinct vertices of $\Gamma_2$ have precisely $p^1_{22}=p^2_{22}$ or $p^3_{22}$ common neighbors, which implies that $\Gamma_2$ is a Deza graph (which is also edge-regular with $\lambda=p^2_{22}$). The remaining statements follow immediately from the definition of $\Gamma$. \end{proof} \begin{rem} Note that the result of this lemma was initially proved in \cite[Proposition 4.15]{HKM} in a matrix form, however, no explicit formulas for the quadruple of parameters of $\Gamma_2$ were provided there.\end{rem} \begin{lem}\label{l2} Let $\Gamma$ be an antipodal distance-regular graph of diameter $3$ with intersection array $\{k,(r-1)\mu,1;1,\mu,k\}$, where $r\notin\{2, \mu+2\}$ and $k=r\mu+1$, and put $\Phi=\Gamma_2$. 
Then the graph $\Omega^c$ on the vertex set of $\Phi$, whose edges are the pairs of distinct vertices $x$ and $y$ such that $|\Phi_1(x)\cap\Phi_1(y)|=c$, is isomorphic to the graph $K_{(k+1)\times r}$ (that is, a complete multipartite graph with $k+1$ parts of the same size $r$) if $c=(r-1)^2\mu$, while it is a union of $k+1$ isolated $r$-cliques if $c=k(r-2)$. \end{lem} \begin{proof} First note that the condition $r\neq \mu+2$ guarantees that $\Phi$ is a strictly Deza graph. Then by Lemma~\ref{l1} it is easy to see that $\Omega^{(r-1)^2\mu}$ is the complement graph of $\Omega^{k(r-2)}$. Take $c=k(r-2)$. It remains to note that $x\in (\Omega^c)_1(y)$ if and only if $x\in \Gamma_3(y)$. That is, $\Omega^c=\Gamma_3$ is a union of $k+1$ isolated $r$-cliques and hence $\Omega^{(r-1)^2\mu}=\overline{\Omega^c}\cong K_{(k+1)\times r}$. \end{proof} \begin{rem} It follows that Lemma~\ref{l1} actually provides a construction of an infinite class of Deza graphs with imprimitive strongly regular graphs $\Omega^c$, which seems to have been unnoticed before. As we will see below (see also \cite{Tsi15,Tsi17}), the structure of the automorphism group of such a Deza graph can be rather sophisticated. \end{rem} \medskip Let $G\in \{L_2(q),Sz(q),U_3(q)\}$, where $q=2^n\ge 4$. Further we denote by $\chi(G)$ the associated prime number of $G$ in the sense of M. Suzuki, that is \[\chi(G)= \begin{cases} 5, & \mbox{if } G=Sz(q) \\ 3, & \mbox{if } G\in \{PSL_2(q),PSU_3(q)\} \end{cases}.\] A pair of involutions $\{x,y\}$ of $G$ is called \textit{distinguished} if $|xy|=\chi(G)$. Note that due to a result of Suzuki (e.g. see \cite{Suz0, Suz2}), $G$ acts transitively on the set of ordered distinguished pairs of involutions, and hence there is a unique conjugacy class $\mathcal{S}$ of dihedral subgroups of order ${2\cdot\chi(G)}$ in $G$. In other words, a pair of involutions $\{x,y\}$ of $G$ is distinguished if and only if $\langle x, y\rangle \in \mathcal{S}$.
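The intersection-number computations appearing in the proof of Lemma~\ref{l1} can be double-checked mechanically. The sketch below (illustrative, not part of the argument) verifies the identities $p^1_{22}=p^2_{22}=(r-1)^2\mu$ and $p^3_{22}=k(r-2)$ over a grid of parameters, and also that the Suzuki intersection array discussed below fits the template of Lemma~\ref{l1} with $\mu=q+1$ and $r=q-1$.

```python
# Numeric double-check of the intersection-number formulas from the
# proof of Lemma 1: for the array {k, (r-1)mu, 1; 1, mu, k} with
# k = r*mu + 1, the a_i are recovered from a_i = k - b_i - c_i and the
# p^i_{22} reduce to (r-1)^2*mu, (r-1)^2*mu and k*(r-2).
for r in range(3, 10):
    for mu in range(1, 10):
        k = r * mu + 1
        b1, c2, c3 = (r - 1) * mu, mu, k
        a1 = k - b1 - 1          # a_1 = k - b_1 - c_1
        a2 = k - 1 - mu          # a_2 = k - b_2 - c_2
        a3 = 0                   # a_3 = k - c_3
        p1 = b1 * b1 // c2
        p2 = b1 + a2 * (a2 - a1) // c2
        p3 = c3 * (a2 + a3 - a1) // c2
        assert p1 == p2 == (r - 1) ** 2 * mu
        assert p3 == k * (r - 2)

# The Suzuki array {q^2, q^2-q-2, 1; 1, q+1, q^2} is the array of
# Lemma 1 with mu = q + 1 and r = q - 1, hence k = q^2.
for q in (4, 8, 16, 32):
    mu, r = q + 1, q - 1
    assert r * mu + 1 == q * q
    assert (r - 1) * mu == q * q - q - 2
print("all identity checks passed")
```

In particular, the resulting Deza parameters $(r-1)^2\mu$ and $k(r-2)$ coincide exactly with the two common-neighbour counts computed in the proof.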
This observation, in particular, implies that there is an isomorphism between the $\{\chi(G)\}$-local fusion graph of $G=PSL_2(q)$ with $q=2^n\ge 4$ and its $S_3$-involution graph, which was shown to have diameter 3 in \cite{DevGiu} and, moreover, by \cite{Tsi17}, is isomorphic to a Mathon graph (for a construction of the latter, see \cite[Proposition 12.5.3]{BCN}). Next we reformulate some results of \cite{Tsi15} and \cite{Tsi17} in terms of local fusion graphs, which will be basic to our further arguments. \begin{thm}[see {\cite{Tsi15,Tsi17}}]\label{t2} For each group $G\in \{PSL_2(q), PSU_3(q), Sz(q)\}$ with $q=2^n\ge 4$, its $\{\chi(G)\}$-local fusion graph is an arc-transitive antipodal distance-regular graph of diameter $3$ with intersection array \begin{itemize} \item[$(1)$] $\{q,q-2,1;1,1,q\}$ if $G=PSL_2(q)$, \item[$(2)$] $\{q^2,q^2-q-2,1;1,q+1,q^2\}$ if $G=Sz(q)$, or \item[$(3)$] $\{q^3,q^3-q^2-q-2,1;1,q^2+q+1,q^3\}$ if $G=PSU_3(q)$. \end{itemize}\end{thm} \section{Main result} Now we present the main result of this paper. \begin{thm}\label{t1} For each group $G\in \{PSL_2(q), PSU_3(q), Sz(q)\}$ with $q=2^n\ge 4$ and for $\pi$ being the subset of all odd integers in $\omega(G)-\{2,\chi(G)\}$, the (unique) $\pi$-local fusion graph of $G$ is a vertex-transitive edge-regular Deza graph of diameter $2$ with parameters $(v, k, b, a)$ as follows: \begin{itemize} \item[$(1)$] $(q^2-1, q(q-2), q(q-3), (q-2)^2)$ if $G=PSL_2(q)$, \item[$(2)$] $((q^2+1)(q-1), q^2(q-2), (q-2)^2(q+1), q^2(q-3))$ if $G=Sz(q)$, \item[$(3)$] $((q^3+1)(q-1), q^3(q-2), (q-2)^2(q^2+q+1), q^3(q-3))$ if $G=PSU_3(q)$. \end{itemize} Moreover, it is a divisible design graph, in which every class of the canonical partition is formed by the set of involutions of a Sylow $2$-subgroup of $G$. \end{thm} \begin{proof} Put $\tilde\pi=\omega(G)-\{2,\chi(G)\}$. Let $X$ be the class of involutions of $G$ and let $\Gamma$ be the $\{\chi(G)\}$-local fusion graph of $G$.
Then by Theorem~\ref{t2}, $\Gamma$ is an antipodal distance-regular graph of diameter 3 with intersection array $$\{q^l, (q-2)(q^l-1)/(q-1),1;1,(q^l-1)/(q-1), q^l\},$$ where $q^l$ is exactly the size of a Sylow 2-subgroup of $G$ (so that $l\in \{1,2,3\}$). Note that each antipodal class of $\Gamma$ is formed by the set of (central) involutions of a Sylow $2$-subgroup of $G$, and there are exactly $q^l+1$ such classes. Now define the graph $\Phi$ as the graph $\Gamma$ with antipodal classes turned into cliques. By construction, $\Phi$ coincides with $\Gamma_{1,3}$. Let us prove that $\Phi$ is the complement of the (unique) $\tilde\pi$-product involution graph $gF_{\tilde\pi}(G,X)$ of $G$. To see this, it suffices to observe that by \cite{Suz0} (see also \cite{Suz2}) the order of the product of any two involutions of $G$ equals 2 if and only if these involutions commute (and thus belong to the same Sylow 2-subgroup of $G$). Hence $gF_{\tilde\pi}(G,X)=\Gamma_2$. Therefore, by Lemma~\ref{l1}, $gF_{\tilde\pi}(G,X)$ is a Deza graph (more precisely, a divisible design graph whose canonical partition coincides with the antipodal partition of $\Gamma$) of diameter 2, in which the number of common neighbors of two distinct vertices equals $(q-2)^2(q^l-1)/(q-1)$ or $q^l(q-3)$. Clearly, these values coincide if and only if $l=1$ and $G=PSL_2(4)$ (in which case $gF_{\tilde\pi}(G,X)$ is the strongly regular graph with parameters $(15,8,4,4)$, that is, the triangular graph $T(6)$; note that $\Gamma$ itself is then the line graph of the Petersen graph). Hence in all other cases $gF_{\tilde\pi}(G,X)$ is a strictly Deza graph. Furthermore, note that the order of the product of two non-commuting involutions of $G$ cannot be even. Indeed, suppose on the contrary that there are involutions $x$ and $y$ such that $|xy|=2m$ and $z=(xy)^m\ne 1$. Then $z^2=1$ and hence $z^x=z^{-1}=z=z^y$. But the Sylow $2$-subgroups of $G$ form a TI-set, and each Sylow 2-subgroup $S$ of $G$ acts (by conjugation) regularly on ${\rm Syl}_2(G)\setminus\{S\}$ (see \cite{Suz0}), a contradiction.
Thus, we conclude that $gF_{\tilde\pi}(G,X)$ coincides with the ${\pi}$-local fusion graph of $G$, where ${\pi}$ consists of the odd elements of $\tilde\pi$. \end{proof} \begin{rem} Except for the case $G=PSL_2(4)$, the ${\pi}$-local fusion graphs of $G$ defined in Theorem~\ref{t1} appear to be established as (strictly) Deza graphs for the first time in this work. \end{rem} \section*{Acknowledgements} This research was supported by the Russian Science Foundation under grant no. 20-71-00122 and was performed at the N.N. Krasovskii Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences.
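Since Lemma~\ref{l1} and Theorem~\ref{t1} are stated through explicit closed-form parameters, they admit a quick mechanical cross-check. The following Python sketch is an illustration of ours, not part of the paper's arguments (the function names `p22` and `deza` are ad hoc): it recomputes the intersection numbers from the array $\{k,(r-1)\mu,1;1,\mu,k\}$ with $k=r\mu+1$, and confirms that the specialization $r=q-1$, $\mu=(q^l-1)/(q-1)$, $k=q^l$ of Theorem~\ref{t2} reproduces the parameter tuples of Theorem~\ref{t1}.

```python
# Sanity check (illustrative sketch, not part of the proofs): verify the
# intersection-number formulas of Lemma 1 and the Deza parameters of
# Theorem 1 for small admissible parameter values.

def p22(r, mu):
    """Intersection numbers p^t_{22}, t = 0..3, of an antipodal DRG with
    array {k, (r-1)mu, 1; 1, mu, k}, k = r*mu + 1 (cf. BCN, Lemma 4.1.7)."""
    k = r * mu + 1
    b0, b1 = k, (r - 1) * mu              # b_2 = 1
    c2, c3 = mu, k                        # c_1 = 1
    a1 = k - b1 - 1                       # a_i = k - b_i - c_i
    a2 = k - 1 - c2
    a3 = k - c3
    return (b0 * b1 // c2,                # p^0_{22} = valency of Gamma_2
            b1 * b1 // c2,                # p^1_{22}
            b1 + a2 * (a2 - a1) // c2,    # p^2_{22}
            c3 * (a2 + a3 - a1) // c2)    # p^3_{22}

# Lemma 1: p^0 = (r-1)k, p^1 = p^2 = (r-1)^2*mu, p^3 = k(r-2).
for r in range(3, 9):
    for mu in range(1, 7):
        k = r * mu + 1
        p0, p1, p2, p3 = p22(r, mu)
        assert p0 == (r - 1) * k
        assert p1 == p2 == (r - 1) ** 2 * mu
        assert p3 == k * (r - 2)

def deza(q, l):
    """(v, k, b, a) of Gamma_2 for the arrays of Theorem 2 (r = q - 1)."""
    r, mu = q - 1, (q ** l - 1) // (q - 1)
    k = q ** l                            # equals r*mu + 1
    p0, p1, _, p3 = p22(r, mu)
    return (r * (k + 1), p0, max(p1, p3), min(p1, p3))

# Theorem 1: the three parameter tuples, for l = 1, 2, 3 respectively.
for q in (4, 8, 16, 32):
    assert deza(q, 1) == (q**2 - 1, q*(q - 2), q*(q - 3), (q - 2)**2)
    assert deza(q, 2) == ((q**2 + 1)*(q - 1), q**2*(q - 2),
                          (q - 2)**2*(q + 1), q**2*(q - 3))
    assert deza(q, 3) == ((q**3 + 1)*(q - 1), q**3*(q - 2),
                          (q - 2)**2*(q**2 + q + 1), q**3*(q - 3))
print("all parameter identities verified")
```

Note that for $q=4$, $l=1$ the two common-neighbor counts coincide ($p^1_{22}=p^3_{22}=4$), in agreement with the exceptional case $G=PSL_2(4)$ discussed in the proof of Theorem~\ref{t1}.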
\section{Introduction} Neutron stars are observed as several classes of self-gravitating systems: as radio and X-ray pulsars, as X-ray bursters, as compact thermal X-ray sources in supernova remnants, as rotating radio transients. In general, the structure of neutron stars and the relation between the mass and the radius are determined by equations of state (EoS) of dense matter. The maximal mass of a neutron star is still an open question. Recent observations allow one to estimate this limit as at least $2M_{\odot}$: the well-measured mass of the pulsar PSR J1614-2230 is $1.97M_{\odot}$ \cite{Demorest}, while, for the pulsar J0348+0432, it is $2.01M_{\odot}$ \cite{Antoniadis}. Other examples of massive neutron stars are Vela X-1 ($\sim 1.8M_{\odot}$ \cite{Rawls}) and 4U 1822-371 ($\sim 2M_{\odot}$ \cite{Munoz}). There are some indications in favor of the existence of more massive neutron stars with masses $\sim 2.4M_{\odot}$ (the possible masses of B1957+20 \cite{Kerk} and 4U 1700-377 \cite{Clark}) or even $\sim 2.7M_{\odot}$ (J1748-2021B \cite{Freire}). It is interesting to note that, for various EoS including hyperons, the maximal mass limit for non-magnetic neutron stars is considerably below the two-solar-mass limit. The hyperonization process softens the EoS, and the maximal allowable mass is thereby reduced \cite{Glendenning,Glendenning-2,Schaffner,Vidana,Schulze}. There are several ways to approach the solution of this problem (the so-called ``{\it hyperon puzzle}''). Firstly, extensions of the simple model of hyperonic matter (with three exchange meson fields, the so-called `$\rho\omega\sigma$' model) allow one to increase the maximal mass. Various approaches along this line have been proposed. For instance, larger hyperon-vector couplings (in comparison with the quark counting rule) imply a stiffening of the EoS \cite{Hofmann,Rikovska,Sedrakian,Miyatsu-2}. A similar effect occurs in the model with chiral quark-meson coupling \cite{Miyatsu}.
The quartic vector-meson terms in the Lagrangian \cite{Bednarek} or the inclusion of an additional vector meson mediating a repulsive interaction amongst hyperons \cite{Weissenborn} also raise the maximal mass limit. The authors of Ref. \cite{Whittenbury} proposed an EoS with maximum mass $\sim 2.1M_{\odot}$ using the quark-meson coupling model, which naturally incorporates hyperons without additional parameters. A model with in-medium hyperon interactions is considered in \cite{Wei}. Another source for increasing the maximal mass limit is the existence of strong magnetic fields inside the star. The existence of soft gamma-ray repeaters and anomalous X-ray pulsars can be linked to neutron stars with very strong magnetic fields, of the order of $10^{15}$ G on the surface. In these cases, the maximum magnetic field in the central regions of the neutron star can exceed $10^{18}$ G, according to the scalar virial theorem. Such magnetic fields affect the EoS of dense matter considerably and result in an increase of the maximal mass of neutron stars. Various models of dense nuclear matter in the presence of strong magnetic fields have been considered in the literature. The simplest model, with an interacting $npe\mu$ gas, is investigated in \cite{Lattimer}. Models with hyperons and quarks are considered in \cite{Rabhi}-\cite{Lopes}. It has been demonstrated that Landau quantization leads to a softening of the matter EoS, but accounting for the contribution of the magnetic field to the pressure and density leads, on the other hand, to a stiffening of the EoS. Therefore neutron stars are very peculiar objects for testing theories of matter at high densities and in strong magnetic fields. It is interesting to note that data about neutron stars (mainly the mass-radius ($M-R$) relation) can be used for investigating possible deviations from General Relativity (GR).
The initial motivation for studying modified gravity came from the discovery of the accelerated expansion of the universe, confirmed by numerous independent observations. These include the Hubble diagram for type Ia supernovae \cite{Perlmutter, Riess1,Riess2}, cosmic microwave background radiation (CMBR) data \cite{Spergel}, surveys of weak gravitational lensing \cite{Schmidt} and data on Lyman-alpha forest absorption lines \cite{McDonald}. This acceleration takes place at relatively small distances (``Hubble flow'') and requires (in GR) a non-standard cosmic fluid (dark energy) filling the universe, with negative pressure but not clustered in the large scale structure. The nature of dark energy is unclear. Although, from an observational viewpoint, the so-called $\Lambda$CDM model (where dark energy is identified with the Einstein cosmological constant) is in agreement with the data, there are various problems and shortcomings at the theoretical level. One of these issues is the ``smallness'' of the cosmological constant, i.e. the difference of 120 orders of magnitude between its observed value and the one predicted by quantum field theory \cite{Weinberg}. An alternative approach to the dark energy problem consists in extending GR. In this case, the accelerated expansion can be obtained without invoking ``dark energy'', by enlarging the gravitational sector \cite{Capozziello1, Capozziello2, Odintsov1, Turner, Odintsov-3, Capozziello3,Capozziello_book, Capozziello4, Cruz}. Therefore theories of modified gravity can be considered as real alternatives to GR. The study of relativistic stars in modified gravity is interesting for several reasons and could constitute a formidable probe for such theories.
Firstly, one can reject models that do not allow the existence of stable star configurations \cite{Briscese, Abdalla, Bamba, Kobayashi-Maeda, Nojiri5,Lang} (one has to note, however, that stability can be achieved due to the ``chameleon mechanism'' \cite{Tsujikawa, Upadhye-Hu} and may depend on the choice of the EoS). Secondly, there is the possibility, in the framework of modified gravity, of new stellar structures escaping the standard stellar models. The observation of such anomalous self-gravitating structures could provide strong evidence for Extended Gravity (see e.g. \cite{Laurentis, Laurentis2, Farinelli}). The present paper is devoted to neutron stars with strong magnetic fields in the framework of analytic $f(R)$ gravity. Assuming a simple model for the strong interactions, one can obtain the EoS for dense matter in a magnetic field. Landau quantization, due to the magnetic field, turns out to have significant effects. We consider the cases of slowly and fast varying fields. The paper is organized as follows. In Sec. II, we briefly consider the field equations of $f(R)$ gravity and the modified Tolman--Oppenheimer--Volkoff (TOV) equations. Then the relativistic mean field theory for dense matter in strong magnetic fields is presented (Sec. III). In Sec. IV, the neutron star models for strong magnetic fields in quadratic ($f(R)=R+\alpha R^2$) and cubic ($f(R)=R+\beta R^{3}$) gravity are presented. The $M-R$ relation is derived and compared with the one in GR. Conclusions and outlooks are reported in Sec. V. \section{Modified TOV equations in $f(R)$ gravity} The action of $f(R)$ gravity is \begin{equation}\label{action} S=\frac{c^4}{16\pi G}\int d^4x \sqrt{-g}f(R) + S_{{\rm matter}}\quad\,. \end{equation} We express $f(R)=R+\alpha h(R)$. The field equations are \begin{equation}\label{field} (1+\alpha h_{R})G_{\mu \nu }-\frac{1}{2}\alpha(h-h_{R}R)g_{\mu \nu }-\alpha (\nabla _{\mu }\nabla _{\nu }-g_{\mu \nu }\Box )h_{R}=8\pi G T_{\mu \nu }/c^{4}.
\end{equation} Here $g$ is the determinant of the metric $g_{\mu\nu}$ and $S_{\rm matter}$ is the action of the standard perfect fluid matter. The Einstein tensor is $G_{\mu\nu}=R_{\mu\nu}-\frac12Rg_{\mu\nu}$ and ${\displaystyle h_R=\frac{dh}{dR}}$. For stellar configurations, one can assume a spherically symmetric metric with two independent functions of the radial coordinate, that is: \begin{equation}\label{metric} ds^2= -e^{2\phi}c^2 dt^2 +e^{2\lambda}dr^2 +r^2 (d\theta^2 +\sin^2\theta d\phi^2). \end{equation} For the exterior solution, we assume the Schwarzschild metric. Therefore it is convenient to define the variable \cite{Stephani,Cooney} \begin{equation}\label{mass} e^{-2\lambda}=1-\frac{2G M}{c^2 r}. \end{equation} The value of the variable $M$ at the star surface is the gravitational mass. For a perfect fluid, the energy-momentum tensor is $T_{\mu\nu}=\mbox{diag}(e^{2\phi}\rho c^{2}, e^{2\lambda}P, r^2P, r^{2}\sin^{2}\theta P)$, where $\rho$ is the matter density and $P$ is the pressure. The field equations of interest are \begin{eqnarray} -8\pi G \rho/c^2 &=& -r^{-2} +e^{-2\lambda}(1-2r\lambda')r^{-2} +\alpha h_R(-r^{-2} +e^{-2\lambda}(1-2r\lambda')r^{-2}) \nonumber \\ && -\frac12\alpha(h-h_{R}R) +e^{-2\lambda}\alpha[h_R'r^{-1}(2-r\lambda')+h_R''] \label{f-tt},\\ 8\pi G P/c^4 &=& -r^{-2} +e^{-2\lambda}(1+2r\phi')r^{-2} +\alpha h_R(-r^{-2} +e^{-2\lambda}(1+2r\phi')r^{-2}) \nonumber \\ && -\frac12\alpha(h-h_{R}R) +e^{-2\lambda}\alpha h_R'r^{-1}(2+r\phi'), \label{f-rr} \end{eqnarray} where $'\equiv d/dr$. The second TOV equation follows from the conservation law $T_{\nu;\mu}^{\mu}=0$ and Eq. (\ref{f-rr}).
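For orientation, the independent equation for the curvature scalar used below is just the trace of the field equations. A sketch of that step (metric signature $(-,+,+,+)$, so that $g^{\mu\nu}G_{\mu\nu}=-R$, $g^{\mu\nu}g_{\mu\nu}=4$ and $g^{\mu\nu}T_{\mu\nu}=3P-\rho c^{2}$ for the perfect fluid above):

```latex
\begin{align*}
-(1+\alpha h_{R})R - 2\alpha\left(h - h_{R}R\right) + 3\alpha\,\Box h_{R}
  &= \frac{8\pi G}{c^{4}}\,g^{\mu\nu}T_{\mu\nu},\\
\alpha h_{R}R - 2\alpha h + 3\alpha\,\Box h_{R} - R
  &= -\frac{8\pi G}{c^{4}}\left(\rho c^{2} - 3P\right).
\end{align*}
```

Expanding $\Box h_{R}$ on the static metric and passing to dimensionless variables reproduces the third, curvature-scalar equation quoted below.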
As a result, the modified TOV equations can be written as \cite{Astashenok} \begin{equation}\label{TOV-1} \left(1+\alpha h_{{R}}+\frac{1}{2}\alpha h'_{{R}} r\right)\frac{dm}{dr}=4\pi{\rho}r^{2}-\frac{1}{4}\alpha r^2 \left[h-h_{{R}}{R}-2\left(1-\frac{2m}{r}\right)\left(\frac{2h'_{{R}}}{r}+h''_{{R}}\right)\right], \end{equation} \begin{equation}\label{TOV-2} 8\pi p=-2\left(1+\alpha h_{{R}}\right)\frac{m}{r^{3}}-\left(1-\frac{2m}{r}\right)\left(\frac{2}{r}(1+\alpha h_{{R}})+\alpha r_{g}^{2} h'_{{R}}\right)({\rho}+p)^{-1}\frac{dp}{dr}- \end{equation} $$ -\frac{1}{2}\alpha \left[h-h_{{R}}{R}-4\left(1-\frac{2m}{r}\right)\frac{h'_{{R}}}{r}\right]. $$ Here we have introduced the dimensionless variables $M=m M_{\odot},\quad r\rightarrow r_{g}r, \quad \rho\rightarrow\rho M_{\odot}/r_{g}^{3},\quad P\rightarrow p M_{\odot}c^{2}/r_{g}^{3}, \quad R\rightarrow {R}/r_{g}^{2}$, $\alpha r_{g}^{2}h(R)\rightarrow \alpha h(R)$, where $r_{g}=GM_{\odot}/c^{2}=1.47473$ km. The third independent equation, for the Ricci curvature scalar, is \begin{equation}\label{TOV-3} 3\alpha r_{g}^{2}\left[\left(\frac{2}{r}-\frac{3m}{r^{2}}-\frac{dm}{rdr}-\left(1-\frac{2m}{r}\right)\frac{dp}{(\rho+p)dr}\right)\frac{d}{dr}+ \left(1-\frac{2m}{r}\right)\frac{d^{2}}{dr^{2}}\right]h_{{R}}+\alpha r_{g}^{2} h_{{R}}{R}-2\alpha r_{g}^{2} h-{R}=-8\pi({\rho}-3p)\,. \end{equation} Eqs. (\ref{TOV-1}), (\ref{TOV-2}), and (\ref{TOV-3}) can be solved numerically for a given EoS. In order to obtain a solution, one can use a perturbative approach (see \cite{Arapoglu,Alavirad,Astashenok,Astashenok-2} for details). In the framework of the perturbative approach, the terms containing $h(R)$ are assumed to be of first order in the small parameter $\alpha$, so all such terms are evaluated at ${\mathcal O}(\alpha)$ order. The Ricci curvature scalar at zeroth order is $R^{(0)}=8\pi (\rho^{(0)}-3p^{(0)})$. Therefore the deviation from GR depends strongly on the assumed form of the EoS.
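To make the perturbative scheme concrete, the sketch below integrates the zeroth-order (GR) TOV system on which the $\mathcal{O}(\alpha)$ terms are then evaluated. The polytropic EoS, the units ($G=c=1$), the step size and the central pressure are illustrative assumptions of this sketch, not the EoS used in this paper:

```python
import math

def tov_rhs(r, m, p, eos_rho):
    """Zeroth-order (GR) right-hand sides of the TOV system in units G = c = 1.
    In the perturbative scheme, the alpha*h(R) terms are O(alpha) corrections
    evaluated on this solution, with R^(0) = 8*pi*(rho - 3*p)."""
    rho = eos_rho(p)
    dm = 4.0 * math.pi * rho * r**2
    dp = -(rho + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
    return dm, dp

def integrate_star(p_c, eos_rho, dr=1e-3):
    """Euler-integrate outward from the centre until the pressure vanishes."""
    r, m, p = dr, 0.0, p_c
    while p > 1e-12 * p_c:
        dm, dp = tov_rhs(r, m, p, eos_rho)
        m += dm * dr
        p += dp * dr
        r += dr
    return r, m  # dimensionless stellar radius and gravitational mass

# Illustrative polytrope p = K rho^2, i.e. rho(p) = sqrt(p/K); the values of
# K and of the central pressure are arbitrary, chosen only to give a mildly
# relativistic configuration.
K = 100.0
radius, mass = integrate_star(p_c=2.5e-5, eos_rho=lambda p: math.sqrt(p / K))
```

The $\mathcal{O}(\alpha)$ corrections of Eqs. (\ref{TOV-1})-(\ref{TOV-3}) would then be evaluated on the profile $m(r)$, $p(r)$ produced by such a zeroth-order pass.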
\section{Relativistic mean field theory for dense matter in the presence of a strong magnetic field} Let us assume a simple model for describing nuclear matter in a magnetic field. The magnetic field $B$ is assumed to be directed along the $z$-axis, i.e. the 4-potential is $A^{\mu}=(0,0,Bx,0)$. For nuclear matter consisting of the baryon octet ($b=$ $p$, $n$, $\Lambda$, $\Sigma^{0,\pm}$, $\Xi^{0,-}$) interacting with the magnetic field and the scalar $\sigma$, isoscalar-vector $\omega_\mu$ and isovector-vector $\rho_\mu$ meson fields, together with leptons ($l=$ $e^{-}$, $\mu^{-}$), the Lagrangian is \cite{Typel} \begin{equation} \mathcal{L}=\sum_{b}\bar{\psi}_{b}\left(\gamma_{\mu}(i\partial^{\mu}-q_{b}A^{\mu}-g_{\omega b}\omega^{\mu}-\frac{1}{2}g_{\rho b}{\tau}\cdot{\rho}^{\mu})-(m_{b}-g_{\sigma b}\sigma)\right)\psi_{b}+\sum_{l}\bar{\psi}_{l}\left(\gamma_{\mu}(i\partial^{\mu}-q_{l}A^{\mu})-m_{l}\right)\psi_{l}+ \end{equation} $$ +\frac{1}{2}\left((\partial_{\mu}\sigma)^{2}-m^{2}_{\sigma}\sigma^{2}\right)-V(\sigma)-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}m^{2}_{\omega}\omega^{2}-\frac{1}{4}\omega_{\mu\nu}\omega^{\mu\nu}- \frac{1}{4}{\rho}_{\mu\nu}{\rho}^{\mu\nu}+\frac{1}{2}m^{2}_{\rho}{\rho}_{\mu}^{2}. $$ Here the mesonic and electromagnetic field strength tensors are defined by the usual relations $\omega_{\mu\nu}=\partial_{\mu}\omega_{\nu}-\partial_{\nu}\omega_{\mu}$, $\rho_{\mu\nu}=\partial_{\mu}\rho_{\nu}-\partial_{\nu}\rho_{\mu}$, $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$. For the sake of simplicity, we consider frozen-field configurations of the electromagnetic field. We also neglect the anomalous magnetic moments (AMM) of baryons and leptons because their effect is very small. The strong interaction couplings $g_{b\sigma}$, $g_{b\omega}$ and $g_{b\rho}$ depend on the density.
We use the parameterization adopted in \cite{Typel}: \begin{equation} g_{i}(\rho)=g_{i0}f_{i}(x), \quad x=\rho/\rho_{0}\,, \end{equation} where $$f_{i}(x)=a_{i}\frac{1+b_{i}(x+d_{i})^2}{1+c_{i}(x+d_{i})^{2}}\,.$$ For the isovector field, it is $$ g_{b\rho}=g_{b0}\exp[-a_{\rho}(x-1)]. $$ The values of the constants $a_{i}$, $b_{i}$, $c_{i}$, $d_{i}$ are given in \cite{Typel}. Using the mean-field approximation, one can obtain the following equations for the meson fields: \begin{equation}\label{0} m^{2}_{\sigma}\sigma+\frac{dV}{d\sigma}=\sum_{b}g_{\sigma b} n_{b}^{s},\quad m^{2}_{\omega}\omega_{0}=\sum_{b}g_{\omega b} n_{b},\quad m^{2}_{\rho}\rho_{03}=\sum_{b}g_{\rho b}\tau_{3b} n_{b}. \end{equation} Here $\sigma$, $\omega_{0}$, $\rho_{03}$ are the expectation values of the meson fields in uniform matter. The quantities $n^{s}_{b}$ and $n_{b}$ are the scalar and vector baryon number densities, respectively. The simplest scalar field potential is defined as \begin{equation} V(\sigma)=\frac{1}{3}b m_{N}(g_{\sigma N}\sigma)^{3}+\frac{1}{4}c(g_{\sigma N}\sigma)^{4}, \end{equation} where $b$ and $c$ are dimensionless constants. The values of the nucleon-meson couplings and of the parameters $b$, $c$ are given in Table I. From the Dirac equations for charged and neutral baryons and leptons, we have the energy spectra: \begin{equation} E^{b}_{\nu}=(k_{z}^{2}+m_{b}^{*2}+2\nu|q_{b}|B)^{1/2}+g_{\omega b}\omega_{0}+\tau_{3b}g_{\rho b}\rho_{03}+\Sigma^{R}_{0}, \end{equation} \begin{equation} E^{b}=(k^{2}+m_{b}^{*2})^{1/2}+g_{\omega b}\omega_{0}+\tau_{3b}g_{\rho b}\rho_{03}+\Sigma^{R}_{0}, \end{equation} \begin{equation} E^{l}_{\nu}=(k_{z}^{2}+m_{l}^{2}+2\nu|q_{l}|B)^{1/2}. \end{equation} The number $\nu=n+1/2-\mathrm{sgn}(q)s/2$ labels the Landau levels of the fermions with electric charge $q$; the spin number is $s=\pm 1$ for the spin-up and spin-down cases, respectively. The spin degeneracy is $g_{\nu}=1$ for the lowest Landau level ($\nu=0$) and $g_{\nu}=2$ for all other levels.
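The occupation of Landau levels implied by these spectra can be sketched numerically; the routine below sums the vector density of a charged species over the filled levels, terminating where the longitudinal Fermi momentum would become imaginary. The numerical values in the example are assumed for illustration, not taken from the models of Table I:

```python
import math

def landau_density(E_f, m_eff, qB):
    """Vector density of a charged fermion species in a magnetic field,
    n = |q|B/(2 pi^2) * sum_nu g_nu k_{f,nu}   (natural units: energies in
    MeV, |q|B in MeV^2), with k_{f,nu} = sqrt(E_f^2 - m_eff^2 - 2 nu |q| B).
    The sum terminates at the level where k_{f,nu}^2 would become negative;
    the spin degeneracy is g_nu = 1 for nu = 0 and 2 otherwise."""
    total, nu = 0.0, 0
    while True:
        k2 = E_f**2 - m_eff**2 - 2.0 * nu * qB
        if k2 <= 0.0:
            break
        total += (1 if nu == 0 else 2) * math.sqrt(k2)
        nu += 1
    return qB / (2.0 * math.pi**2) * total, nu  # density, number of levels

# Illustrative (assumed) numbers: E_f = 750 MeV, m* = 700 MeV and
# |q|B = 5.9e3 MeV^2, roughly a proton-like charge at B ~ 1e18 G.
n_vec, levels = landau_density(750.0, 700.0, 5.9e3)
```

With these numbers only seven levels ($\nu = 0,\dots,6$) are filled, consistent with the statement below that at $B\sim 10^{18}$ G only a few Landau levels are occupied.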
The effective mass of the baryons is $m_{b}^{*}=m_{b}-g_{\sigma b}\sigma$. The rearrangement self-energy term is defined by \begin{equation} \Sigma^{R}_{0}=-\frac{\partial \ln g_{\sigma N}}{\partial n}m^{2}_{\sigma}\sigma^{2}+\frac{\partial \ln g_{\omega N}}{\partial n}m^{2}_{\omega}\omega^{2}_{0}+\frac{\partial \ln g_{\rho N}}{\partial n}m^{2}_{\rho}\rho^{2}_{03}. \end{equation} Here $n=\sum_{b} n_{b}$. The scalar densities for neutral baryons are \cite{Lattimer} \begin{equation} n^{s}_{b}=\frac{m_{b}^{*}}{2\pi^2}\left(E^{b}_{f}k^{b}_{f}- m_{b}^{*2}\ln\left|\frac{k^{b}_{f}+E^{b}_{f}}{m_{b}^{*}}\right|\right) \end{equation} and for charged baryons \begin{equation} n^{s}_{b}=\frac{|q_{b}|B m_{b}^{*}}{2\pi^2}\sum_{\nu} g_{\nu}\ln\left|\frac{k^{b}_{f,\nu}+E^{b}_{f}}{\sqrt{m_{b}^{*2}+2\nu|q_{b}|B}}\right|. \end{equation} For the vector densities of neutral baryons, we have \begin{equation} n_{b}=\frac{1}{3\pi^2} k^{b\,3}_{f}, \end{equation} and for charged baryons and leptons \begin{equation} n_{b,l}=\frac{|q_{b,l}|B}{2\pi^2} \sum_{\nu} g_{\nu} k^{b,l}_{f,\nu}. \end{equation} Here $E_{f}^{b,l}$ is the Fermi energy. For a charged baryon, $E_{f}^{b}$ is related to the Fermi momentum $k_{f,\nu}^{b}$ by $E_{f}^{b}=(k_{f,\nu}^{b\,2}+m^{*2}_{b}+2\nu |q_{b}| B)^{1/2}$. For a neutral baryon, it is $E_{f}^{b}=(k_{f}^{b\,2}+m^{*2}_{b})^{1/2}$. The summation over $\nu$ terminates at the value $\nu_{max}$ for which the square of the Fermi momentum is still positive. For large magnetic fields, $B\sim 10^{18}$ G, only a few Landau levels are occupied. For the hyperon-meson couplings there is no well-defined rule. One can use for these constants the quark counting rule \cite{Dover,Schafner}: \begin{equation} g_{\omega \Lambda}=g_{\omega \Sigma}=\frac{2}{3}g_{\omega N}, \quad g_{\omega \Xi}=\frac{1}{2}g_{\omega N}, \end{equation} and \begin{equation} g_{\rho \Sigma}=2g_{\rho N}, \quad g_{\rho\Xi}=g_{\rho N}. \end{equation} Another choice is to assume that the ratios to the nucleon-meson couplings, $g_{iH}=x_{iH}g_{iN}$, are fixed. Here $x_{\sigma H}=0.600$, $x_{\omega H}=0.653$, $x_{\rho H}=0.6$ (see \cite{Rabhi}). We use this definition for the further calculations. \begin{table} \label{Table1} \begin{centering} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & $n_{s}$, & $-B/A$, & $M^{*}/M$ & $g_{0\sigma N}/m_{\sigma}$, & $g_{0\omega N}/m_{\omega}$, & $g_{0\rho N}/m_{\rho}$, & & \\ Model & fm$^{-3}$ & MeV & & fm & fm & fm & b & c \\ \hline TW & 0.153 & 16.30 & 0.56 & 3.84901 & 3.34919 & 1.89354 & 0 & 0 \\ GM1 & 0.153 & 16.30 & 0.70 & 3.434 & 2.674 & 2.100 & 0.002947 & $-0.001070$ \\ GM2 & 0.153 & 16.30 & 0.78 & 3.025 & 2.195 & 2.189 & 0.003487 & $0.01328$ \\ GM3 & 0.153 & 16.30 & 0.78 & 3.151 & 2.195 & 2.189 & 0.008659 & $-0.002421$ \\ \hline \end{tabular} \caption{The nucleon-meson couplings and the parameters of the scalar field potential for some models (GM1-3: \cite{Glendenning}; TW: \cite{Typel}). The nuclear saturation density $n_{s}$, the Dirac effective mass $M^{*}$ and the binding energy ($B/A$) are also given.} \end{centering} \end{table} For the chemical potentials of baryons and leptons, one has $$ \mu_{b}=E^{b}_{f}+g_{\omega b} \omega_{0}+g_{\rho b}\tau_{3b}\rho_{03}+\Sigma_{0}^{R},\quad \mu_{l}=E^{l}_{f}. $$ In order to obtain the EoS, the following conditions should be imposed on the matter:\\ \\ (i) baryon number conservation: \begin{equation}\label{1} \sum_{b}n_{b}=n, \end{equation} (ii) charge neutrality: \begin{equation}\label{2} \sum_{i} q_{i}n_{i}=0,\quad i=b,l, \end{equation} (iii) beta-equilibrium conditions: \begin{equation}\label{3} \mu_{n}=\mu_{\Lambda}=\mu_{\Xi^{0}}=\mu_{\Sigma^{0}}, \quad \mu_{p}=\mu_{\Sigma^{+}}=\mu_{n}-\mu_{e},\quad \mu_{\Sigma^{-}}=\mu_{\Xi^{-}}=\mu_{n}+\mu_{e},\quad \mu_{e}=\mu_{\mu}. \end{equation} At a given $n$, Eqs. (\ref{0})-(\ref{3}) can be solved numerically, and one can find the Fermi energies of the particles and the meson fields.
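A stripped-down illustration of how conditions of this kind are solved: the sketch below finds the proton fraction of free (non-interacting) $npe$ matter by bisection, i.e. with meson fields, hyperons, muons and the magnetic field all omitted; it is a toy version of the actual system, not the model used in the paper:

```python
import math

HBARC = 197.327                            # MeV fm
M_N, M_P, M_E = 939.565, 938.272, 0.511    # vacuum masses in MeV

def mu_free(n_i, m):
    """Chemical potential of a free T = 0 Fermi gas at density n_i [fm^-3]."""
    if n_i <= 0.0:
        return m
    k_f = HBARC * (3.0 * math.pi**2 * n_i) ** (1.0 / 3.0)
    return math.hypot(k_f, m)

def beta_equilibrium(n_b, tol=1e-12):
    """Proton fraction x of free n-p-e matter at baryon density n_b [fm^-3],
    from charge neutrality (n_p = n_e) and mu_n = mu_p + mu_e, by bisection."""
    def f(x):
        n_p = x * n_b
        return mu_free((1.0 - x) * n_b, M_N) - mu_free(n_p, M_P) - mu_free(n_p, M_E)
    lo, hi = 1e-10, 0.5
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Proton fraction at the saturation density quoted in Table I:
x_p = beta_equilibrium(0.153)
```

For free $npe$ matter this gives a proton fraction of well under a percent; the full system adds the meson-field equations and the remaining equilibrium conditions to the same root-finding loop.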
The resulting matter energy density is \begin{equation} \epsilon_{m}=\sum_{b} \epsilon_{b}+\sum_{l} \epsilon_{l}+\frac{1}{2}m_{\sigma}^{2}\sigma^{2}+\frac{1}{2}m_{\omega}^{2}\omega^{2}_{0}+\frac{1}{2}m_{\rho}^{2}\rho^{2}_{03}+V(\sigma). \end{equation} The energy density for charged baryons is \begin{equation}\label{encb} \epsilon^{c}_{b}=\frac{|q_{b}|B}{4\pi^2}\sum_{\nu}g_{\nu}\left(k^{b}_{f,\nu}E^{b}_{f}+ (m_{b}^{*2}+2\nu|q_{b}|B)\ln\left|\frac{k^{b}_{f,\nu}+E^{b}_{f}}{\sqrt{m_{b}^{*2}+2\nu|q_{b}|B}}\right|\right) \end{equation} and for neutral baryons $$ \epsilon^{n}_{b}=\frac{1}{4\pi^2}\left[k^{b}_{f}(E^{b}_{f})^{3}-\frac{1}{2}m_{b}^{*}\left(m_{b}^{*}k^{b}_{f}E^{b}_{f} +m_{b}^{*3}\ln\left|\frac{k^{b}_{f}+E^{b}_{f}}{m_{b}^{*}}\right|\right)\right]. $$ The expression of the energy density for leptons can be obtained from (\ref{encb}) by changing $m_{b}^{*}\rightarrow m_{l}$, $q_{b}\rightarrow q_{l}$. The pressure of dense matter is defined as $$ p=\sum_{b} p_{b}+\sum_{l} p_{l}-\frac{1}{2}m_{\sigma}^{2}\sigma^{2}+\frac{1}{2}m_{\omega}^{2}\omega^{2}_{0}+\frac{1}{2}m_{\rho}^{2}\rho^{2}_{03}-V(\sigma)+n\Sigma^{R}_{0}, $$ where the pressure for charged baryons is \begin{equation} p^{c}_{b}=\frac{|q_{b}|B}{4\pi^2}\sum_{\nu}g_{\nu}\left(k^{b}_{f,\nu}E^{b}_{f}- (m_{b}^{*2}+2\nu|q_{b}|B)\ln\left|\frac{k^{b}_{f,\nu}+E^{b}_{f}}{\sqrt{m_{b}^{*2}+2\nu|q_{b}|B}}\right|\right) \end{equation} and for neutral baryons \begin{equation} p^{n}_{b}=\frac{1}{12\pi^2}\left[(k^{b}_{f})^{3}E^{b}_{f}-\frac{3}{2}m_{b}^{*}\left(m_{b}^{*}k^{b}_{f}E^{b}_{f} -m_{b}^{*3}\ln\left|\frac{k^{b}_{f}+E^{b}_{f}}{m_{b}^{*}}\right|\right)\right]. \end{equation} In order to obtain the EoS, one needs to add the contribution of the magnetic field, that is \begin{equation} \epsilon=\epsilon_{m}+\frac{B^2}{8\pi},\quad p=p_{m}+\frac{B^2}{8\pi}. \end{equation} We use a model where the magnetic field depends only on the baryon density.
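The last two relations, together with the density-dependent field profile specified next, amount to simple bookkeeping. A sketch (Gaussian units are an assumption of this sketch, so that $B$ in G gives $B^{2}/8\pi$ in erg/cm$^{3}$; the profile parameters follow those quoted in the text):

```python
import math

B_C = 4.414e13   # electron critical field [G], as quoted in the text

def b_profile(n_over_ns, B0, beta=0.05, gamma=2.0, Bs=1e15):
    """Density-dependent field B(n) = B_s + B_0 [1 - exp(-beta (n/n_s)^gamma)],
    in Gauss; the defaults correspond to the slowly varying case."""
    return Bs + B0 * (1.0 - math.exp(-beta * n_over_ns**gamma))

def total_eos(eps_m, p_m, B):
    """Add the Maxwell term B^2/(8 pi) to the matter energy density and
    pressure (all in erg/cm^3), as in the equations above."""
    u = B * B / (8.0 * math.pi)
    return eps_m + u, p_m + u

# Example: B0 = 1e5 * B_C (i.e. B0 = 1e5 in units of B_c) at n = 3 n_s.
B_central = b_profile(3.0, 1e5 * B_C)
```

With these inputs the field at $n=3n_{s}$ is of order $10^{18}$ G, comparable to the central fields listed in the tables below.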
The parameterization proposed in \cite{Rabhi,Ryu-2} has the form \begin{equation} B=B_{s}+B_{0}\left[1-\exp\left(-\beta(n/n_{s})^{\gamma}\right)\right], \end{equation} where $B_s$ is the magnetic field at the star surface ($10^{15}$ G). For the parameters $\gamma$ and $\beta$, one takes the values $\gamma=2$, $\beta=0.05$ (slowly varying field) or $\gamma=3$, $\beta=0.02$ (fast varying field). It is convenient to give the value $B_{0}$ in units of the critical field for the electron, $B_{c}=4.414\times 10^{13}$ G. All these considerations can be applied to models where curvature corrections appear in the TOV equations. Specifically, we adopt quadratic and cubic corrections. \begin{table} \label{Table2} \begin{centering} \begin{tabular}{|c|c|c|c|c|c|} \hline $B_{0}$, & $\alpha$, & $M_{max}$, & $R$, & $E_{c}$, & $B_{c}$, \\ $10^{5}B_{c}$ & $10^{9}$ cm$^{2}$ & $M_{\odot}$ & km & GeV/fm$^{3}$ & $10^{18}$ G \\ \hline & 0 & 1.51 & 10.00 & 1.61 & 0 \\ 0 & $-5$ & 1.55 & 10.00 & 1.61 & 0 \\ & $5$ & 1.46 & 10.05 & 1.49 & 0 \\ \hline & 0 & 2.21 & 11.69 & 1.17 & 3.38 \\ 1 & $-5$ & 2.30 & 11.58 & 1.27 & 3.56 \\ & 5 & 2.14 & 11.82 & 1.09 & 3.20 \\ \hline & 0 & 2.80 & 13.99 & 0.79 & 3.50 \\ 2 & $-5$ & 2.91 & 13.44 & 0.97 & 3.93 \\ & $5$ & 2.71 & 14.31 & 0.68 & 3.06 \\ \hline & 0 & 3.21 & 15.67 & 0.63 & 3.29 \\ 3 & $-5$ & 3.33 & 15.24 & 0.68 & 3.75 \\ & 5 & 3.11 & 16.03 & 0.54 & 2.87 \\ \hline \end{tabular} \caption{Neutron star properties using the TW model for quadratic gravity (slowly varying field).
The central energy density ($E_{c}$) and magnetic field ($B_{c}$) of the neutron star with maximal mass are given.} \end{centering} \end{table} \begin{table} \label{Table3} \begin{centering} \begin{tabular}{|c|c|c|c|c|c|} \hline $B_{0}$, & $\alpha$, & $M_{max}$, & $R$, & $E_{c}$, & $B_{c}$, \\ $10^5$ & $10^{9}$ cm$^{2}$ & $M_{\odot}$ & km & GeV/fm$^{3}$ & $10^{18}$ G \\ \hline & 0 & 2.32 & 11.50 & 1.17 & 3.96 \\ 1 & $-5$ & 2.44 & 11.47 & 1.22 & 4.36 \\ & 5 & 2.23 & 11.66 & 1.04 & 3.66 \\ \hline & 0 & 2.73 & 12.44 & 0.93 & 4.20 \\ 2 & $-5$ & 2.89 & 12.81 & 0.97 & 4.34 \\ & $5$ & 2.60 & 13.31 & 0.71 & 3.29 \\ \hline & 0 & 2.98 & 13.78 & 0.76 & 3.87 \\ 3 & $-5$ & 3.12 & 13.81 & 0.83 & 4.12 \\ & 5 & 2.84 & 14.06 & 0.68 & 3.50 \\ \hline \end{tabular} \caption{Neutron star properties using the TW model for quadratic gravity (fast varying field).} \end{centering} \end{table} \begin{table} \label{Table4} \begin{centering} \begin{tabular}{|c|c|c|c|c|c|} \hline $B_{0}$, & $\alpha$, & $M_{max}$, & $R$, & $E_{c}$, & $B_{c}$, \\ $10^5$ & $10^{9}$ cm$^{2}$ & $M_{\odot}$ & km & GeV/fm$^{3}$ & $10^{18}$ G \\ \hline 2 & $10$ & 2.30 & 11.82 & 1.49 & 5.97 \\ \hline 3 & $10$ & 2.52 & 12.86 & 1.40 & 6.08 \\ \hline \end{tabular} \caption{Compact star properties on the second ``branch of stability'' using the TW model for quadratic gravity (fast varying field). The magnetic field at the center can exceed $6\times 10^{18}$ G and the central energy density is approximately twice that in GR.} \end{centering} \end{table} \begin{center} \begin{figure} \includegraphics[scale=1.1]{MR0.eps}\\ \caption{The mass-radius diagram in the model $f(R)=R+\alpha R^2$ for two values of $\alpha$, without magnetic field, in comparison with GR.} \end{figure} \begin{figure} \includegraphics[scale=1.1]{MRB.eps}\\ \caption{The mass-radius diagram in the model $f(R)=R+\alpha R^2$ and in GR for slowly varying magnetic field ($B_{0}=1,\quad 2,\quad 3\times 10^5$).
The cases $\alpha=-5\times 10^9$, $0$, $5\times 10^9$ cm$^2$ correspond to the dotted, thick and thin lines, respectively.} \end{figure} \begin{figure} \includegraphics[scale=1.1]{MRBF.eps}\\ \includegraphics[scale=1.1]{MRBF2.eps}\\ \caption{The mass-radius diagram in the model $f(R)=R+\alpha R^2$ and in GR for fast varying field. On the upper panel, the cases $\alpha=-5\times 10^9$, $0$, $5\times 10^9$ cm$^2$ correspond to the dotted, thick and thin lines, respectively. On the lower panel, the mass-radius relation for $\alpha=10^{10}$ cm$^2$ is given (dotted lines). A second ``branch of stability'' with more compact (in comparison with GR) neutron stars exists.} \end{figure} \end{center} \begin{table} \label{Table5} \begin{centering} \begin{tabular}{|c|c|c|c|c|c|} \hline $B_{0}$, & $\beta$, & $M_{max}$, & $R$, & $E_{c}$, & $B_{c}$, \\ $10^5$ & $r_{g}^{4}$ & $M_{\odot}$ & km & GeV/fm$^{3}$ & $10^{18}$ G \\ \hline & 0 & 1.51 & 10.00 & 1.61 & 0 \\ 0 & $-50$ & 2.11 & 9.87 & 1.27 & 0 \\ & $-75$ & 2.45 & 10.02 & 1.22 & 0 \\ \hline & 0 & 2.21 & 11.69 & 1.17 & 3.38 \\ 1 & $-50$ & 2.70 & 11.07 & 1.67 & 4.09 \\ & $-75$ & 3.10 & 10.97 & 1.81 & 4.19 \\ \hline & 0 & 2.80 & 13.99 & 0.79 & 3.50 \\ 2 & $-25$ & 3.07 & 13.52 & 0.93 & 3.97 \\ & $-50$ & 3.29 & 13.78 & 0.83 & 3.62 \\ \hline \end{tabular} \caption{Neutron star properties using the TW model for cubic gravity, for several values of $\beta$ (in units of $r_{g}^4=4.73\times 10^{21}$ cm$^4$), for the slowly varying magnetic field.} \end{centering} \end{table} \begin{table} \label{Table6} \begin{centering} \begin{tabular}{|c|c|c|c|c|c|} \hline $B_{0}$, & $\beta$, & $M_{max}/M_{\odot}$ & $R$, & $E_{c}$, & $B_{c}$, \\ $10^5$ & $r_{g}^{4}$ & & km & GeV/fm$^{3}$ & $10^{18}$ G \\ \hline & 0 & 2.32 & 11.50 & 1.17 & 3.96 \\ & $-25$ & 2.73 & 11.10 & 1.27 & 4.14 \\ 1 & $-50$ & 3.14 & 11.19 & 1.17 & 3.96 \\ & $-75$ & 3.64 & 11.18 & 1.17 & 3.96 \\ \hline & 0 & 2.73 & 12.44 & 0.93 & 4.20 \\ 2 & $-25$ & 3.24 & 12.87 & 0.86 & 3.93 \\ & $-50$ & 3.71 & 13.09 & 0.76 & 3.54 \\ \hline \end{tabular} \caption{Compact star properties using the TW model for cubic gravity, for several values of $\beta$, for the fast varying magnetic field.} \end{centering} \end{table} \begin{figure} \includegraphics[scale=1.1]{MR3s.eps}\\ \includegraphics[scale=1.1]{MR3f.eps}\\ \caption{The mass-radius diagram in the model $f(R)=R+\beta R^3$ and in GR for slowly (upper panel) and fast (lower panel) varying field. One can see that the deviation of the $M-R$ relation from GR is smaller for larger values of $B_{0}$.} \end{figure} \section{The cases of quadratic and cubic curvature corrections} Let us first consider models with quadratic curvature corrections, that is \begin{equation} f(R)=R+\alpha R^2. \end{equation} Neutron stars with strong magnetic fields in quadratic gravity were considered in \cite{EKSI} for a relatively stiff EoS based on a model with five meson fields. We consider the quadratic gravity case for the EoS based on the model described above. For our calculations, the Typel--Wolter (TW) parameterization is used. Let us note the following feature: for $B=0$, the mass of the neutron star at a given radius increases with decreasing $\alpha$ (see Fig. 1). For a strong magnetic field, the $M-R$ relation for $M>0.7M_{\odot}$ differs from that in GR only for masses close to the maximal one (see Figs. 2, 3). Another interesting feature appears for the fast varying field. At high central densities, a second ``branch'' of stability can exist (Fig. 3, lower panel). It is interesting to note that similar effects take place for non-magnetic neutron stars in the framework of a model like $f(R)=R+\alpha R^2(1+\gamma R)$ \cite{Astashenok}. The stabilization of the star configurations occurs thanks to the cubic term. The maximal masses and corresponding radii are given in Tables II, III for some values of $\alpha$ and $B_{0}$. The maximal value of the central density (and therefore of the magnetic field) decreases with increasing $\alpha$.
The parameters of the compact (in comparison with GR) neutron stars on the second ``branch of stability'' are given in Table IV. For modified gravity with a cubic term, $f(R)=R+\beta R^{3}$, the maximal value of the neutron star mass for a given EoS increases for $\beta<0$ (Fig. 4). Some results are given in Tables V, VI. The maximal mass of the neutron star can exceed $3M_{\odot}$. One can note that stars with a magnetic field and cubic curvature corrections remain stable for central energy densities close to $\sim 1.8$ GeV/fm$^3$. In principle, calculations show that, for EoS based on the GM2-GM3 parameterizations, we obtain similar results for the models $f(R)=R+\alpha R^2$ and $f(R)=R+\beta R^3$. For stiffer EoS, the deviation from GR is larger. \section{Conclusions and perspectives} We presented neutron star models with strong magnetic fields in the framework of power-law $f(R)$ gravity models. For describing dense matter in a magnetic field, a model with the baryon octet interacting through $\sigma$$\rho$$\omega$-fields is used. Although the softening of the nucleonic EoS due to hyperonization lowers the upper mass limit of neutron stars, a strong magnetic field can considerably increase the maximal mass of the star. In particular, we investigated the effect of a strong magnetic field in models of quadratic, $f(R)=R+\alpha R^2$, and cubic, $f(R)=R+\beta R^3$, gravity. For large fields, the $M-R$ relation differs considerably from that in GR only for stars with masses close to the maximal one. Another interesting feature is the possible existence of more compact stable stars with extremely large fields ($\sim 6\times 10^{18}$ G instead of $\sim 4\times 10^{18}$ G in GR) in the central regions of the star. Due to the cubic term, a significant increase of the maximal mass ($M_{max}>3M_{\odot}$) is possible. The central energy density can exceed $\sim 1.8$ GeV/fm$^3$.
However, it is worth stressing that the $f(R)$ models considered here can be related to the presence of strong gravitational fields, where higher-order curvature terms can emerge. Their origin is related to the effective actions of quantum field theory formulated in curved spacetime \cite{buchbinder, birrel}. In the extreme field of neutron stars, it is realistic to suppose the emergence of curvature corrections that enhance the pressure effects and could explain supermassive self-gravitating systems. As a next step, we will consider models of self-bound quark stars and hybrid stars with quark cores. The EoS for quark matter (without magnetic field) is close to $p\sim \frac{1}{3} \rho c^2$ and therefore, in the framework of the perturbative approach, the deviations from GR occur only for very large values of $\alpha$ in comparison with those considered above in quadratic gravity. However, large magnetic fields can induce considerable effects on the EoS, and therefore modified gravity effects can appear. \acknowledgments This work is supported in part by the project 14-02-31100 (RFBR, Russia) (AVA), by MINECO (Spain), project FIS2010-15640, and by the MES project TSPU-139 (Russia) (SDO). SC is supported by INFN ({\it iniziative specifiche} TEONGRAV and QGSKY).
Police: Pedestrian struck in Belleville 'improving' Joshua Jongsma Staff Writer, @jongsmjo An 83-year-old pedestrian who was hit by a car in Belleville on Wednesday morning, Feb. 22, was hospitalized in "very serious condition" but is improving as of Thursday, according to police. A woman from Nutley is believed to have fallen and then been hit by a car at South Franklin Avenue and Joralemon Street at 9:30 a.m. Wednesday, Police Chief Mark Minichini said. She was transported to University Hospital in Newark, Minichini said. The woman was doing better and was still under observation as of Thursday afternoon, the chief said. "For an elderly woman to be so strong and endure this, that's good news," Minichini said. "So let's just pray for her." The chief said the investigation into the incident is ongoing. He did not yet know if any summonses or charges may be issued. "We're going over every detail of it," he said.
\section{Introduction} In parameter estimation, the task is to estimate unknown parameters, denoted by a vector ${\bf x}$, from available information such as measurement records. A powerful tool for parameter estimation is the probability density function (PDF), often called the {\em state} of the system, as it is possible to compute from this any estimate of ${\bf x}$, e.g., the mean or the mode of the PDF. This turns the problem into one of state estimation. There are numerous techniques for classical state estimation. Specifically, for continuous measurements, there are the techniques of {\em filtering} and {\em smoothing} \cite{Weinert01,Hay01,BroHwa12,Ein12,Fri12,vanTrees1} for classical states. Filtering uses any measurement information prior to the estimation time $\tau$, the `past' measurement record $\past{\color{nblack}\rm O}$, to estimate the state of the system, yielding the {\em filtered} state $\wp_{\text F}({\bf x}) := \wp({\bf x}|\past{\color{nblack}\rm O})$. The complement to the filtered state is the {\em retrofiltered} effect $E_{\text R}({\bf x}) := \wp(\fut{\color{nblack}\rm O}|{\bf x})$, more commonly referred to as the likelihood function \cite{Ein12,BroHwa12,Sarkka13} for the future measurement record $\fut{\color{nblack}\rm O}$ given ${\bf x}$. The estimation technique of smoothing combines the filtered state and retrofiltered effect to obtain a {\em smoothed} state $\wp_{\text S}({\bf x}) := \wp({\bf x}|\both{\color{nblack}\rm O}) \propto E_{\text R}({\bf x})\wp_{\text F}({\bf x})$, conditioned on both past and future measurement records, the `past-future' measurement record $\both{\color{nblack}\rm O}$. While smoothing may be inapplicable for some purposes, as it requires information after the estimation time, it is a more accurate estimation technique for data post-processing than filtering as it utilises more information. 
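As a concrete classical illustration of $\wp_{\text F}$, $E_{\text R}$ and $\wp_{\text S}$, here is a minimal two-state hidden-Markov sketch; the transition and output probabilities and the measurement records are invented purely for illustration:

```python
# Two-state hidden Markov model, x in {0, 1}; all probabilities are invented.
T = [[0.9, 0.1],   # T[i][j] = P(x_{t+1} = j | x_t = i)
     [0.2, 0.8]]
L = [[0.8, 0.2],   # L[i][o] = P(o_t = o | x_t = i)
     [0.3, 0.7]]

def filtered(obs, prior=(0.5, 0.5)):
    """Forward pass: wp_F(x) = P(x_tau | past record), renormalised each step."""
    p = list(prior)
    for o in obs:
        p = [p[i] * L[i][o] for i in range(2)]                         # condition
        s = sum(p)
        p = [v / s for v in p]
        p = [sum(p[i] * T[i][j] for i in range(2)) for j in range(2)]  # predict
    return p

def retrofiltered(obs):
    """Backward pass: E_R(x) = P(future record | x_tau), an unnormalised likelihood."""
    e = [1.0, 1.0]
    for o in reversed(obs):
        e = [sum(T[i][j] * L[j][o] * e[j] for j in range(2)) for i in range(2)]
    return e

def smoothed(past_obs, future_obs):
    """wp_S(x) proportional to E_R(x) * wp_F(x): conditioned on the full record."""
    f, e = filtered(past_obs), retrofiltered(future_obs)
    s = [f[i] * e[i] for i in range(2)]
    z = sum(s)
    return [v / z for v in s]

past, future = [0, 0, 1, 0], [1, 1, 1]
wp_S = smoothed(past, future)
```

Here the future record pulls the smoothed estimate away from the filtered one, which is exactly the extra information smoothing exploits.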
As we make the transition to quantum technologies, it becomes increasingly important to estimate the quantum state $\rho$ of a system. There are well-known techniques to estimate a prepared quantum state from an ensemble of measurement results, e.g.,~tomography \cite{DarParSac03}. Here, however, we are interested in techniques using a single realization of a continuous measurement record, such as quantum trajectory theory \cite{Belav87,Bel92,WisMil10}. This technique is analogous to the classical technique of filtering in that it only uses the past measurement record to obtain the filtered quantum state $\rho_{\text F}(\tau)$. As in the classical case, the complement of the filtered quantum state is the retrofiltered quantum effect $\hat E_{\text R}(\tau)$, a positive operator defined such that $\text{Tr}[\hat{E}_{\text R}\rho] = \wp(\fut{\color{nblack}\rm O}|\rho)$. For the quantum analog of smoothing, it is not as simple as combining the filtered state and the retrofiltered effect as it was in the classical case. If we were to combine them following the pattern of the classical case,~$\varrho(\tau) \propto \rho_{\text F}(\tau)\hat{E}_{\text R}(\tau)$, the resulting operator would not be a valid quantum state. That is, in general, the operator is not positive semidefinite \cite{Tsa09a,Tsa09b,GJM13,GueWis15,Ohki15,Tsa19,LCW-QS19}. We do not want to give the reader the impression that this operator is useless; in fact, it has an interesting connection to weak values \cite{ABL64,AAV88,Tsa09b}. Consequently, a symmetrized version of $\varrho(\tau)$ has been referred to as the smoothed weak-value (SWV) state $\varrho_{\rm SWV}$ \cite{LCW19,LCW-QS19}. There is, however, a quantum state smoothing formalism developed by Guevara and Wiseman \cite{GueWis15} which guarantees a valid smoothed quantum state. The formalism considers a quantum system partially observed by an observer, Alice, whose task is to estimate the true state of the system using only her observed record.
However, for Alice to obtain a valid smoothed quantum state, that is, a state conditioned on her past-future measurement record, it is necessary to introduce a secondary observer, say Bob, who gathers all information unobserved by Alice, see Fig.~\ref{Fig-QSS}. By using both Alice's and Bob's measurement records to estimate the quantum state, we would obtain the {\em true} quantum state, a state containing maximal information about the quantum system. The true state is crucial to calculating the smoothed state. \begin{figure} \includegraphics[scale=0.335]{QSS.png} \caption{A diagrammatic representation of the quantum state smoothing formalism. Bob, who has access to both the observed record ${\color{nblack}\rm O}$ and the unobserved record ${\color{nblack}\rm U}$, is able to obtain the best estimate of the quantum state, the true state $\rho_{\text T} :=\rho_{\protect\past{\color{nblack}\rm O}\protect\past{\color{nblack}\rm U}}$ {\color{nblack} of the quantum system ${\cal Q}$}. Alice, on the other hand, has access to only the observed record ${\color{nblack}\rm O}$. If Alice does not know of the existence of ${\color{nblack}\rm U}$, then her best estimate would be the filtered estimate $\rho_{\text F} :=\rho_{\protect\past{\color{nblack}\rm O}}$. However, if Alice knows the measurement setting Bob used to obtain ${\color{nblack}\rm U}$, she can utilise the full past-future observed record to obtain the smoothed state $\rho_{\text S}:=\rho_{\protect\both{\color{nblack}\rm O}}$, which is a more accurate estimate of Bob's true state than the filtered state.} \label{Fig-QSS} \end{figure} The smoothed quantum state has been shown to offer a better estimate of the true state than the conventional filtered state, where the improvement is quantified by the state purity \cite{GueWis15,LCW19,CGW19}.
Interestingly, the purity improvement of the smoothed state over the filtered state depends on both Alice's and Bob's choices of measurement on their parts of the system's environment. Note that these choices do not affect the unconditioned system evolution, described by a master equation. This raises an interesting question: How should Alice observe and `unobserve' (that is, Bob observe) the quantum system in order to obtain the maximum purity improvement for the smoothed quantum state? Recently \cite{CGW19}, the optimal measurement strategy for Alice and Bob has been investigated for a single-qubit example. However, due to the vast number of unobserved measurement records that are needed in order to calculate the smoothed quantum state in such a system, the authors were only able to consider a handful of measurement scenarios. Since the original proposal in 2015 \cite{GueWis15}, the quantum state smoothing theory has been adapted by the present authors to linear Gaussian quantum (LGQ) systems \cite{LCW19}. Thanks to the nice properties of LGQ systems, the theory of Ref.~\cite{LCW19} provided simple closed-form solutions for the smoothed quantum state, enabling its properties to be investigated either analytically or semianalytically \cite{LCW19,LCW-QS19}. If we restrict our analysis to LGQ systems, although this also restricts us to diffusive-type unravelings of the system, we can drastically increase the number of measurement scenarios for Alice and Bob in the search for the optimal measurement strategy. As a result, we can numerically determine the optimal diffusive measurement scenario for Alice and Bob for any type of LGQ system. But can we understand the results intuitively? In this paper, we first review the necessary theory required for LGQ state smoothing, and provide a more detailed derivation of the theory than that presented in Ref.~\cite{LCW19}.
We then present numerically simulated LGQ trajectories, showing the means and covariances of the filtered, SWV, and smoothed quantum states. This allows us to observe the differences between these estimators and analyze their properties as a function of time. As expected, we observe that the smoothed quantum state estimates the true state better than the filtered state does. The SWV state, on the other hand, performs very differently. As the main focus of this paper, we present three possible hypotheses for the optimal measurement strategy for Alice and Bob, and study how well they predict the optimal measurements found numerically for two LGQ physical systems: an on-threshold optical parametric oscillator and a stochastic linear attenuator. The most successful strategy has a surprisingly counterintuitive logic to it. Lastly, we generalize the logic behind the most successful hypothesis from the LGQ setting to the qubit setting by defining analogous quantities for a driven qubit measured using homodyne detection. Moreover, we find that the success of the counterintuitive strategy is replicated in the qubit system. The structure of this paper is as follows. In Sec.~\ref{sec-LGC} we will briefly review classical linear Gaussian (LG) state estimation. Then, in Sec.~\ref{sec-LGQ} we review LGQ systems along with the LGQ state smoothing theory. Next, in Sec.~\ref{sec-PS} we introduce the two physical systems that we will consider throughout the paper. We simulate the trajectories for the filtered, true, SWV, and smoothed quantum states in Sec.~\ref{sec-Traj}. Finally, in Sec.~\ref{sec-Opt} we find a simple hypothesis for the best measurement strategy for Alice and Bob to maximize the purity of the smoothed state compared to the filtered state, which works for our two LGQ examples and, suitably generalized, for a very different qubit example.
\section{Classical LG State Estimation}\label{sec-LGC} For a classical dynamical system, a state of knowledge of the system is defined as the PDF $\wp({\color{nblack}{\check{ \bf x}}})$, where ${\color{nblack} {\check{ \bf x}} = (\check{x}_1,\check{x}_2,...,\check{x}_D)\!^{\top}}$ is the vector of $D$ parameters required to completely describe the system, with $\top$ denoting the transpose. {\color{nblack} Note, we have used the wedge mark on ${\color{nblack} {\check{ \bf x}}}$ to make it clear that this is a dummy variable for the PDF and not the corresponding random variable which we denote by ${\bf x}$.} We will restrict our analysis to Gaussian states, $\wp({\check{ \bf x}}) = g({\check{ \bf x}};\ex{{\bf x}},V)$. That is, the state is specified by its mean $\ex{{\bf x}}$ and covariance matrix $V = \ex{{\bf x}\bx^{\top}} - \ex{{\bf x}}\ex{{\bf x}}\!^{\top}$. In order to guarantee that the state remains Gaussian throughout its evolution even when conditioned on continuous observation, the system must be initialized in a Gaussian state and must satisfy the following constraints \cite{WisMil10,Hay01,Weinert01,vanTrees1,BroHwa12,Ein12,Fri12}. First, the system's dynamical evolution must be described by a linear Langevin equation \begin{equation}\label{LLE} {\rm d}{\bf x} = A{\bf x}{\rm d} t + E{\rm d}{\bf v}_{\text{p}}\,. \end{equation} Here $A$ and $E$ are constant matrices and ${\rm d}{\bf v}_{\text{p}}$ is the process noise, which is a vector of independent Wiener increments that satisfies \begin{equation} \label{WeinCond} \mathbb{E}[{\rm d}{\bf v}_{\text{p}}] = {\bf 0}\,, \qquad {\rm d}{\bf v}_{\text{p}}({\rm d}{\bf v}_{\text{p}})\!^{\top} \!= I {\rm d}t\,, \end{equation} where $\mathbb{E}[...]$ denotes an ensemble average over all possible realisations of the noise. 
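For concreteness, the linear Langevin equation \erf{LLE} can be integrated with a simple Euler-Maruyama scheme. The sketch below is ours, not part of the original analysis; it assumes NumPy, the function name and arguments are illustrative, and the Wiener increments are sampled as $\sqrt{{\rm d}t}$ times standard normal variates, consistent with \erf{WeinCond}.

```python
import numpy as np

def simulate_langevin(A, E, x0, T=1.0, dt=1e-3, rng=None):
    """Euler-Maruyama integration of dx = A x dt + E dv_p,
    where dv_p is a vector of independent Wiener increments
    (E[dv_p] = 0, dv_p dv_p^T = I dt)."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(T / dt)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        dv = np.sqrt(dt) * rng.standard_normal(E.shape[1])
        x = x + A @ x * dt + E @ dv
        traj.append(x.copy())
    return np.array(traj)
```

With $E = 0$ the recursion reduces to Euler integration of $\dot{\bf x} = A{\bf x}$, which provides a quick sanity check against the exact solution $e^{At}{\bf x}_0$.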
The second constraint is that any measurement record obtained must be linear in ${\bf x}$, i.e., \begin{equation} {\bf y}{\rm d} t = C{\bf x}{\rm d} t + {\rm d}{\bf v}_{\text{m}}, \end{equation} where $C$ is a constant matrix and ${\rm d}{\bf v}_{\text{m}}$ is the measurement noise, a vector of independent increments satisfying similar conditions to \erf{WeinCond}. There may exist some correlations between the measurement noise and the process noise of the system, for example, from measurement back-action, which can be described by a cross-correlation matrix $\Gamma^{\top}{\rm d} t = E{\rm d}{\bf v}_{\text{p}}({\rm d}{\bf v}_{\text{m}})\!^{\top}$ \cite{WisMil10}. We note that the majority of classical texts \cite{Weinert01,Hay01,BroHwa12,Ein12,Fri12,vanTrees1} on this topic assume that $\Gamma = 0$. The classical LG systems are defined by the above constraints. We can condition the estimate of the LG state on the past measurement record to obtain the filtered estimate $\wp_{\text F}({\check{ \bf x}}) = g({\check{ \bf x}};\ex{{\bf x}}_{\text F},V_{\text F})$, whose mean and covariance are given by the Kalman-Bucy filtering equations \cite{WisMil10,KaiFro68,Kai70,Kai73,BLP79} \begin{align} &{\rm d}\ex{{\bf x}}_{\text F} = \, A\ex{{\bf x}}_{\text F} {\rm d}t + {\cal K}^{+}[V_{\text F}]{\rm d}{\bf w}_{\text F}\,,\label{cmf}\\ &\frac{{\rm d}V_{\text F}}{{\rm d}t} = \, AV_{\text F} +V_{\text F} A^{\top} \!+ D - {\cal K}^{+} [V_{\text F}] {\cal K}^{+} [V_{\text F}]^{\top}\,, \label{cVf} \end{align} with initial conditions $\ex{{\bf x}}_{\text F}(t_0) = \ex{{\bf x}}_0$ and $V_{\text F} (t_0) = V_0$. Here, ${\rm d}{\bf w}_{\text F} := {\bf y}{\rm d} t - C\ex{{\bf x}}_{\text F}{\rm d} t$ is a vector of innovations, $D = EE^{\top}$ is the diffusion matrix, and \begin{equation} \label{Kick} {\cal K}^{\pm}[V] := VC^{\top} \! \pm \Gamma^{\top} \end{equation} is the optimal Kalman gain matrix, as a function of the covariance.
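The Kalman-Bucy equations \erf{cmf}-\erf{cVf} translate directly into a per-step update. The following minimal sketch (ours, not from the original; NumPy assumed, names illustrative) advances the filtered mean and covariance by one Euler step given the measurement increment ${\bf y}{\rm d}t$.

```python
import numpy as np

def kick(V, C, Gamma, sign=+1):
    """Kalman gain ('kick') matrix K^±[V] = V C^T ± Γ^T, Eq. (Kick)."""
    return V @ C.T + sign * Gamma.T

def kalman_bucy_step(m, V, y_dt, A, C, D, Gamma, dt):
    """One Euler step of the Kalman-Bucy filter.
    m, V: filtered mean and covariance; y_dt: measurement increment y*dt."""
    K = kick(V, C, Gamma, +1)
    dw = y_dt - C @ m * dt                            # innovation dw_F
    m = m + A @ m * dt + K @ dw                       # Eq. (cmf)
    V = V + (A @ V + V @ A.T + D - K @ K.T) * dt      # Eq. (cVf)
    return m, V
```

The covariance update is deterministic (a Riccati equation), so its steady state can be checked independently of the noise realization; e.g., for a scalar system with $A=-1$, $C=1$, $D=1$, $\Gamma=0$ the steady-state variance solves $V^2 + 2V - 1 = 0$, giving $V = \sqrt{2}-1$.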
As mentioned earlier, if we want to obtain a more accurate estimate of the state, we can utilise the past-future measurement record $\both{\color{nblack}\rm O}$ as opposed to the past record $\past{\color{nblack}\rm O}$ the filtered state uses. The smoothed state obtained by using $\both{\color{nblack}\rm O}$ can be calculated using the filtered state according to \begin{equation}\label{Csm} \wp_{\text S}({\color{nblack} {\check{ \bf x}}}) := \wp({\color{nblack} {\check{ \bf x}}}|\both{\color{nblack}\rm O}) \propto E_{\text R}({\color{nblack} {\check{ \bf x}}}) \wp_{\text F}({\color{nblack} {\check{ \bf x}}})\,, \end{equation} where we have assumed that the system is Markovian. To explicitly see the dependence on the measurement records, we remind the reader that the filtered state is a function of the past measurement record, $\wp_{\text F}({\check{ \bf x}}):= \wp({\check{ \bf x}}|\past{\color{nblack}\rm O})$. The retrofiltered effect is the likelihood of a particular realization of a future measurement record occurring from a configuration ${\color{nblack}{\check{ \bf x}}}$, i.e., $E_{\text R}({\color{nblack} {\check{ \bf x}}}):= \wp(\fut{\color{nblack}\rm O}|{\color{nblack} {\check{ \bf x}}})$. Using Bayes' theorem \cite{Jaz07} results in \erf{Csm}. As we already have calculated the filtered state, all we need to calculate to obtain the smoothed state is the retrofiltered effect. If we apply Bayes' theorem to the retrofiltered effect, we obtain $E_{\text R}({\color{nblack} {\check{ \bf x}}}) \propto \wp({\color{nblack} {\check{ \bf x}}}|\fut{\color{nblack}\rm O})\wp(\fut{\color{nblack}\rm O})$. As we are using the retrofiltered effect to calculate the smoothed state, the future measurement record will be fixed and the probability $\wp(\fut{\color{nblack}\rm O})$ for that fixed record will be a constant. 
As a result, the retrofiltered effect is $E_{\text R}({\color{nblack} {\check{ \bf x}}}) \propto \wp({\color{nblack} {\check{ \bf x}}}|\fut{\color{nblack}\rm O})$, from which we can define a normalised retrofiltered effect $E_{\text R}'({\color{nblack} {\check{ \bf x}}}) = \wp({\color{nblack} {\check{ \bf x}}}|\fut{\color{nblack}\rm O})$. As we are limiting our discussion to Gaussian systems, the normalized retrofiltered effect will be a Gaussian, $E'_{\text R}({\color{nblack} {\check{ \bf x}}}) = g({\color{nblack} {\check{ \bf x}}};\ex{{\bf x}}_{\text R},V_{\text R})$, where the retrofiltered mean $\ex{{\bf x}}_{\text R}$ and corresponding covariance matrix $V_{\text R}$ are given by \begin{align} &- {\rm d}\ex{{\bf x}}_{\text R} = -A\ex{{\bf x}}_{\text R} {\rm d}t + {\cal K}^{-} [V_{\text R}]{\rm d}{\bf w}_{\text R},\label{crm}\\ &- \frac{{\rm d}V_{\text R}}{{\rm d}t} = -AV_{\text R} - V_{\text R} A^{\top} \!+ D - {\cal K}^{-} [V_{\text R}] {\cal K}^{-} [V_{\text R}]^{\top}.\label{cVr} \end{align} Here ${\rm d}{\bf w}_{\text R}= {\bf y}{\rm d} t - C\ex{{\bf x}}_{\text R}{\rm d} t$ and ${\cal K}^-[V_{\text R}]$ is defined in \erf{Kick}. These retrofiltering equations evolve backwards in time, as evident from the negative sign on the left-hand side of both equations, from a final uninformative state with $V_{\text R}(T) = \infty$. However, due to the infinite final retrofiltered covariance, there is no sensible final condition for the retrofiltered mean. One can obtain more practical equations \cite{Fraser67}, which can be used in numerical computations and the upcoming SWV state, by instead solving for the inverse retrofiltered covariance $\Lambda_{\text R} = V_{\text R}^{-1}$, referred to as an information matrix, and defining a new `informative' mean ${\bf z}_{\text R} = \Lambda_{\text R} \ex{{\bf x}}_{\text R}$. 
Using the identity \begin{equation}\label{V2L} \frac{{\rm d}}{{\rm d} t}V^{-1} = -V^{-1} \frac{{\rm d} V}{{\rm d} t} V^{-1}\,, \end{equation} we obtain the equations for the retrofiltered informative mean and the information matrix \begin{align} &-{\rm d} {\bf z}_{\text R} = (\tilde{A} - \tilde{D}\Lambda_{\text R})\!^{\top} {\bf z}_{\text R} {\rm d} t + (C^{\top} \!- \Lambda_{\text R}\Gamma^{\top}) {\bf y}{\rm d} t\,,\label{zr}\\ &- \frac{{\rm d} \Lambda_{\text R}}{{\rm d} t} = \Lambda_{\text R}\tilde{A} + \tilde{A}^{\top} \!\Lambda_{\text R} - \Lambda_{\text R}\tilde{D} \Lambda_{\text R} + C^{\top} \!C\,,\label{Lr} \end{align} with $\tilde{A} = A - \Gamma^{\top} \!C$ and $\tilde{D} = D - \Gamma^{\top}\Gamma$. We can now simply set the final conditions to be ${\bf z}_{\text R}(T) = 0$ and $\Lambda_{\text R}(T) = 0$. Finally, now that we have equations for both the filtered state and the retrofiltered effect, we can compute the smoothed state using \erf{Csm}. Due to the proportionality in \erf{Csm}, we can replace the retrofiltered effect $E_{\text R}({\color{nblack} {\check{ \bf x}}})$ with its normalized counterpart $E_{\text R}'({\color{nblack} {\check{ \bf x}}})$, as any proportionality constants will be accounted for during the normalization process. Since both the filtered state and retrofiltered effect are Gaussian, the multiplicative property of Gaussians implies that the smoothed state is also Gaussian.
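Equations \erf{zr}-\erf{Lr} and the Gaussian product rule of \erf{Csm} can be sketched numerically as follows (our illustration, not from the original; NumPy assumed, names ours). The retrofiltering equations are integrated backward from the final conditions ${\bf z}_{\text R}(T) = 0$ and $\Lambda_{\text R}(T) = 0$, so each backward step simply adds the right-hand sides.

```python
import numpy as np

def info_filter_backward(z, Lam, y_dt, A, C, D, Gamma, dt):
    """One backward Euler step for the retrofiltered 'informative' mean
    z = Λ_R <x>_R and information matrix Λ_R = V_R^{-1}.
    Stepping backward from T, the minus signs in Eqs. (zr)-(Lr) mean the
    right-hand sides are *added* at each step."""
    At = A - Gamma.T @ C             # Ã
    Dt = D - Gamma.T @ Gamma         # D̃
    dz = (At - Dt @ Lam).T @ z * dt + (C.T - Lam @ Gamma.T) @ y_dt
    dLam = (Lam @ At + At.T @ Lam - Lam @ Dt @ Lam + C.T @ C) * dt
    return z + dz, Lam + dLam

def combine(m_F, V_F, z, Lam):
    """Gaussian product rule: smoothed moments from the filtered moments
    and the backward information filter."""
    V_S = np.linalg.inv(np.linalg.inv(V_F) + Lam)
    m_S = V_S @ (np.linalg.solve(V_F, m_F) + z)
    return m_S, V_S
```

For a scalar system with $A=-1$, $C=1$, $D=1$, $\Gamma=0$, the information matrix converges (backward) to $\Lambda_{\text R} = \sqrt{2}-1$, and the combined smoothed variance is strictly below the filtered one.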
That is, $\wp_{\text S}({\color{nblack} {\check{ \bf x}}}) = g({\color{nblack} {\check{ \bf x}}};\ex{{\bf x}}_{\text S},V_{\text S})$, with smoothed mean and covariance \cite{Weinert01,Ein12,Sarkka13,Mayne66,Fraser67,FraPot69} \begin{align} &\ex{{\bf x}}_{\text S} = V_{\text S}\left[V_{\text F}^{-1}\ex{{\bf x}}_{\text F} + V_{\text R}^{-1}\ex{{\bf x}}_{\text R}\right]\,,\label{csm}\\ &V_{\text S} = \left[V_{\text F}^{-1} + V_{\text R}^{-1}\right]^{-1}.\label{cVs} \end{align} Using the definition of the retrofiltered informative mean and information matrix in \erfs{zr}{Lr}, the equations can be simplified to \begin{align} \ex{{\bf x}}_{\text S} &= V_{\text S}\left[V_{\text F}^{-1}\ex{{\bf x}}_{\text F} + {\bf z}_{\text R}\right]\,,\label{zsm}\\ V_{\text S} &= \left[V_{\text F}^{-1} + \Lambda_{\text R}\right]^{-1}.\label{zsV} \end{align} We can see that the smoothed state is more accurate than the filtered state through the covariances, where it is simple to see that $V_{\text F} \geq V_{\text S}$ in the $D = 1$ case. \section{LGQ State Estimation} \label{sec-LGQ} \subsection{Unconditioned Quantum State} In quantum state estimation, we are concerned with estimating a density operator $\rho$ of a quantum system as opposed to a PDF $\wp({\color{nblack} {\check{ \bf x}}})$. For an open quantum system, the evolution of the state $\rho$, without observation, is governed by the Lindblad master equation $\hbar\dot\rho = {\cal L} \rho$, with the initial condition $\rho(t_0) = \rho_0$, where the Lindbladian superoperator ${\cal L}$ is \begin{equation}\label{LME} {\cal L}\bullet = -i[\hat H,\bullet] +{\cal D}[\hat{\bf c}]\bullet\,. \end{equation} Here the Hamiltonian $\hat H$ describes the unitary dynamics of the system and $\hat{\bf c} \equiv (\hat{c}_1,\hat{c}_2,...,\hat{c}_M)\!^{\top}$ is the vector of Lindblad operators describing the interacting channels between the system and the environment.
{\color{nblack} It will also be useful to define the row vector form of $\hat{\bf c}$, which we denote by $\hat{\bf c}^{\top} \!= (\hat{c}_1,\hat{c}_2,...,\hat{c}_M)$, where the reader should notice that the transpose does not act on the operators within the vector. Furthermore, the conjugate transpose is defined as the row vector $\hat{\bf c}^\dagger = (\hat{c}_1^\dagger,\hat{c}_2^\dagger,...,\hat{c}_M^\dagger)$. Thus to obtain a column vector form for $\hat{\bf c}^\dagger$, we need to take the transpose. To denote this we will adopt the double dagger notation of Ref.~\cite{ChiWis11}, i.e., $\hat{\bf c}^\ddagger = (\hat{c}_1^\dagger,\hat{c}_2^\dagger,...,\hat{c}_M^\dagger)\!^{\top}$. We can now express the} nonunitary part of \erf{LME} as {\color{nblack} \begin{equation} {\cal D}[\hat{\bf c}]\bullet = \hat{\bf c}^{\top}\!\!\bullet\hat{\bf c}^\ddagger - \{\hat{\bf c}^\dagger\hat{\bf c}/2,\bullet\}\,, \end{equation} where $\{A,B\} = AB + BA$ is the anticommutator.} Without monitoring the environment to gain information about the quantum system, a solution to \erf{LME} is the most accurate estimate of the system's quantum state. We now assume that we can describe the quantum system by $N$ bosonic modes. From this we define a vector of $2N$ operators $\hat\bx = (\hat{q}_1,\hat{p}_1,...,\hat{q}_N,\hat{p}_N)\!^{\top}$, where $\hat{q}_k$ and $\hat{p}_k$ are the canonical position and conjugate momentum operators, respectively, describing the $k$th bosonic mode and satisfying the commutation relation $[\hat q_k,\hat p_l] = i\hbar\delta_{kl}$. Furthermore, we assume that the system's Hamiltonian is quadratic and the vector of Lindblad operators is linear in $\hat\bx$, i.e.,~$\hat{H} = \hat\bx^{\top} G\hat\bx/2$ and $\hat{\bf c}= (I_N,iI_N)\bar{C}\hat\bx$, where $G$ and $\bar{C}$ are constant real matrices and $I_n$ denotes an $n\times n$ identity matrix. 
These assumptions ensure that a state initially prepared in a Gaussian state will remain Gaussian throughout the evolution. By a Gaussian state we mean one whose Wigner function is Gaussian, $W({\check{ \bf x}}) = g({\check{ \bf x}};\ex{\hat\bx},V)$, with mean $\ex{\hat\bx}$ and covariance $V$. The mean and covariance are defined as $\ex{\hat x_k} = \text{Tr}[\hat x_k\rho]$ and $V_{k,l} = \text{Tr}[\{\hat x_k\hat x_l + \hat x_l \hat x_k\}\rho/2] - \ex{\hat x_k}\ex{\hat x_l}$, respectively, where $\hat x_k$ is an element of $\hat\bx$. For any state $\rho$ the covariance matrix will satisfy the Schr\"odinger-Heisenberg uncertainty relation~\cite{WisMil10}, \begin{equation}\label{SHUR} V+i\hbar\Sigma/2 \geq 0\,, \end{equation} where $\Sigma_{kl} = -i[\hat x_k,\hat x_l]/\hbar$ is a real symplectic matrix. With these assumptions we can calculate the evolution of the unconditioned LGQ state via its mean and covariance, \begin{align} &{\rm d}\ex{\hat\bx} = A\ex{\hat\bx} {\rm d}t\,,\label{UncondM}\\ &\frac{{\rm d}V}{{\rm d}t} = AV +V A^{\top} \!+ D\,, \label{UncondV} \end{align} with the initial conditions for the mean and covariance $\ex{\hat\bx}(t_0) = \ex{\hat\bx}_0$ and $V(t_0) = V_0$, respectively. Here the drift and diffusion matrices are \cite{WisMil10} \begin{equation} A = \Sigma(G+\bar{C}^{\top} \!S\bar{C})\,, \qquad D = \hbar\Sigma\bar{C}^{\top}\bar{C}\Sigma^{\top}\,, \end{equation} respectively, with $S = \left[\begin{smallmatrix} 0&I_N\\ -I_N&0 \end{smallmatrix}\right]$ being another symplectic matrix. \subsection{Filtered Quantum State} \label{Mintro} In order to obtain a better estimate of the system's state than the unconditioned state, we need to gain more information about the system by measuring the environment. In this work we focus on diffusive-type unravelings of the master equation as opposed to a jump unraveling, as the former preserves Gaussian states.
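The constructions above are easy to check numerically. The sketch below (ours, not from the original; NumPy assumed, $\hbar = 1$, single-mode ordering $(\hat q, \hat p)$ so that $\Sigma$ and $S$ coincide for $N = 1$) builds the drift and diffusion matrices from $G$ and $\bar{C}$, and tests the uncertainty relation \erf{SHUR} via the eigenvalues of the Hermitian matrix $V + i\hbar\Sigma/2$.

```python
import numpy as np

hbar = 1.0  # working in units with ħ = 1 (our convention for this sketch)

def drift_diffusion(G, Cbar, N=1):
    """Drift A = Σ(G + C̄ᵀ S C̄) and diffusion D = ħ Σ C̄ᵀ C̄ Σᵀ for N modes,
    with phase-space ordering (q_1, p_1, ..., q_N, p_N) for Σ."""
    Sigma = np.kron(np.eye(N), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    S = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-np.eye(N), np.zeros((N, N))]])
    A = Sigma @ (G + Cbar.T @ S @ Cbar)
    D = hbar * Sigma @ Cbar.T @ Cbar @ Sigma.T
    return A, D, Sigma

def satisfies_shur(V, Sigma):
    """Schrödinger-Heisenberg uncertainty relation: V + iħΣ/2 must be
    positive semidefinite (checked up to numerical tolerance)."""
    M = V + 1j * hbar * Sigma / 2
    return np.min(np.linalg.eigvalsh(M)) > -1e-10
```

For a single mode, the vacuum-like covariance $V = (\hbar/2) I_2$ saturates \erf{SHUR} (minimum eigenvalue zero), while any $V = v I_2$ with $v < \hbar/2$ violates it.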
The corresponding stochastic master equation, sometimes referred to as a quantum filtering equation \cite{Belav87,Bel92} for reasons that will become apparent, in the $M$ representation \cite{ChiWis11} is \begin{equation}\label{SME} \hbar{\rm d} \rho_{\text F} = {\cal L}\rho_{\text F}{\rm d} t + \sqrt{\hbar}{\rm d}{\bf w}_{\text F}^{\top}{\cal H}[M^\dagger \hat{\bf c}]\rho_{\text F}\,. \end{equation} Here, ${\cal H}[\hat{\bf a}]\bullet =\hat{\bf a}\bullet+\bullet\hat{\bf a}^\ddagger-\text{Tr}[\bullet(\hat{\bf a}+\hat{\bf a}^\ddagger)]\bullet$, and the initial condition is $\rho_{\text F}(t_0) = \rho_0$. We have also implicitly introduced a vector of measurement currents ${\bf y}{\rm d} t = \langle M^\dagger\hat{\bf c} + M^{\top}\hat{\bf c}^\ddagger\rangle_{\text F}{\rm d} t + {\rm d}{\bf w}_{\text F}$ where $\ex{\bullet}_{\text F} := \text{Tr}[\bullet \rho_{\text F}]$ through the vector of innovations ${\rm d}{\bf w}_{\text F}$, which satisfies similar conditions to \erf{WeinCond}. To ensure that evolution under \erf{SME} does not result in an invalid quantum state, it is necessary and sufficient \cite{ChiWis11} for $M$ to satisfy $MM^\dagger = {\rm diag}(\eta_1,\eta_2,...,\eta_M)$, where $\eta_k$ can be interpreted as the monitoring efficiency of the channel $\hat{c}_k$. Note, we can also define an un-normalized filtered state $\tilde\rho_{\text F}$, which explicitly depends on the measurement results ${\bf y}{\rm d} t$ (instead of the innovation ${\rm d}{\bf w}_{\text F}$), reflecting the observer's knowledge of the system. This un-normalized filtered state satisfies the stochastic master equation \begin{equation}\label{USME} \hbar{\rm d} \tilde\rho_{\text F} = {\cal L}\tilde\rho_{\text F}{\rm d} t + \sqrt{\hbar}{\bf y}^{\top}\widetilde{\cal H}[M^\dagger \hat{\bf c}] \tilde\rho_{\text F}{\rm d} t\,, \end{equation} where $\widetilde{\cal H}[\hat{\bf a}]\bullet =\hat{\bf a}\bullet+\bullet\hat{\bf a}^\ddagger$. 
Restricting the discussion to LGQ systems, we can express the vector of measurement current as \begin{equation} {\bf y}{\rm d} t = C\ex{\hat{\bf x}}_{\text F}{\rm d} t + {\rm d}{\bf w}_{\text F}\,, \end{equation} where $C = 2\sqrt{\hbar^{-1}} T^{\top}\bar{C}$, $T^{\top} \!= ({\rm Re}[M^{\top}],{\rm Im}[M^{\top}])$, and ${\rm d}{\bf w}_{\text F} \equiv {\bf y}{\rm d}t - C\ex{\hat\bx}_{\text F}{\rm d}t$. From the stochastic master equation in \erf{SME}, we can derive the equations for the mean and covariance of the filtered state, giving \begin{align} &{\rm d}\ex{\hat\bx}_{\text F}=\, A\ex{\hat\bx}_{\text F} {\rm d}t + {\cal K}^{+}[V_{\text F}]{\rm d}{\bf w}_{\text F}\,,\label{qfm}\\ &\frac{{\rm d}V_{\text F}}{{\rm d}t} = \, AV_{\text F} +V_{\text F} A^{\top} \!+ D - {\cal K}^{+} [V_{\text F}] {\cal K}^{+} [V_{\text F}]^{\top}\,, \label{qVf} \end{align} with initial conditions $\ex{\hat\bx}_{\text F}(t_0) = \ex{\hat\bx}_0$ and $V_{\text F}(t_0) = V_0$. The optimal Kalman gain matrix, ${\cal K}^+[V_{\text F}]$, which we will later refer to as a {\em kick} matrix, is defined in \erf{Kick}, with the measurement back-action $\Gamma = -\sqrt{\hbar}T^{\top} \!S\bar{C}\Sigma^{\top}$. Note that these equations for the filtered quantum state have exactly the same form as the classical Kalman-Bucy filtering equations. \subsection{Retrofiltered Effect and Smoothed Weak-value State} The retrofiltered effect gives the probability density of a measurement result occurring at a later time given a particular quantum state at the current time: \begin{equation} \wp(\fut{\color{nblack}\rm O}|\rho) = \text{Tr}[\rho\hat E_{\text R}]\,, \end{equation} where $\hat{E}_{\text R}$ is a function of the future record $\fut{\color{nblack}\rm O}$. The effect $\hat{E}_{\text R}$ can be computed backward in time from a final uninformative effect $\hat E_{\text R}(T) \propto \hat I$.
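As an illustration of how the measurement matrix $C$ and the back-action $\Gamma$ arise from the unraveling matrix $M$, the following sketch (ours, not from the original; NumPy assumed, $\hbar = 1$, and the matrix shapes follow the single-mode conventions used here) implements the relations above together with the validity condition $MM^\dagger = {\rm diag}(\eta_1,\ldots,\eta_M)$.

```python
import numpy as np

hbar = 1.0  # units with ħ = 1 (our convention for this sketch)

def measurement_matrices(M, Cbar, Sigma, S):
    """Measurement matrix C and back-action Γ from the unraveling matrix M:
    C = 2 sqrt(1/ħ) Tᵀ C̄ with Tᵀ = (Re[Mᵀ], Im[Mᵀ]), and
    Γ = -sqrt(ħ) Tᵀ S C̄ Σᵀ."""
    # validity condition: M M† must be diagonal (entries η_k are the
    # monitoring efficiencies)
    MMd = M @ M.conj().T
    assert np.allclose(MMd, np.diag(np.diag(MMd).real)), "M M† not diagonal"
    T_top = np.hstack((M.T.real, M.T.imag))   # this is Tᵀ
    C = 2 / np.sqrt(hbar) * T_top @ Cbar
    Gamma = -np.sqrt(hbar) * T_top @ S @ Cbar @ Sigma.T
    return C, Gamma
```

For a single real channel monitored with efficiency $\eta$ (so $M = [\sqrt{\eta}]$) and, purely for illustration, $\bar{C} = I_2$, one finds $C = 2\sqrt{\eta}\,(1, 0)$ and $\Gamma = -\sqrt{\eta}\,(1, 0)$.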
The stochastic equation for the (unnormalized) retrofiltered effect $\hat{E}_{\text R}$ is obtained by taking the adjoint of \erf{USME}, giving \begin{equation}\label{USEE} -\hbar{\rm d}\hat{E}_{\text R} = {\cal L}^\dagger \hat{E}_{\text R} {\rm d} t + \sqrt{\hbar}{\bf y}^{\top}\widetilde{{\cal H}}[M^{\top}\hat{\bf c}^\ddagger] \hat{E}_{\text R}{\rm d} t\,, \end{equation} where ${\cal L}^\dagger$ is the adjoint of the Lindbladian superoperator. Note that \erf{USEE} is not trace-preserving and evolves backward in time. Following a similar logic to that presented in the classical case, we will normalize the retrofiltered effect, as ultimately we are interested in a smoothed state which will require normalization regardless. In doing so, we obtain a normalized retrofiltered effect $\hat{E}_{\text R}'$ \cite{ZhaMol17}, \begin{equation} \begin{split} -\hbar{\rm d} \hat{E}'_{\text R} = {}& {\cal L}^\dagger \hat{E}'_{\text R} {\rm d} t - \ex{\hat\kappa}_{\text R}\hat{E}'_{\text R}{\rm d} t \\ & + \sqrt{\hbar}{\rm d}{\bf w}_{\text R}^{\top}{\cal H}[M^{\top}\hat{\bf c}^\ddagger] \hat E_{\text R}'\,, \end{split} \end{equation} where ${\rm d}{\bf w}_{\text R} = {\bf y}{\rm d} t - \ex{M^\dagger\hat{\bf c} + M^{\top}\hat{\bf c}^\ddagger}_{\text R}{\rm d} t$ with $\ex{\bullet}_{\text R} := \text{Tr}[\bullet \hat{E}'_{\text R}]$ and $\hat\kappa = \hat{\bf c}^{\top}\hat{\bf c}^\ddagger - \hat{\bf c}^\dagger\hat{\bf c}$. Considering an LGQ system, the Wigner function for the normalized retrofiltered effect is a normalized Gaussian, i.e.,~$W_{\text R}({\check{ \bf x}}) = g({\check{ \bf x}};\ex{\hat\bx}_{\text R},V_{\text R})$.
Consequently, we can obtain, in a similar way to the filtered case in \erfs{qfm}{qVf}, the equations for the retrofiltered mean and covariance, \begin{align} &- {\rm d}\ex{\hat\bx}_{\text R} = -A\ex{\hat\bx}_{\text R} {\rm d}t + {\cal K}^{-} [V_{\text R}]{\rm d}{\bf w}_{\text R},\label{qrm}\\ &- \frac{{\rm d}V_{\text R}}{{\rm d}t} = -AV_{\text R} - V_{\text R} A^{\top} \!+ D - {\cal K}^{-} [V_{\text R}] {\cal K}^{-} [V_{\text R}]^{\top}.\label{qVr} \end{align} These equations completely describe the effect, with the final condition $V_{\text R}(T) = \infty$. Once again, there is no sensible final condition for the retrofiltered mean due to the infinite covariance. Following the same procedure presented in the classical case, we obtain \erfs{zr}{Lr}, where in the quantum case ${\bf z}_{\text R} := \Lambda_{\text R}\ex{\hat\bx}_{\text R}$. Following the classical equations, one might think that we could obtain a Gaussian smoothed quantum state $W_{\rm SWV}({\check{ \bf x}}) = g({\check{ \bf x}};\ex{\hat\bx}_{\rm SWV},V_{\rm SWV})$, with mean $\ex{\hat\bx}_{\rm SWV}$ and covariance $V_{\rm SWV}$ given by \begin{align} \ex{\hat\bx}_{\rm SWV} &= V_{\rm SWV}\left[V_{\text F}^{-1}\ex{\hat\bx}_{\text F} + V_{\text R}^{-1}\ex{\hat\bx}_{\text R}\right]\,,\label{wvsm}\\ V_{\rm SWV} &= \left[V_{\text F}^{-1} + V_{\text R}^{-1}\right]^{-1}.\label{wvsV} \end{align} While this construction might seem valid, {\color{nblack} we will show using an example in Sec.~\ref{sec-Traj} that the SWV covariance does not always satisfy the Schr\"odinger-Heisenberg uncertainty relation~\erf{SHUR}, as it would if it were a valid quantum state.} The problem lies with {\color{nblack} how the classical smoothed state converts to the quantum analogue.
The above procedure for Gaussian states is equivalent to taking the symmetrized product of the filtered state and the retrofiltered effect}~\cite{LCW19,LCW-QS19}, \begin{equation} \varrho_{\rm SWV} = \frac{\rho_{\text F} \circ \hat{E}_{\text R}}{\text{Tr}[\rho_{\text F} \circ \hat{E}_{\text R}]}\,. \end{equation} Here $A \circ B = (AB + BA)/2$ denotes the Jordan product \cite{Jordan33,JorNeuWig34}, and the denominator $\text{Tr}[\rho_{\text F} \circ \hat E_{\text R}]$ ensures that the state is normalized. We are using $\varrho$ to denote the SWV state to stress that this is not a valid quantum state, which would be represented by a density matrix $\rho$. The reason $\varrho_{\rm SWV}$ is not a valid quantum state is that, in general, the retrofiltered effect does not commute with the filtered quantum state. As a result, the SWV state is not guaranteed to be positive semidefinite \cite{Tsa09b,LCW-QS19}. {\color{nblack} Thus we turn to quantum state smoothing theory instead.} \subsection{LGQ State Smoothing} For the quantum state smoothing theory \cite{GueWis15}, we consider an open quantum system coupled to two baths. In principle, each of these baths can comprise any number of physically distinct baths, but for simplicity we will consider them collectively. An observer, Alice, monitors one of the baths and is able to construct a measurement record $\color{nblack}\rm O$, which we will refer to as the `observed' record. A (perhaps hypothetical) secondary observer, Bob, monitors the remaining bath and constructs his own measurement record $\color{nblack}\rm U$ that is unobserved by Alice, which we will call the `unobserved' record. See Fig.~\ref{Fig-QSS}. Now Bob, assumed to have access to both the observed and the unobserved record, can estimate the quantum state conditioned on both $\past{\color{nblack}\rm O}$ and $\past{\color{nblack}\rm U}$.
That is, he obtains a state with maximal information about the quantum system, which can be regarded as the {\em true} state $\rho_{\text T}:=\rho_{\past{\color{nblack}\rm O}\past{\color{nblack}\rm U}}$. However, since Alice does not have access to $\past{\color{nblack}\rm U}$, she can only obtain an estimate of the true state based on her observed measurement record. In this case she can construct a conditioned state with the form \begin{equation} \label{QSS} \rho_{\rm C} = \sum_{\past{\color{nblack}\rm U}} \wp_{\rm C}(\past{\color{nblack}\rm U}) \rho_{\text T}\,, \end{equation} where the conditioning `${\rm C}$' depends on the amount of the observed measurement record used in the estimation. If Alice wishes to obtain a filtered state, i.e., ${\rm C} \equiv {\rm F}$, the conditioned probability distribution for the unobserved record becomes $\wp_{\text F}(\past{\color{nblack}\rm U}) = \wp(\past{\color{nblack}\rm U}|\past{\color{nblack}\rm O})$. To obtain a smoothed state, i.e., ${\rm C} \equiv {\rm S}$, the conditional probability becomes $\wp_{\text S}(\past{\color{nblack}\rm U}) = \wp(\past{\color{nblack}\rm U}|\both{\color{nblack}\rm O})$. For LGQ state smoothing \cite{LCW19}, the true state of the system is represented by a Gaussian Wigner function $W_{\text T}({\check{ \bf x}}) = g({\check{ \bf x}};\ex{\hat\bx}_{\text T},V_{\text T})$.
We introduce an unobserved measurement current ${\bf y}_{\text{u}}{\rm d} t= C_{\text{u}} \ex{\hat\bx}_{\text T}{\rm d} t + {\rm d}{\bf w}_{\text{u}}$ to account for Bob's monitoring of the environment, in addition to Alice's observed measurement current ${\bf y}_{\text{o}}{\rm d} t = C_{\text{o}}\ex{\hat\bx}_{\text T}{\rm d} t + {\rm d}{\bf w}_{\text{o}}$, where ${\rm d}{\bf w}_{\text{u}}$ and ${\rm d}{\bf w}_{\text{o}}$ are the unobserved and observed innovations, respectively. The true state of the system can be obtained by conditioning the estimate on both Alice's and Bob's past measurement records, giving \begin{align} &{\rm d}\ex{\hat \bx}\god = A\ex{\hat \bx}\god{\rm d}t + {\cal K}^{+}_{\text{o}}[V_{\text T}]{\rm d}{\bf w}_{\text{o}} + {\cal K}^{+}_{\text{u}}[V_{\text T}]{\rm d} {\bf w}_{\text{u}}\,,\label{truest}\\ &\frac{{\rm d}V_{\text T}}{{\rm d}t} = A V_{\text T} + V_{\text T} A^{\top} \!+ D \nonumber\\ &\qquad\qquad- {\cal K}^{+}_{\text{o}}[V_{\text T}]{\cal K}^{+} _{\text{o}}[V_{\text T}]^{\top} - {\cal K}^{+}_{\text{u}}[V_{\text T}] {\cal K}^{+}_{\text{u}}[V_{\text T}]^{\top}\,, \label{truvar} \end{align} where ${\cal K}^\pm_{\rm r} [V] = VC^{\top}_{\rm r} \pm \Gamma^{\top}_{\rm r}$ for ${\rm r} \in \{{\rm o},{\rm u}\}$ and the initial conditions are $\ex{\hat \bx}\god(t_0) = \ex{\hat\bx}_0$ and $V_{\text T}(t_0) = V_0$. This follows straightforwardly by extending \erfs{qfm}{qVf} to two measurement records. Since we are restricting to Gaussian states, the true state depends on $\past{\color{nblack}\rm U}$ only via the mean in \erf{truest}. 
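To make these equations concrete, the following Python sketch (our own illustration, not code accompanying the paper) integrates the true-covariance equation \erf{truvar} with a simple Euler scheme, using the on-threshold OPO matrices and measurement settings that appear later in the paper ($A = {\rm diag}(0,-2)$, $D = \hbar I_2$ with $\hbar = 2$, $\eta_{\text{o}} = \eta_{\text{u}} = 0.5$, $\theta_{\text{o}} = \pi/4$, $\theta_{\text{u}} = -\pi/8$); all helper names are ours:

```python
import numpy as np

hbar = 2.0
A = np.diag([0.0, -2.0])      # OPO drift matrix (time in units of 1/chi)
D = hbar * np.eye(2)          # OPO diffusion matrix

def meas_matrices(eta, theta):
    # Homodyne measurement matrix and back-action for the OPO loss channel
    C = 2 * np.sqrt(eta / hbar) * np.array([[np.cos(theta), np.sin(theta)]])
    return C, -hbar * C / 2

C_o, G_o = meas_matrices(0.5, np.pi / 4)    # Alice
C_u, G_u = meas_matrices(0.5, -np.pi / 8)   # Bob

def K(V, C, G):
    # K^+[V] = V C^T + Gamma^T
    return V @ C.T + G.T

V = (hbar / 2) * np.diag([10.0, 0.5])   # initial true covariance
dt = 1e-3
for _ in range(20_000):                 # integrate up to t = 20
    dV = (A @ V + V @ A.T + D
          - K(V, C_o, G_o) @ K(V, C_o, G_o).T
          - K(V, C_u, G_u) @ K(V, C_u, G_u).T)
    V = V + dV * dt
```

The integration preserves the symmetry and positivity of $V_{\text T}$, and the initially large $q$-variance shrinks toward its steady-state value.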
This means that we can replace the (symbolic) summation in \erf{QSS} by an integral over the true mean, so that the smoothed state (${\rm C=S}$) is given by \begin{equation}\label{conv1} \rho_{\text S} = \int \wp_{\text S}(\ex{\hat \bx}\god) \rho_{\text T}(\ex{\hat{\bf x}}_{\text T}) {\rm d}\ex{\hat \bx}\god\,, \end{equation} where the PDF $\wp_{\text S}(\ex{\hat \bx}\god)$ is for the true mean conditioned on the past-future observed record. We can replace the smoothed state and the true state by their Wigner functions, the latter of which is replaced by a Gaussian $g({\check{ \bf x}};\mathring{\bf x},V_{\text T})$. Here we have defined a haloed variable $\mathring{\bf x} = \ex{\hat\bx}_{\text T}$ for notational simplicity\footnote{\color{nblack} We use this halo notation because these haloed variables are effectively an intermediary between an estimate known only to an omniscient observer (i.e., the true state) and estimates available to partially ignorant observers (e.g., the smoothed state).}. To obtain the smoothed state in \erf{conv1}, we convolve the true state with the conditional PDF (which is a classically smoothed LG distribution) $\wp_{\text S}(\mathring{\bf x}) = g(\mathring{\bf x};\ex{\mathring{\bf x}}_{\text S},\halo{V}_{\text S})$, where $\ex{\mathring{\bf x}}_{\text S}$ and $\halo{V}_{\text S}$ will be determined later. Since both functions in the convolution are Gaussian, the resulting smoothed state is also Gaussian. Consequently, we can rewrite \erf{conv1} as \begin{equation} g({\check{ \bf x}};\ex{\hat{\bf x}}_{\text S},V_{\text S}) = \int g(\mathring{\bf x};\ex{\mathring{\bf x}}_{\text S},\halo{V}_{\text S}) g({\check{ \bf x}};\mathring{\bf x},V_{\text T}) {\rm d} \mathring{\bf x}\,. 
\end{equation} From the properties of a Gaussian convolution, we find that $\ex{\hat\bx}_{\text S} = \ex{\mathring{\bf x}}_{\text S}$ and $V_{\text S} = \halo{V}_{\text S} + V_{\text T}$. All that remains is to determine the haloed mean and covariance of the smoothed Gaussian PDF $\wp_{\text S}(\ex{\hat \bx}\god)$. By rewriting the equation for the true mean, \erf{truest}, as \begin{equation}\label{hLLE} {\rm d}\mathring{\bf x} = A\mathring{\bf x}{\rm d} t + \mathring E{\rm d}\mathring{\bf v}_{\text{p}}\,, \end{equation} where $\mathring{E}{\rm d}\mathring{\bf v}_{\text{p}} = {\cal K}^+_{\text{o}}[V_{\text T}]{\rm d}{\bf w}_{\text{o}} + {\cal K}^+_{\text{u}}[V_{\text T}]{\rm d}{\bf w}_{\text{u}}$, we see that the system evolves according to a classical linear Langevin equation of the form in \erf{LLE}. Furthermore, the observed measurement record ${\bf y}_{\text{o}}{\rm d}t = C_{\text{o}}\mathring{\bf x}{\rm d}t + {\rm d}{\bf w}_{\text{o}}$ is linear in $\mathring{\bf x}$ and we can define a new cross-correlation $\mathring\Gamma^{\top} = {\cal K}^+_{\text{o}}[V_{\text T}]$. 
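The Gaussian convolution identity invoked here (the means add and the covariances add) can be checked by sampling; the numbers below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

mean_S = np.array([1.0, -0.5])               # stand-in for the haloed smoothed mean
V_halo = np.array([[0.5, 0.1], [0.1, 0.3]])  # stand-in for the haloed smoothed covariance
V_true = np.array([[0.2, 0.0], [0.0, 0.2]])  # stand-in for the true covariance

# Convolving g(x; mean_S, V_halo) with g(x'; x, V_true) is equivalent to
# adding two independent Gaussian random vectors:
samples = (rng.multivariate_normal(mean_S, V_halo, size=200_000)
           + rng.multivariate_normal(np.zeros(2), V_true, size=200_000))

# Empirically, the mean is mean_S and the covariance is V_halo + V_true.
```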
Since the PDF satisfies the requirements for classical LG state estimation, we can use \erfs{csm}{cVs} and obtain the haloed smoothed mean and covariance, given by \begin{align} \ex{\mathring{\bf x}}_{\text S} &= \halo{V}_{\text S}\left[\halo{V}_{\text F}^{-1}\ex{\mathring{\bf x}}_{\text F} + \halo{V}_{\text R}^{-1}\ex{\mathring{\bf x}}_{\text R}\right]\,,\label{hxs}\\ \halo{V}_{\text S} &= \left[\halo{V}_{\text F}^{-1} + \halo{V}_{\text R}^{-1}\right]^{-1}\,.\label{hVs} \end{align} We can obtain the haloed filtered mean and covariance, $\ex{\mathring{\bf x}}_{\text F}$ and $\halo{V}_{\text F}$, and haloed retrofiltered mean and covariance, $\ex{\mathring{\bf x}}_{\text R}$ and $\halo{V}_{\text R}$, by conditioning $\mathring{\bf x}$ on the past observed and future observed measurement records, respectively. By conditioning \erf{hLLE} on only the past observed measurement record, we obtain the haloed filtered variables \begin{align} {\rm d}\ex{\mathring{\bf x}}_{\text F} = &\,\, A\ex{\mathring{\bf x}}_{\text F}{\rm d}t + {\cal K}^{+}_{\text{o}}[\halo{V}_{\text F}+V_{\text T}] {\rm d}\mathring{\bf w}_{\text F}\,,\\ \ddt{\halo{V}_{\text F}} = & \,\, A\halo{V}_{\text F} + \halo{V}_{\text F} A^{\top} \!+ \mathring D \nonumber\\ &- {\cal K}^{+}_{\text{o}}[\halo{V}_{\text F} +V_{\text T}]{\cal K}^{+} _{\text{o}}[\halo{V}_{\text F} +V_{\text T}]^{\top}\,,\label{hVfil} \end{align} where $\mathring D = {\cal K}^+_{\text{o}}[V_{\text T}]{\cal K}^+_{\text{o}}[V_{\text T}]^{\top} \!+ {\cal K}^+_{\text{u}}[V_{\text T}]{\cal K}^+_{\text{u}}[V_{\text T}]^{\top}$ and ${\rm d}\mathring{\bf w}_{\text F} = {\bf y}_{\text{o}}{\rm d}t - C_{\text{o}}\ex{\mathring{\bf x}}_{\text F} {\rm d}t$. 
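Equations \erfs{hxs}{hVs} are the standard inverse-covariance (information-weighted) fusion of two Gaussian estimates. A minimal sketch, with a hypothetical helper name:

```python
import numpy as np

def fuse(x_F, V_F, x_R, V_R):
    """Combine filtered and retrofiltered Gaussian estimates by
    inverse-covariance weighting, as in the haloed smoothing equations."""
    V_S = np.linalg.inv(np.linalg.inv(V_F) + np.linalg.inv(V_R))
    x_S = V_S @ (np.linalg.inv(V_F) @ x_F + np.linalg.inv(V_R) @ x_R)
    return x_S, V_S

# Equal-confidence case: the fused mean is the average of the two means
# and the fused covariance is halved.
x_S, V_S = fuse(np.array([1.0, 0.0]), np.eye(2),
                np.array([3.0, 2.0]), np.eye(2))
```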
From \erf{qVf} and (\ref{hVfil}), it can easily be shown that $\halo{V}_{\text F} = V_{\text F} - V_{\text T}$, and using this relationship we can show that $\ex{\hat\bx}_{\text F} = \ex{\mathring{\bf x}}_{\text F}$. Similarly, the haloed retrofiltered variables are given by \begin{align} -{\rm d}\ex{\mathring{\bf x}}_{\text R} = &-A\ex{\mathring{\bf x}}_{\text R}{\rm d}t + {\cal K}^{-}_{\text{o}}[\halo{V}_{\text R} - V_{\text T}]{\rm d} \mathring{\bf w}_{\text R}\,,\\ -\ddt{\halo{V}_{\text R}} = & -A\halo{V}_{\text R} - \halo{V}_{\text R} A^{\top} \!+ \mathring D \nonumber\\ & - {\cal K}^{-}_{\text{o}}[\halo{V}_{\text R} -V_{\text T}]{\cal K}^{-}_{\text{o}}[\halo{V}_{\text R} - V_{\text T}]^{\top}\,,\label{hVR} \end{align} where ${\rm d}\mathring{\bf w}_{\text R} = {\bf y}_{\text{o}}{\rm d} t - C_{\text{o}}\ex{\mathring{\bf x}}_{\text R}{\rm d}t$. It can be shown, using \erf{qVr}, that $\halo{V}_{\text R} = V_{\text R} + V_{\text T}$, and from this we can also show that $\ex{\hat\bx}_{\text R} = \ex{\mathring{\bf x}}_{\text R}$. Finally, using \erfs{hxs}{hVs}, we can compute the mean and covariance of the smoothed quantum state: \begin{align} \ex{\hat \bx}\sm = (V_{\text S} - V_{\text T}) [(V_{\text F} &- V_{\text T})^{-1}\ex{\hat\bx}_{\text F} \nonumber\\ &+ (V_{\text R}+V_{\text T})^{-1} \ex{\hat\bx}_{\text R}]\,,\label{estsm}\\ V_{\text S} = \big[(V_{\text F} - V_{\text T} )^{-1} & + (V_{\text R} + V_{\text T})^{-1}\big]^{-1} + V_{\text T}\,. \label{varsm} \end{align} Interestingly, we notice that the equations for the smoothed quantum state are similar to the equations for the SWV state in \erfs{wvsm}{wvsV}. 
In fact they are identical if we take $V_{\text T}\to 0$, which is equivalent to a classical limit where we set $\hbar\to 0$ in \erf{SHUR}. Unsurprisingly, the LGQ smoothed covariance places less emphasis on the retrofiltered covariance than the SWV covariance does. This can be seen from \erf{varsm}, where $(V_{\text R} + V_{\text T})^{-1}$ is smaller than $V_{\text R}^{-1}$. The reason this is unsurprising is that combining the filtered covariance with the retrofiltered covariance resulted in the SWV covariance violating the \sch-Heisenberg uncertainty relation, which is avoided when combining with the smaller $(V_{\text R} + V_{\text T})^{-1}$. As was the case with the retrofiltered mean, the haloed retrofiltered mean $\ex{\mathring{\bf x}}_{\text R}$ does not have a well-defined final condition, due to the haloed retrofiltered covariance being infinite at the final time. However, we can solve this problem in the same way as we did for the retrofiltered mean and covariance, by defining the haloed retrofiltered informative mean $\mathring{\bf z}_{\text R} = \halo\Lambda_{\text R}\ex{\mathring{\bf x}}_{\text R}$ and the corresponding information matrix $\halo\Lambda_{\text R} = \halo{V}_{\text R}^{-1}$. 
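Since $\halo{V}_{\text R}$ diverges at the final time, the information matrix satisfies $\halo\Lambda_{\text R}(T) = 0$; substituting $\halo\Lambda_{\text R} = \halo{V}_{\text R}^{-1}$ into \erf{varsm} then shows that smoothing offers no improvement over filtering at $t = T$. A quick numerical check, with hypothetical covariances:

```python
import numpy as np

hbar = 2.0
V_T = (hbar / 2) * np.eye(2)              # a pure true covariance (illustrative)
V_F = np.array([[2.0, 0.4], [0.4, 1.6]])  # a hypothetical filtered covariance
Lam_R = np.zeros((2, 2))                  # information matrix at the final time

# Smoothed covariance in information form: with Lam_R = 0 it collapses to V_F.
V_S = np.linalg.inv(np.linalg.inv(V_F - V_T) + Lam_R) + V_T
```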
Using \erf{V2L}, we obtain \begin{align} -{\rm d}\mathring{\bf z}_{\text R} =&\, (\bar{A} - \bar{D}\halo\Lambda_{\text R})\!^{\top} \mathring{\bf z}_{\text R}{\rm d} t \nonumber\\&+ (C_{\text{o}}^{\top} \!- \halo\Lambda_{\text R} V_{\text T} C_{\text{o}}^{\top} \!- \halo\Lambda_{\text R} \Gamma_{\text{o}}^{\top}) {\bf y}_{\text{o}}{\rm d} t\,,\\ -\frac{{\rm d} \halo\Lambda_{\text R}}{{\rm d} t} =&\, \halo\Lambda_{\text R}\bar{A} + \bar{A}^{\top}\halo\Lambda_{\text R} - \halo\Lambda_{\text R}\bar{D}\halo\Lambda_{\text R} + C_{\text{o}}^{\top} C_{\text{o}}\,, \end{align} where $\bar{A} = A - \Gamma_{\text{o}}^{\top} C_{\text{o}} - V_{\text T} C_{\text{o}}^{\top} C_{\text{o}}$ and $\bar{D} = {\cal K}^{+}_{\text{u}}[V_{\text T}]{\cal K}^{+}_{\text{u}}[V_{\text T}]^{\top}$. The final conditions become $\mathring{\bf z}_{\text R}(T) = 0$ and $\halo\Lambda_{\text R}(T) = 0$. With these definitions, we can further simplify the LGQ smoothing equations, \erfs{estsm}{varsm}, to \begin{align} \ex{\hat \bx}\sm = (V_{\text S} - V_{\text T}) [(V_{\text F} &- V_{\text T})^{-1}\ex{\hat\bx}_{\text F} + \mathring{\bf z}_{\text R}]\,,\label{estsm2}\\ V_{\text S} = \big[(V_{\text F} - V_{\text T} )^{-1} & + \halo\Lambda_{\text R}\big]^{-1} + V_{\text T}\,. \label{varsm2} \end{align} \section{Physical LGQ Systems} \label{sec-PS} For the remainder of this paper, we will consider two examples of LGQ systems: an on-threshold optical parametric oscillator and a {\color{nblack} noisy linear attenuator}. In both examples, Alice and Bob perform homodyne measurements on the environment, where we use measurement efficiencies to quantify the fraction of the environment that they can observe. 
\subsection{On-Threshold Optical Parametric Oscillator}\label{Sec-OPO} The first system we consider is an optical parametric oscillator (OPO) with one output channel (loss, at rate unity). This is described by the master equation \begin{equation} \hbar \dot\rho = i\chi[(\hat q\hat p + \hat p \hat q)/2,\rho] + \gamma{\cal D}[\hat q +i\hat p] \rho\,, \end{equation} where the number of modes is $N = 1$ and $\hat\bx = (\hat{q},\hat{p})\!^{\top}$. We will consider the on-threshold parameter regime, where $\chi = \gamma$, and for simplicity we measure time in units of $\chi^{-1}$. The first term is generated by the squeezing Hamiltonian $\hat{H} = (\hat q\hat p + \hat p \hat q)/2$ and the second term is the Lindblad term with $\hat{\bf c} = \hat q +i\hat p$ describing photon loss. From these we find that \begin{equation} G = \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right)\,, \qquad \bar{C} = I_2\,, \end{equation} by remembering that $\hat{H} = \hat\bx^{\top} G\hat\bx/2$ and $\hat{\bf c} = (I_2,iI_2)\bar{C}\hat\bx$. We then find the drift and diffusion matrices $A = {\rm diag}(0,-2)$ and $D = \hbar I_2$. Let us assume that the output (loss) channel is monitored by Alice and Bob using homodyne measurements with homodyne phases $\theta_{\text{o}}$ and $\theta_{\text{u}}$ and measurement efficiencies $\eta_{\text{o}}$ and $\eta_{\text{u}}$, respectively. The resulting measurement current for this type of measurement is \begin{equation} {\bf y}_{\rm r}{\rm d} t = \sqrt{\eta_{\rm r}}\ex{e^{-i\theta_{\rm r}} a + e^{i\theta_{\rm r}}a^\dagger}_{\text T}{\rm d} t + {\rm d} {\bf w}_{\rm r} \end{equation} for ${\rm r} \in \{{\rm o},{\rm u}\}$, where $a = (\hat{q} + i\hat{p})/\sqrt{2}$ is the annihilation operator. As a result, we can define $M_{\rm r} = \sqrt{\eta_{\rm r}}e^{i\theta_{\rm r}}$, where $M$ is the {\color{nblack} unraveling} matrix introduced in \erf{SME}. 
Thus, Alice's measurement and back-action matrices are $C_{\text{o}} = 2\sqrt{\eta_{\text{o}}/\hbar} (\cos\theta_{\text{o}},\sin\theta_{\text{o}})$ and $\Gamma_{\text{o}} = -\hbar C_{\text{o}}/2$, respectively. Similarly, Bob's unobserved measurement and back-action matrices are $C_{\text{u}} = 2\sqrt{\eta_{\text{u}}/\hbar} (\cos\theta_{\text{u}}, \sin\theta_{\text{u}})$ and $\Gamma_{\text{u}} = -\hbar C_{\text{u}}/2$, respectively. \subsection{Noisy Linear Attenuator}\label{ssec-LA} The second system we consider is a single-mode ($N=1$) {\color{nblack} noisy linear attenuator}, described by the master equation \begin{equation} \hbar \dot\rho = \gamma_{\downarrow}{\cal D}[\hat q + i\hat p]\rho + \gamma_{\uparrow}{\cal D}[\hat q - i\hat p]\rho\,, \end{equation} where $\gamma_{\downarrow}$ and $\gamma_{\uparrow}$ are the rates of photon loss and gain, respectively. The fact that this system acts as an attenuator can be seen from the average evolution of the annihilation operator, \begin{equation} \ex{\dot{\hat{a}}} = (\gamma_{\uparrow} - \gamma_{\downarrow})\ex{\hat a}\,. \end{equation} For the system to act as an attenuator rather than an amplifier, we require $\gamma_{\downarrow}>\gamma_{\uparrow}$. Since there are no Hamiltonian dynamics for this system, i.e.,~$G = 0$, we only need to concern ourselves with the vector of Lindblad operators, \begin{equation} \hat{\bf c} = \left[\sqrt{\gamma_{\downarrow}}(\hat q + i\hat p),\sqrt{\gamma_{\uparrow}}(\hat q - i\hat p)\right]^{\top}\,. \end{equation} Note that the square brackets here denote a vector of Lindblad operators, not a commutator. From this, we calculate \begin{equation} \bar{C} = \left(\begin{array} {cccc} \sqrt{\gamma_{\downarrow}}&\sqrt{\gamma_{\uparrow}}&0&0\\ 0&0&\sqrt{\gamma_{\downarrow}}&-\sqrt{\gamma_{\uparrow}}\\ \end{array}\right)^{\top} \end{equation} and arrive at $A = (\gamma_{\uparrow} - \gamma_{\downarrow})I_2$ and $D = \hbar(\gamma_{\uparrow} + \gamma_{\downarrow})I_2$. 
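As a quick sanity check on the attenuator matrices (a sketch with illustrative rates): because $A$ is proportional to the identity, the mean field decays exponentially at rate $\gamma_{\downarrow} - \gamma_{\uparrow}$, consistent with the equation for $\ex{\dot{\hat a}}$ above:

```python
import numpy as np

hbar = 2.0
gamma_down, gamma_up = 1.0, 0.999   # attenuator requires gamma_down > gamma_up

A = (gamma_up - gamma_down) * np.eye(2)          # drift matrix
D = hbar * (gamma_up + gamma_down) * np.eye(2)   # diffusion matrix

# A = lam * I with lam < 0, so <x>(t) = exp(lam t) <x>(0), and hence
# <a> = (<q> + i<p>)/sqrt(2) is attenuated exponentially.
lam = gamma_up - gamma_down
x0 = np.array([1.0, 0.5])
xt = np.exp(lam * 10.0) * x0     # mean after t = 10
```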
In this case, since we are considering homodyne measurements on both channels, we can take $M_{\rm r} = {\rm diag}(\sqrt{\eta_{\downarrow,{\rm r}}}e^{i\theta_{\downarrow,{\rm r}}}, \sqrt{\eta_{\uparrow,{\rm r}}}e^{i\theta_{\uparrow,{\rm r}}})$ for ${\rm r} \in \{{\rm o},{\rm u}\}$. Here we have introduced the measurement efficiencies $\eta_{\downarrow,{\rm r}}$ and $\eta_{\uparrow,{\rm r}}$ for the attenuation and the amplification channels, respectively, to indicate the fraction of each output that is measured by Alice (o) and Bob (u), with the homodyne phases $\theta_{\downarrow,{\rm r}}$ and $\theta_{\uparrow,{\rm r}}$. The measurement and back-action matrices, for either Alice or Bob, are given by \begin{equation} C_{\rm r} = \frac{2}{\sqrt{\hbar}}\left(\begin{array}{cc} \sqrt{\eta_{\downarrow,{\rm r}}\gamma_{\downarrow}}\cos\theta_{\downarrow,{\rm r}} & \sqrt{\eta_{\downarrow,{\rm r}}\gamma_{\downarrow}}\sin\theta_{\downarrow,{\rm r}}\\ \sqrt{\eta_{\uparrow,{\rm r}}\gamma_{\uparrow}}\cos\theta_{\uparrow,{\rm r}} & -\sqrt{\eta_{\uparrow,{\rm r}}\gamma_{\uparrow}}\sin\theta_{\uparrow,{\rm r}}\\ \end{array}\right) \end{equation} and \begin{equation} \Gamma_{\rm r} = \sqrt{\hbar}\left(\begin{array}{cc} -\sqrt{\eta_{\downarrow,{\rm r}}\gamma_{\downarrow}}\cos\theta_{\downarrow,{\rm r}} & -\sqrt{\eta_{\downarrow,{\rm r}}\gamma_{\downarrow}}\sin\theta_{\downarrow,{\rm r}}\\ \sqrt{\eta_{\uparrow,{\rm r}}\gamma_{\uparrow}}\cos\theta_{\uparrow,{\rm r}} & -\sqrt{\eta_{\uparrow,{\rm r}}\gamma_{\uparrow}}\sin\theta_{\uparrow,{\rm r}}\\ \end{array} \right)\,, \end{equation} respectively. For this system there are many scenarios we could consider for Alice and Bob. For example, Alice and Bob could each perfectly monitor one of the channels, or they could each monitor some fraction of the same output channel. 
However, for simplicity, we will only consider the case where Alice perfectly measures the attenuation channel, i.e.,~$\eta_{\downarrow,{\rm o}} = 1$ and $\eta_{\uparrow,{\rm o}}=0$, with a homodyne phase $\theta_{\downarrow,{\rm o}} = \theta_{\text{o}}$, and Bob perfectly measures the amplification channel, i.e.,~$\eta_{\downarrow,{\rm u}} = 0$ and $\eta_{\uparrow,{\rm u}}=1$, with a homodyne phase $\theta_{\uparrow,{\rm u}} = \theta_{\text{u}}$. \section{Example Trajectories} \label{sec-Traj} \begin{figure*} \begin{minipage}{.5\textwidth} \includegraphics[scale = 0.32]{Fig2a.eps} \includegraphics[scale = 0.32]{Fig2c.eps} \includegraphics[scale = 0.32]{Fig2e.eps} \end{minipage}% \begin{minipage}{.5\textwidth} \includegraphics[scale = 0.32]{Fig2b.eps} \includegraphics[scale = 0.32]{Fig2d.eps} \includegraphics[scale = 0.32]{Fig2f.eps} \end{minipage} \caption{A sample realization of the OPO system's state trajectory, where $\eta_{\text{o}} = \eta_{\text{u}} = 0.5$, $\theta_{\text{o}} = \pi/4$, and $\theta_{\text{u}} = -\pi/8$; the time $t$ is in units of $\chi^{-1}$ and the total run time is $T = 4$. We have set $\hbar = 2$ for this simulation. The evolutions in the $q$ and $p$ quadratures, in panels (a) and (b), respectively, clearly show that the smoothed mean (red) outperforms the filtered mean (blue) in terms of estimating the true mean (black). The SWV mean (green), on the other hand, does a poor job of estimating the true mean, as expected. The disparity between the SWV state and the remaining states can clearly be seen in the phase-space diagrams, plotted at four snapshots in time in panels (c)--(f). In (c), the filtered, smoothed, and true states all begin at the same point, with the same covariance (where the ellipse indicates the 1-SD region of the Wigner function). However, the mean of the SWV state (green dot) is largely displaced from the rest and its covariance is significantly smaller. 
As time progresses, the filtered, smoothed, and true states begin to separate and the covariances decrease, with the smoothed covariance sitting somewhere between the filtered and true covariances. At the final time $T$, only the true state is displaced from the remaining states, which are all the same, as there is no future record left.} \label{Fig-Traj} \end{figure*} In this section we will compare the filtered, SWV, and smoothed quantum states in order to see the differences between these estimated states and how well they estimate the true state. We will only consider the OPO system in this section, since the results are similar for the {\color{nblack} noisy linear attenuator}. The measurement scenario we are considering is $\theta_{\text{o}} = \pi/4$ and $\theta_{\text{u}} = -\pi/8$. We have chosen this scenario as it gives an unbiased impression of how the smoothing technique will perform, i.e., it is neither the best nor the worst measurement scheme for the system but somewhere in between. Let us choose the system's initial state with a mean $\ex{\hat\bx}_0 = (0,0)\!^{\top}$ and covariance $V_0 = (\hbar/2)\, {\rm diag}(10,1/2)$. We have chosen the initial condition for the covariance so that it is similar to the unconditioned steady-state covariance, $V = (\hbar/2)\, {\rm diag}(\infty,1/2)$, while still being finite. The trajectories for the $q$ and $p$ quadratures [Figs.~\ref{Fig-Traj}(a) and (b)] show that the smoothed mean (red line) seems to be closer, on average, to the true mean (black line) than the filtered mean (blue line). Therefore, as expected, the smoothed state provides a better estimate of the true state than the filtered state. The SWV mean (green line), on the other hand, bears very little resemblance to the true mean in either quadrature, showing how poorly even the mean of the SWV state works for this purpose. 
We can also see how the covariances, which determine the purity of a Gaussian state, defined as $P = (\hbar/2)^N\sqrt{|V|^{-1}}$, evolve over time in Figs.~\ref{Fig-Traj}(c)--(f). At $t = 0$, in Fig.~\ref{Fig-Traj}(c), the filtered, smoothed, and true states all begin with the same initial covariance $V_0$. As time progresses, the covariances begin to shrink, indicating increasing purity, until they all reach their steady states at around $t = 0.5T$ in Fig.~\ref{Fig-Traj}(e). At this time the true state is guaranteed to be a pure state, and the smoothed state is purer than the filtered state (as the smoothed covariance can fit within the filtered covariance). Moreover, at the final time in Fig.~\ref{Fig-Traj}(f), the smoothed covariance is exactly the same as the filtered covariance, as expected, since there is no more future information to condition on. By contrast, the true state remains in its steady state. The covariance of the SWV state, as one might expect by now, behaves very differently. Initially, it is not the same as that of the initial true state; it is substantially smaller. As time progresses, the SWV covariance reaches its steady state in Fig.~\ref{Fig-Traj}(e), where it is clear that the SWV state is unphysical. It has a purity greater than unity (the SWV covariance can fit entirely within the pure true covariance), violating the \sch-Heisenberg uncertainty relation. At the final time, Fig.~\ref{Fig-Traj}(f), the SWV covariance matches that of the filtered state (as well as the smoothed state), as it must, since there is no future record left. \section{Optimal Measurement Strategies for Quantum State Smoothing} \label{sec-Opt} In the previous section we looked at the improvement in purity that the smoothed state offered over the filtered state. However, the degree of improvement offered by the smoothed state depends on the choice of Alice's and Bob's measurements. 
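The Gaussian purity formula quoted above is easy to evaluate numerically; the covariances below are illustrative, with a minimum-uncertainty state giving $P = 1$:

```python
import numpy as np

hbar = 2.0

def purity(V, N=1):
    # Purity of an N-mode Gaussian state: P = (hbar/2)^N / sqrt(det V)
    return (hbar / 2) ** N / np.sqrt(np.linalg.det(V))

V_pure = (hbar / 2) * np.eye(2)   # minimum-uncertainty (e.g. vacuum) state
V_mixed = hbar * np.eye(2)        # a mixed Gaussian state
```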
In this section we study this dependence and seek a method for predicting the best measurement strategy for Alice and Bob to maximize the purity improvement. In general, the purity of the filtered and smoothed quantum states varies depending on the particular realization of the measurement record ${\color{nblack}\rm O}$. As a result, it is necessary to average over all possible realizations of the observed record ${\color{nblack}\rm O}$ in order to draw any conclusions about the purity improvement. The measure of purity improvement we will investigate in this paper is the relative average purity recovery of a smoothed state. This is the same measure considered in Ref.~\cite{CGW19}, given by \begin{equation} {\cal R} = \frac{{\mathbb E}_{{\color{nblack}\rm O}}[P(\rho_{\text S})] - {\mathbb E}_{{\color{nblack}\rm O}}[P(\rho_{\text F})]} {{\mathbb E}_{{\color{nblack}\rm O}\past{\color{nblack}\rm U}}[P(\rho_{\text T})]- {\mathbb E}_{{\color{nblack}\rm O}}[P(\rho_{\text F})]}\,.\label{RAPR} \end{equation} Here ${\mathbb E}_{\color{nblack}\rm O}[...]$ ($\mathbb{E}_{{\color{nblack}\rm O}\past{\color{nblack}\rm U}}[...]$) represents averaging over all possible realizations of the observed record ${\color{nblack}\rm O}$ (and the past unobserved record $\past{\color{nblack}\rm U}$), and $P(\rho) = \text{Tr}[\rho^2]$ is the purity of a state $\rho$. The {\color{nblack} relative average purity recovery} is a measure of the purity increase given by smoothing compared to filtering on average, relative to the maximum average recovery possible. For Gaussian systems, the expression for the purity recovery can be greatly simplified. The purity of a Gaussian state is independent of the observed and unobserved measurement records, and depends solely on the state's covariance matrix. 
Consequently, we only need to consider a relative purity recovery (RPR) \cite{LCW19}, which simplifies the {\color{nblack} relative average purity recovery} to \begin{equation} {\cal R} = \frac{P_{\text S} - P_{\text F}}{P_{\text T} - P_{\text F}}\,. \end{equation} Here, for Gaussian states, the purity of the conditioned state is $P\c = (\hbar/2)^N\sqrt{|V\c|^{-1}}$. We will now construct three different hypotheses for the optimal measurement scheme for Alice and Bob to maximize the purity recoveries, and compare their predictions to the numerical optimum for the physical examples. \begin{figure*}[t] \centering \includegraphics[width = \textwidth]{Fig3.pdf} \caption{Contour plots of {\color{nblack} (a) the measurement overlap \erf{Overlap_Meas}, (b) the unobserved overlap \erf{Overlap-u}, (c) the observed overlap \erf{Overlap-o}, and (d) the RPR for the {\color{nblack} noisy linear attenuator} system in the steady state} for different values of the observed (Alice) and unobserved (Bob) homodyne phases. {\color{nblack} In this example, the ranges of the unobserved and observed homodyne phases are $\Theta_{\text{u}} = [-\pi/2,\pi/2)$ and $\Theta_{\text{o}} = [-\pi/2,\pi/2)$, respectively.} Note that while (a), (b), and (c) look identical, the scales of the contours are very different due to Alice and Bob measuring different channels. In (d) we see that the {\color{nblack} RPR closely resembles the objective functions in (a)--(c), and the optimal RPR (solid white line), obtained numerically, perfectly matches the maximum of the objective functions.} In all plots we consider the case where $\gamma_{\uparrow} = 0.999 \gamma_{\downarrow}$. Alice perfectly measures the attenuation channel ($\eta_{\downarrow,{\rm o}} = 1$, $\theta_{\downarrow,{\rm o}} = \theta_{\text{o}}$), and Bob perfectly measures the amplification channel ($\eta_{\uparrow,{\rm u}} = 1$, $\theta_{\uparrow,{\rm u}} = \theta_{\text{u}}$). 
We have set $\hbar =2$.} \label{Fig-SLA} \end{figure*} \begin{figure*} \includegraphics[width = \textwidth]{Fig4.pdf} \caption{Contour plots of (a) the measurement overlap \erf{Overlap_Meas}, (b) the {\color{nblack} unobserved overlap \erf{Overlap-u}, (c) the observed overlap \erf{Overlap-o}}, and (d) the RPR for the on-threshold OPO in the steady state for different values of the observed (Alice) and unobserved (Bob) homodyne phases. {\color{nblack} In this example, the ranges of the unobserved and observed homodyne phases are $\Theta_{\text{u}} = [-\pi/2,\pi/2)$ and $\Theta_{\text{o}} = [0,\pi)$, respectively.} In (a), we immediately see that the optimal measurement strategy according to hypothesis A (dashed black line) is very different from the optimal measurement strategy, obtained numerically, for the RPR [solid black line in (d)], indicating that it is incorrect. In (b), both the solution to \erf{guess-2} (dashed black line) and the unobserved overlap behave very differently from the optimal measurement strategy and the RPR, respectively, in (d). In contrast, in (c) the solution to \erf{guess-1} (dashed black line) gives a close approximation to the optimal measurement strategy. Furthermore, the squared overlap exhibits some of the characteristics of the RPR. In all plots, both Alice and Bob measure the same damping channel (with homodyne phases $\theta_{\text{o}}$ and $\theta_{\text{u}}$, respectively) but with $\eta_{\text{o}} = \eta_{\text{u}} = 0.5$. We have set $\hbar = 2$.} \label{Fig-OPO} \end{figure*} \subsection{Hypothesis A} The first and simplest guess at the optimal strategy would be for both Alice and Bob to gather information about the same quantity, e.g.,~both measuring the same quadrature. 
Since in the LGQ case the measurement matrices $C_{\text{o}}$ and $C_{\text{u}}$ provide information about how Alice and Bob measure the system, we can look at the overlap between Alice's and Bob's measurement matrices, \begin{equation}\label{Overlap_Meas} {\cal O}_{\rm m}^{\theta_{\text{u}}}(\theta_{\text{o}}) = \text{Tr}\left[C_{\text{o}}^{\theta_{\text{o}}}(C_{\text{u}}^{\theta_{\text{u}}})\!^{\top} C_{\text{u}}^{\theta_{\text{u}}} (C_{\text{o}}^{\theta_{\text{o}}})\!^{\top}\right]\,. \end{equation} {\color{nblack} Here, for simplicity, we have used the notation $\theta_{\text{o}}$ and $\theta_{\text{u}}$ to denote the parameters specifying Alice's and Bob's measurement matrices because in this paper we are restricting to homodyne measurements of a single channel so that only one angle is needed. For the fully general case, we would have to replace $\theta$ by the unraveling matrix $M$ as introduced in Sec.~\ref{Mintro}.} It is easiest to see why we call \erf{Overlap_Meas} an overlap function when Alice and Bob only have a single measurement channel at their disposal, like in the OPO example presented in Sec.~\ref{Sec-OPO}. In this case, {\color{nblack} $C_{\text{o}}$ and $C_{\text{u}}$ become vectors and \erf{Overlap_Meas} is exactly the square of their scalar product. This intuition also works for the} {\color{nblack} noisy linear attenuator} example where the only nonzero element in the resulting matrix corresponds to the squared overlap between Alice's measurement on her channel and Bob's measurement on his channel. 
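For single-channel homodyne measurements, $C_{\text{o}}$ and $C_{\text{u}}$ are row vectors, and \erf{Overlap_Meas} then reduces to the square of their scalar product. A short sketch (helper name ours):

```python
import numpy as np

def overlap(C_o, C_u):
    # O_m = Tr[C_o C_u^T C_u C_o^T], the measurement overlap
    return np.trace(C_o @ C_u.T @ C_u @ C_o.T)

theta_o, theta_u = np.pi / 4, -np.pi / 8
C_o = np.array([[np.cos(theta_o), np.sin(theta_o)]])   # 1x2 row vector
C_u = np.array([[np.cos(theta_u), np.sin(theta_u)]])   # 1x2 row vector

# For row vectors the overlap is the squared scalar product,
# and it is invariant under C -> -C.
val = overlap(C_o, C_u)
```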
{\color{nblack} Note that the square is important here, because there is no difference in the information obtained by a measurement with matrix $C$ and one with matrix $-C$, so the objective function ${\cal O}$ should be invariant under a sign change.} {\color{nblack} Thus, for hypothesis A, that Alice should obtain} information about the same quantity as Bob, {\color{nblack} she chooses her measurement by} maximizing the measurement overlap function \erf{Overlap_Meas} over the allowed range $\Theta_{\text{o}}$ of homodyne angles. That is, she should choose \begin{equation}\label{guess-1} {\color{nblack} \theta_{\text{o}}^\star}(\theta_{\text{u}}) = \arg\max_{\theta\in\Theta_{\text{o}}} {\cal O}_{\rm m}^{\theta_{\text{u}}}(\theta)\,, \end{equation} where we have written Alice's optimal phase ${\color{nblack} \theta_{\text{o}}^\star}(\theta_{\text{u}})$ as a function of Bob's homodyne phase. We note that in \erf{guess-1} it makes no difference whether we maximize over Alice's homodyne phase or Bob's, as the measurement overlap is identical if $C_{\text{o}}$ and $C_{\text{u}}$ are swapped. We test this intuition by considering the two physical systems presented in Sec.~\ref{sec-PS} in the steady state. For the {\color{nblack} noisy linear attenuator} system, \erf{guess-1} results in Alice and Bob measuring their respective channels with homodyne phases such that $\theta_{\text{o}} = -\theta_{\text{u}}$. The negative sign arises from the fact that Alice and Bob measure different types of channels; that is, Alice measures an attenuation channel with the Lindblad operator $\sqrt{\gamma_{\downarrow}}(\hat{q} + i\hat{p})$, and Bob measures the amplification channel with the Lindblad operator $\sqrt{\gamma_{\uparrow}}(\hat{q} - i\hat{p})$. 
Comparing the measurement overlap function in Fig.~\ref{Fig-SLA}(a) to the RPR in Fig.~\ref{Fig-SLA}(d), for all $\theta_{\text{o}} = \theta_{\downarrow}$ and $\theta_{\text{u}} = \theta_{\uparrow}$, we see that hypothesis A [dashed black line in (a)] matches perfectly with the optimal measurement strategy [solid white line in (d)] obtained by a numerical search. In fact, the measurement overlap function has a striking resemblance to the RPR for the {\color{nblack} noisy linear attenuator}. The {\color{nblack} noisy linear attenuator} is, however, a very simple system without any unitary dynamics, so we should not jump to any conclusions about hypothesis A's success in predicting the optimal measurement. We thus examine the on-threshold OPO system to see how well hypothesis A works. Based on \erf{guess-1}, the optimal measurement strategy for the OPO system is $\theta_{\text{o}} = \theta_{\text{u}}$. This is clearly incorrect, as we can see by comparing the measurement overlap function Fig.~\ref{Fig-OPO}(a) to the RPR in Fig.~\ref{Fig-OPO}(d). The numerically obtained optimal strategies [solid black lines in (d)] are drastically different from the hypothesis $\theta_{\text{o}} = \theta_{\text{u}}$ [dashed black lines in (a)]. Furthermore, the measurement overlap function does not resemble the RPR. Consequently, we have to come up with a more refined argument to explain the optimal strategy. \subsection{Hypothesis B} {\color{nblack} On reflection, it is perhaps not surprising that hypothesis A failed. Alice's ultimate goal is to guess Bob's state as well as possible. Why should that be achieved by trying to get the same type of information as Bob? Rather, it would seem, Alice should try to get information about how Bob's state changes in reaction to his measurement results, which are unknown to her. That is, it seems that a better hypothesis would take into account} the correlation between the measurement setups and the measurement back-action affecting the system. 
We can see how a measurement and its corresponding back-action affects the state by comparing the unconditioned equations, \erfs{LLE}{UncondV}, to the filtered equations, \erfs{qfm}{qVf}. Specifically, the effect of back-action is given by the kick matrix ${\cal K}^+_{\rm r}[V_{\past{\rm R}}]$, from which we define a mean-square kick tensor \begin{equation} B_{\rm r}^{\theta_{\rm r}} = {\cal K}^+_{\rm r}[V^{\theta_{\rm r}}_{\past{\rm R}}] {\cal K}^+_{\rm r}[V^{\theta_{\rm r}}_{\past{\rm R}}]^{\top}\,. \end{equation} Here the superscript $\theta_{\rm r}$ specifies the homodyne phase used to calculate {\color{nblack} the measurement matrix $C_{\rm r}$, the cross-correlation matrix $\Gamma_{\rm r}$, and the covariance matrix $V_{\past{\rm R}}$ (which all feed into ${\cal K}^+_{\rm r}[V_{\past{\rm R}}]$). The covariance matrix is conditioned on the past measurement record $\past{\rm R} = \past{\rm O},\past{\rm U}$, for ${\rm r} = {\rm o},{\rm u}$ respectively. Note that for ${\rm r} = {\rm u}$ we are considering the state conditioned only on Bob's records $\past{\color{nblack}\rm U}$, with a filtered covariance matrix $V^{\theta_{\text{u}}}_{\past{\color{nblack}\rm U}}$ satisfying } \begin{equation} \frac{{\rm d}V^{\theta_{\text{u}}}_{\past{\color{nblack}\rm U}}}{{\rm d}t} = AV^{\theta_{\text{u}}}_{\past{\color{nblack}\rm U}} + V^{\theta_{\text{u}}}_{\past{\color{nblack}\rm U}} A^{\top} \!+ D - {\cal K}_{\text{u}}^{+} [V^{\theta_{\text{u}}}_{\past{\color{nblack}\rm U}}] {\cal K}_{\text{u}}^{+} [V^{\theta_{\text{u}}}_{\past{\color{nblack}\rm U}}]^{\top}\,, \end{equation} similar to \erf{qVf}. As Alice is trying to estimate Bob's true state of the system, {\color{nblack} the obvious hypothesis is} that Alice should choose her measurement to observe the back-action (kick) Bob's measurement induces on the system. 
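As a concrete illustration, the filtered Riccati equation above can be integrated numerically to its steady state, and the mean-square kick tensor evaluated there. A minimal sketch follows; since the precise definition of the kick matrix ${\cal K}^+_{\rm r}$ is given in equations not reproduced in this excerpt, the sketch assumes, purely for illustration, a Kalman-Bucy-like form ${\cal K}^+[V] = VC^\top + \Gamma^\top$, and the toy matrices $A$, $D$, $C$, $\Gamma$ below are hypothetical.

```python
import numpy as np

def kick(V, C, Gamma):
    # Assumed Kalman-Bucy-like convention for the kick matrix K^+[V];
    # the paper's exact definition may differ in signs/normalization.
    return V @ C.T + Gamma.T

def steady_state_V(A, D, C, Gamma, dt=1e-3, steps=50_000):
    """Euler-integrate dV/dt = A V + V A^T + D - K K^T to its steady state."""
    V = np.eye(A.shape[0])
    for _ in range(steps):
        K = kick(V, C, Gamma)
        V = V + dt * (A @ V + V @ A.T + D - K @ K.T)
    return V

def mean_square_kick(V, C, Gamma):
    """B = K^+[V] K^+[V]^T, the mean-square kick tensor."""
    K = kick(V, C, Gamma)
    return K @ K.T

# toy damped mode monitored by homodyne detection at angle theta
theta = 0.3
A = -0.5 * np.eye(2)               # stable drift
D = np.eye(2)                      # diffusion
C = np.array([[np.cos(theta), np.sin(theta)]])
Gamma = np.zeros((1, 2))           # no measurement/noise cross-correlation
Vss = steady_state_V(A, D, C, Gamma)
B = mean_square_kick(Vss, C, Gamma)
```

At the fixed point the Riccati residual vanishes and $B$ is symmetric positive semidefinite, as a mean-square quantity should be.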
By choosing this measurement scheme, {\color{nblack} one would think that} Alice's measurement would contain the most relevant information about Bob's measurement {\color{nblack} results} and consequently provide a {\color{nblack} good} estimate {\color{nblack} of} the true state. With this in mind, we can construct another {\color{nblack} objective} function, the unobserved overlap function, \begin{equation}\label{Overlap-u}\color{nblack}{ {\cal O}^{\theta_{\text{u}}}_{\text{u}}(\theta) = \text{Tr}\left[C_{\text{o}}^{\theta} B^{\theta_{\text{u}}}_{\text{u}}(C_{\text{o}}^{\theta})\!^{\top}\right]\,,} \end{equation} where we have just replaced Bob's measurement matrix in \erf{Overlap_Meas} with his mean-square kick tensor. {\color{nblack} Thus our hypothesis B is that} Alice should choose her measurement in order to maximize the unobserved overlap, i.e., \begin{equation}\label{guess-2}{\color{nblack} {\color{nblack} \theta_{\text{o}}^\star}(\theta_{\text{u}}) = \arg\max_{\theta\in\Theta_{\text{o}}}{\cal O}_{\text{u}}^{\theta_{\text{u}}}(\theta)\,. } \end{equation} Unsurprisingly, when we consider the {\color{nblack} noisy linear attenuator} example, we see in Fig.~\ref{Fig-SLA}(b) that the maximum of the unobserved overlap (dashed black line) is obtained when Alice chooses her measurement angle such that $\theta_{\text{o}} = -\theta_{\text{u}}$. {\color{nblack} However,} the same cannot be said for the OPO system, as shown in Fig.~\ref{Fig-OPO}(b), where both the hypothesized optimal strategy \erf{guess-2} (dashed black line) and the unobserved overlap function {\color{nblack} bear} little resemblance to the optimal strategy and the RPR in Fig.~\ref{Fig-OPO}(d), respectively. \subsection{Hypothesis C} Even though hypothesis B {\color{nblack} also failed}, the construction is still useful. Specifically, we consider the same construction but with Alice and Bob swapped.
That is, we consider the counterintuitive hypothesis that it is best for Bob to observe {\color{nblack} as well as possible} the kick from Alice's measurement on the system. Consequently, we define the observed overlap function \begin{equation}\label{Overlap-o}{\color{nblack} {\cal O}_{\text{o}}^{\theta_{\text{o}}}(\theta) = \text{Tr}\left[C_{\text{u}}^{\theta} B_{\text{o}}^{\theta_{\text{o}}} (C^{\theta}_{\text{u}})\!^{\top}\right]\,,} \end{equation} {\color{nblack} where, compared to \erf{Overlap-u}, we have swapped the labels ${\rm o}$ and ${\rm u}$. With this overlap function defined, our {\color{nblack} third and} last hypothesis for the optimal unobserved homodyne phase is } \begin{equation}\label{guess-3}{\color{nblack} {\color{nblack} \theta_{\text{u}}^\star}(\theta_{\text{o}}) = \arg\max_{\theta\in\Theta_{\text{u}}} {\cal O}_{\text{o}}^{\theta_{\text{o}}}(\theta)\,,} \end{equation} {\color{nblack} where we have written Bob's optimal homodyne phase ${\color{nblack} \theta_{\text{u}}^\star}(\theta_{\text{o}})$ as a function of Alice's homodyne phase and $\Theta_{\text{u}}$ is the range of Bob's homodyne phase.} Once again, when we consider the {\color{nblack} noisy linear attenuator}, hypothesis C, \erf{guess-3}, still gives the correct optimal solution $\theta_{\text{o}} = -\theta_{\text{u}}$, as can be seen in Fig.~\ref{Fig-SLA}(c). {\color{nblack} And} this time when we consider the OPO system in Fig.~\ref{Fig-OPO}(c), we {\color{nblack} finally do} see remarkably good agreement between \erf{guess-3} (dashed black line) and the optimal measurement strategy [solid black line in Fig.~\ref{Fig-OPO}(d)].
Furthermore, the {\color{nblack} objective} function for hypothesis C {\color{nblack} is qualitatively similar} to the RPR, with the distinctive {\color{nblack} asymmetrical} peaks {\color{nblack} close to} $\theta_{\text{o}} = \pi/2$ in Fig.~\ref{Fig-OPO}(d) {\color{nblack} appearing also in (c).} The above results were for $\eta_{\text{o}} = \eta_{\text{u}}$, but we can also check that hypothesis C can {\color{nblack} reasonably well} predict the optimal measurement strategy for any values of the measurement efficiencies. We consider the OPO system, choosing two measurement phases for Alice ($\theta_{\text{o}} = \pi/8$ and $3\pi/8$), and compare the optimal measurement angle for Bob from the hypotheses and from numerics, for all possible observed measurement efficiencies $\eta_{\text{o}}$ with $\eta_{\text{u}} = 1-\eta_{\text{o}}$; see Fig.~\ref{Fig5}. Comparing the numerically optimal measurement strategy (solid black lines) to hypothesis C (dashed red lines), we observe, in both of Alice's measurement phases, that this hypothesis {\color{nblack} very well} captures the optimal measurement phases $\theta_{\text{u}}$ {\color{nblack} when} Alice's efficiency {\color{nblack} is low}. At higher efficiencies the agreement in optimal phases (see curves associated with the left axis) is not as perfect. However, when comparing the resulting RPR (curves for the right axis) we observe that the phases given by hypothesis C can still give an RPR extremely close to the maximum value. We can also see how well this approximately optimal solution does compared to another (suboptimal) measurement strategy, hypothesis A (the blue dotted lines), where, especially in the case that $\theta_{\text{o}} = 3\pi/8$, the differences in the RPR are much larger. While hypothesis C seems to provide a good approximation of the optimal strategy, it is not based on any simple physical intuition, unlike hypotheses A and B.
However, {\color{nblack} further evidence that its success here is not a fluke can be gained} by applying similar logic to a {\color{nblack} very} different type of quantum system, namely, a qubit. \begin{figure} \includegraphics[scale=0.355]{Fig5_var_eta} \caption{The hypothesized and optimal unobserved measurement phases (left-hand-side axis) and the RPR (right-hand-side axis) for the OPO system in the steady state with varying observed measurement efficiency $\eta_{\text{o}}$ ($\eta_{\text{u}} = 1-\eta_{\text{o}}$), for two fixed observed measurement phases (top: $\theta_{\text{o}} = \pi/8$, bottom: $\theta_{\text{o}} = 3\pi/8$). We consider two hypotheses of the optimal measurement strategy for Bob, hypothesis A, \erf{guess-1} (blue dotted line), and hypothesis C, \erf{guess-3} (red dashed line), comparing to the numerically obtained optimal strategy (black solid line). The results show that the strategy in \erf{guess-3} gives a very close approximation to the optimal RPR.} \label{Fig5} \end{figure} \subsection{Qubit Example} The single-qubit example we consider in this section is the same as that presented in Refs.~\cite{GueWis15,CGW19}. The qubit {\color{nblack} has Hamiltonian $\hat{H}_0 = \hbar\omega\s{z}$ and} is {\color{nblack} coherently} driven {\color{nblack} at frequency $\omega$} and is coupled to a bosonic bath. In a frame {\color{nblack} that removes $\hat{H}_0$}, the master equation for the qubit's unconditioned dynamics is given by \begin{equation}\label{qubit-ME} \hbar \dot\rho = i[(\Omega/2) \hat\sigma_x,\rho] + \gamma {\cal D}[\hat\sigma_-] \rho\,, \end{equation} where $(\Omega/2) \hat\sigma_x$ is the {\color{nblack} driving} Hamiltonian, and $\hat\sigma_- \equiv (\hat\sigma_x - i \hat\sigma_y)/2$ is the Lindblad operator. Here $\hat\sigma_k$ are the standard Pauli matrices. The system-bath coupling rate is denoted by $\gamma$. Alice and Bob could measure the bosonic bath in many different ways \cite{WisMil10}. 
In this work, we only consider homodyne measurements, as we did for the LGQ systems. {\color{nblack} The resulting homodyne photocurrent from monitoring the bath is} \begin{equation}\label{Photocurrent} y_{\rm r}{\rm d} t = \sqrt{\gamma\eta}\, C_{\rm r} \ex{ {\color{nblack} \hat{\bf r}}}_{\past{\rm R}}{\rm d} t + {\rm d} w_{\rm r}\,. \end{equation} Here, {\color{nblack} $\hat{\bf r}$ is the 3-vector of Pauli operators \begin{equation} \hat{\bf r} = \left( \s{x}, \s{y} , \s{z}\right) ^{\top}\,, \end{equation} whose mean is the Bloch vector, which represents the quantum state. In \erf{Photocurrent}, this mean is conditioned on the past record $\past {\rm R} = \past {\color{nblack}\rm O}, \past {\color{nblack}\rm U}$ corresponding to $\rm r = \rm o, \rm u$ respectively.} As before, $\eta$ is the measurement efficiency and the qubit analogue of the measurement matrix is \begin{equation} \label{defCqubit} C_{\rm r} = [\cos(\theta_{\rm r}), \sin(\theta_{\rm r}), 0] \end{equation} for this particular example. We will restrict our analysis to two cases for the measurement: $x$ homodyne and $y$ homodyne, i.e., {\color{nblack} $\theta_{\rm r} = 0$ and $\theta_{\rm r} = \pi/2$, respectively. These choices are the natural ones } given the symmetries of \erf{qubit-ME}. These are named {\color{nblack} $x$ and $y$ homodyne} because of the corresponding Pauli {\color{nblack} operator} appearing in the mean photocurrent signal, from \erf{defCqubit}. These {\color{nblack} two cases} best illuminate the effect of measurement choices on the {\color{nblack} relative average purity recovery} in the limit of large $\Omega$. Here we choose $\Omega = 5 \gamma$. We will also assume that Alice and Bob monitor this bath with equal measurement efficiencies, i.e., $\eta_{\text{o}} = \eta_{\text{u}} = 1/2$.
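The homodyne unraveling just described can be simulated with a simple Euler-Maruyama scheme for the conditioned qubit state. The sketch below uses the standard quantum-trajectory form of the innovation (back-action) term; the sign convention for the detector phase $\theta_{\rm r}$ is an assumption here, as the paper's conventions are not fully reproduced in this excerpt.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = (sx - 1j * sy) / 2                        # sigma_-

def lindblad(c, rho):
    return c @ rho @ c.conj().T - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c)

def homodyne_step(rho, dW, theta, Omega=5.0, gamma=1.0, eta=0.5, dt=1e-3):
    """One Euler-Maruyama step of the homodyne-conditioned qubit state."""
    H = 0.5 * Omega * sx                       # driving Hamiltonian (Omega/2) sigma_x
    c = np.sqrt(gamma) * np.exp(-1j * theta) * sm
    t = c @ rho + rho @ c.conj().T
    innovation = t - np.trace(t).real * rho    # measurement back-action term
    rho = rho + dt * (-1j * (H @ rho - rho @ H) + lindblad(c, rho)) \
              + np.sqrt(eta) * dW * innovation
    return rho / np.trace(rho).real            # guard against Euler trace drift

# a short x-homodyne (theta = 0) trajectory with its photocurrent increments
rng = np.random.default_rng(1)
dt, gamma, eta, theta = 1e-3, 1.0, 0.5, 0.0
rho = np.array([[1, 0], [0, 0]], dtype=complex)
for _ in range(2000):
    dW = np.sqrt(dt) * rng.standard_normal()
    y_dt = np.sqrt(gamma * eta) * np.trace(sx @ rho).real * dt + dW
    rho = homodyne_step(rho, dW, theta)
bloch = np.array([np.trace(p @ rho).real for p in (sx, sy, sz)])
```

The photocurrent increment `y_dt` mirrors \erf{Photocurrent}: for $\theta_{\rm r}=0$ the mean signal is proportional to $\ex{\s{x}}$, which is why this case is called $x$ homodyne.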
We follow the analysis of the qubit's {\color{nblack} relative average purity recovery} presented in Ref.~\cite{CGW19}, using numerical analyses, because there is no closed-form solution for the qubit case. By numerically generating a large ensemble of measurement records and qubit trajectories (including true states, filtered states, and smoothed states) as functions of time, we can calculate the purity recovery averaged over the observed records as in Eq.~\eqref{RAPR}. Since we are interested in the steady-state regime, we restrict attention to a time window of the simulation in which the qubit's dynamics can be studied independently of the transient effects at the start and end of the interval. Using the dephasing time defined as $T_\gamma = 1/\gamma$ and the final time $T = 8 T_\gamma$, we choose the steady-state period to be $\mathfrak{T}_{\rm ss} = [ 4.5 T_\gamma, 6 T_\gamma]$. We show in Fig.~\ref{Fig6}{\color{nblack} (d)} the $2\times 2$ table of the {\color{nblack} relative average purity recovery} averaged over the steady-state period quoted from Ref.~\cite{CGW19}, considering four options of Alice's ({\color{nblack}\rm O}) and Bob's ({\color{nblack}\rm U}) measurements. The combination with the best performance is when Alice and Bob measure the same quadrature, and the worst performance occurs when Alice measures the $y$ quadrature and Bob measures the $x$ quadrature. Thus we next ask whether hypothesis A, B, or C can correctly predict all features of the {\color{nblack} relative average purity recovery}. As we have already defined the measurement matrix for this qubit example, $C_{\rm r}$ in \erf{Photocurrent}, the measurement overlap and optimal measurement strategy for hypothesis A are as defined in \erf{Overlap_Meas} and \erf{guess-1}, respectively. As we are only considering two measurement possibilities for Alice and Bob, the maximization over the range of the observed homodyne phases can be replaced by maximizing over the set $\Theta_{\text{o}} = \{0,\pi/2\}$.
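The record-and-window averaging over the steady-state period $\mathfrak{T}_{\rm ss}$ described above (and used again for the qubit's mean-square kick tensor below) amounts to a simple Monte-Carlo estimator. Here is a minimal sketch; the Gaussian `increments` array, with a hypothetical covariance, is a synthetic stand-in for the simulated conditional-mean increments.

```python
import numpy as np

def record_average_outer(increments):
    """Average of the outer products d<r>(t) d<r>(t)^T over the time steps in
    the window and over records; `increments` has shape (records, steps, dims)."""
    d = np.asarray(increments)
    return np.einsum('rti,rtj->ij', d, d) / (d.shape[0] * d.shape[1])

# synthetic stand-in: zero-mean Gaussian increments with a known covariance
rng = np.random.default_rng(0)
dt = 1e-3
true_cov = np.diag([2.0, 1.0, 0.5]) * dt
L = np.linalg.cholesky(true_cov)
increments = rng.standard_normal((500, 200, 3)) @ L.T
B_hat = record_average_outer(increments)
```

With 500 records of 200 steps the estimator recovers the underlying covariance to within a fraction of a percent on the diagonal, illustrating why a few thousand records suffice for the tables below.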
Calculating the measurement overlap for {\color{nblack} the} four possible measurement combinations for Alice and Bob, we see, in Fig.~\ref{Fig6}(a), that the optimal measurement strategy, according to hypothesis A, occurs when Alice and Bob choose the same measurement. This is consistent with the greatest improvement in the average purity of the smoothed state, as seen in Fig.~\ref{Fig6}(d). However, in the cases where Alice and Bob choose different measurements, we see that the measurement overlap function suggests that there is no difference between these last two cases, which clearly is not true when we look at the {\color{nblack} relative average purity recovery}. Once again, hypothesis A is {\color{nblack} not very accurate.} To analyze hypotheses B and C for the qubit case, we need to define a quantity that resembles the {\rm mean-square kick tensor} of the LGQ system. The {\color{nblack} kick matrix} is defined in Eqs.~\eqref{qfm} and \eqref{truest} and describes the measurement back-action for an LGQ system in terms of the change in the system's expectation values in the $q$ and $p$ quadratures. Given a measurement setting $\rm r \in \{ \rm o, \rm u \}$ and its corresponding measurement record $\past{\color{nblack}\rm R} \in \{ \past {\color{nblack}\rm O}, \past {\color{nblack}\rm U} \}$, respectively, we can {\color{nblack} rewrite} the {\color{nblack} mean-square kick tensor} as \begin{align}\label{dir-Kick} {\color{nblack} B_{\rm r}}{\rm d} t = {\cal K}^{+}_{\rm r} [V_{\past{\color{nblack}\rm R}} ]{\cal K}^{+}_{\rm r} [V_{\past{\color{nblack}\rm R}} ]^{\top} {\rm d} t = \mathbb{E}_{\past{\color{nblack}\rm R}} \left [{\rm d}\ex{\hat\bx}_{\past{\color{nblack}\rm R}} {\rm d}{\ex{\hat\bx}^{\top}_{\past{\color{nblack}\rm R}}} \right].
\end{align} Here $\ex{\hat\bx}_{\past{\color{nblack}\rm R}}$ is the {\color{nblack} LGQ phase-space} mean conditioned on a realization of the (past) record $\past{\color{nblack}\rm R}$, and the expected average on the right-hand side of \erf{dir-Kick} is over all possible record realizations. The right-hand side is exactly the mean-square change (during an infinitesimal time ${\rm d}t$) of the system's expectation values, {\color{nblack} in a tensorial sense}, averaging over all the possible records. Therefore we can define an analogous quantity to the mean-square kick tensor for the qubit system as \begin{equation} \begin{split}\label{qbkick} {\color{nblack} B_{\rm r}}= \mathbb{E}_{\past{\color{nblack}\rm R}}\left\{ \frac{1}{|\mathfrak{T}_{\rm ss}|} \,\!\! \sum_{t \in \mathfrak{T}_{\rm ss}} \left[ {\rm d}\ex{{\color{nblack} \hat{\bf r}}}_{\past{\rm R}}(t) \, {\rm d}\ex{{\color{nblack} \hat{\bf r}}}_{\past{\rm R}}(t)^{\top} \right]\right\}\,, \end{split} \end{equation} for the steady-state period $\mathfrak{T}_{\rm ss}$ of length $|\mathfrak{T}_{\rm ss}|$. \begin{figure} \includegraphics[width=8.4cm]{Fig6_table} \caption{Analysis of hypotheses A, B, and C and the {\color{nblack} relative average purity recovery} (${\cal R}$) for the example of a driven {\color{nblack} qubit coupled dissipatively} to a bosonic bath. We {\color{nblack} restrict Alice and Bob to only two} measurement choices, either $x$ or $y$ homodyne. The numerical values in tables (a), (b), and (c) are the {\color{nblack} objective functions for the respective hypotheses. For B and C this required stochastic simulations, and we used} 3000 records each. The qubit's {\color{nblack} relative average purity recovery} [Table (d)] is obtained using the numerical techniques presented in Ref.~\cite{CGW19}, simulating $3000$ observed and $10000$ unobserved records for both measurement settings. Here the coloured cells indicate good (green), moderate (yellow), and bad (red) improvement.
{\color{nblack} Only hypothesis C [Table (c)] correctly predicts the pattern of the {\color{nblack} relative average purity recovery}.}} \label{Fig6} \end{figure} Now that we have defined the mean-square kick tensor for the qubit setting, we can formalize and analyze both hypothesis B and C. {\color{nblack} We will begin with hypothesis B, where the unobserved overlap and optimal measurement strategy are as defined in \erfs{Overlap-u}{guess-2}, where, as in hypothesis A, we maximize over the set $\Theta_{\text{o}} = \{0,\pi/2\}$.} As seen from the four possible measurement combinations for Alice and Bob in Fig.~\ref{Fig6}(b), the optimal measurement choice for Alice, according to \erf{guess-2}, occurs when Alice and Bob choose the same measurement, {\color{nblack} and best of all is when both choose homodyne measurements along the $y$ direction. This is consistent with the actual} {\color{nblack} relative average purity recovery}, as seen in Fig.~\ref{Fig6}(d). However, when we investigate the other measurement combinations, specifically when Alice and Bob choose different measurements, we see that the unobserved overlap function does not reproduce the {\color{nblack} pattern seen for the {\color{nblack} relative average purity recovery}. That is, it predicts that smoothing would be better if Alice chose $y$ and Bob $x$ rather than the other way around, whereas the truth is the opposite.} For hypothesis C, the roles of Alice and Bob are reversed compared to hypothesis B, and the optimal measurement for Bob is given by \erf{guess-3} {\color{nblack} and the observed overlap defined in \erf{Overlap-o}. 
As was the case for hypothesis B, we are restricting our analysis to two measurement choices for Alice and Bob, and the maximization is instead over the set $\Theta_{\text{u}} = \{0,\pi/2\}$.} {\color{nblack} For the four possible measurement choices for Alice and Bob}, shown in Fig.~\ref{Fig6}(c), {\color{nblack} the best combination is when both measure $y$ and the second best when both measure $x$, consistent with the {\color{nblack} relative average purity recovery}, Fig.~\ref{Fig6}(d), and the same as in hypothesis B. However, unlike for hypothesis B, this time the objective} function for the cases when Alice and Bob choose different measurements also {\color{nblack} matches} the {\color{nblack} relative average purity recovery}. {\color{nblack} This shows that hypothesis C is better at predicting when smoothing will work well than either hypothesis A or hypothesis B.} This is consistent with the results obtained for the LGQ systems. \section{Conclusion} In this paper we provided a detailed derivation of the smoothed quantum state for LGQ systems {\color{nblack} and contrasted it with the theory of the smoothed weak-value state.} To exemplify the differences between these techniques, we simulated a single trajectory and witnessed clear differences in the dynamics of the estimates by looking at the filtered, the SWV {\color{nblack} state}, and the smoothed quantum states for LGQ systems. {\color{nblack} As expected, the last of these provides the best estimate of the true state conditioned on the results of measurements on a channel unavailable to the observer, Alice, as well as on the results of Alice's measurements.} {\color{nblack} A key question of interest is how much improvement smoothing can offer relative to filtering and how this depends on the measurement choices of Alice and Bob (the observer of the channel unavailable to Alice). We studied this through the purity recovery of smoothing over filtering relative to the maximum possible purity recovery. 
We constructed three different hypotheses about what properties of Alice and Bob's measurements would lead to higher relative purity recovery. We found that the only hypothesis that worked, qualitatively, for the two LGQ systems we studied is the most counterintuitive of the three. It is the hypothesis that says Bob should choose his measurement so that his signal tells him as much as possible about the {\em disturbance} to the state caused by Alice's measurements. This is counterintuitive because one would have thought that it is Alice, the one doing the smoothing, who needs to be able to infer as accurately as possible the disturbance to the state caused by Bob's measurement. After all, it is the existence of this disturbance that makes Alice's filtered state impure and allows the possibility of increasing the purity by smoothing.} {\color{nblack} The qualitative success of our third hypothesis is the main result of this paper. However, it presents a puzzle because it is not grounded in physical intuition. For this reason we also put our three hypotheses to the test on a very different system, specifically, a qubit system, not an LGQ system. We formulated the problem in a closely analogous way to that used for LGQ systems and found that, once again, our third hypothesis was clearly superior to the other two in predicting which combinations of measurements by Alice and Bob would give better relative purity recovery than the other combinations. It can be hoped that further study will elucidate why it is preferable for Bob to measure the system so as to detect the `kick' to the state by Alice's measurement, rather than the converse. Another interesting question is what would happen to the smoothed state if Alice were to assume the incorrect type of measurement for Bob.} {\color{nblack} Could the smoothed state be a {\em worse} estimate of the true state than the filtered state?
The LGQ formalism offers a convenient way to explore this because of the possibility of semianalytic solutions.} There is also a great deal of work to be done in comparing the various other ways of utilising past and future measurement information, such as the most likely path formalism \cite{Cha13, Web14}, and in applying these theories to the LGQ scenario. \begin{acknowledgments} We would like to thank Prahlad Warszawski for useful discussions regarding the retrofiltered effect. We acknowledge the traditional owners of the land on which this work was undertaken at Griffith University, the Yuggera people. This research is funded by the Australian Research Council Centre of Excellence Program through Grant No. CE170100012. A.C.~acknowledges the support of the Griffith University Postdoctoral Fellowship scheme. \end{acknowledgments}
\section{Introduction} Fix $n\geq 1$ and let $(d_1,\ldots,d_n)$ be a sequence of positive integers that may depend on $n$. Consider a graph with $n$ vertices and degrees $(d_1,\ldots,d_n)$ generated by the configuration model, that is, equip each vertex $i\in\{1,\ldots,n\}$ with $d_i$ half-edges, and pair half-edges uniformly at random to create edges. For all half-edges to find a partner we assume that the total degree $\sum d_i$ is even. Assign independently to each edge $e$ in the resulting graph two independent exponentially distributed passage times $X_1(e)$ and $X_2(e)$ with parameter $\lambda_1$ and $\lambda_2$, respectively. At time 0, two uniformly chosen vertices are infected with infections type 1 and type 2, respectively, and the infections then spread via nearest neighbors: When a vertex becomes type 1 (2) infected, the time that it takes for the infection to traverse an edge $e$ emanating from the vertex is given by $X_1(e)$ ($X_2(e)$). If the other end point of the edge $e$ is still uninfected at that time, it becomes type 1 (2) infected and remains so forever. It also becomes immune to the other infection type. In this paper we study the above competing growth process on a random graph generated from a given degree sequence subject to the regularity conditions stated below. These conditions ensure that the graph contains a giant component occupying all but a vanishing fraction of the vertices as $n\to\infty$, and hence that almost all vertices will \whp{} be infected when the process terminates. The question that we will be interested in is the outcome of this competition. Specifically, will both types occupy a strictly positive fraction of the vertices in the limit as $n\to\infty$? We show that the answer is yes if and only if $\lambda_1=\lambda_2$. 
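The half-edge pairing just described can be sketched in a few lines (a minimal illustration of the configuration model itself, not of the exploration machinery used later in the proofs):

```python
import random

def configuration_model(degrees, seed=0):
    """Pair half-edges uniformly at random; returns a multigraph edge list.
    Self-loops and multiple edges are possible, as noted in the text."""
    assert sum(degrees) % 2 == 0, "total degree must be even"
    rng = random.Random(seed)
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)                       # a uniform perfect matching of stubs
    return list(zip(stubs[::2], stubs[1::2]))
```

Shuffling the stub list and pairing consecutive entries produces a uniformly random perfect matching of the half-edges, which is exactly the configuration-model construction.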
This question has previously been studied for the configuration model with constant degrees~\cite{regular} and infinite variance degrees~\cite{winner}; see the end of this section for a summary of earlier work. Given a degree sequence $(d_1^{\sss(n)},d_2^{\sss(n)},\ldots,d_n^{\sss(n)})$ with $\sum d_i^{\sss(n)}$ even, write $D_n$ for the degree of a vertex chosen uniformly at random, so that $$ \P(D_n=k)=\#\{i:d_i=k\}/n. $$ Our assumptions on the (sequence of) degree sequences are the following: \begin{itemize} \item[(A1)] $(D_n)_{n\ge1}$ converges in distribution to a random variable $D$ with $\E[D^2]<\infty$, and $$ \E[D_n^2]\to\E[D^2]; $$ \item[(A2)] $d_i\geq 2$ for all $i$, and $\P(D>2)>0$. \end{itemize} Assumption (A1) could equivalently be formulated as the sequence of empirical distributions being uniformly square integrable and converging to a probability distribution $(p_d)_{d\in\mathbb{N}}$ on the positive integers. One standard example in which (A1) is satisfied is when $(d_1, d_2,\ldots, d_n)$ are independent realizations of a random variable $D$ with finite variance. By increasing a randomly chosen degree by 1, if necessary, we can make sure that the total degree is even. If we condition on the sequence $(D_i)_{i=1}^n$ and assume that $\E[D^2]<\infty$, $\P(D\ge2)=1$ and $\P(D>2)>0$, then (A1) and (A2) hold \whp{} and thus our results, as stated below, apply. A graph generated by the configuration model may contain self-loops and multiple edges, but the assumption (A1) implies that the probability of obtaining a simple graph is bounded away from 0 as $n\to\infty$; see \cite{AngHofHol16,simpleI,simpleII}. Furthermore, it is well-known that conditioning on the resulting graph being simple yields a uniform sample among simple graphs with the specified degree sequence; see \cite[Chapter 7]{Remco_book}. Hence our results apply also for such a uniformly chosen simple graph. Let $D^*$ be a size biased version of $D$, that is, $\P(D^*=d)=d\P(D=d)/\E[D]$. 
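For a finitely supported degree distribution, the size-biased variable $D^*$ just defined, and the mean $\E[D^*-1]$ that governs the branching-process approximation discussed below, can be computed exactly; a small sketch using rationals to avoid rounding:

```python
from fractions import Fraction

def size_biased(pmf):
    """P(D* = d) = d P(D = d) / E[D] for a pmf given as {d: probability}."""
    mean = sum(d * p for d, p in pmf.items())
    return {d: d * p / mean for d, p in pmf.items()}

def offspring_mean(pmf):
    """E[D* - 1]: the mean offspring number in the branching process
    approximating the exploration of the graph."""
    return sum((d - 1) * p for d, p in size_biased(pmf).items())

# example: half the vertices have degree 2, half degree 3
pmf = {2: Fraction(1, 2), 3: Fraction(1, 2)}
```

For this example $\E[D]=5/2$, the size-biased weights are $2/5$ and $3/5$, and $\E[D^*-1]=8/5>1$, consistent with assumptions (A1)-(A2) producing a giant component.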
The threshold for the occurrence of a (unique) giant component in the graph is given by $\E[D^*-1]=1$; see \cite{SvanteMalwina,MR-95}. This can be seen by exploring the components in the graph via nearest neighbors, starting from a uniformly chosen vertex. As $n\to\infty$, this exploration can be approximated by a branching process and, by construction of the graph, the offspring distribution of explored vertices in the second and later generations is given by $D^*-1$. The relative size of the giant component is given by the survival probability in the approximating branching process; see \cite{SvanteMalwina,MR-98}. Condition (A2) above implies that the survival probability is 1, so that the asymptotic fraction of vertices in the giant component is 1. Now consider the competition process described above. Write $N_i(n)$ for the total number of type $i$ infected vertices when the process terminates, and $\bar{N}_i(n)=N_i(n)/n$ for the corresponding fraction. Note that, since the giant component spans all but a vanishing fraction of the vertices, we have that $\bar{N}_1(n)+\bar{N}_2(n)\pto 1$, and it is therefore enough to consider $\bar{N}_1(n)$. Furthermore, by symmetry, we may assume that $\lambda_1\leq \lambda_2$. The following is our main result. \begin{theorem}\label{th:main} Assume that the degree sequence satisfies (A1) and (A2). \begin{itemize} \item[{\rm{(a)}}] If $\lambda_1=\lambda_2$, then $\bar{N}_1(n)\dto V$, where $V$ is a continuous random variable with a strictly positive density on $(0,1)$. \item[{\rm{(b)}}] If $\lambda_1<\lambda_2$, then $\bar{N}_1(n)\dto 0$. \end{itemize} \end{theorem} \begin{remark} Starting with two given infected vertices, e.g.\ vertices 1 and 2, or several infected vertices of each type (fixed in number as $n\to\infty$) gives the same results, except that the distribution of the limiting fraction $V$ will depend on the degrees of the initially infected vertices. 
Moreover, the theorem extends to a fixed number of competing types larger than two, in which case all types of maximal strength each conquer a positive fraction of the vertices. \end{remark} \begin{remark} The assumption $d_i\geq 2$ ensures that the giant component comprises almost all vertices. Weakening this condition to $\E[D^*-1]>1$ gives a graph where the giant component may contain a smaller fraction of the vertices. The competition process can be analyzed also on such a graph and the non-trivial case then arises when both initial vertices belong to the giant component. We believe that our methods apply also in this case, but it would require dealing with a conditioning on both initial vertices being in the giant component. Establishing a version of Theorem \ref{th:main} in that case would make it applicable also for e.g.\ the Erd\H{o}s--R\'enyi graph and the generalized random graph analyzed in \cite{BDM-L}. These models give simple graphs with random degrees and, conditionally on the degrees, the graph is uniform on the set of all simple graphs with those given degrees. \end{remark} \textbf{Outline of the proof} In the proof below we establish that there is an initial phase where the outcome of the competition is determined, followed by a phase that lasts until close to the end, and where the fractions of the two types are essentially constant. An important tool in the proof is a standard technique for exploring the graph and the evolution of the infections simultaneously. A vertex is detected when it is reached by the infection and the half-edges attached to the vertex are then declared active, of type 1 or 2 depending on the type of the vertex. A half-edge remains active until it is opened for infection. A partner half-edge is then chosen and, if the vertex of this half-edge is still uninfected at that time, this leads to infection transfer and activation of new half-edges.
The process can be defined in continuous time or in discrete steps by observing it only at the time points when an edge is opened; see Section 2 for a more detailed description. Write $S^{\sss (i)}_k$ for the number of active type $i$ half-edges after $k$ steps in this process and $S_k=S^{\sss (1)}_k+S^{\sss (2)}_k$ for the total number of active half-edges. Define $M_k$ to be the fraction of active type 1 half-edges among all active half-edges; more precisely, \begin{equation}\label{eq:Mk} M_k:=\begin{cases} \frac{S^{\sss (1)}_k}{S_k} & \text{if } S_k>0;\\ M_{k-1} & \text{if }S_k=0. \end{cases} \end{equation} In a key step we show that, if $\lambda_1=\lambda_2$, then $M_k$ is a martingale. We then give an estimate of its quadratic variation which implies that $M_k$ is essentially constant for $k\geq \nu_n$ for any sequence of integers $\nu_n\to\infty$. The probability that a newly infected vertex is infected by type 1 is hence roughly constant for $k\geq \nu_n$ and equal to $M_{\nu_n}$. The initial stages of the competition, on the other hand, can be approximated by a branching process and asymptotic results on branching processes imply that $M_{\nu_n}$ converges in distribution to a continuous random variable $V\in(0,1)$ if $\lambda_1=\lambda_2$, and to 0 if $\lambda_1<\lambda_2$. This yields Theorem \ref{th:main}(a). The proof of Theorem \ref{th:main}(b) is completed by letting the weaker type 1 infection spread with the same larger intensity $\lambda_2$ as the type 2 infection for $k\geq\nu_n$. The fraction of type 1 vertices among infected vertices for $k\geq \nu_n$ in such a process is close to 0 by the above results, and the type 1 infection clearly captures even fewer vertices in the original process. The rest of the paper is organized as follows: the exploration process is described in more detail in Section 2, along with the initial branching process approximation.
The results on $M_k$, specifying the evolution of the infections during the main phase, are then given in Section 3. Theorem \ref{th:main} is proved in Section 4. Finally, some directions for future work are described in Section 5.\medskip \textbf{Previous work} Competition on the configuration model has previously been studied in the case when the degree distribution follows a power-law with exponent $\tau\in(2,3)$, that is, when the mean degree is finite, but the variance infinite. In that case one of the types occupies all but a finite number of vertices as $n\to\infty$, and both types have a positive probability of winning, regardless of the values of the intensities; see \cite{winner}. The process has also been studied on random regular graphs generated by the configuration model with constant degree; see \cite{regular}. Our results generalize the results in \cite{regular} when the competition starts from fixed initial sets. However, the results in \cite{regular} also cover the case with growing initial sets, and give precise quantifications of the asymptotic number of vertices of each type. In the present work, as well as in \cite{regular,winner}, the passage times are assumed to be exponential. The model can of course be defined analogously for passage times with arbitrary distributions. It has been analyzed in \cite{fixspeedI,fixspeedII} for configuration graphs with power-law exponent $\tau\in(2,3)$ and constant passage times, so that all randomness comes from the underlying graph. When the types have different speeds, the faster type occupies all but a vanishing fraction of the vertices, while when the speeds are the same, the types may or may not occupy positive fractions depending on the specific choice of the two initial vertices. A slightly different competition process with constant passage times is analyzed in \cite{Cooper}, and the present competition process is analyzed on preferential attachment graphs in \cite{prefatt}.
Finally, we mention that competing first passage percolation with exponential passage times has previously been studied on $\mathbb{Z}^d$. In that setting coexistence may occur for equal strength competitors, whereas the case of unequal strength remains to be fully resolved; see \cite{2tRich} for a survey and references. \section{The initial phase} In this section we first define the exploration of the graph and the flow of infection in more detail. We then describe a branching process approximation of the number of active half-edges of the two types in the early stages of the growth. This leads to a characterization of the limiting behavior of a continuous time version of $M_k$ (defined in \eqref{eq:Mk}) at the end of the initial phase. \medskip \textbf{The exploration process} To describe the exploration process, fix $\lambda_1,\lambda_2>0$, possibly different. At time 0 we start with the vertices and the attached half-edges. The pairing of the half-edges however is hidden and is revealed during the process. Each half-edge is throughout the process classified as either \emph{free} or \emph{paired}, and a free half-edge is in turn labeled as \emph{active} of either type 1 or 2, or \emph{inactive}. The initial set of active type $i$ half-edges consists of the half-edges attached to the uniformly chosen initial type $i$ vertex, while all other half-edges are inactive. Since the initially infected vertices are chosen randomly, the initial numbers $a_1$ and $a_2$ of active type 1 and type 2 half-edges, respectively, are random. However, we condition on them in the sequel, and hence assume that they are given numbers. The sets of half-edges are now updated inductively in continuous time as follows, with $\cSi_t$ denoting the number of active half-edges of type $i$ at time $t$. 
\begin{enumerate} \addtolength{\leftmargini}{-5pt} \renewcommand{\labelenumi}{\textup{(P\arabic{enumi})}} \renewcommand{\theenumi}{\labelenumi} \item \label{sj1} Each active half-edge of type $i=1,2$ infects with intensity $\gl_i$, that is, it is equipped with an exponential clock with intensity $\gl_i$, and infects when the clock rings. When a half-edge $q$ infects, it picks a partner $r$ uniformly at random from all free half-edges distinct from $q$. Let $x$ and $y$ be the vertices that $q$ and $r$, respectively, are attached to. Then $q$ and $r$ go from free to paired and form an edge $xy$. \item \label{sjold} If $y$ is already infected, nothing more happens. In this case, $r$ was also active (of the same type as $q$ or not), and the number of active half-edges decreases by 2. \item \label{sjnew} If $y$ is not infected, it becomes infected by the same type as $x$, and all remaining half-edges at $y$ become active of this type. This means that, if $q$ has type $i$ and $y$ has degree $d_y$, then the number $\cSi_t$ of active half-edges of type $i$ increases by $d_y-2$, while the number of active half-edges of the other type does not change. \end{enumerate} A discrete version of the process is obtained by observing the continuous time process at the times half-edges are paired. In each step $k$ of the discrete time process, an active half-edge $q$ is chosen at random, with probability proportional to $\gl_i$ where $i$ is its type. The chosen half-edge infects as in \ref{sj1}--\ref{sjnew} above. In both cases, if there are no remaining active half-edges, the infections have stopped, but it still remains to complete the graph. We then join any two uniformly chosen half-edges, that is, we choose a uniform matching of the remaining half-edges. The number of active type $i$ half-edges after $k$ steps is denoted by $\Si_k$.
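As an illustration (not part of the argument), the discrete-time dynamics \ref{sj1}--\ref{sjnew} can be simulated directly. In the following Python sketch the function name \texttt{compete} and all identifiers are our own, and the degree sequence and intensities are assumed inputs:

```python
import random

def compete(degrees, lam1, lam2, rng=random.Random(1)):
    """Discrete-time competition on a configuration model with the
    given degree sequence; returns the final numbers of type 1 and
    type 2 vertices.  A sketch for illustration only."""
    n = len(degrees)
    vtype = [0] * n                       # 0 = uninfected, else 1 or 2
    # one entry per half-edge, recording the vertex it is attached to
    free = [v for v, d in enumerate(degrees) for _ in range(d)]
    v1, v2 = rng.sample(range(n), 2)      # uniformly chosen initial vertices
    vtype[v1], vtype[v2] = 1, 2
    lam = {1: lam1, 2: lam2}
    while True:
        # active half-edges are the free half-edges at infected vertices
        active = [i for i, v in enumerate(free) if vtype[v]]
        if not active or len(free) < 2:
            break
        # (P1): the next infecting half-edge is chosen with probability
        # proportional to its intensity; its partner is uniform among
        # the other free half-edges
        weights = [lam[vtype[free[i]]] for i in active]
        q = rng.choices(active, weights=weights)[0]
        r = rng.choice([i for i in range(len(free)) if i != q])
        x, y = free[q], free[r]
        for i in sorted((q, r), reverse=True):    # q and r become paired
            free.pop(i)
        if vtype[y] == 0:                 # (P3): y is infected by x's type
            vtype[y] = vtype[x]
        # (P2): otherwise nothing more happens
    return vtype.count(1), vtype.count(2)
```

For example, on a $3$-regular degree sequence with $\lambda_1=\lambda_2$, both returned counts are typically of comparable size, in line with Theorem \ref{th:main}(a).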
Throughout, quantities related to discrete time processes will be denoted by standard roman letters, while quantities related to processes in continuous time will be denoted by calligraphic letters. For instance, $\cM_t$ denotes the continuous time version of $M_k$, defined in \eqref{eq:Mk}, that is, $\cM_t=\cSe_t/(\cSe_t+\cSt_t)$.\medskip \textbf{Branching process approximation} We now describe how the early evolution of $\cSi_t$ ($i=1,2$) can be coupled with two independent branching processes. Stronger results in this direction have been obtained in \cite{one_fpp}. However, we only need the coupling up to some time $t_n\to\infty$ (without further restrictions on $t_n$). This is fairly easy to establish and we therefore describe it here. Our aim is to prove the following result on the fraction of active type 1 half-edges in the initial phase. \begin{prop}\label{prop:initial} There exists a deterministic sequence of integers $t_n\to \infty$ such that $\cM_{t_n}\dto V$ as $n\to\infty$, where $V$ is a continuous random variable with strictly positive density on $(0,1)$ if $\lambda_1=\lambda_2$ and $V\equiv 0$ if $\lambda_1<\lambda_2$. \end{prop} We now consider the initial phase of the continuous time exploration process when $t$ is so small that rather few vertices have been infected. First consider the general case with $\gl_1,\gl_2>0$, possibly different, and the process described by \ref{sj1}--\ref{sjnew} above. In order to study the initial phase, we introduce the corresponding process where half-edges in \ref{sj1} are drawn with replacement, that is, the half-edge $r$ is chosen uniformly at random from the set of \emph{all} half-edges, independently of previous picks. In this version we do not have to keep track of the actual sets of active half-edges, only their numbers, which we denote by $\cBe_t$ and $\cBt_t$.
Moreover, we pretend that the chosen half-edge and its vertex have not been used before, so we ignore \ref{sjold} and always update $\cBe$ and $\cBt$ as in \ref{sjnew}. This means that $\cBe$ and $\cBt$ are two independent continuous time Markov branching processes with intensities $\gl_1$ and $\gl_2$, respectively, and the same offspring distribution $D_n^*-1$, where $D_n^*$ is the size-biased distribution corresponding to the empirical distribution $D_n$, that is, $\P(D_n^*=d):=d\P(D_n=d)/\E[D_n]$. Of course, we take $\cBi_0=a_i$. Furthermore, let $\cBhi_t$ be a branching process defined in the same way as $\cBi_t$ but with the offspring distribution changed to $D^*-1$. Thus $\cBhi_t$, unlike $\cSi_t$ and $\cBi_t$, does not depend on $n$. Since $\E [D^*-1]=\E[D(D-1)]/\E[D]<\infty$, there is no explosion, and $\cBhi_t$ is a.s.\ finite for all $t$. Specifically, for every fixed $T<\infty$, the process $\cBhi$ has a.s.\ only a finite number of births (infections) in $[0,T]$. Moreover, since $D_n\dto D$ and $\E[D_n]\to\E [D]<\infty$, we have that $D_n^*\dto D^*$. It follows that, for every fixed $T<\infty$, we can couple $\cBi$ and $\cBhi$ such that they agree with probability $1-o(1)$ each time an individual gets offspring at a time $t\leq T$, that is, \whp{} $\cBi_t=\cBhi_t$ for all $t\le T$. Now return to the actual exploration process. We can obtain it from the version with replacement by accepting a selected half-edge $r$ if it is free, and otherwise resampling. Moreover, we also check if the accepted half-edge is already active, and then we apply \ref{sjold} instead of \ref{sjnew}. During a fixed time interval $[0,T]$, the process $\cBhi_t$ has a.s.\ only finitely many births and thus, since $\cBi_t=\cBhi_t$ \whp{} on this interval, the number of births in $[0,T]$ for $\cBi_t$ is $O_p(1)$. Furthermore, the number of half-edges that are paired in $[0,T]$ is $O_p(1)$, and so is the number of half-edges that are declared active in $[0,T]$.
Hence, at each of the $O_p(1)$ births in $[0,T]$, the probability that a paired or active half-edge is picked in the process $\cBi_t$ is $o(1)$. Consequently, \whp, only free inactive half-edges are selected in $\cBi_t$ for $t\le T$ and the process then agrees completely with $\cSi_t$ for $t\le T$. We have shown that the processes $\cSi_t$ and $\cBhi_t$ can be coupled (for $i=1,2$ simultaneously) such that, for every fixed $T$, \whp{} $\cSi_t=\cBhi_t$ for $t\le T$. Let $$ \tau_n:=\inf\bigset{t\ge0:\cSi_t\neq\cBhi_t \text{ for some $i\in\set{1,2}$}}. $$ It follows that $\P(\tau_n\le T)\to0$ for every fixed $T$, that is, $\tau_n\pto\infty$. This implies that there is a deterministic sequence $t_n\to\infty$ such that $\P(\tau_n\le t_n)\to0$. In other words, \whp{} \begin{equation}\label{x=z} \cSe_t=\cBhe_t \quad\text{and}\quad \cSt_t=\cBht_t\quad \text{for } t\le t_n. \end{equation} Fix such a sequence $t_n\to\infty$ where, for later use, we pick the sequence such that each $t_n$ is an integer. For the proof of Theorem \ref{th:main}, it will be useful to adjust the sequence slightly to ensure that the number of vertices that have been infected at time $t_n$ is small. Thus, let $\cN_t$ be the number of edges identified in the exploration process at time $t$; this equals the number of times that \ref{sj1} has been performed. Also let $\cNh_t$ be the analogous quantity for the process $\cBhe_t\cup\cBht_t$. With the coupling above, we have $\cN_t=\cNh_t$ for $t<\tau_n$, and hence \whp{} $\cN_{t_n}=\cNh_{t_n}$. We may assume, by decreasing $t_n$ if necessary, that $\cNh_{t_n}\le n\qqq$ \whpx. We also define a related sequence of integers $\nu_n$ such that, in the discrete time exploration process, the branching process approximation remains valid beyond step $\nu_n$. To do this, note that $\cNh_{t_n}\asto\infty$ as \ntoo, since $t_n\to\infty$.
Hence, $\cNh_{t_n}\pto\infty$ and $\cN_{t_n}\pto\infty$, and thus there exists a deterministic sequence $\nu_n$ of integers such that $\nu_n\to\infty$ and \whp{} \begin{equation}\label{nun} n\qqq\ge\cNh_{t_n}= \cN_{t_n}\ge \nu_n. \end{equation} Finally note that, by our assumptions, $D_n^*\ge2$ and thus $D^*\ge 2$ so that $D^*-1\ge1$. This means that the branching processes $\cBhi_t$ never decrease. In particular, they never become extinct, and therefore $\cBhi_t\to\infty$ a.s.\ as \ttoo. With the above coupling at hand we can prove Proposition \ref{prop:initial}. \begin{proof}[Proof of Proposition \ref{prop:initial}] Suppose first that $\gl_1=\gl_2$. The branching processes $\cBhe_t$ and $\cBht_t$ are independent and have the same offspring distribution, but possibly different initial values $a_1$ and $a_2$. If we restrict to integer values of $t$, we obtain two independent Galton--Watson processes $\cBhe_k$ and $\cBht_k$ with the same offspring distribution. Moreover, this offspring distribution has a finite mean $m>1$, since, by assumption, $\E[D^2]<\infty$ and thus $\E[D^*]<\infty$ (in fact we have $m=e^{\gl(\E[D^*-1]-1)}$, where $\gl:=\gl_1=\gl_2$). By the Seneta--Heyde theorem \cite{Heyde} (see also \cite[Theorem I.10.3]{AthreyaNey}) there exists a deterministic sequence $c_k$ such that $\cBhi_k/c_k\to W_i$ a.s., where $W_i\in(0,\infty)$ is a random variable, and thus $$ \frac{\cBhe_k}{\cBhe_k+\cBht_k} \asto V $$ for some random variable $V\in(0,1)$. By \cite[Theorem II.5.2]{AthreyaNey} and the subsequent remark, the variable $W_i$ ($i=1,2$) is continuous with strictly positive density on $(0,\infty)$ and hence $V$ is continuous with strictly positive density on $(0,1)$. Since $t_n\to\infty$, and we have assumed that $t_n\in\bbN$, it follows that \begin{equation}\label{Szlim} \frac{\cBhe_{t_n}}{\cBhe_{t_n}+\cBht_{t_n}} \asto V \in(0,1) \end{equation} as \ntoo.
Alternatively, we can use the continuous-time version of the Seneta--Heyde theorem by Cohn \cite{Cohn} to directly arrive at~\eqref{Szlim}. Since $\cSi_{t_n}=\cBhi_{t_n}$ \whp\, by \eqref{x=z}, it follows from \eqref{Szlim} that \begin{equation}\label{Sxlim} \cM_{t_n}= \frac{\cSe_{t_n}}{\cSe_{t_n}+\cSt_{t_n}} \dto V\in(0,1), \end{equation} and the first part of Proposition \ref{prop:initial} is proved. Now suppose that $\gl_1<\gl_2$. By time-scaling we may assume that $\gl_1=1$ and $\gl_2=\gl>1$. Then $\cBhe_{\gl t}$ and $\cBht_t$ are two independent continuous time branching processes, with the same intensity and the same offspring distribution (with finite mean). Hence, as in the case with equal intensities, there exist $c_k$ such that a.s.\ \begin{align} \cBhe_{\gl k}/c_k&\to W_1 \label{ax1} \\ \cBht_k/c_k&\to W_2, \label{ax2} \end{align} where $W_1$ and $W_2$ are random variables with $W_i\in(0,\infty)$ a.s. Furthermore, $c_{k+1}/c_k\to m>1$. For any fixed $j\ge0$, we have for large enough $k$ that $\gl k\ge k+j$, and thus $\cBhe_{k+j}\le\cBhe_{\gl k}$. Hence, by \eqref{ax1}, a.s. $$ \limsup_\ktoo\frac{\cBhe_k}{c_k} = \limsup_\ktoo\frac{\cBhe_{k+j}}{c_{k+j}} \le \limsup_\ktoo\frac{\cBhe_{\gl k}}{c_k}\cdot\frac{c_k}{c_{k+j}} =W_1 m^{-j}. $$ Since $W_1<\infty$, $m>1$ and $j\ge0$ is arbitrary, it follows that $\limsup_\ktoo\cBhe_k/c_k=0$ a.s.\ and thus, recalling from \eqref{ax2} that $\cBht_k/c_k\to W_2>0$, that $\cBhe_k/\cBht_k\asto 0$. Hence, \eqref{Szlim} and \eqref{Sxlim} hold with $V\equiv 0$. \end{proof} \section{The deterministic phase} In this section we show that the fraction $M_k$ of active type 1 half-edges among all active half-edges remains roughly constant after the initial phase in the exploration process for equal intensities. At the very end of the process, when most half-edges have already been paired, this might fail, but we show that the fraction is indeed constant during the main part of the process.
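Before going into the details, the phenomenon is easy to observe numerically: for two independent Galton--Watson processes with a common supercritical offspring law (playing the role of $\cBhe$ and $\cBht$ restricted to integer times), the fraction of type 1 individuals settles down after a few generations. The following Python sketch is purely illustrative; the offspring distribution used, $D^*-1$ for $D$ uniform on $\{2,3,4\}$, is our own choice, and all names are ours:

```python
import random

def gw_step(z, rng, probs=((1, 2 / 9), (2, 3 / 9), (3, 4 / 9))):
    """One generation of a Galton-Watson process with offspring law
    D* - 1, where D is uniform on {2, 3, 4} (illustrative choice);
    size-biasing gives P(D* = d) proportional to d."""
    vals, weights = zip(*probs)
    return sum(rng.choices(vals, weights=weights, k=z))

def fraction_path(a1, a2, gens, seed=0):
    """Track the fraction of type 1 individuals over `gens` generations,
    starting from a1 and a2 individuals of the two types."""
    rng = random.Random(seed)
    z1, z2, path = a1, a2, []
    for _ in range(gens):
        z1, z2 = gw_step(z1, rng), gw_step(z2, rng)
        path.append(z1 / (z1 + z2))
    return path
```

Since this offspring law puts all its mass on values $\ge1$, neither process dies out; the path typically changes very little after the first few generations, while over many seeds the terminal values spread out over $(0,1)$.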
Here we will work mainly in discrete time, and then connect to continuous time in the proof of Theorem \ref{th:main}. We denote the total number of edges in the graph by $N$, that is, $$ N=\frac{1}{2}\sum_id_i; $$ this is the total number of steps in the discrete time exploration process. \begin{prop}\label{prop:main} Assume that $\lambda_1=\lambda_2=1$ and let $\nu_n$ be defined as in \eqref{nun}. As \ntoo\, we have for any $\eps>0$ that \begin{equation}\label{lx1} \sup_{\nu_n\le k\le (1-\eps)N}\bigabs{M_k-M_{\nu_n}} \pto 0. \end{equation} \end{prop} \begin{remark} Proposition \ref{prop:main} is valid for any sequence $\nu_n\to\infty$ with $\nu_n\le(1-\eps)N$. However, we will apply it to the sequence $\nu_n$ defined in \eqref{nun} and therefore formulate it for this. The idea is that the branching process approximation in Section 2 remains valid beyond step $\nu_n$ in the discrete process, and Proposition \ref{prop:main} then ensures that the proportion of type 1 vertices does not change after that. \end{remark} The key observation in the proof of Proposition \ref{prop:main} is that $M_k$ is a martingale when $\lambda_1=\lambda_2$. We then show that the second moment assumption implies that the contribution to the quadratic variation of this martingale during the range $\nu_n$ to $(1-\eps)N$ is vanishingly small. With this at hand it is not hard to show \eqref{lx1}. \begin{lemma}\label{LM} If $\lambda_1=\lambda_2$, then $(M_k)_{k=0}^N$ is a martingale. \end{lemma} \begin{proof} Recall that $S_k$ denotes the total number of active half-edges after $k$ steps. Define $\gD S_k=S_{k+1}-S_k$, and similarly for other sequences. Let $\cF_k$ be the $\gs$-field generated by all events up to step $k$. Next, reveal whether a new vertex is infected in step $k$, and if so, the identity (and thus the degree) of the new infected vertex (however, we do not yet reveal the classification of the involved half-edges). 
Let $\cF_k^+\supset \cF_k$ denote the $\gs$-field generated by the events revealed so far. If a new node of degree $d$ is infected, then $\gD S_k=d-2$, and $\gD \Se_k$ is either $d-2$ or 0, with conditional probabilities (given $\cF_k^+$) $M_k$ and $1-M_k$, respectively. Hence, in this case, $$ \E\left[\gD \Se_k\mid\cF_k^+\right] = M_k(d-2) $$ and thus $$ \E\left[\Se_{k+1}\mid\cF_k^+\right] = \Se_k+M_k(d-2) = M_k(S_k+d-2)=M_kS_{k+1}; $$ Hence, $\E\bigpar{M_{k+1}\mid\cFx_k}=M_k$. If no new vertex is infected, and $S_k>0$, then $\gD S_k=-2$. Since the two paired half-edges are then both drawn uniformly at random (without replacement) from the active half-edges, each one of them has (conditional) probability $M_k$ of being of type 1. Hence $$ \E\left[\gD \Se_k\mid\cF_k^+\right] = -2M_k $$ and thus $$ \E\left[\Se_{k+1}\mid\cF_k^+\right] = \Se_k-2M_k= M_k(S_k-2)=M_kS_{k+1}. $$ Consequently, if $S_k>2$, so that $S_{k+1}>0$, then $\E\left[M_{k+1}\mid\cFx_k\right]=M_k$. If $S_k=2$, so that $S_{k+1}=0$, or if $S_k=S_{k+1}=0$, then $M_{k+1}=M_k$ by definition. Hence, in all cases $\E\left[M_{k+1}\mid\cFx_k\right]=M_k$, and thus $\E\left[M_{k+1}\mid\cF_k\right]=M_k$. \end{proof} In order to obtain a bound on the quadratic variation of (a stopped version of) $M_k$, we need to show that $S_k$ grows at least linearly in $k$ throughout the range $\nu_n$ to $(1-\eps)N$. \begin{lemma}\label{LC} If\/ $\lambda_1=\lambda_2$, then, for every $\eps>0$ there exists $c>0$ such that \whp{} $S_k\ge ck$ whenever $\nu_n\le k\le (1-\eps)N$. \end{lemma} \begin{proof} Assume that $\lambda_1=\lambda_2=1$. The total set of active half-edges then evolves as in a one-type process with a single unit rate infection type. We consider a continuous time representation of such a process, inspired by \cite{SvanteMalwina}. 
As in our continuous time exploration process, each half-edge is throughout classified as \emph{free} or \emph{paired}, and free half-edges are labeled as \emph{active} or \emph{inactive}. All half-edges are assigned independent unit rate exponential life lengths and, to start the growth, two vertices are chosen uniformly at random and their half-edges are declared active, while all other half-edges are inactive. The process then evolves as follows: an active half-edge $q$ is chosen uniformly at random and, when the life length of a free half-edge $r\neq q$ (active or inactive) expires, $q$ and $r$ are paired. The vertex to which $r$ is attached becomes infected (if it was not infected already) and its remaining half-edges are activated. This procedure is repeated until there are no active half-edges left. It is straightforward to verify that, once types are ignored (and time scales are disregarded), the process is equivalent to the two-type growth process with equal rates. Note that, in the original continuous time process, the growth is slow in the beginning when there are few active half-edges, while in this version, the growth is fast in the beginning when there are many free half-edges whose life lengths compete. We first show that a large proportion of the edges are identified in finite time. \begin{claim} For every $\eps>0$ there exists $t_0=t_0(\eps)$ such that the number of pairings up to time $t_0$ is at least $(1-\eps)N$ \whpx. \end{claim} \begin{proof}[Proof of claim] Note that the time of the $k$th pairing is the sum of $k$ independent exponentials with parameters $2N-1,2N-3,\ldots,2N-2k+1$. Let $\xi_1,\xi_2,\ldots,\xi_N$ be independent and exponentially distributed with parameter 2 and write $\xi_{(1)}<\xi_{(2)}<\cdots<\xi_{(N)}$ for the order statistics of the $\xi_k$'s.
Due to the memoryless property, $\xi_{(k)}$ is the sum of $k$ independent exponentials with parameters $2N,2N-2,\ldots,2N-2k+2$, and it follows that the time of the $k$th pairing is stochastically dominated by $\xi_{(k+1)}$. We are hence done if we show that $\xi_{(\lceil(1-\eps)N\rceil+1)}\le t_0$ \whp\, for some $t_0$ or, equivalently, that the number of $\xi_k$ that exceed $t_0$ is at most $\eps N-1$. This, however, follows from the law of large numbers if we pick $t_0$ so large that $\P(\xi_k>t_0)<\eps$. \end{proof} \begin{claim} There exists $\delta>0$ such that throughout the interval $[0,t_0]$ the proportion of uninfected vertices with degree at least $3$ is at least $\delta$ \whpx. \end{claim} \begin{proof}[Proof of claim] Fix $d\ge3$ such that $p_d>0$. Let $V_d(t)$ denote the number of vertices of degree $d$ with all half-edges having life lengths longer than $t$. Again by the (weak) law of large numbers we have that $$ \Bigl|\frac{1}{n}V_d(t_0)-p_d\,e^{-dt_0}\Bigr|\pto0\quad\text{as }n\to\infty. $$ The number of uninfected vertices of degree $d$ at time $t_0$ is at least $V_d(t_0)-2$, so the claim follows. \end{proof} We now return to the discrete time exploration process. Recall that $\Delta S_k=S_{k+1}-S_k$ and that $\mathcal{F}_k$ is the $\sigma$-field of events determined by the process up to time $k$. After $k$ steps there are $2N-2k$ unpaired half-edges and hence $$ \P\big(\Delta S_k=-2\bigmid\mathcal{F}_k\big)\,=\,\frac{S_k-1}{2N-2k-1}\,\le\,\frac{S_k}{2N-2k}. $$ If the active half-edge that is paired in step $k+1$ is connected to an inactive half-edge attached to a vertex with degree at least 3, then the number of active half-edges increases. The degree of the vertex of the inactive half-edge has a size-biased distribution, and hence the probability that it is at least 3 is at least as large as the proportion of uninfected vertices with degree at least 3.
Combining the above two claims we find that, for all $k=1,2,\ldots,(1-\eps)N$, \whp $$ \P\big(\Delta S_k\ge1\bigmid\mathcal{F}_k\big)\,\ge\,\delta\Big(1-\frac{S_k}{2N-2k}\Big). $$ In particular, whenever $1\le S_k\le \eps\delta N/4$, we have that $$ \P\big(\Delta S_k=-2\bigmid\mathcal{F}_k\big)\le\delta/8\quad \text{and}\quad\P\big(\Delta S_k\ge1\bigmid\mathcal{F}_k\big)\ge\delta/2. $$ Now, let $\zeta_1,\ldots,\zeta_N$ be i.i.d.\ random variables taking values $-2$ and $1$ with probability $\delta/8$ and $\delta/4+\eps\delta/8$, respectively, and otherwise the value $0$, and define $X_k:=\sum_{j=1}^k\zeta_j$. Then, by the law of large numbers, $X_k>\eps\delta k/16$ \whp\, for all $k\ge\nu_n$, while $X_k$ is unlikely to ever exceed $\eps\delta N/4$. Moreover, since $\nu_n=o(\sqrt{n})$ by \eqref{nun}, the number of active half-edges is unlikely to ever hit zero in the first $\nu_n$ steps.\footnote{Indeed, either $S_k$ exceeds $2\nu_n$ before reaching zero, which is good, or the probability of pairing two active half-edges is at most $2\nu_n/(N-2\nu_n)$ in each of these steps, so the claim follows from the union bound.} We conclude that there is a coupling between $(S_k)_{k\ge1}$ and $(X_k)_{k\ge1}$ such that \whp $$ S_k\ge X_k\quad\text{for all }k=1,2,\ldots,(1-\eps)N. $$ Consequently, $S_k\ge\eps\delta k/16$ \whp\, whenever $\nu_n\le k\le(1-\eps)N$. \end{proof} Fix $\eps>0$ and $c$ as in \refL{LC}, and let $\tau$ be the stopping time $\min\{k\ge\nu_n:S_k< ck\}$. Thus, by \refL{LC}, \whp{} $\tau>(1-\eps)N$. Let $\tM_k:=M_{k\land \tau}$, that is, the martingale $M$ stopped at $\tau$. Then $(\tM_k)_{k=0}^N$ is also a martingale. We consider the quadratic variation of this martingale. \begin{lemma}\label{LQ} As \ntoo, $$ \E\left[\sum_{k=\nu_n}^{(1-\eps)N} |\gD \tM_k|^2\right] \to0. $$ \end{lemma} \begin{proof} Throughout the proof, $C$ denotes a constant, possibly depending on $\eps$ and $c$, that may be different on each occurrence. Let $k\in[\nu_n,(1-\eps)N]$.
We may suppose that $S_k\ge ck$, since otherwise $\tau\le k$ and $\gD \tM_k=0$. Then, \begin{equation}\label{gDM} \gD \tM_k = \gD M_k = \frac{\Se_k+\gD \Se_k}{S_{k}+\gD S_{k}} - \frac{\Se_k}{S_k} = \frac{S_k\gD \Se_k-\Se_k\gD S_k}{S_k(S_{k}+\gD S_k)}. \end{equation} If a new vertex of degree $d$ is infected at time $k+1$, then $\gD\Se_k$ equals either $0$ or $\gD S_k=d-2$. In either case, \eqref{gDM} implies that $$ | \gD \tM_k| \le \frac{d-2}{S_{k}+d-2} \le \frac{d}{S_{k}+d} \le \frac{d}{c{k}+d} \le C \frac{d}{k+d}. $$ If no new vertex is infected at time $k+1$, then $\gD S_k=-2$ and \eqref{gDM} yields (for large $k$) $$ | \gD\tM_k| \le \frac{2}{S_{k}-2} \le \frac{2}{c{k}-2} \le \frac{C}{k}. $$ Hence, if $d\kk$ is the degree of the vertex infected at time $k+1$, with $d\kk=0$ if there is no such vertex, then \begin{equation}\label{ele} \E\left[\sum_{k=\nu_n}^{(1-\eps)N} |\gD \tM_k|^2\right] \le C \E\left[\sum_{k=\nu_n}^{(1-\eps)N} \Bigpar{\frac{d\kk}{k+d\kk}}^2\right] + C\sum_{k=\nu_n}^\infty \frac{1}{k^2}. \end{equation} After step $k$, there are $2(N-k)$ free half-edges and hence, for each vertex $i$ and step $k\le(1-\eps)N$, the probability that $i$ is infected in step $k+1$, given that it has not been infected earlier, equals $d_i/(2(N-k)-1)\leq Cd_i/n$. Hence, for any $k\le(1-\eps)N$, \begin{equation}\label{win} \E \left[\Bigpar{\frac{d\kk}{k+d\kk}}^2\right] \le C \sumin \frac{d_i}{n}\Bigpar{\frac{d_i}{k+d_i}}^2 = C \frac{1}n\sumin \frac{d_i^3}{(k+d_i)^2} =C \E\left[\frac{D_n^3}{(k+D_n)^2}\right]. \end{equation} For any $d\ge1$, we have the estimates $$ \sumk \frac{d^3}{(k+d)^2} \le \sum_{k=1}^d \frac{d^3}{d^2} + \sum_{k=d+1}^\infty\frac{d^3}{k^2} \le d^2 + \frac{d^3}d=2d^2 $$ and $$ \sum_{k=\nu_n}^\infty \frac{d^3}{(k+d)^2} \le \sum_{k=\nu_n}^\infty\frac{d^3}{(k+1)^2} \le \frac{d^3}{\nu_n}. $$ Hence, \begin{equation}\label{auf} \sum_{k=\nu_n}^\infty \frac{D_n^3}{(k+D_n)^2} \le {2D_n^2\land\frac{D_n^3}{\nu_n}}. 
\end{equation} By assumption, $D_n\dto D$ and $\nu_n\to\infty$, and thus $2D_n^2\land\nu_n\qw{D_n^3} \le \nu_n\qw D_n^3\pto 0$. Furthermore, $D_n^2$ is uniformly integrable, and thus so is $2D_n^2\land\nu_n\qw{D_n^3}$. Consequently, we have by \eqref{auf} that \begin{equation}\label{sof} \E\left[\sum_{k=\nu_n}^\infty \frac{D_n^3}{(k+D_n)^2}\right] \le \E\left[2D_n^2\land\frac{D_n^3}{\nu_n}\right]\to0. \end{equation} The lemma now follows from \eqref{ele}, \eqref{win} and \eqref{sof}. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:main}] Since $\tM_k-\tM_{\nu_n}$, with $k\ge\nu_n$, is a martingale, Doob's inequality and \refL{LQ} yield \begin{equation*} \E\left[\sup_{\nu_n\le k\le (1-\eps)N}\bigabs{\tM_k-\tM_{\nu_n}}^2\right] \le 4 \E\left[\bigabs{\tM_{\floor{(1-\eps)N}}-\tM_{\nu_n}}^2\right] =4 \E\left[\sum_{k=\nu_n}^{\floor{(1-\eps)N}-1} |\gD \tM_k|^2\right]\to0. \end{equation*} Hence, $ \sup_{\nu_n\le k\le (1-\eps)N}\bigabs{\tM_k-\tM_{\nu_n}} \pto 0$, and \eqref{lx1} follows since by \refL{LC}, \whp{} $\tau>(1-\eps)N$ and thus $M_k=\tM_k$ for $k\le(1-\eps)N$. \end{proof} \section{Proof of Theorem \ref{th:main}} We can now prove Theorem \ref{th:main} by combining Proposition \ref{prop:initial} and Proposition \ref{prop:main}. \begin{proof} First assume that $\lambda_1=\lambda_2$. Fix $\eps>0$ and let the sequences $\nu_n$ and $t_n$ be as in Propositions \ref{prop:initial} and \ref{prop:main}. Recall from the paragraph preceding \eqref{nun} that $\cN_t$ denotes the number of steps (pairings of half-edges) that have been performed at time $t$ in the continuous time exploration process. By definition, we have that $\cM_{t_n}=M_{\cN_{t_n}}$ and, by \eqref{nun}, that $\cN_{t_n}\geq \nu_n$ \whpx. Hence, by Proposition \ref{prop:main}, $$ \sup_{\nu_n\le k\le (1-\eps)N}\bigabs{M_k-\cM_{t_n}} \pto 0.
$$ Furthermore, by Proposition \ref{prop:initial}, the fraction $\cM_{t_n}$ converges in distribution to a continuous random variable with support on $(0,1)$. Since a vertex that is infected in step $k+1$ in the discrete time exploration process is infected by type 1 independently with probability $M_k$, it follows from the law of large numbers that the fraction of type 1 vertices among all vertices that are infected at steps $k\in[\nu_n,(1-\eps)N]$ converges in distribution to $V$. Recall from \eqref{nun} that $\nu_n\leq n^{1/3}$ by definition. Hence the number of vertices that are infected before step $\nu_n$ does not exceed $n^{1/3}$. The number of vertices that are infected after step $(1-\eps)N$ \whp{} does not exceed $\eps (\E[D]+1)n$, since $N\leq (\E[D]+1)n$ \whpx. The asymptotic fraction of vertices infected for $k\in[\nu_n,(1-\eps)N]$ is hence at least $1-C\eps$. Since $\eps>0$ is arbitrary, part (a) of the theorem follows. To prove part (b), assume that $\lambda_1<\lambda_2$ and consider a modified version of the process where, after time $t_n$, the weaker type 1 infection spreads with the same larger intensity $\lambda_2$ as the type 2 infection. To generate this process, we equip each half-edge $h$ with two independent Poisson processes $\cP^{\sss (1)}_h$ and $\cP^{\sss (2)}_h$, both with rate $\lambda_2$, and let $\check{\cP}^{\sss(1)}_h$ denote a thinned version of $\cP^{\sss (1)}_h$ where each point is kept with probability $\lambda_1/\lambda_2$, so that $\check{\cP}^{\sss (1)}_h$ is a Poisson process with rate $\lambda_1$. The process is then generated by letting the possible infection times for an active type 1 or 2 half-edge $h$ be specified by $\check{\cP}^{\sss (1)}_h$ and $\cP^{\sss (2)}_h$, respectively, up until time $t_n$, and by $\cP^{\sss (1)}_h$ and $\cP^{\sss (2)}_h$ after that time. The original process can be generated by using the thinned process $\check{\cP}^{\sss (1)}_h$ for type 1 throughout the whole time course.
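The thinning construction is standard and easy to make concrete. The following Python sketch (our own illustration, with hypothetical names; it plays no role in the proof) samples a rate-$\lambda_2$ Poisson process on $[0,T]$ together with its $\lambda_1/\lambda_2$-thinning, so that the kept points form the rate-$\lambda_1$ process contained in the rate-$\lambda_2$ one:

```python
import random

def coupled_clocks(lam1, lam2, T, rng):
    """Sample a rate-lam2 Poisson process on [0, T] together with its
    (lam1/lam2)-thinning; the kept points form a rate-lam1 Poisson
    process whose points are contained in the rate-lam2 one."""
    assert 0 < lam1 <= lam2
    t, full, thinned = 0.0, [], []
    while True:
        t += rng.expovariate(lam2)       # inter-arrival times are Exp(lam2)
        if t > T:
            return full, thinned
        full.append(t)
        if rng.random() < lam1 / lam2:   # keep each point independently
            thinned.append(t)
```

The containment of the thinned point set in the full one is exactly what makes the modified process dominate the original one for type 1.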
The corresponding discrete time processes are defined by observing the continuous time processes at the times of pairings. Let $\check{\cS}^{\sss(i)}_t$ denote the number of active type $i$ half-edges at time $t$ in the modified process, and similarly for other quantities. The above construction provides a coupling of the original process and the modified process where $\check{\cS}^{\sss(i)}_t=\cS^{\sss(i)}_t$ for $t\leq t_n$ and $i=1,2$, while $\check{\cS}^{\sss(1)}_t\geq \cS^{\sss(1)}_t$ and $\check{\cS}^{\sss(2)}_t\leq \cS^{\sss(2)}_t$ for $t>t_n$. It follows that $\check{\cM}_t=\cM_t$ for $t\leq t_n$ and $\check{\cM}_t\geq \cM_t$ for $t>t_n$. Analogously, if $\cV^{\sss(i)}_t$ denotes the set of infected vertices of type $i$ at time $t$, we have that $\check{\cV}^{\sss(1)}_t\supseteq \cV^{\sss(1)}_t$ and $\check{\cV}^{\sss(2)}_t\subseteq \cV^{\sss(2)}_t$ for all $t$. Hence the number of type 1 infected vertices is at least as large in the modified process as in the original process, and it will suffice to show that the fraction of type 1 infected vertices in the modified process converges to 0. The modified process has equal intensities for the infection types after time $t_n$, that is, after step $\cN_{t_n}$ in the discrete time process. By \eqref{nun}, we have $\cN_{t_n}\geq \nu_n$ \whp\, and it then follows from Proposition \ref{prop:main} that $$ \sup_{\cN_{t_n}\le k\le (1-\eps)N}\bigabs{\check{M}_k-\check{\cM}_{t_n}} \pto 0. $$ Up to time $t_n$, on the other hand, type 1 spreads with a strictly smaller intensity and thus, by Proposition \ref{prop:initial}, the fraction $\check{\cM}_{t_n}$ converges to 0 in probability. By the same arguments as in the proof of part (a), this yields that the fraction of type 1 infected vertices in the modified process converges to 0, as desired.
\end{proof} \section{Further work} We have studied competing first passage percolation on the configuration model with finite variance degrees and exponential edge weights, and shown that both infection types occupy positive fractions of the vertex set if and only if they spread with the same intensity. There are several natural extensions of this work. One would be to investigate the scaling of the number of vertices of the losing type when the intensities are different. The paper \cite{regular} contains results in this direction for random regular graphs and we expect the situation to be similar for graphs with finite degree variance. Specifically, we conjecture that, when $\lambda_1<\lambda_2$, the number of vertices occupied by type 1 is of the order $n^{\lambda_1/\lambda_2}$. In contrast to the case when the degree variance is infinite, treated in \cite{winner}, the winner hence does not take it all, but the losing type also grows to infinity with $n$. In \cite{regular}, more general initial conditions are also considered, where the initial number of one or both types may grow with $n$. This could also be done in our case and, in addition, one could consider initial sets where the vertices are chosen based on degree. Is it, for instance, possible for a weaker type to capture a positive fraction of the vertices if it can start from one or more high degree vertices, while the stronger type starts from a vertex with small degree? Another extension would be to study more general passage time distributions, possibly different for the two types. Also in the general case, the initial growth of the types can be approximated by branching processes, but these are then not Markovian. A reasonable guess is that the possibility for both types to occupy positive fractions of the vertex set is determined by the relation between the Malthusian parameters of these branching processes, as discussed in \cite{fixspeedI}.
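The coexistence/extinction dichotomy of the main theorem can be illustrated by a toy P\'olya-type simulation, which keeps only the relative growth rates of the two types and none of the configuration-model structure; all parameter values below are arbitrary choices for illustration.

```python
import random

def grow(lam1, lam2, steps, rng):
    """Toy two-type growth: at each step one new vertex is added, and it is
    of type 1 with probability proportional to lam1 times the current
    type-1 count (and similarly for type 2)."""
    x1, x2 = 1.0, 1.0
    for _ in range(steps):
        if rng.random() < lam1 * x1 / (lam1 * x1 + lam2 * x2):
            x1 += 1.0
        else:
            x2 += 1.0
    return x1 / (x1 + x2)

rng = random.Random(1)
# Equal rates: the type-1 fraction settles near a nondegenerate random limit,
# so independent runs give genuinely different fractions (coexistence).
equal = [grow(1.0, 1.0, 50_000, rng) for _ in range(5)]
# Unequal rates: the weaker type's fraction tends to 0 (no coexistence).
unequal = grow(1.0, 2.0, 50_000, rng)
```

In this toy model with $\lambda_1<\lambda_2$, the type-1 count grows roughly like $n^{\lambda_1/\lambda_2}$, which is in line with the conjecture above, though of course it proves nothing about the configuration model itself.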
\section{Introduction} \vspace{0.2 cm} Let $(M, Y, g^{M})$ be an $m$-dimensional compact oriented Riemannian manifold with boundary $Y$ and $f : M \rightarrow M$ be a smooth map such that $f(Y) \subset Y$. A point $p \in M$ is said to be a simple fixed point of $f$ if \begin{eqnarray} \label{E:1.1} f(p) = p, \qquad \operatorname{det} \left( I - df(p) \right) \neq 0. \end{eqnarray} \noindent If $p$ is a simple fixed point, the graph of $f$ is transverse to the diagonal of $M \times M$ at $(p, p)$, which implies that simple fixed points are discrete. Throughout this paper we assume that all fixed points of $f$ are simple, and hence $f$ has only finitely many fixed points. For fixed points on the boundary $Y$, we need one more structure. Let $f(x_{0}) = x_{0}$ with $x_{0} \in Y$. Then $df(x_{0}) : T_{x_{0}}M \rightarrow T_{x_{0}}M$ induces a map $df_{Y}(x_{0}) : T_{x_{0}}Y \rightarrow T_{x_{0}}Y$. We consider \begin{eqnarray*} a_{x_{0}} & = & df(x_{0}) (\operatorname{mod} T_{x_{0}}Y) : T_{x_{0}} M / T_{x_{0}} Y \rightarrow T_{x_{0}} M / T_{x_{0}} Y. \end{eqnarray*} \noindent Since the quotient space $T_{x_{0}} M / T_{x_{0}} Y$ is one-dimensional, the map $a_{x_{0}}$ is simply multiplication by a number, which we denote by $a_{x_{0}}$ again. It is not difficult to see that $a_{x_{0}} \geq 0$ by considering the quotient space $T_{x_{0}} M / T_{x_{0}} Y$ as a normal half-line pointing inward at the boundary point $x_{0}$. Moreover, since the fixed point $x_{0}$ is simple, $a_{x_{0}} \neq 1$ (see [5] for details). \begin{definition} (1) A simple boundary fixed point $x_{0} \in Y$ is called {\it attracting} if $a_{x_{0}} < 1$ and {\it repelling} if $a_{x_{0}} > 1$. \newline (2) We denote by ${\mathcal F}_{0}(f)$, ${\mathcal F}^{+}_{Y}(f)$ and ${\mathcal F}^{-}_{Y}(f)$ the set of all simple fixed points in the interior of $M$, the attracting fixed points in $Y$ and the repelling fixed points in $Y$, respectively.
We denote ${\mathcal F}_{Y}(f) := {\mathcal F}^{+}_{Y}(f) \cup {\mathcal F}^{-}_{Y}(f)$ and ${\mathcal F}(f) := {\mathcal F}_{0}(f) \cup {\mathcal F}_{Y}(f)$. \end{definition} \noindent A. V. Brenner and M. A. Shubin proved the following result in [5]. \begin{eqnarray} \label{E:1.2} \sum_{q=0}^{m} (-1)^{q} \operatorname{Tr} \left( f^{\ast} : H^{q}(M) \rightarrow H^{q}(M) \right) & = & \sum_{p \in {\mathcal F}_{0}(f) \cup {\mathcal F}^{+}_{Y}(f)} \operatorname{sign} \operatorname{det} \left( I - df(p) \right), \nonumber \\ \sum_{q=0}^{m} (-1)^{q} \operatorname{Tr} \left( f^{\ast} : H^{q}(M, Y) \rightarrow H^{q}(M, Y) \right) & = & \sum_{p \in {\mathcal F}_{0}(f) \cup {\mathcal F}^{-}_{Y}(f)} \operatorname{sign} \operatorname{det} \left( I - df(p) \right). \end{eqnarray} \noindent This result extends the Atiyah-Bott-Lefschetz fixed point formula proven on a closed manifold in [1]. On the other hand, the authors introduced new de Rham complexes $\left( \Omega^{\bullet}_{{\widetilde {\mathcal P}}_{0}}(M), d \right)$ and $\left( \Omega^{\bullet}_{{\widetilde {\mathcal P}}_{1}}(M), d \right)$ by using some boundary conditions ${\widetilde {\mathcal P}}_{0}$ and ${\widetilde {\mathcal P}}_{1}$, which compute $H^{q} \left( \Omega^{\bullet}_{{\widetilde {\mathcal P}}_{0}}(M), d \right) = \begin{cases} H^{q}(M, Y) & \text{if} \quad q = \operatorname{even} \\ H^{q}(M) & \text{if} \quad q = \operatorname{odd} \end{cases}$ and $H^{q} \left( \Omega^{\bullet}_{{\widetilde {\mathcal P}}_{1}}(M), d \right) = \begin{cases} H^{q}(M) & \text{if} \quad q = \operatorname{even} \\ H^{q}(M, Y) & \text{if} \quad q = \operatorname{odd} \end{cases}$. In this paper, we are going to discuss the Lefschetz fixed point formula on these complexes. 
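As an elementary illustration of (\ref{E:1.2}), which we include for orientation and which is not taken from [5], consider $M = [0, 1]$ with $Y = \{0, 1\}$ and $f(x) = x^{2}$. The only fixed points are $x = 0$, which is attracting ($a_{0} = f^{\prime}(0) = 0$), and $x = 1$, which is repelling ($a_{1} = f^{\prime}(1) = 2$), and there are no interior fixed points:

```latex
% Checking (1.2) by hand for M = [0,1], Y = {0,1}, f(x) = x^2:
\begin{eqnarray*}
\sum_{q=0}^{1} (-1)^{q} \operatorname{Tr} \left( f^{\ast} : H^{q}(M) \rightarrow H^{q}(M) \right)
 & = & 1 \hspace{0.1 cm} = \hspace{0.1 cm} \operatorname{sign} \operatorname{det} \left( 1 - f^{\prime}(0) \right), \\
\sum_{q=0}^{1} (-1)^{q} \operatorname{Tr} \left( f^{\ast} : H^{q}(M, Y) \rightarrow H^{q}(M, Y) \right)
 & = & - 1 \hspace{0.1 cm} = \hspace{0.1 cm} \operatorname{sign} \operatorname{det} \left( 1 - f^{\prime}(1) \right),
\end{eqnarray*}
```

since $f^{\ast}$ acts as the identity on $H^{0}(M) \cong \mathbb{R}$ and on $H^{1}(M, Y) \cong \mathbb{R}$, the latter because $f$ is a monotone degree-one map of the interval fixing its endpoints, while $H^{1}(M) = H^{0}(M, Y) = 0$.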
More precisely, when $f : M \rightarrow M$ is a smooth map having simple fixed points and satisfying some special condition near the boundary $Y$ (see Definition \ref{Definition:3.1}), we are going to describe \begin{eqnarray*} & & \sum_{q=\operatorname{even}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M, Y) \rightarrow H^{q}(M, Y) \right) \hspace{0.1 cm} - \hspace{0.1 cm} \sum_{q=\operatorname{odd}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M) \rightarrow H^{q}(M) \right) \qquad \text{and} \\ & & \sum_{q=\operatorname{even}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M) \rightarrow H^{q}(M) \right) \hspace{0.1 cm} - \hspace{0.1 cm} \sum_{q=\operatorname{odd}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M, Y) \rightarrow H^{q}(M, Y) \right) \end{eqnarray*} \noindent in terms of fixed points of $f$ and some additional data (see Theorem \ref{Theorem:3.3} below). For this purpose, we are going to use the heat kernel method for the Lefschetz fixed point formula (cf. [3], [6]). \vspace{0.2 cm} \section{de Rham complex $( \Omega^{\bullet}_{{\widetilde {\mathcal P}}_{0}/{\widetilde {\mathcal P}}_{1}}(M), d )$ on a compact Riemannian manifold with boundary} \vspace{0.2 cm} In this section we are going to introduce the de Rham complex $( \Omega^{\bullet}_{{\widetilde {\mathcal P}}_{0}/{\widetilde {\mathcal P}}_{1}}(M), d )$ on a compact Riemannian manifold with boundary by using the boundary condition ${\widetilde {\mathcal P}}_{0}/{\widetilde {\mathcal P}}_{1}$. We recall that $(M, Y, g^{M})$ is an $m$-dimensional compact oriented Riemannian manifold with boundary $Y$. From now on, we assume that $g^{M}$ is a product metric near the boundary $Y$. We denote by $d^{Y}_{q} : \Omega^{q}(Y) \rightarrow \Omega^{q+1}(Y)$ the de Rham operator induced from $d : \Omega^{q}(M) \rightarrow \Omega^{q+1}(M)$ and denote by $\star_{Y} : \Omega^{q}(Y) \rightarrow \Omega^{m-1-q}(Y)$ the Hodge star operator on $Y$ induced from the Hodge star operator $\star_{M}$ on $M$. 
Then the formal adjoint $(d^{Y}_{q})^{\ast}$ of $d^{Y}_{q}$ is defined in the usual way. We denote $\Delta_{Y}^{q} := (d^{Y}_{q})^{\ast} d^{Y}_{q} + d^{Y}_{q-1} (d^{Y}_{q-1})^{\ast}$ and ${\mathcal H}^{q}(Y) := \operatorname{ker} \Delta_{Y}^{q}$. By the Hodge decomposition, we have \begin{eqnarray*} \Omega^{q}(Y) & = & \operatorname{Im} d^{Y}_{q-1} \oplus {\mathcal H}^{q}(Y) \oplus \operatorname{Im} (d^{Y}_{q})^{\ast} . \end{eqnarray*} \noindent Let $N$ be a collar neighborhood of $Y$ which is isometric to $[0,1) \times Y$ and $u$ be the coordinate normal to the boundary $Y$ on $N$. If $d \phi = d^\ast \phi = 0$ for $\phi \in \Omega^{q}(M)$, a simple computation shows that $\phi$ is expressed on the boundary $Y$ by \begin{equation} \label{E:2.1} \phi|_{Y} = \left( d^{Y} \varphi_{1} + \varphi_{2} \right) + du \wedge \left( d^{Y \ast} \psi_{1} + \psi_{2} \right), \quad \varphi_{1}, \hspace{0.1 cm} \psi_{1} \in \Omega^{\bullet}(Y), \quad \varphi_{2}, \hspace{0.1 cm} \psi_{2} \in {\mathcal H}^{\bullet}(Y). \end{equation} \noindent In other words, $\varphi_{2}$ and $\psi_{2}$ are harmonic parts of $\iota^{\ast} \phi$ and $\star_{Y} \iota^{\ast} ( \star_{M} \phi )$ up to sign, where $ \iota : Y \rightarrow M$ is the natural inclusion. We define ${\mathcal K}^{q}$ and ${\mathcal K}$ by \begin{equation} \label{E:2.2} {\mathcal K}^{q} := \{ \varphi_{2} \in {\mathcal H}^{q}(Y) \mid d \phi = d^\ast \phi = 0 \}, \qquad {\mathcal K} := \oplus_{q=0}^{m-1} {\mathcal K}^{q}, \end{equation} \noindent where $\phi$ has the form (\ref{E:2.1}). If $d \phi = d^\ast \phi = 0$ for $\phi \in \Omega^{q}(M)$, then $d (\star_{M} \phi) = d^\ast (\star_{M} \phi) = 0$, which implies that \begin{equation} \label{E:2.3} \star_{Y} {\mathcal K}^{m-q} = \{ \psi_{2} \in {\mathcal H}^{q-1}(Y) \mid d \phi = d^\ast \phi = 0 \}, \end{equation} \noindent where $\phi$ has the form (\ref{E:2.1}). We have the following lemma, for whose proof we refer to Lemma 2.4 in [8].
\vspace{0.2 cm} \begin{lemma} \label{Lemma:2.1} ${\mathcal K}$ is orthogonal to $\star_{Y} \mathcal{K}$ and ${\mathcal K} \oplus (\star_{Y} {\mathcal K}) = {\mathcal H}^{\bullet}(Y)$. \end{lemma} \vspace{0.2 cm} \noindent We consider the homomorphism $\iota^{\ast} : H^{\bullet}(M) \rightarrow H^{\bullet}(Y)$ induced from the natural inclusion $\iota : Y \rightarrow M$. It is well known that each cohomology class $[\omega] \in H^{\bullet}(M)$ has a unique representative $\omega_{0} \in \Omega^{\bullet}(M)$ such that $d \omega_{0} = d^{\ast} \omega_{0} = 0$ and $\iota^{\ast} (\star_{M} \omega_{0}) = 0$ (see Theorem 2.7.3 in [6]). Since $\iota^{\ast} \omega_{0}$ is a closed form, $[\iota^{\ast} \omega_{0}] \in H^{\bullet}(Y)$. We denote by $(\iota^{\ast} \omega_{0})_{h}$ the harmonic part of $\iota^{\ast} \omega_{0}$ and define a map \begin{eqnarray*} {\mathcal G } : \operatorname{Im} \left( \iota^{\ast} : H^{\bullet}(M) \rightarrow H^{\bullet}(Y) \right) \rightarrow {\mathcal K}, \qquad {\mathcal G } ([\iota^{\ast} \omega_{0}]) = (\iota^{\ast} \omega_{0})_{h} . \end{eqnarray*} \noindent A standard argument using the Lefschetz-Poincar\'e duality shows that $\operatorname{dim} \operatorname{Im} \left( \iota^{\ast} : H^{\bullet}(M) \rightarrow H^{\bullet}(Y) \right)$ is equal to $\frac{1}{2} \operatorname{dim} H^{\bullet}(Y)$. Since ${\mathcal G}$ is a monomorphism, this fact together with Lemma \ref{Lemma:2.1} shows that ${\mathcal G}$ is an isomorphism. Summarizing this fact, we have the following result (cf. Corollary 8.4 in [9]). \begin{lemma} \label{Lemma:2.2} For each $q$, ${\mathcal K}^{q}$ can be naturally identified with $\operatorname{Im} \left( \iota^{\ast} : H^{q}(M) \rightarrow H^{q}(Y) \right)$. 
\end{lemma} \vspace{0.2 cm} We next consider the natural isomorphism \begin{equation} \label{E:2.4} \Psi : \Omega^{p}(N) \rightarrow C^{\infty}([0, 1), \Omega^{p}(Y) \oplus \Omega^{p-1}(Y)), \qquad \Psi(\omega_{1} + du \wedge \omega_{2}) = \left( \begin{array}{clcr} \omega_{1} \\ \omega_{2} \end{array} \right). \end{equation} \noindent We put ${\mathcal L}_{0} := \left( \begin{array}{clcr} {\mathcal K} \\ {\mathcal K} \end{array} \right)$, ${\mathcal L}_{1} := \left( \begin{array}{clcr} {\star_{Y} \mathcal K} \\ \star_{Y} {\mathcal K} \end{array} \right)$ and consider the orthogonal projections defined by \begin{eqnarray*} & & \hspace{1.0 cm} {\mathcal P}_{-, {\mathcal L}_{0}}, \hspace{0.1 cm} {\mathcal P}_{+, {\mathcal L}_{1}} : \Omega^{\bullet}(Y) \oplus \Omega^{\bullet}(Y) \rightarrow \Omega^{\bullet}(Y) \oplus \Omega^{\bullet}(Y) \\ & & \operatorname{Im} {\mathcal P}_{-, {\mathcal L}_{0}} = \left( \begin{array}{clcr} \operatorname{Im} d^{Y} \oplus {\mathcal K} \\ \operatorname{Im} d^{Y} \oplus {\mathcal K} \end{array} \right), \qquad \operatorname{Im} {\mathcal P}_{+, {\mathcal L}_{1}} = \left( \begin{array}{clcr} \operatorname{Im} (d^{Y})^{\ast} \oplus \star_{Y} {\mathcal K} \\ \operatorname{Im} (d^{Y})^{\ast} \oplus \star_{Y} {\mathcal K} \end{array} \right) . 
\end{eqnarray*} \noindent We then define the spaces of differential forms satisfying the boundary conditions ${\mathcal P}_{-, {\mathcal L}_{0}}$ and ${\mathcal P}_{+, {\mathcal L}_{1}}$ by \begin{eqnarray*} \Omega^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M) & := & \{ \phi \in \Omega^{q}(M) \mid {\mathcal P}_{-, {\mathcal L}_{0}} ( \phi|_{Y} ) = 0, \quad {\mathcal P}_{-, {\mathcal L}_{0}} (( \star_{M} ( d + d^{\ast} ) \phi)|_{Y} ) = 0 \}, \\ \Omega^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M) & := & \{ \phi \in \Omega^{q}(M) \mid {\mathcal P}_{+, {\mathcal L}_{1}} ( \phi|_{Y} ) = 0, \quad {\mathcal P}_{+, {\mathcal L}_{1}} (( \star_{M} ( d + d^{\ast} ) \phi)|_{Y} ) = 0 \}, \end{eqnarray*} \noindent and also define \begin{eqnarray} \label{E:2.5} \Omega^{q, \infty}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M) & = & \{ \phi \in \Omega^{q}(M) \mid {\mathcal P}_{-, {\mathcal L}_{0}} \left( \left( ( \star_{M} (d + d^{\ast}))^{l} \phi \right)|_{Y}\right) = 0, \quad l = 0, 1, 2, \cdots \}, \nonumber \\ \Omega^{q, \infty}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M) & = & \{ \phi \in \Omega^{q}(M) \mid {\mathcal P}_{+, {\mathcal L}_{1}} \left( \left( ( \star_{M} (d + d^{\ast}))^{l} \phi \right)|_{Y}\right) = 0, \quad l = 0, 1, 2, \cdots \}. \end{eqnarray} \vspace{0.2 cm} Simple computation shows that if $\phi \in \Omega^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M)$, then $\star_{M} \phi \in \Omega^{m-q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M)$ and vice versa. Similarly, for each $\phi \in \Omega^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M)$ and $\psi \in \Omega^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M)$, we have \begin{eqnarray} \label{E:2.6} {\mathcal P}_{+, {\mathcal L}_{1}} ( (d \phi )|_{Y} ) = 0 \qquad \text{and} \qquad {\mathcal P}_{-, {\mathcal L}_{0}} ( (d \psi )|_{Y} ) = 0. 
\end{eqnarray} \noindent These imply that $\star_{M}$ maps $\hspace{0.1 cm} \Omega^{q, \infty}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M)$ ($\Omega^{q, \infty}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M)$) into $\hspace{0.1 cm} \Omega^{m-q, \infty}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M)$ ($\Omega^{m-q, \infty}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M)$) and $d$ maps $\hspace{0.1 cm} \Omega^{q, \infty}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M)$ ($\Omega^{q, \infty}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M)$) into $\hspace{0.1 cm} \Omega^{q+1, \infty}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M)$ ($\Omega^{q+1, \infty}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M)$). \begin{definition} \label{Definition:2.1} We define projections ${\widetilde {\mathcal P}}_{0}$, ${\widetilde {\mathcal P}}_{1} : \Omega^{\bullet}(Y) \oplus \Omega^{\bullet}(Y) \rightarrow \Omega^{\bullet}(Y) \oplus \Omega^{\bullet}(Y)$ as follows. For $\phi \in \Omega^{q}(M)$, $$ {\widetilde {\mathcal P}}_{0} (\phi|_{Y}) = \begin{cases} {\mathcal P}_{-, {\mathcal L}_{0}} (\phi|_{Y}) \quad \text{if} \quad q \quad \text{is} \quad \text{even} \\ {\mathcal P}_{+, {\mathcal L}_{1}} (\phi|_{Y}) \quad \text{if} \quad q \quad \text{is} \quad \text{odd} , \end{cases} \qquad {\widetilde {\mathcal P}}_{1} (\phi|_{Y}) = \begin{cases} {\mathcal P}_{+, {\mathcal L}_{1}} (\phi|_{Y}) \quad \text{if} \quad q \quad \text{is} \quad \text{even} \\ {\mathcal P}_{-, {\mathcal L}_{0}} (\phi|_{Y}) \quad \text{if} \quad q \quad \text{is} \quad \text{odd} .
\end{cases} $$ \end{definition} \noindent \noindent Then the above argument leads to the following cochain complexes \begin{eqnarray} (\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{0}}(M), \hspace{0.1 cm} d) & : & 0 \longrightarrow \Omega^{0, \infty}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M) \stackrel{d}{\longrightarrow} \Omega^{1, \infty}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M) \stackrel{d}{\longrightarrow} \Omega^{2, \infty}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M) \stackrel{d}{\longrightarrow} \cdots \longrightarrow 0. \label{E:2.7} \\ (\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{1}}(M), \hspace{0.1 cm} d) & : & 0 \longrightarrow \Omega^{0, \infty}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M) \stackrel{d}{\longrightarrow} \Omega^{1, \infty}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M) \stackrel{d}{\longrightarrow} \Omega^{2, \infty}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M) \stackrel{d}{\longrightarrow} \cdots \longrightarrow 0. \label{E:2.8} \end{eqnarray} \vspace{0.2 cm} \noindent \noindent We define the Laplacians $\Delta^{q}_{{\widetilde {\mathcal P}}_{0}}$ and $\Delta^{q}_{{\widetilde {\mathcal P}}_{1}}$ by \begin{eqnarray*} \Delta^{q} := d_{q}^{\ast} d_{q} + d_{q-1} d_{q-1}^{\ast}, \qquad \operatorname{Dom} \left( \Delta^{q}_{{\widetilde {\mathcal P}}_{0}} \right) \hspace{0.1 cm} = \hspace{0.1 cm} \Omega^{q, \infty}_{{\widetilde {\mathcal P}}_{0}}(M) \hspace{0.1 cm} = \hspace{0.1 cm} \begin{cases} \Omega^{q, \infty}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(M) & \text{for} \hspace{0.2 cm} q \hspace{0.2 cm} \operatorname{even} \\ \Omega^{q, \infty}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(M) & \text{for} \hspace{0.2 cm} q \hspace{0.2 cm} \operatorname{odd} . \end{cases} \end{eqnarray*} \noindent We define $\operatorname{Dom} \left( \Delta^{q}_{{\widetilde {\mathcal P}}_{1}} \right)$ in the same way. 
It is not difficult to see that ${\mathcal P}_{-, {\mathcal L}_{0}}$ and ${\mathcal P}_{+, {\mathcal L}_{1}}$ are well-posed boundary conditions for the odd signature operator and Laplacian in the sense of Seeley ([7], [11]). We refer to Lemma 2.15 in [8] for details. Hence, $\Delta^{q}_{{\widetilde {\mathcal P}}_{0}}$ and $\Delta^{q}_{{\widetilde {\mathcal P}}_{1}}$ have compact resolvents and discrete spectra. Moreover, the Green formula shows that $\Delta^{q}_{{\widetilde {\mathcal P}}_{0}}$ and $\Delta^{q}_{{\widetilde {\mathcal P}}_{1}}$ are formally self-adjoint and non-negative. The following lemma is straightforward (see Lemma 2.11 in [8] for details). \vspace{0.2 cm} \begin{lemma} \label{Lemma:2.3} The cohomologies of the complex $(\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{0}/{\widetilde {\mathcal P}}_{1}}(M), \hspace{0.1 cm} d)$ are given as follows. \begin{eqnarray} \label{E:2.9} H^{q}((\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{0}}(M), \hspace{0.1 cm} d)) & = & \operatorname{ker} \Delta^{q}_{{\widetilde {\mathcal P}}_{0}} = \begin{cases} H^{q}(M, Y) \quad \text{if} \quad q \quad \text{is} \quad \text{even} \\ H^{q}(M) \quad \text{if} \quad q \quad \text{is} \quad \text{odd} , \end{cases} \nonumber \\ H^{q}((\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{1}}(M), \hspace{0.1 cm} d)) & = & \operatorname{ker} \Delta^{q}_{{\widetilde {\mathcal P}}_{1}} = \begin{cases} H^{q}(M) \quad \text{if} \quad q \quad \text{is} \quad \text{even} \\ H^{q}(M, Y) \quad \text{if} \quad q \quad \text{is} \quad \text{odd} . \end{cases} \end{eqnarray} \end{lemma} \begin{proof} We denote by ${\mathcal H}_{\operatorname{rel}}^{q}(M) := \{ \phi = \phi_{1} + du \wedge \phi_{2} \in \Omega^{q}(M) \mid d \phi = d^{\ast} \phi = 0, \phi_{1}|_{Y} = 0 \}$ the space of harmonic $q$-forms satisfying the relative boundary condition. It is well known that ${\mathcal H}_{\operatorname{rel}}^{q}(M)$ is isomorphic to the singular cohomology $H^{q}(M, Y)$.
The Green theorem shows that $\operatorname{ker} \Delta^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}} = \{ \phi \in \Omega^{q}(M) \mid d \phi = d^{\ast} \phi = 0, {\mathcal P}_{-, {\mathcal L}_{0}} (\phi|_{Y}) = 0 \}$. We are going to show that $\operatorname{ker} \Delta^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}} = {\mathcal H}_{\operatorname{rel}}^{q}(M)$. Let $\phi = \phi_{1} + du \wedge \phi_{2} \in {\mathcal H}_{\operatorname{rel}}^{q}(M)$. Then by (\ref{E:2.1}) with the fact that $\phi_{1}|_{Y} = 0$, we have $\phi|_{Y} = du \wedge \left( d^{Y \ast} \psi_{1} + \psi_{2} \right)$, which shows that ${\mathcal P}_{-, {\mathcal L}_{0}} (\phi|_{Y}) = 0$. Hence, $\phi \in \operatorname{ker} \Delta^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}$. Conversely, let $\phi = \phi_{1} + du \wedge \phi_{2} \in \operatorname{ker} \Delta^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}$. By (\ref{E:2.1}) with the fact that ${\mathcal P}_{-, {\mathcal L}_{0}} (\phi|_{Y}) = 0$, we have $\phi|_{Y} = du \wedge \left( d^{Y \ast} \psi_{1} + \psi_{2} \right)$, which shows that $\phi \in {\mathcal H}_{\operatorname{rel}}^{q}(M)$. Other cases can be checked in the same way. This completes the proof of the lemma. \end{proof} \noindent In the next section, we discuss the Lefschetz fixed point formula on the complexes (\ref{E:2.7}) and (\ref{E:2.8}). \vspace{0.2 cm} \section{Lefschetz fixed point formula on the complex $(\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{0}/{\widetilde {\mathcal P}}_{1}}(M), \hspace{0.1 cm} d)$} \vspace{0.2 cm} We recall that $g^{M}$ is assumed to be a product metric near $Y$ and begin with the following definition. 
\begin{definition} \label{Definition:3.1} For a smooth map $f : M \rightarrow M$, $f$ is said to satisfy the Condition A if on some collar neighborhood $[0, \epsilon) \times Y$ of $Y$, $f : [0, \epsilon) \times Y \rightarrow M$ is expressed by $f (u, y) = ( c u, B(y) )$, where $c$ is a positive real number which is not equal to $1$ and $B : (Y, g^{Y}) \rightarrow (Y, g^{Y})$ is an isometry. \end{definition} \vspace{0.2 cm} \noindent {\it Remark} : If $f : M \rightarrow M$ satisfies the Condition A, then all the fixed points in $Y$ are attracting if $0 < c < 1$ and repelling if $c > 1$. \vspace{0.2 cm} \noindent If $f$ satisfies the Condition A, for $\omega = \omega_{1} + du \wedge \omega_{2}$ on a collar neighborhood of $Y$, $f^{\ast} \omega = B^{\ast} \omega_{1} + c du \wedge B^{\ast} \omega_{2}$. Since $B$ is an isometry, $B^{\ast}$ maps $\operatorname{Im} d^{Y}$ and $\operatorname{Im} (d^{Y})^{\ast}$ onto $\operatorname{Im} d^{Y}$ and $\operatorname{Im} (d^{Y})^{\ast}$, respectively. The following lemma shows that $f^{\ast}$ maps $\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{0}}(M)$ into $\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{0}}(M)$ and maps $\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{1}}(M)$ into $\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{1}}(M)$. \vspace{0.2 cm} \begin{lemma} \label{Lemma:3.1} $B^{\ast}$ maps ${\mathcal K}^{q}$ onto ${\mathcal K}^{q}$ and $\star_{Y} {\mathcal K}^{q}$ onto $\star_{Y} {\mathcal K}^{q}$. \end{lemma} \begin{proof} Since $B$ is an isometry, it is enough to show that $B^{\ast}$ maps ${\mathcal K}^{q}$ into ${\mathcal K}^{q}$. The following commutative diagrams show that for $[\omega] \in H^{q}(M)$, $B^{\ast} \iota^{\ast} \omega = \iota^{\ast} f^{\ast} \omega$. 
\begin{eqnarray*} \begin{CD} Y & @> \iota >> & M \\ @V B VV \circlearrowright & & @VV f V \\ Y & @> \iota >> & M \end{CD} \qquad \Longrightarrow \qquad \begin{CD} H^{q}(M) & @> \iota^{\ast} >> & H^{q}(Y) \\ @V f^{\ast} VV \circlearrowright & & @VV B^{\ast} V \\ H^{q}(M) & @> \iota^{\ast} >> & H^{q}(Y) \end{CD} \end{eqnarray*} \noindent This fact together with Lemma \ref{Lemma:2.2} implies the result. \end{proof} \vspace{0.2 cm} Since $f^{\ast}$ commutes with $d$, $f^{\ast} : (\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{0}/{\widetilde {\mathcal P}}_{1}}(M), \hspace{0.1 cm} d) \rightarrow (\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{0}/{\widetilde {\mathcal P}}_{1}}(M), \hspace{0.1 cm} d)$ is a cochain map. In this section we are going to discuss the Lefschetz fixed point formula on these complexes for smooth maps having only simple fixed points and satisfying the Condition A. \begin{definition} \label{Definition:3.2} Suppose that $f : M \rightarrow M$ is a smooth map satisfying the Condition A. We define the Lefschetz number of $f$ with respect to the complex $(\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{i}}(M), \hspace{0.1 cm} d)$ ($i = 0, 1$) by \begin{eqnarray*} L_{\widetilde{\mathcal P}_{i}}(f) & = & \sum_{q=0}^m (-1)^{q} \operatorname{Tr} \left( f^* : H^{q}((\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{i}}(M), \hspace{0.1 cm} d)) \rightarrow H^{q}((\Omega^{\bullet, \infty}_{{\widetilde {\mathcal P}}_{i}}(M), \hspace{0.1 cm} d)) \right). \end{eqnarray*} \end{definition} \vspace{0.2 cm} We are going to express $L_{\widetilde{\mathcal P}_{i}}(f)$ in terms of fixed points of $f$ and some additional data. We consider $L_{\widetilde{\mathcal P}_{0}}(f)$ first. 
Using Lemma \ref{Lemma:2.3} and the standard argument for the trace of a heat operator (see Lemma 1.10.1 in [6] or Theorem 4 in [3] for details), we have \begin{eqnarray} \label{E:3.1} L_{\widetilde{\mathcal P}_{0}}(f) & = & \sum_{q = \operatorname{even}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M, Y) \rightarrow H^{q}(M, Y) \right) \hspace{0.1 cm} - \hspace{0.1 cm} \sum_{q=\operatorname{odd}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M) \rightarrow H^{q}(M) \right) \\ & = & \sum_{q=0}^{m} (-1)^{q} \operatorname{Tr} \left( f^{\ast} e^{- t \Delta^{q}_{\widetilde{\mathcal P}_{0}}} \right) \hspace{0.1 cm} = \hspace{0.1 cm} \lim_{t \rightarrow 0} \sum_{q=0}^{m} (-1)^{q} \operatorname{Tr} \left( f^{\ast} e^{- t \Delta^{q}_{\widetilde{\mathcal P}_{0}}} \right) \nonumber \\ & = & \lim_{t \rightarrow 0} \left\{ \sum_{q= \operatorname{even}} \operatorname{Tr} \left( f^{\ast} e^{- t \Delta^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}} \right) \hspace{0.1 cm} - \hspace{0.1 cm} \sum_{q= \operatorname{odd}} \operatorname{Tr} \left( f^{\ast} e^{- t \Delta^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}} \right) \right\} \nonumber \\ & = & \lim_{t \rightarrow 0} \int_{M} \left\{ \sum_{q= \operatorname{even}} \operatorname{Tr} \left( {\mathcal T}_{q} (x) {\mathcal E}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}} (t, f(x), x) \right) \hspace{0.1 cm} - \hspace{0.1 cm} \sum_{q= \operatorname{odd}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}} (t, f(x), x) \right) \right\} d vol(x), \nonumber \end{eqnarray} \noindent where ${\mathcal T}_{q}(x) := \Lambda^{q}( (df (x))^{T} ) : \Lambda^{q} T^{\ast}_{f(x)} M \rightarrow \Lambda^{q} T^{\ast}_{x} M$ is the pull-back map mapping the fiber over $f(x)$ to the fiber over $x$ and ${\mathcal E}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}/{\mathcal P}_{+, {\mathcal L}_{1}}} (t, x, z)$ is the kernel of $e^{- t \Delta^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}/{\mathcal P}_{+, {\mathcal L}_{1}}}}$. 
We choose $\epsilon > 0$ such that $( [0, 2 \epsilon) \times Y ) \cap {\mathcal F}(f) = {\mathcal F}_{Y}(f)$. For each $x \in {\mathcal F}_{0}(f)$, choose a small open neighborhood $U_{x}$ of $x$ such that $U_{x} \cap ( [0, \epsilon) \times Y ) = \emptyset$. Putting $W:= M - \left( \cup_{x \in {\mathcal F}_{0}(f)} U_{x} \cup [0, \frac{\epsilon}{7}) \times Y \right)$, the standard argument (see Lemma 1.10.2 in [6] or Theorem 5 in [3] for details) shows that \begin{eqnarray} \label{E:3.2} \lim_{t \rightarrow 0} \int_{W} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}/{\mathcal P}_{+, {\mathcal L}_{1}}} (t, f(x), x) \right) d vol(x) & = & 0. \end{eqnarray} \noindent Hence, we can rewrite (\ref{E:3.1}) as follows. \begin{eqnarray} \label{E:3.3} L_{\widetilde{\mathcal P}_{0}}(f) & = & \lim_{t \rightarrow 0} \sum_{x \in {\mathcal F}_{0}(f)} \sum_{q= \operatorname{even}} \int_{U_{x}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}} (t, f(x), x) \right) d vol(x) \nonumber \\ & - & \lim_{t \rightarrow 0} \sum_{x \in {\mathcal F}_{0}(f)} \sum_{q= \operatorname{odd}} \int_{U_{x}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}} (t, f(x), x) \right) d vol(x) \nonumber \\ & + & \lim_{t \rightarrow 0} \sum_{q= \operatorname{even}} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}} (t, f(x), x) \right) du \hspace{0.1 cm} d vol(y) \nonumber \\ & - & \lim_{t \rightarrow 0} \sum_{q= \operatorname{odd}} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}} (t, f(x), x) \right) du \hspace{0.1 cm} d vol(y) . 
\end{eqnarray} We next construct the parametrix ${\mathcal Q}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}/{\mathcal P}_{+, {\mathcal L}_{1}}}(t, x, z)$ of the heat kernel ${\mathcal E}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}/{\mathcal P}_{+, {\mathcal L}_{1}}} (t, x, z)$ by combining the interior contribution and the boundary contribution. We denote by ${\widetilde M}$ the closed double of $M$, {\it i.e.}, ${\widetilde M} = M \cup_{Y} M$ and extend the Laplacian $\Delta^{q}$ on $M$ to the Laplacian on ${\widetilde M}$, denoted by ${\widetilde \Delta}^{q}$. Let $\widetilde{\mathcal E}^{q}(t, x, z)$ be the kernel of the heat operator $e^{-t {\widetilde \Delta}^{q}}$. It is well known (for example, p.225 in [4]) that \begin{eqnarray} \label{E:3.4} | \widetilde{\mathcal E}^{q}(t, x, z) | \hspace{0.1 cm} \leq \hspace{0.1 cm} c_{1} t^{- \frac{m}{2}} e^{- c_{2} \frac{d(x, z)^{2}}{t}} , \end{eqnarray} \noindent where $c_{i}$'s are some positive constants. Let $N_{\infty} := [0, \infty) \times Y$ be a half-infinite cylinder and $\Delta^{q}_{N_{\infty}} := - \partial_{u}^{2} + \left( \begin{array}{clcr} \Delta_{Y}^{q} \\ \Delta^{q-1}_{Y} \end{array} \right) $ be the Laplacian acting on $q$-forms on $N_{\infty}$. We decompose $\Omega^{q}(Y)$ by $\Omega^{q}(Y) = \Omega^{q}_{-}(Y) \oplus \Omega^{q}_{+}(Y)$, where \begin{eqnarray} \label{E:3.5} \Omega^{q}_{-}(Y) & := & \left( \operatorname{Im} d^{Y} \oplus {\mathcal K} \right) \cap \Omega^{q}(Y), \qquad \Omega^{q}_{+}(Y) \hspace{0.1 cm} := \hspace{0.1 cm} \left( \operatorname{Im} (d^{Y})^{\ast} \oplus \star_{Y} {\mathcal K} \right) \cap \Omega^{q}(Y). \end{eqnarray} \noindent We denote by $\{ \phi_{q, j} \mid j = 1, 2, \cdots \}$ and $\{ \psi_{q, j} \mid j = 1, 2, \cdots \}$ the orthonormal bases of $\Omega^{q}_{-}(Y)$ and $\Omega^{q}_{+}(Y)$ consisting of eigenforms of $\Delta_{Y}^{q}$ with eigenvalues $\{ \lambda_{q, j} \mid j = 1, 2, \cdots \}$ and $\{ \mu_{q, j} \mid j = 1, 2, \cdots \}$, respectively.
Then the heat kernels ${\mathcal E}^{\operatorname{cyl}, q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}$ and ${\mathcal E}^{\operatorname{cyl}, q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}$ of $\Delta^{q}_{N_{\infty}}$ with respect to the boundary conditions ${\mathcal P}_{-, {\mathcal L}_{0}}$ and ${\mathcal P}_{+, {\mathcal L}_{1}}$ on $\{ 0 \} \times Y$ are given as follows (cf. p.226 in [4]). \begin{eqnarray} \label{E:3.6} {\mathcal E}^{\operatorname{cyl}, q}_{{\mathcal P}_{-, {\mathcal L}_{0}}} (t, (u, y), (v, y^{\prime})) & = & \sum_{j=1}^{\infty} \frac{e^{-t \lambda_{q, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(u - v)^{2}}{4t}} - e^{- \frac{(u + v)^{2}}{4t}} \right) \phi_{q, j}(y) \otimes \phi^{\ast}_{q, j}(y^{\prime}) \\ & + & \sum_{j=1}^{\infty} \frac{e^{-t \mu_{q, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(u - v)^{2}}{4t}} + e^{- \frac{(u + v)^{2}}{4t}} \right) \psi_{q, j}(y) \otimes \psi^{\ast}_{q, j}(y^{\prime}) \nonumber \\ & + & \sum_{j=1}^{\infty} \frac{e^{-t \lambda_{q-1, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(u - v)^{2}}{4t}} - e^{- \frac{(u + v)^{2}}{4t}} \right) (du \wedge \phi_{q-1, j}(y)) \otimes (dv \wedge \phi_{q-1, j}(y^{\prime}))^{\ast} \nonumber \\ & + & \sum_{j=1}^{\infty} \frac{e^{-t \mu_{q-1, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(u - v)^{2}}{4t}} + e^{- \frac{(u + v)^{2}}{4t}} \right) (du \wedge \psi_{q-1, j}(y)) \otimes (dv \wedge \psi_{q-1, j}(y^{\prime}))^{\ast}, \nonumber \end{eqnarray} \begin{eqnarray} \label{E:3.7} {\mathcal E}^{\operatorname{cyl}, q}_{{\mathcal P}_{+, {\mathcal L}_{1}}} (t, (u, y), (v, y^{\prime})) & = & \sum_{j=1}^{\infty} \frac{e^{-t \lambda_{q, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(u - v)^{2}}{4t}} + e^{- \frac{(u + v)^{2}}{4t}} \right) \phi_{q, j}(y) \otimes \phi^{\ast}_{q, j}(y^{\prime}) \\ & + & \sum_{j=1}^{\infty} \frac{e^{-t \mu_{q, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(u - v)^{2}}{4t}} - e^{- \frac{(u + v)^{2}}{4t}} \right) \psi_{q, j}(y) \otimes \psi^{\ast}_{q, j}(y^{\prime}) \nonumber \\ & + & 
\sum_{j=1}^{\infty} \frac{e^{-t \lambda_{q-1, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(u - v)^{2}}{4t}} + e^{- \frac{(u + v)^{2}}{4t}} \right) (du \wedge \phi_{q-1, j}(y)) \otimes (dv \wedge \phi_{q-1, j}(y^{\prime}))^{\ast} \nonumber \\ & + & \sum_{j=1}^{\infty} \frac{e^{-t \mu_{q-1, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(u - v)^{2}}{4t}} - e^{- \frac{(u + v)^{2}}{4t}} \right) (du \wedge \psi_{q-1, j}(y)) \otimes (dv \wedge \psi_{q-1, j}(y^{\prime}))^{\ast}. \nonumber \end{eqnarray} Let $\rho(a, b)$ be a smooth increasing function of a real variable such that \[ \rho(a, b) (u) = \left\{ \begin{array}{ll} 0 & \mbox{for $u \leq a$} \\ 1 & \mbox{for $u \geq b$} \hspace{0.1 cm}. \end{array} \right. \] We put \begin{eqnarray*} \phi_{1} := 1 - \rho(\frac{5 \epsilon}{7}, \frac{6 \epsilon}{7}), \quad \psi_{1} := 1 - \rho(\frac{3 \epsilon}{7}, \frac{4 \epsilon}{7}), \quad \phi_{2} := \rho(\frac{\epsilon}{7}, \frac{2 \epsilon}{7}), \quad \psi_{2} := \rho(\frac{3 \epsilon}{7}, \frac{4 \epsilon}{7}), \end{eqnarray*} \noindent and \begin{eqnarray} \label{E:3.8} {\mathcal Q}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(t, (u, y), (v, y^{\prime})) & = & \phi_{1}(u) \mathcal{E}^{\operatorname{cyl}, q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(t, (u, y), (v, y^{\prime})) \psi_{1}(v) + \phi_{2}(u) \widetilde{\mathcal E}^{q}(t, (u, y), (v, y^{\prime})) \psi_{2}(v), \nonumber \\ {\mathcal Q}^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(t, (u, y), (v, y^{\prime})) & = & \phi_{1}(u) \mathcal{E}^{\operatorname{cyl}, q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(t, (u, y), (v, y^{\prime})) \psi_{1}(v) + \phi_{2}(u) \widetilde{\mathcal E}^{q}(t, (u, y), (v, y^{\prime})) \psi_{2}(v). \end{eqnarray} \noindent Then, ${\mathcal Q}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}$ and ${\mathcal Q}^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}$ are parametrices for the kernels of $e^{-t \Delta^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}}$ and $e^{-t \Delta^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}}$, respectively. 
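Directly from the definition of $\rho$, the cutoffs satisfy

```latex
\psi_{1} + \psi_{2} \;\equiv\; 1, \qquad
\phi_{j} \;\equiv\; 1 \ \ \text{on a neighborhood of} \ \operatorname{supp} \psi_{j} \quad (j = 1, 2),
```

so ${\mathcal Q}^{q}_{\alpha}(t, x, z)$ tends to the delta function along the diagonal as $t \rightarrow 0$, while the derivatives of $\phi_{1}$ and $\phi_{2}$ are supported at distance at least $\frac{\epsilon}{7}$ from $\operatorname{supp} \psi_{1}$ and $\operatorname{supp} \psi_{2}$, respectively. This separation is the source of the exponentially small error in (\ref{E:3.9}).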
The standard computation using (\ref{E:3.4}), (\ref{E:3.6}) and (\ref{E:3.7}) (see [2], [4] for details) shows that for $0 < t \leq 1$ and $\alpha = {\mathcal P}_{-, {\mathcal L}_{0}}$ or ${\mathcal P}_{+, {\mathcal L}_{1}}$, there exist some positive constants $c_{1}$ and $c_{2}$ such that \begin{equation} \label{E:3.9} | {\mathcal E}^{q}_{\alpha} (t, (u, y), (v, y^{\prime})) - {\mathcal Q}^{q}_{\alpha} (t, (u, y), (v, y^{\prime})) | \leq c_{1} e^{- \frac{c_{2}}{t}}, \end{equation} \noindent which shows that \begin{eqnarray} \label{E:3.10} \lim_{t \rightarrow 0} \left( {\mathcal E}^{q}_{\alpha} (t, (u, y), (v, y^{\prime})) - {\mathcal Q}^{q}_{\alpha} (t, (u, y), (v, y^{\prime})) \right) & = & 0. \end{eqnarray} \noindent Hence, in view of (\ref{E:3.3}) with $x \in {\mathcal F}_{0}(f)$, we have \begin{eqnarray} \label{E:3.11} \lim_{t \rightarrow 0} \int_{U_{x}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{\alpha}(t, f(x), x) \right) d vol(x) & = & \lim_{t \rightarrow 0} \int_{U_{x}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal Q}^{q}_{\alpha}(t, f(x), x) \right) d vol(x) \nonumber \\ & = & \lim_{t \rightarrow 0} \int_{U_{x}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\widetilde {\mathcal E}}^{q}(t, f(x), x) \right) d vol(x), \end{eqnarray} \noindent which yields the following equalities. 
\begin{eqnarray} \label{E:3.12} & & \lim_{t \rightarrow 0} \sum_{x \in {\mathcal F}_{0}(f)} \sum_{q= \operatorname{even}} \int_{U_{x}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{{\mathcal P}_{-, {\mathcal L}_{0}}}(t, f(x), x) \right) d vol(x) \\ & & - \lim_{t \rightarrow 0} \sum_{x \in {\mathcal F}_{0}(f)} \sum_{q= \operatorname{odd}} \int_{U_{x}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{{\mathcal P}_{+, {\mathcal L}_{1}}}(t, f(x), x) \right) d vol(x) \nonumber \\ & = & \lim_{t \rightarrow 0} \sum_{x \in {\mathcal F}_{0}(f)} \sum_{q=0}^{m} (-1)^{q} \int_{U_{x}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) \widetilde{\mathcal E}^{q} (t, f(x), x) \right) d vol(x) \hspace{0.1 cm} = \hspace{0.1 cm} \sum_{x \in {\mathcal F}_{0}(f)} \operatorname{sign} \operatorname{det} \left( I - df(x) \right), \nonumber \end{eqnarray} \noindent where we refer to Theorem 1.10.4 in [6] or Theorem 10.12 in [10] for the proof of the last equality. We next analyze the boundary contribution. For $\alpha = {\mathcal P}_{-, {\mathcal L}_{0}}$ or ${\mathcal P}_{+, {\mathcal L}_{1}}$, by (\ref{E:3.10}) we have \begin{eqnarray} \label{E:3.13} & & \lim_{t \rightarrow 0} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{q}_{\alpha}(t, f(x), x) \right) du \hspace{0.1 cm} d vol(y) \hspace{0.1 cm} = \hspace{0.1 cm} \lim_{t \rightarrow 0} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal Q}^{q}_{\alpha}(t, f(x), x) \right) du \hspace{0.1 cm} d vol(y) \nonumber \\ & = & \lim_{t \rightarrow 0} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \operatorname{Tr} \left( {\mathcal T}_{q}(x) {\mathcal E}^{\operatorname{cyl}, q}_{\alpha}(t, f(x), x) \right) du \hspace{0.1 cm} d vol(y). \end{eqnarray} \noindent We note that on $[0, \frac{\epsilon}{7}) \times Y$, $f$ is assumed to be $f(u, y) = ( c \hspace{0.1 cm} u, B(y))$, where $B : (Y, g^{Y}) \rightarrow (Y, g^{Y})$ is an isometry. 
Let us consider the case of $\alpha = {\mathcal P}_{-, {\mathcal L}_{0}}$. We can treat the case of $\alpha = {\mathcal P}_{+, {\mathcal L}_{1}}$ in the same way. Put $x = ( u, y)$ and ${\frak B}_{q}(y) := \Lambda^{q} \left( ( d^{Y}B (y) )^{T} \right)$. Since ${\mathcal T}_{q} (u, y) \phi_{q, j} (B(y)) = {\frak B}_{q}(y) \phi_{q, j} (B(y))$, we have \begin{eqnarray} \label{E:3.14} & & \lim_{t \rightarrow 0} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \sum_{j=1}^{\infty} \frac{e^{-t \lambda_{q, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(c - 1)^{2} u^{2}}{4t}} - e^{- \frac{(c + 1)^{2} u^{2}}{4t}} \right) \langle {\frak B}_{q}(y) \phi_{q, j}(B(y)), \hspace{0.1 cm} \phi_{q, j}(y) \rangle \hspace{0.1 cm} du \hspace{0.1 cm} d vol(y) \nonumber \\ & = & \lim_{t \rightarrow 0} \frac{1}{\sqrt{\pi}} \int_{0}^{\frac{\epsilon}{14 \sqrt{t}}} \left( e^{- (c - 1)^{2} x^{2}} - e^{- (c + 1)^{2} x^{2}} \right) dx \cdot \lim_{t \rightarrow 0} \int_{Y} \sum_{j=1}^{\infty} e^{-t \lambda_{q, j}} \langle {\frak B}_{q}(y) \phi_{q, j}(B(y)), \phi_{q, j}(y) \rangle d vol(y) \nonumber \\ & = & \frac{1}{2} \left( \frac{1}{| 1 - c |} - \frac{1}{1 + c} \right) \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}}|_{\Omega^{q}_{-}(Y)} \right), \end{eqnarray} \vspace{0.2 cm} \noindent where $\langle \hspace{0.1 cm} , \hspace{0.1 cm} \rangle$ is the pointwise inner product of differential forms induced by the metric $g_{Y}$. 
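The first limit in the middle line of (\ref{E:3.14}) is an elementary Gaussian integral. After the substitution $x = u / (2 \sqrt{t})$ the upper limit $\frac{\epsilon}{14 \sqrt{t}}$ tends to $\infty$ as $t \rightarrow 0$, and for $a \neq 0$,

```latex
\frac{1}{\sqrt{\pi}} \int_{0}^{\infty} e^{- a^{2} x^{2}} \, dx \;=\; \frac{1}{2 |a|},
\qquad \text{so} \qquad
\frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \left( e^{- (c - 1)^{2} x^{2}} - e^{- (c + 1)^{2} x^{2}} \right) dx
\;=\; \frac{1}{2} \left( \frac{1}{| 1 - c |} - \frac{1}{1 + c} \right),
```

which is exactly the factor appearing in the last line of (\ref{E:3.14}) (here $c > 0$, as the expression $\frac{1}{1 + c}$ presupposes).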
Similarly, since ${\mathcal T}_{q}(u, y) \left( du \wedge \phi_{q-1, j} (B(y)) \right) = c \hspace{0.1 cm} du \wedge \left( {\frak B}_{q-1}(y) \phi_{q-1, j}(B(y)) \right)$, we have \begin{eqnarray} \label{E:3.15} & & \lim_{t \rightarrow 0} \hspace{0.1 cm} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \sum_{j=1}^{\infty} \frac{e^{-t \lambda_{q-1, j}}}{\sqrt{4 \pi t}} \left( e^{- \frac{(c - 1)^{2} u^{2}}{4t}} - e^{- \frac{(c + 1)^{2} u^{2}}{4t}} \right) \times \nonumber \\ & & \hspace{0.3 cm} \langle c du \wedge {\frak B}_{q-1}(y) \phi_{q-1, j} (B(y)), \hspace{0.1 cm} du \wedge \phi_{q-1, j}(y) \rangle \hspace{0.1 cm} du \hspace{0.1 cm} d vol(y) \nonumber \\ & = & \frac{1}{2} \left( \frac{c}{| 1 - c |} - \frac{c}{1 + c} \right) \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q-1}}|_{\Omega^{q-1}_{-}(Y)} \right). \end{eqnarray} \vspace{0.2 cm} \noindent The same computation using (\ref{E:3.6}) shows that \begin{eqnarray} \label{E:3.16} & & \lim_{t \rightarrow 0} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \operatorname{Tr} \left( {\mathcal T}_{q}(u, y) {\mathcal E}^{\operatorname{cyl}, q}_{{\mathcal P}_{-, {\mathcal L}_{0}}} (t, f(u, y), (u, y)) \right) du \hspace{0.1 cm} d vol(y) \\ & = & \frac{1}{2} \left( \frac{1}{| 1 - c |} - \frac{1}{1 + c} \right) \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}}|_{\Omega^{q}_{-}(Y)} \right) \hspace{0.1 cm} + \hspace{0.1 cm} \frac{1}{2} \left( \frac{1}{| 1 - c |} + \frac{1}{1 + c} \right) \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}}|_{\Omega^{q}_{+}(Y)} \right) \nonumber \\ & + & \frac{1}{2} \left( \frac{c}{| 1 - c |} - \frac{c}{1 + c} \right) \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q-1}}|_{\Omega^{q-1}_{-}(Y)} \right) \nonumber \\ & + & \frac{1}{2} \left( \frac{c}{| 1 - c |} + \frac{c}{1 + c} \right) \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t 
\Delta_{Y}^{q-1}}|_{\Omega^{q-1}_{+}(Y)} \right) \nonumber \\ & = & \frac{1}{2 | 1 - c |} \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}} \right) \hspace{0.1 cm} + \hspace{0.1 cm} \frac{c}{2 | 1 - c |} \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q-1}} \right) \nonumber \\ & & \hspace{0.1 cm} + \hspace{0.1 cm} \frac{1}{2 (1 + c)} \cdot \lim_{t \rightarrow 0} \left( \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}}|_{\Omega^{q}_{+}(Y)} \right) \hspace{0.1 cm} - \hspace{0.1 cm} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}}|_{\Omega^{q}_{-}(Y)} \right) \right) \nonumber \\ & & \hspace{0.1 cm} + \hspace{0.1 cm} \frac{c}{2 (1 + c)} \cdot \lim_{t \rightarrow 0} \left( \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q-1}}|_{\Omega^{q-1}_{+}(Y)} \right) \hspace{0.1 cm} - \hspace{0.1 cm} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q-1}}|_{\Omega^{q-1}_{-}(Y)} \right) \right) . \nonumber \end{eqnarray} \noindent Similarly, using (\ref{E:3.7}), we have \begin{eqnarray} \label{E:3.17} & & \lim_{t \rightarrow 0} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \operatorname{Tr} \left( {\mathcal T}_{q}(u, y) {\mathcal E}^{\operatorname{cyl}, q}_{{\mathcal P}_{+, {\mathcal L}_{1}}} (t, f(u, y), (u, y)) \right) du \hspace{0.1 cm} d vol(y) \\ & = & \frac{1}{2 | 1 - c |} \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}} \right) \hspace{0.1 cm} + \hspace{0.1 cm} \frac{c}{2 | 1 - c |} \cdot \lim_{t \rightarrow 0} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q-1}} \right) \nonumber \\ & & \hspace{0.1 cm} - \hspace{0.1 cm} \frac{1}{2 (1 + c)} \cdot \lim_{t \rightarrow 0} \left( \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}}|_{\Omega^{q}_{+}(Y)} \right) \hspace{0.1 cm} - \hspace{0.1 cm} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}}|_{\Omega^{q}_{-}(Y)} \right) \right) \nonumber \\ & & \hspace{0.1 cm} - \hspace{0.1 cm} \frac{c}{2 (1 + c)} 
\cdot \lim_{t \rightarrow 0} \left( \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q-1}}|_{\Omega^{q-1}_{+}(Y)} \right) \hspace{0.1 cm} - \hspace{0.1 cm} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q-1}}|_{\Omega^{q-1}_{-}(Y)} \right) \right) . \nonumber \end{eqnarray} \vspace{0.2 cm} \noindent Finally, combining (\ref{E:3.16}) and (\ref{E:3.17}), we have \begin{eqnarray} \label{E:3.18} & & \lim_{t \rightarrow 0} \sum_{q = \operatorname{even}} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \operatorname{Tr} \left( {\mathcal T}_{q}(u, y) {\mathcal E}^{\operatorname{cyl}, q}_{{\mathcal P}_{-, {\mathcal L}_{0}}} (t, f(u, y), (u, y)) \right) du \hspace{0.1 cm} d vol(y) \nonumber \\ & & \hspace{0.5 cm} - \hspace{0.1 cm} \lim_{t \rightarrow 0} \sum_{q = \operatorname{odd}} \int_{Y} \int_{0}^{\frac{\epsilon}{7}} \operatorname{Tr} \left( {\mathcal T}_{q}(u, y) {\mathcal E}^{\operatorname{cyl}, q}_{{\mathcal P}_{+, {\mathcal L}_{1}}} (t, f(u, y), (u, y)) \right) du \hspace{0.1 cm} d vol(y) \nonumber \\ & = & \frac{1-c}{2 | 1 - c | } \cdot \lim_{t \rightarrow 0} \sum_{q=0}^{m-1} (-1)^{q} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}} \right) \nonumber \\ & & \hspace{0.5 cm} + \hspace{0.1 cm} \lim_{t \rightarrow 0} \frac{1}{2} \sum_{q=0}^{m-1} \left( \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}}|_{\Omega^{q}_{+}(Y)} \right) \hspace{0.1 cm} - \hspace{0.1 cm} \operatorname{Tr} \left( B^{\ast} e^{-t \Delta_{Y}^{q}}|_{\Omega^{q}_{-}(Y)} \right) \right) . 
\end{eqnarray} \noindent Using (\ref{E:3.5}) and the following commutative diagram \begin{equation*} \begin{CD} \operatorname{Im} (d^{Y})^{\ast} \cap \Omega^{q}(Y) @> d^{Y} >> \operatorname{Im} d^{Y} \cap \Omega^{q+1}(Y) \\ @V{B^{\ast} e^{-t \Delta_{Y}}}VV @VV{B^{\ast} e^{-t \Delta_{Y}}}V \\ \operatorname{Im} (d^{Y})^{\ast} \cap \Omega^{q}(Y) @> d^{Y} >> \operatorname{Im} d^{Y} \cap \Omega^{q+1}(Y) \end{CD} \end{equation*} \noindent together with the fact that $\operatorname{sign} \operatorname{det} ( I - df(y) ) = \operatorname{sign} ( 1 - c ) \cdot \operatorname{sign} \operatorname{det} ( I - df_Y(y) )$, we can rewrite (\ref{E:3.18}) as \begin{eqnarray} \label{E:3.19} (\ref{E:3.18}) & = & \frac{1}{2} \sum_{y \in {\mathcal F}_{Y}(f)} \operatorname{sign} \operatorname{det} ( I - df(y) ) + \frac{1}{2} \left\{ \operatorname{Tr} \left( B^{\ast} : \left( \star_{Y} {\mathcal K} \right) \rightarrow \left( \star_{Y} {\mathcal K} \right) \right) - \operatorname{Tr} \left( B^{\ast} : {\mathcal K} \rightarrow {\mathcal K} \right) \right\}. \end{eqnarray} \noindent Furthermore, $\frac{1}{2} \left\{ \operatorname{Tr} \left( B^{\ast} : \left( \star_{Y} {\mathcal K} \right) \rightarrow \left( \star_{Y} {\mathcal K} \right) \right) - \operatorname{Tr} \left( B^{\ast} : {\mathcal K} \rightarrow {\mathcal K} \right) \right\}$ is equal to $0$ if $B : (Y, g^{Y}) \rightarrow (Y, g^{Y})$ is orientation preserving and is equal to $\hspace{0.1 cm} - \operatorname{Tr} \left( B^{\ast} : {\mathcal K} \rightarrow {\mathcal K} \right)$ if $B$ is orientation reversing. We can compute $L_{\widetilde{\mathcal P}_{1}}(f)$ in the same way. Summarizing the above arguments with Lemma \ref{Lemma:2.2}, we have the following result, which is the main result of this paper. \begin{theorem} \label{Theorem:3.3} Let $(M, Y, g^{M})$ be an $m$-dimensional compact oriented Riemannian manifold with boundary $Y$, and assume that $g^{M}$ is a product metric near $Y$. 
Suppose that $f : M \rightarrow M$ is a smooth map having only simple fixed points and satisfying Condition A. Then the following equalities hold. \begin{eqnarray*} & (1) & \sum_{q= \operatorname{even}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M, Y) \rightarrow H^{q}(M, Y) \right) - \sum_{q= \operatorname{odd}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M) \rightarrow H^{q}(M) \right) \\ & & \hspace{0.5 cm} = \hspace{0.1 cm} \sum_{x \in {\mathcal F}_{0}(f)} \operatorname{sign} \operatorname{det} ( I - df (x) ) + \frac{1}{2} \sum_{y \in {\mathcal F}_{Y}(f)} \operatorname{sign} \operatorname{det} ( I - df(y) ) \hspace{0.1 cm} - \hspace{0.1 cm} K_{0} \\ & (2) & \sum_{q= \operatorname{even}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M) \rightarrow H^{q}(M) \right) - \sum_{q= \operatorname{odd}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M, Y) \rightarrow H^{q}(M, Y) \right) \\ & & \hspace{0.5 cm} = \hspace{0.1 cm} \sum_{x \in {\mathcal F}_{0}(f)} \operatorname{sign} \operatorname{det} ( I - df (x) ) + \frac{1}{2} \sum_{y \in {\mathcal F}_{Y}(f)} \operatorname{sign} \operatorname{det} ( I - df(y) ) \hspace{0.1 cm} + \hspace{0.1 cm} K_{0}, \end{eqnarray*} \noindent where $K_{0} = 0$ if $B$ is orientation preserving and $K_{0} = \operatorname{Tr} \left( B^{\ast} : \operatorname{Im} \iota^{\ast} \rightarrow \operatorname{Im} \iota^{\ast} \right)$ with $\iota^{\ast} : H^{\bullet}(M) \rightarrow H^{\bullet}(Y)$ if $B$ is orientation reversing. \end{theorem} \vspace{0.2 cm} \noindent Combining this result with (\ref{E:1.2}), we obtain the following corollary. \begin{corollary} \label{Corollary:3.5} We adopt the same assumptions as in Theorem \ref{Theorem:3.3}. 
Then: \begin{eqnarray*} & (1) & \sum_{q= \operatorname{even}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M) \rightarrow H^{q}(M) \right) - \sum_{q= \operatorname{even}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M, Y) \rightarrow H^{q}(M, Y) \right) \\ & & \hspace{0.5 cm} = \hspace{0.1 cm} \frac{1}{2} \sum_{y \in {\mathcal F}^{+}_{Y}(f)} \operatorname{sign} \operatorname{det} ( I - df (y) ) - \frac{1}{2} \sum_{y \in {\mathcal F}^{-}_{Y}(f)} \operatorname{sign} \operatorname{det} ( I - df(y) ) \hspace{0.1 cm} + \hspace{0.1 cm} K_{0}, \\ & (2) & \sum_{q= \operatorname{odd}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M) \rightarrow H^{q}(M) \right) - \sum_{q= \operatorname{odd}} \operatorname{Tr} \left( f^{\ast} : H^{q}(M, Y) \rightarrow H^{q}(M, Y) \right) \\ & & \hspace{0.5 cm} = \hspace{0.1 cm} - \frac{1}{2} \sum_{y \in {\mathcal F}^{+}_{Y}(f)} \operatorname{sign} \operatorname{det} ( I - df (y) ) + \frac{1}{2} \sum_{y \in {\mathcal F}^{-}_{Y}(f)} \operatorname{sign} \operatorname{det} ( I - df(y) ) \hspace{0.1 cm} + \hspace{0.1 cm} K_{0}, \end{eqnarray*} \noindent where either ${\mathcal F}^{+}_{Y}(f) = \emptyset$ or ${\mathcal F}^{-}_{Y}(f) = \emptyset$, depending on $c$ in Condition A. \end{corollary}
\section{Introduction} In Paper\,I \citep{Conn2018} we demonstrated that the ultra-faint dwarf galaxy candidate Tucana V, also known as DES J2337-6316~\citep{Drlica-Wagner2015}, does not have the stellar concentration typical of an ultra-faint star cluster or dwarf galaxy. Our results based on deep stellar photometry led to the conclusion that Tucana V, originally detected at a significance level of $\sigma=8.0$, must be either the debris of a completely tidally disrupted star cluster or an excess of stars in the halo of the Small Magellanic Cloud. In regard to the search for ultra-faint stellar systems in the Milky Way halo, we propose that Tucana V is an example of a false-positive detection. This raises concern that other candidates reported in the literature and taken at face value by other researchers are in fact false-positives too. We highlighted the region around Tucana V in the size-luminosity plane as a ``Trough of Uncertainty'' regarding these types of objects. The other two known objects which reside in that region are Draco II \citep{Laevens2015b} and Cetus II \citep[DESJ0117-1725]{Drlica-Wagner2015}. Cetus II is the focus of this paper. Cetus II has a reported heliocentric distance of $d_\odot=30\pm3$\,kpc, a half-light radius $r_h = {1.9}_{-0.5}^{+1.0}$ arcmin and a total luminosity of $M_V = 0.00\pm0.68$ \citep{Drlica-Wagner2015}. It also has the lowest detection significance ($\sigma=5.5$) of all objects reported by \citet{Drlica-Wagner2015} as estimated from their stellar density map search method. These authors further noted that Cetus II should be treated with caution due to inter-CCD gaps in the DES\footnote{Dark Energy Survey, http://des.ncsa.illinois.edu/releases/sva1D} data available at that time. However, if confirmed, it would be the least luminous galaxy known to date. In this paper we seek to better understand the phenomenon of Cetus II and refine the object's properties by obtaining deep photometry with the GMOS-S instrument. 
We also want to determine whether its location in the Trough of Uncertainty reveals it as another false-positive detection or a true ultra-faint dwarf galaxy candidate. The rapid increase in the number of known Milky Way satellites over the last couple of years \citep{Balbinot2013, Belokurov2014,Laevens2014, Bechtol2015, Drlica-Wagner2015, Kim2, Kim2015b, KimJerjen2015a, KimJerjen2015b, Koposov2015, Laevens2015a, Laevens2015b, Martin2015, Kim2016, Luque2016, Martin2016b, Torrealba2016a, Torrealba2016b, Koposov2017} has important implications for our understanding of galaxy formation and near-field cosmology. In particular, the newest discoveries are some of the smallest bound stellar systems and thus constitute prime laboratories to study star formation on the smallest scales, in pure baryonic and dark matter dominated environments. At the ultra-faint end of the satellite galaxy luminosity function, there are still relatively few objects, which receive a high statistical weight in studies that correct observed satellite counts for detection efficiency. Consequently, any misclassification of an ultra-faint dwarf galaxy can skew the results significantly. Hence, it is imperative to know the true nature of every single object. In $\S$\ref{sec:observations} we present the details of our follow-up observations of Cetus II, the photometric calibration procedure, the artificial star experiment and the colour-magnitude diagram for all the stars we detected in the Cetus II field. In $\S$\ref{sec:CetIIpop} we revisit the adopted procedure for determining the age, metallicity and distance of the Cetus II stellar population and present the results. In $\S$\ref{sec:discussion} we discuss our findings and draw conclusions about the nature of Cetus II in $\S$\ref{sec:conclusion}. 
\begin{figure*} \begin{center} \includegraphics[width=1.0\hsize]{Figure1} \caption{All-sky view in Galactic coordinates showing the distribution of known Milky Way satellite dwarf galaxies (filled circles) and star clusters (open circles). Cetus II (yellow dot) is found close to the Galactic South pole at: $l=156\fdg47$, $b=-78\fdg53$, superimposed on the Sagittarius stellar tidal stream (contours) and close to the neutral hydrogen gas of the Magellanic Stream (grey scale image). The star density contours of the Sagittarius stream are inferred from the \citet{LM2010} tidal debris model, which adopts a triaxial dark matter halo for the Milky Way.}\label{fig:MWS} \end{center} \end{figure*} \section{Observations and Data Reduction}\label{sec:observations} \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{Figure2.pdf} \caption{False colour RGB image of the GMOS-S field centred on Cetus II, made using {\sc Aladin Sky Atlas v8.040}. The $g$ and $r$-band co-added images were used for the blue and red channel, respectively. Owing to the exquisite seeing, a large number of background galaxies are recognizable. However, no stellar overdensity is visible in the field. 
The bar in the lower left corner has a length of 1 arcminute.} \label{fig:CetII} \end{center} \end{figure} \begin{table*} \caption{Observations}\label{table:data} \centering \begin{tabular}{lrccccccc} \hline Field & Right Ascension & Declination & Position Angle & Filter & Observation & Airmass & Exposure & Seeing\\ & (deg, J2000) & (deg, J2000) & (deg) & & Date& & (sec) & (\arcsec)\\ \hline\hline Cetus II & 19.4667& $-$17.425& 90&g\_G0325 & 2017-09-17& 1.04 - 1.06& $1\times 60$, $3\times 600$ & 0.60\\ (DES J0117-1725) & & & 90 & r\_G0326 & 2017-09-17 & 1.07 - 1.12 & $1\times 60$, $3\times 600$ & 0.54 \\ \hline \end{tabular} \end{table*} The imaging data presented here were obtained with the Gemini Multi-Object Spectrograph South (GMOS-S) at the 8m-class Gemini South Observatory through Program ID: GS-2017B-Q-40. The observing conditions required for the observations to be scheduled, following the Gemini Observatory standards, were dark\footnote{SB50 - Sky Brightness 50$^{th}$ percentile}, clear skies\footnote{CC50 - Cloud Cover 50$^{th}$ percentile} and excellent seeing\footnote{IQ20 - Image Quality 20$^{th}$ percentile}. On average we achieved $0\farcs60$ in the $g-$band (g\_G0325) and $0\farcs54$ in the $r-$band (r\_G0326) for the night of September 17, 2017 (see Table~\ref{table:data}). The superb seeing obtained in IQ20 conditions allowed us to utilize the $1\times1$ binning mode of GMOS-S, which affords a pixel scale of $0\farcs08$. The GMOS-S field of view is $5\farcm5\times 5\farcm5$ and our observing strategy involved a short 60\,sec exposure centred on the target and three dithered exposures of 600\,sec each. We employed the {\sc theli} pipeline \citep{2013ApJS..209...21S} to perform the basic data reduction of creating a master bias and master twilight flats, bias subtraction and flat fielding, astrometry and co-addition. 
To generate the object catalogues we used the Point Spread Function (PSF) photometry package {\sc dolphot} \citep{2000PASP..112.1383D} on the combined stacked images. {\sc Dolphot} has approximately 80 parameters which need to be set to process the data; the majority of these relate to the fundamental details of the files being processed: filenames, filters, offsets between frames in pixels, initial estimates of the seeing in pixels, exposure time, read noise, bad pixel value, saturation value, airmass, etc. We chose to use a point spread function based on a linear Gaussian + Lorentzian solution and we set the detection threshold to $2.5\sigma$ above the noise. \subsection{Photometric Calibration}\label{sec:calibration} \begin{table} \caption{Photometric Calibration Results} \begin{center} \begin{tabular}{lcc} \hline & $g$ band & $r$ band \\ \hline\hline Colour term $(g - r)$\footnote{from \citet{Conn2018}} & $+0.026^{+0.045}_{-0.046}$& $-0.059^{+0.042}_{-0.041}$\\ Cetus II offsets& $-3.018^{+0.028}_{-0.029}$& $-2.771^{+0.026}_{-0.027}$ \\\hline \end{tabular} \end{center} \tablecomments{Colour terms and offsets derived from comparison with APASS calibrated DES photometry. All photometry assumed a zeropoint of 30.00 for both filters prior to the offsets being applied. The offset values listed are a combination of the true zeropoint correction and the atmospheric extinction correction.}\label{table:calib} \end{table} For the calibration of the instrumental magnitudes generated by the {\sc dolphot} photometry, we followed the same steps as discussed in Paper I. The GMOS-S data were cross-matched with APASS\footnote{The AAVSO Photometric All-Sky Survey}~\citep{2015AAS...22533616H} calibrated DECam photometry\footnote{DECam photometry generated using the procedures outlined in \citet{KimJerjen2015b}}. 
We utilized the same technique as described in Paper I, applying the same colour term and then calculating the linear offset (a combination of a zeropoint offset and atmospheric extinction correction) from a generic instrumental zeropoint. The calibration proceeds by first applying the offset directly to the raw magnitudes and then iterating the magnitudes using the colour and colour term to achieve the final solution, as shown in Equations~\ref{eqn:calib} and \ref{eqn:calib2}. Apply the offset: \begin{eqnarray}\label{eqn:calib} \text{rawmag} = \text{rawmag} + \text{offset} \end{eqnarray} Iterate the magnitudes using: \begin{eqnarray}\label{eqn:calib2} \text{newmag} = \text{rawmag} + \text{Colour Term}*(g-r) \end{eqnarray} After each iteration update {\it rawmag} to the new {\it newmag} value, then repeat until the solution converges. These colour terms and offsets, applied to the Cetus II field, are listed in Table~\ref{table:calib}. The selection criteria for objects to be included in the final catalogue consisted of finding objects where: \begin{itemize} \item in either filter, sharpness$^{2} \leq 0.1$ \item in both filters, signal-to-noise ratio $\geq3.5$ \item and the object type corresponds to ``good stars'' (Objtype = 1). \end{itemize} Spurious or saturated objects were removed from the catalogue through either their extremely large magnitude errors or zero magnitude error, respectively. \subsection{Artificial Star Experiments}\label{sec:artstars} Following the procedure described in Paper I, the photometric completeness of the Cetus II field was determined using the {\sc dolphot} built-in artificial star experiment by generating a flat luminosity function with around 500,000 stars covering the magnitude interval $20<m<29$ and subdivided into 0.3\,mag bins. 
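The offset-then-iterate calibration of Equations~\ref{eqn:calib} and \ref{eqn:calib2} can be sketched in a few lines. This is an illustrative sketch, not the actual Paper I pipeline code: the function name \texttt{calibrate} and the convergence tolerance are ours, and we keep the offset-corrected magnitudes as the fixed base of the iteration so that the scheme has a well-defined fixed point. The colour terms and offsets in the usage line are the central values from Table~\ref{table:calib}.

```python
def calibrate(g_raw, r_raw, off_g, off_r, ct_g, ct_r, tol=1e-6, max_iter=50):
    """Offset + iterated colour-term calibration (illustrative sketch).

    Step 1 applies the combined zeropoint/extinction offset (Eq. 1);
    step 2 iterates the colour-term correction (Eq. 2) until the
    magnitudes stop changing.  The offset-corrected magnitudes are kept
    as the fixed base, so the iteration converges whenever
    |ct_g - ct_r| < 1.
    """
    g0, r0 = g_raw + off_g, r_raw + off_r   # Eq. 1: apply the offsets
    g, r = g0, r0
    for _ in range(max_iter):
        colour = g - r
        g_new = g0 + ct_g * colour          # Eq. 2, g band
        r_new = r0 + ct_r * colour          # Eq. 2, r band
        if max(abs(g_new - g), abs(r_new - r)) < tol:
            return g_new, r_new
        g, r = g_new, r_new
    return g, r


# Central colour terms and offsets from Table 2 (g: +0.026, r: -0.059).
g_cal, r_cal = calibrate(23.40, 23.10, -3.018, -2.771, 0.026, -0.059)
```

At convergence the calibrated colour satisfies $(g-r) = (g_{0}-r_{0})/\bigl(1-(\mathrm{ct}_g-\mathrm{ct}_r)\bigr)$, so for colour terms of this size the magnitude correction is at the few-millimagnitude level.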
The recovery rate in each filter was fitted with a Logistic function: \begin{eqnarray}\label{eqn:logistic} \text{Completeness} = \left(1 + e^{(m - m_c)/\lambda}\right)^{-1} \end{eqnarray} where $m$ is the magnitude, $m_c$ is the 50\% completeness value and $\lambda$ is the width of the rollover. The parameters of the best-fit solutions are listed in Table~\ref{table:completeness} and shown in Figure~\ref{fig:completeness}. To explore the impact of the bright stars in the field on both the photometric completeness and any potential overdensity in the field, we show the position of the unrecovered artificial stars sorted by $g$-band magnitude in Figure~\ref{fig:loststars}. Here we can see clearly the effect of the bright stars and their halos in the field. Interestingly, the biggest effect is in the magnitude range $24<g<26$, whereas at fainter magnitudes (beyond the 50\% completeness level) the distribution of unrecovered stars becomes much smoother. In general, though, the amount of the field lost to the bright stars is quite small, as can be seen in Figure~\ref{fig:completeness} where, at the brighter magnitudes, the completeness is very close to 100\%. Significant loss of coverage would lower the maximum completeness level in proportion to the amount of contamination. \begin{figure} \centering \includegraphics[width=1.0\hsize]{Figure3.pdf} \caption{\label{fig:completeness} Recovery rate for artificial stars in the Cetus II field.
The points show the photometric completeness per 0.3 magnitude bin, while the solid line shows the best fit solution with the dashed lines highlighting the 90th percentiles.} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=1.0\hsize]{Figure4.pdf} \caption{\label{fig:loststars} Pixel positions of unrecovered artificial stars sorted by $g$-band magnitude.} \end{center} \end{figure*} \begin{table} \caption{50\% Photometric Completeness Estimates}\label{table:completeness} \centering \begin{tabular}{cccc} \hline $m_{c,g}$ & $\lambda_g$ & $m_{c,r}$ & $\lambda_r$ \\\hline\hline $26.33\pm0.04$ & $0.473\pm0.030$ & $26.06\pm0.03$ & $0.486\pm0.030$ \\ \hline \end{tabular} \end{table} \subsection{Colour-Magnitude Diagram}\label{sec:cmd} Figure~\ref{fig:cmd_field} shows the extinction-corrected $(g-r)_\circ, g_\circ$ CMD of the GMOS-S field using all point sources from our photometry analysis found in the vicinity of Cetus II. The window in the brighter section of the CMD highlights the region that was investigated in the discovery paper derived from DES data \citep[Fig.\,14 in][]{Drlica-Wagner2015}. The Galactic extinction correction is based on the \citet{SFD1998} reddening map along with the correction coefficients of \citet{Schlafly2011}. The Cetus II main sequence is prominently visible and extends from $g_\circ \approx 20.5$\,mag down to $g_\circ = 26.3$\,mag, over three magnitudes fainter than the discovery data. The limiting magnitude of the data is $g_{lim}\sim 27.5$. Unresolved background galaxies appear as a plume below $g_\circ = 24.2$ and $-0.5<(g-r)_\circ < +0.6$. The red stars in the colour interval $1.0<(g-r)_\circ < 2.0$ are the population of local Milky Way M\,dwarfs. The 50\% photometric completeness is indicated as a dashed line and reaches $g_\circ = 26.08$\,mag. \begin{figure} \begin{center} \includegraphics[width=1\hsize]{Figure5} \caption{The $g_\circ$ vs.
$(g-r)_\circ$ colour-magnitude diagram of objects classified as point sources in the $5\farcm5\times 5\farcm5$ GMOS-S field centred on Cetus II. The rectangular window corresponds to the colour-magnitude parameter space investigated in the discovery paper \citep[Fig.\,14 in][]{Drlica-Wagner2015}. The CMD reveals a distinct main sequence population extending over six magnitudes down to $g\sim 26.8$. The error bars running vertically along the colour axis in 1\,mag intervals represent the typical photometric uncertainties. The 50\% completeness level can be seen as a dashed line. \label{fig:cmd_field}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{Figure6} \caption{Smoothed maximum likelihood density map in age-metallicity space for all stars within the GMOS-S field centred on Cetus II. Contour lines show the 68\% (1$\sigma$), 95\% (2$\sigma$), and 99\% (2.6$\sigma$) confidence limits. The density distribution sharply peaks for a Dartmouth model isochrone with an age of 11.2\,Gyr and a metallicity of [Fe/H]$=-1.28$\,dex. The 1D marginalized parameters around the best fit (cross) with uncertainties are listed in Table~\ref{tab:CetIIparameters}.}\label{fig:CetII_age_metal} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{Figure7} \caption{The same colour-magnitude diagram as in Figure~\ref{fig:cmd_field} showing the best-fitting Dartmouth model isochrone and the associated mask used to identify Cetus II stars. The stellar population of Cetus II is best described by a single isochrone at a heliocentric distance of 26.3$\pm$1.2\,kpc ($m-M=17.10\pm0.10$\,mag).}\label{fig:cmd} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{Figure8} \caption{R.A.-DEC distribution of all objects classified as stars in the GMOS-S field-of-view that pass the isochrone mask filter for Cetus II (see Figure \ref{fig:cmd}). 
The circle is centred on the nominal centre of Cetus II and has a radius of 1.9\,arcmin, equivalent to the reported half-light radius. Parameters were taken from \citet{Drlica-Wagner2015}. No evidence of a stellar overdensity in the direction of Cetus II is found. Red circles are AllWISE stars \citep{Wright2010}. The diameters of the circles correlate with the brightness of the stars.}\label{fig:onskydist1} \end{center} \end{figure} \section{Properties of Cetus II}\label{sec:CetIIpop} \subsection{Stellar population} To determine the properties of the Cetus II population, we computed the model isochrone that best describes the main sequence stars distributed over the entire GMOS-S field (Figure~\ref{fig:cmd_field}) using the maximum likelihood method introduced in \citet{Frayn2002}. This method was employed in our previous studies \citep{KimJerjen2015a, Kim2, Kim2016}. In brief, we calculated the maximum-likelihood values $\mathcal{L}_i$ over a grid of Dartmouth model isochrones \citep{Dartmouth} as defined by equations\,1 and 2 in \citet{Fadely2011}. The grid points cover ages from 7 to 14\,Gyr, a broad range of chemical composition $-2.5\leq$ [Fe/H] $\leq-0.8$\,dex, $-0.2\leq$ [$\alpha$/Fe] $\leq +0.6$\,dex, and a distance interval $16.88<(m-M)<17.88$, where the central value of 17.38\,mag is the reported distance modulus for Cetus II from the discovery paper. Grid steps were 0.5\,Gyr, 0.1\,dex, 0.2\,dex, and 0.05\,mag, respectively. Figure~\ref{fig:CetII_age_metal} shows the maximum likelihood density map of the age-metallicity space for Cetus II. The well-defined location that corresponds to the best-fitting isochrone is marked with a cross. Cetus II's stellar population is found to have an age of 11.2\,Gyr, an [Fe/H] of $-1.28$\,dex, an [$\alpha$/Fe] of 0.0\,dex and a distance modulus of $(m-M)_\circ = 17.10\pm0.10$ (26.3$\pm$1.2\,kpc). The corresponding isochrone is superimposed on the CMD in Figure~\ref{fig:cmd} together with the associated mask.
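Schematically, the grid search can be sketched as follows; \texttt{log\_likelihood} is a hypothetical stand-in for the \citet{Fadely2011} likelihood of the observed CMD given a Dartmouth isochrone, and only the grid bounds and steps quoted above are taken from the text:

```python
import itertools

# Schematic grid maximum-likelihood search (not the actual isochrone fit):
# scan the age / [Fe/H] / [alpha/Fe] / distance-modulus grid described in the
# text and keep the grid point with the highest likelihood.

def frange(start, stop, step):
    """Inclusive floating-point range with a fixed step."""
    n = int(round((stop - start) / step))
    return [start + i * step for i in range(n + 1)]

def grid_search(log_likelihood):
    """Return the grid point (age, feh, alpha, dm) maximizing log_likelihood."""
    grid = itertools.product(
        frange(7.0, 14.0, 0.5),      # age (Gyr)
        frange(-2.5, -0.8, 0.1),     # [Fe/H] (dex)
        frange(-0.2, 0.6, 0.2),      # [alpha/Fe] (dex)
        frange(16.88, 17.88, 0.05),  # distance modulus (mag)
    )
    return max(grid, key=lambda p: log_likelihood(*p))
```

With roughly $15 \times 18 \times 5 \times 21 \approx 2.8\times10^4$ grid points, an exhaustive scan of this kind is computationally trivial; the expensive part in practice is evaluating the CMD likelihood at each point.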
The mask has an upper and lower magnitude limit of $g_\circ=19.5$ and 27.2, respectively. The colour width of the mask for a given magnitude $g_\circ$ was determined from the photometric uncertainties: $$ (2\pi \sigma^2_{tot})^{-1/2} \exp\left(-\frac{((g_*-r_*)-(g-r)_{iso})^2}{2\sigma_{tot}^2}\right) > 0.5, $$ \noindent where $(g-r)_{iso}$ is the colour of the model isochrone at $g_\circ$ and $\sigma^2_{tot}=\sigma_{int}^2+\sigma^2_{g_*}+\sigma^2_{r_*}$. The quantity $\sigma_{int}=0.07$\,mag was chosen as the intrinsic colour width of the isochrone mask and $\sigma^2_{g_*}$, $\sigma^2_{r_*}$ are the photometric uncertainties of a star. We apply the isochrone mask to select the most likely Cetus II stars and plot their on-sky distribution in Figure~\ref{fig:onskydist1}. To highlight the positions of bright foreground stars in the field we also overplotted objects from the AllWISE\footnote{AllWISE: the Wide-field Infrared Survey Explorer mission, http://wise2.ipac.caltech.edu/docs/release/allwise/} catalogue as red circles. The sizes of these circles correlate with the apparent magnitudes of the objects. We further show a large, dashed circle that represents the centre and half-light radius of Cetus II as reported in \citet{Drlica-Wagner2015}. Although we trace the main sequence stars over six magnitudes, there is no evidence of any concentration of stars in the Cetus II field that would indicate the presence of an ultra-faint star cluster or dwarf galaxy candidate. We note that the two bright AllWISE stars in the field (see also Fig.\,2) do not affect our ability to identify an overdensity of stars at the Cetus II location. \begin{table} \caption{Derived properties of the Cetus II stellar population.
\label{tab:CetIIparameters} } { \begin{center} \begin{tabular}{l|c} \hline & \bf{Cetus II} \\ \hline\hline ($l,b$) & ($156\fdg47,-78\fdg53$) \\ ($\Lambda_\odot,B_\odot$) & ($86\fdg17,8\fdg02$) \\ ($\tilde{\Lambda}_\odot,\tilde{B}_\odot$) & ($273\fdg83,-8\fdg02$) \\ $E(B-V)$ & 0.0171 \\ $(m-M)_\circ$ & 17.10$\pm$0.10 \\ $D$ (kpc) & 26.3$\pm$1.2 \\ age (Gyr) & $11.2_{-0.4}^{+1.3}$ \\ $[$Fe/H$]$ (dex) & $-1.28\pm0.07$ \\ $[\alpha$/Fe$]$ (dex)& 0.0 \\ \hline \end{tabular} \end{center} } \end{table} \section{Discussion}\label{sec:discussion} As shown in $\S$\ref{sec:CetIIpop}, the Cetus II field contains a well-defined, coherent stellar population that can be traced almost six magnitudes below the main sequence turn-off. These Cetus II stars are not concentrated into a distinct stellar overdensity, and thus it was not viable to determine the centre coordinates, half-light radius, ellipticity, and total luminosity of Cetus II. Nevertheless, the properties of the underlying stellar population, such as distance, age, metallicity, and $\alpha$ abundance, are now accurately determined and listed in Table~\ref{tab:CetIIparameters}. We also list the Galactic coordinates, Sagittarius spherical coordinates ($\Lambda_\odot, B_\odot$) from \citet{Maj2003}, ($\tilde{\Lambda}_\odot,\tilde{B}_\odot$) from \citet{Belokurov2014}, and the local dust extinction estimate from \citet{SFD1998}, based on the coordinates provided in \citet{Drlica-Wagner2015}. The Cetus II stars are old, moderately metal-poor and occupy a narrow heliocentric distance range. The lack of a clear overdensity suggests these stars belong to a tidally disrupted stellar population. \subsection{Possible connection to Sagittarius dwarf tidal stream} \begin{figure} \begin{center} \hspace{-1.0cm} \includegraphics[width=1.1\hsize]{Figure9} \caption{Distribution of Sgr Stream debris particles from the \citet{LM2010} model in R.A.-distance (top) and R.A.-DEC space (bottom).
Particles are colour-coded using the Pcol value. The majority of model particles in the vicinity of Cetus II have a Pcol value of 1, which indicates debris stripped on the previous pericentric passage of the Sagittarius dwarf galaxy. The position of Cetus II is indicated in both plots as a yellow dot. The good agreement between model and observations supports the picture that Cetus II is not an ultra-faint dwarf galaxy, but is made up of stars in the Sgr stellar stream.}\label{fig:CSgr_LM10} \end{center} \end{figure} As seen in Figure~\ref{fig:MWS}, Cetus II is projected onto the stellar density contours of the \citet{LM2010} model for the Sagittarius (Sgr) tidal stream. The presence of the Sgr stream at this location is observationally confirmed in Figure\,1 (bottom panel) from \citet{Bernard2016}. In Figure~\ref{fig:CSgr_LM10}, we use the \citet{LM2010} model to further explore the possibility that the Cetus II stars are associated with the Sgr tidal stream. In the top panel ($d_\odot$ vs R.A.), the region at the distance and location of Cetus II is populated by model particles that were stripped on the previous pericentric passage of Sgr (Pcol=1). The \citet{LM2010} particles in a $4^\circ\times4^\circ$ window around the nominal centre of Cetus II with a Pcol value of 1 (40 particles in total) have an average heliocentric distance of 23.4\,kpc and a scatter of $\sigma = 2.9$\,kpc. Our derived Cetus II distance is in excellent agreement. We further compare the Cetus II distance with the Sgr Stream distance map generated from RR Lyrae stars of type ab identified in the Pan-STARRS1 $3\pi$ survey \citep{Herni2017}. In their Figure\,1, the Cetus II stellar population lies at the same distance as the Sgr stream stars at $(\tilde{\Lambda}_\odot,\tilde{B})\approx (274^\circ,-8^\circ)$. In Figure\,4 of the same paper, we find the Cetus II stars reside in the Sagittarius trailing arm.
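As a back-of-envelope check (illustrative only, not a calculation from the model paper), the derived Cetus II distance can be compared with the mean distance of the 40 Pcol=1 particles in units of the combined uncertainty:

```python
import math

# Illustrative consistency check: how far apart are the derived Cetus II
# distance and the mean LM10 Pcol=1 particle distance, in combined sigma?
d_cet, e_cet = 26.3, 1.2   # kpc: this work (distance, uncertainty)
d_sgr, e_sgr = 23.4, 2.9   # kpc: LM10 Pcol=1 particles (mean, scatter)

# Difference in units of the quadrature-combined uncertainty.
n_sigma = abs(d_cet - d_sgr) / math.sqrt(e_cet**2 + e_sgr**2)
```

The separation is below one combined standard deviation, consistent with the "excellent agreement" stated above.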
In terms of age and metallicity, we also find excellent agreement between the Cetus II population ([Fe/H]$ = -1.28\pm0.07$, age$ = 11.2^{+1.3}_{-0.4}$\,Gyr) and the metal-poor Population\,B of Sgr ([Fe/H]$ =-1.2\pm0.1$, age$ =11\pm1$\,Gyr), e.g. \citet{LM2010} and \citet{Siegel2007}. The final confirmation of these stars belonging to the Sagittarius tidal stream would be a spectroscopic investigation to determine their radial velocities. The \citet{LM2010} model predicts velocities for stars in this part of the Milky Way halo in the range $-95$\,km\,s$^{-1}<v_{GSR}<-60$\,km\,s$^{-1}$ with a mean of $-78$\,km\,s$^{-1}$ and a standard deviation of $9$\,km\,s$^{-1}$, see Figure~\ref{fig:VGSR_distr}. At the location and distance of Cetus II, the Population\,B (Pcol=1) Sgr stars represent approximately 75\% of the Sgr stars in the field. The photometry presented here will be used as the basis for future spectroscopic follow-up of the Cetus II region. \begin{figure} \begin{center} \hspace{-1.0cm} \includegraphics[width=1.1\hsize]{Figure10} \caption{Velocity histogram of the 245 \citet{LM2010} model particles within the $4^\circ\times4^\circ$ window centred on Cetus II. In this part of the Milky Way halo, model particles cover a large range of velocities. However, the 40 particles with Pcol=1 values, model particles of the Sgr Stream that were stripped on the previous pericentric passage, have a well-defined distribution (blue histogram) with a mean of $-78$\,km\,s$^{-1}$ and a standard deviation of $9$\,km\,s$^{-1}$. Velocity measurements from spectroscopic follow-up of Cetus II stars can be used to test if Cetus II is made up of stars from that component of the trailing arm of the Sgr Stream.}\label{fig:VGSR_distr} \end{center} \end{figure} \subsection{Other stellar streams} Although we found strong evidence that Cetus II stars are part of the Sgr Stream trailing arm, we briefly want to look into other potential explanations for the Cetus II phenomenon.
Is there another stream candidate which might explain the Cetus II stellar population? In their recent study of Milky Way halo substructures from the Pan-STARRS1 $3\pi$ Survey, \citet{Bernard2016} did not report a new stream, nor is there any obvious candidate stream visible in their Figure\,1 at the Cetus II location. The next best alternative origin for the Cetus II stars would be the Cetus Polar Stream \citep{Newberg2009}. In Figure~\ref{fig:CPS}, we compare the density distribution of particles from the Sgr Stream simulation \citep{LM2010} with five positions along the Cetus Polar Stream (CPS) measured using blue horizontal branch stars by \citet{Yam2013}. The width of the CPS stream for each data point ($\sigma_l$ in Table\,1 of \citealt{Yam2013}) is represented by a horizontal error bar. The $(l,b)^\circ$ coordinates of Cetus II disagree with the N-body simulation of a satellite on the best-fitting CPS orbit (see Fig.\,18 in \citealt{Yam2013}). Moreover, the heliocentric distance of the CPS increases systematically from 27.2\,kpc at $b\sim -36^\circ$ to 32.5\,kpc at $b\sim -66^\circ$. This gradient suggests an extrapolated distance of $\approx 35$\,kpc at the Galactic latitude of Cetus II. Hence, the measured distance of Cetus II (26.3$\pm$1.2\,kpc) seems incompatible. Finally, the CPS is reported in \citet{Yam2013} to have a metallicity of $-2.5 < \mathrm{[Fe/H]} < -2.0$, which is significantly more metal-poor than the stellar population in the Cetus II field. \begin{figure} \begin{center} \hspace{-1.0cm} \includegraphics[width=1.1\hsize]{Figure11} \caption{Density distribution of particles from the Sgr Stream simulation \citep{LM2010}. Overlaid are Cetus II (yellow filled circle) and the five positions along the Cetus Polar Stream as measured from blue horizontal branch stars by \citet{Yam2013}. The width of the stream for each position is represented by a horizontal error bar.
The heliocentric distance gradient of the CPS goes from 27.2\,kpc at $b\sim -36^\circ$ to 32.5\,kpc at $b\sim -66^\circ$. }\label{fig:CPS} \end{center} \end{figure} \subsection{False-positive photometric detections}\label{sec:falsepositive} The Cetus II case, as with Tucana V \citep{Conn2018}, highlights the complication that there are objects within the current set of candidate Milky Way satellites that are potential false-positive detections. Since they consist of a coherent stellar population but with no central overdensity, these detections could be part of tidally disrupted stellar systems, which have either small physical sizes (e.g.~Kim\,1, \citealt{KimJerjen2015a}) or are part of a much larger stream (e.g.~the Sgr tidal stream). The shallow photometric depth of typical discovery data, combined with perhaps the apparent random clustering of stars, is driving the discovery of these objects prior to a robust confirmation of their status. The presence of a coherent stellar population combined with marginal evidence for clustering does not automatically guarantee that they belong to a stellar overdensity, as demonstrated in \citet{Jerjen2013}, where multiple stellar populations were detected without being associated with a specific object or overdensity. How then are these stellar populations being identified as overdensities? Can we attribute this solely to the chance clustering of member stars or are they examples of some underlying substructure? Before the revision of their nature, Tucana V and Cetus II curiously occupied a very similar regime in the size-luminosity plane that we have dubbed the ``Trough of Uncertainty'' (TUC) in \citet{Conn2018}. There were four objects in TUC, of which two are now known not to be star clusters or dwarf galaxies. \subsection{Is spectroscopy the solution?
} Of the two remaining objects, Draco II is the brightest TUC object and has been tentatively confirmed as an ultra-faint dwarf galaxy by \citet{Martin2016a} using KECK/DEIMOS spectroscopic data, but without improving on the shallow PanSTARRS1 discovery CMD ($g_{lim}\sim22.0$, Fig.\,1, \citealt{Laevens2015b}). DESJ0225+0304 \citep{Luque2017} is yet to be followed up. While spectroscopy is a good tool to test the presence of a discrete stellar overdensity, the measured velocity dispersion often used to derive the total mass is not a clear-cut indicator of what type of object is being probed. For example, the classical Milky Way dwarf galaxy satellites have velocity dispersions of $\sim 10$ km\,s$^{-1}$ \citep{Walker2007} while the ultra-faint dwarf satellites are in the range $3.3 - 7.6$ km\,s$^{-1}$ \citep{SG2007, Simon2015, Kirby2015}. The Milky Way globular cluster population, on the other hand, has velocity dispersions of approximately $6\pm4$ km\,s$^{-1}$ \citep{Harris2010}. In many cases, the values are broadly consistent with the expected velocity dispersion of stars in a stellar stream, e.g.~that of the Sagittarius tidal stream, $\sim 9$ km\,s$^{-1}$ at the location of Cetus II. It is for this reason that confirmation and refinement of the physical properties (e.g.~half-light radius, radial profile, ellipticity and evidence of tidal disruption) of candidate ultra-faint objects are vital if we are to determine what sort of structures we are probing at these scales. However, as has been seen with Tucana V \citep{Conn2018}, scaling relations such as those presented in \citet{Forbes2014} are not applicable if, as with Cetus II, a central overdensity cannot be identified.
\subsection{Detection thresholds for targeted deep imaging} \begin{figure} \begin{center} \includegraphics[width=0.9\hsize]{Figure12a.pdf} \includegraphics[width=0.9\hsize]{Figure12b.pdf} \includegraphics[width=0.9\hsize]{Figure12c.pdf} \caption{Comparison of radial density profiles of a Cetus II-type object using parameters from \citet{Drlica-Wagner2015} with the GMOS-S data presented here. The solid lines are the median from 1000 realizations of an artificial galaxy with that particular half-light radius, assuming it contains the same number of stars as seen in the isochrone mask from Figure~\ref{fig:cmd}. The dotted lines are the first and third quartiles, while the dashed lines show the minimum and maximum solutions. The solid points are the radial density profile of the stars in the Cet\,II field (Fig.~\ref{fig:onskydist1}).}\label{fig:detect} \end{center} \end{figure} The original discovery of Cetus II was made using DECam with its very wide field of view; however, for low surface brightness systems such as these, the question arises: can deep observations over a smaller field of view adequately describe the system? In Figure~\ref{fig:detect}, we present the expected radial density profile for a Cetus II-like model galaxy assuming the parameters from \citet{Drlica-Wagner2015} and compare it with the radial density profile as calculated using our data. In each case, the object has a 2D Gaussian profile centred on the Cetus II position published in \citet{Drlica-Wagner2015} and is populated with the 393 stars located in the Cetus II mask (see Fig.~\ref{fig:cmd}). The three panels correspond to the $1\sigma$ range of possible half-light radii reported in the discovery paper: $r_h=1.9^{+1.0}_{-0.5}$ arcmin. For each half-light radius, the stellar distribution was drawn 1000 times and a radial density profile was calculated. The lines plotted in each panel represent the properties of the total sample for a given radius.
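One realization of this test can be sketched as follows (illustrative only: the field extent, bin width, and random-number handling are assumptions, and the paper draws 1000 such realizations per half-light radius):

```python
import math
import random

# Schematic single realization of the artificial-galaxy test: draw 393 star
# positions from a circular 2D Gaussian with a given half-light radius and
# bin them into a radial density profile. Field radius and bin count are
# illustrative choices, not the values used for the published figure.

N_STARS = 393   # stars inside the Cetus II isochrone mask

def gaussian_sigma(r_half):
    """For a circular 2D Gaussian, half the stars lie inside r_half:
    1 - exp(-r_half^2 / (2 sigma^2)) = 0.5  =>  sigma = r_half / sqrt(2 ln 2)."""
    return r_half / math.sqrt(2.0 * math.log(2.0))

def one_realization(r_half_arcmin, n_bins=10, r_max=2.75, seed=None):
    """Radial star counts (per annulus) for one mock Cetus II realization."""
    rng = random.Random(seed)
    sigma = gaussian_sigma(r_half_arcmin)
    counts = [0] * n_bins
    for _ in range(N_STARS):
        x, y = rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
        r = math.hypot(x, y)
        if r < r_max:                    # stars beyond the field edge are lost
            counts[int(r / (r_max / n_bins))] += 1
    return counts
```

Repeating this for many realizations and taking the median, quartiles, and extremes per annulus yields profiles of the kind plotted in Figure~\ref{fig:detect}.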
The solid line is the median, the dotted lines delimit the first and third quartiles, and the dashed lines show the minimum and maximum values at that radius. The solid circles are the radial profile as generated using our GMOS-S data. In the top panel of Figure~\ref{fig:detect}, the most compact of the three scenarios, with a half-light radius of 1.4 arcminutes, demonstrates that there are significant discrepancies between the data and model. Inside the half-light radius, the model consistently over-predicts the star density per radius compared to the data, while at larger radii, the data outstrips the model. In the middle panel, with $r_h = 1.9$ arcminutes, the model is marginally more consistent at larger radii but the core density is still over-populated. For the largest half-light radius ($r_h = 2.9$ arcminutes), the median radial profile is very flat, but once again the model predicts at least 20 more stars in the innermost radii than are seen in the data. Even with such a flat profile, an extra 20 stars at the centre of the field would be easily detected in high quality data like those presented here. \section{Conclusion}\label{sec:conclusion} We have confirmed the presence of the Cetus II stellar population and detected member stars down to six magnitudes below the main sequence turn-off. Despite this finding, there is no overdensity in the GMOS-S field that could represent an ultra-faint star cluster or dwarf galaxy. Our photometric completeness estimates and examination of the field show that there is neither a crowding issue nor significant loss of coverage due to bright stars. Comparisons with model ultra-faint dwarf galaxies of various sizes have illustrated that even in the case with the largest half-light radius, the object should still be detectable by GMOS-S with a small field of view. It appears the original detection of an overdensity is perhaps another chance grouping of bright member stars, as is suspected with Tucana V.
The Cetus II stars bear striking resemblance to the Population\,B stars of the Sagittarius dwarf galaxy tidal stream \citep{LM2010} in age, metallicity and distance. This is almost certainly the best explanation for these stars. The \citet{LM2010} models make predictions for the radial velocity distribution of these stars in the Cetus II field, and spectroscopic follow-up of these stars would provide the final confirmation of their status. Cetus II is the second object that we confirm as a false-positive detection of a stellar overdensity, and it accentuates the fine line that survey teams walk when setting the criteria for their detection thresholds. This is partly due to the desire to detect the missing Milky Way satellites predicted by $\Lambda$ cold dark matter ($\Lambda$CDM) cosmological models and partly due to the lack of a good understanding of what objects like Cetus II and Tucana V might actually entail and how to classify them. It cannot be stressed enough that these objects consist of a single age-metallicity stellar population and thus are not misinterpretations of the colour-magnitude space. They do reside at the distances originally estimated but they do not form overdensities which conform to our understanding of star clusters or dwarf galaxies. It is most likely they represent the stochastic process of tidal disruption and as such may be providing clues regarding the structure of the object from which they were stripped. Given the accumulation of false-positives and the prospect of even more such detections in upcoming surveys, it is highly desirable to avoid giving ultra-faint dwarf galaxy candidates names based on the constellation they are found in, so as not to confuse the community. Candidates should keep their working names until their true nature has been unambiguously established. \section{Acknowledgements} BCC and HJ acknowledge the support of the Australian Research Council through Discovery project DP150100862.
The authors thank the anonymous referee for their insights and improvements to this paper. This paper is based on observations obtained at the Gemini Observatory (GS-2017B-Q-40), which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil). This research has made use of: the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund; SIMBAD database, operated at CDS, Strasbourg, France. \software{Aladin \citep{2000A&AS..143...33B, 2014ASPC..485..277B}, Astropy \citep{Astropy2013}, DOLPHOT \citep{2000PASP..112.1383D}, TOPCAT \citep{2005ASPC..347...29T} } This project used public archival data from the Dark Energy Survey (DES). Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. 
National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Funda\c{c}\~{a}o Carlos Chagas Filho de Amparo \`{a} Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico and the Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones En\'{e}rgeticas, Medioambientales y Tecnol\'{o}gicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgen\"{o}ssische Technische Hochschule (ETH) Z\"{u}rich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ci\`{e}ncies de l'Espai (IEEC/CSIC), the Institut de F\'{i}sica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universit\"{a}t M\"{u}nchen and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A\&M University.
Let $\mathcal D$ be a commutative square diagram $$\xymatrix{ \tilde X\ar_{p_X}[d]\ar^{\tilde f}[r]&\tilde Y\ar^{p_Y}[d]\\ X\ar_{f}[r]&Y }$$consisting of continuous maps between compact spaces. The diagram $\mathcal D$ is called {\em bicommutative} if $\tilde f\big(p_X^{-1}(x)\big)=p_Y^{-1}\big(f(x)\big)$ for all $x\in X$, see \cite[\S3.IV]{Kur1}, \cite[\S2.1]{Sh}. We say that a functor $F:\mathbf{Comp}\to\mathbf{Comp}$ \begin{itemize} \item is ({\em finitely}) {\em bicommutative} if $F$ preserves the bicommutativity of square diagrams $\mathcal D$ consisting of surjective maps $f,\tilde f,p_X,p_Y$ and (finite) compacta $X,Y,\tilde X,\tilde Y$; \item {\em preserves} ({\em finite}) {\em preimages} if $F$ preserves the bicommutativity of square diagrams $\mathcal D$ with injective maps $f,\tilde f$ (and finite space $X$); \item {\em preserves} ({\em finite}) {\em 1-preimages} if $F$ preserves the bicommutativity of square diagrams $\mathcal D$ with injective maps $f,\tilde f$, bijective map $p_X$ (and finite space $X$). \end{itemize} It is clear that each bicommutative functor is finitely bicommutative. The converse is true for normal functors with finite supports, see Proposition 2.10.1 of \cite{TZ}. It is easy to see that a monomorphic functor $F:\mathbf{Comp}\to\mathbf{Comp}$ preserves [finite] (1-)preimages if and only if for any map $f:X\to Y$ between compact spaces and a [finite] closed subset $Z\subset Y$ (such that $f^{-1}(z)$ is a singleton for every $z\in Z$) we get $(Ff)^{-1}(FZ)=F\big(f^{-1}(Z)\big)$. \smallskip A functor $F:\mathbf{Comp}\to\mathbf{Comp}$ will be called {\em mec} if $F$ is monomorphic, epimorphic, and continuous. A mec functor that preserves finite 1-preimages will be called a {\em 1-mec} functor. A 1-mec functor that preserves the weight of infinite compacta will be called a {\em 1-mecw} functor.
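Returning to the bicommutativity property defined above: it is strictly stronger than commutativity, as the following minimal example (a standard observation, recorded here only for the reader's convenience) shows.

```latex
Let $\tilde X=\tilde Y=X=[0,1]$, let $Y=\{*\}$ be a singleton, take
$\tilde f=p_X=\mathrm{id}_{[0,1]}$, and let $p_Y$ and $f$ be the constant maps:
$$\xymatrix{ [0,1]\ar_{\mathrm{id}}[d]\ar^{\mathrm{id}}[r]&[0,1]\ar^{p_Y}[d]\\
[0,1]\ar_{f}[r]&\{*\} }$$
This square is commutative and consists of surjective maps between compacta,
but for every $x\in[0,1]$ we have $\tilde f\big(p_X^{-1}(x)\big)=\{x\}$ while
$p_Y^{-1}\big(f(x)\big)=[0,1]$, so the square is not bicommutative.
```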
The class of 1-mecw functors includes all normal functors in the sense of \v S\v cepin \cite{Sh}, \cite[\S2.3]{TZ} (let us recall that a functor $F:\mathbf{Comp}\to\mathbf{Comp}$ is {\em weakly normal} if it is monomorphic, epimorphic, continuous and preserves intersections, the empty set, the singleton, and the weight of infinite compacta; $F$ is {\em normal} if it is weakly normal and preserves preimages). \smallskip Our primary aim is to characterize skeletal functors among 1-mec functors. For a topological space $Z$ consider the open map $\mathit 2_Z:Z\oplus 2\to Z\oplus 1$ defined by $$\mathit 2_Z:z\mapsto\begin{cases} z&\mbox{if $z\in Z$,}\\ 0&\mbox{if $z\in 2$}.\end{cases} $$ Here for a natural number $n$ by $Z\oplus n$ we denote the topological sum of $Z$ and the discrete space $n=\{0,\dots,n-1\}$. \begin{theorem}\label{s-map} A 1-mec (resp. 1-mecw) functor $F:\mathbf{Comp}\to\mathbf{Comp}$ is skeletal if and only if for each zero-dimensional compact (metrizable) space $Z$ the map $F\mathit 2_Z:F(Z\oplus 2)\to F(Z\oplus 1)$ is skeletal. \end{theorem} Since each open map is skeletal, the preceding theorem implies: \begin{corollary}\label{op->skel} Each open 1-mec functor $F:\mathbf{Comp}\to\mathbf{Comp}$ is skeletal. \end{corollary} Examples~\ref{ex:TZ}--\ref{e:PDelta} presented in Section~\ref{s:eop} show that Corollary~\ref{op->skel} cannot be reversed. Next, we discuss the interplay between the skeletality and the (finite) bicommutativity of functors. \begin{theorem}\label{t1.5n} A 1-mecw functor $F:\mathbf{Comp}\to\mathbf{Comp}$ is skeletal if it is finitely bicommutative and finitely skeletal. \end{theorem} This criterion should be compared with the following characterization of open functors due to \v S\v cepin, see Propositions 3.18 and 3.19 of \cite{Sh}. \begin{theorem}[\v S\v cepin]\label{scepin} A normal functor $F:\mathbf{Comp}\to\mathbf{Comp}$ is open if and only if $F$ is bicommutative and finitely open. 
\end{theorem} \smallskip Now let us discuss the problem of preservation of skeletally generated compacta by normal functors. Following \cite{Val} we say that a compact Hausdorff space $X$ is {\em skeletally generated} if $X$ is homeomorphic to the limit of an inverse continuous $\omega$-spectrum $\mathcal S=\{X_\alpha,\pi_\alpha^\beta,\Sigma\}$ consisting of metrizable compacta and surjective skeletal bonding projections $\pi_\alpha^\beta:X_\beta\to X_\alpha$. According to \cite{kp8} or \cite{Val}, a compact Hausdorff space $X$ is skeletally generated if and only if the first player has a winning strategy in the following open-open game. Player I starts the game by selecting a non-empty open set $V_0$, and player II responds with a non-empty open set $W_0\subset V_0$. At the $n$-th inning player I chooses a non-empty open set $V_n$ and player II responds with a non-empty open set $W_n\subset V_n$. At the end of the game player I is declared the winner if the union $\bigcup_{n\in\omega}W_n$ is dense in $X$. Otherwise player II wins the game. The class of skeletally generated compacta contains all openly generated compacta and all continuous images of openly generated compacta, see \cite{Val}. In particular, each dyadic compactum is skeletally generated. Skeletally generated compacta share some properties of dyadic compacta. In particular, each skeletally generated compactum has countable cellularity, see \cite{dkz} or \cite{kp8}. It is known that, in general, normal functors do not preserve openly generated compacta. In fact, a normal functor $F:\mathbf{Comp}\to\mathbf{Comp}$ preserves the class of openly generated spaces if and only if $F$ is open, see \cite[\S4.1]{Sh}. This contrasts with the following theorem. \begin{theorem}\label{s-space} Each 1-mecw functor $F:\mathbf{Comp}\to\mathbf{Comp}$ preserves the class of skeletally generated compacta.
\end{theorem} For preimage preserving mecw-functors $F$ with $F1\ne F2$ this theorem can be improved as follows. Below we identify a natural number $n$ with the discrete space $n=\{0,\dots,n-1\}$. \begin{theorem}\label{ps-space} Let $F:\mathbf{Comp}\to\mathbf{Comp}$ be a preimage preserving mecw-functor with $F1\ne F2$. A compact Hausdorff space $X$ is skeletally generated if and only if the space $FX$ is skeletally generated. \end{theorem} \begin{remark} Among the properties composing the definition of a 1-mec functor, the least studied is the property of preservation of finite 1-preimages. It is clear that a functor $F:\mathbf{Comp}\to\mathbf{Comp}$ preserves (finite) 1-preimages if it preserves (finite) preimages. On the other hand, the functor of superextension $\lambda$ and the functor of order-preserving functionals $O$ preserve 1-preimages but fail to preserve finite preimages, see \cite{KR}, \cite[2.3.2, 2.3.6]{TZ}. A simple example of a mec functor that does not preserve finite 1-preimages is $Pr^3$, the functor of the third projective power, see \cite[2.5.3]{TZ}. Another example of such a functor is $E$, the functor of non-expanding functionals, see \cite{KR}. By Theorem 1 of \cite{KR}, a continuous monomorphic functor $F:\mathbf{Comp}\to\mathbf{Comp}$ preserves 1-preimages if and only if its Chigogidze extension $F_\beta:\mathbf{Tych}\to\mathbf{Tych}$ preserves embeddings of Tychonoff spaces. \end{remark} Theorems~\ref{s-map}, \ref{t1.5n}, \ref{s-space} and \ref{ps-space} will be proved in Sections~\ref{s:s-map}--\ref{s:ps-space} after some preliminary work done in Sections~\ref{s:sm}--\ref{s:fsm}. Several examples of skeletal and non-skeletal functors will be given in Section~\ref{s:eop}. In that section we also pose some open problems related to skeletal functors. \section{Skeletal maps and skeletal squares}\label{s:sm} In this section we recall the necessary information on skeletal maps between compact spaces.
First we introduce the necessary definitions. \begin{definition} A map $f:X\to Y$ between two topological spaces is defined to be \begin{itemize} \item {\em skeletal at a point $x\in X$} if for each neighborhood $U\subset X$ of $x$ the closure $\operatorname{cl}_Y(f(U))$ of $f(U)$ has non-empty interior in $Y$; \item {\em skeletal at a subset} $A\subset X$ if $f$ is skeletal at each point $x\in A$; \smallskip \item {\em open at a point $x\in X$} if for each neighborhood $U\subset X$ of $x$ the image $f(U)$ is a neighborhood of $f(x)$; \item {\em open at a subset $A\subset X$} if $f$ is open at each point $x\in A$; \item {\em densely open} if $f$ is open at some dense subset $A\subset X$. \end{itemize} \end{definition} It is easy to see that each densely open map is skeletal. The converse is true for skeletal maps between metrizable compacta: \begin{theorem}\label{l1c} A map $f:X\to Y$ between compact metrizable spaces is skeletal if and only if it is open at a dense $G_\delta$-subset of $X$. \end{theorem} This theorem has been proved in \cite{BKM}. The metrizability of $X$ is essential in this theorem as shown by the projection $\mathrm{pr}:A\to[0,1]$ of the Aleksandrov ``two arrows'' space $A$ onto the interval, see \cite[3.10.C]{En}. This projection is skeletal but is open at no point of $A$. A characterization of skeletal maps between non-metrizable compact spaces was given in \cite{BKM} in terms of morphisms of inverse $\omega$-spectra with skeletal limit or bonding squares. \begin{definition} Let $\mathcal D$ denote a commutative diagram $$\xymatrix{ \tilde X\ar[r]^{\tilde f}\ar[d]_{p_X}&\tilde Y\ar[d]^{p_Y}\\ X\ar[r]_{f}&Y }$$ consisting of maps between compact spaces. 
The square $\mathcal D$ is defined to be \begin{itemize} \item {\em open at a point $x\in X$} if for each neighborhood $U\subset X$ of $x$ the point $f(x)$ has a neighborhood $V\subset Y$ such that $V\subset f(U)$ and $p^{-1}_Y(V)\subset\tilde f(p_X^{-1}(U))$; \item {\em open at a subset $A\subset X$} if $\mathcal D$ is open at each point $x\in A$; \item {\em densely open} if it is open at some dense subset $A\subset X$; \smallskip \item {\em skeletal at a point $x\in X$} if for each neighborhood $U\subset X$ of $x$ there is a non-empty open set $V\subset Y$ such that $V\subset f(U)$ and $p_Y^{-1}(V)\subset \tilde f(p_X^{-1}(U))$; \item {\em skeletal at a subset} $A\subset X$ if $\mathcal D$ is skeletal at each point $x\in A$; \item {\em skeletal} if $\mathcal D$ is skeletal at $X$. \end{itemize} \end{definition} \begin{remark}\label{rem1a} Let $A$ be a subset of the space $X$ in the diagram $\mathcal D$. \begin{enumerate} \item If the diagram $\mathcal D$ is skeletal (resp. open) at $A$, then the map $f$ is skeletal (resp. open) at $A$; \item The diagram $\mathcal D$ is skeletal (resp. open) at $A$ if it is bicommutative and the map $f$ is skeletal (resp. open) at $A$; \item If the diagram $\mathcal D$ is open at $A$, then it is {\em bicommutative at $A$} in the sense that $\tilde f(p_X^{-1}(x))=p_Y^{-1}(f(x))$ for all points $x\in A$. \end{enumerate} \end{remark} \begin{remark}\label{rem1} A map $f:X\to Y$ is skeletal (resp. open) at a subset $A\subset X$ if and only if the square $$\xymatrix{ X\ar[r]^{f}\ar[d]_{\mathrm{id}_X}&Y\ar[d]^{\mathrm{id}_Y}\\ X\ar[r]_{f}&Y }$$is skeletal (resp. open) at the subset $A$. \end{remark} It is easy to see that each densely open square is skeletal. The converse is true in the metrizable case. The following lemma proved in \cite{BKM} is a ``square'' counterpart of the characterization Theorem~\ref{l1c}. 
\begin{lemma}\label{l1b} If in the diagram $\mathcal D$ the space $X$ is metrizable and the map $p_Y$ is surjective, then the square $\mathcal D$ is skeletal if and only if $\mathcal D$ is open at a dense $G_\delta$-subset of $X$. \end{lemma} In order to formulate the spectral characterization of skeletal maps, we need to recall some information about inverse spectra from \cite[\S2.5]{En} and \cite[Ch.1]{Chi}. For an inverse spectrum $\mathcal S=\{X_\alpha,p_\alpha^\beta,\Sigma\}$ consisting of topological spaces and continuous bonding maps, by $$\lim \mathcal S=\{(x_\alpha)_{\alpha\in \Sigma}\in\prod_{\alpha\in \Sigma}X_\alpha:\forall \alpha\le \beta \;\; p^\beta_\alpha(x_\beta)=x_\alpha\}$$we denote the limit of $\mathcal S$ and by $p_\alpha:\lim \mathcal S\to X_\alpha$, $p_\alpha:x\mapsto x_\alpha$, the limit projections. An inverse spectrum $\mathcal S=\{X_\alpha,p_\alpha^\beta,\Sigma\}$ is called an {\em $\omega$-spectrum} if \begin{itemize} \item each space $X_\alpha$, $\alpha\in \Sigma$, has countable weight; \item the index set $\Sigma$ is {\em $\omega$-complete} in the sense that each countable subset $\Sigma'\subset \Sigma$ has the smallest upper bound $\sup \Sigma'$ in $\Sigma$; \item the spectrum $\mathcal S$ is {\em $\omega$-continuous} in the sense that for any countable directed subset $\Sigma'\subset \Sigma$ with $\gamma=\sup \Sigma'$ the limit map $\lim p^\gamma_\alpha:X_\gamma\to\lim \{X_\alpha,p^\beta_\alpha,\Sigma'\}$ is a homeomorphism. \end{itemize} Let $\mathcal S_X=\{X_\alpha,p^\beta_\alpha,\Sigma\}$ and $\mathcal S_Y=\{Y_\alpha,\pi^\beta_\alpha,\Sigma\}$ be two inverse spectra indexed by the same directed partially ordered set $\Sigma$. A {\em morphism} $\{f_\alpha\}_{\alpha\in \Sigma}:\mathcal S_X\to\mathcal S_Y$ between these spectra is a family of maps $\{f_\alpha:X_\alpha\to Y_\alpha\}_{\alpha\in \Sigma}$ such that $f_\alpha\circ p^\beta_\alpha=\pi^\beta_\alpha\circ f_\beta$ for any elements $\alpha\le\beta$ in $\Sigma$.
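A standard example may help to fix these notions (it is recorded here only for orientation and is not used in the proofs).

```latex
For an uncountable cardinal $\kappa$, the Cantor cube $D^\kappa=\{0,1\}^\kappa$
is the limit of the inverse spectrum
$$\mathcal S=\bigl\{D^A,\;\mathrm{pr}^B_A,\;[\kappa]^{\le\omega}\bigr\},$$
where $[\kappa]^{\le\omega}$ is the set of countable subsets of $\kappa$
ordered by inclusion and $\mathrm{pr}^B_A:D^B\to D^A$, for $A\subset B$, are the
natural projections of subproducts. Each $D^A$ is metrizable and hence has
countable weight, every countable family in $[\kappa]^{\le\omega}$ has a
least upper bound (its union), and the spectrum is $\omega$-continuous,
so $\mathcal S$ is an $\omega$-spectrum.
```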
Each morphism $\{f_\alpha\}_{\alpha\in \Sigma}:\mathcal S_X\to\mathcal S_Y$ between inverse spectra induces the limit map $$\lim f_\alpha:\lim\mathcal S_X\to\lim \mathcal S_Y,\;\;\lim f_\alpha:(x_\alpha)_{\alpha\in \Sigma}\mapsto (f_\alpha(x_\alpha))_{\alpha\in \Sigma }$$ between the limit spaces of these spectra. Following \cite{BKM} we say that a morphism $\{f_\alpha\}_{\alpha\in \Sigma}:\mathcal S_X\to \mathcal S_Y$ between two inverse spectra $\mathcal S_X=\{X_\alpha,p_\alpha^\beta,\Sigma\}$ and $\mathcal S_Y=\{Y_\alpha,\pi_\alpha^\beta,\Sigma\}$: \begin{itemize} \item {\em is skeletal} if each map $f_\alpha:X_\alpha\to Y_\alpha$, $\alpha\in \Sigma$, is skeletal; \item {\em has skeletal limit squares} if for every $\alpha\in \Sigma$ the commutative square $$\xymatrix{ \lim \mathcal S_X\ar[rr]^{\lim f_\alpha}\ar[d]_{p_\alpha}&&\lim \mathcal S_Y\ar[d]^{\pi_\alpha}\\ X_\alpha\ar[rr]_{f_\alpha}&&Y_\alpha} $$is skeletal. \end{itemize} We say that two maps $f:X\to Y$ and $f':X'\to Y'$ are {\em homeomorphic} if there are homeomorphisms $h_X:X\to X'$ and $h_Y:Y\to Y'$ such that $f'\circ h_X=h_Y\circ f$. The following spectral characterization of skeletal maps was proved in \cite{BKM}. \begin{theorem}\label{skel-char-comp} For a map $f:X\to Y$ between compact Hausdorff spaces the following conditions are equivalent: \begin{enumerate} \item $f$ is skeletal. \item $f$ is homeomorphic to the limit map $\lim f_\alpha:\lim \mathcal S_X\to\lim\mathcal S_Y$ of a skeletal morphism $\{f_\alpha\}:\mathcal S_X\to\mathcal S_Y$ between two $\omega$-spectra $\mathcal S_X=\{X_\alpha,p_\alpha^\beta,\Sigma\}$ and $\mathcal S_Y=\{Y_\alpha,\pi_\alpha^\beta,\Sigma\}$ with surjective limit projections.
\item $f$ is homeomorphic to the limit map $\lim f_\alpha:\lim \mathcal S_X\to\lim\mathcal S_Y$ of a morphism $\{f_\alpha\}:\mathcal S_X\to\mathcal S_Y$ with skeletal limit squares between two $\omega$-spectra $\mathcal S_X=\{X_\alpha,p_\alpha^\beta,\Sigma\}$ and $\mathcal S_Y=\{Y_\alpha,\pi_\alpha^\beta,\Sigma\}$ with surjective limit projections. \end{enumerate} \end{theorem} \section{Some properties of densely open squares} In this section we assume that $\mathcal D$ is a commutative square $$\xymatrix{ \tilde X\ar[r]^{\tilde f}\ar[d]_{p_X}&\tilde Y\ar[d]^{p_Y}\\ X\ar[r]_{f}&Y} $$consisting of surjective maps between compact spaces. By $$D_f=\{y\in Y:|f^{-1}(y)|=1\}\mbox{ and }D^f=\{x\in X:|f^{-1}(f(x))|=1\}$$we denote the {\em lower and upper degeneracy sets} of the map $f:X\to Y$, respectively. \begin{lemma}\label{l2} The square $\mathcal D$ is open at the upper degeneracy set $D^f\subset X$ of $f$. \end{lemma} \begin{proof} Given an open neighborhood $U\subset X$ of a point $x\in D^f$, observe that the set $V=Y\setminus f(X\setminus U)$ is an open neighborhood of $f(x)$ such that $f^{-1}(V)\subset U$. Applying to this inclusion the surjective map $f$, we get $V\subset f(U)$. To see that $p_Y^{-1}(V)\subset \tilde f(p_X^{-1}(U))$, fix any point $\tilde y\in p_Y^{-1}(V)$ and using the surjectivity of the map $\tilde f$, find a point $\tilde x\in \tilde X$ with $\tilde f(\tilde x)=\tilde y$. It follows that $f\circ p_X(\tilde x)=p_Y\circ \tilde f(\tilde x)\in V$ and hence $p_X(\tilde x)\in f^{-1}(V)\subset U$. Then $\tilde x\in p_X^{-1}(U)$ and $\tilde y=\tilde f(\tilde x)\in \tilde f(p_X^{-1}(U))$. \end{proof} \begin{lemma}\label{l3a} Assume that the square $\mathcal D$ is open at a point $a\in X$ and the space $X$ is first countable at $a$. Then there is a closed subset $Z\subset X$ such that $a\in D^{f|Z}$, $f(Z)=Y$ and $\tilde f(p_X^{-1}(Z))=\tilde Y$.
\end{lemma} \begin{proof} Being first countable at $a$, the space $X$ has a countable neighborhood base $(W_n)_{n\in\omega}$ at $a$ such that $W_{n+1}\subset W_n\subset W_0=X$ for all $n\in\omega$. Let $U_0=X$ and $V_0=Y$. Using the fact that the square $\mathcal D$ is open at the point $a$, by induction on $n$, we can construct a sequence $(U_n)_{n=1}^\infty$ of open neighborhoods of $a$ in $X$ and a sequence $(V_n)_{n=1}^\infty$ of open neighborhoods of $f(a)$ in $Y$ such that \begin{itemize} \item $U_n\subset W_n\cap U_{n-1}\cap f^{-1}(V_{n-1})$, \item $V_n\subset f(U_n)\cap V_{n-1}$ and $p_Y^{-1}(V_n)\subset \tilde f(p_X^{-1}(U_n))$ \end{itemize} for every $n\in\mathbb N$. We claim that the set $$Z=\{a\}\cup\bigcup_{n\in\omega}\bigl(\overline U_n\setminus f^{-1}(V_{n+1})\bigr)$$ is closed in $X$ and has the required properties: $a\in D^{f|Z}$, $f(Z)=Y$ and $\tilde f(p_X^{-1}(Z))=\tilde Y$. The definition of the set $Z$ implies that it is closed in $X$ and $a\in D^{f|Z}$. To show that $\tilde f(p_X^{-1}(Z))=\tilde Y$, fix any point $\tilde y\in\tilde Y$. We need to find a point $\tilde x \in p_X^{-1}(Z)$ such that $\tilde f(\tilde x)=\tilde y$. For this we consider separately two cases. 1) The image $y=p_Y(\tilde y)$ of $\tilde y$ coincides with $f(a)$. In this case for every $n\in\omega$ we get $\tilde y\in p_Y^{-1}(V_n)\subset \tilde f(p_X^{-1}(U_n))$ and hence there is a point $\tilde x_n\in p^{-1}_X(U_n)$ such that $\tilde y=\tilde f(\tilde x_n)$. By the compactness of $\tilde X$, the sequence $(\tilde x_n)_{n\in\omega}$ has an accumulation point $\tilde x\in p^{-1}_X(a)\subset p_X^{-1}(Z)$. The continuity of the map $\tilde f$ guarantees that $\tilde f(\tilde x)=\tilde y$. 2) The point $y=p_Y(\tilde y)$ is not equal to $f(a)$. Since $V_0=Y$ and $\bigcap_{n\in\omega}f(U_n)=\bigcap_{n\in\omega}V_n=\{f(a)\}$, there is a unique number $n\in\omega$ such that $y\in V_n\setminus V_{n+1}$.
Then $\tilde y\in p_Y^{-1}(V_n)\subset \tilde f(p_X^{-1}(U_n))$ and hence there is a point $\tilde x\in p_X^{-1}(U_n)$ such that $\tilde f(\tilde x)=\tilde y$. Consider the image $x=p_X(\tilde x)\in U_n$ and observe that $f(x)=f\circ p_X(\tilde x)=p_Y\circ\tilde f(\tilde x)=p_Y(\tilde y)=y\notin V_{n+1}$. Consequently, $x\in U_n\setminus f^{-1}(V_{n+1})\subset Z$ and $\tilde x\in p_X^{-1}(x)\subset p^{-1}_X(Z)$. Therefore $\tilde f(p_X^{-1}(Z))=\tilde Y$. Applying to this equality the surjective map $p_Y$, we get $$f(Z)=f\circ p_X(p_X^{-1}(Z))=p_Y\circ \tilde f(p_X^{-1}(Z))=p_Y(\tilde Y)=Y.$$ \end{proof} \begin{lemma}\label{l3} If the square $\mathcal D$ is open at a finite subset $A\subset X$, the space $X$ is first countable at each point $x\in A$, and the restriction $f|A$ is injective, then there is a closed subset $Z\subset X$ such that $f(Z)=Y$, $\tilde f(p^{-1}_X(Z))=\tilde Y$, and $A\subset D^{f|Z}$. \end{lemma} \begin{proof} By Lemma~\ref{l3a}, for every point $a\in A$ there is a closed subset $Z_a\subset X$ such that $a\in D^{f|Z_a}$, $f(Z_a)=Y$, and $\tilde f(p^{-1}_X(Z_a))=\tilde Y$. Since $f|A$ is injective, for each point $a\in A$ we can find an open neighborhood $W_a\subset Y$ of $f(a)$ in $Y$ such that the closures $\overline{W}_a$, $a\in A$, are pairwise disjoint. Let $B=Y\setminus\bigcup_{a\in A}W_a$ and $$Z=f^{-1}(B)\cup\bigcup_{a\in A}\bigl(f^{-1}(\overline{W}_a)\cap Z_a\bigr).$$It is easy to check that the set $Z$ is closed and has the required properties: $A\subset D^{f|Z}$, $f(Z)=Y$ and $\tilde f(p_X^{-1}(Z))=\tilde Y$. \end{proof} The proof of the following simple lemma is left to the reader. \begin{lemma}\label{l4a} A map $f:X\to Y$ is open (resp. skeletal) at a point $x\in X$ provided for some subset $Z\subset X$ that contains the point $x$ the map $f|Z:Z\to Y$ is open (resp. skeletal) at the point $x$. \end{lemma} The following lemma is a ``square'' counterpart of Lemma~\ref{l4a}. \begin{lemma}\label{l4} The square $\mathcal D$ is open (resp.
skeletal) at a point $x\in X$ provided for some subset $Z\subset X$ that contains the point $x$ and its preimage $\tilde Z=p_X^{-1}(Z)$ the square $$\xymatrix{ \tilde Z\ar[r]^{\tilde f|\tilde Z}\ar[d]_{p_X|\tilde Z}&\tilde Y\ar[d]^{p_Y}\\ Z\ar[r]_{f|Z}&Y} $$ is open (resp. skeletal) at the point $x$. \end{lemma} \section{Preliminaries on functors} In this section we prove some auxiliary results on functors in the category $\mathbf{Comp}$ of compacta. From now on we assume that $F:\mathbf{Comp}\to\mathbf{Comp}$ is a monomorphic, epimorphic, and continuous functor. For two compact Hausdorff spaces $X$ and $Y$ by $C(X,Y)$ we denote the space of continuous functions $f:X\to Y$, endowed with the compact-open topology. A proof of the following fact due to \v S\v cepin \cite[\S3.2]{Sh} can be found in \cite[2.2.3]{TZ}. \begin{lemma}\label{f-l1} For any compacta $X,Y$ the map $$F:C(X,Y)\to C(FX,FY),\;\;F:f\mapsto Ff,$$ is continuous. \end{lemma} Next, we discuss the notion of support. Let $X$ be a compact Hausdorff space. We say that a point $a\in FX$ has {\em finite support} if $a\in FA$ for some finite subspace $A\subset X$. In this case we define $\mathrm{supp}(a)$ as the intersection $$\mathrm{supp}(a)=\cap\{A:a\in FA,\;A\subset X\mbox{ is finite}\}.$$ We shall often use the following fact proved in \cite{BMZ}: \begin{lemma}\label{l:BMZ} Let $a\in FX$ be an element with finite support. If $\mathrm{supp}(a)\ne\emptyset$, then $a\in F(\mathrm{supp}(a))$. If $\mathrm{supp}(a)=\emptyset$, then $a\in FA$ for any non-empty closed subspace $A\subset X$. \end{lemma} The set of all elements with finite support in $FX$ will be denoted by $F_\omega(X)$. The following lemma was proved in \cite[2.2.1]{TZ}. \begin{lemma}\label{l:Fomega} The subset $F_\omega(X)$ is dense in $FX$. \end{lemma} For a topological space $Y$ by $\dot Y$ we shall denote the set of isolated points of $Y$.
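To illustrate the notion of support, consider the functor $P$ of probability measures, a standard normal functor; the observation below is classical and is stated only for orientation.

```latex
For $F=P$, an element $\mu\in PX$ has finite support in the above sense if and
only if $\mu$ is a finite convex combination of Dirac measures,
$$\mu=\sum_{i=1}^n\lambda_i\delta_{x_i},\qquad \lambda_i>0,\quad
\sum_{i=1}^n\lambda_i=1,$$
and in this case $\mathrm{supp}(\mu)=\{x_1,\dots,x_n\}$. Lemma~\ref{l:Fomega}
then specializes to the classical fact that the finitely supported measures
are dense in $PX$.
```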
For a surjective function $f:X\to Y$ let $$N_f=\{y\in Y:|f^{-1}(y)|>1\}=Y\setminus D_f\mbox{ and } N^f=\{x\in X:|f^{-1}(f(x))|>1\}=X\setminus D^f$$be the {\em lower and upper non-degeneracy sets} of $f$, respectively. \begin{lemma}\label{f-l2} For any skeletal map $f:X\to Y$ between compacta and any dense subset $A\subset X$, the set $$\mathcal A_0=\{a\in F_\omega(X):\mathrm{supp}(a)\subset A,\;N_{f|\mathrm{supp}(a)}\subset\dot Y\}$$ is dense in $FX$. \end{lemma} \begin{proof} By Lemma~\ref{l:Fomega}, the set $F_\omega(X)$ is dense in $FX$. So, it suffices to check that $\mathcal A_0$ is dense in $F_\omega(X)$. Fix any element $a\in F_\omega(X)$ and a neighborhood $O_a\subset F_\omega(X)$ of $a$. We need to find an element $b\in O_a\cap \mathcal A_0$. If $\mathrm{supp}(a)=\emptyset$, then $a\in \mathcal A_0\cap O_a$ by the definition of $\mathcal A_0$. So we assume that $B=\mathrm{supp}(a)$ is not empty. By Lemma~\ref{l:BMZ}, $a\in FB$. By Lemma~\ref{f-l1}, the map $$F:C(B,X)\to C(FB,FX),\;\;F:g\mapsto Fg$$is continuous and so is the map $$F_a:C(B,X)\to FX,\;\;F_a:g\mapsto Fg(a).$$ It follows from the continuity of $F_a$ that the identity inclusion $i_B:B\to X$ has a neighborhood $O(i_B)$ in the function space $C(B,X)$ such that $F_a(g)=Fg(a)\in O_a$ for any map $g\in O(i_B)$. We claim that there is a map $g\in O(i_B)$ such that $N_{f\circ g|B}\subset \dot Y$. Since the compact-open topology on $C(B,X)$ coincides with the topology of pointwise convergence (the set $B$ being finite), for each point $x\in B$ we can find a neighborhood $O_x\subset X$ such that a map $g:B\to X$ belongs to the neighborhood $O(i_B)$ provided $g(x)\in O_x$ for all $x\in B$. Let $C=B\cap f^{-1}(\dot Y)$. We claim that for each point $x\in B\setminus C$ the set $f(O_x)$ is infinite. Assuming the contrary, we can find a smaller neighborhood $U_x$ of $x$ such that $f(U_x)$ coincides with the singleton $\{f(x)\}$ which is open in $Y$ because of the skeletal property of $f$.
In this case $f(x)\in \dot Y$ and $x\in C$, which contradicts the choice of $x$. Let $B\setminus C=\{x_1,\dots,x_n\}$ be an enumeration of the set $B\setminus C$. By finite induction for every $i\le n$ choose a point $x_i'\in O_{x_i}\cap A$ such that $f(x_i')\notin f(C)\cup\{f(x_j'):j<i\}$. As $f(O_{x_i})$ is infinite and $A$ is dense in $X$, the choice of $x_i'$ is always possible. After completing the inductive construction, define a map $g:B\to X$ letting $g(x_i)=x_i'$ for $i\le n$ and $g(x)\in O_x\cap A\cap f^{-1}(f(x))$ for any $x\in C$. By the construction, $g\in O(i_B)$ and the map $f\circ g|B\setminus C$ is injective, which means that $N_{f\circ g|B}\subset f(C)\subset\dot Y$. By the choice of the neighborhood $O(i_B)$, the element $b=Fg(a)$ lies in the neighborhood $O_a$. Since $b\in F(g(B))$, we get $\mathrm{supp}(b)\subset g(B)\subset A$, witnessing that $b\in\mathcal A_0$. \end{proof} \begin{lemma}\label{f-l3} Let $f:X\to Y$ be a skeletal map between compact Hausdorff spaces. If $\dot Y\subset D_f$, then for every dense subset $A\subset X$ the set $$\mathcal A_1=\{a\in F_\omega(X):\mathrm{supp}(a)\subset A,\;f|\mathrm{supp}(a) \mbox{ is 1-to-1}\}$$is dense in $FX$. \end{lemma} \begin{proof} By Lemma~\ref{f-l2}, the set $$\mathcal A_0=\{a\in F_\omega(X):\mathrm{supp}(a)\subset A,\;N_{f|\mathrm{supp}(a)}\subset\dot Y\}$$ is dense in $FX$. Observe that for each $a\in\mathcal A_0$ we get $N_{f|\mathrm{supp}(a)}\subset f(\mathrm{supp}(a))\cap \dot Y\subset f(\mathrm{supp}(a))\cap D_f\subset D_{f|\mathrm{supp}(a)}$, which implies $N_{f|\mathrm{supp}(a)}=\emptyset$ and $a\in\mathcal A_1$. Now we see that the density of the set $\mathcal A_0$ implies the density of the set $\mathcal A_1\supset\mathcal A_0$ in $FX$. \end{proof} \section{1-Mec functors and densely open squares} In this section we assume that $F:\mathbf{Comp}\to\mathbf{Comp}$ is a 1-mec functor and study its action on densely open squares. 
Let $\mathcal D$ be a commutative square $$\xymatrix{ \tilde X\ar[r]^{\tilde f}\ar[d]_{p_X}&\tilde Y\ar[d]^{p_Y}\\ X\ar[r]_{f}&Y }$$ consisting of surjective maps between compact Hausdorff spaces. Applying the functor $F$ to this square, we obtain the commutative square $F\mathcal D$: $$\xymatrix{ F\tilde X\ar[r]^{F\tilde f}\ar[d]_{Fp_X}&F\tilde Y\ar[d]^{Fp_Y}\\ FX\ar[r]_{Ff}&FY. }$$ \begin{lemma}\label{fo-l1} If the space $X$ is first countable and the square $\mathcal D$ is open at a non-empty subset $A\subset X$, then the square $F\mathcal D$ is open at the subset $$\mathcal A_1=\{a\in F_\omega(X):\mathrm{supp}(a)\subset A,\;f|\mathrm{supp}(a) \mbox{ is 1-to-1}\}\subset FX.$$ If $\dot Y\subset D_f$ and the set $A$ is dense in $X$, then the set $\mathcal A_1$ is dense in $FX$ and hence the square $F\mathcal D$ is densely open. \end{lemma} \begin{proof} Fix any point $b\in \mathcal A_1$ and consider its support $\mathrm{supp}(b)$. If it is not empty, put $B=\mathrm{supp}(b)$. If $\mathrm{supp}(b)$ is empty, let $B=\{z\}\subset A$ be any singleton in $A$. In both cases we have that $B\subset A$, $f|B$ is injective, and $b\in FB$, see Lemma~\ref{l:BMZ}. Let $C=f(B)$ and observe that $f|B:B\to C$ is a homeomorphism. By Lemma~\ref{l3}, there is a closed subset $Z\subset X$ such that $B\subset D^{f|Z}$, $f(Z)=Y$ and $\tilde f(p_X^{-1}(Z))=\tilde Y$. Let $\tilde Z=p_X^{-1}(Z)$, $p_Z=p_X|\tilde Z$, $f_Z=f|Z$, $\tilde f_Z=\tilde f|\tilde Z$ and consider the commutative square $\mathcal D_Z$: $$\xymatrix{ \tilde Z\ar[r]^{\tilde f_Z}\ar[d]_{p_Z}&\tilde Y\ar[d]^{p_Y}\\ Z\ar[r]_{f_Z}&Y }$$that consists of surjective maps. Applying to this square the epimorphic functor $F$, we obtain the commutative square $F\mathcal D_Z$: $$\xymatrix{ F\tilde Z\ar[r]^{F\tilde f_Z}\ar[d]_{Fp_Z}&F\tilde Y\ar[d]^{Fp_Y}\\ FZ\ar[r]_{Ff_Z}&FY, }$$also consisting of surjective maps.
Taking into account that $B\subset D^{f_Z}$ and $F$ preserves finite 1-preimages, we conclude that $FB\subset D^{Ff_Z}$. By Lemma~\ref{l2}, the square $F\mathcal D_Z$ is open at $FB$. Applying Lemma~\ref{l4}, we conclude that the square $F\mathcal D$ is open at $FB$. In particular, $F\mathcal D$ is open at the point $b\in FB$. If $\dot Y\subset D_f$, then by Lemma~\ref{f-l3}, the set $\mathcal A_1$ is dense in $FX$ and hence the square $F\mathcal D$ is densely open. \end{proof} \section{1-Mec functors and skeletal maps}\label{s:fsm} In this section we study the action of 1-mec functors on some special types of skeletal maps. As in the preceding section, $F:\mathbf{Comp}\to\mathbf{Comp}$ is a 1-mec functor in the category of compact Hausdorff spaces. Our principal result is the following theorem. \begin{theorem}\label{t5n} For any surjective skeletal map $f:X\to Y$ between compact Hausdorff spaces the map $Ff:FX\to FY$ is skeletal at the subset $$\mathcal A_1=\{a\in F_\omega(X):f|\mathrm{supp}(a) \mbox{ is 1-to-1}\}.$$ If $\dot Y\subset D_f$, then the set $\mathcal A_1$ is dense in $FX$ and hence the map $Ff$ is skeletal. \end{theorem} \begin{proof} By Theorem~\ref{skel-char-comp}, the skeletal map $f:X\to Y$ can be identified with the limit map $\lim f_\alpha$ of a morphism $\vec f=\{f_\alpha\}_{\alpha\in \Sigma }:\mathcal S_X\to \mathcal S_Y$ between some $\omega$-spectra $\mathcal S_X=\{X_\alpha,p_\alpha^\beta,\Sigma\}$ and $\mathcal S_Y=\{Y_\alpha,\pi_\alpha^\beta,\Sigma\}$ with surjective limit projections such that for any $\alpha\in \Sigma $ the limit square $\mathcal D_\alpha$: $$\xymatrix{ X\ar[r]^{f}\ar[d]_{p_\alpha}&Y\ar[d]^{\pi_\alpha}\\ X_\alpha\ar[r]_{f_\alpha}&Y_\alpha }$$ is skeletal. To show that the map $Ff$ is skeletal at each point $a\in\mathcal A_1$, fix any open neighborhood $U\subset FX$ of $a$. We need to prove that the image $Ff(U)$ has non-empty interior in $FY$. 
The inclusion $a\in\mathcal A_1$ implies that the restriction $f|\mathrm{supp}(a)$ is injective. By the continuity of the functor $F$, there is an index $\alpha\in \Sigma $ and an open neighborhood $U_\alpha\subset FX_\alpha$ of the point $a_\alpha=Fp_\alpha(a)$ such that $U\supset (Fp_\alpha)^{-1}(U_\alpha)$. Replacing $\alpha$ by a larger index, if necessary, we can additionally assume that the restrictions $p_\alpha|\mathrm{supp}(a)$ and $\pi_\alpha\circ f|\mathrm{supp}(a)$ are injective. Then the map $f_\alpha|p_\alpha(\mathrm{supp}(a))$ also is injective. Since $\mathrm{supp}(a_\alpha)\subset p_\alpha(\mathrm{supp}(a))$, the restriction $f_\alpha|\mathrm{supp}(a_\alpha)$ is injective. By our assumption the limit square $\mathcal D_\alpha$ is skeletal and by Lemma~\ref{l1b} it is open at some dense subset $A_\alpha\subset X_\alpha$. Repeating the argument from the proof of Lemma~\ref{f-l2}, we can approximate the element $a_\alpha$ by an element $a_\alpha'\in U_\alpha$ such that $\mathrm{supp}(a_\alpha')\subset A_\alpha$ and the map $f_\alpha|\mathrm{supp}(a_\alpha')$ is injective. By Lemma~\ref{fo-l1}, the square $F\mathcal D_\alpha$ is open at the point $a'_\alpha$. Then for the neighborhood $U_\alpha$ of $a_\alpha'$ there is a non-empty open set $V_\alpha\subset FY_\alpha$ such that $V_\alpha\subset Ff_\alpha(U_\alpha)$ and the open subset $V=(F\pi_\alpha)^{-1}(V_\alpha)$ of the space $FY$ lies in the image $Ff((Fp_\alpha)^{-1}(U_\alpha))\subset Ff(U)$, which completes the proof of the skeletality of $Ff$ at $a$. If $\dot Y\subset D_f$, then the set $\mathcal A_1$ is dense in $FX$ by Lemma~\ref{f-l3} and hence the map $Ff:FX\to FY$ is skeletal. \end{proof} A map $f:X\to Y$ between topological spaces is called {\em irreducible} if $f(X)=Y$ but $f(Z)\ne Y$ for each proper closed subset $Z\subsetneq X$. \begin{corollary}\label{fo-l3} For each irreducible map $f:X\to Y$ between compact Hausdorff spaces the map $Ff:FX\to FY$ is skeletal. 
\end{corollary} \begin{proof} This corollary follows from Theorem~\ref{t5n} because each closed irreducible map $f:X\to Y$ is skeletal and has $\dot Y\subset D_f$. \end{proof} \section{Preimage preserving functors and skeletal maps} The following theorem implies that for normal functors $F$ the skeletality of a map $f:X\to Y$ between compacta follows from the skeletality of the map $Ff$. \begin{theorem}\label{preimage} Let $F:\mathbf{Comp}\to\mathbf{Comp}$ be a preimage preserving mec-functor such that $F1\ne F2$. A surjective map $f:X\to Y$ between compact Hausdorff spaces is skeletal if the map $Ff:FX\to FY$ is skeletal. \end{theorem} \begin{proof} Assume that the map $Ff$ is skeletal. To show that $f:X\to Y$ is skeletal, fix a nowhere dense subset $N\subset Y$. Replacing $N$ by its closure, we can assume that $N$ is closed in $Y$. We need to show that its preimage $f^{-1}(N)$ is nowhere dense in $X$. Assume conversely that the closed set $f^{-1}(N)$ contains some non-empty open set $U$. The set $F(X\setminus U)$ is closed in $FX$ and its complement $\mathcal U=FX\setminus F(X\setminus U)$ is open in $FX$. Let us show that the set $\mathcal U$ is not empty. Fix any point $u\in U$ and consider the closed subspace $Z=(X\setminus U)\cup \{u\}$ of $X$ and the continuous map $p:Z\to 2=\{0,1\}$ such that $p^{-1}(0)=X\setminus U$ and $p^{-1}(1)=\{u\}$. By our hypothesis, $F1\ne F2$. So we can find an element $b'\in F2\setminus F1$. Since the functor $F$ is epimorphic, there is an element $a'\in FZ$ such that $Fp(a')=b'$. The element $a'$ does not belong to $F(X\setminus U)$, which implies that the sets $FZ\setminus F(X\setminus U)$ and $\mathcal U=FX\setminus F(X\setminus U)$ are not empty. Since the map $Ff:FX\to FY$ is skeletal, the image $Ff(\mathcal U)$ of the non-empty open set $\mathcal U\subset FX$ has non-empty interior in $FY$ and hence contains some non-empty open subset $\mathcal V\subset Ff(\mathcal U)$ of the space $FY$. 
Since the set $N$ is nowhere dense in $Y$, the subspace $F_\omega(Y\setminus N)=\{a\in F_\omega(Y):\mathrm{supp}(a)\subset Y\setminus N\}$ is dense in $FY$. So, we can find an element $b\in F_\omega(Y)\cap\mathcal V$ with finite support $\mathrm{supp}(b)\subset Y\setminus N$. Let $A=\mathrm{supp}(b)$ if $\mathrm{supp}(b)\ne \emptyset$; if $\mathrm{supp}(b)=\emptyset$, let $A=\{y\}\subset Y\setminus N$ be any singleton in $Y\setminus N$. By \cite{BMZ}, $b\in F(A)\subset F(Y\setminus N)$. Since $b\in\mathcal V\subset Ff(\mathcal U)$, there is an element $a\in\mathcal U=FX\setminus F(X\setminus U)$ with $Ff(a)=b$. Observe that $f^{-1}(A)\subset f^{-1}(Y\setminus N)=X\setminus f^{-1}(N)\subset X\setminus U$. Since the functor $F$ preserves preimages, we conclude that $a\in F(f^{-1}(A))\subset F(X\setminus U)$, which contradicts the choice of $a$. This contradiction shows that the set $f^{-1}(N)$ is nowhere dense in $X$ and hence the map $f$ is skeletal. \end{proof} \section{Proof of Theorem~\ref{s-map}}\label{s:s-map} To prove the ``1-mec'' part of Theorem~\ref{s-map}, assume that $F:\mathbf{Comp}\to\mathbf{Comp}$ is a 1-mec functor such that for each compact zero-dimensional space $Z$ the map $F{\mathit 2}_Z:F(Z\oplus 2)\to F(Z\oplus 1)$ is skeletal. \begin{lemma}\label{pm-l0} For any surjective map $f:A\to B$ between finite discrete spaces and any compact zero-dimensional space $Z$ the map $F(\mathrm{id}_Z\oplus f):F(Z\oplus A)\to F(Z\oplus B)$ is skeletal. \end{lemma} \begin{proof} Let $n=|A|-|B|$ and let $(A_i)_{i=0}^n$ be an increasing sequence of subsets of $A$ such that $|A_0|=|B|$, $f(A_0)=B$, $A_n=A$ and $|A_{i+1}\setminus A_i|=1$ for every $i<n$. For every positive integer $i\le n$ choose a surjective map $f_i:A_{i}\to A_{i-1}$ such that $f\circ f_i=f|A_{i}$. 
Identifying $A_0$ with $B$ via the bijection $f|A_0$, observe that $$\mathrm{id}_Z\oplus f=(\mathrm{id}_Z\oplus f_1)\circ\cdots\circ(\mathrm{id}_Z\oplus f_n)$$ and for every $i\le n$ the map $\mathrm{id}_Z\oplus f_i$ is homeomorphic to the map $\mathit 2_{Z_i}$ where $Z_i=Z\oplus (A_{i-1}\cap D_{f_i})$. By our assumption the map $F(\mathit 2_{Z_i})$ is skeletal and so is its homeomorphic copy $F(\mathrm{id}_Z\oplus f_i)$. Since the composition of skeletal maps between compacta is skeletal, the map $$F(\mathrm{id}_Z\oplus f)=F(\mathrm{id}_Z\oplus f_1)\circ\cdots\circ F(\mathrm{id}_Z\oplus f_n)$$ is skeletal. \end{proof} \begin{lemma}\label{pm-l1} For any surjective map $f:A\to B$ between finite discrete spaces and any compact space $X$ the map $F(\mathrm{id}_X\oplus f):F(X\oplus A)\to F(X\oplus B)$ is skeletal. \end{lemma} \begin{proof} By \cite[3.2.2, 3.1.C]{En}, the compact space $X$ is the image of a compact zero-dimensional space $Z$ under an irreducible map $\xi:Z\to X$. Applying to the commutative diagram $$\xymatrix{ X\oplus A\ar[rr]^{\mathrm{id}_X\oplus f}&&X\oplus B\\ Z\oplus A\ar[rr]_{\mathrm{id}_Z\oplus f}\ar[u]^{\xi\oplus\mathrm{id}_A}&&Z\oplus B\ar[u]_{\xi\oplus\mathrm{id}_B} }$$the functor $F$, we obtain the commutative diagram $$\xymatrix{ F(X\oplus A)\ar[rr]^{F(\mathrm{id}_X\oplus f)}&&F(X\oplus B)\\ F(Z\oplus A)\ar[rr]_{F(\mathrm{id}_Z\oplus f)}\ar[u]^{F(\xi\oplus\mathrm{id}_A)}&&F(Z\oplus B)\ar[u]_{F(\xi\oplus\mathrm{id}_B)} }$$in which the map $F(\xi\oplus \mathrm{id}_A)$ is surjective, $F(\mathrm{id}_Z\oplus f)$ is skeletal by Lemma~\ref{pm-l0} and $F(\xi\oplus\mathrm{id}_B)$ is skeletal by Corollary~\ref{fo-l3}. Consequently, the map $F(\mathrm{id}_X\oplus f)$ is skeletal. \end{proof} The following lemma yields the ``1-mec'' part of Theorem~\ref{s-map}. \begin{lemma}\label{pm-l2} For any skeletal surjection $f:X\to Y$ between compacta the map $Ff:FX\to FY$ is skeletal. 
\end{lemma} \begin{proof} By Lemma~\ref{f-l2}, the set $$\mathcal A_0=\{a\in F_\omega(X):N_{f|\mathrm{supp}(a)}\subset \dot Y\}$$ is dense in $FX$. So, the skeletality of the map $Ff$ will follow as soon as we check its skeletality at each point $a\in\mathcal A_0$. If $f|\mathrm{supp}(a)$ is injective, then $Ff$ is skeletal at $a$ by Theorem~\ref{t5n}. So, we assume that $f|\mathrm{supp}(a)$ is not injective. In this case the support $A=\mathrm{supp}(a)$ is not empty and $a\in FA$ by Lemma~\ref{l:BMZ}. By our assumption, $N_{f|A}\subset\dot Y$ and hence the complement $Y\setminus N_{f|A}$ is an open-and-closed subset of $Y$. Consider the closed subspace $\tilde X=f^{-1}(Y\setminus N_{f|A})\cup N^{f|A}$ of $X$ and the topological sum $\tilde Y=(Y\setminus N_{f|A})\oplus N^{f|A}$. Next, consider the commutative diagram $$\xymatrix{ X\ar[r]^{f}&Y\\ \tilde X\ar[r]_{\tilde f}\ar[u]^{i}&\tilde Y\ar[u]_{h} }$$where $i:\tilde X\to X$ is the embedding, $\tilde f$ is defined by $\tilde f|\tilde X\setminus N^{f|A}=f|\tilde X\setminus N^{f|A}$ and $\tilde f|N^{f|A}=\mathrm{id}$ while $h:\tilde Y\to Y$ is defined by $h|Y\setminus N_{f|A}=\mathrm{id}$ and $h|N^{f|A}=f|N^{f|A}$. Applying the functor $F$ to this diagram we get the commutative diagram $$\xymatrix{ FX\ar[r]^{Ff}&FY\\ F\tilde X\ar[r]_{F\tilde f}\ar[u]^{Fi}&F\tilde Y\ar[u]_{Fh} }$$ Since $\tilde f$ is skeletal and the restriction $\tilde f|A$ is injective, the map $F\tilde f:F\tilde X\to F\tilde Y$ is skeletal at $a$ by Theorem~\ref{t5n}. By Lemma~\ref{pm-l1}, the map $Fh$ is skeletal. Consequently, the composition $Fh\circ F\tilde f$ is skeletal at $a$ and then $Ff$ is skeletal at $a$ by Lemma~\ref{l4a}. \end{proof} To prove the ``1-mecw'' part of Theorem~\ref{s-map}, assume that $F:\mathbf{Comp}\to\mathbf{Comp}$ is a 1-mecw functor such that for each zero-dimensional compact metrizable space $Z$ the map $F{\mathit 2}_Z:F(Z\oplus 2)\to F(Z\oplus 1)$ is skeletal. 
The skeletality of the functor $F$ will follow from the ``1-mec'' part of Theorem~\ref{s-map} as soon as we check that for each zero-dimensional compact space $Z$ the map $F{\mathit 2}_Z:F(Z\oplus 2)\to F(Z\oplus 1)$ is skeletal. For this we shall apply the Characterization Theorem~\ref{skel-char-comp}. By (the proof of) Proposition~1.3.5 of \cite{Chi}, the zero-dimensional space $Z$ is homeomorphic to the limit $\lim \mathcal S_Z$ of an $\omega$-spectrum $\mathcal S_Z=\{Z_\alpha,p_\alpha^\beta,\Sigma\}$ with surjective limit projections, consisting of zero-dimensional compact metrizable spaces $Z_\alpha$, $\alpha\in \Sigma$. For $n\in\{1,2\}$, consider the inverse spectrum $\mathcal S_Z\oplus n=\{Z_\alpha\oplus n,p_\alpha^\beta\oplus\mathrm{id}_n,\Sigma\}$, where $\mathrm{id}_n:n\to n$ denotes the identity map of the discrete space $n=\{0,\dots,n-1\}$. Next, consider the skeletal morphism $\{\mathit 2_{Z_\alpha}\}_{\alpha\in \Sigma}:\mathcal S_Z\oplus 2\to\mathcal S_Z\oplus 1$. Applying to this morphism the mecw functor $F$, we obtain a morphism $\{F\mathit 2_{Z_\alpha}\}_{\alpha\in \Sigma}:F(\mathcal S_Z\oplus 2)\to F(\mathcal S_Z\oplus 1)$. By our assumption, for every $\alpha\in \Sigma$ the map $F\mathit 2_{Z_\alpha}:F(Z_\alpha\oplus 2)\to F(Z_\alpha\oplus 1)$ is skeletal. Then Theorem~\ref{skel-char-comp} guarantees that the limit map $\lim F\mathit 2_{Z_\alpha}:\lim F(\mathcal S_Z\oplus 2)\to \lim F(\mathcal S_Z\oplus 1)$ of the skeletal morphism $\{F\mathit 2_{Z_\alpha}\}_{\alpha\in\Sigma}$ is skeletal. By the continuity of the functor $F$, this map is homeomorphic to the map $F\mathit 2_Z:F(Z\oplus 2)\to F(Z\oplus 1)$. \section{Proof of Theorem~\ref{t1.5n}} We need to prove that a 1-mecw functor $F:\mathbf{Comp}\to\mathbf{Comp}$ is skeletal if it is finitely bicommutative and finitely skeletal. By Theorem~\ref{s-map}, it suffices to check that for any zero-dimensional compact metrizable space $Z$ the map $F\mathit 2_Z:F(Z\oplus 2)\to F(Z\oplus 1)$ is skeletal. 
Write the space $Z$ as the limit of an inverse spectrum $\mathcal S_Z=\{Z_n,p_n^m,\omega\}$ consisting of finite spaces $Z_n$, $n\in\omega$, and surjective bonding maps $p_n^m:Z_m\to Z_n$, $n\le m$. Then the map $\mathit 2_Z:Z\oplus 2\to Z\oplus 1$ can be identified with the limit map of the morphism $\{\mathit 2_{Z_n}\}_{n\in\omega}:\mathcal S_Z\oplus 2\to\mathcal S_Z\oplus 1$ between the inverse spectra $\mathcal S_Z\oplus 2=\{Z_n\oplus 2,p_n^m\oplus \mathrm{id}_2,\omega\}$ and $\mathcal S_Z\oplus 1=\{Z_n\oplus 1,p_n^m\oplus \mathrm{id}_1,\omega\}$. Applying to this morphism the continuous functor $F$, we obtain a morphism $\{F\mathit 2_{Z_n}\}_{n\in\omega}:F(\mathcal S_Z\oplus 2)\to F(\mathcal S_Z\oplus 1)$ between the inverse spectra $F(\mathcal S_Z\oplus 2)=\{F(Z_n\oplus 2),F(p_n^m\oplus \mathrm{id}_2),\omega\}$ and $F(\mathcal S_Z\oplus 1)=\{F(Z_n\oplus 1),F(p_n^m\oplus \mathrm{id}_1),\omega\}$. The finite skeletality of the functor $F$ implies that the morphism $\{F\mathit 2_{Z_n}\}_{n\in\omega}$ consists of skeletal maps $F\mathit 2_{Z_n}:F(Z_n\oplus 2)\to F(Z_n\oplus 1)$ for all $n\in\omega$. It is clear that for any $n\le m$ the bonding $\downarrow_n^m$-square $\mathcal D_n^m$ $$\xymatrix{ Z_m\oplus 2\ar_{p_n^m\oplus\mathrm{id}_2}[d]\ar^{\mathit 2_{Z_m}}[r]&Z_m\oplus 1\ar^{p_n^m\oplus\mathrm{id}_1}[d]\\ Z_n\oplus 2\ar_{\mathit 2_{Z_n}}[r]&Z_n\oplus 1} $$is bicommutative and consists of finite spaces. Since the functor $F$ is finitely bicommutative, the bonding $\downarrow_n^m$-square $F\mathcal D_n^m$ of the morphism $\{F\mathit 2_{Z_n}\}_{n\in\omega}$ also is bicommutative. 
By Proposition 2.5 of \cite{Sh}, the bicommutativity of the bonding $\downarrow_n^m$-squares $F\mathcal D_n^m$, $n\le m$, implies the bicommutativity of the limit $\downarrow_n$-square $F\mathcal D_n$ $$\xymatrix{ F(Z\oplus 2)\ar_{F(p_n\oplus\mathrm{id}_2)}[d]\ar^{F(\mathit 2_{Z})}[r]&F(Z\oplus 1)\ar^{F(p_n\oplus\mathrm{id}_1)}[d]\\ F(Z_n\oplus 2)\ar_{F(\mathit 2_{Z_n})}[r]&F(Z_n\oplus 1)} $$ for every $n\in\omega$. This fact combined with the skeletality of the map $F(\mathit 2_{Z_n})$ implies that the limit $\downarrow_n$-square $F\mathcal D_n$ is skeletal. Now Proposition 3.1 of \cite{BKM} guarantees that the limit map $F\mathit 2_Z:F(Z\oplus 2)\to F(Z\oplus 1)$ of the morphism $\{F\mathit 2_{Z_n}\}_{n\in\omega}:F(\mathcal S_Z\oplus 2)\to F(\mathcal S_Z\oplus 1)$ is skeletal. \section{Proof of Theorem~\ref{s-space}}\label{s:s-space} Let $F:\mathbf{Comp}\to\mathbf{Comp}$ be a 1-mecw functor. Given a skeletally generated compact Hausdorff space $X$, we need to prove that the space $FX$ is skeletally generated. Represent $X$ as the limit of a continuous $\omega$-spectrum $\mathcal S=\{X_\alpha,\pi^\beta_\alpha,\Sigma\}$ with surjective limit projections $\pi_\alpha:X\to X_\alpha$, $\alpha\in\Sigma$. By \cite{kp8}, the space $X$, being skeletally generated, has countable cellularity. Consequently, the set $\dot X$ of isolated points of $X$ is at most countable. For each isolated point $x\in {\dot X}$ of $X$ we can find an index $\alpha_x\in\Sigma$ such that $\{x\}=\pi_{\alpha_x}^{-1}(U_x)$ for some open set $U_x\subset X_{\alpha_x}$, which must coincide with the singleton $\{\pi_{\alpha_x}(x)\}$. Then for any $\alpha\ge\sup\{\alpha_x:x\in\dot X\}$ we get $\pi_\alpha(\dot X)\subset D_{\pi_\alpha}\cap \dot X_\alpha$. Replacing the index set $\Sigma$ by its cofinal subset $\{\alpha\in \Sigma:\alpha\ge \sup\{\alpha_x:x\in\dot X\}\}$, if necessary, we can assume that $\pi_\alpha(\dot X)\subset D_{\pi_\alpha}\cap \dot X_\alpha$ for all $\alpha\in\Sigma$. 
\begin{claim}\label{ps-cl} The set $\Sigma'=\{\alpha\in \Sigma:\pi_\alpha(\dot X)=\dot X_\alpha\}$ is closed and cofinal in $\Sigma$. \end{claim} \begin{proof} First we prove that $\Sigma'$ is closed in $\Sigma$. Given a chain $C\subset\Sigma'$ having the supremum $\sup C$ in $\Sigma$, we need to show that $\sup C\in\Sigma'$. If $\sup C\in C$, then there is nothing to prove. So we assume that $\gamma=\sup C\notin C$. In this case by the continuity of the spectrum $\mathcal S$, the space $X_\gamma$ is the limit of the inverse subspectrum $\mathcal S|C=\{X_\alpha,\pi^\beta_\alpha,C\}$. We need to prove that $\dot X_\gamma\subset\pi_\gamma(\dot X)$. Take any isolated point $x\in \dot X_\gamma$. By the definition of the topology of the inverse limit $X_\gamma=\lim\mathcal S|C$, there is an index $\alpha\in C$ such that $\{x\}=(\pi^\gamma_\alpha)^{-1}(y)$ for some isolated point $y\in X_\alpha$. Since $y\in \dot X_\alpha=\pi_\alpha(\dot X)$, there is a point $z\in \dot X$ with $y=\pi_\alpha(z)$. Now consider the point $x'=\pi_\gamma(z)\in X_\gamma$ and observe that $\pi^\gamma_\alpha(x')=\pi^\gamma_\alpha(\pi_\gamma(z))=\pi_\alpha(z)=y=\pi^\gamma_\alpha(x)$ and $y\in \dot X_\alpha=\pi_\alpha(\dot X)\subset D_{\pi_\alpha}\subset D_{\pi^\gamma_\alpha},$ which implies $x=x'\in \pi_\gamma(\dot X)$. Next, we prove that $\Sigma'$ is cofinal in $\Sigma$. Given any $\alpha_0\in\Sigma$ we need to find $\alpha\in\Sigma'$ with $\alpha\ge\alpha_0$. For any isolated point $x\in \dot X_{\alpha_0}\setminus \pi_{\alpha_0}(\dot X)$ the preimage $\pi^{-1}_{\alpha_0}(x)$ is an open subset of $X$ containing no isolated points of $X$. 
Since the metrizable compactum $X_{\alpha_0}$ has at most countably many isolated points, and the index set $\Sigma$ is $\omega$-complete, there is an index $\alpha_1\ge \alpha_0$ such that for each isolated point $x\in \dot X_{\alpha_0}\setminus\pi_{\alpha_0}(\dot X)$ the preimage $(\pi^{\alpha_1}_{\alpha_0})^{-1}(x)$ is not a singleton, which means that $\dot X_{\alpha_0}\setminus\pi_{\alpha_0}(\dot X)\subset N_{\pi^{\alpha_1}_{\alpha_0}}$. Proceeding by induction, we can construct an increasing chain $(\alpha_n)_{n\in\omega}$ in $\Sigma$ such that $\dot X_{\alpha_n}\setminus \pi_{\alpha_n}(\dot X)\subset N_{\pi^{\alpha_{n+1}}_{\alpha_n}}$ for all $n\in\omega$. Since $\Sigma$ is $\omega$-complete, the chain $(\alpha_n)_{n\in\omega}$ has the least upper bound $\alpha_\omega=\sup_{n\in\omega}\alpha_n$ in $\Sigma$. We claim that $\alpha_\omega\in\Sigma'$. Given any isolated point $x\in \dot X_{\alpha_\omega}$, we need to prove that $x\in \pi_{\alpha_\omega}(\dot X)$. The continuity of the spectrum $\mathcal S$ guarantees that the space $X_{\alpha_\omega}$ is the limit of the inverse sequence $\{X_{\alpha_n},\pi^{\alpha_{n+1}}_{\alpha_n},\omega\}$. By the definition of the topology of the inverse limit, there is a number $n\in\omega$ such that $\{x\}=(\pi^{\alpha_\omega}_{\alpha_n})^{-1}(U_n)$ for some open set $U_n\subset X_{\alpha_n}$ which must coincide with the singleton $\{y\}$ of the isolated point $y=\pi^{\alpha_\omega}_{\alpha_n}(x)$. We claim that $y\in\pi_{\alpha_n}(\dot X)$. Otherwise, the choice of the index $\alpha_{n+1}$ guarantees that the preimage $(\pi^{\alpha_{n+1}}_{\alpha_n})^{-1}(y)$ is not a singleton and then $$(\pi^{\alpha_\omega}_{\alpha_{n+1}})^{-1}((\pi^{\alpha_{n+1}}_{\alpha_n})^{-1}(y))=(\pi^{\alpha_\omega}_{\alpha_n})^{-1}(y)=\{x\}$$is not a singleton, which is a contradiction. Thus $y=\pi_{\alpha_n}(z)$ for some isolated point $z\in \dot X$. 
Taking into account that $y\in\pi_{\alpha_n}(\dot X)\subset D_{\pi_{\alpha_n}}\subset D_{\pi^{\alpha_\omega}_{\alpha_n}}$ and $\pi^{\alpha_\omega}_{\alpha_n}(x)=y=\pi^{\alpha_\omega}_{\alpha_n}(\pi_{\alpha_\omega}(z))$, we conclude that $x=\pi_{\alpha_\omega}(z)\in\pi_{\alpha_\omega}(\dot X)$. \end{proof} It follows from Claim~\ref{ps-cl} that $X$ is the limit of the $\omega$-spectrum $\mathcal S=\{X_\alpha,\pi^\beta_\alpha,\Sigma'\}$ consisting of metrizable compacta and surjective skeletal bonding projections and such that $\dot X_{\alpha}\subset D_{\pi_\alpha}\subset D_{\pi^\beta_\alpha}$ for all $\alpha\le\beta$ in $\Sigma'$. By Theorem~\ref{t5n}, the latter condition guarantees that the map $F\pi^\beta_\alpha:FX_\beta\to FX_\alpha$ is skeletal. Since the functor $F$ is epimorphic, continuous and preserves weight, the space $FX$ is skeletally generated, being the limit of the continuous $\omega$-spectrum $\{FX_\alpha,F\pi^\beta_\alpha,\Sigma'\}$ with surjective skeletal bonding projections $F\pi^\beta_\alpha:FX_\beta\to FX_\alpha$. \section{Proof of Theorem~\ref{ps-space}}\label{s:ps-space} Assume that $F:\mathbf{Comp}\to\mathbf{Comp}$ is a preimage preserving mecw-functor with $F1\ne F2$. We need to prove that a compact Hausdorff space $X$ is skeletally generated if and only if so is the space $FX$. If $X$ is skeletally generated, then by Theorem~\ref{s-space}, so is the space $FX$. Now assume conversely that the space $FX$ is skeletally generated. Write the space $X$ as the limit of an inverse $\omega$-spectrum $\mathcal S=\{X_\alpha,p_\alpha^\beta,\Sigma\}$ with surjective bonding projections. Applying to this spectrum the functor $F$, we get the inverse $\omega$-spectrum $F\mathcal S=\{FX_\alpha,Fp_\alpha^\beta,\Sigma\}$. By the continuity of the functor $F$, the limit space of the spectrum $F\mathcal S$ can be identified with $FX$. The space $FX$, being skeletally generated, is the limit of an inverse $\omega$-spectrum with skeletal bonding projections. 
By the Spectral Theorem of \v S\v cepin \cite{Sh}, \cite[1.3.4]{Chi}, we can assume that the latter spectrum coincides with the subspectrum $F\mathcal S|\Sigma'=\{FX_\alpha,Fp^\beta_\alpha,\Sigma'\}$ for some $\omega$-closed cofinal subset $\Sigma'$ of the index set $\Sigma$. According to Theorem~\ref{preimage}, the skeletality of the maps $Fp_\alpha^\beta$ implies the skeletality of the maps $p^\beta_\alpha$ for any $\alpha\le \beta$ in $\Sigma'$. Consequently, the space $X$ is skeletally generated, being homeomorphic to the limit space of the inverse spectrum $\mathcal S|\Sigma'=\{X_\alpha,p_\alpha^\beta,\Sigma'\}$ with surjective skeletal bonding projections. \section{Some Examples and Open Problems}\label{s:eop} In this section we shall present examples of skeletal and non-skeletal functors. For a natural number $n$ and a mec functor $F:\mathbf{Comp}\to\mathbf{Comp}$ let $F_n$ be the subfunctor of $F$ assigning to each compact space $X$ the closed subspace $$F_n(X)=\{a\in FX:\exists \xi\in C(n,X)\mbox{ such that }a\in F\xi(Fn)\}$$of $FX$. First we observe that subfunctors $F_n$ of open functors need not be skeletal. \begin{example}\label{ex:finitary} For the open functor $P:\mathbf{Comp}\to\mathbf{Comp}$ of probability measures and every natural number $n\ge 2$ the subfunctor $P_n$ is not skeletal. \end{example} This can be shown by applying Theorem~\ref{s-map}. The non-skeletal functors $P_n$ are not finitary. We recall that a functor $F:\mathbf{Comp}\to\mathbf{Comp}$ is {\em finitary} if for any finite discrete space $X$ the space $FX$ is finite. A typical example of a finitary functor is the functor $\exp$ of hyperspace, see \cite[2.1.1]{TZ}. This functor is open according to \cite[2.10.11]{TZ}. \begin{example} For every $n\ge 3$ the subfunctor $\exp_n$ of the hyperspace functor is normal and finitary but not skeletal. \end{example} By Corollary~\ref{op->skel}, each open 1-mec functor is skeletal. 
Now we present three examples showing that the converse implication does not hold. By Proposition 2.10.1 of \cite{TZ}, a normal functor $F:\mathbf{Comp}\to\mathbf{Comp}$ with finite supports is bicommutative if and only if $F$ is finitely bicommutative. In \cite[p.85]{TZ} A.~Teleiko and M.~Zarichnyi constructed an example of a finitary normal functor $F:\mathbf{Comp}\to\mathbf{Comp}$, which is finitely bicommutative but not bicommutative. Applying to this functor Theorems~\ref{t1.5n} and \ref{scepin}, we get: \begin{example}\label{ex:TZ} There is a finitary normal functor $F:\mathbf{Comp}\to\mathbf{Comp}$ which is finitely bicommutative and skeletal but is not bicommutative and hence not open. \end{example} By Proposition 2.10.1 of \cite{TZ}, the functor from Example~\ref{ex:TZ} has infinite degree. There is also a finitary {\em weakly normal} functor of finite degree, which is skeletal but not open. By $\lambda:\mathbf{Comp}\to\mathbf{Comp}$ we denote the functor of superextension, see \cite[2.1.2]{TZ}. It is known that the functor $\lambda$ is open, finitary, weakly normal, preserves 1-preimages but fails to preserve preimages, see \cite{KR} and Propositions 2.3.2, 2.10.13 of \cite{TZ}. By \cite[2.10.19]{TZ}, for every $n\ge 3$ the subfunctor $\lambda_n$ of $\lambda$ is not open. Using the characterization Theorem~\ref{s-map}, one can easily check that the functor $\lambda_3$ is skeletal. Thus we obtain another example: \begin{example}\label{e:lambda3} The finitary weakly normal functor $\lambda_3$ is skeletal but not open. \end{example} The functor $\lambda_3$ is finitary and has finite degree but is not normal. Our final example is a normal functor of finite degree which is skeletal but not open. \begin{example}\label{e:PDelta} The functor $P_3$ contains a normal subfunctor $P_\Delta$, which is skeletal but not open. 
\end{example} \begin{proof} In the standard 2-simplex $\Delta^2=\{(\alpha,\beta,\gamma)\in[0,1]^3:\alpha+\beta+\gamma=1\}$ consider the closed subsets $$\begin{aligned} \Delta_0&=\big\{(\alpha,\beta,\gamma)\in\Delta^2:\max\{\alpha,\beta,\gamma\}=1\big\},\\ \Delta_1&=\big\{(\alpha,\beta,\gamma)\in\Delta^2:\min\{\alpha,\beta,\gamma\}=0,\;\; \max\{\alpha,\beta,\gamma\}\le\frac{11}{12}\big\},\\ \Delta_2&=\big\{(\alpha,\beta,\gamma)\in\Delta^2:\min\{\alpha,\beta,\gamma\}\ge\frac1{12},\;\; \max\{\alpha,\beta,\gamma\}\ge \frac34\big\}, \end{aligned}$$ and their union $\Delta=\Delta_0\cup\Delta_1\cup\Delta_2$, which looks as follows: \begin{picture}(150,140)(-170,-10) \put(0,0){\circle*{3}} \put(5,10){\line(1,2){50}} \put(60,119){\circle*{3}} \put(120,0){\circle*{3}} \put(115,10){\line(-1,2){50}} \put(10,0){\line(1,0){100}} \put(14,10){{\huge $\blacktriangle$}} \put(92,10){{\huge $\blacktriangle$}} \put(53,90){{\huge $\blacktriangle$}} \end{picture} Now consider the subfunctor $P_\Delta\subset P$ of the functor of probability measures assigning to each compact space $X$ the closed subspace $$P_\Delta(X)=\{\alpha\delta_x+\beta\delta_y+\gamma\delta_z:(\alpha,\beta,\gamma)\in\Delta,\;x,y,z\in X\}\subset P(X).$$ Here $\delta_x$ stands for the Dirac measure concentrated at the point $x$. One can check that $P_\Delta$ is a normal functor of degree $\deg P_\Delta=3$. In fact, $P_\Delta$ is a subfunctor of the functor $P_3\subset P$. Theorem 2.10.21 of \cite{TZ} characterizing open normal functors of finite degree implies that the functor $P_\Delta$ is not open. Applying the characterization Theorem~\ref{s-map}, one can check that the functor $P_\Delta$ is skeletal. \end{proof} Examples~\ref{ex:finitary}--\ref{e:PDelta} suggest the following open \begin{problem} Assume that a finitary normal functor $F:\mathbf{Comp}\to\mathbf{Comp}$ of finite degree is skeletal. Is $F$ open? Equivalently, is $F$ finitely bicommutative? 
\end{problem} Let us also ask some other questions about the skeletality of functors. We shall say that a functor $F:\mathbf{Comp}\to\mathbf{Comp}$ is ({\em finitely}) {\em square-skeletal} if for each skeletal square $\mathcal D$ consisting of continuous surjective maps between (finite) compact spaces the square $F\mathcal D$ is skeletal. \begin{proposition}\label{p:s2} Let $F:\mathbf{Comp}\to\mathbf{Comp}$ be an epimorphic functor. \begin{enumerate} \item If $F$ is (finitely) square-skeletal, then $F$ is (finitely) skeletal. \item If $F$ is (finitely) bicommutative and (finitely) skeletal, then $F$ is (finitely) square-skeletal. \item If $F$ is finitary, then $F$ is finitely square-skeletal if and only if $F$ is finitely bicommutative, in which case $F$ is skeletal. \end{enumerate} \end{proposition} \begin{proof} 1,2. The first two statements follow from Remarks~\ref{rem1} and \ref{rem1a}, respectively. \smallskip 3. The third statement follows from Theorem~\ref{t1.5n} and an observation that a commutative square consisting of epimorphisms between finite spaces is skeletal if and only if it is bicommutative. \end{proof} Proposition~\ref{p:s2} suggests two more problems: \begin{problem}\label{prob:2} Is each (finitary) skeletal normal functor $F:\mathbf{Comp}\to\mathbf{Comp}$ (finitely) square-skeletal? \end{problem} \begin{problem} Is a normal functor $F:\mathbf{Comp}\to\mathbf{Comp}$ skeletal if it is finitely square-skeletal? {\rm (Theorem~\ref{t1.5n} implies that the answer is affirmative if the functor $F$ is finitary).} \end{problem} It is clear that each functor that preserves (1-)preimages preserves finite (1-)preimages. We do not know if the converse statement is true. \begin{problem} Does a mec-functor $F:\mathbf{Comp}\to\mathbf{Comp}$ preserve (1-)preimages if $F$ preserves finite (1-)preimages? \end{problem} \section{Acknowledgements} The authors would like to thank Vesko Valov for valuable comments concerning skeletally generated compacta.
4544 Xanthus, also designated 1989 FB, is a main-belt asteroid discovered on 31 March 1989 by the two American astronomers Norman G. Thomas and Henry E. Holt at the Palomar Observatory. It is named after Xanthus in Greek mythology. The asteroid has a diameter of approximately 1 kilometer. It belongs to the Apollo group of asteroids. References Main-belt asteroids Apollo asteroids Minor planets named after mythological figures Astronomical discoveries by N. G. Thomas Astronomical discoveries by H. E. Holt Astronomical discoveries of 1989 Near-Earth objects
How to disable a Chrome keyboard shortcut Chrome, like any other browser, has keyboard shortcuts for common features. The Ctrl+D keyboard shortcut lets you bookmark the current page, Ctrl+H opens the History page, Ctrl+J opens Downloads, etc. Not every page can be opened with a keyboard shortcut, which means Chrome doesn't monopolize too many of them. That said, you can disable a Chrome keyboard shortcut if you find it intrusive or it interferes with a web app that you use. Here's how. Disable Chrome keyboard shortcut To disable a keyboard shortcut in Chrome, you need to install an extension called Shortkeys. The extension can configure new shortcuts in Chrome but, more importantly, it can disable any shortcut that you want. Once you've installed the extension, click its icon and select Options from the context menu. On the extension's options window, click the Add button. In the Keyboard shortcut field, enter the Chrome keyboard shortcut that you want to disable. For example, if you want to disable the Ctrl+D keyboard shortcut, which bookmarks the current tab, enter that in this field. In the Behavior field, open the dropdown and scroll through the list of options until you find the Other section. Under this section, there's an option called 'Do Nothing'. Select it. For a Chrome keyboard shortcut, selecting this option comes with a warning. The extension tells you that the keyboard shortcut will be disabled on all pages except Chrome's internal pages. This includes the new tab page, the history page, the downloads page, etc. For all other pages though, the keyboard shortcut will be disabled. Click Save and visit any website of your choice. So long as you're not on a Chrome internal page, the keyboard shortcut will no longer work. You can use this extension to disable as many keyboard shortcuts as you want. As to why this doesn't work for internal Chrome pages, it's not a limitation of the extension. It has to do with Chrome. 
Chrome doesn't allow extensions to run on its internal pages as a security measure. If extensions were allowed to run on an internal page, and one turned out to be malicious, not only could it hijack your browser, it could also prevent you from resetting the browser. That's why it's necessary to prevent extensions from running on internal pages, which results in these limitations. Even if you were to install an extension that can modify the new tab page, a shortcut-disabling extension will still not work on the new tab page.
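Under the hood, an extension like Shortkeys disables a shortcut by listening for the key combination in a content script and cancelling the event before the browser acts on it. The sketch below shows that general mechanism; the `matchesShortcut` helper and the `"ctrl+d"` string format are illustrative assumptions, not the extension's actual code.

```javascript
// Minimal sketch of a shortcut-blocking content script (assumed helper,
// not Shortkeys' real API).

// Compare a keydown event's modifier flags and key against a shortcut
// description like "ctrl+d" or "ctrl+shift+h".
function matchesShortcut(event, shortcut) {
  const parts = shortcut.toLowerCase().split("+");
  const key = parts[parts.length - 1];
  return (
    event.key.toLowerCase() === key &&
    event.ctrlKey === parts.includes("ctrl") &&
    event.altKey === parts.includes("alt") &&
    event.shiftKey === parts.includes("shift")
  );
}

// In a content script the extension swallows the combination before the
// browser handles it. (Guarded so the snippet also runs outside a browser.)
if (typeof document !== "undefined") {
  document.addEventListener("keydown", (event) => {
    if (matchesShortcut(event, "ctrl+d")) {
      event.preventDefault(); // "Do nothing" instead of bookmarking
    }
  });
}
```

Note that this only works for shortcuts the page gets to see first, such as Ctrl+D; combinations Chrome reserves at the browser level, like Ctrl+T or Ctrl+W, never reach the page, which is the same reason internal pages are off limits.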
Johann Weinlob (also Johann Weinlöben, Johann Weinleb, Johann Weinleben, Johann Weinlaub; born in Treuenbrietzen, Electorate of Brandenburg; died 10 February 1558, probably in Berlin) was a jurist of the Mark Brandenburg and chancellor of Elector Joachim II of Brandenburg, for whom he helped introduce the Reformation in Brandenburg. He was buried in the Nikolaikirche (Berlin).

Family

Weinlob was married to Margaretha Ohne(n). According to Holtze, the marriage produced the sons Johann and Joachim and the daughters Ursula, Anna, and Margarethe:

Johann Weinleben (died 1583). Johann Weinleben studied in Frankfurt/Oder and was employed in the Elector's Brandenburg chancellery from 1565, latterly as Dezernent (department head). He was married to a daughter of Hans Tempelhof the Younger (1505–1557), the Brandenburg kitchen master and mayor of Berlin. He is mentioned in the contract, discussed below, concerning the transfer of the estate at Wahrburg. In 1552 both sons (Johann and Joachim) received the fief "Andreä die Erste" in Stendal from the Elector.

Anna Weinleben. She married Lucas Hoffmeister, councillor of the Electoral Brandenburg Kammergericht (chamber court). This marriage produced Catharina Hoffmeister, who married the jurist and canon of Havelberg Lucas Luidtke. After the death of Lucas Luidtke, Catharina married Johann Salzwedel, mayor of Stendal, who died in 1612; for both of them this was a second marriage. In the essay by Holtze mentioned above, Hoffmeister's wife is given the name "Ursula"; this, however, is probably a confusion of names, since the funeral sermon cited above names the mother as "Anna". The family tree of the Salzwedel family lists "Catherina Hoffmeisterin" as the widow of Lucas Lüdecke, canon of Havelberg, and daughter of Lucas Hoffmeister, Electoral Brandenburg councillor in Berlin. Another daughter of Lucas and Anna Hoffmeister was Margarethe Hoffmeister. She was married to Sebastian Brunnemann, Kammergericht advocate and for more than three decades mayor of Cölln.

Friedrich Weinleben (died 21 August 1550). The epitaph for Friedrich Weinleben, son of Chancellor Johann Weinleben (dated 1552, attributed to the Berlin court painter M. Ribestein; restored 1976; "Christ Blessing the Children", Italianate, in a plain wooden frame), was erected by the council of the then independent town of Neustadt Brandenburg in the Schöppenkapelle of the town church of St. Katharinen in Brandenburg an der Havel, and is currently kept only provisionally. It shows Christ blessing the children.

Joachim Weinleben. He too is mentioned in the contract, discussed below, concerning the transfer of the estate at Wahrburg. He was also party to the enfeoffment of his brother Johann Weinleben with the fief in Stendal. In 1555 the chapter granted the fief "Andreä die Andere", which had lapsed on the death of Jacob Belkow, to the chancellor's son Joachim Weinleben "in causam studiorum". On 21 April 1561, after the death of Petrus Conradi, dean and last Catholic canon of the cathedral chapter of Havelberg, Joachim Weinleben received his prebend and was thereafter canon in Havelberg. On the title page of his book "Complet Gesang Simeonis des gerechten", Matthäus Ludecus dedicates the work to, among others, Joachim Weinleben. The son of Matthäus Ludecus, Lucas Luidke, was, as set out above, related to Joachim Weinleben by marriage.

Ursula Weinleben. It has already been explained above that Johann Weinleben senior had two daughters named "Anna" and "Ursula". One daughter was married to the councillor Joachim Zerer, a grandson of Chancellor Dr. Sigismund Zerer (chancellor from 1483 to 1510). Holtze does state that Ursula was married to Hoffmeister and Anna to Zerer; this is probably an error, as follows from the remarks on Anna above.

A brother-in-law of Chancellor Weinlob was the Wittenberg silk merchant Georg Reiche, who, returning with his wife from the fair at Frankfurt a. d. O. with the cart on which he also carried his goods, reached the vicinity of Jüterbog at midday on 23 July 1538. There, on the highway, they were stopped by four armed men on horseback. Their leader was Hans Kohlhase, who handed Frau Reiche a letter with the words: "There you have a letter; bring it to the mayor of Wittenberg for me." The mayor, however, was not prepared to pay a ransom. Reiche was permitted to contact his relatives, among them his brother-in-law Johann Weinlob, at that time secretary and later state chancellor of the Elector of Brandenburg. Weinlob was no longer willing to tolerate Kohlhase, who came from Brandenburg but had returned his letter of safe conduct, in his feud against the Electorate of Saxony. Only a few days later Brandenburg promised Saxony the long-desired assistance in the pursuit. Through informers Kohlhase was told that Elector Joachim was prepared to negotiate in Berlin. Kohlhase went to Berlin with his pregnant wife, was arrested there, sentenced to death, and executed. The story of Hans Kohlhase was the model for Heinrich von Kleist's novella about Michael Kohlhaas, and it was retold not only by Kleist but by other writers as well.

Professional activity

In 1538 the newly enthroned Elector Joachim II, after four years of hesitation, decided to break with the church policy of his father Joachim I, under the influence of the electoral councillor and later chancellor Johann Weinlob and after a visit by Philipp Melanchthon to the electoral court. Even before his official appointment as chancellor, the Elector entrusted him, alongside the theologian Jacob Stratner, with the supervision of the churches in the Mark Brandenburg after the introduction of the Reformation. When the Elector secretly appropriated dissolved monasteries and Catholic church property, Weinlob took action against it. He also put a stop to many an undue burden on the peasants. In 1541, while the Elector attended the Imperial Diet in Regensburg, Johann Weinlob was one of the "councillors left at home", headed by Hans von Arnim, Landvogt of the Uckermark, as governor. After his appointment as chancellor, Weinlob presided over the Kammergericht, staffed by six noble and six learned councillors. He saw to it that church property was made usable for Lutheran church purposes, a task in which the most varied rights and claims had to be taken into account. After negotiations with towns and owners of manorial estates he concluded settlements (Visitationsrezesse) which, as regards the establishment of the churches of the Mark, are still in force. The Kammergericht under Weinlob's chairmanship also accepted suits by free peasants of the Mark against their lords, seeking determination of their rural services. The nobility opposed this threatened emancipation of the peasants and brought it about that Weinlob's successor, Lampert Distelmeyer, reorganized the Kammergericht into a bulwark of the dying feudal state.

The chancellor's house, Berlin-Mitte, Poststraße 11

As chancellery and residence, Weinleben received from the Elector the chancellor's house at Poststraße 11 in Berlin-Mitte, free of all encumbrances. After the death of his grandson Johann Weinleben it reverted to the Elector as a castle fief, and he granted it to his valet Hermann. The house had previously been occupied by Wolfgang Kettwig (also Kettwich), who died in December 1541. Only in September 1587 did Chancellor Lamprecht Di(e)stelmeyer move into the house at Poststraße 11, the chancellor's house. He held the electoral chancellorship from March 1558 to 1588. Di(e)stelmeyer conducted the most important diplomatic affairs of Elector Joachim II Hector (1505–1571) of Brandenburg and, from 1571, of his son Johann Georg (1525–1598) of Brandenburg. Di(e)stelmeyer died at the Poststraße in October 1588. In 1896 the city of Berlin mounted the following commemorative plaque on the chancellor's house, which has since been removed or is no longer extant: "Dem Andenken der Kurfürstlichen Kanzler Joh. Weinleben 1541-1558 Lamp. Distelmeier 1558-1588 die hier wohnten und starben." (In memory of the electoral chancellors Joh. Weinleben 1541–1558 and Lamp. Distelmeier 1558–1588, who lived and died here.)

Acquisition of the village of Wahrburg

Elector Joachim II of Brandenburg (1505–1571) enfeoffed his chancellor Johann Weinleben (died 1558) with the expectancy to one half of the village of Wahrburg, which the brothers Andreas and Palm Rynow held in fief. On 15 November 1547 he also enfeoffed him with the expectancy to the other half of the village of Wahrburg. On 24 August 1569 the above-mentioned brothers Johann and Joachim Weinleben of Berlin sold the reversion to the village of Wahrburg once granted to their father by Elector Joachim (a fief of the brothers Andreas and Palm Rynow and of Hans Kolck of Stendal) to Claus Goldbeck, mayor of Stendal, and his brothers and cousins Andres, Georg, Heinrich, and Gregorius, who came from the Werben branch of the Goldbeck family, for 100 guilders.

Epitaph of Chancellor Johannes Weinleb with the story of Tobias

In 1558 an epitaph of alabaster was erected on the north wall of the Nikolaikirche (Berlin) in memory of Weinlob. It showed scenes depicting Tobias and Sara from the Book of Tobit of the Old Testament. The individual panels showed: Tobias and Sara praying; Tobias taking leave of his parents; Hanna weeping over her son's departure; Tobias catching the fish and pulling it ashore; the healing of Tobit (Tobias laying the gall of the fish on his father's eyes); Tobit burying the dead man at night; Christ as (heavenly) judge. The epitaph stood "at the second pillar below the organ". The artist's name is unknown; he was an Italian. Images of the epitaph can be seen in the "Deutsche Digitale Bibliothek", in the "Bildindex", and at "Europeana Collections"; descriptions appear in many treatises on the Nicolaikirche. The epitaph is not mentioned in Schubring's book, although photographs exist that were taken later. The church was temporarily closed in 1939 for the purpose of a "stylistically pure restoration", but at first was not cleared in any respect. That happened only under the impression of the approaching bombing war in 1943. Besides direct war damage, however, most of the losses were due to the long years in which the Nikolaikirche stood open (to the sky) as a virtually ownerless ruin. The Weinlöben epitaph probably belonged to the works of art that had already been evacuated in 1943. The fate of these evacuated (i.e. movable) furnishings in the war and postwar years is not documented; of many important works every trace is missing to this day. The Stadtmuseum Berlin holds countless epitaphs and spolia not yet restored or returned; the Weinlöben epitaph is not among them.

Depiction by Lucas Cranach the Younger

In the painting by Lucas Cranach the Younger "The Baptism of Christ with the Portraits of the Margrave of Brandenburg-Küstrin, His Consort, and His Friends", Weinleben can be seen at front left, in the second row behind Martin Luther. For the painting and the identification of the persons depicted, see the accounts by Max Friedländer and Wilhelm Hammer.

Literature

Adolf Stölzel: Brandenburg-Preußens Rechtsverwaltung und Rechtsverfassung dargestellt im Wirken seiner Landesfürsten und obersten Justizbeamten. Franz Vahlen, Berlin 1888, pp. 164, 180 ff., 187 ff. (digitized).

Friedrich Holtze: Die ältesten märkischen Kanzler und ihre Familien. In: Forschungen zur Brandenburgischen und Preußischen Geschichte. Vol. 7, 1894, pp. 522 ff. (digitized), from 1440 onward.
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,914
// ContactInfo class declaration.
// The opening include guard and the <cstring> header were missing from the
// original fragment (the trailing #endif implies a guard); the guard macro
// name CONTACTINFO_H is a conventional choice.
#ifndef CONTACTINFO_H
#define CONTACTINFO_H
#include <cstring>   // for strlen and strcpy

class ContactInfo
{
private:
   char *name;    // The contact's name
   char *phone;   // The contact's phone number

public:
   // Constructor: takes const pointers so string literals may be passed safely.
   ContactInfo(const char *n, const char *p)
   {
      // Allocate just enough memory for the name and phone number.
      name = new char[strlen(n) + 1];
      phone = new char[strlen(p) + 1];

      // Copy the name and phone number to the allocated memory.
      strcpy(name, n);
      strcpy(phone, p);
   }

   // Destructor: release the owned buffers.
   ~ContactInfo()
   {
      delete [] name;
      delete [] phone;
   }

   const char *getName() const
   { return name; }

   const char *getPhoneNumber() const
   { return phone; }
};
#endif
{ "redpajama_set_name": "RedPajamaGithub" }
2,922
<div ng-controller="ToolbarCtrl"> <div class="form-horizontal"> <span> <div class="btn-group read-mode" data-toggle="buttons-radio"> <button type="button" class="btn" ng-model="settingsService.settings.readingMode" btn-radio="'unread'">${toolbar.unread}</button> <button type="button" class="btn" ng-model="settingsService.settings.readingMode" btn-radio="'all'">${toolbar.all}</button> </div> <a type="button" class="btn" ng-click="toggleOrder()" title="${toolbar.sort_by_asc_desc}"> <i ng-class="{'icon-arrow-up' : settingsService.settings.readingOrder == 'asc', 'icon-arrow-down': settingsService.settings.readingOrder == 'desc'}"></i> </a> <div class="btn-group" data-toggle="buttons-radio"> <a type="button" class="btn" ng-model="settingsService.settings.viewMode" btn-radio="'title'" title="${toolbar.titles_only}"><i class="icon-list"></i></a> <a type="button" class="btn" ng-model="settingsService.settings.viewMode" btn-radio="'expanded'" title="${toolbar.expanded_view}"><i class="icon-th-list"></i></a> </div> <div class="btn-group"> <a type="button" class="btn" ng-click="previousEntry()" title="${toolbar.previous_entry}"><i class="icon-chevron-up"></i></a> <a type="button" class="btn" ng-click="nextEntry()" title="${toolbar.next_entry}"><i class="icon-chevron-down"></i></a> <a type="button" class="btn" ng-click="refresh()" title="${toolbar.refresh}"><i class="icon-refresh"></i></a> </div> <div class="btn-group"> <a type="button" class="btn" ng-click="markAllAsRead()" title="${toolbar.mark_all_as_read}"><i class="icon-check"></i></a> <button class="btn dropdown-toggle" data-toggle="dropdown"> <span class="caret"></span> </button> <ul class="dropdown-menu pull-right"> <li><a ng-click="markAllDay()">${toolbar.mark_all_older_day}</a></li> <li><a ng-click="markAllWeek()">${toolbar.mark_all_older_week}</a></li> <li><a ng-click="markAllTwoWeeks()">${toolbar.mark_all_older_two_weeks}</a></li> </ul> </div> <div class="btn-group"> <a class="btn" ng-click="toSettings()" 
title="${toolbar.settings}"><i class="icon-cog"></i></a> <button class="btn dropdown-toggle" data-toggle="dropdown"> <span class="caret"></span> </button> <ul class="dropdown-menu pull-right"> <li><a ng-click="toProfile()"><i class="icon-user"></i> ${toolbar.profile}</a></li> <li ng-show="session.admin"><a ng-click="toAdmin()"><i class="icon-edit"></i> ${toolbar.admin}</a></li> <li class="divider"></li> <li><a href="logout"><i class="icon-off"></i> ${toolbar.logout}</a></li> </ul> </div> </span> <form ng-submit="search()" class="input-append"> <input type="text" ng-model="keywords"></input> <button class="btn" type="submit"><i class="icon-search"></i></button> </form> <div class="donate"> <button class="btn btn-success" type="button" ng-click="toHelp()"><i class="icon-question-sign"></i> ${toolbar.about} / ${toolbar.donate}</button> </div> <div spinner shown="loading"></div> <span>{{ServerService.announcement}}</span> </div> </div>
{ "redpajama_set_name": "RedPajamaGithub" }
5,748
\section{Introduction} \qquad This paper continues the research of \cite{Alt06}, where an analytical expression for the radion effective potential capable of meeting the demands of slow-roll inflation was obtained in a class of models with fluxbrane throat-like solutions. These studies develop the line of thought of the papers \cite{Strominger}-\cite{Wolfe2}, where throat-like solutions in the Type IIB supergravity with the warped Klebanov-Strassler conifold \cite{Klebanov}-\cite{Herzog} were considered. In the present paper we consider the non-deformed and deformed throat-like solutions in the Type IIA supergravity, where the six extra dimensions are given by a warped flat space whose base is a sphere (\cite{Horowitz}-\cite{Aharony}). Following the conventional approach, and in parallel with the Randall-Sundrum theory \cite{Randall}, we suppose that the massive matter of the Standard Model (SM) is localized in the IR region near the tip of the throat, whereas the volume of the extra space is terminated by a co-dimension one local source: a heavy "ultraviolet" boundary where $Z_{2}$-identification and the corresponding junction conditions are imposed. We also suppose that the dynamics of the boundary surface is described by the simplest positive-tension Nambu-Goto action; hence the energy-momentum tensor of the boundary is isotropic. In Sec. 3 it is shown that the anisotropic Israel junction conditions stabilize the position of the isotropic boundary at the top of the throat. This seemingly comes into conflict with the no-scale structure of the supergravity theory (\cite{Giddings} and references therein) and with the no-go theorem \cite{nogo}, \cite{Giddings}. However, there is no conflict at all. Fixing the modulus of the overall volume of the extra space is possible because the no-scale structure is spoiled here by the external local source (the UV boundary), and because this co-dimension one local source also evades the no-go theorem.
Indeed, it is easy to show that the combination $\widetilde T$ (defined by expression (34) of the paper \cite{nogo}) of the components of the energy-momentum tensor does not meet the demands of the no-go theorem when a positive-tension co-dimension one local source is introduced in the action; this is not true, however, for positive-tension local sources of lower dimensions. The Reissner-Nordstrom type deformation of the elementary extremal solution will be used as a tool to construct the IR end of the throat. This type of solution with a "bolt" (in the terminology of \cite{Hawking}) was considered earlier in 6D models \cite{Louko}-\cite{Altsh1}, where a constant curvature of the 4-dimensional space-time was also introduced to make the Israel junction conditions consistent. As was shown in \cite{Altsh1}, the value of this curvature must be extremely small and may correspond to the observed acceleration of the Universe (see the review \cite{quint}). In the present paper this approach is generalized to the D10 Type IIA supergravity theory. In contrast to \cite{Altsh1}, where the formula for the Dark Energy $\rho_{D.E.}=G_{N}m^{6}$ ($G_{N}$ is Newton's constant, $m$ is the characteristic mass of matter) was obtained, the model considered in the present paper gives the more realistic result $\rho_{D.E.} \sim G_{N}^{2}m^{8}$ (subsection {\it 5-b}). The radion field is defined here as the position of the UV boundary, which slowly depends on the space-time coordinates in 4 dimensions (cf. \cite{radion3}-\cite{Brax}). The radion effective potential is calculated by the standard procedure of integrating out the extra coordinates in the higher-dimensional action. Of course, it is possible to rescale the isotropic radial coordinate and make the position of the boundary fixed; then we arrive at the definition of the radion as a factor-field in the higher-dimensional metric \cite{Csaki} - \cite{Mazumdar}. The two approaches are basically equivalent as far as the calculation of the radion effective potential is concerned.
The form of the radion potential depends only on the choice of the theory; in the Type IIA supergravity considered in this paper the potential decreases exponentially up the throat with an exponent equal to 0.21, i.e. it is sufficiently flat to provide slow-roll inflation \cite{Dvali}, \cite{Mukhanov}. Thus the radion scalar field introduced in this paper may serve as an inflaton. The radion potential proves to be non-negative; its relatively flat region ends with a steep slope falling to the minimum of the potential at the top of the throat. In the present paper these results of \cite{Alt06} are rederived in a more transparent way. The validity of the hypothesis of \cite{Alt06} is also proved: it is shown in subsection {\it 5-c} that the Reissner-Nordstrom type deformation of the elementary fluxbrane solution results in a tiny positive deviation from zero of the minimal value of the radion effective potential. This deviation is seen today as Dark Energy. The idea of using a dynamical scalar associated with the extra dimensions, the interbrane distance in particular, as a candidate for the inflaton is not a novel one (see e.g. \cite[b]{Cline}, \cite{Mazumdar}). The very possibility of obtaining in this way an exact analytical expression for the scalar field potential possessing qualitatively the basic features demanded by the astrophysical observations \cite{WMAP} looks attractive. It must be noted that a physically meaningful radion effective potential may be obtained here only if the bulk magnetic monopole fluxbrane solution, not the dual electric one, is taken as the background. The nonequivalence of the two solutions is immediately seen when the higher-dimensional consistency condition of \cite{Leblond} is applied; see the Appendix in \cite{Alt06}. The formulae for the value of the electro-weak hierarchy presented in Sec.
5 essentially develop the ideas of the works \cite{Altsh}, where it was observed that in throat-like fluxbrane models the hierarchy proves to be strongly dependent on the value of the $n$-form-dilaton coupling constant and on the dimensionalities of the extra subspaces. The paper is organized as follows. The basic action and the bulk and junction equations are presented in Sec. 2. In Sec. 3 the stabilization of the volume modulus of the non-deformed throat-like solution in the Type IIA supergravity is demonstrated, the analytical expression for the radion effective potential is obtained, and its compatibility with the demands of inflation is shown. In Sec. 4 the generalization of the elementary solution induced by the introduction of a non-zero "Maxwell" field and a non-zero curvature of the 4-dimensional Universe is considered. Sec. 5 presents formulae for the mass-scale hierarchy and for the rate of acceleration of the Universe; a physically meaningful relation between the two hierarchies is deduced. In Sec. 6 the results and open problems are summarized and possible directions of future research are outlined.
\section{Action, ansatz, dynamical equations} Let us consider the following action in $D$ dimensions: \begin{eqnarray} \label{1} &&S^{(D)}=M^{D-2}\Bigg \{ \int\left[R^{(D)}-\frac{1}{2}(\nabla\varphi)^2-\frac{1}{2\cdot n!}e^{\alpha\varphi}F_{(n)}^2-\frac{1}{2\cdot 2!}e^{\eta\varphi}F_{(2)}^2\right.- \nonumber \\ &&-\left.\sigma \, e^{\gamma\varphi} \delta^{(1)}\frac{\sqrt{-h^{(D-1)}}}{\sqrt{-g^{(D)}}}\right]\sqrt{-g^{(D)}}\,d^{D}x+\rm{GH} \Bigg \}, \end{eqnarray} whose bulk part is an Einstein-frame truncated low-energy description of the string-based supergravity with dilaton and antisymmetric tensor; $M$, $g_{AB}$, $R^{(D)}$ are the "Planck mass", metric, and curvature in $D$ dimensions; $\rm{GH}$ is the Gibbons-Hawking term; $F_{(n)}$ is the $n$-form field strength; $F_{(2)}$ is the 2-form "Maxwell" field; $\varphi$ is the dilaton field coupled to the $n$-form, the 2-form, and the local source in (\ref{1}) with coupling constants $\alpha$, $\eta$, $\gamma$ respectively. The co-dimension one local source will serve as the UV boundary of the throat; its action is taken in the simplest Nambu-Goto form, and the mass parameter $\sigma$ characterises its tension, equal to $M^{D-2}\sigma$; ${h^{(D-1)}=\det{h_{ab}}}$; $h_{ab}$ is the induced metric on the boundary, $a, \,b=\{0,1\dots(D-2)\}$; $\delta^{(1)}$ is the Dirac delta function fixing the position of the boundary. In this paper we shall consider the theory (\ref{1}) for the following particular values of the dimensionalities and coupling constants in (\ref{1}): \begin{equation} \label{2} D=10, \qquad n=4, \qquad \alpha=\frac{1}{2}, \qquad \eta=\frac{3}{2}, \qquad \gamma=-\frac{1}{12}. \end{equation} With this choice the bulk part of the action (\ref{1}) is the truncated Bose-action of the Type IIA supergravity.
It is worthwhile to note that the D10 theory (\ref{1}) with the specific values of the constants given in (\ref{2}) is just a compactification of the action of the D11 $M$-theory in which, in addition to the conventional bulk terms, a D10 local source is included: \begin{equation} \label{3} S_{(M)}=\tilde{M}^{9}\Bigg \{ \int\left[R^{(11)}-\frac{1}{2\cdot 4!}F_{(4)}^{2}-\tilde{\sigma}\delta^{(1)}\frac{\sqrt{-h^{(10)}}}{\sqrt{-g^{(11)}}}\right]\sqrt{-g^{(11)}}\,d^{11}x+\rm{GH} \Bigg \}, \end{equation} where $\tilde{M}$, $\tilde{\sigma}$ are the Planck mass and the mass parameter of the local source in 11 dimensions. After reduction of the action (\ref{3}) to 10 dimensions the volume of the compact 11th dimension becomes the dilaton field, whereas the 2-form in (\ref{1}) is the corresponding Kaluza-Klein field. However, we shall not refer to $M$-theory any further and will consider the supergravity action (\ref{1}), with the dimensionalities and constants given in (\ref{2}), as the primary one throughout the paper. The following ansatz for the bulk solution of the dynamical equations given by the action (\ref{1}), (\ref{2}) will be used: \begin{eqnarray} \label{4} &&ds_{(10)}^{2}=b^{2}{\tilde g}_{\mu\nu}dx^{\mu}dx^{\nu}+ c^{2}dz^{2}+N^{2}dr^{2}+a^{2}d\Omega_{4}^{2}, \qquad \varphi=\varphi(r), \\ && F_{(4)}=Q_{(4)}dy^{1}\wedge dy^{2}\wedge dy^{3}\wedge dy^{4}, \, \, F_{(2)zr}=dA_{z}(r)/dr=\frac{Q_{(2)}cN}{b^{4}a^{4}}\, e^{-3\varphi /2}, \nonumber \end{eqnarray} where the metric scale factors $b$, $c$, $a$, the "lapse function" $N$, and the dilaton $\varphi$ depend only on the isotropic coordinate $r$; ${\tilde g}_{\mu\nu}$ is the metric of the 4-dimensional Universe $M_{(3+1)}$; $z$ is the coordinate of the torus $S^{1}$ of period $T_{z}$; $d\Omega_{4}^{2}$ is the metric of the 4-sphere of unit radius; $x^{A}=\{x^{\mu},z,r,y^{i}\}$, $A=0,1\ldots 9$, $\mu=0,1,2,3$, $i=1,2,3,4$. $Q_{(4)}$ is the charge of the magnetic monopole.
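The last equality for $F_{(2)zr}$ quoted in (\ref{4}) can be reproduced in one step (a sketch, not in the original text): for the ansatz (\ref{4}) one has $\sqrt{-g^{(10)}}\propto b^{4}cNa^{4}$, and with $\eta=3/2$ from (\ref{2}) the source-free field equation for the 2-form integrates at once:

```latex
\partial_{r}\Bigl(\sqrt{-g^{(10)}}\,e^{3\varphi/2}F^{zr}\Bigr)=0
\;\Longrightarrow\;
F^{zr}=\frac{Q_{(2)}\,e^{-3\varphi/2}}{b^{4}cNa^{4}},
\qquad
F_{(2)zr}=g_{zz}g_{rr}F^{zr}
=\frac{Q_{(2)}\,cN}{b^{4}a^{4}}\,e^{-3\varphi/2},
```

where the indices are lowered with $g_{zz}=c^{2}$ and $g_{rr}=N^{2}$, and $Q_{(2)}$ appears as the integration constant.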
$A_{z}$ is the non-zero component of the vector potential of the 2-form field $F_{(2)}$, $Q_{(2)}$ is its "electric" charge, and the last equality for $F_{(2)}$ in (\ref{4}) is obtained from the "Maxwell" equation for the 2-form written down for the metric ansatz (\ref{4}). Introduction of a small $F_{(2)}\ne 0$ gives the Euclidean version of the Reissner-Nordstrom type deformation of the extremal fluxbrane solution. It will be shown in Sec. 4 that this deformation provides the physical tool to terminate the throat at its IR end and also dynamically enforces the introduction of an extremely small positive curvature ${\widetilde R}^{(4)}=12{\tilde h}^{2}$ of the manifold $M_{(3+1)}$; the auxiliary "Hubble constant" $\tilde h$ is connected with the observed acceleration rate of the Universe $h=10^{-60} M_{\rm Pl}$ by the scale transformation (\ref{30}) below (see Sec. 5 for more detail). With the ansatz (\ref{4}) and ${\tilde h}\ne 0$, the action (\ref{1}) with the parameters (\ref{2}) gives the following gravity equations for the scale factors $b(r)$, $c(r)$, $a(r)$ (we do not need to write down the gravity constraint) and the equation for the dilaton field (a prime denotes the derivative with respect to $r$): \begin{equation} \label{5} \frac{3{\tilde h}^{2}}{b^{2}}+\frac{1}{N^2}\Bigg[-\frac{b''}{b}+\frac{b'^{2}}{b^{2}}+\frac{b'}{b}\Bigg(\frac{N'}{N}-4\frac{b'}{b}-\frac{c'}{c}-4\frac{a'}{a}\Bigg)\Bigg]=-\frac{3}{8}J_{(4)}-\frac{1}{8}J_{(2)}+\frac{1}{16}J_{(\sigma)}, \end{equation} \\ \begin{equation} \label{6} \frac{1}{N^2}\Bigg[-\frac{c''}{c}+\frac{c'^{2}}{c^{2}}+\frac{c'}{c}\Bigg(\frac{N'}{N}-4\frac{b'}{b}-\frac{c'}{c}-4\frac{a'}{a}\Bigg)\Bigg]=-\frac{3}{8}J_{(4)}+\frac{7}{8}J_{(2)}+\frac{1}{16}J_{(\sigma)}, \end{equation} \\ \begin{equation} \label{7} \frac{3}{a^{2}}+\frac{1}{N^2}\Bigg[-\frac{a''}{a}+\frac{a'^{2}}{a^{2}}+\frac{a'}{a}\Bigg(\frac{N'}{N}-4\frac{b'}{b}-\frac{c'}{c}-4\frac{a'}{a}\Bigg)\Bigg]=-\frac{5}{8}J_{(4)}-\frac{1}{8}J_{(2)}+\frac{1}{16}J_{(\sigma)}, \end{equation} \begin{equation} \label{8}
\frac{1}{N^2}\Bigg[\varphi''-\varphi'\Bigg(\frac{N'}{N}-4\frac{b'}{b}-\frac{c'}{c}-4\frac{a'}{a}\Bigg)\Bigg]=\frac{1}{2}J_{(4)}+\frac{3}{2}J_{(2)}-\frac{1}{12}J_{(\sigma)}, \end{equation} where \begin{eqnarray} \label{9} &&J_{(4)}\equiv \frac{e^{\varphi /2}F_{(4)}^{2}}{2\cdot 4!}=\frac{e^{\varphi /2}Q_{(4)}^{2}}{2a^{8}}, \qquad J_{(2)}\equiv \frac{e^{3\varphi /2}F_{(2)}^{2}}{2\cdot 2!}=\frac{e^{-3\varphi /2}Q_{(2)}^{2}}{2b^{8}a^{8}}, \nonumber \\ \\ &&J_{(\sigma)}\equiv e^{-\varphi /12} \, \sigma \frac{\delta(r-r_{0})}{N}, \qquad {\tilde h}^{2}=\frac{{\widetilde R}^{(4)}}{12}. \nonumber \end{eqnarray} We suppose that the local source is placed at some $r=r_{0}$ and that it terminates the throat "from above", i.e. it forms the UV end of the throat. Thus we truncate the space-time (\ref{4}) at $r=r_{0}$, paste two copies of the inner region along the cutting surface, and treat the codimension-one local source on the RHS of equations (\ref{5})-(\ref{8}) as a heavy boundary where $Z_{2}$-symmetry is imposed. The space-time of the surface is a product \begin{equation} \label{10} M_{(3+1)} \times S^{1} \times S^{4}, \end{equation} and physically this boundary may either be an 8-brane which spans $M_{(3+1)}$ and wraps the compact extra spaces $S^{1}$, $S^{4}$, or it may be viewed as a shell of lower-dimensional branes uniformly distributed over the compact subspaces. In any case we suppose, and this is the main hypothesis of the paper, that the dynamics of the boundary surface is described by the simplest Nambu-Goto action; hence its energy-momentum tensor is isotropic, as follows from the action (\ref{1}). There are four junction conditions at the boundary: three Israel conditions for the three subspaces in (\ref{10}) and a jump condition for the dilaton field $\varphi$.
These conditions are immediately obtained by integrating equations (\ref{5})-(\ref{8}) over $r$ around $r=r_{0}$ (the factor 2 on the LHS of (\ref{11})-(\ref{14}) reflects the $Z_{2}$-symmetry): \begin{equation} \label{11} \frac{2}{N^{2}} \, \frac{b'}{b}=\frac{\sigma e^{-\varphi/12}}{16N}, \end{equation} \begin{equation} \label{12} \frac{2}{N^{2}} \, \frac{c'}{c}=\frac{\sigma e^{-\varphi/12}}{16N}, \end{equation} \begin{equation} \label{13} \frac{2}{N^{2}} \, \frac{a'}{a}=\frac{\sigma e^{-\varphi/12}}{16N}, \end{equation} \begin{equation} \label{14} -\frac{2}{N^{2}} \, \varphi'=-\frac{1}{12} \frac{\sigma e^{-\varphi/12}}{N}. \end{equation} These relations must hold at the position of the boundary $r=r_{0}$. Equations (\ref{11})-(\ref{14}) are actually quite informative. We shall see that they fix the position $r_{0}$ of the UV boundary, i.e. determine the overall volume of the extra space, that they "fine-tune" the magnetic monopole charge $Q_{(4)}$ and the mass parameter $\sigma$ in (\ref{1}), and that in the model with the deformed extremal solution considered in Sec. 4 the consistency of equations (\ref{11}) and (\ref{12}) demands the introduction of a non-zero curvature of the Universe, whose value, as will be shown, may correspond to the observed acceleration of the Universe. \section{Non-deformed fluxbrane solution, stabilization of the volume modulus and potential for the slow-roll inflation} \vspace{0.5cm} {\large\it 3-a. Bulk solution and stabilization of the volume modulus.} \vspace{0.5cm} In this section the line of reasoning of the paper \cite{Alt06} is repeated for the particular case of the Type IIA supergravity given by the action (\ref{1}) with the constants (\ref{2}), where we discard the "Maxwell" field and take $M_{(3+1)}$ to be Minkowski space-time (i.e. we put $F_{(2)}=0$, ${\tilde h}=0$ in (\ref{5})-(\ref{8})).
Then the ansatz (\ref{4}) for the elementary magnetic fluxbrane solution of equations (\ref{5})-(\ref{8}) takes the form \cite{Gibbons}-\cite{Aharony}: \begin{eqnarray} \label{15} &&ds_{(10)}^{2}=H^{-3/8}({\tilde g}_{\mu\nu}dx^{\mu}dx^{\nu}+ dz^{2})+H^{5/8}(dr^{2}+r^{2}d\Omega_{4}^{2}), \quad e^{\varphi}=e^{\varphi_{\infty}}H^{-1/4}, \nonumber \\ \\ && F_{(4)}=Q_{(4)}dy^{1}\wedge dy^{2}\wedge dy^{3}\wedge dy^{4}, \quad H=1+\left (\frac{L}{r}\right )^{3}, \quad L^{3}=\frac{1}{3}Q_{(4)}e^{\varphi_{\infty}/4}, \nonumber \end{eqnarray} where $\varphi_{\infty}$ is the value of the dilaton field at $r=\infty$. The metric (\ref{15}) describes a warped "throat" of proper length $\int_{0}^{L}H^{5/16}dr\cong 16L$ with an integrable singularity at $r=0$, where the curvature $R^{(10)}\to \infty$. The low-energy action (\ref{1}) makes sense only if the curvature components are small compared to $M^{2}$. For the space-time (\ref{15}) this condition, formulated e.g. for the scalar curvature in 10 dimensions, reads: \begin{equation} \label{16} R^{(10)}= \frac{45}{32}\, \frac{1}{L^{2}}\left (\frac{L}{r}\right )^{1/8}< M^{2}, \end{equation} where this inequality is written inside the throat, where $r \ll L$. From (\ref{16}) the minimal permitted value $r_{\it{min}}$ of the isotropic coordinate is immediately determined: \begin{equation} \label{17} r>r_{\it{min}}= k^{8}\, (ML)^{-16} \, L, \end{equation} where the coefficient $k$ is equal to $45/32$ when $R^{(10)}$ is used in the estimate inequality (\ref{16}). In what follows we shall treat $k$ as a number of order one. The large value of the exponent on the RHS of (\ref{17}) reflects its non-analytic dependence on the $4$-form-dilaton coupling constant $\alpha$. In the general case this exponent is equal to $\Delta/ \alpha^{2}$ \cite{Alt06} \footnote[1]{There is an unfortunate mistake in \cite{Alt06}, where a factor of $2$ is omitted in the denominator of the exponent on the RHS of expression (\ref{14}) of \cite{Alt06}; this, however, does not essentially change the conclusions of \cite{Alt06}.
The present paper gives the corrected formulae for the case of the Type IIA supergravity.} ($\Delta$ is the well-known parameter of the elementary fluxbrane solutions determined by $\alpha$ and the dimensionalities; $\Delta=4$ in the Type IIA and Type IIB supergravities). For $\alpha=0$ in (\ref{1}), as is the case in the Type IIB supergravity, where the throat has an $AdS_{5}\times S^{5}$ asymptotic, $r_{\it{min}}=0$ and there is no curvature singularity at any $r$. Comparison of the metrics (\ref{15}) and (\ref{4}) gives $b=c=H^{-3/16}$, $a=H^{5/16}\, r$; hence the jump conditions (\ref{11}) and (\ref{12}) coincide, and the dilaton jump condition (\ref{14}) also follows identically from (\ref{11}) for the dilaton field given in (\ref{15}). Compatibility of the jump conditions (\ref{11}) and (\ref{13}) demands $b'/b=a'/a$, i.e. $(Hr^{2})'=0$ at $r=r_{0}$, which gives: \begin{equation} \label{18} r_{0}=\frac{L}{2^{1/3}}, \end{equation} whereas (\ref{11}), with account of (\ref{18}), connects $L$, $\sigma$ and $\varphi_{\infty}$. We write down this relation, multiplying it by the higher-dimensional Planck mass $M$: \begin{equation} \label{19} ML=12\left (\frac{2}{3}\right )^{1/3}\frac{M}{\sigma} \, e^{\varphi_{\infty}/12}\equiv 10.5 \, g, \end{equation} where the dimensionless constant \begin{equation} \label{20} g=\frac{M}{\sigma} \, e^{\varphi_{\infty}/12} \end{equation} is an important parameter determining the physical predictions of the model. $g$ is an invariant of the scale transformation $g_{AB} \to e^{2\lambda}g_{AB}$, $\varphi \to \varphi + 12\lambda$, $M \to e^{-\lambda}M$ ($\lambda=const$), which is an invariance of the action (\ref{1}) when $F_{(2)}=0$ and the constants (\ref{2}) are used in (\ref{1}). The whole approach makes sense only if $ML\gg 1$; hence it is necessary that $g \ge 1$, as follows from (\ref{19}). From (\ref{19}) and (\ref{15}) the "fine-tuning" condition also follows: \begin{equation} \label{21} Q_{(4)}^{1/3}=12\cdot 2^{1/3}\cdot \sigma^{-1}.
\end{equation} This is a direct analogue of the fine-tuning of the bulk cosmological constant and the brane tension demanded in the Randall-Sundrum model \cite{Randall}. However, here the bulk magnetic $4$-form charge $Q_{(4)}$ is not an input parameter of the action but a free constant of the bulk solution of the dynamical equations. Hence relation (\ref{21}) is by no means a fine-tuning but a constraint determining the magnetic charge through the constant $\sigma$ of the action (\ref{1}). We shall see below that position (\ref{18}) of the UV boundary of the throat, which determines the overall volume of the extra space, is a point of zero minimum of the corresponding effective potential. As was already noted in the Introduction, dynamical stabilization of the volume modulus of the extra space is possible because in the model under consideration the local source in (\ref{1}) breaks the no-scale structure of the theory, and because this codimension-one local source also violates the conditions of the no-go theorem. Thus the dynamics of the model terminates the extra space of the space-time (\ref{15}) at the top of the throat, at its UV end (\ref{18}). In Sec. 4 the IR end of the throat, where the Standard Model supposedly resides, will be fixed near the tip of the throat by means of a small deformation of the extremal solution (\ref{15}). This deformation will not essentially influence the form of the radion effective potential calculated in the next Subsection for the non-deformed background (\ref{15}), but will result in a tiny positive deviation (seen today as Dark Energy) from the minimal zero value of the potential obtained for the non-deformed background (see Sec. {\it {5-c}}). \vspace{0.5cm} {\large\it 3-b. Radion as inflaton. Potential for the slow-roll inflation.} \vspace{0.5cm} The effective action $S^{(3+1)}$ in four dimensions is conventionally calculated by integrating out the extra coordinates in the higher-dimensional action.
With account of the radion field this gives an effective scalar-tensor Brans-Dicke type action (see e.g. \cite[b]{Kanno}, \cite{Csaki}). To calculate the effective action $S^{(3+1)}$ we use in (\ref{1}) the bulk solution (\ref{15}) but move the UV boundary (and hence change the upper limit of the integration over the isotropic coordinate $r$ in (\ref{1})) from $r=r_{0}$ (\ref{18}), fixed by the junction conditions (\ref{11})-(\ref{14}), to an arbitrary position $\rho (x)$ slowly depending on the coordinates $x^{\mu}$: \begin{equation} \label{22} r_{0} \to \rho (x), \end{equation} $\rho (x)$ is called the radion field \cite[b]{Cline}, \cite{radion3}-\cite{Brax}. This definition of the radion field is equivalent to the more conventional one where the radion is considered as the $x^{\mu}$-dependent factor of the lapse function $N$ in metric (\ref{4}) \cite{Csaki}-\cite{Mazumdar}. The gradient terms of $\rho (x)$ contribute to the induced metric of the UV boundary: \begin{equation} \label{23} h_{ab}=g_{ab}+\rho,_{a}\rho,_{b}g_{rr}, \end{equation} where $x^{a}=\{x^{\mu}, z, y^{i}\}$ and $g_{ab}$, $g_{rr}$ are the corresponding components of the bulk metric (\ref{15}). Then, taking into account that $\rho(x)$ does not depend on $z$, $y^{i}$ and depends on $x^{\mu}$ slowly compared to the scales of the bulk solution, the Lagrangian $L_{\it{l.s.}}$ of the local source in (\ref{1}) (with constants (\ref{2}) in it and with account of (\ref{15})) takes the form: \begin{eqnarray} \label{24} &&{\rm L}_{\it{l.s.}}=-M^{8}\sigma e^{-\varphi /12}\delta(r-\rho)\,\frac{\sqrt{-h^{(9)}}}{\sqrt{-g^{(10)}}} \approx \nonumber \\ &&\approx -\frac{M^{8}\sigma e^{-\varphi_{\infty}/12}\,\delta(r-\rho)}{H^{14/48}}\left[1+\frac{1}{2}H {\tilde g}^{\mu\nu}\rho,_{\mu}\rho,_{\nu}\right]. \end{eqnarray} In calculating the radion effective potential we shall substitute $r_{\it IR} \to r=0$ in the lower limit of the integration over $r$ in (\ref{1}).
This will not change $S^{(3+1)}$ essentially, since it is supposed that $r_{\it IR}\ll L$ and all integrals in (\ref{1}) converge at $r=0$. We also postulate that the $Z_{2}$-symmetry at the "moved" UV boundary surface is preserved, i.e. the bulk integration must be performed over two pasted copies of the solution. Direct calculation shows that for the magnetic monopole fluxbrane bulk solution (\ref{15}) this procedure gives zero value of the radion potential at $\rho=r_{0}$ (\ref{18}) where the jump conditions (\ref{11})-(\ref{14}) are valid; this corresponds to the consistency conditions \cite{Leblond}. This is not the case for the dual bulk electric fluxbrane solution - see {\it {Note 2}} below. Thus, symbolically, the Brans-Dicke type effective action $S^{(3+1)}$, depending on the general metric ${\tilde g}_{\mu\nu}(x)$ of the manifold $M_{(3+1)}$ and on the radion field $\rho (x)$, is obtained when the extra coordinates are integrated out in the action (\ref{1}): \begin{eqnarray} \label{25} &&S^{(4)}=2\int_{0}^{\rho}{\rm L}_{\it bulk}+\int {\rm L}_{\it{l.s.}}= \\ &&=\int\left[\Phi(\rho){\widetilde R}^{(4)}-\frac{1}{2}\omega (\rho){\tilde g}^{\mu\nu}\rho,_{\mu}\rho,_{\nu}-{\widetilde V}(\rho)\right]\sqrt{-{\tilde g}^{(4)}}d^{(4)}x, \nonumber \end{eqnarray} where $L_{\it bulk}$ sums up all bulk terms in (\ref{1}) including the Gibbons-Hawking term, $L_{\it{l.s.}}$ is given in (\ref{24}); ${\widetilde R}^{(4)}$ is the scalar curvature of the $(3+1)$-dimensional space-time described by the metric ${\tilde g}_{\mu\nu}(x)$ slowly depending on $x^{\mu}$.
The Brans-Dicke field $\Phi (\rho)$, the kinetic term function $\omega (\rho)$ and the auxiliary radion potential ${\widetilde V}(\rho)$ in (\ref{25}) are calculated when the bulk metric (\ref{15}) is used in (\ref{1}) with ${\tilde g}_{\mu\nu}=\eta_{\mu\nu}$, the Minkowski metric in four dimensions; $Q_{(4)}$, $\sigma$, $\varphi_{\infty}$ may be expressed through the characteristic length $L$ of the throat with use of the dependences given in (\ref{15}), (\ref{19}); $T_{z}$ is the period of the torus $S^{1}$ in (\ref{15}), $\Omega_{4}$ is the volume of the four-sphere of unit radius. Simple calculations finally give: For the Brans-Dicke field: \begin{equation} \label{26} \Phi(\rho)=2M^{8}\Omega_{4} T_{z} \int_{0}^{\rho} Hr^{4}\,dr \, = \, 2M^{2}(ML)^{5}(MT_{z})\Omega_{4} \left[\frac{1}{5}\left (\frac{\rho}{L}\right)^{5}+\frac{1}{2} \left(\frac{\rho}{L} \right)^{2} \right]. \end{equation} For the kinetic term function: \begin{eqnarray} \label{27} &&\omega(\rho)=M^{8}\Omega_{4} T_{z} \int \sigma e^{-\varphi_{\infty}/12}\delta(r-\rho)H^{4/3}r^{4}\,dr= \nonumber \\ \\ &&=M^{4}(ML)^{3}(MT_{z})\Omega_{4} \, 12\left(\frac{2}{3}\right)^{1/3}\left(\frac{\rho}{L}\right)^{4}\left[1+\left(\frac{L}{\rho}\right)^{3}\right]^{4/3}.
\nonumber \end{eqnarray} And for the potential in (\ref{25}) (the expression in square brackets includes the GH term; the gravity constraint was used in deriving it): \begin{eqnarray} \label{28} &&{\widetilde V}(\rho)=M^{8}\Omega_{4} T_{z} \Bigg\{-2\int_{0}^{\rho}\left[\frac{24}{H^{5/8}r^{2}}-\frac{Q_{(4)}^{2}e^{\varphi_{\infty} /2}H^{-1/8}}{H^{5/2}r^{8}}\right]H^{5/8}r^{4}\,dr + \\ &&+\int \sigma e^{-\varphi_{\infty} /12} H^{1/48}\delta(r-\rho)H^{5/16}r^{4}\,dr\Bigg\}=4M^{4}(ML)^{3}(MT_{z})\Omega_{4}F\left(\frac{\rho}{L}\right), \nonumber \end{eqnarray} where the function $F(y)$ is given by the formula: \begin{eqnarray} \label{29} &&F(y)=y^{3}\left[3\left(\frac{2}{3}\right)^{1/3}(1+y^{3})^{1/3}+\frac{3}{2(1+y^{3})}-4\right], \nonumber \\ &&y=\frac{\rho}{L}, \qquad y_{0}=\frac{r_{0}}{L}=2^{-1/3}, \end{eqnarray} the value of $y_{0}$ comes from (\ref{18}). It is easy to see that $F(y)$ possesses a minimum at $y=y_{0}$ and $F(y_{0})=0$. The same is true for the potential ${\tilde V}(\rho)$ (\ref{28}) at $\rho=r_{0}$. {\it Note 1.} Although the Gibbons-Hawking term is a full divergence and we consider compact extra space, it would be a mistake to discard the GH term in (\ref{25}) when the radion effective potential is calculated. Direct calculation of the GH term in (\ref{25}), when the step functions reflecting the mirror $Z_{2}$-jumps of $b(r)$, $c(r)$, $a(r)$, $\varphi (r)$ are taken into account, shows that it indeed vanishes at the solution of the dynamical equations, i.e. at $\rho=r_{0}$. But the GH contribution to the radion effective potential is by no means equal to zero when the upper limit of integration in (\ref{25}) is changed from $r_{0}$ to an arbitrary value $\rho$. {\it Note 2.} It is impossible to calculate from the action (\ref{1}) a physically meaningful radion effective potential in case the dual electric 6-form $F_{6}$ is used in (\ref{1}).
Although the electric $4$-brane extremal solution is given by the same formulae (\ref{15}), (\ref{18})-(\ref{21}) as the magnetic one, the values of the action (\ref{1}), $S_{m}$ and $S_{e}$, calculated on the magnetic and electric fluxbrane solutions as backgrounds, drastically differ. The general consistency conditions \cite{Leblond} say that $S_{m}$ must vanish at the solution of the dynamical equations, but these conditions are not applicable to $S_{e}$ (see Appendix in \cite{Alt06}). According to (\ref{28}), (\ref{29}), ${\widetilde V}(\rho)\to 0$ at $\rho\to 0$ and ${\widetilde V}(\rho)\to \infty$ at $\rho\to \infty$. However, this behavior is of no physical interest, since the Brans-Dicke field (\ref{26}) possesses similar behavior. To get the physical radion effective potential, the low-dimensional Brans-Dicke effective action (\ref{25}) must be written in the Einstein-frame metric, and the radion field must be transformed in a way providing the canonical form of its kinetic term. Thus let us rescale the metric ${\tilde g}_{\mu\nu}$ in the Brans-Dicke action in the RHS of (\ref{25}) to the Einstein-frame metric $g_{\mu\nu}$: \begin{equation} \label{30} {\tilde g}_{\mu\nu}=\frac{M_{\rm Pl}^{2}}{\Phi(\rho)}\,g_{\mu\nu}, \end{equation} where $M_{\rm Pl}=10^{19}\,GeV$ is the Planck mass. Effective action (\ref{25}), being expressed as a functional of the Einstein-frame metric $g_{\mu\nu}$ introduced in (\ref{30}) and of the canonical radion field $\psi$ (defined below), takes the standard form: \begin{equation} \label{31} S^{(4)}=\int\left[M_{\rm Pl}^{2}R^{(4)}-(1/2)M_{\rm Pl}^{2}(\nabla \psi)^{2}-\mu^{4}V(\psi)\right]\sqrt{-g^{(4)}}\,d^{(4)}x. \end{equation} $\mu$ is a calculable constant of dimensionality of mass - the characteristic scale of the radion potential; $V(\psi)$ is taken dimensionless for convenience.
The dimensionless (normalized to the Planck mass) canonical radion field $\psi(\rho)$ is also introduced in (\ref{31}): \begin{equation} \label{32} \psi(\rho)=\frac{1}{L}\int_{r_{0}}^{\rho}\epsilon (\rho)\,d\rho=\int_{y_{0}}^{y}\epsilon (y)\,dy, \qquad y=\frac{\rho}{L}, \end{equation} here the point (\ref{18}) of stable extremum of the radion effective potential is chosen at $\psi=0$; $y_{0}=2^{-1/3}$; $\epsilon (\rho)$ is expressed through the functions $\Phi(\rho)$, $\omega(\rho)$ given in (\ref{26}), (\ref{27}): \begin{eqnarray} \label{33} &&\epsilon^{2}(y)=L^{2}\left[ \frac{\omega(\rho)}{\Phi(\rho)}+3 \left(\frac{1}{\Phi}\frac{d\Phi}{d\rho}\right)^{2}\right]= \nonumber \\ \\ &&=6 \left(\frac{2}{3}\right)^{1/3} y^{4} \left(\frac{y^{5}}{5}+\frac{y^{2}}{2}\right)^{-1}\left(1+\frac{1}{y^{3}}\right)^{4/3}+3y^{8}\left(\frac{y^{5}}{5}+\frac{y^{2}}{2}\right)^{-2}\left(1+\frac{1}{y^{3}}\right)^{2}. \nonumber \end{eqnarray} It is seen from (\ref{33}) that in the $\rho \ll L$ ($y \ll 1$) limit, i.e. inside the throat, $\epsilon(y) \sim y^{-1}$, and in the $\rho \gg L$ ($y \gg 1$) limit $\epsilon \sim y^{-1/2}$. Hence it follows from (\ref{32}) that in these two limits: \begin{equation} \label{34} \psi (y)=c\cdot \ln y, \qquad c=2 (18^{1/3}+3)^{1/2}, \qquad 0<y=\frac{\rho}{L}\ll 1, \end{equation} and \begin{equation} \label{35} \psi (y)=2 (10 \cdot 18^{1/3})^{1/2} \,y^{1/2}, \qquad 1\ll y=\frac{\rho}{L}<\infty.
\end{equation} The radion potential $\mu^{4}V(\psi)$ in (\ref{31}) is expressed through the auxiliary potential ${\widetilde V}(\rho)$ (\ref{28}) and the Brans-Dicke field $\Phi (\rho)$ (\ref{26}): \begin{eqnarray} \label{36} &&\mu^{4}V(\psi)=M_{\rm Pl}^{4} \frac{{\widetilde V}(\rho)}{\Phi(\rho)^{2}}=\frac{M_{\rm Pl}^{4}}{(MT_{z})(ML)^{7}\Omega_{4}}\, K(y(\psi)), \nonumber \\ \nonumber \\ &&V(\psi) \equiv K(y(\psi))= \frac{F(y)}{\left(\frac{y^{5}}{5}+\frac{y^{2}}{2}\right)^{2}}, \end{eqnarray} where the function $F(y)$ is given in (\ref{29}) and the dependence $y(\psi)$ must be obtained from (\ref{32}), (\ref{33}). The characteristic density $\mu^{4}$ of the radion effective potential is defined in (\ref{36}). It depends on the period $T_{z}$ of the torus $S^{1}$ in the metric (\ref{15}) and on the length $L$ of the throat; the expression for $ML$ is given in (\ref{19}), and the value of $MT_{z}$ will be calculated in Sec. 4 below. Because of the strong inequality $ML\gg 1$, demanded by the applicability of the low-energy string approximation, $\mu^{4}$ in (\ref{36}) proves to be suppressed compared to the Planck density $M_{\rm Pl}^{4}$. This is important, since the effective action approach is valid only if the radion potential in (\ref{31}) is essentially below the Planck density: \begin{equation} \label{37} \mu^{4}V(\psi)\ll M_{\rm Pl}^{4}. \end{equation} We shall see in Sec. 4 that although $V(\psi)$ grows down the throat, inequality (\ref{37}) is valid everywhere in the region of applicability of the low-energy string approximation given by the condition (\ref{17}). The form of the dimensionless potential $V(\psi)$ (\ref{36}) depends only on the choice of the theory. For the Type IIA supergravity with the codimension-one local source (choice (\ref{2}) of dimensionalities and coupling constants) $V(\psi)$ is drawn in \cite{Alt06} (Curve "D" in Fig. 1 of \cite{Alt06}). The potential is non-negative; as expected, it possesses a zero minimum at $\psi=0$ where the junction conditions (\ref{11})-(\ref{14}) are valid.
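The zero minimum of $F(y)$ (\ref{29}) at $y_{0}=2^{-1/3}$, and hence of $V(\psi)$ at $\psi=0$, can be checked directly; the following Python snippet is an illustrative numerical cross-check, not part of the derivation.

```python
# Numerical cross-check of (29): F(y) vanishes and has a local minimum
# at y0 = 2**(-1/3).
def F(y):
    return y**3 * (3 * (2/3)**(1/3) * (1 + y**3)**(1/3)
                   + 3 / (2 * (1 + y**3)) - 4)

y0 = 2**(-1/3)
eps = 1e-3
print(abs(F(y0)) < 1e-12)                # True: F(y0) = 0
print(F(y0 - eps) > 0, F(y0 + eps) > 0)  # True True: a genuine minimum
```

Since the denominator of $K(y)$ in (\ref{36}) is positive, the same point is a zero minimum of the full dimensionless potential.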
To the right of this point $V(\psi)$ increases, reaches a maximum and then falls again to zero at infinity. Thus the stable state $\psi=0$, where supposedly our Universe "lives", is protected from the runaway decompactification by a certain potential barrier. This situation is typical for all theories with compactified extra dimensions \cite{Giddings2}. It would be of interest to study, within the considered model, to what extent this "protection" is reliable; we leave this work for the future. The asymptotic behavior of the dimensionless radion potential $V(\psi)$ in the limits $\psi \ll -1$ and $\psi \gg 1$ immediately follows from (\ref{36}) with account of the expression (\ref{29}) for $F(y)$ and the asymptotics (\ref{34}), (\ref{35}) for $\psi(y)$: For $\psi \ll -1$: \begin{equation} \label{38} V_{-}(\psi)= (2^{7/3}3^{2/3}-10)e^{-\psi /c} \approx 0.48 \cdot e^{-0.21\cdot\psi}, \end{equation} where $c$ is given in (\ref{34}). And at $\psi \gg 1$: \begin{equation} \label{39} V_{+}(\psi)=2^{20}\cdot 3^{5}\cdot 5^{8}\cdot \left(\frac{2}{3}\right)^{1/3}\cdot \psi^{-12} \approx \frac{8.7\cdot 10^{13}}{\psi^{12}}. \end{equation} Now we can look briefly at the possibility to apply these results to the description of inflation in the early Universe \cite{Guth}, \cite{Dvali}, \cite{Mukhanov}. The radion field introduced above may hopefully serve as an inflaton field (cf. \cite[b]{Cline}, \cite{Mazumdar}). We may suppose that initially the "heavy lid" boundary of the extra space was located somewhere deep in the throat ($\rho_{\it in} \ll L$ or $\psi_{\it in} \ll -1$) and after that, obeying the dynamics determined by the action (\ref{31}), it rolls down the exponential asymptotic $V_{-}(\psi)$ (\ref{38}) of the radion potential (\ref{36}) to the steep slope leading to the stable brane position (\ref{18}) ($\psi=0$, see (\ref{32})) at the top of the throat.
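The rounded coefficients quoted in (\ref{38}), (\ref{39}) follow from the exact combinations of radicals; as an illustrative check, they can be evaluated in a few lines:

```python
# Numerical values of the constants in (38), (39), with c from (34);
# an illustrative check of the rounded coefficients quoted in the text.
c = 2 * (18**(1/3) + 3)**0.5
coef_minus = 2**(7/3) * 3**(2/3) - 10            # prefactor in (38)
slope = 1 / c                                    # exponent 1/c in (38)
coef_plus = 2**20 * 3**5 * 5**8 * (2/3)**(1/3)   # prefactor in (39)
print(round(coef_minus, 2), round(slope, 2))     # 0.48 0.21
print(f"{coef_plus:.1e}")                        # 8.7e+13
```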
The following questions must be answered: does the radion potential $V(\psi)$ (\ref{36}) meet the necessary flatness and slow-roll conditions? Can this scenario provide the number of $e$-foldings $N_{e}$ during inflation demanded by the astrophysical observations ($N_{e} \approx 80-100$) \cite{Dvali}-\cite{WMAP}? For the exponentially decreasing potential $V(\psi)\sim e^{-k\psi}$ the flatness and slow-roll conditions demand $k^{2} \ll 1$, which is seemingly true for $k=0.21$ as in (\ref{38}). The number of $e$-foldings during inflation is given by the simple formula \cite{Dvali, Mukhanov} (the prime means derivative with respect to $\psi$ which, we remind, is dimensionless - in Planck units): \begin{equation} \label{40} N_{e}=\int_{\psi_{\it in}}^{\psi_{\it fin}}\frac{V(\psi)}{V'(\psi)}\,d\psi = \frac{\psi_{\it fin}- \psi_{\it in}}{k}, \end{equation} where the last equality is obtained for the exponential potential; $\psi_{\it in}$ and $\psi_{\it fin}$ are the values of the radion (inflaton) field at the beginning and at the end of inflation. Thus for the value $k=0.21$ (\ref{38}) it follows from (\ref{40}) that the necessary number of $e$-foldings is reached if $\psi_{\it fin}- \psi_{\it in} > 20$. The end of inflation, where reheating begins, is expected at the beginning of the steep slope of the radion potential. Analysis of the exact analytical expression of $V(\psi)$ (\ref{36}) shows that the steep slope begins somewhere at $\psi_{\it fin} \approx -20$. Hence, to obtain a sufficiently long period of inflation, the initial value of the radion (inflaton) field must satisfy $\psi_{\it{in}} \le -40$, i.e. the initial position of the "heavy lid" boundary must be sufficiently deep in the throat. Let us look at the validity of the inequality $\psi_{\it in} \le -40$ from the point of view of applicability of the low-energy string approximation. The permitted values of the isotropic coordinate $r$ must obey the inequality $r> r_{\it min}$, where $r_{\it min}$ is determined in (\ref{17}).
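The two estimates above can be sketched numerically; the snippet below is illustrative, with $k=1$ and $g=1$ taken as sample order-one values (they are not fixed by the model):

```python
import math

# Illustrative estimates: e-foldings (40) on the exponential asymptotic
# (38), and the depth psi(r_min) via (34), (17), (19); k = 1 and g = 1
# are assumed sample values of order one.
c = 2 * (18**(1/3) + 3)**0.5        # coefficient in (34)
k_slope = 1 / c                     # ~0.21, slope of the exponential (38)

N_e = 20 / k_slope                  # for Delta psi = 20
print(round(N_e))                   # 95: enough e-foldings

y_min = 1.0**8 * (10.5 * 1.0)**(-16)   # r_min/L from (17) with ML = 10.5 g
psi_min = c * math.log(y_min)          # asymptotic (34)
print(round(psi_min))                  # -178: far below psi_in ~ -40
```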
The corresponding minimal permitted value $\psi_{\it min}$ may be calculated from the asymptotic expression (\ref{34}) (where one takes $y_{\it min}= \rho_{\it min}/L=r_{\it min}/L$). If we express $(ML)$ in (\ref{17}) from (\ref{19}), then the value of $\psi_{\it min}$ is found from (\ref{34}): \begin{equation} \label{41} \psi_{\it min}=38\cdot \ln k-180-76\cdot \ln g. \end{equation} As long as $k$ in (\ref{17}) is of order one and the parameter $g\ge 1$ (according to (\ref{19}) this is demanded by the condition $ML\gg 1$ of validity of the whole approach), this value of $\psi_{\it min}$ is essentially below the value providing the necessary number of $e$-foldings during inflation ($\psi_{\it in} \approx -40$). Thus the permitted length of the throat does not come into conflict with the demands of the early inflation. To be sure that the effective action approach of this Section is consistent, the validity of inequality (\ref{37}) must be established. This will be done in Subsection {\it 4-c}. \section{Deformation of the elementary fluxbrane solution} \vspace{0.5cm} {\large\it 4-a. General formulae.} \vspace{0.5cm} In this Section equations (\ref{5})-(\ref{8}) will be analyzed for the case of non-zero "Maxwell" field $F_{(2)}$ given in ansatz (\ref{4}) and non-zero small constant positive curvature of the Universe, i.e. when $Q_{(2)}\ne 0$, ${\tilde h}\ne 0$ in (\ref{5})-(\ref{8}). Let us rewrite the metric of ansatz (\ref{4}) in the form: \begin{equation} \label{42} ds_{(10)}^{2}=b^{2}({\tilde g}_{\mu\nu}dx^{\mu}dx^{\nu}+ Udz^{2})+f^{2}\left(\frac{dr^{2}}{U}+r^{2}d\Omega_{4}^{2}\right), \end{equation} which is always possible by a transformation of the isotropic coordinate $r$ in (\ref{4}).
The non-deformed solution (\ref{15}) of equations (\ref{5})-(\ref{8}) takes, for the metric (\ref{42}), the form: \begin{equation} \label{43} b=\bar{b}=H^{-3/16}, \quad U=\bar{U}=1, \quad f=\bar{f}=H^{5/16}, \quad e^{\varphi}=e^{\bar{\varphi}}=e^{\bar{\varphi}_{\infty}}H^{-1/4}, \end{equation} here we included for convenience the expression (\ref{15}) for the "non-deformed" dilaton field. There is a well-known \cite{Horowitz}-\cite{Aharony} exact Schwarzschild-type bulk solution generalizing metric (\ref{15}) in the manner of (\ref{42}), where \begin{equation} \label{44} U=U_{\it{Sch}}=1+\frac{const}{r^{3}} \end{equation} and $b$, $f$, $\varphi$ are as in (\ref{43}). This solution was used in \cite{Alt06} to build the IR end of the throat at the "bolt" point where $U_{\it{Sch}}=0$. However, as was estimated in \cite{Alt06} and shown exactly in \cite{Altsh1} for the 6D generalization of the Randall-Sundrum model, the value of Dark Energy obtained from the Schwarzschild-type deformation of the elementary throat-like solution is about 60 orders of magnitude above the observed value $10^{-120}M_{\rm Pl}^{4}$. That is why in \cite{Altsh1} not the Schwarzschild-type but the Reissner-Nordstrom-type deformation of the Randall-Sundrum $AdS$ model was used, and it was shown that in this case the calculated value of the Dark Energy may be in accordance with observations. Before starting the analysis of the solution of Eqs. (\ref{5})-(\ref{8}) when $Q_{(2)}\ne 0$ and ${\tilde h}\ne 0$, it is worthwhile to outline briefly the logic of introducing in these models an extremely small positive curvature of the Universe. The point is that the presence of $U(r)\ne const$ in metric (\ref{42}) results in a discrepancy of the Israel junction conditions at the UV boundary of space-time (the discrepancy appears since it is supposed that the energy-momentum tensor of the boundary is isotropic).
Because of the quick decrease of the additional $r$-dependent term in $U(r)$ with the increase of $r$ from the IR end to the UV end of the warped extra space, this discrepancy at the UV end proves to be quite small. The remedy may be the introduction of a small non-zero positive curvature of the Universe, which gives an additional term in $U(r)$ repairing the Israel junction conditions. However, the decrease with growth of $r$ of the Schwarzschild term (\ref{44}) proves to be insufficiently quick and, as was said above, does not give the observed value of the Dark Energy; a more satisfactory result may be expected when the Reissner-Nordstrom-type deformation is used. To our knowledge, the exact solution of Eqs. (\ref{5})-(\ref{8}) when $Q_{(2)}\ne 0$, ${\tilde h}\ne 0$ has not been found yet. If $Q_{(2)}$, ${\tilde h}$ are small compared to the scales of the non-deformed solution (\ref{15}), and this is the case under consideration, then the induced variations of $b(r)$, $f(r)$, $\varphi(r)$ in (\ref{42}) are also small compared to their "non-deformed" values (\ref{43}) and may be studied in the linear approximation of Eqs. (\ref{5})-(\ref{8}). Also small will be the variations of the position of the UV boundary and of the "fine-tuning" condition, i.e. the variations of $r_{0}$, $Q_{(4)}$, whose "non-deformed" values are given in (\ref{18}), (\ref{21}). However, in the context of the present paper there is no need to calculate all these small variations. The peculiarity of the situation is that the change of $U(r)$ in (\ref{42}) need not be small compared to $U=\bar{U}=1$ of (\ref{43}). To obtain $U(r)$ it is sufficient to subtract equations (\ref{5}), (\ref{6}), where in accordance with (\ref{42}) we put $c^{2}=U b^{2}$, $N^{2}=f^{2}/U$, $a=rf$. The resulting equation for $U(r)$ looks as follows: \begin{equation} \label{45} U''+U'\left(5\frac{b'}{b}+3\frac{f'}{f}+\frac{4}{r}\right)=-2f^{2}\,\frac{e^{-3\varphi /2}Q_{(2)}^{2}}{2b^{8}f^{8}r^{8}}-f^{2}\,\frac{6{\tilde h}^{2}}{b^{2}}.
\end{equation} Since $Q_{(2)}$, ${\tilde h}$ in the RHS of (\ref{45}) are supposed to be small, the other functions in (\ref{45}) ($b(r)$, $f(r)$, $\varphi(r)$) may be taken in the zero approximation. Substitution of their expressions (\ref{43}) into (\ref{45}) gives: \begin{equation} \label{46} U''+\frac{4}{r}U'=-\frac{Q_{(2)}^{2}e^{-3\varphi_{\infty}/2}}{r^{8}}-6{\tilde h}^{2}\left(1+\frac{L^{3}}{r^{3}}\right). \end{equation} The free solution of (\ref{46}) is, as expected, the Schwarzschild potential (\ref{44}); in what follows we shall discard this term of $U(r)$ for the reasons explained above in this Section. Subtraction of the junction conditions (\ref{11}), (\ref{12}) with account of $c^{2}=Ub^{2}$ gives the simple condition of their consistency: \begin{equation} \label{47} U'(r_{0})=0. \end{equation} Strictly speaking, (\ref{47}) must be valid at the location of the $Z_{2}$-symmetric UV boundary slightly shifted from its "non-deformed" position (\ref{18}). But in the lowest approximation we may take in (\ref{47}) $r_{0}=L/2^{1/3}$ given in (\ref{18}). From (\ref{47}) the value of ${\tilde h}$ will be determined. \vspace{0.5cm} {\large\it 4-b. Case $F_{(2)}\ne 0$, ${\tilde h}=0$. Determination of modulus $T_{z}$.} \vspace{0.5cm} As will be seen, the value of ${\tilde h}$ determined from condition (\ref{47}) is extremely small; hence at the IR end of the throat, and practically everywhere inside the throat, the ${\tilde h}$-term is essentially below the $Q_{(2)}$-term in the RHS of (\ref{46}). Thus let us first write down the solution of equation (\ref{46}) in the case ${\tilde h}=0$: \begin{equation} \label{48} U=1-\left(\frac{l}{r}\right)^{6}, \qquad l^{6} \equiv \frac{Q_{(2)}^{2}e^{-3\varphi_{\infty}/2}}{18}, \end{equation} it is supposed that $l\ll L$, which means that the deformation (\ref{48}) of the elementary solution (\ref{15}) is small; we remind that the Schwarzschild term $\sim r^{-3}$ (see (\ref{44})) is deliberately omitted in (\ref{48}).
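That (\ref{48}) indeed solves (\ref{46}) with ${\tilde h}=0$ is a one-line computation ($U'=6l^{6}/r^{7}$, $U''=-42 l^{6}/r^{8}$, so $U''+4U'/r=-18 l^{6}/r^{8}$); the following illustrative snippet (with the unit choice $l=1$) confirms it numerically:

```python
# Check that U(r) = 1 - (l/r)**6 of (48) solves (46) with h_tilde = 0,
# i.e. U'' + (4/r) U' = -18 l**6 / r**8, using l**6 from (48); l = 1
# is an illustrative unit choice.
l = 1.0
Up  = lambda r:  6 * l**6 / r**7     # U'(r)
Upp = lambda r: -42 * l**6 / r**8    # U''(r)

ok = all(abs(Upp(r) + 4 * Up(r) / r + 18 * l**6 / r**8) < 1e-12
         for r in (1.3, 2.0, 5.0))
print(ok)    # True
```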
Metric (\ref{42}) with $U(r)$ given in (\ref{48}) is a Euclidean-"time" version of the Reissner-Nordstrom generalization of the elementary throat-like solution. The "bolt" point $r=l$ where $U=0$ is the IR end of the throat; it is topologically equivalent to the pole of a 2-sphere \cite{Hawking}-\cite{Aghababaie}. Space-time (\ref{42}) may possess a conical singularity at this point in case a "matter-trapping" codimension-two IR brane is placed there (see e.g. \cite{Aghababaie, Carroll, Rubakov2}). This would produce a deficit angle $\delta_{d}$ depending on the tension of the IR brane, which would influence the value of the period $T_{z}$ of the Euclidean "time" $S^{1}$ calculated from (\ref{42}), (\ref{48}). We shall not consider this option in the present paper and postulate that $\delta_{d}=1$; hence the IR end of the throat is supposed to be smooth. Then, taking the zero-order dependences (\ref{43}) for $b(r)$, $f(r)$ in metric (\ref{42}), and with account of (\ref{48}) for $U(r)$, the following expression for the period of the torus $S^{1}$ of space-time (\ref{15}) (or (\ref{42})) is obtained: \begin{equation} \label{49} T_{z}=\frac{2\pi}{3}\,H^{1/2} \, l \approx \frac{2\pi}{3} \, L \, \left(\frac{L}{l}\right)^{1/2}, \end{equation} where $H(r)$ is given in (\ref{15}) and is taken at $r=l$; the last approximate equality is valid since $l \ll L$. Thus the period $T_{z}$ of the extra torus of space-time (\ref{42}) is not an arbitrary modulus of the solution but is determined by (\ref{49}) through the characteristic lengths $L$, $l$ of the throat. From (\ref{49}) it follows that $T_{z} \gg L$. Substitution of modulus (\ref{49}) into expression (\ref{26}) for the Brans-Dicke field $\Phi$, with account of (\ref{18}), (\ref{19}), gives the important quantity, entering the formulae for hierarchies in Sec.
5, as a function of $l$ and the parameter $g$ (\ref{20}): \begin{equation} \label{50} \frac{M}{\sqrt{\Phi(r_{0})}}=10^{-4}\, g^{-3} \, \left(\frac{l}{L}\right)^{1/4}, \end{equation} where the coefficient $10^{-4}$ absorbs the numbers of formulae (\ref{18}) for $r_{0}$, (\ref{19}) for $ML$, (\ref{26}) for $\Phi$ and (\ref{49}) for $T_{z}$, including the value of the volume of the 4-sphere of unit radius $\Omega_{4}=8\pi^{2}/3$ in (\ref{26}). \vspace{0.5cm} {\large\it 4-c. Consistency of the effective action approach.} \vspace{0.5cm} Now, with the modulus $T_{z}$ (\ref{49}) determined, it is possible to check the inequality (\ref{37}), which is the condition of applicability of the low-dimensional effective action approach of Sec. 3. Since the potential (\ref{36}) grows down the throat, it is sufficient to verify (\ref{37}) at the IR end of the throat, i.e. at $\rho = l$. Substituting in (\ref{37}) the asymptotic expressions (\ref{38}), (\ref{34}) for $V(\psi)$, $\psi(\rho)$ (and with account of formulae (\ref{19}), (\ref{49}) for $ML$, $MT_{z}$ entering the definition of $\mu^{4}$ in (\ref{36})), inequality (\ref{37}) at $\rho=l$ is expressed through the location $l$ of the IR end of the throat and the parameter $g$ defined in (\ref{20}): \begin{equation} \label{51} \frac{\mu^{4}}{M_{\rm Pl}^{4}}\,V(\psi(l))=6\cdot 10^{-11}\,g^{-8}\,\left(\frac{L}{l}\right)^{1/2} < 1. \end{equation} The location of the IR end of the throat must meet the demand $l>r_{\it min}$ of validity of the low-energy action (\ref{1}), where $r_{\it min}$ is estimated in (\ref{17}). It is interesting to note that even at this depth inequality (\ref{51}) remains valid. Moreover, its validity does not depend on the value of the parameter $g$, which drops out from the expression (\ref{51}).
In fact, substitution of $l=r_{\it min}$ in (\ref{51}), with account of expression (\ref{17}) for $r_{\it min}$, gives: \begin{equation} \label{52} \frac{\mu^{4}}{M_{\rm Pl}^{4}}\,V(\psi(r_{\it min}))= \frac{10^{-2}}{k^{4}} < 1, \end{equation} this inequality is valid since the coefficient $k$ introduced in (\ref{17}) is supposed to be of order one. \vspace{0.5cm} {\large\it 4-d. Adjustment of Israel conditions and determination of ${\tilde h}$.} \vspace{0.5cm} Condition (\ref{47}) of consistency of the Israel junction equations for the subspaces $M_{(3+1)}$ and $S^{1}$ of the boundary of space-time (\ref{42}) cannot be fulfilled for $U(r)$ given in (\ref{48}). To repair the Israel conditions, the necessary anisotropy of the energy-momentum tensor of the boundary was introduced in \cite{Louko}, \cite{Aghababaie} in a 6D model. But perhaps it is more natural to avoid arbitrary modifications of the action of the local source in (\ref{1}) and to resolve the problem by introducing a small positive curvature of the space-time $M_{(3+1)}$ \cite{Altsh1}. We shall go this way. Thus, taking ${\tilde h}\ne 0$ in the RHS of (\ref{46}) and with account of (\ref{48}), the following expression for $U(r)$ is obtained from equation (\ref{46}): \begin{equation} \label{53} U=1-\left(\frac{l}{r}\right)^{6}-\frac{3}{5}{\tilde h}^{2}r^{2}+\frac{3{\tilde h}^{2}L^{3}}{r}. \end{equation} Then ${\tilde h}$ is immediately determined from (\ref{47}) (where $r_{0}=L/2^{1/3}$ (\ref{18})): \begin{equation} \label{54} {\tilde h}=\sqrt{\frac{2^{5/3}\cdot 5}{3}}\,\frac{1}{L}\, \left(\frac{l}{L}\right)^{3}. \end{equation} The condition $U=0$ gives the location of the IR end of the throat. The presence of the ${\tilde h}$-terms in the expression (\ref{53}) for $U(r)$ shifts this position from the value $r=l$ determined from (\ref{48}).
This shift is, however, extremely small, since it follows from (\ref{54}) that at $r=l$ the main (second) ${\tilde h}$-term in the RHS of (\ref{53}) is suppressed by the factor $(l/L)^{5}$ compared to the $Q_{(2)}$-term. Since $l\ll L$, we may consider $r=l$ the location of the IR end of the throat. Actually, the "curvature" ${\tilde h}$-terms may be neglected in (\ref{53}), as well as in the RHS of (\ref{46}), practically everywhere inside the throat; they become comparable with the "Maxwell" $Q_{(2)}$-terms only in the vicinity of the top of the throat $r\cong L$, although both remain quite small there. The auxiliary "Hubble constant" ${\tilde h}$ (\ref{54}) characterises the curvature ${\widetilde R}^{(4)}$ of the manifold $M_{(3+1)}$ (see (\ref{9})). To obtain the observed rate of acceleration of the Universe $h$, we must rescale ${\widetilde R}^{(4)}$ with the transformation (\ref{30}) to the Einstein-frame curvature $R^{(4)}$. This will be done in the next Section. \section{Calculation of the mass scale hierarchy and of the "acceleration hierarchy"} {\large\it 5-a. Formula for mass scale hierarchy.} \vspace{0.5cm} Following the Randall-Sundrum approach \cite{Randall}, we take the mass parameters of the matter action, written in the primordial metric of the action (\ref{1}), equal to the fundamental scale $M$. It is conventionally supposed that the massive matter of the Standard Model is concentrated near the IR end of the strongly warped space-time - be it because of trapping at the IR brane or because of pure gravitational accretion to the IR end. Then, in case the warped throat-like solution (\ref{15}) is considered, the mass of the visible matter is decreased compared to $M$ by the value of the warp factor $H^{-3/16}$ at $r=r_{\it IR}$: \begin{equation} \label{55} M \to M\,H^{-3/16}(r_{\it IR}) \approx M\, \left(\frac{r_{\it IR}}{L}\right)^{9/16}, \end{equation} where it was taken into account that $H \approx (L/r)^{3}$ at $r\ll L$.
In the previous section the IR end of the throat was built as a "bolt" point $r=l$ of the deformed metric (\ref{42}), (\ref{48}). In what follows we shall put
\begin{equation}
\label{56}
r_{\it IR}=l > r_{\it min},
\end{equation}
where $r_{\it min}$ (\ref{17}) is the point in the depth of the throat where the effective low-energy string-induced action (\ref{1}) is no longer valid. To obtain the observed electroweak scale $m$, it is necessary to write down the effective matter action in lower dimensions in the Einstein-frame metric $g_{\mu\nu}$ introduced in (\ref{30}). The Brans-Dicke field $\Phi(\rho)$ in (\ref{30}) must be taken at $\rho=r_{0}$ (\ref{18}), i.e. at the minimum of the radion effective potential (\ref{28}) (or, equivalently, at the minimum of the potential (\ref{36}) at $\psi=0$), where our Universe is supposedly stabilized after inflation and reheating. In calculating the mass scale hierarchy it is possible to use the expression (\ref{26}) for $\Phi (r_{0})$ obtained by integrating out the extra coordinates in the action (\ref{1}), (\ref{2}) when the non-deformed solution (\ref{15}) is taken as a background. The only impact of the deformation upon these calculations is the dynamical fixation of the IR end of the throat at $r=l$ and the determination of the period of the torus $T_{z}$ (\ref{49}) performed in Sec. 4. Thus from (\ref{55}), (\ref{30}), (\ref{50}) the following expression is obtained for the mass scale hierarchy as a function of the location of the IR end of the throat $l$ and the parameter $g$ (\ref{20}) of the fluxbrane solution:
\begin{equation}
\label{57}
\frac{m}{M_{\rm Pl}}=H^{-3/16}\,\frac{M}{\sqrt{\Phi(r_{0})}}= 10^{-4} \cdot g^{-3} \cdot \left(\frac{l}{L}\right)^{13/16}.
\end{equation}
In case $g=1$, to get the observed value of the mass hierarchy it is necessary to place the IR end of the throat sufficiently deeply: $l/L\approx 10^{-16}$. This value of $l$ practically coincides with the limit of validity of the low-energy approximation, $r_{\it min}$ (\ref{17}).
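As a quick numerical illustration of (\ref{57}) (a sketch only: the prefactor $10^{-4}$ and the exponent $13/16$ are taken at face value, and the function names are ours), the dependence of the hierarchy on the depth $l/L$ can be tabulated and inverted:

```python
# Numerical sketch of the mass-hierarchy formula (57):
#   m/M_Pl = 1e-4 * g**(-3) * (l/L)**(13/16).
# "mass_hierarchy" and "depth_for_hierarchy" are illustrative names,
# not part of the paper.

def mass_hierarchy(l_over_L, g=1.0):
    """m/M_Pl for a given IR-end position l/L and fluxbrane parameter g."""
    return 1e-4 * g**(-3) * l_over_L**(13.0 / 16.0)

def depth_for_hierarchy(target, g=1.0):
    """Invert (57): the depth l/L needed for a given hierarchy m/M_Pl."""
    return (target * g**3 / 1e-4) ** (16.0 / 13.0)

# For g = 1 a hierarchy of 1e-17 requires l/L ~ 1e-16, i.e. an IR end
# practically at the validity limit r_min of (17).
print(mass_hierarchy(1e-16))       # ~1e-17
print(depth_for_hierarchy(1e-17))  # ~1e-16
```

Increasing $g$ above one relaxes the required depth, in line with the remark opening the next paragraph.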
For $g>1$ the situation becomes less dangerous. At the limiting depth of the throat, i.e. when we substitute $l=r_{\it min}$ from (\ref{17}) into (\ref{57}) (where formula (\ref{19}) for $ML$ is used), expression (\ref{57}) reads:
\begin{equation}
\label{58}
\frac{m}{M_{\rm Pl}}= 10^{-17}\, k^{13/2} \, g^{-16}.
\end{equation}
It is seen that (\ref{58}) gives the observed value of the mass scale hierarchy, $m/M_{\rm Pl}=10^{-16}$, for $g\approx 1$, $k\approx 1$. Of course, this numerical game should not be taken too seriously, since the RHS of (\ref{58}) depends strongly on the free parameters $k$, $g$. It is interesting, however, that the desired large value of the mass hierarchy may be obtained without introducing big numbers "by hand". The big number $10^{17}$ in (\ref{58}) appeared here "from nothing", i.e. from the coefficient 10.5 in (\ref{19}) (in the general case equal to $4(n-1)[\Delta/2(n-1)]^{1/(n-1)}$, see Eq. (26) of \cite{Alt06}; $\Delta$ is given in Eq. (9) of \cite{Alt06}) and from the exponent 16 in (\ref{17}) (in the general case equal to $\Delta/ \alpha^{2}$). Here we have $n=4$, $\Delta=4$, $\alpha = 1/2$. Physically, expression (\ref{58}) for the mass scale hierarchy follows from the "bold" hypothesis of \cite{Alt06}, \cite[c]{Altsh} that the SM resides at the brink of existence of the target space-time, i.e. that massive matter falls down to the very "bottom" of the throat and concentrates there, being stopped by unknown higher-curvature terms not included in the low-energy action (\ref{1}). In any case, the unambiguous result of the paper, independent of these speculations, is given by expression (\ref{57}) for the mass scale hierarchy.

\vspace{0.5cm}

{\large\it 5-b. Values of the acceleration rate and of the Dark Energy.}

\vspace{0.5cm}

In Subsection {\it 4-d}, expression (\ref{54}) for the auxiliary "Hubble constant" $\tilde h$ was deduced from the Israel junction conditions at the UV boundary of the throat.
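The sensitivity of (\ref{58}) to the order-one parameters $k$ and $g$ can be made explicit with a small numerical sketch (the function name and the sample values of $k$, $g$ are ours, for illustration only):

```python
# Sketch of (58):  m/M_Pl = 1e-17 * k**(13/2) * g**(-16).
# Both k and g are free parameters of order one, and the steep
# exponents make the prediction very sensitive to them.

def hierarchy_at_limiting_depth(k=1.0, g=1.0):
    return 1e-17 * k**6.5 * g**(-16.0)

print(hierarchy_at_limiting_depth())       # 1e-17 at k = g = 1
print(hierarchy_at_limiting_depth(k=1.4))  # ~9e-17: a modest shift in k
print(hierarchy_at_limiting_depth(g=1.2))  # suppressed by g**16 ~ 18
```

A shift of either parameter by a few tens of percent moves the predicted hierarchy by an order of magnitude or more, which is exactly the caveat stated in the text.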
To find the observed rate of acceleration of the Universe $h$ (equal to $10^{-60}M_{\rm Pl}$ according to the observations), it is necessary to perform the scale transformation (\ref{30}) taken at the point $\rho=r_{0}$ of the extremum of the radion potential (as was done for $m$ in expression (\ref{57}) above):
\begin{equation}
\label{59}
\frac{h}{M_{\rm Pl}}=\frac{\tilde h}{M} \, \frac{M}{\sqrt{\Phi(r_{0})}} = 10^{-5} \cdot g^{-4} \cdot \left(\frac{l}{L}\right)^{13/4}.
\end{equation}
The last equality follows from (\ref{54}), (\ref{50}), (\ref{19}). Finally, it is instructive to express the "acceleration hierarchy" $h/M_{\rm Pl}$ not through $l/L$ but through the value of the mass hierarchy $m/M_{\rm Pl}$ (\ref{57}):
\begin{equation}
\label{60}
\frac{h}{M_{\rm Pl}}=10^{11} \, g^{8} \left(\frac{m}{M_{\rm Pl}}\right)^{4}.
\end{equation}
The simple fourth-power dependence between the two hierarchies is a real gift after the many cumbersome exponents above. The Dark Energy $\rho_{D.E.}$ responsible for the acceleration of the Universe $h$ (\ref{60}) is equal to:
\begin{equation}
\label{61}
\rho_{D.E.}=\frac{1}{2} \, M_{\rm Pl}^{2}\, R^{(3+1)}=6h^{2}\, M_{\rm Pl}^{2}=6\cdot 10^{22}\, g^{16}\, \frac{m^{8}}{M_{\rm Pl}^{4}}.
\end{equation}
It is seen that in case the parameter $g=1$ ($g$ is defined in (\ref{20})), the observed value of the Dark Energy, $10^{-120}M_{\rm Pl}^{4}$, is obtained from (\ref{61}) for $m=10 \, GeV$. At the limiting depth of the throat, substitution of $r_{\it IR}=l=r_{\it min}$ (\ref{17}) into (\ref{59}) gives:
\begin{equation}
\label{62}
\frac{h}{M_{\rm Pl}}=10^{-57} \, g^{-56} \, k^{26}.
\end{equation}
The drawback of this expression, as of (\ref{58}) for the mass scale hierarchy, is the strong dependence of the RHS on the values of arbitrary parameters of order one.
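The mutual consistency of (\ref{57}), (\ref{59}) and (\ref{60}) is easy to verify numerically. The following sketch (with our own function names, and the prefactors taken at face value) eliminates $l/L$ at an arbitrary sample point and checks that the fourth-power relation is reproduced:

```python
# Check that eliminating l/L between
#   m/M_Pl = 1e-4 * g**(-3) * (l/L)**(13/16)   # (57)
#   h/M_Pl = 1e-5 * g**(-4) * (l/L)**(13/4)    # (59)
# reproduces  h/M_Pl = 1e11 * g**8 * (m/M_Pl)**4   # (60).

def m_over_MPl(x, g):          # x stands for l/L
    return 1e-4 * g**(-3) * x**(13.0 / 16.0)

def h_over_MPl(x, g):
    return 1e-5 * g**(-4) * x**(13.0 / 4.0)

def h_from_m(m, g):            # relation (60)
    return 1e11 * g**8 * m**4

x, g = 1e-8, 1.3               # arbitrary sample point
direct = h_over_MPl(x, g)
via_60 = h_from_m(m_over_MPl(x, g), g)
print(direct, via_60)          # the two routes agree
```

The same bookkeeping, squared and multiplied by $6 M_{\rm Pl}^{2}$, confirms the last equality $\rho_{D.E.}=6\cdot 10^{22}\,g^{16}\,m^{8}/M_{\rm Pl}^{4}$ in (\ref{61}).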
We emphasize, however, that the possibly main result of the paper, $\rho_{D.E.} \sim G_{N}^{2}m^{8}$ (\ref{61}) ($G_{N}=M_{\rm Pl}^{-2}$ is Newton's constant), does not depend on the "bold" hypothesis described at the end of the previous subsection.

\vspace{0.5cm}

{\large\it 5-c. $\rho_{D.E.}$ as the value of the radion potential at its extremum.}

\vspace{0.5cm}

In Sec. 3 the exact analytical form of the radion effective potential was obtained for the non-deformed background solution; it was shown that the potential possesses a zero minimum at the value of the radion field where all dynamical equations, including the junction conditions (\ref{11})-(\ref{14}), are fulfilled. To repeat the same calculations for the deformed background (\ref{42}) is not a simple task, especially since we do not know the exact bulk solution when $F_{(2)}\ne 0$ in the ansatz (\ref{4}) and when the curvature of the manifold $M_{(3+1)}$ is not equal to zero. It is possible to show, however, that the deformation of the background results in a tiny shift of the value of the potential $\mu^{4}\,V_{\it extr}$ (\ref{36}) at its extremum from zero to a value equal to the Dark Energy $\rho_{D.E.}$ (\ref{61}). The tool for the calculation of $\mu^{4}\,V_{\it extr}$ is the general consistency condition of paper \cite{Leblond}, which is valid at the solution of the dynamical equations.
In our case, expression (12) of \cite{Leblond}, written with the parameter $\alpha_{[49]}$ of that paper taken equal to $p=3$, gives:
\begin{equation}
\label{63}
\oint b^{4}~(T^{m}_{m}-T^{\mu}_{\mu})=-2 \oint b^{2}{\widetilde R}^{(3+1)},
\end{equation}
here $T^{m}_{m}$, $T^{\mu}_{\mu}$ are the traces of the energy-momentum tensor of the matter fields ($F_{(4)}$, $F_{(2)}$, $\varphi$) in (\ref{1}) in the internal subspaces and in 4 dimensions respectively; $b$ is the warp factor in the metric (\ref{4}) ($W$ in \cite{Leblond}); $\oint$ symbolizes the integration over the compact internal space, which is deciphered in the LHS of (\ref{25}), where the upper limit of integration $\rho$ is to be taken at the value $\rho=r_{0}$ (\ref{18}) determined by the dynamical jump conditions (\ref{11})-(\ref{14}). ${\widetilde R}^{(3+1)}$ is the curvature of the manifold $M_{(3+1)}$ of space-time (\ref{4}), which is equal to zero for the non-deformed space-time (\ref{15}) and equal to $12{\tilde h}^{2}$ when the deformed metric (\ref{42}) is considered. Also, as was shown in the Appendix of \cite{Alt06}, in case the form-fields of the action (\ref{1}) "live" only in the internal space, the combination of the components of the energy-momentum tensor in the LHS of (\ref{63}) is proportional to the Lagrangian $\rm L$ of the action (\ref{1}) calculated at the solution of the dynamical equations:
\begin{equation}
\label{64}
T^{m}_{m}-T^{\mu}_{\mu}=-4{\rm L}.
\end{equation}
Hence from (\ref{63}), (\ref{64}) it follows that at $\rho=r_{0}$ in (\ref{25}):
\begin{equation}
\label{65}
\oint b^{4} {\rm L}=\frac{1}{2}~\Phi_{\it extr}~{\widetilde R}^{(3+1)}.
\end{equation}
We took into account here that $\oint b^{2}=\Phi_{\it extr}$, the value of the Brans-Dicke field in (\ref{25}) at the point of extremum of the potential.
According to definition (\ref{25}), the effective action in 4 dimensions at the point of extremum, where the radion field is constant, is equal to:
\begin{equation}
\label{66}
\oint b^{4} {\rm L}=\Phi_{\it extr}{\widetilde R}^{(3+1)}-{\widetilde V}_{\it extr}={\widetilde V}_{\it extr},
\end{equation}
the last equality is valid because the value of the action is calculated at the solution of the Einstein equations in 4 dimensions. Thus, finally, from (\ref{65}), (\ref{66}) it follows that ${\widetilde V}_{\it extr}=\frac{1}{2}\,\Phi_{\it extr}\,{\widetilde R}^{(3+1)}$. The same is true for the extremal value of the potential in the Einstein-frame action (\ref{31}):
\begin{equation}
\label{67}
\mu^{4}\,V_{\it extr}=\frac{1}{2}\,M_{\rm Pl}^{2}\,R^{(3+1)}=6h^{2}\,M_{\rm Pl}^{2}=\rho_{D.E.},
\end{equation}
for $\rho_{D.E.}$ see (\ref{61}). This rather strong result does not depend on the details of the solution and follows from the consistency condition (\ref{63}), provided the proportionality (\ref{64}) is fulfilled. To check (\ref{64}) is a simple task, whereas (\ref{63}) was obtained in \cite{Leblond} after a certain integral of a full divergence over the compact internal space was put equal to zero. At this point special caution is demanded, as was noted in \cite{Leblond} as well. It would be important to verify by direct calculation the consistency condition (\ref{63}) for a space-time of type (\ref{42}) with the "bolt" point (where $U=0$) topologically equivalent to the 2-sphere.

\section{Conclusion}

\qquad The paper presents three apparently physically interesting results:

1) The exact expression (\ref{36}) for the scalar field potential $\mu^{4}\,V(\psi)$ in the effective action (\ref{31}), calculated with the non-deformed fluxbrane solution as a background. The asymptotic form (\ref{38}) of the potential describes slow-roll inflation; the potential possesses a steep slope for reheating and a zero minimum where the matter-dominated evolution of the Universe begins.
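For clarity, the chain of equalities behind (\ref{67}) can be spelled out (a bookkeeping sketch only, using ${\widetilde R}^{(3+1)}=12{\tilde h}^{2}$ together with (\ref{65}) and (\ref{66})):

```latex
\begin{equation*}
{\widetilde V}_{\it extr}
= \oint b^{4}\,{\rm L}
= \frac{1}{2}\,\Phi_{\it extr}\,{\widetilde R}^{(3+1)}
= 6\,\Phi_{\it extr}\,{\tilde h}^{2}.
\end{equation*}
```

After the rescaling (\ref{30}) to the Einstein frame, with $\Phi_{\it extr}$ replaced by $M_{\rm Pl}^{2}$ and ${\tilde h}$ by $h$, this reproduces $\mu^{4}\,V_{\it extr}=6h^{2}M_{\rm Pl}^{2}=\rho_{D.E.}$ in (\ref{67}).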
2) Formula (\ref{67}) for the tiny positive deviation (seen today as the Dark Energy $\rho_{D.E.}$ (\ref{61})) of the extremal value of the radion effective potential calculated for the "deformed" background.

3) Expressions (\ref{57}) for the mass scale hierarchy $m/M_{\rm Pl}$ and (\ref{59}) for the "acceleration hierarchy" $h/M_{\rm Pl}$, and their relation (\ref{60}), which gives the non-trivial dependence $\rho_{D.E.} \sim G_{N}^{2}m^{8}$ (\ref{61}), where $G_{N}$ is Newton's constant. This dependence is a progress as compared to the Zeldovich "numerology", where $\rho_{D.E.} \sim G_{N}m^{6}$ \cite{Zeldovich}.

Also it was demonstrated that, under the natural additional hypothesis that the SM resides at the border of space-time where the low-energy string approximation ceases to be valid, big numbers ($10^{17}$, $10^{57}$ in (\ref{58}), (\ref{62})) may be obtained "from nothing", i.e. from the dimensionalities $D=10$, $n=4$ and the value of the 4-form-dilaton coupling constant $\alpha=1/2$.

All quantitative results of the paper depend solely on the choice of the theory, the Type IIA supergravity in this paper. It would be interesting to trace the logic of the paper for some other theories.

Since the canonical radion field $\psi$ is associated with the position $\rho (x)$ of the UV boundary terminating the throat (see (\ref{22}), (\ref{32})), it would be interesting to find a description of the non-trivial effective dynamics in 4 dimensions, given by the action (\ref{31}) with the potential (\ref{36}), in the language of a heavy boundary moving in a higher-dimensional non-stationary background formed with account of the gravitational back-reaction of this $Z_{2}$-symmetric co-dimension-one local source.

The basic difficulty of the approach of the paper is the lack of physical grounds for the very appearance of the UV boundary surface of the throat and for the choice of its dynamics. The simplest Nambu-Goto choice taken in the action (\ref{1}) is crucial for the calculations of the paper.
But "simplest" does not mean "well grounded".

\section*{Acknowledgements}

The author is grateful to M.Z. Iofa and to the participants of the Quantum Field Theory Seminar of the Theoretical Physics Department, Lebedev Physical Institute, for numerous discussions. This work was partially supported by the grant LSS-4401.2006.2.
Long battle ahead to curb fake news

Facebook headquarters in Menlo Park, California: the tech giant has announced plans to limit the spread of fake news, but whether it'll succeed remains to be seen. Keystone

Swiss and European researchers are working on algorithms to detect misinformation circulating on social media but caution that training machines to do the work is no easy task.

This content was published on December 21, 2016 - 11:00

Geraldine Wong Sak Hoi: A stickler for detail, Geraldine first arrived at swissinfo.ch in 2014 to study rumours on social media as part of a collaborative research project known as Pheme. She now coordinates the Fact Checks by swissinfo.ch dossier covering (mis)statements about Switzerland, and continues to follow the trail of online misinformation.

Misinformation hit international headlines in 2016, peaking with accusations that fake news on Facebook helped win Donald Trump the White House. After initially denying that false information had had an influence on voters, the world's most popular social network began testing measures to limit the spread of hoaxes on its site. From giants like Google to solitary tech nerds, others are also springing into action. Yet those who began studying the growth of misinformation well before the unexpected results of the American presidential election brought the problem to the fore caution that experts face an uphill battle against fake news.
"It's a race between machines and people (fabricating information) for fun, political agenda or money," says Kalina Bontcheva, a professor at the University of Sheffield in the United Kingdom. The work in this area by computer scientists like Bontcheva and news organisations, including swissinfo.ch, reveals just how difficult it is to actually limit the spread of lies and distortions on social media.

Detecting false information

CEO Mark Zuckerberg announced a plan for curbing the spread of fake news on Facebook that includes "stronger detection … to improve our ability to classify misinformation." Bontcheva likens technology that can do this to email spam filters. But its powers would likely be limited.

Fake news made in Switzerland

Fake news sites have cropped up in Switzerland but they are few in number, and Linards Udris says their following and reach are also limited. One possible reason for this is the size of the country. "For those who would want to make money (from fake news), it wouldn't be possible" here, given the relatively small domestic market for news, says the media researcher at the University of Zurich. Another likely factor, he says, is the comparatively low level of polarisation in Swiss politics, as hyper-partisanship is a characteristic of many fake news sites, particularly in the US. Still, Udris cautions that polarisation is growing in Switzerland, and as more people get their news from social media, experts will need to keep a close eye on how the fake news landscape evolves.

"Fake news sites popping up for monetising purposes are easy to detect," Bontcheva says. "The more difficult ones are the claims with hidden agendas, because they're a lot more subtle" and therefore harder for machines to detect, she adds. A research project she leads is trying to address this challenge.
Named Pheme and funded by the European Commission, the project brings together IT experts, universities and swissinfo.ch to devise technologies that could help journalists find and verify online claims. "We're trying to use a lot of past rumours as training data for machine learning algorithms," Bontcheva explains. "We're training models to spot the opinions of users about a claim, and based on that pick out how likely something is to be true or false."

Machines are learning, albeit slowly

It may sound straightforward, but training machines to give a clear indication of whether a text is credible or not is a complex task. Scientists must combine approaches, mining both the history of social networks and the content of individual posts to pick out patterns for credible and questionable content alike, says data scientist Pierre Vandergheynst. "No one has cracked this nut yet," says the professor at the Federal Institute of Technology Lausanne (EPFL), who studies how information evolves on platforms like Wikipedia. "You can read a text and decide if you should trust it, but a machine does not have the cognitive reasoning to do this."

Bontcheva admits the development of this technology is still in its early stages. "It has been three years of experimentation and it's still far from the level of reliability that we need." But she believes Pheme researchers have moved things forward since the project began. "The technology is getting better and we've pushed the state of the art," she says, adding that project partners have also contributed a large amount of data to the field. "When we started, there weren't many social media rumours (to use as training data)." Indeed, researchers often run into the problem of a lack of access to data held by Facebook and other social networks. But the volume of information these companies have to contend with is also an issue for the tech giants themselves, says Bontcheva.
It means they must develop systems that can find suspicious content in the enormous volume of posts users share every day.

Not all tools are created equal

In addition to Facebook and Google, which have both announced plans to curb fake news on their sites, tech-savvy users are also trying their hand at fighting online misinformation. Among the proposed solutions that cropped up as fake news became big news in late 2016 is a tool cheekily named "BS Detector", developed by a technologist in the United States. Daniel Sieradski told the media that he created the web browser plug-in, which detects and flags "questionable" news sources on the basis of a list of fake news sites, "in about an hour". This method sounds similar to a spam system, says EPFL data scientist Pierre Vandergheynst. And it has its weaknesses. "You'd need to have a list of all the potential fake news sites" out there for the plug-in to be effective, he says. Even then, it would fail to detect rumours started by social media users with no affiliation to such sites that then get picked up by mainstream news outlets.

Another issue is how to maintain users' trust in a system that decides which posts contain false information. "Tech companies need to be totally transparent about how they decide what makes a fake news site," says Linards Udris, a Swiss media expert at the University of Zurich. Bontcheva agrees. To avoid accusations of censorship, she says Facebook could give users the option of seeing questionable content in a separate feed, similar to how email inboxes contain a spam folder that people can open at will. Facebook is taking a different tack, piloting a system for flagging "disputed" stories and warning users as they share these items. The risk of censorship also limits the possibility for states to restrict information.
Udris sees little point in introducing new legislation, pointing out that current libel laws – at least in Switzerland – are one way to deal with cases of false, incendiary claims targeting specific persons or groups. But governments could focus their attention elsewhere. "Tech companies have few commercial incentives" to limit fake news, says Udris, deputy director of the Research Institute for the Public Sphere and Society. When such stories go viral, they help to generate revenue for social platforms. So the state could offer tax breaks, for example, to those firms that take steps against misinformation.

The human factor

Other actors need to get involved. Facebook is testing ways for users and third parties, including fact-checking organisations, to help identify misleading posts. But journalists too must be part of the solution. "The problem is when legitimate (news) websites pick up false information and spread it," says Pierre Vandergheynst. "At that moment, it's given the seal of authenticity. That cycle has to be broken." With media outlets facing resource cuts to stay afloat, Udris wants to see "a wider debate about how good journalism can be fostered in society." Public broadcasting is critical, he adds. "It's one important pillar where people get high-quality, diverse, verified information."

The onus is also on online users to become more discriminating news consumers. Udris points to studies showing that less than half of people surveyed who get their news on social media pay attention to the source of the information they're reading. "Critical thinking is needed," he says, suggesting there is a need for stronger media education for youth who, according to a recent study by the Reuters Institute, are more likely than other age groups to consume news primarily on social media. He also believes that paying for online news can help people to make more critical choices about which outlets to turn to.
Yet, even with efforts from all sectors, the spread of misinformation cannot be stopped completely, and Udris says not to expect short-term miracles. "Rumours are part of human nature," he points out. It's a thought echoed by Pierre Vandergheynst. "In the end, the web did not invent conspiracy theories," says the EPFL researcher. "It just made them spread quicker, because instead of the local pub you hear them on Facebook."
London Art Fair 2014 was successful

David Franchi – Thursday, 22nd January 2014

Barbara Hepworth, Kneeling Figure (1932), rosewood. Courtesy of Wakefield Permanent Art Collection © Bowness, Hepworth Estate / Photograph: Norman Taylor

The crowded London Art Fair 2014 was successful. At its 26th edition, the London Art Fair confirms itself as the leading event for Modern British and Contemporary art in the UK. You can get lost in the Business Design Centre, in Angel, Islington: there is so much to see and attend. London Art Fair 2014 hosted 128 galleries from all over the world. It is an encouraging space for all kinds of collectors, with prices for all budgets. It is an excellent place for vendors as well, drawing business from emerging galleries to established Mayfair ones. The programme presented by London Art Fair 2014 was appealing, with curated exhibitions, talks, tours, films and performances. The galleries at the London Art Fair 2014 showed pieces by talented new artists together with more established ones, including Miró, Picasso, Damien Hirst, Francis Bacon, David Hockney, Peter Blake, Lichtenstein, Eduardo Paolozzi, Alan Davie and Prunella Clough, just to name a few.

New features for 2014 included the Fair's first museum partnership, which saw The Hepworth Wakefield present a unique exhibition of British Modernism, and a new series of international 'Dialogues', curated by Adam Carr of MOSTYN to mark the 10th edition of the critically acclaimed Art Projects section. Photo50, the Fair's annual showcase of contemporary photography, is this year entitled 'Immaterial Matter' and curated by Charlie Fellowes and Jeremy Epstein, Directors of Edel Assanti. The Hepworth Wakefield exhibition, 'Barbara Hepworth and the Development of British Modernism', sponsored by Hiscox, was curated by Frances Guy, Head of Collection and Exhibitions at The Hepworth Wakefield.
It focused on how the museum's enlightened commitment to the work of young contemporary artists in the 1920s and 1930s led to the preservation of this key moment in British art history.

Curated by Adam Carr, 'Dialogues' is a new initiative for 2014 featuring collaborative presentations between invited UK and international galleries. With many of these galleries and artists working together for the first time, the section promises a unique exhibition of critical conversations around shared ideas or a common aesthetic. Currently curator at MOSTYN, Wales, Adam Carr has previously been guest curator for Castello di Rivoli Museum of Contemporary Art, Turin, and Kadist Art Foundation, Paris. The 'Dialogues' galleries are: DREI, Cologne / Limoncello, London; Galeria Stereo, Warsaw / The Sunday Painter, London; SABOT, Romania / Maria Stenfors, London; Frutta, Rome / Seventeen, London.

New galleries for 2014 included Brooklyn's Muriel Guépin Gallery and Paris-based UN-SPACED, with a solo presentation by Éric Tabuchi. (Image: Sadamasa Motonaga, 'Small Work', 1961, oil and synthetic resin on panel. Courtesy of Whitestone Gallery.) Galerie E.G.P, also from Paris, showcased two artists – Oliver Bragg and Nicholas Portalupi – whose work shares an interest in fictional and parallel universes. Further highlights included group shows by BEARSPACE, Ceri Hand Gallery, dalla Rosa Gallery, THE RESIDENCE GALLERY and Vane. Art Projects also hosted the launch of The Catlin Guide 2014, featuring the cream of art school graduates from around the UK; and solo shows from British rising stars such as Nicole Morris, who has recently been nominated for the Max Mara Prize for female artists, at Space In Between, and Alison Erika Forde at The International 3.

Photo50 is the annual guest-curated exhibition of contemporary photography. Entitled 'Immaterial Matter', the 2014 exhibition was curated by Charlie Fellowes and Jeremy Epstein, Directors of Edel Assanti.
Each year, Photo50 provides a critical showcase of some of the most interesting and distinctive elements of current photographic practice. 'Immaterial Matter' examines the increasingly indiscernible distinction between the digital and the material. The 50 artworks selected investigated our understanding of these two classifications, and to what extent they effectively delineate our world and our fields of experience in the information age.

At the London Art Fair 2014

An extensive programme of tours, talks and critical debates took place throughout the week in association with key partners such as Sotheby's Institute of Art, Iniva, PhotoVoice, Apollo magazine, Own Art and Vital Arts. Sponsors and partners for the London Art Fair 2014 included Air Partner, Infiniti, Peroni Nastro Azzurro, Lund Humphries, Hiscox, Natuzzi, Expofreight and Drummond Read Recruitment. London Art Fair was at the Business Design Centre, Islington, from 15th to 19th January 2014.

This entry was posted on January 23, 2014 by London Art Reviews in Other Art Events, Reviews and tagged Art Fair, London art exhibitions, London Art Fair, London Art Reviews, Photography. https://wp.me/p2MlEe-rD
What is considered high risk truck insurance?

An insurance underwriter looks at several factors when considering a commercial truck for insurance. Your CSA score, loss history, type of cargo, and driver experience are all taken into consideration when evaluating a risk. What is bobtail insurance? What are the required limits of liability? Enjoy this comprehensive insider's guide to commercial truck insurance.
The Sûre or Sauer is a tributary of the Moselle that flows through Belgium, Luxembourg and Germany. It rises near Vaux-sur-Sûre in the Ardennes, in southeastern Belgium, and crosses the border into Luxembourg near Martelange. West of Esch-sur-Sûre it passes through an artificial lake, the Upper Sûre Lake, which gives its name to the Luxembourgish commune of Lac de la Haute-Sûre. It flows through Ettelbruck and Diekirch, and forms the Germany-Luxembourg border along the last 50 km of its course, passing Echternach before joining the Moselle at Wasserbillig.

External links
Obersauer NaturPark

Categories: Rivers of Luxembourg; Rivers of Belgium; Rivers of Germany; International rivers of Europe; Germany-Luxembourg border; Belgium-Luxembourg border; Border rivers of Germany; Border rivers of Belgium; Border rivers of Luxembourg
De Troostembergh is a Southern Netherlandish noble family.

History
De Troostembergh was a notable family from the Leuven area, with archives going back to the fourteenth century. François-Maximilien de Troostembergh (1727-1788) and Marie-Thérèse de la Hamaide (1731-1820) were the parents of the three brothers below.

Isidore de Troostembergh
Isidore Maximilien Martin de Troostembergh (Leuven, 1 August 1765 - 23 December 1830), a member of the Leuven city council, married Anne-Marie de Spoelberch (1767-1834) in 1794. They had an only daughter, who died as a child. He was recognised in the hereditary nobility in 1822.

Jérôme de Troostembergh
Jérôme Charles Joseph de Troostembergh (1766-1823) was recognised in the hereditary nobility on 29 October 1822. He died on 8 February 1823 without having taken up the letters patent. He remained unmarried.

Joseph de Troostembergh
Joseph Norbert Jean de Troostembergh (Leuven, 30 June 1770 - 31 March 1830) was recognised in the hereditary nobility in 1822 together with his two brothers. He married Anne-Marie Everaerts (1773-1855).

Guillaume Joseph Lucien de Troostembergh (1810-1885) married Adèle de Ryckman de Betz (1812-1848), with whom he had three children. He remarried Constance de Moreau (1822-1873).

Lucien de Troostembergh (1838-1906), doctor of law, became mayor of Houwaart. He married Ghislaine de Dieudonné (1837-1867), daughter of the mayor of Korbeek-Lo. They had five children.

Maximilien de Troostembergh (1861-1925) married Anna Wouters (1860-1943). The couple remained childless. He was mayor of Houwaart and director of the Annuaire de la noblesse belge. In 1911 he received the personal title of baron.

Louis de Troostembergh (1862-1890) married Caroline de Montpellier (1866-1942), daughter of Charles de Montpellier, member of parliament and governor of Namur.
Jean-Marie de Troostembergh de Troostembergh (1888-1964) trouwde met gravin Ghislaine d'Aspremont Lynden (1887-1952) en ze kregen twee zoons. Jean werd geadopteerd door zijn oom Maximilien en kreeg in 1919 de titel baron, overdraagbaar bij eerstgeboorte. Hij werd kanselier van de Belgische vereniging van ridders van de Orde van Malta. Hij was een van de negen stichters van de Vereniging van de Belgische adel in 1936. Maxime de Troostembergh (1919-2012) trouwde met Marie-Emilie Blondeau (1922-2014). Ze kregen zeven kinderen, met afstammelingen tot heden. Charles de Troostembergh de Troostembergh (1923-1994), trouwde met Hélène de Hemptinne (1929- ). Ze kregen eveneens zeven kinderen, met afstammelingen tot heden. In 1956 kreeg Charles de baronstitel, overdraagbaar bij eerstgeboorte. Het kasteel Cleerbeek in Houwaart is sinds 1807 eigendom van de familie de Troostembergh. In 2010-2011 werd het, nog steeds eigendom en bewoond door kinderen van Charles de Troostembergh de Troostembergh, grondig gerenoveerd. Literatuur Généalogie Troostembergh, in: Annuaire de la noblesse de Belgique, Brussel, 1875. E. LEJOUR, Inventaire des archives de la famille De Troostembergh, Rijksarchief, Brussel, 1949. Oscar COOMANS DE BRACHÈNE, État présent de la noblesse belge'', Annuaire 1999, Brussel, 1999. Zuid-Nederlands adellijk huis (voor 1830)
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,700
{"url":"https:\/\/www.mathway.com\/examples\/algebra\/functions\/determine-if-surjective-onto?id=647","text":"# Algebra Examples\n\nDetermine if Surjective (Onto)\nFunction is said to be a surjection or onto if every element in the range is an image of at least one element of the domain. This means the range of must be all real numbers for the function to be surjective. If the range is not all real numbers, it means that there are elements in the range which are not images for any element from the domain.\nRange should be all real numbers\nThe range is the set of all valid values. Use the graph to find the range.\nA function is said to be surjective or onto if every element of the range is an image of at least one element from the domain.\nSurjective (Onto)\n\nWe're sorry, we were unable to process your request at this time\n\nStep-by-step work + explanations\n\u2022 \u00a0\u00a0\u00a0Step-by-step work\n\u2022 \u00a0\u00a0\u00a0Detailed explanations\n\u2022 \u00a0\u00a0\u00a0Access anywhere\nAccess the steps on both the Mathway website and mobile apps\n$--.--\/month$--.--\/year (--%)","date":"2018-02-21 23:05:37","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5186722278594971, \"perplexity\": 477.80840672254004}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": 
true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-09\/segments\/1518891813818.15\/warc\/CC-MAIN-20180221222354-20180222002354-00572.warc.gz\"}"}
null
null
Q: how to call function in module from block_view I am writing a Drupal 7 module that queries my db and prints out a view_block with a list of projects belonging to the current logged user: ID ClientName ProjectName ProjectDescription 1 mickey toon1 bla bla bla 2 mouse toon2 bla bla bla At this point, I need the user to select one of his projects to proceed to visualization. EDIT: what I need to do is to call another function in my module and pass it the ID for the project selected by the current user. I know how to add a link to every row in the table and bring the ID value with me to the next page, using GET['id'] to retrieve it, but I really don't want to change the url for security reasons - namely, I don't want users to change the ID in the url and see other people's projects, nor prevent it to happen. I prefer to keep url clean as much as possible. What is in Drupal 7 the right logic to allow the user select one project and then call another function in my module to load data according to the selected ID? A: I would implement an hook_menu. With this solution you need to pass the Id of the project in the url, but an hook menu allow you to define 2 functions. The access function will check if the user has access to the project, the callback function will retrieve the project and show the project details. This is a simple example of how I would implement it: /** * Implements hook_menu(). */ function my_module_menu() { $items['my_module/show_project_detail/%'] = array( 'title' => 'Show Task Detail', 'access callback' => 'show_project_access', 'access arguments' => array(1), 'title callback' => false, 'page callback' => 'show_project', 'page arguments' => array(1), ); return $items; } function show_project_access( $sProjectId ){ // checks if the user has access to $sProjectId. 
// returns true if he has access, otherwise returns false } function show_project( $sProjectId ){ //returns the html of the task's detail view } Then to call the hook_menu you will call this url https://my_drupal/my_module/show_project_detail/123 where 123 is the project Id.
{ "redpajama_set_name": "RedPajamaStackExchange" }
376
Sting will perform with the Utah Symphony on Aug. 31 in a concert that will benefit Zion National Park. To write the play, presented by Pioneer Theater Company in Salt Lake City in 2016 after its 2014-15 run on Broadway, Sting drew upon the shipyards and characters of his hometown of Wallsend in northeastern England. He attended a performance in Salt Lake City in September 2016. Next year, the news release said, he will star as shipyard foreman Jackie White in a Toronto-based production of the play at the Princess of Wales Theatre.
{ "redpajama_set_name": "RedPajamaC4" }
2,149
package org.opensaml.xml.signature; import org.opensaml.xml.XMLObject; import org.opensaml.xml.XMLObjectBuilder; /** * Builder for XMLObjects from {@link org.opensaml.xml.signature} * * @param <XMLSignatureType> the type of XMLObject being built */ public interface XMLSignatureBuilder<XMLSignatureType extends XMLObject> extends XMLObjectBuilder<XMLSignatureType> { /** * Builds an XMLObject using the default name and namespace information provided XML Signature specifications. * * @return built XMLObject */ public XMLSignatureType buildObject(); }
{ "redpajama_set_name": "RedPajamaGithub" }
3,038
{"url":"https:\/\/na.mathematicstip.com\/5433-118e-exercises.html","text":"# 11.8E: Exercises\n\nWe are searching data for your request:\n\nForums and discussions:\nManuals and reference books:\nData from registers:\nWait the end of the search in all databases.\nUpon completion, a link will appear to access the found materials.\n\n### Practice Makes Perfect\n\nExercise (PageIndex{23}) Graph Quadratic Functions of the Form (f(x)=x^{2}=k)\n\nIn the following exercises,\n\n1. Graph the quadratic functions on the same rectangular coordinate system\n2. Describe what effect adding a constant, (k), to the function has on the basic parabola.\n1. (f(x)=x^{2}, g(x)=x^{2}+4, ext { and } h(x)=x^{2}-4)\n2. (f(x)=x^{2}, g(x)=x^{2}+7, ext { and } h(x)=x^{2}-7)\n\n1.\n\n1. Figure 9.7.71\n2. The graph of (g(x)=x^{2}+4) is the same as the graph of (f(x)=x^{2}) but shifted up (4) units. The graph of (h(x)=x^{2}-4) is the same as the graph of (f(x)=x^{2}) but shift down (4) units.\n\nExercise (PageIndex{24}) Graph Quadratic Functions of the Form (f(x)=x^{2}=k)\n\nIn the following exercises, graph each function using a vertical shift.\n\n1. (f(x)=x^{2}+3)\n2. (f(x)=x^{2}-7)\n3. (g(x)=x^{2}+2)\n4. (g(x)=x^{2}+5)\n5. (h(x)=x^{2}-4)\n6. (h(x)=x^{2}-5)\n\n1.\n\n3.\n\n5.\n\nExercise (PageIndex{25}) Graph Quadratic Functions of the Form (f(x)=(x-h)^{2})\n\nIn the following exercises,\n\n1. Graph the quadratic functions on the same rectangular coordinate system\n2. Describe what effect adding a constant, (h), inside the parentheses has\n1. (f(x)=x^{2}, g(x)=(x-3)^{2}, ext { and } h(x)=(x+3)^{2})\n2. (f(x)=x^{2}, g(x)=(x+4)^{2}, ext { and } h(x)=(x-4)^{2})\n\n1.\n\n1. Figure 9.7.75\n2. The graph of (g(x)=(x\u22123)^{2}) is the same as the graph of (f(x)=x^{2}) but shifted right (3) units. 
The graph of (h(x)=(x+3)^{2}) is the same as the graph of (f(x)=x^{2}) but shifted left (3) units.\n\nExercise (PageIndex{26}) Graph Quadratic Functions of the Form (f(x)=(x-h)^{2})\n\nIn the following exercises, graph each function using a horizontal shift.\n\n1. (f(x)=(x-2)^{2})\n2. (f(x)=(x-1)^{2})\n3. (f(x)=(x+5)^{2})\n4. (f(x)=(x+3)^{2})\n5. (f(x)=(x-5)^{2})\n6. (f(x)=(x+2)^{2})\n\n1.\n\n3.\n\n5.\n\nExercise (PageIndex{27}) Graph Quadratic Functions of the Form (f(x)=(x-h)^{2})\n\nIn the following exercises, graph each function using transformations.\n\n1. (f(x)=(x+2)^{2}+1)\n2. (f(x)=(x+4)^{2}+2)\n3. (f(x)=(x-1)^{2}+5)\n4. (f(x)=(x-3)^{2}+4)\n5. (f(x)=(x+3)^{2}-1)\n6. (f(x)=(x+5)^{2}-2)\n7. (f(x)=(x-4)^{2}-3)\n8. (f(x)=(x-6)^{2}-2)\n\n1.\n\n3.\n\n5.\n\n7.\n\nExercise (PageIndex{28}) Graph Quadratic Functions of the Form (f(x)=ax^{2})\n\nIn the following exercises, graph each function.\n\n1. (f(x)=-2 x^{2})\n2. (f(x)=4 x^{2})\n3. (f(x)=-4 x^{2})\n4. (f(x)=-x^{2})\n5. (f(x)=frac{1}{2} x^{2})\n6. (f(x)=frac{1}{3} x^{2})\n7. (f(x)=frac{1}{4} x^{2})\n8. (f(x)=-frac{1}{2} x^{2})\n\n1.\n\n3.\n\n5.\n\n7.\n\nExercise (PageIndex{29}) Graph Quadratic Functions Using Transformations\n\nIn the following exercises, rewrite each function in the (f(x)=a(x\u2212h)^{2}+k) form by completing the square.\n\n1. (f(x)=-3 x^{2}-12 x-5)\n2. (f(x)=2 x^{2}-12 x+7)\n3. (f(x)=3 x^{2}+6 x-1)\n4. (f(x)=-4 x^{2}-16 x-9)\n\n1. (f(x)=-3(x+2)^{2}+7)\n\n3. (f(x)=3(x+1)^{2}-4)\n\nExercise (PageIndex{30}) Graph Quadratic Functions Using Transformations\n\nIn the following exercises,\n\n1. Rewrite each function in (f(x)=a(x\u2212h)^{2}+k) form\n2. Graph it by using transformations\n1. (f(x)=x^{2}+6 x+5)\n2. ((x)=x^{2}+4 x-12)\n3. (f(x)=x^{2}+4 x-12)\n4. (f(x)=x^{2}-6 x+8)\n5. (f(x)=x^{2}-6 x+15)\n6. (f(x)=x^{2}+8 x+10)\n7. (f(x)=-x^{2}+8 x-16)\n8. (f(x)=-x^{2}+2 x-7)\n9. (f(x)=-x^{2}-4 x+2)\n10. (f(x)=-x^{2}+4 x-5)\n11. (f(x)=5 x^{2}-10 x+8)\n12. (f(x)=3 x^{2}+18 x+20)\n13. 
(f(x)=2 x^{2}-4 x+1)\n14. (f(x)=3 x^{2}-6 x-1)\n15. (f(x)=-2 x^{2}+8 x-10)\n16. (f(x)=-3 x^{2}+6 x+1)\n\n1.\n\n1. f(x)=(x+3)^{2}-4\n\n3.\n\n1. (f(x)=(x+2)^{2}-1)\n\n5.\n\n1. (f(x)=(x-3)^{2}+6)\n\n7.\n\n1. (f(x)=-(x-4)^{2}+0)\n\n9.\n\n1. (f(x)=-(x+2)^{2}+6)\n\n11.\n\n1. (f(x)=5(x-1)^{2}+3)\n\n13.\n\n1. (f(x)=2(x-1)^{2}-1)\n\n15.\n\n1. (f(x)=-2(x-2)^{2}-2)\n\nExercise (PageIndex{31}) Graph Quadratic Functions Using Transformations\n\nIn the following exercises,\n\n1. Rewrite each function in (f(x)=a(x\u2212h)^{2}+k) form\n2. Graph it using properties\n1. (f(x)=2 x^{2}+4 x+6)\n2. (f(x)=3 x^{2}-12 x+7)\n3. (f(x)=-x^{2}+2 x-4)\n4. (f(x)=-2 x^{2}-4 x-5)\n\n1.\n\n1. (f(x)=2(x+1)^{2}+4)\n\n3.\n\n1. (f(x)=-(x-1)^{2}-3)\n\nExercise (PageIndex{32}) Matching\n\nIn the following exercises, match the graphs to one of the following functions:\n\n1. (f(x)=x^{2}+4)\n2. (f(x)=x^{2}-4)\n3. (f(x)=(x+4)^{2})\n4. (f(x)=(x-4)^{2})\n5. (f(x)=(x+4)^{2}-4)\n6. (f(x)=(x+4)^{2}+4)\n7. (f(x)=(x-4)^{2}-4)\n8. (f(x)=(x-4)^{2}+4)\n\n1. Figure 9.7.97\n\n2. Figure 9.7.98\n\n3. Figure 9.7.99\n\n4. Figure 9.7.100\n\n5. Figure 9.7.101\n\n6. Figure 9.7.102\n\n7. Figure 9.7.103\n\n8. Figure 9.7.104\n\n1. c\n\n3. e\n\n5. d\n\n7. g\n\nExercise (PageIndex{33}) Find a Quadratic Function from its Graph\n\nIn the following exercises, write the quadratic function in (f(x)=a(x\u2212h)^{2}+k) form whose graph is shown.\n\n1. Figure 9.7.105\n\n2. Figure 9.7.106\n\n3. Figure 9.7.107\n\n4. Figure 9.7.108\n\n1. (f(x)=(x+1)^{2}-5)\n\n3. (f(x)=2(x-1)^{2}-3)\n\nExercise (PageIndex{34}) Writing Exercise\n\n1. Graph the quadratic function (f(x)=x^{2}+4x+5) first using the properties as we did in the last section and then graph it using transformations. Which method do you prefer? Why?\n2. Graph the quadratic function (f(x)=2x^{2}\u22124x\u22123) first using the properties as we did in the last section and then graph it using transformations. Which method do you prefer? Why?\n\n## Self Check\n\na. 
After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section.\n\nb. After looking at the checklist, do you think you are well-prepared for the next section? Why or why not?\n\n## HOW TO SPEAK ENGLISH FAST\n\n11.1 Session Written text 5:1-21 (Acts). Do it again each phrase.\n\nBut there was a man, (But there was a man,) \/ who with his partner, (who with his partner,) \/ marketed some residence that belonged to them. (sold some residence that belonged to them.) \/ But there was a man, who with his partner, marketed some residence that belonged to them. (But there was a man, who with his partner, marketed some residence that belonged to them.) \/ But with his partner's contract (But with his partner's agreement) \/ he kept aspect of the cash for himself (he kept aspect of the cash for himself) \/ and converted the relax over to the apostles. (and converted the relax over to the apostles.) \/ But with his partner's contract he kept aspect of the cash for himself and converted the relax over to the apostles. (But with his partner's contract he kept aspect of the cash for himself and converted the relax over to the apostles.)\n\nPeter said to him, (Peter said to him,) \/ Why did you let The devil take management of you (Why did you let The devil take management of you) \/ and create you lie to the Sacred Soul (and create you lie to the Sacred Spirit) \/ by maintaining aspect of the cash you obtained for the property? (by maintaining aspect of the cash you obtained for the property?) \/ Chris said to him, Why did you let The devil take management of you and create you lie to the Sacred Soul by maintaining aspect of the cash you obtained for the property? 
(Peter said to him, Why did you let The devil take management of you and create you lie to the Sacred Soul by maintaining aspect of the cash you obtained for the property?)\n\nBefore you marketed the residence, (Before you marketed the residence,) \/ it belonged to you (it belonged to you) \/ and after you marketed it, (and after you marketed it,) \/ the cash was yours. (the cash was yours.) \/ Before you marketed the residence, it belonged to you and after you marketed it, the cash was yours. (Before you marketed the residence, it belonged to you and after you marketed it, the cash was yours.) \/ Why, then, did you choose to do such a thing? (Why, then, did you choose to do such a thing?) \/ You have not protect to individuals, (You have not protect to individuals,) \/ you have protect to God. (you have protect to God.) \/ You have not protect to individuals, you have protect to God. (You have not protect to individuals, you have protect to God.)\n\nAs soon as the man observed this, (As soon as the man observed this,) \/ he dropped down dead (he dropped down dead) \/ and all who observed about it were frightened. (and all who observed about it were frightened.) \/ As soon as the man observed this, he dropped down dead and all who observed about it were frightened. (As soon as the man observed this, he dropped down dead and all who observed about it were frightened.) \/ The young men came in, (The young men came in,) \/ covered up his system, (wrapped up his system,) \/ taken him out, (carried him out,) \/ and laid to relax him. (and laid to relax him.) \/ The young men came in, covered up his system, taken him out, and laid to relax him. (The young men came in, covered up his system, taken him out, and laid to relax him.)\n\nSo Chris said to her, (So Chris said to her,) \/ Why did you and your partner choose (Why did you and your partner decide) \/ to put the Lord's Soul to the test? (to put the Lord's Soul to the test?) 
\/ So Chris said to her, Why did you and your partner choose to put the Lord's Soul to the test? (So Chris said to her, Why did you and your partner choose to put the Lord's Soul to the test?) \/ The men who laid to relax your partner (The men who laid to relax your husband) \/ are at the entry right now, (are at the entry right now,) \/ and they will bring you out too. (and they will bring you out too.) \/ The men who laid to relax your partner are at the entry right now, and they will bring you out too. (The men who laid to relax your partner are at the entry right now, and they will bring you out too.) \/ At once she dropped down at his legs and passed away. (At once she dropped down at his legs and passed away.)\n\nThe young men came in (The young men came in) \/ and saw that she was deceased, (and saw that she was deceased,) \/ so they taken her out (so they taken her out) \/ and laid to relax her beside her partner. (and laid to relax her beside her partner.) \/ The young men came in and saw that she was deceased, so they taken her out and laid to relax her beside her partner. (The young men came in and saw that she was deceased, so they taken her out and laid to relax her beside her partner.) \/ The whole cathedral (The whole church) \/ and all the others who observed of this (and all the others who observed of this) \/ were frightened. (were frightened.) \/ The whole cathedral and all the others who observed of this were frightened. (The whole cathedral and all the others who observed of this were frightened.)\n\nMany wonders and amazing things (Many wonders and wonders) \/ were being conducted among the individuals (were being conducted among the people) \/ by the apostles. (by the apostles.) \/ Many wonders and amazing things were being conducted among the individuals by the apostles. (Many wonders and amazing things were being conducted among the individuals by the apostles.) \/ All the followers met together in Solomon's Patio. 
(All the followers met together in Solomon's Patio.) \/ Nobody outside the team dared be a part of them, (Nobody outside the team dared be a part of them,) \/ even though the individuals talked extremely of them. (even though the individuals talked extremely of them.) \/ Nobody outside the team dared be a part of them, even though the individuals talked extremely of them. (Nobody outside the team dared be a part of them, even though the individuals talked extremely of them.)\n\nBut more and more individuals (But more and more people) \/ were included to the number of men and ladies (were included to the number of men and women) \/ who considered in the Master. (who considered in the Master.) \/ But more and more individuals were included to the number of men and ladies who considered in the Master. (But more and more individuals were included to the number of men and ladies who considered in the Master.)\n\nAs a consequence of what the apostles were doing, (As a consequence of what the apostles were doing,) \/ fed up individuals were conducted into the roads (sick individuals were conducted into the streets) \/ and placed on mattresses and pads (and placed on mattresses and mats) \/ Due to what the apostles were doing, fed up individuals were conducted into the roads and placed on mattresses and pads (As a consequence of what the apostles were doing, fed up individuals were conducted into the roads and placed on mattresses and mats) \/ so that at least Peter's darkness (so that at least Peter's shadow) \/ might drop on some of them (might drop on some of them) \/ as he approved by. (as he\npassed by.) \/ so that at least Peter's darkness might drop on some of them as he approved by. 
(so that at least Peter's darkness might drop on some of them as he approved by.)\n\nAnd crowd (And crowds of people of people) \/ came in from the areas around Jerusalem, (came in from the areas around Jerusalem,) \/ providing those who were fed up (bringing those who were sick) \/ or who had wicked mood in them (or who had wicked mood in them) \/ and they were all recovered. (and they were all recovered.) \/ And crowd came in from the areas around Jerusalem, providing those who were fed up or who had wicked mood in them and they were all recovered. (And crowd came in from the areas around Jerusalem, providing those who were fed up or who had wicked mood in them and they were all recovered.)\n\nThen the Great Preacher (Then the Great Priest) \/ and all his partners, (and all his partners,) \/ associates of the regional celebration of spiritual management, (members of the regional celebration of spiritual management,) \/ Then the Great Preacher and all his partners, associates of the regional celebration of spiritual management, (Then the Great Preacher and all his partners, associates of the regional celebration of spiritual management,) \/ became incredibly envious of the apostles (became incredibly envious of the apostles) \/ so they made the decision to take activity. (so they made the decision to take activity.) \/ became incredibly envious of the apostles so they made the decision to take activity. (became incredibly envious of the apostles so they made the decision to take activity.)\n\nLESSON 11: EXERCISE LESSON\n\n11.2 Do it again each phrase.\n\n11.2a Finish the following phrases with \"of dropping.\"\nI am frightened (I am frightened of dropping.) \/ I was frightened (I was frightened of\nfalling.) \/ I will be frightened (I will be frightened of dropping.)\nHe is frightened (He is frightened of dropping.) \/ He was frightened (He was terrified\nof dropping.) 
\/ He will be frightened (He will be frightened of dropping.)\nYou are frightened (You are frightened of dropping.) \/ You were frightened (You\nwere frightened of dropping.) \/ You will be frightened (You will be frightened of\nfalling.)\n\n\u25ac Finish the following phrases with \"by the activity.\"\nWe are frightened (We are frightened by the activity.) \/ We were frightened (We\nwere frightened by the activity.) \/ We will be frightened (We will be frightened by\nthe activity.)\n\nThey are frightened (They are frightened by the activity.) \/ They were terrified\n(They were frightened by the activity.) \/ They will be frightened (They will be\nterrified by the activity.)\n\n11.2b Finish the following phrases with \"with their guidelines.\"\nI believe the fact (I believe the fact with their guidelines.) \/ I made the decision (I made the decision with their\ninstructions.) \/ I will believe the fact (I will believe the fact with their guidelines.)\nShe confirms (She confirms with their guidelines.) \/ She made the decision (She agreed\nwith their guidelines.) \/ She will believe the fact (She will believe the fact with their\ninstructions.)\nYou believe the fact (You believe the fact with their guidelines.) \/ You made the decision (You agreed\nwith their guidelines.) \/ You will believe the fact (You will believe the fact with their\ninstructions.)\n\n\u25ac Finish the following phrases with \"control of the city.\"\nWe believe the fact (We accept to take management of the city.) \/ We made the decision (We made the decision to\ntake management of the city.) \/ We will believe the fact (We will accept to take management of\nthe city.)\n\nThey believe the fact (They accept to take management of the city.) \/ They made the decision (They\nagreed to take management of the city.) 
\/ They will believe the fact (They will believe the fact to\ntake management of the city.)\n\n11.2c Finish the following phrases with \"action for the public's excellent.\"\nI take (I take activity for the public's excellent.) \/ I took (I took activity for the\npublic's excellent.) \/ I will take (I will take activity for the public's excellent.)\nShe requires (She requires activity for the public's excellent.) \/ She took (She took\naction for the public's excellent.) \/ She will take (She will take activity for the\npublic's excellent.)\n\nYou take (You take activity for the public's excellent.) \/ You took (You took\naction for the public's excellent.) \/ You will take (You will take activity for the\npublic's excellent.)\n\n\u25ac Finish the following phrases with \"to take management of the young men.\"\nWe take (We take management of the young men.) \/ We took (We took management of\nthe young men.) \/ We will take (We will take management of the young men.)\nThey take (They take management of the young men.) \/ They took (They took\ncontrol of the young men.) \/ They will take (They will take management of the\nyoung men.)\n\n11.2d Finish the following phrases with \"it by the entry.\"\n\nI position (I position it by the entry.) \/ I placed (I placed it by the entry.) \/ I will\nplace (I will position it by the entry.)\nShe locations (She locations it by the entry.) \/ She placed (She placed it by the\ndoor.) \/ She will position (She will position it by the entry.)\nYou position (You position it by the entry.) \/ You placed (You placed it by the\ndoor.) \/ You will position (You will position it by the entry.)\n\n\u25ac Finish the following phrases with \"them down by our legs.\"\nWe position (We position them down by our legs.) \/ We placed (We placed them down by our legs.) \/ We will position (We will position them down by our legs.)\nThey position (They position them down by our legs.) \/ They placed (They placed them down by our legs.) 
\/ They will position (They will position them down by our legs.)\n\n11.2e Finish the following phrases with \"the shop at beginning.\"\nI get into (I get into the shop at beginning.) \/ I joined (I joined the shop at beginning.) \/ I will get into (I will get into the shop at beginning.)\nShe goes into (She goes into the shop at beginning.) \/ She joined (She joined the shop at beginning.) \/ She will get into (She will get into the shop at beginning.)\nYou get into (You get into the shop at beginning.) \/ You joined (You joined the shop at beginning.) \/ You will get into (You will get into the shop at beginning.)\n\n\u25ac Finish the following phrases with \"public residence here.\"\nWe get into (We get into community residence here.) \/ We joined (We joined community residence here.) \/ We will get into (We will get into community residence here.)\nThey get into (They get into community residence here.) \/ They joined (They joined community residence here.) \/ They will get into (They will get into community residence here.)\n\n11.3 Response each phrase with \"I don't know yet,\" and \"may.\"\nls11gb.mp3\n11.3a Will you go tomorrow?\n(I don't know yet. I may go the next day.) I don't know yet. I may go the next day. (I don't know yet. I may go the next day.)\n\n11.3b Will she discuss the emergency?\n(I don't know yet. She may discuss the urgent.) I don't know yet. She may discuss the urgent. (I don't know yet. She may discuss the urgent.)\n\n11.3c Will it be a lot of money?\n(I don't know yet. It may be a huge sum of cash.) I don't know yet. It may be a huge sum of cash. (I don't know yet. It may be a huge sum of cash.)\n\n11.3d Will they do all of their work?\n(I don't know yet. They may do all of their perform.) I don't know yet. They may do all of their perform. (I don't know yet. They may do all of their perform.)\n\n11.3e Will it be examined soon?\n(I don't know yet. It may be examined soon.) I don't know yet. It may be examined soon. (I don't know yet. 
It may be examined soon.)\n\n11.3f Will we know the outcomes tomorrow?\n(I don't know yet. We may know the outcomes the next day.) I don't know yet. We may know the outcomes the next day. (I don't know yet. We may know the outcomes the next day.)\n\n11.3g Will the relax of the cash be given away?\n(I don't know yet. The relax of the cash may be given away.) I don't know yet. The relax of the cash may be given away. (I don't know yet. The relax of the cash may be given away.)\n\n11.4 Response each phrase with \"I don't think I (or another person) will. But I (or the other person) might .\"\n11.4a Will you go tomorrow?\n(I don't think I will. But I might go the next day.) I don't think I will. But I might go the next day. (I don't think I will. But I might go the next day.)\n\n11.4b Will she discuss the emergency?\n(I don't think she will. But she might discuss the urgent.) I don't think she will. But she might discuss the urgent. (I don't think she will. But she might discuss the urgent.)\n\n11.4c Will we provide a lot of money?\n(I don't think we will. But we might provide a huge sum of cash.) I don't think we will. But we might provide a huge sum of cash. (I don't think we will. But we might provide a huge sum of cash.)\n\n11.4d Will they do all of their work?\n(I don't think they will. But they might do all of their perform.) I don't think they will. But they might do all of their perform. (I don't think they will. But they might do all of their perform.)\n\n11.4e Will she analyze it soon?\n(I don't think she will. But she might analyze it soon.) I don't think she will. But she might analyze it soon. (I don't think she will. But she might analyze it soon.)\n\n11.4f Will we know the outcomes tomorrow?\n(I don't think we will. But we might know the outcomes the next day.) I don't think we will. But we might know the outcomes the next day. (I don't think we will. 
But we might know the outcomes the next day.)\n\n11.4g Will the relax of them drop down?\n(I don't think they will. But the relax of them might drop down.) I don't think they will. But the relax of them might drop down. (I don't think they will. But the relax of them might drop down.)\n\n11.5 Do it again each term (regular verbs).\n11.5a TO TEST (to test) \/ He guaranteed to analyze it. (He guaranteed to analyze it.)\ntesting (testing) \/ He is examining some. (He is examining some.)\ntested (tested) \/ it is examined (it is tested) \/ it was examined (it was tested) \/ it will be examined (it will be tested)\n\nI analyze (I test) I examined (I tested) I will analyze (I will test)\nhe assessments (he tests) he examined (he tested) he will analyze (he will test)\nshe assessments (she tests) she examined (she tested) she will analyze (she will test)\nit assessments (it tests) it examined (it tested) it will analyze (it will test)\nyou analyze (you test) you examined (you tested) you will analyze (you will test)\nwe analyze (we test) we examined (we tested) we will analyze (we will test)\nthey analyze (they test) they examined (they tested) they will analyze (they will test)\n\n11.5b TO PASS (to pass) \/ He guaranteed to pass it. (He guaranteed to pass it.)\npassing (passing) \/ He is moving some. 
(He is moving some.)\npassed (passed) \/ it is approved (it is passed) \/ it was approved (it was passed) \/ it will be approved (it will be passed)\nI pass (I pass) I approved (I passed) I will pass (I will pass)\nhe goes (he passes) he approved (he passed) he will pass (he will pass)\nshe goes (she passes) she approved (she passed) she will pass (she will pass)\nit goes (it passes) it approved (it passed) it will pass (it will pass)\nyou pass (you pass) you approved (you passed) you will pass (you will pass)\nwe pass (we pass) we approved (we passed) we will pass (we will pass)\nthey pass (they pass) they approved (they passed) they will pass (they will pass)\n\nreceiving (receiving) \/ He is getting some. (He is getting some.)\n\n11.5d TO LIE (to lie) \/ He guaranteed not to lie. (He guaranteed not to lie.)\nlying (lying) \/ He is relaxing. (He is relaxing.)\nlied : It is protect is rarely or never used.\n\nI lie (I lie) I protect (I lied) I will lie (I will lie)\nhe can be found (he lies) he protect (he lied) he will lie (he will lie)\nshe can be found (she lies) she protect (she lied) she will lie (she will lie)\nit can be found (it lies) it protect (it lied) it will lie (it will lie)\nyou lie (you lie) you protect (you lied) you will lie (you will lie)\nwe lie (we lie) we protect (we lied) we will lie (we will lie)\nthey lie (they lie) they protect (they lied) they will lie (they will lie)\n\n11.5e TO WRAP (to wrap) \/ He guaranteed to wrap it. (He guaranteed to wrap it.)\nwrapping (wrapping) \/ He is covering some. 
(He is wrapping some.)
wrapped (wrapped) / it is wrapped (it is wrapped) / it was wrapped (it was wrapped) / it will be wrapped (it will be wrapped)

I wrap (I wrap) I wrapped (I wrapped) I will wrap (I will wrap)
he wraps (he wraps) he wrapped (he wrapped) he will wrap (he will wrap)
she wraps (she wraps) she wrapped (she wrapped) she will wrap (she will wrap)
it wraps (it wraps) it wrapped (it wrapped) it will wrap (it will wrap)
you wrap (you wrap) you wrapped (you wrapped) you will wrap (you will wrap)
we wrap (we wrap) we wrapped (we wrapped) we will wrap (we will wrap)
they wrap (they wrap) they wrapped (they wrapped) they will wrap (they will wrap)

11.6 Repeat each word (irregular verbs).

11.6a TO FALL (to fall) / He promised not to fall. (He promised not to fall.)
falling (falling) / He is falling. (He is falling.)
fallen: "it is fallen" is rarely or never used.
I fall (I fall) I fell (I fell) I will fall (I will fall)
he falls (he falls) he fell (he fell) he will fall (he will fall)
she falls (she falls) she fell (she fell) she will fall (she will fall)
it falls (it falls) it fell (it fell) it will fall (it will fall)
you fall (you fall) you fell (you fell) you will fall (you will fall)
we fall (we fall) we fell (we fell) we will fall (we will fall)
they fall (they fall) they fell (they fell) they will fall (they will fall)

11.6b TO MEET (to meet) / He promised not to meet. (He promised not to meet.)
meeting (meeting) / He is meeting them.
(He is meeting them.)
met (met) / it is met (it is met) / it was met (it was met) / it will be met (it will be met)
I meet (I meet) I met (I met) I will meet (I will meet)
he meets (he meets) he met (he met) he will meet (he will meet)
she meets (she meets) she met (she met) she will meet (she will meet)
it meets (it meets) it met (it met) it will meet (it will meet)
you meet (you meet) you met (you met) you will meet (you will meet)
we meet (we meet) we met (we met) we will meet (we will meet)
they meet (they meet) they met (they met) they will meet (they will meet)

11.6c TO LAY (to lay) / He promised not to lay it down. (He promised not to lay it down.)
laying (laying) / He is laying them down. (He is laying them down.)
laid (laid) / it is laid (it is laid) / it was laid (it was laid) / it will be laid (it will be laid)
I lay (I lay) I laid (I laid) I will lay (I will lay)
he lays (he lays) he laid (he laid) he will lay (he will lay)
she lays (she lays) she laid (she laid) she will lay (she will lay)
it lays (it lays) it laid (it laid) it will lay (it will lay)
you lay (you lay) you laid (you laid) you will lay (you will lay)
we lay (we lay) we laid (we laid) we will lay (we will lay)
they lay (they lay) they laid (they laid) they will lay (they will lay)

11.6d TO READ (to read) / He promised not to read it. (He promised not to read it.)
reading (reading) / He is reading to them. (He is reading to them.)

11.7 Repeat each letter of the alphabet.

A / a B / b C / c D / d E / e F / f G / g
H / h I / i J / j K / k L / l M / m N / n
O / o P / p Q / q R / r S / s T / t U / u
V / v W / w X / x Y / y Z / z

11.8 I will ask, "Did you put them in jail?" You will answer, "No, I didn't put them in jail." I will ask, "Did they secure it together?" You will answer, "No, they didn't secure it together."

11.8a Did you put them in jail?
(No, I didn't put them in jail.)
No, I didn't put them in jail. (No, I didn't put them in jail.)

11.8b Did they secure it together?
(No, they didn't secure it together.) No, they didn't secure it together. (No, they didn't secure it together.)

11.8c Did she help them work?
(No, she didn't help them work.) No, she didn't help them work. (No, she didn't help them work.)

11.8d Did he discover everything himself?
(No, he didn't discover everything himself.) No, he didn't discover everything himself. (No, he didn't discover everything himself.)

11.8e Does she tell them to walk?
(No, she doesn't tell them to walk.) No, she doesn't tell them to walk. (No, she doesn't tell them to walk.)

11.8f Do you come together?
(No, we don't come together.) No, we don't come together. (No, we don't come together.)

11.8g Do we wonder about today?
(No, we don't wonder about today.) No, we don't wonder about today. (No, we don't wonder about today.)

11.8h Does he hold the child's hand?
(No, he doesn't hold the child's hand.) No, he doesn't hold the child's hand.
(No, he doesn't hold the child's hand.)

THE VERB AGREES WITH ITS SUBJECT
women agree / The women from that group always agree to manage it.
group agrees / That group of women always agrees to manage it.
students understand / The students in this school understand English.
school teaches / This school for men teaches English.
children take / The children from this family take part with us.
family takes / This family with three children takes part with us.
Peter runs / he runs / Peter sometimes runs.
John and Peter run / they run / John and Peter sometimes run together.
car is / it is / The car is on the street.
car and bus are / they are / The car and bus are on the street.
hand was / it was / His hand was hurt.
hand and arm were / they were / Both his hand and arm were hurt.

THE USE OF "OTHER"
another is (one person) / Another man is strong.
others are (two or more people) / Others are strong.
the other is (one person) / The other man is strong.
the others are (two or more people) / The others are strong.

LESSON 11 VOCABULARY
action / part / to fall
amount / celebration / to lay
angel / prison / to lie
another / house / to pass
bed / audience / to place
control / bill / to take
dare / rest / to take action
dawn / right now / to take control
entrance / dark / to test
extreme, extremely / to test / to wrap
lie / to be / town
local / to be frightened / lady, women
mat / to challenge / wrapping
might, mighty / to enter

With his wife's knowledge he kept part of the money for himself and turned the rest over to their leaders. (5:2)
Why did you let him take control of you and make you lie? (5:3)
Before you sold the house, it belonged to you and after you sold it, the money was yours. (5:4)
About three hours later his wife, not knowing what had happened, came in. (5:7)
Nobody outside the group dared join them, even though the people spoke highly of them.
(5:13)

As a result of what they were doing, people were brought to see them. (5:15)
Then the leaders became extremely jealous of them, so they decided to take action. (5:17)
But that night the prison gates were opened. (5:19)

THE FAMILY VOCABULARY
brother (younger, little, older, big); brother-in-law; child (children); cousin (your generation); dad; father (paternal); grandchildren; granddaughter; grandfather; grandmother; grandparents; grandson; great aunt (grandparent's generation); great uncle (grandparent's generation); great granny; great grandfather; great grandmother; great grandparents; great great grandparents; great nephew (grandchild's generation); great niece (grandchild's generation); mom; mother (maternal); niece, nephew (children's generation); old lady (slang, impolite); old man (slang, impolite); parents; partner; sibling(s); sister (younger, little, older, big); sister-in-law; twin (brother, sister, identical, fraternal)
## PostgreSQL Data Access with Haskell

### Introduction

PostgreSQL is a very popular relational database with quite a few different data access libraries available for the Haskell programming language.

Today's article aims to get you up and running, executing queries against PostgreSQL from your Haskell environment with the least amount of hassle.

### Postgresql-simple

The first library that we'll go through is postgresql-simple. This library has a very basic interface and is really simple to get up and running.

A mid-level client library for the PostgreSQL database, aimed at ease of use and high performance.

### Prerequisites

Before you get started, you'll need libpq installed.

You'll need to add a dependency on the postgresql-simple library to your application. The following code will then allow you to connect to your PostgreSQL database and run a simple command.

### Hello, Postgres!

When your application successfully builds and executes, you should be met with the following output:

Walking through this code quickly, we first enable OverloadedStrings so that we can specify our Query values as literal strings.

In order to connect to Postgres, we use a ConnectInfo value, which is filled out for us via defaultConnectInfo. We just override those values for our examples. I'm running PostgreSQL in a docker container, so I've used my docker network address.

The localPG value is now used to connect to the Postgres database. The conn value will be referred to after a successful connection to send instructions to.

Finally, we run our query SELECT 1 + 1 using the query_ function.
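The article's original code block did not survive extraction; the following is a minimal sketch of what the "Hello, Postgres!" program likely looked like, using the real postgresql-simple API (`defaultConnectInfo`, `connect`, `query_`). The host, database, user, and password values are assumptions; adjust them for your setup, and note the program needs a running PostgreSQL server.

```haskell
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Database.PostgreSQL.Simple

-- Connection details are assumptions; the article overrides
-- defaultConnectInfo with a docker network address.
localPG :: ConnectInfo
localPG = defaultConnectInfo
  { connectHost     = "172.17.0.2"
  , connectDatabase = "test"
  , connectUser     = "postgres"
  , connectPassword = "password"
  }

main :: IO ()
main = do
  conn <- connect localPG           -- open the connection
  -- query_ takes no substitution parameters; rows decode via FromRow,
  -- here a single-column Int wrapped in Only
  result <- query_ conn "SELECT 1 + 1" :: IO [Only Int]
  mapM_ print result                -- expect: Only {fromOnly = 2}
```

The `:: IO [Only Int]` annotation tells the library how to decode the result set when the surrounding code does not pin the type.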
conn is passed to refer to the connection to execute this query on.

With this basic code, we can start to build on some examples.

### Retrieve a specific record

In the Hello, World example above, we were adding two static values to return another value. As examples get more complex, we need to give the library more information about the data that we're working with. Int is very well known already and already has mechanisms to deal with it (along with other basic data types).

In the client database table we have a list of names and ids. We can create a function to retrieve the name of a client, given an id:

The Query template passed in makes use of the ? character to specify where substitutions will be put. Note the use of query rather than query_. In this case, query also accepts a tuple containing all of the values for substitution.

Using the FromRow type class, our code can define a much stronger API. We can actually retrieve client rows from the database and convert them into Client values.

The Client data type needs definition now. It's how we'll refer to a client within our Haskell program:

The Client data type now gets a FromRow instance, which allows postgresql-simple to use it. We give the fromRow definition in the order of the field definitions. The retrieveClient function only changes to broaden its query and change its return type!

### Create a new record

When creating data, you can use the function execute. The execute function is all about executing a query without any return value.

Extending our API, we can make a createClient function, but with a twist: we'll also return the generated identifier (because of the id field).

We need a definition for Int64.
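The code for these sections was also lost in extraction; here is a hedged sketch of what the Client type, its FromRow instance, and the retrieve/create functions likely looked like. The table name `client` and columns `id`/`name` are assumptions inferred from the prose; the library calls (`query`, `field`, `Only`, `RETURNING`) are real postgresql-simple usage.

```haskell
{-# LANGUAGE OverloadedStrings #-}

module ClientStore where

import Data.Int (Int64)
import Database.PostgreSQL.Simple
import Database.PostgreSQL.Simple.FromRow

-- Assumed table layout: client (id SERIAL PRIMARY KEY, name TEXT)
data Client = Client
  { clientId   :: Int
  , clientName :: String
  } deriving (Show)

-- fromRow consumes columns in order, so the field order here must
-- match the column order of the SELECT.
instance FromRow Client where
  fromRow = Client <$> field <*> field

-- The ? marks a substitution point; query (unlike query_) takes a
-- parameter tuple -- Only wraps a single parameter.
retrieveClient :: Connection -> Int -> IO [Client]
retrieveClient conn cid =
  query conn "SELECT id, name FROM client WHERE id = ?" (Only cid)

-- INSERT ... RETURNING id hands back the generated SERIAL key,
-- which arrives as an Int64 on the Haskell side.
createClient :: Connection -> String -> IO [Only Int64]
createClient conn name =
  query conn "INSERT INTO client (name) VALUES (?) RETURNING id" (Only name)
```

Because `RETURNING` produces a result set, `createClient` uses `query` rather than `execute`, even though it is an insert.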
This is what the underlying SERIAL in PostgreSQL will translate to inside of your Haskell application.

We can now use createClient to set up an interface of sorts for users to enter information.

We've created a data creation interface now.

### Update an existing record

When it comes to updating data, we don't expect much back in return aside from the number of records affected by the instruction. The execute function does exactly this. By measuring the return value, we can convert the row count into a success/fail style message. I've simply encoded this as a boolean here.

### Destroying records

Finally, destroying information in the database will look a lot like the update.

execute providing the affected count allows us to perform the post-execution validation again.

### Summary

Those are some basic operations to get up and running using postgresql-simple. It really looks like you can prototype software all the way through to writing fully blown applications with it.

## 11.8E: Exercises

Physics 160 Assignments Page

The homework assignments are done via the website WebAssign.net. Every time you log on, you will be asked for three things:

Username: This is your first initial and last name combined as one word (for example, mine would be MHASSELBECK)

Log on to WebAssign by clicking here. The three entry fields are on the left hand side of the page.

The problems are directly out of the textbook, but some of the numbers are often changed on the WebAssign site. Follow the directions carefully. You need to complete the entire problem set before submitting it to the website for grading. If you get incorrect answers, they will be noted on the response page that comes up immediately afterwards. After the first submission, you'll have two more opportunities to fix the incorrect answers.
If you are happy with your results on the first or second try (even if not a perfect score), the system will grade the last acceptable submission made before the deadline. You are not required to make three submissions.

Note that very large or small numbers (or any number for that matter) can be entered using scientific notation with the format 3.00e+8, for example. Entering 3E8 will also work in this case. The Perl script at WebAssign does not recognize the character sequence 10^ for exponentials. You can also enter fractional values such as 1/10 instead of 0.10. Unrecognized entries (examples are accidental spaces between numerals or using O instead of 0 to designate zero) are counted as errors for the purpose of grading, so watch for typos when entering data. In general, the script is looking for answers correct to 3 significant figures and a numerical accuracy of 2%. If you are doing a calculation involving pi, for example, you'd need to use at least 3.14 (although having more digits is probably a good idea). If you have questions about entering numerical answers, click here.

Problem Set 1 is due on Sunday, August 25 at 11:59 pm. Solutions here.

Problem Set 2 is due on Thursday, August 29 at 5:15 pm (just before class). Solutions Page 1, Page 2, Page 3.

Problem Set 3 (Textbook: 2-17E, 2-21P, 2-24E, 2-30P) is due on Sunday, September 1 at 11:59 pm. Solutions Page 1, Page 2.

Problem Set 4 (Textbook: 2-33P, 2-34P, 2-42E, 2-57P) is due on Tuesday, September 3 at 5:15 pm (just before class). Solutions Page 1, Page 2.

Problem Set 5 (Textbook: 3-3E, 3-4E, 3-5E, 3-8P, 3-14E, 3-16E, 3-21P, 3-36P) is due on Sunday, September 8 at 11:59 pm. Solutions Page 1, Page 2, Page 3.

Problem Set 6 There are two written problems to be handed in before class and three WebAssign exercises. The two written problems are from the textbook: 2-52P and 4-15P. See WebAssign for hints on 2-52P. The three WebAssign problems are also in the textbook: 4-4P, 4-9E, 4-11E.
All five are due on Tuesday, September 10 at 5:15 pm (just before class). Solutions Page 1, Page 2, Page 3.

Problem Set 7 (Textbook: 4-17E, 4-23E, 4-25P, 4-33P, 4-44E, 4-48P) is due on Sunday, September 15 at 11:59 pm. Solutions Page 1, Page 2, Page 3, Page 4, Page 5.

Problem Set 8 (Textbook: 5-2E, 5-4E, 5-6P, 5-7P) is due on Sunday, September 22 at 11:59 pm. Solutions Page 1, Page 2.

Problem Set 9 consists of two written problems (Textbook questions 5-42 and 5-53) and four WebAssign problems (Textbook questions 5-17E, 5-38P, 5-43P, 5-50P). Look at this problem set on WebAssign for hints. All six problems are due on Thursday, September 26 at 5:15 pm (just before class). Solutions Page 1, Page 2, Page 3, Page 4, Page 5.

Problem Set 10 is entirely WebAssign (Textbook: 5-40P, 5-47P, 6-2E, 6-3E, 6-8E, 6-15P, 6-19P) and is due on Tuesday, October 1 at 5:15 pm. See WebAssign for hints. Solutions Page 1, Page 2, Page 3, Page 4, Page 5, Page 6, Page 7.

Problem Set 11 is entirely WebAssign (Textbook: 6-21P, 6-22P, 6-28P, 6-34E, 6-37E, 6-41P, 6-45P) and is due on Sunday, October 6 at 11:59 pm. Solutions Page 1, Page 2, Page 3, Page 4, Page 5, Page 6, Page 7.

The due date for Problem Set 12 has been extended until Tuesday, October 15 at 5:15 pm because of fall break. (Textbook questions: 7-2E, 7-9E, 7-13P, 7-15E, 7-22P, 7-25E, 7-30E) Solutions Page 1, Page 2, Page 3, Page 4.

Problem Set 13 is due Monday, October 21 at 6:00 pm (day before test). Textbook questions: 8-2E, 8-4E, 8-7P, 8-15P, 8-21P, 8-24P, 8-26P, 8-46E, 8-50P, 8-59P. See WebAssign for hints. Solutions Page 1, Page 2, Page 3, Page 4.

Problem Set 14 is a WebAssign due Tuesday, October 29 at 5:15 pm. Textbook questions: 9-3E, 9-5E, 9-7P, 9-11E, 9-15P, 9-21E, 9-23P, 9-25P, 9-36P, 9-39P. See WebAssign for hints. Solutions Page 1, Page 2, Page 3.

Problem Set 15 is a WebAssign due Sunday, November 3 at 11:59 pm.
Textbook questions: 10-2E, 10-3E, 10-8P, 10-10P, 10-14P, 10-23E, 10-26P, 10-34P, 10-36E, 10-40P. See WebAssign for hints. Solutions Page 1, Page 2, Page 3.

Problem Set 16 consists of two written problems (Textbook questions 10-49E and 10-53P) and six WebAssign problems (Textbook questions 11-2E, 11-4E, 11-6P, 11-8E, 11-14P, 11-29P). Look at this problem set on WebAssign for hints. All eight problems are due on Tuesday, November 5 at 5:15 pm (just before class). Page 1, Page 2, Page 3, Page 4.

Problem Set 17 is a WebAssign due Sunday, November 10 at 11:59 pm. Textbook questions: 11-36E, 11-38E, 11-39E, 11-44P, 11-46E, 11-50E, 11-51E, 11-55P, 11-59E, 11-63P. See WebAssign for hints. Solutions Page 1, Page 2, Page 3.

Problem Set 18 is a WebAssign due Tuesday, November 12 at 5:15 pm. Textbook questions: 12-1E, 12-3E, 12-8P, 12-9P, 12-12P, 12-18E. See WebAssign for hints. Solutions Page 1, Page 2, Page 3.

Problem Set 19 is a WebAssign due Sunday, November 17 at 11:59 pm. Textbook questions: 12-24E, 12-30E, 12-33E, 12-35E, 12-38P, 12-39E, 12-44E, 12-50P, 12-54P, 12-58P. See WebAssign for hints. Solutions Page 1, Page 2, Page 3.

Problem Set 20 consists of two written problems (Textbook questions 14-10P and 14-13P) and five WebAssign problems (Textbook questions 14-2E, 14-9P, 14-16E, 14-26E, 14-39P). All seven problems are due before class on Tuesday, November 19. Solutions Page 1, Page 2, Page 3.

Problem Set 21 is a WebAssign due Sunday, November 24 at 11:59 pm. Textbook questions: 15-3E, 15-5E, 15-8E, 15-13E, 15-18P, 15-27E, 15-32P, 15-46E. See WebAssign for hints. Solutions Page 1, Page 2, Page 3.

Problem Set 22 is a WebAssign due Monday, November 25 at 6:00 pm. Textbook questions: 16-3E, 16-10E, 16-42E, 16-44E, 16-45E. Solutions here.

HOUSE . . . . . . . . . . . . . . . No. 95

Text of a further amendment (offered by Mr.
Michlewitz of Boston) to the Senate amendment to the House Bill financing a program for improvements to the Unemployment Insurance Trust Fund and providing relief to employers and workers in the Commonwealth (House, No. 90). March 22, 2021.

The Commonwealth of Massachusetts

In the One Hundred and Ninety-Second General Court
(2021-2022)

By striking out all after the enacting clause and inserting in place thereof the following:–

SECTION 1. To provide for a program for improvements to the Unemployment Insurance Trust Fund and relief to employers in the commonwealth, the sum set forth in section 2, for the several purposes and subject to the conditions specified in this act, is hereby made available, subject to the laws regulating the disbursement of public funds. The sum set forth in section 2 shall be in addition to any amounts previously authorized and made available for these purposes.

EXECUTIVE OFFICE FOR LABOR AND WORKFORCE DEVELOPMENT

7003-2025 For the program to reduce the amount of, or avoid the need to obtain, a federal advance from the federal government or to repay federal advances made to the commonwealth from the federal unemployment account for the fiscal years 2020 to 2025, inclusive, and to fund any reserve account, costs of issuance and capitalized interest, if any, related to bonds issued for such purposes and the initial costs established pursuant to section 19 of this act and expenses of the administration of said program; provided, that the aggregate principal amount shall not exceed the total amount authorized in this item. $7,000,000,000.

SECTION 3. Chapter 151A of the General Laws is hereby amended by inserting after section 14J the following section:-

Section 14J1/2.
For the period from January 1, 2021 until December 31, 2022, each employer required to make contributions pursuant to section 14 shall pay an excise on the wages paid to its employees in accordance with the following table: For the purpose of this section, the term "wages" shall include only that part of remuneration on which the employer is required to make contributions pursuant to said section 14. Such excise shall be paid to the commissioner in accordance with the procedures prescribed by the commissioner. The commissioner shall deposit the receipts of such excise into the Federal Loan Interest Fund established in section 14K. Such receipts shall not be subject to the allowable state tax revenue limitations established in chapter 62F. Prior to the depositing of the receipts, the commissioner may deduct all administrative costs incurred as a result of this section, including an amount as determined by the United States Secretary of Labor in accordance with federal cost rules, if applicable. Except where inconsistent with the terms of this section, the terms and conditions of this chapter which are applicable to the payment of and the collection of contributions pursuant to said section 14 shall apply to the payment of and the collection of said excise; provided, however, that said excise shall not be credited to the employer's account or to the solvency account established pursuant to said section 14 except as otherwise provided in section 14K. The commissioner, after providing not less than 60 days' written notice to the house and senate committees on ways and means and the joint committee on labor and workforce development, may adjust the excise rate specified in this section to pay interest required to be paid to the Federal Loan Interest Fund established by said section 14K.
The notice shall include, but not be limited to: (i) the proposed adjusted excise rate; (ii) the estimated amount of funds that will be raised by the adjusted excise rate; (iii) the rationale for adjusting the excise rate; (iv) the balance of the Federal Loan Interest Fund established in said section 14K; and (v) the estimated amount of interest required to be paid under section 1202(b) of the federal Social Security Act.

SECTION 4. Said chapter 151A is hereby further amended by striking out section 14K, as appearing in the 2018 Official Edition, and inserting in place thereof the following section:-

Section 14K. There is hereby established a separate fund to be known as the Federal Loan Interest Fund which shall be administered by the commissioner, without liability on the part of the commonwealth beyond the amount credited to and earned by the fund. Said fund shall consist of all amounts received under section 14J1/2, which shall be credited to such fund, except as otherwise provided in said section 14J1/2, and any other monies authorized by law to be credited to said fund. Money credited to the fund shall be used only for the payment of interest required to be paid under section 1202(b) of the federal Social Security Act. The monies in said fund shall be continuously available to the commissioner for the payment of said interest without further appropriation and shall not lapse at any time or be transferred to any other fund or account except as provided in this section. On September 30 of each calendar year, the commissioner shall transfer from the fund to the Unemployment Compensation Fund any amounts deposited therein pursuant to said section 14J1/2 prior to the immediately preceding 36-month period which have not been expended for the payment of interest. The commissioner shall credit such amounts transferred to the solvency account pursuant to paragraph (1) of subsection (e) of section 14 as of October 1 of said calendar year.

SECTION 5.
Paragraph (b) of subsection (1) of section 30A of said chapter 151A, as appearing in section 8 of chapter 201 of the acts of 2020, is hereby amended by striking out subparagraph (2) and inserting in place thereof the following subparagraph:-

(2) There shall be a state "off" indicator for the commonwealth for the purposes of this paragraph for weeks of unemployment if at any time the provisions of subparagraph (1) are not met or 100 per cent federal sharing is not available under section 4105 of the federal Families First Coronavirus Response Act, Public Law 116-127, hereinafter the "Families First Act", or any subsequent amendment to the Families First Act, or other federal law and the funding is sufficient to meet the requirements of this subparagraph, including, but not limited to, the federal Continued Assistance for Unemployed Workers Act of 2020.

SECTION 6. Paragraph (c) of said subsection (1) of said section 30A of said chapter 151A, as so appearing, is hereby amended by striking out subparagraph (3) and inserting in place thereof the following 2 subparagraphs:-

(3) There shall be a state "off" indicator for the purposes of this paragraph for weeks of unemployment if at any time the provisions of subparagraph (1) are not met or 100 per cent federal sharing is not available under section 4105 of the Families First Act, or any subsequent amendment to the Families First Act, or other federal law and the funding is sufficient to meet the requirements of this subparagraph, including, but not limited to, the federal Continued Assistance for Unemployed Workers Act of 2020.

(4) With respect to determining whether the state is in an extended benefit period from November 1, 2020 to December 31, 2021, inclusive, the commonwealth shall disregard the requirement of paragraph (a) that no extended benefit period may begin before the fourteenth week following the end of a prior extended benefit period which was in effect.

SECTION 7.
Section 50 of chapter 201 of the acts of 2020 is hereby amended by striking out the words "June 30" and inserting in place thereof the following words:- December 31.

SECTION 8. Notwithstanding chapter 62C of the General Laws or any other general or special law to the contrary, in order to address disruptions caused by the outbreak of the 2019 novel coronavirus, also known as COVID-19, and the effects of the governor's March 10, 2020 declaration of a state of emergency, for taxable year 2020, no tax penalty shall be imposed by the commissioner of revenue on a taxpayer solely for failure to remit taxes imposed by chapter 62 of the General Laws on unemployment compensation, as defined in section 85 of the Internal Revenue Code, received by a taxpayer during taxable year 2020; provided, however, that if such penalty has been assessed, it shall be abated by the commissioner of revenue in whole.

SECTION 9. Notwithstanding chapter 62C of the General Laws or any other general or special law to the contrary, all returns and payments for the 2020 calendar year that would be otherwise due on April 15, 2021 pursuant to subsection (a) of section 6 of said chapter 62C shall be due on May 17, 2021.

SECTION 10. Notwithstanding section 14 of chapter 151A of the General Laws, for calendar years 2021 and 2022, the experience rate of an employer qualifying under subsection (b) of said section 14 of said chapter 151A shall be the rate which appears in column "E" of paragraph (1) of subsection (i) of said section 14 of said chapter 151A.

SECTION 11. Notwithstanding any federal interest charges for necessary federal advances, the commissioner, as defined in subsection (e 1/2) of section 1 of chapter 151A of the General Laws, may pursue any necessary federal advances to provide for timely payment of benefits.
Nothing in this act shall contribute to or allow for a reduction in benefits including, but not limited to, the amount or length of benefits, pursuant to said chapter 151A.

SECTION 12. Notwithstanding any general or special law to the contrary, for the taxable year beginning January 1, 2020, the following items shall be deducted from federal gross income for the purpose of determining Massachusetts gross income under section 2 of chapter 62 of the General Laws: (i) an amount which, but for this section, would be included in the gross income, in whole or in part, of an eligible recipient, as described in subsection (a) of section 1102 of the federal Coronavirus Aid, Relief, and Economic Security Act, P.L. 116-136, because of the forgiveness described in subsection (b) of section 1106 of said federal Coronavirus Aid, Relief, and Economic Security Act, P.L. 116-136; (ii) an amount of an advance received pursuant to subsection (e) of section 1110 of said federal Coronavirus Aid, Relief, and Economic Security Act, P.L. 116-136; (iii) an amount of any payment described in subsection (c) of section 1112 of said federal Coronavirus Aid, Relief, and Economic Security Act, P.L. 116-136; and (iv) an amount of funding received pursuant to section 331 of the federal Economic Aid to Hard-Hit Small Businesses, Nonprofits, and Venues Act, P.L. 116-260.

SECTION 13. The following definitions shall apply to sections 13 to 17, inclusive, and shall have the following meanings unless the context clearly requires otherwise:

"Child", a biological, adopted or foster child, a stepchild or legal ward, a child to whom the employee stands in loco parentis or a person to whom the employee stood in loco parentis when the person was a minor child.
"COVID-19 emergency paid sick leave", paid time-off that is compensated by an employer, and with the same employment benefits to which the employee is entitled from such employer as a term of the employee's employment, for the purposes described in subsection (b) of section 15; provided, however, that in no case shall the employee's hourly compensation be less than that provided under section 1 of chapter 151 of the General Laws.

"Domestic partner", a person not less than 18 years of age who: (i) is dependent upon the employee for support as shown by either unilateral dependence or mutual interdependence that is evidenced by a nexus of factors including, but not limited to: (A) common ownership of real or personal property; (B) common householding; (C) children in common; (D) signs of intent to marry; (E) shared budgeting; and (F) the length of the personal relationship with the employee; or (ii) has registered as the domestic partner of the employee with any registry of domestic partnerships maintained by the employer of either party, or in any state, county, city, town or village in the United States.

"Employee", any person who performs services for an employer for wage, remuneration or other compensation, including employees employed by the commonwealth, its departments, sub-divisions, quasi-public agencies or a municipality, district, political subdivision or its instrumentalities; provided, however, that notwithstanding any general or special law to the contrary, "employee" shall include a family child care provider, as defined in subsection (a) of section 17 of chapter 15D of the General Laws, and a personal care attendant, as defined in section 70 of chapter 118E of the General Laws.
“Employer”, any individual, corporation, partnership or other private or public entity, including any agent thereof, who engages the services of an employee for wages, remuneration or other compensation, including, but not limited to, (i) the commonwealth, its departments, sub-divisions or quasi-public agencies or (ii) a municipality, district, political subdivision or its instrumentalities; provided, however, that the United States government shall not be considered an “employer”; provided further, that an individual employer shall be determined by the federal employer identification number; provided further, that the department of early education and care shall be deemed the employer of family child care providers, as defined in subsection (a) of section 17 of chapter 15D of the General Laws; and provided further, that the PCA quality home care workforce council established in section 71 of chapter 118E of the General Laws shall be the employer of personal care attendants, as defined in section 70 of said chapter 118E.

“Employment benefits”, all benefits provided or made available to employees by an employer, including, but not limited to, group life insurance, health insurance, disability insurance, sick leave, annual or vacation leave, educational benefits and pensions.

“Family member”, the spouse, domestic partner, child, parent or parent of a spouse or domestic partner of the employee, a person who stood in loco parentis to the employee when such employee was a minor child or a grandchild, grandparent or sibling of the employee. For the purposes of this definition, “person who stood in loco parentis” shall not include a person with whom the employee has no personal relationship.
“Health care provider”, a health care professional licensed under chapter 112 of the General Laws or any other person licensed under federal or any state law to provide medical care or emergency medical services and authorized to provide such services in the commonwealth.

“Parent”, a biological, adoptive, foster or step-parent of an employee or of an employee’s spouse or domestic partner, a legal guardian of an employee or other person who stood in loco parentis when the employee or employee’s spouse or domestic partner was a minor child.

“Spouse”, a person who is married to the employee.

“Telework”, a work flexibility arrangement under which an employee performs the duties and responsibilities of such employee’s position, and other authorized activities, from an approved worksite other than the location from which the employee would otherwise work.

SECTION 14. There shall be established a fund known as the COVID-19 Emergency Paid Sick Leave Fund to be administered by the executive office for administration and finance, or any department or agency thereof designated by the executive office. The purpose of the fund shall be to reimburse eligible employers for providing employees with COVID-19 emergency paid sick leave. There shall be credited to the fund all amounts that are transferred or authorized to be transferred thereto or directed to be deposited therein, and all amounts received as gifts, grants or contributions for the purposes of the fund. Amounts credited to the fund shall not be subject to appropriation. Any money remaining in the fund as of September 30, 2021 and not subject to a filed employer reimbursement application under section 15, shall revert to the General Fund; provided, however, that all money in the fund shall revert to the General Fund not later than November 1, 2021.

SECTION 15.
(a)(1) Notwithstanding any general or special law to the contrary, as a result of the outbreak of the 2019 novel coronavirus, also known as COVID-19, as of the effective date of this section, an employer shall provide, subject to section 16, COVID-19 emergency paid sick leave to its employees pursuant to paragraph (3) who are absent from and are unable to work pursuant to subsection (b).

(2) The executive office for administration and finance, or any department or agency thereof designated by the executive office, shall reimburse an employer from the COVID-19 Emergency Paid Sick Leave Fund, established in section 14, for the cost of providing COVID-19 emergency paid sick leave to an employee; provided, however, that any qualified sick leave wages paid by an employer that are eligible for the tax credit for paid sick and paid family and medical leave under the federal Families First Coronavirus Response Act, P.L. 116-127 or subsequent extensions, including the federal Consolidated Appropriations Act, 2021 and the federal American Rescue Plan Act of 2021, shall not be eligible for reimbursement from said COVID-19 Emergency Paid Sick Leave Fund.
(3) An employer shall provide the following amount of leave for an employee who takes COVID-19 emergency paid sick leave: (i) an employee who works 40 hours or more per week shall be provided 40 hours of COVID-19 emergency paid sick leave; (ii) an employee who works less than 40 hours a week, but maintains a regular schedule with consistent hours per week, shall be provided COVID-19 emergency paid sick leave that is equal to the number of hours that such employee works per week, on average over a 14-day period of such regular schedule; or (iii) for an employee whose schedule and weekly hours worked vary from week to week, such employee shall be provided COVID-19 emergency paid sick leave that: (A) is equal to the average number of hours that the employee was scheduled to work per week over the 6-month period immediately preceding the date on which such employee takes the COVID-19 emergency paid sick leave, including hours for which such employee took leave of any type; or (B) if the employee did not work over such 6-month period, is equal to the reasonable expectation of the employee at the time of hiring of the average number of hours per week that the employee would normally be scheduled to work.
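The entitlement rules in paragraph (3) amount to a small calculation. The sketch below is an illustrative aid only, not part of the act; the function and parameter names are invented for clarity, and it simplifies edge cases (for example, it takes the regular-schedule average and the 6-month history as given rather than deriving them from payroll records):

```python
def emergency_leave_hours(hours_per_week=None, weekly_hours_6mo=None,
                          expected_hours_at_hire=None):
    """Hypothetical illustration of the leave amounts in paragraph (3).

    hours_per_week: average weekly hours for an employee with a regular schedule.
    weekly_hours_6mo: scheduled weekly hours over the preceding 6 months,
        for an employee whose hours vary from week to week.
    expected_hours_at_hire: fallback when no 6-month work history exists.
    """
    if hours_per_week is not None:
        # Clauses (i)/(ii): 40 or more hours caps the leave at 40 hours;
        # a regular part-time schedule yields its average weekly hours.
        return min(hours_per_week, 40)
    if weekly_hours_6mo:
        # Clause (iii)(A): average scheduled hours over the prior 6 months.
        return sum(weekly_hours_6mo) / len(weekly_hours_6mo)
    # Clause (iii)(B): reasonable expectation at the time of hiring.
    return expected_hours_at_hire
```

For example, under this sketch a 50-hour-per-week employee would be entitled to 40 hours of leave, while a variable-schedule employee averaging 30 scheduled hours over the prior 6 months would be entitled to 30 hours.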
(4) An employee eligible for COVID-19 emergency paid sick leave shall be eligible for leave that is compensated by the employer, while maintaining the same employment benefits to which the employee is entitled as a term of employment by an employer to an employee; provided, however, that no employee shall receive, and no employer shall be eligible for reimbursement for such employee, COVID-19 emergency paid sick leave in excess of $850 per week.

(5) An eligible employer who pays an employee for COVID-19 emergency paid sick leave shall be reimbursed by the executive office for administration and finance, or any department or agency thereof, in consultation with the department of revenue, from the COVID-19 Emergency Paid Sick Leave Fund by submitting, in a form prescribed by the executive office for administration and finance, or any department or agency thereof designated by the executive office, an application as provided in paragraph (1) of subsection (e). The executive office, or any department or agency thereof, shall provide such reimbursements directly to eligible employers within 30 business days of the employer submitting the application.

(6) An employee’s COVID-19 emergency paid sick leave shall terminate at the beginning of the employee’s next scheduled work shift immediately following the termination of the need for COVID-19 emergency paid sick leave under subsection (b).

(b) An employer shall provide COVID-19 emergency paid sick leave to an employee for the following reasons related to the outbreak of the 2019 novel coronavirus, also known as COVID-19:

(1) An employee’s need to: (i) self-isolate and care for oneself because of the employee’s COVID-19 diagnosis; (ii) seek or obtain medical diagnosis, care or treatment for COVID-19 symptoms; or (iii) obtain immunization related to COVID-19 or the employee is recovering from an injury, disability, illness or condition related to such immunization;

(2) An employee’s
need to care for a family member who: (i) is self-isolating due to a COVID-19 diagnosis; or (ii) needs medical diagnosis, care or treatment for COVID-19 symptoms;

(3) A quarantine order, or other determination by a local, state or federal public official, a health authority having jurisdiction, the employee’s employer or a health care provider that the employee’s presence on the job or in the community would jeopardize the health of others because of the employee’s exposure to COVID-19 or exhibiting of symptoms, regardless of whether the employee has been diagnosed with COVID-19;

(4) An employee’s need to care for a family member due to a quarantine order, or other determination by a local, state or federal public official, a health authority having jurisdiction, the family member’s employer or a health care provider that the family member’s presence on the job or in the community would jeopardize the health of others because of the family member’s exposure to COVID-19, regardless of whether the family member has been diagnosed with COVID-19; or

(5) An employee’s inability to telework because the employee has been diagnosed with COVID-19 and the symptoms inhibit the ability of the employee to telework.

(c)(1) COVID-19 emergency paid sick leave provided by an employer may be reduced by the amount of wages or wage replacement that an employee receives for that period under any government program or law.
COVID-19 emergency paid sick leave shall not be reduced by and shall be in addition to all job protected time off, paid and unpaid, that the employer is required to provide to employees: (i) under section 148C of chapter 149 of the General Laws; (ii) under any existing policy or program of the employer; (iii) pursuant to a collectively bargained agreement between the employer and a collective bargaining representative of an employee; or (iv) under federal law, to the extent permitted by that federal law; provided, however, said COVID-19 emergency paid sick leave may be reduced if the aggregate amount an employee would receive would exceed the employee’s average weekly wage. An employer shall not require an employee to use other paid leave provided by the employer to the employee before the employee uses the COVID-19 emergency paid sick leave, unless federal law requires otherwise.

(2) An employee may use COVID-19 emergency paid sick leave on an intermittent basis and in hourly increments.

(d) The employee shall provide notice to the employer of the need for COVID-19 emergency paid sick leave as soon as practicable or foreseeable. After the first workday an employee receives COVID-19 emergency paid sick leave, an employer may require the employee to follow reasonable notice procedures in order to continue receiving COVID-19 emergency paid sick leave.
An employer shall not require, as a condition of an employee’s taking COVID-19 emergency paid sick leave, that the employee search for or find a replacement worker to cover the hours during which the employee is using COVID-19 emergency paid sick leave.

(e)(1) Applications for reimbursements from an eligible employer from the COVID-19 Emergency Paid Sick Leave Fund shall be in a form prescribed by the executive office for administration and finance, or any department or agency thereof designated by the executive office, and shall include, but not be limited to, a copy of a written request for COVID-19 emergency paid sick leave from the employee to the employer, in which the employee provides: (i) the employee’s name; (ii) the date or dates for which leave is requested and taken; (iii) a statement of the COVID-19 related reason the employee is requesting leave and written support for such reason; and (iv) a statement that the employee is unable to work, including by means of telework, for such reason.

In the case of a leave request based on a quarantine order or self-quarantine advice, the statement from the employee shall also include: (i) the name of the governmental entity ordering quarantine or the name of the health care provider advising self-quarantine; and (ii) if the person subject to quarantine or advised to self-quarantine is not the employee, that person’s name and relation to the employee.

(2) Health information related to COVID-19 emergency paid sick leave possessed by an employer regarding an employee or employee’s family member shall: (i) be maintained on a separate form and in a separate file from other personnel information; (ii) be treated as confidential medical records; (iii) not be disclosed except to the affected employee or with the express permission of the affected employee; and (iv) be kept confidential in accordance with any other state or federal law.

(f) It shall be unlawful for any employer to interfere
with, restrain or deny an employee’s ability to take COVID-19 emergency paid sick leave, including, but not limited to, using an employee’s taking of COVID-19 emergency paid sick leave as a negative factor in any employment action, such as an evaluation, promotion, disciplinary action or termination, or otherwise subjecting an employee to discipline or taking any other adverse action against an employee for the use of COVID-19 emergency paid sick leave.

(g) It shall be unlawful for any employer to take any adverse action against an employee because the employee opposes practices believed to be in violation of this section, or because the employee supports the exercise of rights of another employee under this section, including, but not limited to: (i) filing an action, or instituting or causing to be instituted any proceeding under or related to this section; (ii) providing or intending to provide any information in connection with any inquiry or proceeding related to this section; or (iii) testifying or intending to testify in any inquiry or proceeding related to this section.

(h) Nothing in this section shall be construed to: (i) discourage employers, including the commonwealth, its departments, sub-divisions or quasi-public agencies or a municipality, district, political subdivision or its instrumentalities from adopting or retaining job-protected paid time off policies that are more generous than policies set out in this section; (ii) diminish or impair the obligation of an employer to comply with any contract, collective bargaining agreement or any employment benefit program or plan in effect on the effective date of this section that provides to employees greater job-protected paid time off rights than the rights established under this section; or (iii) pre-empt the power of a municipality, district, political subdivision or its instrumentalities from adopting or retaining job-protected paid time off policies more generous than policies that
comply with the requirements of this section.

Any employer with a separate COVID-19 sick leave policy who makes available an amount of COVID-19 sick leave sufficient to meet the requirements of sections 14 to 16, inclusive, that may be used for the same purposes and under the same conditions as COVID-19 emergency paid sick leave under said sections 14 to 16, inclusive, shall not be required to provide additional COVID-19 emergency paid sick leave under said sections 14 to 16, inclusive.

(i) Not later than 7 days after the effective date of this section, the executive office of labor and workforce development, in consultation with the executive office for administration and finance, shall prepare and provide to employers notice of this section in English and in other languages required under clause (iii) of subsection (d) of section 62A of chapter 151A of the General Laws. Employers shall post this notice in a conspicuous location accessible to employees in every establishment where employees with rights under this section work and shall provide a copy to their employees; provided, however, that in cases where the employer does not maintain a physical workplace, or an employee teleworks or performs work through a web-based platform, notification shall be sent via electronic communication or a conspicuous posting in the web-based platform.

(j) The executive office of labor and workforce development, in consultation with the executive office for administration and finance and the executive office of health and human services, shall develop and implement a multilingual outreach program to inform employers, employees and health care providers about the availability of COVID-19 emergency paid sick leave.

(k) The executive office for administration and finance, or any department or agency thereof designated by the executive office, shall issue a report on the COVID-19 emergency paid sick leave program.
The report shall include, but not be limited to: (i) aggregate information on the number of employees who were provided COVID-19 emergency paid sick leave; (ii) the reason employees received COVID-19 emergency paid sick leave; (iii) the average amount paid to employees who were provided COVID-19 emergency paid sick leave; (iv) the average length of COVID-19 emergency paid sick leave; (v) the employers who received reimbursements from the COVID-19 Emergency Paid Sick Leave Fund established in section 14; (vi) the average amount of each reimbursement of the employer; and (vii) the total amount of reimbursements received by each employer. The report shall not include any identifying information of an individual employee. The report shall be filed with the clerks of the house of representatives and the senate and the joint committee on labor and workforce development not later than January 1, 2022.

SECTION 16. COVID-19 emergency paid sick leave shall be available to an employee under section 15 until: (i) money in the COVID-19 Emergency Paid Sick Leave Fund established in section 14 is no longer available; (ii) notification from the executive office for administration and finance, or any department or agency thereof designated by the executive office, to employers that it reasonably anticipates funds will no longer be available for reimbursement; or (iii) September 30, 2021, whichever first occurs.

SECTION 17. The executive office for administration and finance, or any department or agency thereof designated by the executive office, may promulgate regulations necessary for the implementation of sections 13 to 16, inclusive.

SECTION 18.
Words used in this section and sections 19 to 21, inclusive, shall have the same meaning as in section 1 of chapter 151A of the General Laws; provided, that the following words shall, unless the context clearly requires otherwise, have the following meanings:

“Bond”, any type of special obligation bond, including a bond, note, certificate or other instrument, or series thereof, issued by the commonwealth for the purposes set forth under this act.

“Bond administrative expenses”, expenses incurred to issue and administer bonds authorized under this act, or as otherwise necessary to ensure compliance with applicable federal or state law.

“Federal advances”, loans issued by the federal government to the commonwealth for the payment of compensation under Title XII of the federal Social Security Act or other federal law.

SECTION 19. (a) When authorized by a vote taken in the manner provided by section 3 of Article LXII of the Amendments to the Constitution of the Commonwealth, the state treasurer, upon request of the governor, may issue special obligation bonds in 1 or more series and in principal amounts necessary or estimated to be necessary to:

(i) reduce the amount of, or avoid the need to obtain, a federal advance from the federal government;

(ii) repay federal advances made to the commonwealth from the federal unemployment account for the fiscal years 2020 to 2025, inclusive;

(iii) repay prior years’ interest and other related costs on federal advances for the fiscal years 2020 to 2025, inclusive, to the extent not paid pursuant to section 14J1/2 of chapter 151A of the General Laws;

(iv) fund any reserve account, costs of issuance, capitalized interest, if any, and the initial bond administrative expenses; and

(v) refund outstanding bonds or notes secured by the Special Contribution Unemployment Compensation Trust Fund established by section 21.

(b) The bonds authorized pursuant to this section may be
issued by the state treasurer upon a request by the governor and shall state the amount required for the purposes pursuant to subsection (a) and the date or dates upon which such funds are required, and such other matters as the secretary of labor and workforce development and the secretary of administration and finance shall determine as appropriate under such request, consistent with carrying out the purposes of this section. Such request may be filed with the state treasurer only after the secretary of labor and workforce development and the secretary of administration and finance send a letter to the governor recommending the issuance of revenue bonds.

(c) Any such bonds shall be special obligations of the commonwealth payable solely from monies credited to the Special Contribution Unemployment Compensation Trust Fund established in section 21; provided, however, that notwithstanding any general or special law to the contrary, such bonds shall not be general obligations of the commonwealth. Bonds may be issued in such manner and on such terms and conditions as the state treasurer may determine in accordance with this subsection and, to the extent not inconsistent with this subsection, the General Laws for the issuance of bonds of the commonwealth. Bonds may be secured by a trust agreement entered into by the state treasurer, with the concurrence of the secretary of labor and workforce development and the secretary of administration and finance, on behalf of the commonwealth, and the trust agreement may pledge or assign all or any part of the amounts on deposit in the Special Contribution Unemployment Compensation Trust Fund and rights to receive the same, whether existing or coming into existence and whether held or thereafter acquired, and the proceeds thereof.
The state treasurer may, with the concurrence of the secretary of labor and workforce development and the secretary of administration and finance, enter into additional security, insurance or other forms of credit enhancement, which may be secured on a parity or subordinate basis with the bonds. A pledge in any such trust agreement or credit enhancement agreement shall be valid and binding from the time such pledge shall be made without any physical delivery or further act, and the lien of such pledge shall be valid and binding against all parties having claims of any kind in tort, contract or otherwise, whether such parties have notice thereof or not. Any such pledge shall be perfected by filing of the trust agreement or credit enhancement agreement in the records of the state treasurer and no filing shall be required under chapter 106 of the General Laws. Any such trust agreement or credit enhancement agreement may establish provisions defining defaults and establishing remedies and other matters relating to the rights and security of the holders of the bonds or other secured parties as determined by the state treasurer, including provisions relating to the establishment of reserves, the issuance of additional or refunding bonds, whether or not secured on a parity basis, the application of receipts, monies or funds pledged pursuant to such agreement, the regulation of the custody, investment and application of monies and such other matters deemed necessary or desirable by the state treasurer for the security of such bonds.

(d) The state treasurer may also provide for issuance of temporary notes in anticipation of bonds, grants, revenues or appropriations. The issuance of the notes shall be governed by this section relating to the issuance of bonds. The state treasurer may also issue refunding bonds for the purpose of paying any bonds at or before maturity, as provided for and permitted by the terms of a trust agreement.
The principal amount of bonds for the payment or redemption of which, either at or before maturity, refunding bonds shall have been issued, shall be excluded from the aggregate principal amount of bonds issued under this chapter for purposes of computing the limit on outstanding bonds under this section.

(e) Bonds and notes issued by the commonwealth, their transfer and income therefrom, including any profit made on the sale thereof, shall at all times be free from taxation within the commonwealth. In connection with the issuance of bonds and notes of the commonwealth which are intended to qualify for tax exemption under the federal Internal Revenue Code of 1986, as amended, and to induce the purchase of such bonds and notes, the state treasurer may covenant on behalf of the commonwealth with the purchasers or with the holders from time to time of such bonds or notes or with a trustee or trustees for the benefit of such holders with respect to compliance with the requirements of said Internal Revenue Code relative to such tax exemption, including without limitation compliance with provisions relating to the use of proceeds by private parties, the investment of proceeds and the payment of rebate, so-called, to the federal government.
Any such covenant may appear on the bonds or notes or may be included in a separate trust agreement.

(f) In order to increase the marketability of any such bonds or notes issued by the commonwealth, the commonwealth covenants with the purchasers and all subsequent owners and transferees of bonds and notes issued by the state treasurer pursuant to this section in consideration of the acceptance of the payment for the bonds and notes, until such bonds and notes, together with the interest thereon, with interest on any unpaid installment of interest and all costs and expenses in connection with any action or proceeding on behalf of such owners, are duly met and discharged or unless expressly permitted or otherwise authorized by the term of each contract and agreement made or entered into by or on behalf of the commonwealth with or for the benefit of such owners: (i) no pledged funds shall be diverted from the Special Contribution Unemployment Compensation Trust Fund; and (ii) so long as the sums are necessary, as determined by the state treasurer in accordance with any applicable trust or security agreement or credit enhancement agreement or insurance policy related to bonds or notes issued by the state treasurer, for the purposes for which they have been pledged, notwithstanding any general or special law to the contrary, the commonwealth will impose, charge, raise, levy, collect and apply the assessment set forth in section 20 and other revenues, receipts, funds or moneys pledged in an amount sufficient to pay all principal or redemption premium of and interest on the bonds and notes and any other obligation due relating to such bonds and notes and comply with the covenants set forth in the trust agreement providing for such bonds and notes.

SECTION 20.
(a) For any year in which bonds or notes issued pursuant to section 19 are outstanding, an employer entitled to an experience rate under section 14 of chapter 151A of the General Laws shall be subject to, shall be assessed and shall pay an unemployment obligation assessment.

(b) Annually, the commissioner shall set the unemployment obligation assessment rate at an amount sufficient to ensure timely payment of all of the following:

(i) principal, interest and any redemption premium on the bonds or notes;

(ii) administrative expenses, credit enhancement fees and other fees, if any, in connection with issuing the bonds or notes;

(iii) all other amounts required to be maintained and paid under the terms of applicable trust agreements or credit enhancement agreements; and

(iv) amounts necessary to establish the ratings on the obligations that are assigned by a nationally recognized rating service at a level determined by the treasurer in the state treasurer’s sole discretion.

(c) The rate shall be based on a formula prescribed by rules set forth by the commissioner, using the employer’s experience rate. The unemployment obligation assessment rate shall apply to the same wage base to which the employer’s unemployment tax applies for the applicable period.

(d) Not less than 30 days following the annual setting of the unemployment obligation assessment rate, the commissioner shall provide written notice to the house and senate committees on ways and means and the joint committee on labor and workforce development.
The notice shall include, but not be limited to: (i) the assessment rate; (ii) a description of the formula on which the assessment rate was based; and (iii) the amounts of any outstanding payments associated with bonds issued pursuant to section 19, including the amounts described in clauses (i) to (iv), inclusive, of subsection (b).

(e) The unemployment obligation assessment shall be due at the same time, collected in the same manner and subject to the same penalties and interest as other contributions assessed under said section 14 of said chapter 151A.

(f) The unemployment obligation assessment shall be credited to the Special Contribution Unemployment Compensation Trust Fund established pursuant to section 21. Receipts from the assessment shall not be subject to the allowable state tax revenue limitations established by chapter 62F of the General Laws.

SECTION 21. (a) There is hereby established on the books of the commonwealth a fund to be known as the Special Contribution Unemployment Compensation Trust Fund.
Said fund shall be administered by the secretary of labor and workforce development, with the approval of the secretary of administration and finance.

(b) All costs related to the organization, establishment and operation of the fund and all costs related to the establishment of billing, payment and collection procedures for amounts received from employers in payment of the unemployment obligation assessment established by section 20, to the extent not payable under the trust agreement for bonds issued under section 19, may be paid from other amounts available under chapter 151A of the General Laws when made available thereunder for such purpose.

(c) Amounts in the fund shall be held by the secretary of labor and workforce development or the secretary’s designee, as trustee and not on account of the commonwealth, exclusively for the purposes set forth in section 19, and the secretary of labor and workforce development shall disburse amounts in the fund to a trustee under a trust agreement as set forth in said section 19, without further appropriation.
All amounts in the fund, including investment earnings, shall be available for expenditure for any lawful purpose, including without limitation payment of debt service on bonds or notes issued by the state treasurer, and may be pledged to secure special obligation bonds in such manner and according to such priority as set forth in said section 19 or a trust agreement established for such purpose.

(d) In order to increase the marketability of any bonds or notes of the trust which may be secured by or payable from amounts held in the fund, the sums to be credited to the fund are hereby impressed with a trust for the benefit of the trust and the holders from time to time of the bonds or notes, and in consideration of the acceptance of payment for the bonds or notes, the commonwealth covenants with the purchasers and all subsequent holders and transferees of the bonds or notes that while the bond or note shall remain outstanding, and so long as the principal of or interest on the bond or note shall remain unpaid, the sums to be credited to the fund shall not be diverted from the control of the trust and, so long as the sums are necessary, as determined by the state treasurer in accordance with any applicable trust or security agreement or credit enhancement agreement or insurance policy related to bonds or notes issued by the state treasurer, for the purposes for which they have been pledged, notwithstanding any general or special law to the contrary, the commonwealth will impose, charge, raise, levy, collect and apply the unemployment obligation assessment set forth in section 20 and other revenues, receipts, funds or moneys pledged in an amount sufficient to pay all principal or redemption premium of and interest on the bonds and notes and any other obligation due relating to such bonds and notes and comply with the covenants set forth in the trust agreement providing for such bonds and notes.

SECTION 22.
Not later than 10 days after the effective date of this act, the secretary of administration and finance shall direct the comptroller to transfer $75,000,000 from federal funds received by the commonwealth in response to the public health emergency caused by COVID-19, if any, available and consistent with federal funding requirements to the COVID-19 Emergency Paid Sick Leave Fund establ ished in section 14 provided, however, that if the secretary of administration and finance certifies to the comptroller that no such funds are available, the comptroller shall transfer$75,000,000 from the General Fund to the COVID-19 Emergency Paid Sick Leave Fund.\n\nSECTION 23. To meet the expenditures necessary in carrying out section 2, the state treasurer shall, upon request of the governor, issue and sell bonds of the commonwealth in an amount to be specified by the governor from time to time but not exceeding, in an aggregate principal amount, $7,000,000,000. All such bonds issued by the commonwealth shall be designated on their face, the Unemployment Insurance Trust Fund Solvency Act of 2021, and shall be issued for a maximum term of years, not excee ding 20 years, as the governor may recommend to the general court under section 3 of Article LXII of the Amendments to the Constitution of the Commonwealth. All such bonds shall be payable not later than June 30, 2046. All interest and payments on account of principal on these bonds and notes shall be payable from the Special Contribution Unemployment Compensation Trust Fund established pursuant to section 21. Bonds and interest thereon issued under this section shall, notwithstanding any provision of the G eneral Laws or this act, be special obligations of the commonwealth payable solely in accordance with the provisions of said section 21. 
Notwithstanding any general or special law to the contrary, bonds and notes issued under this act and interest thereon shall not be included in the computation of outstanding bonds for purposes of the limit imposed by the second paragraph of section 60A of chapter 29 of the General Laws, nor shall debt service with respect to these bonds and notes be included in the comput ation of the limit imposed by section 60B of said chapter 29. SECTION 24. The department of family and medical leave shall conduct an analysis on the expansion of the family and medical leave program established by chapter 175M of the General Laws to pro vide coverage for future communicable illnesses related to a public health emergency. Such analysis shall include, but not be limited to: (i) an examination of the costs and benefits of providing coverage under such program, including, but not limited to, public health and economic benefits (ii) the impact of providing benefits under such program on other safety net programs used during the COVID-19 pandemic to provide financial assistance to employees, including, but not limited to, unemployment insurance and (iii) the potential impact of providing coverage for communicable illnesses related to a public health emergency on contributions to the Family and Employment Security Trust Fund established in section 7 of said chapter 175M. The department shall iss ue a report with its findings, including any legislative recommendations, if any, to the clerks of the house and the senate and the joint committee on labor and workforce development not later than December 31, 2022. SECTION 25. (a) There shall be a speci al commission established pursuant to section 2A of chapter 4 of the General Laws to study and develop recommendations on the solvency of the unemployment trust fund established in section 14F of chapter 151A of the General Laws. 
The commission shall consi st of the following 21 members: the chairs of the joint committee on labor and workforce development, who shall serve as co-chairs 1 member appointed by the minority leader of the house of representatives 1 member appointed by the minority leader of the senate the secretary of labor and workforce development or a designee the director of unemployment assistance or a designee 1 member appointed by the Massachusetts State Labor Council, AFL- CIO 1 member appointed by the Associated Industries of Massachu setts, Inc. 1 member appointed by the Massachusetts Legal Assistance Corporation representing unemployed workers 1 member appointed by the Alliance for Business Leadership, Inc. 1 member appointed by the National Federation of Independent Business Massa chusetts 1 member appointed by the Union of Minority Neighborhoods, Inc. 1 member appointed by the Massachusetts Restaurant Association, Inc. 1 member appointed by the Black Economic Council of Massachusetts, Inc. 1 member appointed by the Greater Bost on Chamber of Commerce 1 member appointed by the Massachusetts Building Trades Council 1 member appointed by the Massachusetts Competitive Partnership 1 member appointed by Greater Boston Legal Services Employment Unit 1 member appointed by the Massach usetts Taxpayers Foundation, Inc. 1 member appointed by the Tufts University Jonathan M. Tisch College of Civic Life Center for State Policy Analysis and 1 member appointed by the Retailers Association of Massachusetts, Inc. 
(b) The commission shall stu dy the long-term solvency of the unemployment trust fund, including, but not limited to: (i) evaluating whether changes are necessary to the experience rating system in order to promote solvency and reduce the tax impact on small businesses (ii) examining increasing or indexing the taxable wage base under section 14 of said chapter 151A (iii) examining the industry specific impacts of changes to the unemployment tax rate (iv) reviewing solvency efforts in other state unemployment tax systems and (v) det ermining what changes are necessary to benefit from federal tax credits and federal interest-free borrowing under the Federal Unemployment Tax Act, 26 U.S.C. \u00a7\u00a7 3301-3305. The report by the commission shall include recommendations to promote the long-term solvency of the trust fund and meet solvency criteria required by the United States Department of Labor under the Federal Unemployment Tax Act, 26 U.S.C. \u00a7 3301-3305, and the Social Security Act, 42 U.S.C. \u00a7\u00a7 1321-1324 and applicable regulations and guidan ce. (c) The commission shall hold at least 1 public hearing and may hold additional hearings as necessary at which members of the public shall have an opportunity to speak. (d) Not later than December 15, 2021, the commission shall file a report on its f indings and recommendations with the clerks of the house of representatives and the senate, the joint committee on labor and workforce development and the house and senate committees on ways and means. SECTION 26. 
(a) As used in this section, \u201cunemploymen t compensation\u201d, shall, unless the context clearly requires otherwise, mean unemployment compensation as defined under section 85 of the federal Internal Revenue Code, including, but not limited to, benefits received under chapter 151A of the General Laws, or other unemployment compensation authorized by federal law including, but not limited to, the federal Federal-State Extended Unemployment Compensation Act of 1970, the federal Coronavirus Aid, Relief and Economic Security Act of 2020, the federal Contin ued Assistance for Unemployed Workers Act of 2020, the federal Lost Wages Assistance program or any amendments to those acts. (b) Notwithstanding any general or special law to the contrary, for taxable years beginning on January 1, 2020 and January 1, 2021, any amount, up to$10,200, of unemployment compensation that is included in a taxpayer\u2019s federal gross income, as defined i n section 1 of chapter 62 of the General Laws, shall be deducted from said federal gross income for the purpose of determining Massachusetts gross income under section 2 of chapter 62 of the General Laws if the taxpayer\u2019s household income is not more than 200 per cent of the federal poverty level as calculated by the United States Department of Health and Human Services. 
For the purpose of this subsection, \u201chousehold income\u201d shall be determined without regard to this section.\n\n(c) The department of unemploy ment assistance, in conjunction with the department of revenue, shall establish a public information and education campaign to notify taxpayers of the income exclusion for unemployment compensation for tax years 2020 and 2021 established by subsection (b) and the tax penalty relief provided in section 8 provided, however, that the campaign shall include: (i) a multilingual notice of the availability of such unemployment compensation exclusion (ii) a description of, and the eligibility criteria for, the ex clusion under said subsection (b) and (iii) targeted and direct outreach to individuals who have received, or are receiving, unemployment compensation. The department of unemployment assistance and the department of revenue shall publish such information on their respective websites in a conspicuous manner and location and shall be available in multiple languages as determined by the department of unemployment assistance.\n\nSECTION 27. Section 3 is hereby repealed.\n\nSECTION 28. Section 8 is hereby repealed.\n\nSECTION 29. Sections 13 to 17, inclusive, shall take effect 10 days after the effective date of this act.\n\n## SHARON Andrews\n\nDr Sharon Andrews is a Senior Lecturer in Nursing in the School, University of Tasmania. Her research and teaching interests span ageing, dementia, palliative care, pain management, evidence-base nursing, participatory action research, critical social theory and behaviour change theory. She is a Registered Nurse, experienced researcher and academic (RN, BN, Hons, PhD). She has 20 years\u2019 clinical experience of working in the aged care sector and 15 years\u2019 experience in the design and conduct of participatory and translational research. 
Sharon is a National Health and Medical Research Council, Translating Research into Practice Fellow and a Research Fellow with the Wicking Dementia Research and Education Centre UTAS. Dr Andrews has collaborated on research projects totaling over A \\$2.7million and has and has an international reputation for research into evidence-based care for people with dementia, across areas of pain and palliative care. Dr Andrews has 20 peer reviewed\npublications, in addition to knowledge translation tools.\n\nStory-Telling Competition: A YEP Asia Pacific Region Project\nChair: Datin Jacqueline Wong, demensia Brunei (Brunei)\nCommittee: DY Suharya, Regional Director, Asia Pacific \u2013 ADI\nChris Humphrey, A + E Networks (UK\/Singapore) \u2013 Project Lead\nDr Sharon Andrews, University Tasmania (Australia)\nOwen McNeir, Remarkable Lives (UK)\nDishen Kumar, Health Matters \u2013 astroAWANI (Malaysia)\n\nWe have made it easy for you to find a PDF Ebooks without any digging. And by having access to our ebooks online or by storing it on your computer, you have convenient answers with Prentice Hall Writing And Grammar Grade 8 Answer Key . To get started finding Prentice Hall Writing And Grammar Grade 8 Answer Key , you are right to find our website which has a comprehensive collection of manuals listed.\nOur library is the biggest of these that have literally hundreds of thousands of different products represented.\n\nFinally I get this ebook, thanks for all these Prentice Hall Writing And Grammar Grade 8 Answer Key I can get now!\n\nI did not think that this would work, my best friend showed me this website, and it does! I get my most wanted eBook\n\nMy friends are so mad that they do not know how I have all the high quality ebook which they do not!\n\nIt's very easy to get quality ebooks )\n\nso many fake sites. this is the first one which worked! 
Many thanks\n\nwtffff i do not understand this!\n\n## DY Suharya\n\nRegional Director,\n\nDY is the Regional Director of Alzheimer\u2019s Disease International Asia Pacific Region and Founder of Alzheimer Indonesia, an NGO that works toward greater Dementia Alzheimer awareness and risk reduction in Indonesia. DY holds a Master of Public Health (MPH) from Curtin University Perth, Australia, a Bachelor of Arts in Communication from Ohio State University, USA and a Diploma in English Literature from the University of Indonesia.\n\nDY, have more than 20 years of experience in public health, public private partnerships and communication, in the past, DY has worked as a Health Communication Consultant with the World Bank, WHO and UNICEF. In addition to her role in Alzheimer Indonesia, DY has also been working closely with the World Health Organization, Indonesia\u2019s Ministry of Health and have facilitated one of the working group at the NCD Alliance Regional Meeting \u2013 Strengthening the NCD Civil Society Movement in South East Asia Region in New Delhi, July 2015. DY\u2019s mother was diagnosed with Vascular Dementia in 2009 and has been her source of inspiration in improving quality life of people with dementia, caregivers and inter-generations through the establishment of Alzheimer\u2019s Indonesia since 2013. Her mother passed away in April 2017 but her legacy continues.\n\n### Saturday, 21 March '09, 2:00pm At Rumah Alzheimer's PJ - Awareness Talk \"Dementia\/Alzheimer's Disease\" By Dr Yau Weng Keong, Consultant Physician & Geriatrician of Hospital Kuala Lumpur\n\nIn conjunction with our monthly gathering in March, the KL-PJ Alzheimer&rsquos Caregivers Support Group Committee is organizing an Awareness Talk on &ldquoDementia\/Alzheimer&rsquos Disease&rsquo by Dr Yau Weng Keong from the ADFM Medical Panel.\n\nDay\/Date: Saturday, 21 March 2009\nTime: 2:00pm\nVenue: Rumah Alzheimer&rsquos (PJ Day Care Centre), No. 
6, Lorong 11\/8E, Section 11, 46200 Petaling Jaya.\n\nDR YAU WENG KEONG , a Consultant Physician & Geriatrician, obtained his basic medical degree from USM in 1990 and MRCP (UK) in 1996. After obtaining his MRCP, he specialized in Geriatric Medicine in Stobhill, Glasgow, UK..\n\nSince then, Dr Yau had been attached to the Aged and Palliative Care in Adelaide (1997), Geriatric Department of Tan Tock Seng Hospital (2000), Dover Park Hospice and Hospice Care Association in Singapore (2000) and worked as a Physician and Geriatrician since 1997 and heading the only Geriatric Unit in MOH.\n\nDr Yau is currently a Consultant Physician and Geriatrician with the Geriatric Unit, Department of Medicine, Hospital Kuala Lumpur. He started the Post Basic Gerontology Nursing and was instrumental in helping UPM to set up their &ldquoHealth Care for the Elderly&rdquo Undergraduate Programme. In the MOH, he is actively involved in the health programmes for elderly people, conducting regular CME and workshops pertaining to elderly and dementia care Public Health lectures in UKM and the Memory Clinic in Seremban once a week,\n\nFor Refreshments purpose and confirm your attendance, kindly call:\nADFM Secretariat, Katherine\/Janet at Tel: 603 &ndash 7956 2008 \/ 7958 3008 OR Email to: [email\u00a0protected]\n\nOR you can register online with the ADFM National Alzheimer&rsquos Caregivers Online Network (NACON) at: [email\u00a0protected]\n\n\"See all of you on 21 March \u0f45 at 2:00pm, Rumah Alzheimer's, PJ Day Care Centre\"","date":"2022-05-28 14:03:47","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, 
\"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2500188946723938, \"perplexity\": 5987.194507128068}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652663016853.88\/warc\/CC-MAIN-20220528123744-20220528153744-00732.warc.gz\"}"}
null
null
\section{Introduction} Prioritary sheaves on \proj{2} were introduced by A. Hirschowitz and Y. Laszlo in \cite{hi_la}. Recall that a coherent sheaf ${\cal E}$ on \proj{2} is called {\em prioritary} if it is torsion-free and if \ \m{\mathop{\rm Ext}\nolimits^2({\cal E},{\cal E}(-1))=0}. For example, sheaves that are semi-stable in the sense of Gieseker and Maruyama are prioritary. We study here the precise structure of the generic prioritary sheaf of rank $r$ and Chern classes \m{c_1}, \m{c_2} when there exists no semi-stable sheaf with the same rank and Chern classes. By \cite{hi_la}, the {\em stack} of prioritary sheaves is smooth and irreducible. The existence conditions for prioritary sheaves are as follows: set $$\mu \ = \ \q{c_1}{r}, \ \ \ \Delta \ = \ \q{1}{r}(c_2 - \q{r-1}{2r}c_1^2)$$ (if ${\cal E}$ is a coherent sheaf on \proj{2} of rank $r$ and Chern classes \m{c_1}, \m{c_2}, one calls $\mu=\mu({\cal E})$ the {\em slope} of ${\cal E}$ and $\Delta=\Delta({\cal E})$ the {\em discriminant} of ${\cal E}$). Then, if \ \m{-1\leq\mu\leq 0}, there exists a prioritary sheaf of slope $\mu$ and discriminant $\Delta$ if and only if $$\Delta \ \geq \ - \q{\mu(\mu+1)}{2}.$$ The existence conditions for semi-stable sheaves on \proj{2} are recalled below. One can see that there are many triples \m{(r,c_1,c_2)} for which a prioritary sheaf of rank $r$ and Chern classes \m{c_1}, \m{c_2} exists but no semi-stable sheaf with the same invariants does. The existence conditions for semi-stable sheaves on \proj{2} (cf. \cite{dr_lp}) depend only on the two variables $\mu$ and $\Delta$. One shows that there exists a unique function $\delta(\mu)$ such that the moduli space \m{M(r,c_1,c_2)} of semi-stable sheaves satisfies \ \m{\dim(M(r,c_1,c_2)) > 0} \ if and only if \ \m{\Delta\geq\delta(\mu)}. The function \m{\delta(\mu)} is described in terms of {\it exceptional bundles}.
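As a concrete illustration of these invariants (an example added here), consider the rank $2$ bundle $Q$ on \proj{2}, the quotient in the exact sequence \ \m{0\longrightarrow{\cal O}(-1)\longrightarrow{\cal O}\otimes H^0({\cal O}(1))^*\longrightarrow Q\longrightarrow 0} \ (this bundle reappears later in the paper). One has \m{c_1(Q)=1}, \m{c_2(Q)=1}, hence $$\mu(Q) \ = \ \q{1}{2}, \ \ \ \Delta(Q) \ = \ \q{1}{2}\bigl(1 - \q{1}{4}\bigr) \ = \ \q{3}{8}.$$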
A coherent sheaf ${\cal E}$ on \proj{2} is called {\it exceptional} if ${\cal E}$ is {\it simple} (that is, if its only endomorphisms are the homotheties) and if $$\mathop{\rm Ext}\nolimits^1({\cal E},{\cal E}) \ = \ \mathop{\rm Ext}\nolimits^2({\cal E},{\cal E}) \ = \ \lbrace 0\rbrace.$$ Such a sheaf is then locally free and stable, and the corresponding moduli space of semi-stable sheaves consists of the single point ${\cal E}$. There are countably infinitely many exceptional bundles, and a simple procedure produces all of them from the line bundles (cf. \cite{dr1}). Note that an exceptional bundle is uniquely determined by its slope. Let $F$ be an exceptional bundle. Denote by \m{x_F} the smallest solution of the equation $$X^2-3X+\q{1}{rg(F)^2} \ = \ 0.$$ One then shows that the intervals \ \m{\rbrack\mu(F)-x_F,\mu(F)+x_F\lbrack} \ form a partition of the set of rational numbers. We now describe the function \m{\delta(\mu)} on such an interval. Set $$P(X) = \q{X^2}{2}+\q{3}{2}X+1.$$ On the interval \ \m{\rbrack\mu(F)-x_F,\mu(F)\rbrack}, we have $$\delta(\mu) \ = \ P(\mu-\mu(F))-\q{1}{2}(1-\q{1}{rg(F)^2}),$$ and on \ \m{\lbrack\mu(F),\mu(F)+x_F\lbrack}, we have $$\delta(\mu) \ = \ P(\mu(F)-\mu)-\q{1}{2}(1-\q{1}{rg(F)^2}).$$ This yields the curves $D(F)$ and $G(F)$ shown in the figure below; they are segments of conics. We now consider the curve \ \m{\Delta=\delta'(\mu)} \ defined as follows: on the interval \ \m{\rbrack\mu(F)-x_F,\mu(F)+x_F\lbrack}, we have $$\delta'(\mu) = \delta(\mu) - \q{1}{rg(F)^2}(1-\q{1}{x_F}\mid\mu(F)-\mu\mid).$$ This yields the conic segments $D'(F)$ and $G'(F)$. The point \m{(\mu(F),\delta'(\mu(F)))} is the pair \m{(\mu,\Delta)} corresponding to the exceptional bundle $F$. The point \m{(\mu(F),\delta(\mu(F)))} is the reflection of $F$ across the line \ \m{\Delta=1/2}.
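As a worked example of these formulas (added here for illustration): for \m{F={\cal O}} one has \m{rg(F)=1} and \m{\Delta({\cal O})=0}, the equation becomes \m{X^2-3X+1=0}, so $$x_{\cal O} \ = \ \q{3-\sqrt{5}}{2},$$ and on the interval \m{\rbrack -x_{\cal O},x_{\cal O}\lbrack} around \m{\mu=0} one finds \m{\delta(0)=P(0)=1} and \m{\delta'(0)=\delta(0)-1=0=\Delta({\cal O})}. Thus the curve \m{\Delta=\delta'(\mu)} passes through the point corresponding to ${\cal O}$, and \m{(0,\delta(0))=(0,1)} is indeed the reflection of \m{(0,0)} across the line \m{\Delta=1/2}.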
Note that if $\mu$ is a rational number that is not the slope of an exceptional bundle, then $\delta'(\mu)$ is irrational. These curves are shown below on the interval \ \m{\rbrack\mu(F)-x_F,\mu(F)+x_F\lbrack}: \vfill\eject \setlength{\unitlength}{0.012500in}% \begin{picture}(410,565)(200,235) \thicklines \multiput(400,800)(0.00000,-7.98561){70}{\line( 0,-1){ 3.993}} \multiput(610,520)(-7.96117,0.00000){52}{\line(-1, 0){ 3.981}} \put(400,760){\line(-2,-3){160}} \put(400,760){\line( 2,-3){160}} \put(560,520){\line(-2,-3){160}} \put(400,280){\line(-2, 3){160}} \multiput(240,520)(0.00000,-8.00000){33}{\line( 0,-1){ 4.000}} \multiput(560,520)(0.00000,-8.00000){33}{\line( 0,-1){ 4.000}} \put(285,630){\makebox(0,0)[lb]{\smash{$G(F)$}}} \put(495,630){\makebox(0,0)[lb]{\smash{$D(F)$}}} \put(280,395){\makebox(0,0)[lb]{\smash{$G'(F)$}}} \put(485,395){\makebox(0,0)[lb]{\smash{$ D'(F)$}}} \put(410,275){\makebox(0,0)[lb]{\smash{$F$}}} \put(410,760){\makebox(0,0)[lb]{\smash{$P$}}} \put(570,530){\makebox(0,0)[lb]{\smash{ $\Delta=1/2$}}} \put(400,235){\makebox(0,0)[lb]{\smash{$\mu=\mu(F)$}}} \put(215,245){\makebox(0,0)[lb]{\smash{$\mu=\mu(F)-x_F$}}} \put(535,245){\makebox(0,0)[lb]{\smash{$\mu=\mu(F)+x_F$}}} \end{picture} \bigskip \bigskip \bigskip For every point $x$ of \proj{2}, let \m{{\cal I}_x} denote the ideal sheaf of the point $x$. We have $$\mathop{\rm Ext}\nolimits^1({\cal I}_x,{\cal O})\simeq\cx{}.$$ Let \m{{\cal V}_x} be the unique non-trivial extension of \m{{\cal I}_x} by ${\cal O}$. We shall prove the \vfill\eject \noindent{\bf Theorem A: }{\em Let $r$, \m{c_1}, \m{c_2} be integers, with \m{r\geq 1}, \m{-1<\mu\leq 0}, $$\Delta \ \geq \ -\q{\mu(\mu+1)}{2},$$ and such that the moduli space \m{M(r,c_1,c_2)} is empty.
\medskip \noindent 1 - If \ \m{\Delta < \delta'(\mu)}, there exist exceptional bundles $E_0$, $E_1$, $E_2$ and finite-dimensional vector spaces $M_0$, $M_1$, $M_2$, at most one of which may be zero, such that the generic prioritary sheaf of rank $r$ and Chern classes $c_1$, $c_2$ is isomorphic to $$(E_0\otimes M_0)\oplus(E_1\otimes M_1)\oplus(E_2\otimes M_2).$$ \medskip \noindent 2 - Suppose that \m{c_1\not = 0} or \m{c_2>1}. If \ \m{\Delta > \delta'(\mu)}, let $F$ be the unique exceptional bundle such that \ \m{\mu\in \ \rbrack\mu(F)-x_F,\mu(F)+x_F\lbrack}. Then, if \ \m{\mu\leq\mu(F)}, the integer $$p \ = \ r.rg(F)(P(\mu-\mu(F))-\Delta-\Delta(F))$$ is strictly positive, and the generic prioritary sheaf of rank $r$ and Chern classes $c_1$, $c_2$ is isomorphic to a direct sum $$(F\otimes \cx{p})\oplus{\cal E},$$ where ${\cal E}$ is a semi-stable bundle lying on the curve $G(F)$. Similarly, if \ \m{\mu\geq\mu(F)}, the integer $$p \ = \ r.rg(F)(P(\mu(F)-\mu)-\Delta-\Delta(F))$$ is strictly positive, and the generic prioritary sheaf of rank $r$ and Chern classes $c_1$, $c_2$ is isomorphic to a direct sum $$(F\otimes \cx{p})\oplus{\cal E},$$ where ${\cal E}$ is a semi-stable bundle lying on the curve $D(F)$. \medskip \noindent 3 - If \ \m{c_1=0}, \m{c_2=1}, the generic prioritary sheaf of rank $r$ and Chern classes $c_1$, $c_2$ is isomorphic to a direct sum of the form $$({\cal O}\otimes\cx{r-2})\oplus{\cal V}_x.$$} \bigskip The preceding result refines what is proved in \cite{hi_la}, namely that if there is no semi-stable sheaf of rank $r$ and Chern classes \m{c_1}, \m{c_2}, then two cases may occur: the Harder-Narasimhan filtration of the generic prioritary sheaf of rank $r$ and Chern classes \m{c_1}, \m{c_2} has either two terms or three.
In the first case, one of the terms is semi-exceptional (that is, of the form \m{F\otimes\cx{k}} with $F$ exceptional), and in the second case all three terms are semi-exceptional. \bigskip Theorem A also allows one to conclude that if there is no semi-stable sheaf of rank $r$ and Chern classes \m{c_1}, \m{c_2}, then there is no {\em fine moduli space} of sheaves of rank $r$ and Chern classes \m{c_1}, \m{c_2} containing at least one prioritary sheaf either. Here, a {\it fine moduli space} of sheaves of rank $r$ and Chern classes \m{c_1}, \m{c_2} on \proj{2} means the data of a non-empty smooth algebraic variety $M$ and a coherent sheaf ${\cal F}$ on \ \m{M\times\proj{2}} with the following properties: \medskip \noindent (i) The sheaf ${\cal F}$ is flat over $M$, and for every closed point $x$ of $M$, \ \m{{\cal F}_x={\cal F}_{\mid \lbrace x\rbrace\times\proja{{\ttf 2}}}} is a torsion-free sheaf on \proj{2} of rank $r$ and Chern classes \m{c_1}, \m{c_2}. \noindent (ii) For every closed point $x$ of $M$, the sheaf ${\cal F}_x$ is simple, we have \ \m{\mathop{\rm Ext}\nolimits^2({\cal F}_x,{\cal F}_x)=\lbrace 0\rbrace}, and the Kodaira-Spencer infinitesimal deformation morphism $$T_xM\longrightarrow\mathop{\rm Ext}\nolimits^1({\cal F}_x,{\cal F}_x)$$ is surjective. \noindent (iii) For any two distinct closed points $x$ and $y$ of $M$, the sheaves \m{{\cal F}_x} and \m{{\cal F}_y} are not isomorphic. \bigskip For example, if $r$, \m{c_1} and $$\chi \ = \ r - c_2 + \q{c_1(c_1+3)}{2}$$ are coprime, and if there exists a stable sheaf of rank $r$ and Chern classes \m{c_1}, \m{c_2}, then the moduli space of these stable sheaves, equipped with a {\it universal sheaf}, is a fine moduli space.
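To make the coprimality condition concrete (an example added here): take \m{r=2}, \m{c_1=-1}. Then $$\chi \ = \ 2 - c_2 + \q{(-1)\cdot 2}{2} \ = \ 1 - c_2,$$ and $r$, \m{c_1}, $\chi$ are coprime for every value of \m{c_2}, since \m{c_1=-1}; so whenever a stable sheaf with these invariants exists, the corresponding moduli space of stable sheaves is a fine moduli space.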
This suggests the following conjecture: \bigskip \noindent{\bf Conjecture: }{\it The only fine moduli spaces that are projective are the moduli spaces of stable sheaves, in the case where $r$, \m{c_1} and $\chi$ are coprime.} \bigskip Theorem A immediately implies the \bigskip \noindent{\bf Theorem B: }{\em Let $r$, \m{c_1}, \m{c_2} be integers with \ $r\geq 1$. Suppose that the moduli space \m{M(r,c_1,c_2)} of semi-stable sheaves on \proj{2} of rank $r$ and Chern classes \m{c_1}, \m{c_2} is empty. Then there is no fine moduli space of sheaves of rank $r$ and Chern classes \m{c_1}, \m{c_2} containing a prioritary sheaf.} \bigskip It is possible to make part 1 of Theorem A more precise. We recall in \paragra~\hskip -2pt 2 the notion of a {\em triad}, which is a particular triple \m{(E,F,G)} of exceptional bundles. We consider here only triads of exceptional bundles whose slopes lie between $-1$ and $0$. To the triad \m{(E,F,G)} corresponds the {\em triangle} \m{{\cal T}_{(E,F,G)}} in the plane (with coordinates \m{(\mu,\Delta)}), whose sides are segments of parabolas and whose vertices are the points corresponding to $E$, $F$ and $G$. This triangle is defined by the inequalities $$\Delta\leq P(\mu-\mu(G))-\Delta(G), \ \ \Delta\geq P(\mu-\mu(H)+3)-\Delta(H), \ \ \Delta\leq P(\mu-\mu(E)+3)-\Delta(E),$$ where $H$ is the exceptional bundle that is the kernel of the evaluation morphism \ \m{E\otimes\mathop{\rm Hom}\nolimits(E,F)\longrightarrow F}. Let {\bf T} be the set of triads of exceptional bundles whose slopes lie between $-1$ and $0$. Let ${\cal S}$ be the set of points \m{(\mu,\Delta)} of the plane such that $$-1\leq\mu\leq 0, \ \ -\q{\mu(\mu+1)}{2}\leq\Delta\leq\delta'(\mu).$$ We shall prove the \bigskip \bigskip \noindent{\bf Theorem C: }{\em 1 - Let \m{(E,F,G)}, \m{(E',F',G')} be distinct elements of {\bf T}.
Then the triangles \m{{\cal T}_{(E,F,G)}} and \m{{\cal T}_{(E',F',G')}} have non-empty intersection if and only if this intersection is a common vertex or a common side. In the first case the corresponding exceptional bundles are identical, and in the second the corresponding pairs of exceptional bundles are. \medskip \noindent 2 - We have \ \ \ \m{\displaystyle {\cal S}\ = \ \bigcup_{(E,F,G)\in{\bf T}}{\cal T}_{(E,F,G)}}. \medskip \noindent 3 - Let \m{r,c_1,c_2} be integers, with \ \m{r\geq 1}, $$\mu=\q{c_1}{r}, \ \ \Delta=\q{1}{r}(c_2-\q{r-1}{2r}c_1^2).$$ Suppose that \ \m{(\mu,\Delta)\in{\cal T}_{(E,F,G)}}. Let $H$ be the kernel of the evaluation morphism \break \m{E\otimes\mathop{\rm Hom}\nolimits(E,F)\longrightarrow F}. Then $$m \ = \ r.rg(E).(P(\mu-\mu(E)+3)-\Delta(E)),$$ $$n \ = \ r.rg(H).(P(\mu-\mu(H)+3)-\Delta(H)),$$ $$p \ = \ r.rg(G).(P(\mu-\mu(G))-\Delta(G))$$ are non-negative integers, and the generic prioritary bundle of rank $r$ and Chern classes $c_1$, $c_2$ is of the form $$(E\otimes\cx{m})\oplus(F\otimes\cx{n})\oplus(G\otimes\cx{p}).$$ } \bigskip \bigskip \bigskip \noindent{\bf Notations} \medskip Recall that the Riemann-Roch theorem for a coherent sheaf $E$ of positive rank on \proj{2} reads $$\chi(E) \ = \ rg(E).(P(\mu(E))-\Delta(E)),$$ where \m{\chi(E)} denotes the Euler-Poincar\'e characteristic of $E$. If $E$, $F$ are coherent sheaves on \proj{2}, we set $$\chi(E,F) \ = \ \mathop{\hbox{$\displaystyle\sum$}}\limits_{0\leq i\leq 2}(-1)^i\dim(\mathop{\rm Ext}\nolimits^i(E,F)).$$ If \ \m{rg(E)>0} \ and \ \m{rg(F)>0}, we have $$\chi(E,F) \ = \ rg(E).rg(F).(P(\mu(F)-\mu(E))-\Delta(E)-\Delta(F)).$$ In general, for every integer $i$ there is a canonical isomorphism $$\mathop{\rm Ext}\nolimits^i(E,F) \ \simeq \ \mathop{\rm Ext}\nolimits^{2-i}(F,E(-3))^*$$ (Serre duality, cf. \cite{dr_lp}, prop. (1.2)).
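As a quick check of these formulas (a remark added here): for \m{E={\cal O}(d)} one has \m{rg(E)=1}, \m{\mu(E)=d}, \m{\Delta(E)=0}, so Riemann-Roch gives $$\chi({\cal O}(d)) \ = \ P(d) \ = \ \q{(d+1)(d+2)}{2},$$ which for \m{d\geq 0} equals \m{\dim H^0({\cal O}(d))}, as expected. Similarly, for an exceptional bundle \m{E_\alpha} of rank \m{r_\alpha} the condition \m{\chi(E_\alpha,E_\alpha)=1} reads \m{r_\alpha^2(P(0)-2\Delta(E_\alpha))=1}, which yields \m{\Delta(E_\alpha)=\q{1}{2}(1-\q{1}{r_\alpha^2})}, the formula used in the next section.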
\section{Exceptional bundles} \subsection{Construction of the exceptional bundles} The results that follow were proved in \cite{dr_lp} or \cite{dr1}. An exceptional bundle is entirely determined by its slope. Let ${\cal P}$ be the set of slopes of exceptional bundles. If \m{\alpha\in{\cal P}}, we denote by \m{E_\alpha} the exceptional bundle of slope $\alpha$ and by \m{r_\alpha} its rank. One shows that \m{r_\alpha} and \m{c_1(E_\alpha)} are coprime. Let \ \m{\Delta_\alpha=\Delta(E_\alpha)}. Then $$\Delta_\alpha\ = \ \q{1}{2}(1-\q{1}{r_\alpha^2})$$ (which follows from the fact that \ \m{\chi(E_\alpha,E_\alpha)=1}). Let ${\cal D}$ be the set of dyadic rational numbers, that is, those that can be written in the form \m{p/2^q}, with $p$ and $q$ integers, \m{q\geq 0}. There is a bijection $$\epsilon : {\cal D}\longrightarrow{\cal P}.$$ This map is entirely determined by the following properties: \medskip \noindent - For every integer $k$, we have \ \m{\epsilon(k)=k}. \noindent - For every integer $k$ and every \ \m{x\in{\cal D}}, we have \ \m{\epsilon(x+k)=\epsilon(x)+k}. \noindent - For all integers $p$, $q$, with \ \m{q\geq 0}, we have $$\epsilon(\q{2p+1}{2^{q+1}}) \ = \ \epsilon(\q{p}{2^q})\times\epsilon(\q{p+1}{2^q}),$$ where $\times$ is the following composition law: $$\alpha\times\beta\ = \ \q{\alpha+\beta}{2}+\q{\Delta_\beta-\Delta_\alpha} {3+\alpha-\beta}.$$ This relation simply means that $$\chi(E_{\alpha\times\beta},E_\alpha) \ = \ \chi(E_\beta,E_{\alpha\times\beta}) \ = \ 0.$$ \bigskip The construction of the slopes of the exceptional bundles lying between $-1$ and $0$ therefore starts from the slopes $-1$ and $0$, corresponding to the exceptional bundles \m{{\cal O}(-1)} and ${\cal O}$.
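As a first example of this construction (a computation added here): since \m{\Delta_{-1}=\Delta_0=0}, the composition law gives $$\epsilon(-\q{1}{2}) \ = \ (-1)\times 0 \ = \ -\q{1}{2},$$ and the corresponding exceptional bundle is the rank $2$ bundle \m{Q^*} of the initial triad \m{({\cal O}(-1),Q^*,{\cal O})} below, with \m{\Delta_{-1/2}=\q{1}{2}(1-\q{1}{4})=\q{3}{8}}. One step further, one finds \m{\epsilon(-\q{1}{4})=(-\q{1}{2})\times 0=-\q{2}{5}}, the slope of a rank $5$ exceptional bundle with \m{\Delta_{-2/5}=\q{12}{25}}; the orthogonality \m{\chi(E_{-2/5},E_{-1/2})=0} can be checked directly: $$P(-\q{1}{2}+\q{2}{5}) \ = \ P(-\q{1}{10}) \ = \ \q{171}{200} \ = \ \q{3}{8}+\q{12}{25}.$$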
We call {\em triads} the triples of exceptional bundles of the form \noindent\m{({\cal O}(k),{\cal O}(k+1),{\cal O}(k+2))}, \m{(E_\alpha,E_{\alpha\times\beta},E_\beta)}, \m{(E_{\alpha\times\beta},E_{\beta},E_{\alpha+3})} or \m{(E_{\beta-3},E_\alpha,E_{\alpha\times\beta})}, where \m{\alpha} and \m{\beta} are elements of ${\cal P}$ of the form $$\alpha \ = \ \epsilon(\q{p}{2^q}), \ \ \ \ \beta \ = \ \epsilon(\q{p+1}{2^q}),$$ with $p$ and $q$ integers, \m{q\geq 0}. The triads are exactly the {\em helix bases} of \cite{go_ru}. We now give the construction of the triads of exceptional bundles whose slopes lie between \m{-1} and $0$. These triads are of the type \m{(E_\alpha,E_{\alpha\times\beta},E_\beta)}. The construction proceeds by induction as follows: one starts with the triad \m{({\cal O}(-1),Q^*,{\cal O})}, where $Q$ is the exceptional bundle that is the quotient of the canonical morphism \ \m{{\cal O}(-1)\longrightarrow{\cal O}\otimes H^0({\cal O}(1))^*}. Suppose the triad \m{(E,F,G)} has been constructed. One then constructs the {\it adjacent triads} \m{(E,H,F)} and \m{(F,K,G)}. The bundle $H$ is the kernel of the canonical surjective morphism $$F\otimes\mathop{\rm Hom}\nolimits(F,G)\longrightarrow G$$ and $K$ is the cokernel of the canonical injective morphism $$E\longrightarrow F\otimes\mathop{\rm Hom}\nolimits(E,F)^*.$$ Moreover, the canonical morphism $$E\otimes\mathop{\rm Hom}\nolimits(E,H)\longrightarrow H \ \ {\rm \ \ \ (resp. \ } K\longrightarrow G\otimes\mathop{\rm Hom}\nolimits(K,G)^* {\rm \ )}$$ is surjective (resp. injective) and its kernel (resp. cokernel) is isomorphic to \m{G(-3)} (resp. \m{E(3)}).
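Let us carry out the first step of this induction explicitly (a computation added here, using the identifications \m{\mathop{\rm Hom}\nolimits(Q^*,{\cal O})\simeq H^0(Q)\simeq\cx{3}} and \m{\mathop{\rm Hom}\nolimits({\cal O}(-1),Q^*)\simeq\cx{3}}): starting from \m{(E,F,G)=({\cal O}(-1),Q^*,{\cal O})}, the bundle $H$ is the kernel of \m{Q^*\otimes\cx{3}\longrightarrow{\cal O}}, of rank $5$ and first Chern class \m{-3}, hence $$\mu(H) \ = \ -\q{3}{5} \ = \ \epsilon(-\q{3}{4}),$$ and $K$ is the cokernel of \m{{\cal O}(-1)\longrightarrow Q^*\otimes\cx{3}}, of rank $5$ and first Chern class \m{-2}, hence $$\mu(K) \ = \ -\q{2}{5} \ = \ \epsilon(-\q{1}{4}).$$ The adjacent triads are therefore \m{({\cal O}(-1),E_{-3/5},Q^*)} and \m{(Q^*,E_{-2/5},{\cal O})}.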
The possibly nonzero terms \m{E^{p,q}_1} are $$E^{-2,q}_1\simeq H^q({\cal E}\otimes E^*(-3))\otimes E, \ \ E^{-1,q}_1\simeq H^q({\cal E}\otimes S^*)\otimes G, \ \ E^{0,q}_1\simeq H^q({\cal E}\otimes F^*)\otimes F,$$ where $S$ denotes the exceptional bundle arising as the cokernel of the canonical injective morphism \noindent\m{G\longrightarrow F\otimes\mathop{\rm Hom}\nolimits(G,F)}. \subsection{Exceptional series associated with an exceptional bundle} Let $F$ be an exceptional bundle. The triads having $F$ as right-hand term are of the form \m{(G_n,G_{n+1},F)}, where the sequence \m{(G_n)} of exceptional bundles is entirely determined by two of its consecutive terms, for instance \m{G_0} and \m{G_1}, through the exact sequences $$0\longrightarrow G_{n-1}\longrightarrow (G_n\otimes\mathop{\rm Hom}\nolimits(G_{n-1},G_n)^*)\simeq (G_n\otimes\mathop{\rm Hom}\nolimits(G_n,G_{n+1})) \longrightarrow G_{n+1}\longrightarrow 0.$$ We call \m{(G_n)} the left {\it exceptional series} associated with $F$. The pairs \m{(\mu(G_n),\Delta(G_n))} lie on the conic with equation $$\Delta \ = \ P(\mu-\mu(F))-\Delta(F)$$ (which expresses the fact that \ \m{\chi(F,G_n)=0}).
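Both the adjacent-triad construction and the recurrence above are governed by the same numerology: for the bundles of a triad the relevant higher $\mathop{\rm Ext}$ groups vanish, so \m{\dim\mathop{\rm Hom}} is computed by Riemann--Roch, and the exact sequences then determine the rank and first Chern class of each new middle term. A sketch (Python; the formula $\chi(A,B)=r_Ar_B(P(\mu_B-\mu_A)-\Delta_A-\Delta_B)$ with $P(x)=x^2/2+3x/2+1$ and the vanishing of higher $\mathop{\rm Ext}$ are assumed here, not proved in this section):

```python
from fractions import Fraction as Fr

def P(x):
    # P(x) = x^2/2 + 3x/2 + 1, the Riemann-Roch polynomial of the plane
    return x * x / 2 + 3 * x / 2 + 1

def disc(r):
    # discriminant of an exceptional bundle of rank r: (1/2)(1 - 1/r^2)
    return Fr(1, 2) * (1 - Fr(1, r * r))

def chi(a, b):
    # chi(A, B) for exceptional bundles given as pairs (rank, c1)
    (ra, ca), (rb, cb) = a, b
    return ra * rb * (P(Fr(cb, rb) - Fr(ca, ra)) - disc(ra) - disc(rb))

def adjacent_middles(E, F, G):
    """(rank, c1) of H = ker(F (x) Hom(F,G) -> G) and of
    K = coker(E -> F (x) Hom(E,F)*), for a triad (E, F, G)."""
    h = int(chi(F, G))   # = dim Hom(F,G), higher Ext assumed to vanish
    k = int(chi(E, F))   # = dim Hom(E,F)
    H = (h * F[0] - G[0], h * F[1] - G[1])
    K = (k * F[0] - E[0], k * F[1] - E[1])
    return H, K
```

On the initial triad \m{({\cal O}(-1),Q^*,{\cal O})}, that is $E=(1,-1)$, $F=(2,-1)$, $G=(1,0)$, this gives $H=(5,-3)$ and $K=(5,-2)$, of slopes $-3/5$ and $-2/5$.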
\bigskip \bigskip \setlength{\unitlength}{0.240900pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(1500,900)(0,0) \font\gnuplot=cmr10 at 10pt \gnuplot \sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}% \put(176,877){\usebox{\plotpoint}} \multiput(176.58,872.50)(0.493,-1.250){23}{\rule{0.119pt}{1.085pt}} \multiput(175.17,874.75)(13.000,-29.749){2}{\rule{0.400pt}{0.542pt}} \multiput(189.58,840.16)(0.492,-1.358){21}{\rule{0.119pt}{1.167pt}} \multiput(188.17,842.58)(12.000,-29.579){2}{\rule{0.400pt}{0.583pt}} \multiput(201.58,808.63)(0.493,-1.210){23}{\rule{0.119pt}{1.054pt}} \multiput(200.17,810.81)(13.000,-28.813){2}{\rule{0.400pt}{0.527pt}} \multiput(214.58,777.75)(0.493,-1.171){23}{\rule{0.119pt}{1.023pt}} \multiput(213.17,779.88)(13.000,-27.877){2}{\rule{0.400pt}{0.512pt}} \multiput(227.58,747.75)(0.493,-1.171){23}{\rule{0.119pt}{1.023pt}} \multiput(226.17,749.88)(13.000,-27.877){2}{\rule{0.400pt}{0.512pt}} \multiput(240.58,717.57)(0.492,-1.229){21}{\rule{0.119pt}{1.067pt}} \multiput(239.17,719.79)(12.000,-26.786){2}{\rule{0.400pt}{0.533pt}} \multiput(252.58,688.88)(0.493,-1.131){23}{\rule{0.119pt}{0.992pt}} \multiput(251.17,690.94)(13.000,-26.940){2}{\rule{0.400pt}{0.496pt}} \multiput(265.58,660.14)(0.493,-1.052){23}{\rule{0.119pt}{0.931pt}} \put(268,673){$A$} \multiput(264.17,662.07)(13.000,-25.068){2}{\rule{0.400pt}{0.465pt}} \multiput(278.58,633.14)(0.493,-1.052){23}{\rule{0.119pt}{0.931pt}} \multiput(277.17,635.07)(13.000,-25.068){2}{\rule{0.400pt}{0.465pt}} \multiput(291.58,605.85)(0.492,-1.142){21}{\rule{0.119pt}{1.000pt}} \multiput(290.17,607.92)(12.000,-24.924){2}{\rule{0.400pt}{0.500pt}} \multiput(303.58,579.26)(0.493,-1.012){23}{\rule{0.119pt}{0.900pt}} \multiput(302.17,581.13)(13.000,-24.132){2}{\rule{0.400pt}{0.450pt}} \multiput(316.58,553.39)(0.493,-0.972){23}{\rule{0.119pt}{0.869pt}} \multiput(315.17,555.20)(13.000,-23.196){2}{\rule{0.400pt}{0.435pt}} \multiput(329.58,528.26)(0.492,-1.013){21}{\rule{0.119pt}{0.900pt}} 
\multiput(328.17,530.13)(12.000,-22.132){2}{\rule{0.400pt}{0.450pt}} \multiput(341.58,504.52)(0.493,-0.933){23}{\rule{0.119pt}{0.838pt}} \multiput(340.17,506.26)(13.000,-22.260){2}{\rule{0.400pt}{0.419pt}} \multiput(354.58,480.65)(0.493,-0.893){23}{\rule{0.119pt}{0.808pt}} \multiput(353.17,482.32)(13.000,-21.324){2}{\rule{0.400pt}{0.404pt}} \multiput(367.58,457.65)(0.493,-0.893){23}{\rule{0.119pt}{0.808pt}} \multiput(366.17,459.32)(13.000,-21.324){2}{\rule{0.400pt}{0.404pt}} \put(368,460){\circle*{20}} \put(383,460){$G_{n-1}$} \multiput(380.58,434.68)(0.492,-0.884){21}{\rule{0.119pt}{0.800pt}} \multiput(379.17,436.34)(12.000,-19.340){2}{\rule{0.400pt}{0.400pt}} \multiput(392.58,413.90)(0.493,-0.814){23}{\rule{0.119pt}{0.746pt}} \multiput(391.17,415.45)(13.000,-19.451){2}{\rule{0.400pt}{0.373pt}} \multiput(405.58,392.90)(0.493,-0.814){23}{\rule{0.119pt}{0.746pt}} \multiput(404.17,394.45)(13.000,-19.451){2}{\rule{0.400pt}{0.373pt}} \multiput(418.58,372.03)(0.493,-0.774){23}{\rule{0.119pt}{0.715pt}} \multiput(417.17,373.52)(13.000,-18.515){2}{\rule{0.400pt}{0.358pt}} \multiput(431.58,351.96)(0.492,-0.798){21}{\rule{0.119pt}{0.733pt}} \multiput(430.17,353.48)(12.000,-17.478){2}{\rule{0.400pt}{0.367pt}} \multiput(443.58,333.29)(0.493,-0.695){23}{\rule{0.119pt}{0.654pt}} \put(445,333){\circle*{20}} \put(445,333){$G_{n}$} \multiput(442.17,334.64)(13.000,-16.643){2}{\rule{0.400pt}{0.327pt}} \multiput(456.58,315.29)(0.493,-0.695){23}{\rule{0.119pt}{0.654pt}} \multiput(455.17,316.64)(13.000,-16.643){2}{\rule{0.400pt}{0.327pt}} \multiput(469.58,297.23)(0.492,-0.712){21}{\rule{0.119pt}{0.667pt}} \multiput(468.17,298.62)(12.000,-15.616){2}{\rule{0.400pt}{0.333pt}} \multiput(481.58,280.41)(0.493,-0.655){23}{\rule{0.119pt}{0.623pt}} \multiput(480.17,281.71)(13.000,-15.707){2}{\rule{0.400pt}{0.312pt}} \multiput(494.58,263.54)(0.493,-0.616){23}{\rule{0.119pt}{0.592pt}} \multiput(493.17,264.77)(13.000,-14.771){2}{\rule{0.400pt}{0.296pt}} 
\multiput(507.58,247.67)(0.493,-0.576){23}{\rule{0.119pt}{0.562pt}} \multiput(506.17,248.83)(13.000,-13.834){2}{\rule{0.400pt}{0.281pt}} \multiput(520.58,232.65)(0.492,-0.582){21}{\rule{0.119pt}{0.567pt}} \multiput(519.17,233.82)(12.000,-12.824){2}{\rule{0.400pt}{0.283pt}} \multiput(532.58,218.80)(0.493,-0.536){23}{\rule{0.119pt}{0.531pt}} \multiput(531.17,219.90)(13.000,-12.898){2}{\rule{0.400pt}{0.265pt}} \multiput(545.58,204.80)(0.493,-0.536){23}{\rule{0.119pt}{0.531pt}} \multiput(544.17,205.90)(13.000,-12.898){2}{\rule{0.400pt}{0.265pt}} \multiput(558.00,191.92)(0.539,-0.492){21}{\rule{0.533pt}{0.119pt}} \multiput(558.00,192.17)(11.893,-12.000){2}{\rule{0.267pt}{0.400pt}} \put(560,192){\circle*{20}} \put(575,192){$G_{n+1}$} \multiput(571.00,179.92)(0.496,-0.492){21}{\rule{0.500pt}{0.119pt}} \multiput(571.00,180.17)(10.962,-12.000){2}{\rule{0.250pt}{0.400pt}} \multiput(583.00,167.92)(0.590,-0.492){19}{\rule{0.573pt}{0.118pt}} \multiput(583.00,168.17)(11.811,-11.000){2}{\rule{0.286pt}{0.400pt}} \multiput(596.00,156.92)(0.590,-0.492){19}{\rule{0.573pt}{0.118pt}} \multiput(596.00,157.17)(11.811,-11.000){2}{\rule{0.286pt}{0.400pt}} \multiput(609.00,145.92)(0.600,-0.491){17}{\rule{0.580pt}{0.118pt}} \multiput(609.00,146.17)(10.796,-10.000){2}{\rule{0.290pt}{0.400pt}} \multiput(621.00,135.93)(0.728,-0.489){15}{\rule{0.678pt}{0.118pt}} \multiput(621.00,136.17)(11.593,-9.000){2}{\rule{0.339pt}{0.400pt}} \multiput(634.00,126.93)(0.824,-0.488){13}{\rule{0.750pt}{0.117pt}} \multiput(634.00,127.17)(11.443,-8.000){2}{\rule{0.375pt}{0.400pt}} \multiput(647.00,118.93)(0.824,-0.488){13}{\rule{0.750pt}{0.117pt}} \multiput(647.00,119.17)(11.443,-8.000){2}{\rule{0.375pt}{0.400pt}} \multiput(660.00,110.93)(0.758,-0.488){13}{\rule{0.700pt}{0.117pt}} \multiput(660.00,111.17)(10.547,-8.000){2}{\rule{0.350pt}{0.400pt}} \multiput(672.00,102.93)(1.123,-0.482){9}{\rule{0.967pt}{0.116pt}} \multiput(672.00,103.17)(10.994,-6.000){2}{\rule{0.483pt}{0.400pt}} 
\multiput(685.00,96.93)(1.123,-0.482){9}{\rule{0.967pt}{0.116pt}} \multiput(685.00,97.17)(10.994,-6.000){2}{\rule{0.483pt}{0.400pt}} \multiput(698.00,90.93)(1.123,-0.482){9}{\rule{0.967pt}{0.116pt}} \multiput(698.00,91.17)(10.994,-6.000){2}{\rule{0.483pt}{0.400pt}} \multiput(711.00,84.94)(1.651,-0.468){5}{\rule{1.300pt}{0.113pt}} \multiput(711.00,85.17)(9.302,-4.000){2}{\rule{0.650pt}{0.400pt}} \multiput(723.00,80.94)(1.797,-0.468){5}{\rule{1.400pt}{0.113pt}} \multiput(723.00,81.17)(10.094,-4.000){2}{\rule{0.700pt}{0.400pt}} \multiput(736.00,76.95)(2.695,-0.447){3}{\rule{1.833pt}{0.108pt}} \multiput(736.00,77.17)(9.195,-3.000){2}{\rule{0.917pt}{0.400pt}} \multiput(749.00,73.95)(2.472,-0.447){3}{\rule{1.700pt}{0.108pt}} \multiput(749.00,74.17)(8.472,-3.000){2}{\rule{0.850pt}{0.400pt}} \put(761,70.17){\rule{2.700pt}{0.400pt}} \multiput(761.00,71.17)(7.396,-2.000){2}{\rule{1.350pt}{0.400pt}} \put(774,68.67){\rule{3.132pt}{0.400pt}} \multiput(774.00,69.17)(6.500,-1.000){2}{\rule{1.566pt}{0.400pt}} \put(787,67.67){\rule{3.132pt}{0.400pt}} \multiput(787.00,68.17)(6.500,-1.000){2}{\rule{1.566pt}{0.400pt}} \put(812,67.67){\rule{3.132pt}{0.400pt}} \multiput(812.00,67.17)(6.500,1.000){2}{\rule{1.566pt}{0.400pt}} \put(825,68.67){\rule{3.132pt}{0.400pt}} \multiput(825.00,68.17)(6.500,1.000){2}{\rule{1.566pt}{0.400pt}} \put(838,70.17){\rule{2.700pt}{0.400pt}} \multiput(838.00,69.17)(7.396,2.000){2}{\rule{1.350pt}{0.400pt}} \multiput(851.00,72.61)(2.472,0.447){3}{\rule{1.700pt}{0.108pt}} \multiput(851.00,71.17)(8.472,3.000){2}{\rule{0.850pt}{0.400pt}} \multiput(863.00,75.61)(2.695,0.447){3}{\rule{1.833pt}{0.108pt}} \multiput(863.00,74.17)(9.195,3.000){2}{\rule{0.917pt}{0.400pt}} \multiput(876.00,78.60)(1.797,0.468){5}{\rule{1.400pt}{0.113pt}} \multiput(876.00,77.17)(10.094,4.000){2}{\rule{0.700pt}{0.400pt}} \multiput(889.00,82.60)(1.651,0.468){5}{\rule{1.300pt}{0.113pt}} \multiput(889.00,81.17)(9.302,4.000){2}{\rule{0.650pt}{0.400pt}} 
\multiput(901.00,86.59)(1.123,0.482){9}{\rule{0.967pt}{0.116pt}} \multiput(901.00,85.17)(10.994,6.000){2}{\rule{0.483pt}{0.400pt}} \multiput(914.00,92.59)(1.123,0.482){9}{\rule{0.967pt}{0.116pt}} \multiput(914.00,91.17)(10.994,6.000){2}{\rule{0.483pt}{0.400pt}} \multiput(927.00,98.59)(1.123,0.482){9}{\rule{0.967pt}{0.116pt}} \multiput(927.00,97.17)(10.994,6.000){2}{\rule{0.483pt}{0.400pt}} \multiput(940.00,104.59)(0.758,0.488){13}{\rule{0.700pt}{0.117pt}} \multiput(940.00,103.17)(10.547,8.000){2}{\rule{0.350pt}{0.400pt}} \multiput(952.00,112.59)(0.824,0.488){13}{\rule{0.750pt}{0.117pt}} \multiput(952.00,111.17)(11.443,8.000){2}{\rule{0.375pt}{0.400pt}} \multiput(965.00,120.59)(0.824,0.488){13}{\rule{0.750pt}{0.117pt}} \multiput(965.00,119.17)(11.443,8.000){2}{\rule{0.375pt}{0.400pt}} \multiput(978.00,128.59)(0.728,0.489){15}{\rule{0.678pt}{0.118pt}} \multiput(978.00,127.17)(11.593,9.000){2}{\rule{0.339pt}{0.400pt}} \multiput(991.00,137.58)(0.600,0.491){17}{\rule{0.580pt}{0.118pt}} \multiput(991.00,136.17)(10.796,10.000){2}{\rule{0.290pt}{0.400pt}} \multiput(1003.00,147.58)(0.590,0.492){19}{\rule{0.573pt}{0.118pt}} \multiput(1003.00,146.17)(11.811,11.000){2}{\rule{0.286pt}{0.400pt}} \multiput(1016.00,158.58)(0.590,0.492){19}{\rule{0.573pt}{0.118pt}} \multiput(1016.00,157.17)(11.811,11.000){2}{\rule{0.286pt}{0.400pt}} \multiput(1029.00,169.58)(0.496,0.492){21}{\rule{0.500pt}{0.119pt}} \multiput(1029.00,168.17)(10.962,12.000){2}{\rule{0.250pt}{0.400pt}} \multiput(1041.00,181.58)(0.539,0.492){21}{\rule{0.533pt}{0.119pt}} \multiput(1041.00,180.17)(11.893,12.000){2}{\rule{0.267pt}{0.400pt}} \multiput(1054.58,193.00)(0.493,0.536){23}{\rule{0.119pt}{0.531pt}} \multiput(1053.17,193.00)(13.000,12.898){2}{\rule{0.400pt}{0.265pt}} \multiput(1067.58,207.00)(0.493,0.536){23}{\rule{0.119pt}{0.531pt}} \multiput(1066.17,207.00)(13.000,12.898){2}{\rule{0.400pt}{0.265pt}} \multiput(1080.58,221.00)(0.492,0.582){21}{\rule{0.119pt}{0.567pt}} 
\multiput(1079.17,221.00)(12.000,12.824){2}{\rule{0.400pt}{0.283pt}} \multiput(1092.58,235.00)(0.493,0.576){23}{\rule{0.119pt}{0.562pt}} \multiput(1091.17,235.00)(13.000,13.834){2}{\rule{0.400pt}{0.281pt}} \multiput(1105.58,250.00)(0.493,0.616){23}{\rule{0.119pt}{0.592pt}} \multiput(1104.17,250.00)(13.000,14.771){2}{\rule{0.400pt}{0.296pt}} \multiput(1118.58,266.00)(0.493,0.655){23}{\rule{0.119pt}{0.623pt}} \multiput(1117.17,266.00)(13.000,15.707){2}{\rule{0.400pt}{0.312pt}} \multiput(1131.58,283.00)(0.492,0.712){21}{\rule{0.119pt}{0.667pt}} \multiput(1130.17,283.00)(12.000,15.616){2}{\rule{0.400pt}{0.333pt}} \multiput(1143.58,300.00)(0.493,0.695){23}{\rule{0.119pt}{0.654pt}} \multiput(1142.17,300.00)(13.000,16.643){2}{\rule{0.400pt}{0.327pt}} \multiput(1156.58,318.00)(0.493,0.695){23}{\rule{0.119pt}{0.654pt}} \multiput(1155.17,318.00)(13.000,16.643){2}{\rule{0.400pt}{0.327pt}} \multiput(1169.58,336.00)(0.492,0.798){21}{\rule{0.119pt}{0.733pt}} \multiput(1168.17,336.00)(12.000,17.478){2}{\rule{0.400pt}{0.367pt}} \multiput(1181.58,355.00)(0.493,0.774){23}{\rule{0.119pt}{0.715pt}} \multiput(1180.17,355.00)(13.000,18.515){2}{\rule{0.400pt}{0.358pt}} \multiput(1194.58,375.00)(0.493,0.814){23}{\rule{0.119pt}{0.746pt}} \multiput(1193.17,375.00)(13.000,19.451){2}{\rule{0.400pt}{0.373pt}} \multiput(1207.58,396.00)(0.493,0.814){23}{\rule{0.119pt}{0.746pt}} \multiput(1206.17,396.00)(13.000,19.451){2}{\rule{0.400pt}{0.373pt}} \multiput(1220.58,417.00)(0.492,0.884){21}{\rule{0.119pt}{0.800pt}} \multiput(1219.17,417.00)(12.000,19.340){2}{\rule{0.400pt}{0.400pt}} \multiput(1232.58,438.00)(0.493,0.893){23}{\rule{0.119pt}{0.808pt}} \multiput(1231.17,438.00)(13.000,21.324){2}{\rule{0.400pt}{0.404pt}} \multiput(1245.58,461.00)(0.493,0.893){23}{\rule{0.119pt}{0.808pt}} \multiput(1244.17,461.00)(13.000,21.324){2}{\rule{0.400pt}{0.404pt}} \multiput(1258.58,484.00)(0.493,0.933){23}{\rule{0.119pt}{0.838pt}} \multiput(1257.17,484.00)(13.000,22.260){2}{\rule{0.400pt}{0.419pt}} 
\multiput(1271.58,508.00)(0.492,1.013){21}{\rule{0.119pt}{0.900pt}} \multiput(1270.17,508.00)(12.000,22.132){2}{\rule{0.400pt}{0.450pt}} \multiput(1283.58,532.00)(0.493,0.972){23}{\rule{0.119pt}{0.869pt}} \multiput(1282.17,532.00)(13.000,23.196){2}{\rule{0.400pt}{0.435pt}} \multiput(1296.58,557.00)(0.493,1.012){23}{\rule{0.119pt}{0.900pt}} \multiput(1295.17,557.00)(13.000,24.132){2}{\rule{0.400pt}{0.450pt}} \multiput(1309.58,583.00)(0.492,1.142){21}{\rule{0.119pt}{1.000pt}} \multiput(1308.17,583.00)(12.000,24.924){2}{\rule{0.400pt}{0.500pt}} \multiput(1321.58,610.00)(0.493,1.052){23}{\rule{0.119pt}{0.931pt}} \multiput(1320.17,610.00)(13.000,25.068){2}{\rule{0.400pt}{0.465pt}} \multiput(1334.58,637.00)(0.493,1.052){23}{\rule{0.119pt}{0.931pt}} \multiput(1333.17,637.00)(13.000,25.068){2}{\rule{0.400pt}{0.465pt}} \multiput(1347.58,664.00)(0.493,1.131){23}{\rule{0.119pt}{0.992pt}} \multiput(1346.17,664.00)(13.000,26.940){2}{\rule{0.400pt}{0.496pt}} \put(1310,673){$B$} \multiput(1360.58,693.00)(0.492,1.229){21}{\rule{0.119pt}{1.067pt}} \multiput(1359.17,693.00)(12.000,26.786){2}{\rule{0.400pt}{0.533pt}} \multiput(1372.58,722.00)(0.493,1.171){23}{\rule{0.119pt}{1.023pt}} \multiput(1371.17,722.00)(13.000,27.877){2}{\rule{0.400pt}{0.512pt}} \multiput(1385.58,752.00)(0.493,1.171){23}{\rule{0.119pt}{1.023pt}} \multiput(1384.17,752.00)(13.000,27.877){2}{\rule{0.400pt}{0.512pt}} \multiput(1398.58,782.00)(0.493,1.210){23}{\rule{0.119pt}{1.054pt}} \multiput(1397.17,782.00)(13.000,28.813){2}{\rule{0.400pt}{0.527pt}} \multiput(1411.58,813.00)(0.492,1.358){21}{\rule{0.119pt}{1.167pt}} \multiput(1410.17,813.00)(12.000,29.579){2}{\rule{0.400pt}{0.583pt}} \multiput(1423.58,845.00)(0.493,1.250){23}{\rule{0.119pt}{1.085pt}} \multiput(1422.17,845.00)(13.000,29.749){2}{\rule{0.400pt}{0.542pt}} \put(800.0,68.0){\rule[-0.200pt]{2.891pt}{0.400pt}} \put(176,661){\usebox{\plotpoint}} \put(176.00,661.00){\usebox{\plotpoint}} \put(196.76,661.00){\usebox{\plotpoint}} 
\multiput(201,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(217.51,661.00){\usebox{\plotpoint}} \put(238.27,661.00){\usebox{\plotpoint}} \multiput(240,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(259.02,661.00){\usebox{\plotpoint}} \multiput(265,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(279.78,661.00){\usebox{\plotpoint}} \put(300.53,661.00){\usebox{\plotpoint}} \multiput(303,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(321.29,661.00){\usebox{\plotpoint}} \multiput(329,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(342.04,661.00){\usebox{\plotpoint}} \put(362.80,661.00){\usebox{\plotpoint}} \multiput(367,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(383.55,661.00){\usebox{\plotpoint}} \put(404.31,661.00){\usebox{\plotpoint}} \multiput(405,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(425.07,661.00){\usebox{\plotpoint}} \multiput(431,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(445.82,661.00){\usebox{\plotpoint}} \put(466.58,661.00){\usebox{\plotpoint}} \multiput(469,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(487.33,661.00){\usebox{\plotpoint}} \multiput(494,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(508.09,661.00){\usebox{\plotpoint}} \put(528.84,661.00){\usebox{\plotpoint}} \multiput(532,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(549.60,661.00){\usebox{\plotpoint}} \put(570.35,661.00){\usebox{\plotpoint}} \multiput(571,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(591.11,661.00){\usebox{\plotpoint}} \multiput(596,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(611.87,661.00){\usebox{\plotpoint}} \put(632.62,661.00){\usebox{\plotpoint}} \multiput(634,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(653.38,661.00){\usebox{\plotpoint}} \multiput(660,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(674.13,661.00){\usebox{\plotpoint}} \put(694.89,661.00){\usebox{\plotpoint}} \multiput(698,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(715.64,661.00){\usebox{\plotpoint}} \multiput(723,661)(20.756,0.000){0}{\usebox{\plotpoint}} 
\put(736.40,661.00){\usebox{\plotpoint}} \put(757.15,661.00){\usebox{\plotpoint}} \multiput(761,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(777.91,661.00){\usebox{\plotpoint}} \put(798.66,661.00){\usebox{\plotpoint}} \multiput(800,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(819.42,661.00){\usebox{\plotpoint}} \multiput(825,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(840.18,661.00){\usebox{\plotpoint}} \put(860.93,661.00){\usebox{\plotpoint}} \multiput(863,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(881.69,661.00){\usebox{\plotpoint}} \multiput(889,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(902.44,661.00){\usebox{\plotpoint}} \put(923.20,661.00){\usebox{\plotpoint}} \multiput(927,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(943.95,661.00){\usebox{\plotpoint}} \put(964.71,661.00){\usebox{\plotpoint}} \multiput(965,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(985.46,661.00){\usebox{\plotpoint}} \multiput(991,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1006.22,661.00){\usebox{\plotpoint}} \put(1026.98,661.00){\usebox{\plotpoint}} \multiput(1029,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1047.73,661.00){\usebox{\plotpoint}} \multiput(1054,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1068.49,661.00){\usebox{\plotpoint}} \put(1089.24,661.00){\usebox{\plotpoint}} \multiput(1092,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1110.00,661.00){\usebox{\plotpoint}} \put(1130.75,661.00){\usebox{\plotpoint}} \multiput(1131,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1151.51,661.00){\usebox{\plotpoint}} \multiput(1156,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1172.26,661.00){\usebox{\plotpoint}} \put(1193.02,661.00){\usebox{\plotpoint}} \multiput(1194,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1213.77,661.00){\usebox{\plotpoint}} \multiput(1220,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1234.53,661.00){\usebox{\plotpoint}} \put(1255.29,661.00){\usebox{\plotpoint}} \multiput(1258,661)(20.756,0.000){0}{\usebox{\plotpoint}} 
\put(1276.04,661.00){\usebox{\plotpoint}} \multiput(1283,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1296.80,661.00){\usebox{\plotpoint}} \put(1317.55,661.00){\usebox{\plotpoint}} \multiput(1321,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1338.31,661.00){\usebox{\plotpoint}} \put(1359.06,661.00){\usebox{\plotpoint}} \multiput(1360,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1379.82,661.00){\usebox{\plotpoint}} \multiput(1385,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1400.57,661.00){\usebox{\plotpoint}} \put(1421.33,661.00){\usebox{\plotpoint}} \multiput(1423,661)(20.756,0.000){0}{\usebox{\plotpoint}} \put(1436,661){\usebox{\plotpoint}} \put(1480,650){$\Delta=1/2$} \end{picture} \bigskip In the figure above, the points $A$ and $B$ are the intersections of this conic with the line of equation \ \m{\Delta=1/2}. We have $$\lim_{n\rightarrow -\infty}(\mu(G_n),\Delta(G_n))=A \ \ \ {\rm and} \ \ \ \lim_{n\rightarrow\infty}(\mu(G_n),\Delta(G_n))=B.$$ Note that \ \m{\mu(B)-\mu(A)<3}. If \m{F={\cal O}}, there is a unique pair \m{(G_n,G_{n+1})} such that \ \m{\mu(G_{n+1})-\mu(G_n)\geq 1}, namely \m{({\cal O}(-2),{\cal O}(-1))}. Suppose that \ \m{-1<\mu(F)<0}. There then exists a unique triad of the form \m{(E,F,G)} with \ \m{-1\leq\mu(E)<\mu(G)\leq 0}. It follows that \m{(G(-3),E)} is one of the pairs \m{(G_n,G_{n+1})}. We may assume that \ \m{(G(-3),E)=(G_0,G_1)}. We have \ \m{\mu(G_1)-\mu(G_0)\geq 2}, and \m{(G_0,G_1)} is the unique pair \m{(G_n,G_{n+1})} such that \noindent\m{\mu(G_{n+1})-\mu(G_n)\geq 1}. It is called the {\it initial} pair of the series \m{(G_n)}. \bigskip \begin{xlemm} The vector bundle \m{G_0^*\otimes G_1} is generated by its global sections. \end{xlemm} \noindent{\em Proof}.
By the construction of \m{(G_0,G_1)}, it suffices to prove the \hbox{following:} if \m{(A,B,C)} is a triad of exceptional bundles such that \ \m{\mu(C)-\mu(A)\leq 1}, then the bundles \m{B^*\otimes A(3)}, \m{C^*\otimes B(3)} and \m{C^*\otimes A(3)} are generated by their global sections. We prove this by induction: one must show that if it holds for a triad, it holds for the two adjacent triads. Suppose it holds for \m{(A,B,C)}. Let $H$ be the kernel of the canonical surjective morphism $$B\otimes\mathop{\rm Hom}\nolimits(B,C)\longrightarrow C$$ and $K$ the cokernel of the canonical injective morphism $$A\longrightarrow B\otimes\mathop{\rm Hom}\nolimits(A,B)^*.$$ We must show that the result holds for the triads \m{(A,H,B)} and \m{(B,K,C)}. By considering the {\it dual} triad \m{(C^*(-1),B^*(-1),A^*(-1))}, one sees that it suffices to treat \m{(A,H,B)}. There is an exact sequence $$0\longrightarrow H\longrightarrow B\otimes\mathop{\rm Hom}\nolimits(B,C)\longrightarrow C\longrightarrow 0.$$ From it we deduce a surjective morphism $$B^*(3)\otimes A\otimes\mathop{\rm Hom}\nolimits(B,C)^*\longrightarrow H^*(3)\otimes A.$$ Since \m{B^*(3)\otimes A} is generated by its global sections (induction hypothesis), so is \m{H^*(3)\otimes A}. On the other hand, there is an exact sequence $$0\longrightarrow C(-3)\longrightarrow A\otimes\mathop{\rm Hom}\nolimits(C(-3),A)^*\longrightarrow H\longrightarrow 0,$$ from which we deduce a surjective morphism $$B^*(3)\otimes A\otimes\mathop{\rm Hom}\nolimits(C(-3),A)^*\longrightarrow B^*(3)\otimes H,$$ and hence that \m{B^*(3)\otimes H} is generated by its global sections. $\Box$ \bigskip \begin{xlemm} For every integer $n$, we have \m{n\geq 1} if and only if, for all nonnegative integers $a$, $b$, $c$, the vector bundle $$(G_n\otimes\cx{a})\oplus(G_{n+1}\otimes\cx{b})\oplus(F\otimes\cx{c})$$ is prioritary.
\end{xlemm} \noindent{\em Proof}. Immediate. $\Box$ \bigskip One defines similarly the {\em right exceptional series} \m{(H_n)} associated with $F$. We have \break \m{H_n=G_n(3)} for every $n$. \subsection{Study of {\bf T}} The set {\bf T} is constructed as an increasing union of subsets $$T_0=\lbrace({\cal O}(-1),Q^*,{\cal O})\rbrace\subset T_1\subset\ldots\subset T_n\subset T_{n+1}\subset\ldots$$ $${\bf T}=\bigcup_{n\geq 0}T_n,$$ where $T_n$ is the set of triads \m{(E_\alpha,E_{\alpha\times\beta}, E_\beta)}, with $\alpha$, $\beta$ of the form $$\alpha=\epsilon(\q{p}{2^n}), \ \ \beta=\epsilon(\q{p+1}{2^n}),$$ $p$ an integer. If $n>0$, the triads of \m{T_n\backslash T_{n-1}} form a sequence \m{t_0^{(n)}}, \ldots, \m{t_{2^n-1}^{(n)}}, $$t_i^{(n)}\ = \ (E_{\epsilon(\q{i}{2^n})},E_{\epsilon(\q{2i+1}{2^{n+1}})}, E_{\epsilon(\q{i+1}{2^n})}).$$ We have $$\mu(E_{\epsilon(\q{i}{2^n})}) \ < \ \mu(E_{\epsilon(\q{2i+1}{2^{n+1}})}) \ < \ \mu(E_{\epsilon(\q{i+1}{2^n})}),$$ and, in the plane with coordinates \m{(\mu,\Delta)}, \m{E_{\epsilon(\q{2i+1}{2^{n+1}})}} lies above the line\break \m{E_{\epsilon(\q{i}{2^n})}E_{\epsilon(\q{i+1}{2^n})}}.
\setlength{\unitlength}{0.240900pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(1500,900)(0,0) \font\gnuplot=cmr10 at 10pt \gnuplot \sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}% \put(233,68){\usebox{\plotpoint}} \multiput(233.00,68.58)(3.543,0.500){321}{\rule{2.930pt}{0.120pt}} \multiput(233.00,67.17)(1139.919,162.000){2}{\rule{1.465pt}{0.400pt}} \put(233,68){\usebox{\plotpoint}} \multiput(233.00,68.58)(0.591,0.500){967}{\rule{0.573pt}{0.120pt}} \multiput(233.00,67.17)(571.812,485.000){2}{\rule{0.286pt}{0.400pt}} \put(806,553){\usebox{\plotpoint}} \multiput(806.00,551.92)(0.887,-0.500){643}{\rule{0.810pt}{0.120pt}} \multiput(806.00,552.17)(571.320,-323.000){2}{\rule{0.405pt}{0.400pt}} \put(233,68){\usebox{\plotpoint}} \multiput(233,68)(9.029,18.689){20}{\usebox{\plotpoint}} \put(405,424){\usebox{\plotpoint}} \put(405,424){\usebox{\plotpoint}} \multiput(405,424)(19.758,6.356){21}{\usebox{\plotpoint}} \put(806,553){\usebox{\plotpoint}} \put(806,553){\usebox{\plotpoint}} \multiput(806,553)(20.608,-2.467){20}{\usebox{\plotpoint}} \put(1207,505){\usebox{\plotpoint}} \put(1207,505){\usebox{\plotpoint}} \multiput(1207,505)(11.006,-17.597){16}{\usebox{\plotpoint}} \put(1379,230){\usebox{\plotpoint}} \put(800,280){$t_i^{(n)}$} \put(430,330){$t_{2i}^{(n+1)}$} \put(1110,410){$t_{2i+1}^{(n+1)}$} \end{picture} The conic segment \m{E_{-1}E_0} of \m{{\cal T}_{(E_{-1},E_{-\q{1}{2}},E_0)}} is none other than the curve \ \m{\Delta=\q{\mu(\mu+1)}{2}}. From this one immediately deduces the \bigskip \begin{xlemm} Let \ \m{Z = \bigcup_{(E,F,G)\in{\bf T}}{\cal T}_{(E,F,G)}}.
Then, if \ \m{(\mu,\Delta)\in Z}, we have \ \m{(\mu,\Delta')\in Z} \ whenever $$\q{\mu(\mu+1)}{2} \ \leq \ \Delta' \ \leq \ \Delta.$$ \end{xlemm} \section{Generic prioritary bundles} \subsection{Natural cohomology} \begin{xlemm} Let $F$ be an exceptional bundle and $r$, \m{c_1}, \m{c_2} integers such that \m{r\geq 2},\break \m{\mu(F)-x_F<\mu\leq\mu(F)} \ and \ \m{\Delta=\delta(\mu)}, where $\mu$ and $\Delta$ denote the slope and discriminant determined by $r$, \m{c_1}, \m{c_2}. Then there exists a stable vector bundle ${\cal E}$ of rank $r$ and Chern classes \m{c_1}, \m{c_2} such that \ \m{\mathop{\rm Ext}\nolimits^1({\cal E},F)=\lbrace 0\rbrace}. \end{xlemm} \noindent{\em Proof}. Consider the sequence \m{(G_n)} of exceptional bundles of \paragra~\hskip -2pt 2. Let $n$ be an integer and ${\cal E}$ a semi-stable sheaf of rank $r$ and Chern classes \m{c_1}, \m{c_2}. We set $$k = \chi({\cal E},F), \ \ \ m_n \ = \ -\chi({\cal E}\otimes G_n^*(-3)),$$ which are independent of ${\cal E}$. These integers are nonnegative: for the first one, this follows from the fact that the point corresponding to ${\cal E}$ lies below the conic giving the equation of \m{\delta(\mu)} on \m{\rbrack\mu(F),\mu(F)+x_F\lbrack}. For the second one, one uses the fact that \m{H^0({\cal E}\otimes G_n^*(-3))} and \m{H^2({\cal E}\otimes G_n^*(-3))} vanish. We consider the triads \m{(F,G_{p-1}(3),G_p(3))}. This suggests looking for ${\cal E}$ as the kernel of a suitable surjective morphism $$\theta : (F\otimes\cx{k})\oplus(G_{p-1}(3)\otimes\cx{m_{p+1}})\longrightarrow G_p(3)\otimes\cx{m_p}.$$ Such a bundle indeed has the right rank and Chern classes, and moreover \ \m{\mathop{\rm Ext}\nolimits^1({\cal E},F)=\lbrace 0\rbrace}. To show that ${\cal E}$ deforms to a stable bundle, it suffices that it be prioritary, since the stack of prioritary sheaves is irreducible (cf. \cite{hi_la}).
We take \ \m{p=1}, that is, we consider morphisms $$(F\otimes\cx{k})\oplus(G_0(3)\otimes\cx{m_2})\longrightarrow G_1(3)\otimes\cx{m_1}.$$ We then have \ \m{\mu(G_1(3))-\mu(G_0(3))\geq 1}, hence \m{\mu(G_1(3))-\mu(F) > 1}, and the pair \m{(F,G_1(3))} is initial in the series containing it. This implies that the sheaf of such morphisms is generated by its global sections. Since \ \m{r\geq 2}, there exists a morphism $$\theta : (F\otimes\cx{k})\oplus(G_0(3)\otimes\cx{m_2})\longrightarrow G_1(3)\otimes\cx{m_1}$$ which is surjective. Let $${\cal E} \ = \ \ker(\theta).$$ It remains to show that ${\cal E}$ is prioritary, that is, that \ \m{\mathop{\rm Hom}\nolimits({\cal E},{\cal E}(-2))=\lbrace 0\rbrace}. There is an exact sequence $$0\longrightarrow{\cal E}\longrightarrow (F\otimes\cx{k})\oplus(G_0(3)\otimes\cx{m_2})\longrightarrow G_1(3)\otimes\cx{m_1} \longrightarrow 0,$$ from which we deduce that $$\mathop{\rm Hom}\nolimits({\cal E},{\cal E}(-2))\ \subset \ (\mathop{\rm Hom}\nolimits({\cal E},F(-2))\otimes\cx{k})\oplus (\mathop{\rm Hom}\nolimits({\cal E},G_0(1))\otimes\cx{m_2}).$$ We must show that $$\mathop{\rm Hom}\nolimits({\cal E},F(-2))=\mathop{\rm Hom}\nolimits({\cal E},G_0(1))=\lbrace 0\rbrace.$$ Let us first show that \ \m{\mathop{\rm Hom}\nolimits({\cal E},F(-2))=\lbrace 0\rbrace}. By the exact sequence above, there is an exact sequence $$(\mathop{\rm Hom}\nolimits(F,F(-2))\otimes\cx{k})\oplus(\mathop{\rm Hom}\nolimits(G_0(3),F(-2))\otimes\cx{m_2})\longrightarrow \mathop{\rm Hom}\nolimits({\cal E},F(-2)) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ $$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \longrightarrow\mathop{\rm Ext}\nolimits^1(G_1(3),F(-2))\otimes\cx{m_1}.$$ We have \ \m{\mathop{\rm Hom}\nolimits(F,F(-2))=\mathop{\rm Hom}\nolimits(G_0(3),F(-2))=\lbrace 0\rbrace}, since \ \m{\mu(G_0(3))>\mu(F)>\mu(F(-2))}.
On the other hand, $$\mathop{\rm Ext}\nolimits^1(G_1(3),F(-2))\ \simeq\ \mathop{\rm Ext}\nolimits^1(F(-2),G_1)^*$$ by Serre duality. To show that \ \m{\mathop{\rm Ext}\nolimits^1(F(-2),G_1)= \lbrace 0\rbrace}, it suffices by \cite{dr1} to prove that \ \m{\mu(F(-2))\leq\mu(G_1)}. If \ \m{F={\cal O}} \ this is obvious, since \ \m{G_1={\cal O}(-1)}. Otherwise we have \ \m{\mu(G_1)-\mu(G_0)\geq 2}, and if \ \m{\mu(F(-2))>\mu(G_1)}, then \ \m{\mu(F)-\mu(G_0)>4}, which is false since \ \m{\mu(F)-\mu(G_0)<3}. Let us now show that \ \m{\mathop{\rm Hom}\nolimits({\cal E},G_0(1))=\lbrace 0\rbrace}. There is an exact sequence $$(\mathop{\rm Hom}\nolimits(F,G_0(1))\otimes\cx{k})\oplus(\mathop{\rm Hom}\nolimits(G_0(3),G_0(1))\otimes\cx{m_2}) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ $$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \longrightarrow \mathop{\rm Hom}\nolimits({\cal E},G_0(1))\longrightarrow\mathop{\rm Ext}\nolimits^1(G_1(3),G_0(1))\otimes\cx{m_1}.$$ We have \ \m{\mathop{\rm Hom}\nolimits(F,G_0(1))=\lbrace 0\rbrace} \ since \ \m{\mu(F)>\mu(G_1)\geq\mu(G_0(1))}, and \m{\mathop{\rm Hom}\nolimits(G_0(3),G_0(1))=\lbrace 0\rbrace}. It remains to prove that \ \m{\mathop{\rm Ext}\nolimits^1(G_1(3),G_0(1))=\lbrace 0\rbrace}. We have $$\mathop{\rm Ext}\nolimits^1(G_1(3),G_0(1)) \ \simeq \ \mathop{\rm Ext}\nolimits^1(G_0(1),G_1)^* \ = \ \lbrace 0\rbrace$$ by \cite{dr1} and the fact that \ \m{\mu(G_0(1))\leq\mu(G_1)}. $\Box$ \subsection{Proof of Theorem A} Let $F$ be an exceptional bundle and $r$, \m{c_1}, \m{c_2} integers such that \m{\mu(F)-x_F<\mu<\mu(F)+x_F}, \m{\Delta<\delta(\mu)} \ and \ \m{(\mu,\Delta)\not=(\mu(F),\Delta(F))}. We may restrict to the case where \m{\mu(F)-x_F<\mu\leq\mu(F)}, the other case following from it by duality. We then have $$p \ = \ r.rg(F)(P(\mu-\mu(F))-\Delta-\Delta(F)) \ \ > \ \ 0.$$ Suppose that \ \m{\Delta > \delta'(\mu)}. Then \ \m{p. rg(F) < r}. Indeed, this is equivalent to $$\delta(\mu)-\Delta \ < \ \q{1}{rg(F)^2}$$ (cf. the figure in the Introduction).
Hence there exist integers \m{r'}, \m{c'_1}, \m{c'_2} such that $r$, \m{c_1} and \m{c_2} are the rank and Chern classes of the direct sum of a vector bundle ${\cal U}$ of rank \m{r'} and Chern classes \m{c'_1}, \m{c'_2} with \m{F\otimes\cx{p}}. The point corresponding to ${\cal U}$ lies on the conic with equation $$\Delta = P(\mu-\mu(F))-\Delta(F)$$ and we have \ \m{\Delta \geq \delta'(\mu)} \ if and only if this point lies on the segment \m{G(F)} of the conic. Suppose that \ \m{\Delta \geq \delta'(\mu)} \ and \ \m{r'\geq 2}. In this case, by Lemma 3.1 there exists a stable bundle ${\cal U}$ of rank \m{r'} and Chern classes \m{c'_1}, \m{c'_2} such that \ \m{\mathop{\rm Ext}\nolimits^1({\cal U},F)=\lbrace 0\rbrace}. The bundle $${\cal E} \ = \ (F\otimes\cx{p})\oplus{\cal U}$$ is prioritary, of rank \m{r} and Chern classes \m{c_1}, \m{c_2}. The generic prioritary bundles are of this type, since bundles such as ${\cal E}$ are defined by the following sequence of open conditions: \medskip \noindent (i) We have \ \m{\mathop{\rm Ext}\nolimits^2(F,{\cal E})=\lbrace 0\rbrace}. \noindent (ii) The canonical evaluation morphism $$ev : F\otimes\cx{p}=F\otimes\mathop{\rm Hom}\nolimits(F,{\cal E})\longrightarrow{\cal E}$$ is injective. \noindent (iii) If \ \m{{\cal U}=\mathop{\rm coker}\nolimits(ev)}, then ${\cal U}$ is a stable bundle such that \ \m{\mathop{\rm Ext}\nolimits^1({\cal U},F)=\lbrace 0\rbrace}. \medskip Suppose now that \ \m{r'=1}. In this case we must have \ \m{F={\cal O}} \ and \m{c_2=1}. The sheaves of \m{M(r',c'_1,c'_2)} are of the form \m{{\cal I}_x} (the ideal sheaf of a point $x$ of \proj{2}). We have \ \m{\mathop{\rm Ext}\nolimits^1({\cal I}_x,{\cal O})=\cx{}}, which gives Theorem A in this case. It remains to treat the case where \ \m{\Delta < \delta'(\mu)}. This is a consequence of Theorem C, whose proof follows.
$\Box$ \subsection{Proof of Theorem C} Let \m{(E,F,G)\in{\bf T}}. Considering the generalized Beilinson spectral sequence associated with \m{(E,F,G)}, one sees immediately that the points \m{(\mu,\Delta)} of \m{{\cal T}_{(E,F,G)}} (with rational coordinates) are the pairs \m{(\mu({\cal E}),\Delta({\cal E}))}, where ${\cal E}$ is of the form $${\cal E}\ = (E\otimes\cx{a})\oplus(F\otimes\cx{b})\oplus(G\otimes\cx{c}),$$ with \ $a,b,c\geq 0$ \ not all zero. This bundle is prioritary and rigid, hence it is a generic prioritary bundle. As in Lemma 2.3, we set $$Z \ = \ \bigcup_{(E,F,G)\in{\bf T}}{\cal T}_{(E,F,G)}.$$ Part 1 of Theorem C is an immediate consequence of \paragra~\hskip -2pt 2.4. It therefore remains to prove that $$Z \ = \ {\cal S}.$$ Let \ \m{(\mu,\Delta)\in Z}. Then \ \m{\Delta\leq\delta'(\mu)}, since the generic prioritary bundles with invariants \m{\mu} and \m{\Delta} are rigid, as we have just seen. Hence \ \m{Z\subset{\cal S}}. Let $F$ be an exceptional bundle such that \ \m{-1<\mu(F)\leq 0}, and let \m{(G_n)} be the left exceptional series associated with $F$. We shall show that, as $n$ tends to infinity, the conic segment \m{G_nF} of \m{T_{(G_{n-1},G_n,F)}} tends to the conic segment $$\lbrace(\mu,\delta'(\mu)), \mu(F)-x_F<\mu\leq\mu(F)\rbrace.$$ One would show similarly that if \ \m{-1\leq\mu(F)<0}, and if \m{(H_n)} is the right exceptional series associated with $F$, then, as $n$ tends to minus infinity, the conic segment \m{FH_n} of \m{T_{(F,H_n,H_{n+1})}} tends to the conic segment $$\lbrace(\mu,\delta'(\mu)), \mu(F)\leq\mu<\mu(F)+x_F\rbrace.$$ By Lemma 2.3, this implies that \ \m{{\cal S}\subset Z}.
The equation of the conic segment \m{G_nF} of \m{T_{(G_{n-1},G_n,F)}} is $$\Delta\ = \ P(\mu-\mu(G_{n-1})-3)-\Delta(G_{n-1}).$$ We have $$\lim_{n\rightarrow\infty}(\mu(G_{n-1})) \ = \ \mu(F)-x_F, \ \ \ \lim_{n\rightarrow\infty}(\Delta(G_{n-1})) \ = \ \q{1}{2}.$$ Hence the segment \m{G_nF} tends to the curve $$\lbrace(\mu,\phi(\mu)), \mu(F)-x_F<\mu\leq\mu(F)\rbrace$$ with $$\phi(\mu) \ = \ P(\mu-\mu(F)+x_F-3)-\q{1}{2}.$$ One checks immediately that \ \m{\phi(\mu)=\delta'(\mu)}, which completes the proof of Theorem C. $\Box$
The Boston News-Letter, first published on April 24, 1704, is regarded as the first continuously published newspaper in the Thirteen Colonies. It was heavily subsidized by the British government and had a limited circulation. All copies were approved by the governor. The first newspaper of the colonies was Publick Occurrences Both Forreign and Domestick, which published its first and only edition on September 25, 1690. In 1718, the Weekly Jamaica Courant followed it in Kingston. In 1726, the Boston Gazette began publication with Bartholomew Green, Jr. as printer. History The first editor of the News-Letter was John Campbell, a bookseller and postmaster in Boston. Campbell had been actively writing and sending "newsletters" about European events to the governors of New England for a year or more, and thought it would save him work to print them for everyone. The News-Letter was originally published weekly. Its first issue was dated "From Monday, April 17, to Monday, April 24, 1704". The printer was Bartholomew Green. During its first years, the News-Letter mainly carried news from London newspapers describing English politics and the details of the European wars. As the only newspaper in the colonies at the time, it also reported the sensational death of the pirate Blackbeard in hand-to-hand combat in 1718. In 1707, John Allen took over the printing of the newspaper. In 1722, its publication passed to Green, who focused on domestic events. After his death in 1732, his son-in-law John Draper, also a printer, took charge of the newspaper. He enlarged the paper to four pages and included news from all the colonies. He ran the publication until his death in 1762, the year in which his son, Richard Draper, became editor.
Richard died in 1774, and his widow, Margaret Green Draper, published the News-Letter for the rest of its existence. Richard Draper was an ardent Loyalist and firmly supported the mother country in the stormy times of the 1770s. His widow shared his sentiments, and when the young man she hired as editor, Robert Boyle, showed sympathy for the American Revolution, she replaced him with the Loyalist John Howe. Howe served as Mrs. Draper's editor until the British evacuated Boston on March 17, 1776, taking Howe and Draper with them. With the departure of the British, the News-Letter ceased to exist. The British government gave Margaret Draper a lifetime pension. External links Article about The Boston News-Letter
Honours Club National competitions Vita Club: 2001 Primeiro de Agosto: 2006 External links
\section{Introduction} The magnetic-dipole hyperfine structure (HFS) constants are highly sensitive to changes of the charge and magnetization distributions inside the nucleus, because these constants are determined by the behavior of the electron wave function in this region. High experimental accuracy is achieved in spectroscopic measurements of HFS constants for atoms, which allows one to study nuclear effects along isotope sequences. These experimental data are very useful for understanding the properties of atomic nuclei. The HFS constant $A$ for the finite nucleus can be written in the following form \cite{stroke61}, \begin{align}\label{HFS_param} A &= g_I {\cal A}_0 (1-\delta)(1-\varepsilon)\,, \end{align} where $g_I = \frac{\mu}{\mu_N I}$ is the nuclear $g$ factor, $\mu$ and $I$ are the magnetic moment and spin of the nucleus, respectively, and $\mu_N$ is the nuclear magneton. $g_I {\cal A}_0$ is the HFS constant for the point-like nucleus, and $\delta$ and $\varepsilon$ are the Breit--Rosenthal~\cite{RB32,CS49} (BR) and Bohr--Weisskopf \cite{BW50} (BW) corrections, respectively. For stable or long-lived isotopes, measurements of the nuclear $g$-factor and the HFS constant can be carried out independently. These experimental data enable one to evaluate the relative hyperfine anomaly (RHFA) $^1\Delta^2$ through the relation, \begin{align} \label{rhfa} ^1\Delta^2 \equiv \frac{g_I^{(2)}A^{(1)}}{g_I^{(1)}A^{(2)}}-1 &\approx \varepsilon^{(2)} - \varepsilon^{(1)} + \delta^{(2)} - \delta^{(1)}=\\ \nonumber &= ^1\Delta^{2}_\mathrm{BW} + ^1\Delta^{2}_\mathrm{BR}. \end{align} Here, the nuclear $g$-factors, $A$-constant values, and BR and BW corrections for isotopes (1) and (2) are marked by the corresponding superscripts. The dependence of the BR correction on the nuclear radius $R$ is determined by the asymptotic behaviour of the electron wave function near a point nucleus~\cite{ionesco60}.
Then, the BR correction for the $s_{1/2}$ and $p_{1/2}$ atomic states can be written as~\cite{ionesco60,Sha94}, \begin{align}\label{BR-bN} \delta(R) = b_N (R/\lambdabar_C)^\varkappa\,, \quad \varkappa = 2\sqrt{1-(\alpha Z)^2}-1\,. \end{align} Here $\lambdabar_C$ is the reduced Compton wavelength of the electron ($\lambdabar_C =\tfrac{\hbar}{m_e c}$), $\alpha$ is the fine structure constant, $Z$ is the nuclear charge, and the dimensionless parameter $b_N$ depends on the electron state. Taking into account that the charge density is almost homogeneous inside the nucleus~\cite{fermi_model}, we use \mbox{$R = \sqrt{5/3}\, r_\mathrm{rms}$}, where \mbox{$r_\mathrm{rms}=\langle r^2\rangle^{1/2}$} is the root-mean-square nuclear charge radius. Assuming the atomic-nuclear factorization, the BW correction takes the form~\cite{KDKB18,skripn20}, \begin{align} \label{BW-bM} \varepsilon (d_\mathrm{nuc},\,R) = d_\mathrm{nuc}\,\varepsilon_\mathrm{at} (R), \quad \varepsilon_\mathrm{at}(R)= b_M (R/\lambdabar_C)^\varkappa. \end{align} The accuracy of such a separation has been found to be very high~\cite{skripn20,pros21}. The case of a point-like magnetic dipole corresponds to $d_\mathrm{nuc}=0$, whereas a homogeneously magnetized sphere of radius $R$ corresponds to $d_\mathrm{nuc}=1$. The HFS constants for $p_{3/2}$ and other electronic states with angular momentum $j\ge 3/2$ are sensitive to the nuclear charge and magnetization distributions only due to the admixture of $s_{1/2}$ and $p_{1/2}$ partial waves (see Refs.~\cite{KKDB17,PMS19}). Therefore, the BR and BW corrections for all electron states are described by Eqs.~(\ref{BR-bN}) and (\ref{BW-bM}), respectively. The parameterization of the HFS constants by Eqs.~(\ref{HFS_param})--(\ref{BW-bM}) involves three nuclear ($g_I$, $d_\mathrm{nuc}$, and $R$) and three atomic (${\cal A}_0$, $b_N$, and $b_M$) characteristics.
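As a simple numerical illustration (not part of the original analysis), the parameterization of Eqs.~(\ref{HFS_param}), (\ref{BR-bN}), and (\ref{BW-bM}) can be sketched in Python. The inputs for the $4s_{1/2}$ ground state of $^{39}$K (the atomic parameters from our RPA+LCC calculation in \Tref{tbl:hfs_K}, the single-particle factor $d_\mathrm{nuc}=-2.1$, and an assumed charge radius $r_\mathrm{rms}\approx 3.44$~fm) are illustrative values:

```python
import math

ALPHA = 1 / 137.035999           # fine-structure constant
LAMBDA_C = 386.159267            # reduced Compton wavelength of the electron, fm


def kappa(Z):
    """Relativistic exponent entering the BR and BW corrections."""
    return 2 * math.sqrt(1 - (ALPHA * Z) ** 2) - 1


def hfs_constant(g_I, A0, b_N, b_M, d_nuc, r_rms, Z):
    """A = g_I * A0 * (1 - delta) * (1 - eps), with delta and eps
    parameterized through b_N, b_M, d_nuc, and the nuclear radius."""
    R = math.sqrt(5 / 3) * r_rms             # homogeneous-sphere radius
    x = (R / LAMBDA_C) ** kappa(Z)           # common radial factor
    delta = b_N * x                          # Breit--Rosenthal correction
    eps = d_nuc * b_M * x                    # Bohr--Weisskopf correction
    return g_I * A0 * (1 - delta) * (1 - eps)


# Ground state of 39K (Z = 19) with RPA+LCC atomic parameters
A_4s = hfs_constant(g_I=0.2609775, A0=888.1, b_N=0.206, b_M=0.078,
                    d_nuc=-2.1, r_rms=3.44, Z=19)
print(f"A(4s) = {A_4s:.1f} MHz")  # close to the RPA+LCC value 231.6 MHz
```

The result lies about 0.3\% above the measured $A(4s_{1/2})$ of $^{39}$K, in line with the accuracy discussed below for the correlation treatment.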
In order to perform an atomic-structure calculation of the $A$ constants we need to fix the nuclear parameters, \begin{align}\label{HFS_nuc} A &= A(g_I,d_\mathrm{nuc},R)\, =\,g_IA(1,d_\mathrm{nuc},R)\, . \end{align} The atomic parameters are the same for different isotopes and are obtained numerically. The $b_M$ parameter can be found from Eqs.~\eqref{HFS_param} and \eqref{BW-bM}: \begin{align}\label{b_M} b_M &= \frac{\lambdabar_C^\varkappa}{R^\varkappa} \left(1 - \frac{A(g_I,1,R)}{A(g_I,0,R)} \right)\,. \end{align} To find the parameter $b_N$, we performed calculations for two different nuclear radii: \begin{align}\label{b_N} b_N &= \frac{\left(A\left(g_I,0,R_2\right) - A\left(g_I,0,R_1\right)\right ) \lambdabar_C^\varkappa} {A(g_I,0,R_2)R_1^\varkappa -A(g_I,0,R_1)R_2^\varkappa} \,. \end{align} The atomic parameter ${\cal A}_0$ was found from the relation: \begin{align}\label{A_0} {\cal A}_0 &= \frac{A(1,0,R)} {1 - b_N (R/\lambdabar_C)^{\varkappa}} \,. \end{align} Independent measurements of the HFS constants and nuclear magnetic moments allow one to determine the RHFA for several K isotopes~\cite{Per13}. At the same time, the electronic structure of the potassium atom is relatively simple and consists of a single valence electron above the filled atomic core. Advanced atomic methods allow one to calculate the HFS constants of potassium isotopes with high accuracy~\cite{owusu97,saf99,saf_k}. The BW effect should no longer be ignored at the level of accuracy of modern experiments and theoretical atomic-structure calculations. In Refs.~\cite{papuga13,papuga14} the HFS measurements for potassium isotopes were extended up to $^{51}$K, enabling one to assess the nuclear magnetic moments. In the present work we recalculate these nuclear magnetic moments taking into account hyperfine anomaly corrections. \section{Hyperfine anomaly} In Sec.~4 we will show that for all potassium isotopes considered here $\Delta_\mathrm{BR}$ is three orders of magnitude smaller than $\Delta_\mathrm{BW}$.
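The extraction formulas \eqref{b_M}--\eqref{A_0} can be checked for self-consistency: if one synthesizes $A(g_I,d_\mathrm{nuc},R)$ from the parameterization \eqref{HFS_param}, \eqref{BR-bN}, \eqref{BW-bM} with known atomic parameters, the formulas must recover them exactly. A minimal Python sketch (the trial radii and "true" parameter values are illustrative placeholders, not computed quantities):

```python
import math

LAMBDA_C = 386.159267                                   # reduced Compton wavelength, fm
KAPPA = 2 * math.sqrt(1 - (19 / 137.035999) ** 2) - 1   # relativistic exponent, Z = 19


def A_model(g_I, d_nuc, R, A0, b_N, b_M):
    """Parameterized HFS constant: g_I * A0 * (1 - delta) * (1 - eps)."""
    x = (R / LAMBDA_C) ** KAPPA
    return g_I * A0 * (1 - b_N * x) * (1 - d_nuc * b_M * x)


# "true" atomic parameters used to synthesize the A values
A0_true, bN_true, bM_true = 888.1, 0.206, 0.078
g, R1, R2 = 0.2609775, 4.40, 4.50                       # two trial nuclear radii, fm
A = lambda d, R: A_model(g, d, R, A0_true, bN_true, bM_true)

# b_M from the point-dipole vs. uniformly magnetized sphere ratio
b_M = (LAMBDA_C / R1) ** KAPPA * (1 - A(1, R1) / A(0, R1))
# b_N from two nuclear radii (point-dipole magnetization)
b_N = ((A(0, R2) - A(0, R1)) * LAMBDA_C ** KAPPA
       / (A(0, R2) * R1 ** KAPPA - A(0, R1) * R2 ** KAPPA))
# A_0 from the point-nucleus limit
A0 = A_model(1, 0, R1, A0_true, bN_true, bM_true) / (1 - b_N * (R1 / LAMBDA_C) ** KAPPA)

print(b_N, b_M, A0)   # recovers 0.206, 0.078, 888.1
```

Note that the extracted $b_N$ is independent of $g_I$ and $d_\mathrm{nuc}$, as the definitions imply.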
Correspondingly, we can neglect the BR contribution to the RHFA and assume \mbox{$\Delta \approx\ \Delta_\mathrm{BW}$}. Then one can determine $d_\mathrm{nuc}^{(2)}$ for the isotope in question provided that the nuclear factor $d_\mathrm{nuc}^{(1)}$ of the reference isotope, the RHFA value $^1\Delta^2$, and the atomic part of the BW correction $\varepsilon_\mathrm{at}$ are known, \begin{align}\label{rhfa2} d_\mathrm{nuc}^{(2)} = d_\mathrm{nuc}^{(1)} +\frac{^1\Delta^{2}}{\varepsilon_\mathrm{at}}. \end{align} These factors can be compared with those calculated within the framework of the single-particle nuclear model~\cite{BW50,B51}. One can expect that this model works fairly well for $^{39}$K, which has one proton hole ($\pi d_{3/2}^{-1}$) with respect to the doubly magic $^{40}$Ca. At the same time, it was shown~\cite{stone73} that the BW correction (i.e. the $d_\mathrm{nuc}$ factor) for the $\pi d_{3/2}$ state is anomalously large and sensitive to small perturbations, for example, in the case of gold \mbox{$I^{\pi}=3/2^+$} isotopes~\cite{dem20,B2020,roberts21}. Thus, the study of \mbox{$I^{\pi}=3/2^+$} potassium isotopes with one hole in the closed proton shell can give additional insight into this single-particle nuclear structure. \section{Single-particle nuclear model} The nuclear magnetization mainly arises due to the spin polarization and the orbital motion of the valence nucleon. The nuclear $g$-factor is given by the famous Land{\'e} formula, \begin{align} \label{g_S} \begin{split} g_I = &\left [ \frac{1}{2} -\frac{L(L+1)-3/4}{2I(I+1)} \right]g_S\\ +&\left [ \frac{1}{2} +\frac{L(L+1)-3/4}{2I(I+1)} \right]g_L\,.
\end{split} \end{align} Introducing $\sigma$ (the average projection of the odd-particle spin on the direction of $\bm{I}$) in accordance with the relation: \begin{align} \label{4sigma} g_I = \frac{\sigma}{I} g_S + \frac{(I-\sigma)}{I}g_L, \end{align} we obtain, \begin{subnumcases} {\label{g_S2}\sigma =} \label{g_S21} ~\frac12, & \text{$I = L +\tfrac12$}\\ \label{g_S22} -\frac{I}{2(I+1)}, & \text{$I = L -\tfrac12$}. \end{subnumcases} The spin $g$-factor $g_S$ is chosen from the condition that Eqs. (\ref{g_S}) and (\ref{4sigma}) reproduce the experimental $g$-factor value after setting $g_L = 1$ for the proton and $g_L = 0$ for the neutron~\cite{Sha94}. Such a choice of $g_L$ gives $g_S$ within the range from 0.84$g_p^\mathrm{free}$ to 0.95$g_p^\mathrm{free}$ (the free-proton $g$ factor is $g_p^\mathrm{free} = 5.586$) for the considered potassium $I^\pi = 3/2^+$ isotopes. Then the BW correction $\varepsilon$ can be represented as a linear combination of the spin and orbital contributions $\varepsilon_S$ and $\varepsilon_L$, with the weights determined by \Eref{4sigma}, \begin{align} \label{eps_frac} \begin{split} \varepsilon = &\frac{\sigma g_S}{Ig_I}\varepsilon_S + \left(\frac{I-\sigma}{I} \right )\frac{g_L}{g_I}\varepsilon_L\,. \end{split} \end{align} One can represent $\varepsilon_S$ and $\varepsilon_L$ according to \citet{BW50} in the following form, \begin{align} \label{eps_SL} \varepsilon_S = (1 - k\zeta)\varepsilon_\mathrm{at}, \quad \varepsilon_L = (1 + k)\varepsilon_\mathrm{at}\,. \end{align} Here, $k \approx -0.38$~\cite{BW50}, and $\zeta$ is the so-called spin asymmetry parameter~\cite{bellac,B51}. If the valence nucleon is in an $L\ne 0$ state, then the spin density is asymmetric and an additional contribution to the spin part of the BW correction appears.
Expressions for $\zeta$ were suggested by \citet{B51}: \begin{subnumcases} {\zeta = \label{zeta_both}} \label{BW_zeta+} \frac{2I-1}{4(I+1)}, &\text{$I = L+\tfrac12$}\\ \label{BW_zeta-} \frac{2I+3}{4I}, &\text{$I = L-\tfrac12$.} \end{subnumcases} The nuclear factor can be found from Eqs.~(\ref{4sigma})--(\ref{eps_SL}) as, \begin{align} \label{F2} d_\mathrm{nuc} = 1 + k\left [ 1 - (1+\zeta)\frac{\sigma g_S}{Ig_I}\right ]\,. \end{align} \begin{table}[h!] \caption{\label{tbl:energies} The binding energies (in au) of the low-lying electron states of the potassium atom relative to the $\rm K^+$ core. The rows DHF, MBPT, and LCC correspond to the Dirac--Hartree--Fock, Dirac--Hartree--Fock plus MBPT, and Dirac--Hartree--Fock plus LCC methods, respectively. We take into account Breit corrections at the DHF stage of the calculations. The experimental data and the theoretical error (in \%) are listed in the last two rows.} \begin{tabular}{lcccc} \hline \\[-3mm] Method &$4s_{1/2}$&$4p_{1/2}$& $4p_{3/2}$\\ \hline DHF &{~~0.1475~~}&{~~0.0957~~}&{~~0.0955~~}\\ MBPT &{~~0.1609~~}&{~~0.1007~~}&{~~0.1004~~}\\ LCC &{~~0.1601~~}&{~~0.1006~~}&{~~0.1003~~}\\ Expt~\cite{NIST} &{~~0.1595~~}&{~~0.1004~~}&{~~0.1001~~}\\ Diff.with expt. &\multicolumn{1}{c}{$0.36$\%} &\multicolumn{1}{c}{$0.21$\%}&\multicolumn{1}{c}{$0.21$\%}\\ \hline \end{tabular} \end{table} When the nuclear factor is large, a more accurate estimate of the parameter $k$ entering \Eref{eps_SL} than that of Ref.~\cite{BW50} is needed. This parameter can be calculated directly by solving the Schr{\"o}dinger equation with the Woods--Saxon potential~\cite{nuc_so} for the valence nucleon. After that, the radial wave function of the valence nucleon is used to compute the ratio $\varepsilon_L/\varepsilon_\mathrm{at}$ as proposed in Refs.~\cite{zherebtsov00,yerokhin08}. The ratio $\varepsilon_L/\varepsilon_\mathrm{at} = 0.621(2)$, corresponding to the parameter \mbox{$k = -0.379(2)$}, is quite stable for all considered potassium isotopes.
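For the $\pi d_{3/2}$ configuration ($I = L - \tfrac12$ with $L = 2$), Eqs.~(\ref{g_S})--(\ref{F2}) can be evaluated in a few lines. The Python sketch below (an illustration, not our production code) reproduces the single-particle factor $d_\mathrm{nuc} \approx -2.1$ for $^{39}$K quoted in \Tref{tbl:hfs_K}:

```python
def single_particle_dnuc(I, L, g_I, g_L, k=-0.379):
    """Single-particle nuclear factor d_nuc for a valence nucleon
    with spin I, orbital momentum L, and measured g-factor g_I."""
    if abs(I - (L + 0.5)) < 1e-9:          # stretched state, I = L + 1/2
        sigma = 0.5
        zeta = (2 * I - 1) / (4 * (I + 1))
    else:                                   # I = L - 1/2
        sigma = -I / (2 * (I + 1))
        zeta = (2 * I + 3) / (4 * I)
    # g_S is fixed so that the Lande decomposition reproduces the
    # experimental g-factor with g_L = 1 (proton) or g_L = 0 (neutron)
    g_S = (I * g_I - (I - sigma) * g_L) / sigma
    d_nuc = 1 + k * (1 - (1 + zeta) * sigma * g_S / (I * g_I))
    return g_S, d_nuc


# 39K: pi d_{3/2} proton hole, I = 3/2, L = 2, g_L = 1
g_S, d_nuc = single_particle_dnuc(I=1.5, L=2, g_I=0.2609775, g_L=1.0)
print(round(d_nuc, 1))   # -2.1
```

The resulting $g_S \approx 4.70$ indeed lies at the lower edge of the quoted range $(0.84$--$0.95)\,g_p^\mathrm{free}$.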
Deviations between the numerical results determine the uncertainty of $k$. \section{Calculation results} We consider the ground and valence-excited configurations of the potassium atom, which can be represented as a single valence electron above the $3s^{2}3p^{6}$ electron shells included in the atomic core. The core-valence and core-core correlations are treated perturbatively. All calculations are performed using the Dirac--Coulomb--Breit Hamiltonian. Breit corrections, including both the magnetic term and the retardation term in the zero-frequency limit, are taken into account in accordance with Refs.~\cite{breit1,breit2}. We start by solving the Dirac--Hartree--Fock (DHF) equations for the core and valence orbitals up to $5p_{3/2}$. After that we merge these orbitals with B-splines of order 8 as described in Ref.~\cite{basis} to form a basis set for calculating the correlation corrections. The basis set $22spdfgh$ includes 230 orbitals for partial waves with orbital angular momentum $l$ from 0 to 5. Correlation corrections to the HFS include corrections to the hyperfine operator and to the many-electron wave functions. To account for the correlation corrections to the effective hyperfine operator we use the random phase approximation (RPA) with structural radiation corrections \cite{KPJ01}. These corrections include, in particular, the spin polarization of the core shells, down to $1s$. We use second-order many-body perturbation theory (MBPT) \cite{DFK96b,KPST15} and the linearized coupled-cluster method with single and double excitations (LCC) \cite{blundell89,blundell91,SKJJ09} to take into account correlation corrections to the wave function. In both cases these corrections are included in the self-energy contribution to the effective Hamiltonian for a single valence electron~\cite{DFKP98}. The energy dependence of the effective Hamiltonian is taken into account as discussed in Refs.~\cite{DFK96b,SKJJ09}.
As seen from Tables 1 and 2, the LCC results agree with the experimental data better than the MBPT ones. We have already seen the same preference for the LCC results in our previous calculations~\cite{dem20}. \begin{table}[tbh] \caption{\label{tbl:hfs_K} The atomic parameters ${\cal A}_0$, $b_N$, $b_M$, and HFS constants for the lower levels of K I. We compare HFS constants for $^{39}\mathrm{K}$ ($g_I = 0.2609775 (2)$~\cite{39K_mu} and single-particle factor $d_\mathrm{nuc} = -2.1$) with available experimental data~\cite{K_st_s,K_st_p}.} \begin{tabular}{lrrrr} \hline \\[-3mm] Method &${\cal A}_0$ (MHz)&$b_N$&$b_M$&$A$ (MHz)\\ \hline \multicolumn{5}{c}{$4s_{1/2}$}\\ DHF &564.4&{0.218~~}&{0.079~~} & 147.2\\ RPA &697.9&{0.216~~}&{0.078~~} & 182.0\\ RPA+MBPT &915.5&{0.206~~}&{0.078~~} & 238.8\\ RPA+LCC &888.1&{0.206~~}&{0.078~~} & 231.6\\ Experiment ($^{39}\mathrm{K}$)&&\multicolumn{3}{r}{230.8598601(3)}\\ Relative error &&&& $0.3\%$\\ \multicolumn{5}{c}{$4p_{1/2}$}\\ DHF &63.6 &{0.002~~}&{0.001~~} & 16.6\\ RPA &82.4 &{$-0.010$~~}&{$-0.003$~~} & 21.5\\ RPA+MBPT &110.1&{$-0.004$~~}&{$-0.001$~~} & 28.7\\ RPA+LCC &107.7 &{$-0.004$~~}&{$-0.001$~~} & 28.1\\ Experiment ($^{39}\mathrm{K}$)&&&\multicolumn{2}{r}{ 27.775(42)}\\ Relative error &&&& $1.2\%$\\ \multicolumn{5}{c}{$4p_{3/2}$}\\ DHF &12.4 &{0.000~~}&{0.000~~}& 3.2\\ RPA &20.7 &{0.050~~}&{0.016~~}& 5.4\\ RPA+MBPT &24.0 &{0.029~~}&{0.008~~}& 6.3\\ RPA+LCC &23.4 &{0.030~~}&{0.008~~}& 6.1\\ Experiment ($^{39}\mathrm{K}$)&&&\multicolumn{2}{r}{6.093(25)}\\ Relative error &&&& $1.0\%$\\ \hline \end{tabular} \end{table} A comparison of the theoretical binding energies of the $4s_{1/2}$, $4p_{1/2}$, and $4p_{3/2}$ states with experiment is given in \Tref{tbl:energies}. The experimental data used in this comparison are from Ref.~\cite{NIST}. Our final theoretical uncertainty ranges from 130 cm$^{-1}$ for the $4s$ state to 50 cm$^{-1}$ for the $4p$ states of K I.
The theoretical result for the fine-structure $4p_{1/2} - 4p_{3/2}$ interval, 58.2 cm$^{-1}$, is in excellent agreement with the experimental value, 57.7~cm$^{-1}$~\cite{NIST}. The calculated HFS atomic parameters for the $4s_{1/2}$, $4p_{1/2}$, and $4p_{3/2}$ states of potassium are given in \Tref{tbl:hfs_K}. The parameter ${\cal A}_0$ is highly sensitive to the treatment of the electronic correlations. The uncertainty of the ${\cal A}_0$ calculations can be reliably estimated for the $4p$ states. The changes in the $A(4p_{1/2})$ and $A(4p_{3/2})$ constants due to BR corrections are only 0.005\% and 0.04\%, respectively. The contributions of BW corrections are of the same order of magnitude. Both BR and BW corrections can therefore be neglected for these states in the present consideration. Thus, the deviation of the theoretical $A(4p)$ constants from the experimental values stems exclusively from the incompleteness of the ${\cal A}_0$ calculations. Our LCC results agree with the experimental data for $\rm ^{39}K$ within 1.2\% for the $A(4p_{1/2})$ constant and 1.0\% for the $A(4p_{3/2})$ one. It should be noted that taking into account partial triple excitations within the LCC method significantly reduces the calculation uncertainty of the $A(4p_{1/2})$ constant~\cite{saf_k}. We conservatively estimate the possible uncertainty of the ${\cal A}_0\, (4s_{1/2})$ calculation for K I within the LCC method as 1.2\%. The relative correlation contributions to ${\cal A}_0$ for the $4s_{1/2}$ and $4p_{1/2}$ states are close to each other ($\sim 60$\%, see \Tref{tbl:hfs_K}); therefore, one can expect that the accuracy of the ground-state calculation is not worse than that for the excited state. The calculation of the parameter $b_N$ requires a variation of the nuclear radius, which leads to a change in the integration grid within the framework of our software package~\cite{KPST15}. Therefore, the parameter $b_N$ is more sensitive than $b_M$ to the size of the basis set.
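The quoted BR magnitudes for the $4p$ states follow directly from Eq.~(\ref{BR-bN}) with the RPA+LCC values of $b_N$ from \Tref{tbl:hfs_K}; a quick numerical check (the charge radius $r_\mathrm{rms} \approx 3.44$~fm of $^{39}$K is an assumed input here):

```python
import math

ALPHA_Z = 19 / 137.035999                  # alpha * Z for potassium
KAPPA = 2 * math.sqrt(1 - ALPHA_Z ** 2) - 1
LAMBDA_C = 386.159267                      # reduced Compton wavelength, fm
R = math.sqrt(5 / 3) * 3.44                # homogeneous-sphere radius, fm

x = (R / LAMBDA_C) ** KAPPA                # common radial factor

# RPA+LCC values of b_N (4p_{1/2}: -0.004, 4p_{3/2}: 0.030)
delta_4p12 = abs(-0.004) * x               # |BR correction| for 4p_{1/2}
delta_4p32 = abs(0.030) * x                # |BR correction| for 4p_{3/2}
print(f"{delta_4p12:.3%}  {delta_4p32:.2%}")   # about 0.005% and 0.04%
```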
As a final $b_N$ value for the $4s_{1/2}$ state in potassium we adopted the LCC result, with the uncertainty covering the deviations of the results obtained in the frameworks of the different approximations (see column 3 of \Tref{tbl:hfs_K}): $b_N (4s_{1/2}) = 0.206(12)$. Then the BR correction for the ground state of $\rm ^{39}K$ is 0.26(2)\%. Using the nuclear radii from Ref.~\cite{angeli13} we found that $\prescript{39}{}\Delta^{47}_\mathrm{BR} = 4\cdot 10^{-6}$. The $b_M$ parameter for the $4s_{1/2}$ state of potassium is stable at each stage of the treatment of correlation effects. Conservatively assuming the same relative error for the parameters $b_M$ and $b_N$, one obtains $\varepsilon_\mathrm{at} = 0.098(4)\%$ for the ground state of K I. The atomic part of the BW correction depends weakly on the principal quantum number of the electron state~\cite{shab01}. Because of that, the $\varepsilon_\mathrm{at}$ corrections calculated for $s$ states of the H-like ion and of the neutral atom should be comparable. Our result coincides with $\varepsilon_\mathrm{at} = 0.098\%$ obtained for the ground state of the H-like potassium ion~\cite{Sha94}. Note that, following \citet{B51}, an overestimated value $\varepsilon_\mathrm{at} = 0.125\%$ was used in Refs.~\cite{papuga13,papuga14}. A comparison of the atomic parameters for the HFS constants of H-like ions calculated by us with the results of \citet{Sha94} and \citet{B51} is given in Ref.~\cite{ita20}. \begin{figure}[tbh] \includegraphics[height=8.5cm]{K_isotopes.pdf} \caption{\label{exp} The $A(4p_{1/2})/g_I$ values for the potassium isotopes for which the $g$-factors were measured independently. Dots with error bars -- experimentally measured $A(4p_{1/2})$ constants from Refs.~\cite{k37p_exp,K_st_p,papuga14} divided by the nuclear $g$-factors from Refs.~\cite{k37_exp,39K_mu,stone19,k42_exp}.
Dotted line -- weighted mean value for these isotopes.} \end{figure} \section{Evaluation of the nuclear magnetic moments} Previously, the nuclear $g$-factors of potassium isotopes far from stability were extracted from the $A(4s_{1/2})$ constants~\cite{papuga14}, neglecting the RHFA. Additional uncertainties of 0.3\% and 0.5\% were added for the odd-even and odd-odd isotopes, respectively, to account for the RHFA. Note that this estimate of the RHFA contribution is based on experimental data with uncertainties of $\sim50-100$\% and on theoretical calculations with unknown accuracy; therefore, a conservative estimate of the additional uncertainties due to the RHFA should be 0.5\% and 0.8\% in the odd-even and odd-odd cases, respectively (see Table~2 in~\cite{papuga14}). However, the $A(4p_{1/2})$ constants are more convenient for the extraction of the nuclear magnetic moments. Due to the negligible magnitudes of both BR and BW corrections, the $A(4p_{1/2})/g_I$ values should be the same for different potassium isotopes. In order to estimate this value from experimental data we use independently measured HFS constants and nuclear $g$-factors of the $\rm ^{37,\,39-42}K$ isotopes (see~\Fref{exp}). The weighted mean value ${\cal A}_{0}^\mathrm{mean} (4p_{1/2}) = 106.44(8)$~MHz was used to extract the nuclear $g$-factor: $g_I = \tfrac{A(4p_{1/2})}{{\cal A}_{0}^\mathrm{mean} (4p_{1/2})}$. The comparison of our results to the literature values from Ref.~\cite{papuga14}, with the uncertainties due to the RHFA increased in accordance with the more conservative prescription outlined above, is presented in~\Tref{tbl:g_factors}. The new results yield smaller uncertainties than the literature data~\cite{papuga14}, except for $\rm ^{51}K$, due to the large relative error of the experimental HFS constant. \begin{table}[tbh] \caption{\label{tbl:g_factors} The nuclear magnetic moments of potassium isotopes extracted from the experimentally measured $A (4p_{1/2})$ constants~\cite{papuga14}.
The results are compared to the literature data.} \begin{tabular}{ccrrr} \hline \\[-3mm] \setlength{\tabcolsep}{25pt} Isotope&$I^\pi$& $A (4p_{1/2})$, MHz&\multicolumn{2}{c}{$\mu$, $\mu_N$}\\ &&Ref.~\cite{papuga14}&\multicolumn{1}{c}{this work}&\multicolumn{1}{c}{Ref.~\cite{papuga14}} \\ \hline \\[-3mm] 38&$3^+$&48.9(2)&1.378(6)&1.371(12)\\ 44&$2^-$&$-45.8(2)$&$-0.861(4)$&$-0.857(8)$\\ 46&$2^-$&$-55.9(2)$&$-1.050(4)$&$-1.046(9)$\\ 47&$1/2^+$&411.8(2)&1.934(2)&1.929(10)\\ 48&$1^-$&$-96.3(3)$&$-0.905(3)$&$-0.900(7)$\\ 49&$1/2^+$&285.6(7)&1.342(3)&1.339(7)\\ 51&$3/2^+$&36.6(9)&0.516(13)&0.513(5)\\ \hline \end{tabular} \end{table} \begin{table}[tbh] \caption{\label{tbl:d_nuc} The $d_\mathrm{nuc}$ factors determined from the RHFA values~\cite{Per13,papuga14} by \Eref{rhfa2} with $^{39}\mathrm{K}$ as the reference isotope. For comparison, the $d_\mathrm{nuc}$ factors calculated within the single-particle nuclear model are given in the last column.} \begin{tabular}{crrcrrr} \hline \\[-3mm] \multicolumn{2}{c}{Isotope\ \ $I^\pi$}&\multicolumn{2}{c}{$g$-factor}&\multicolumn{1}{c}{$\prescript{39}{}\Delta^{*}$, \%}&\multicolumn{2}{c}{$d_\mathrm{nuc}$}\\ &&&&\multicolumn{1}{c}{Eq.\eqref{rhfa}}&\multicolumn{1}{c}{Eq.\eqref{rhfa2}}&\multicolumn{1}{c}{Eq.\eqref{F2}} \\ \hline \\[-3mm] 37&$3/2^+$&$0.13547$(4)&\cite{k37_exp}&$-$0.249(35)&$-4.6$(4)&$-5.3$ \\ 39&$3/2^+$&$0.2609775$(2)&\cite{39K_mu}&\multicolumn{1}{c}{0.0}&\multicolumn{1}{c}{$-$}&$-2.1$\\ 40&$4^-$ &$0.324493$(8) &\cite{stone19}&0.466(19)&2.7(2)&$-$\\ 41&$3/2^+$&$0.143248$(3)&\cite{stone19} &$-$0.22936(14)&$-4.4(1)$ & $-5.0$\\ 42&$2^-$ &$-0.57125$(3)&\cite{k42_exp} &0.336(38)&1.3(4)&$-$\\ 47&$1/2^+$ &3.869(3)&&0.272(90)&0.7(9)&1.0\\ \hline \end{tabular} \end{table} \section{Evaluation of nuclear factors} For a number of potassium isotopes, the relative hyperfine anomalies are known with sufficient accuracy~\cite{Per13}.
For the reference isotope, $^{39}$K, the single-particle nuclear model [\Eref{F2}] gives $d^{(39)}_\mathrm{nuc} = -2.1$. This factor corresponds to the BW correction $\varepsilon^{(39)} = -0.205\%$. The nuclear factors for $\rm ^{37,\,41}K$ calculated within the single-particle nuclear model (see column 6 in \Tref{tbl:d_nuc}) happen to be lower than the corresponding experimental values (see column 5 in \Tref{tbl:d_nuc}). Note that the magnetic moment of $\rm ^{39}K$ (0.39~$\mu_N$) is nearly twice as large as the magnetic moments of $\rm ^{37,\,41}K$ ($\sim 0.20\,\mu_N$) with the same spin ($I^\pi=3/2^+$) and leading nuclear configuration $\pi d_{3/2}$. Correspondingly, $d_\mathrm{nuc}(^{37,\,41}\mathrm{K}) \approx 2\times d_\mathrm{nuc}(^{39}\mathrm{K})$, and the single-particle evaluations underestimate $d_\mathrm{nuc}$ for $\rm ^{37,\,41}K$. Keeping in mind the strong single-particle nature of the $\rm ^{39}K$ ground state, this disagreement indicates a mixing of the nuclear configurations in $\rm ^{37,\,41}K$. Surprisingly, a similar jump-like behavior was found for gold nuclei with $I^\pi=3/2^+\,(\pi d_{3/2})$: \mbox{$\mu(^{199}\mathrm{Au})=0.27~\mu_N$}, whereas $\mu(^{191,\,193,\,195,\,197}\mathrm{Au}) \approx 0.15~\mu_N$ and $d_\mathrm{nuc}(^{191,\ldots,197}\mathrm{Au}) \approx 2\times d_\mathrm{nuc}(^{199}\mathrm{Au})$~\cite{dem20}. Besides, the single-particle model does not describe well the $d_\mathrm{nuc}$ parameter in light Au isotopes~\cite{dem20}. Similarly, this model fails to reproduce the experimental $d_\mathrm{nuc}$ values for $\rm ^{37,\,41}K$. This similarity supports the assumption of Ref.~\cite{dem20} that, in contrast to the rather pure ground state of $\rm ^{199}Au$, the ground state of $\rm ^{197}Au$ (and lighter odd Au isotopes with $I^\pi=3/2^+$) has a noticeable admixture of other configurations.
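The restoration of the nuclear factors from the RHFA values via \Eref{rhfa2} is elementary; a short sketch using the reference factor $d^{(39)}_\mathrm{nuc} = -2.1$ and $\varepsilon_\mathrm{at} = 0.098\%$ quoted above:

```python
def dnuc_from_rhfa(d_ref, rhfa, eps_at):
    """Nuclear factor of an isotope from its RHFA w.r.t. the reference."""
    return d_ref + rhfa / eps_at


D39, EPS_AT = -2.1, 0.098e-2               # reference factor and atomic BW part

# RHFA values relative to 39K (in absolute units, not percent)
d37 = dnuc_from_rhfa(D39, -0.249e-2, EPS_AT)
d41 = dnuc_from_rhfa(D39, -0.22936e-2, EPS_AT)
print(round(d37, 1), round(d41, 1))        # -4.6 -4.4, matching the d_nuc table
```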
\section{Conclusions} We calculate the hyperfine structure constants of the low-lying states of the potassium atom taking into account the Bohr--Weisskopf and Breit--Rosenthal effects. In order to separate these effects we use two models of the nuclear magnetization distribution with the same homogeneously distributed charge. The first model describes a point-like magnetic dipole at the center of the nucleus, whereas the second assumes a homogeneously magnetized sphere of nuclear radius. We extract the atomic parameters $b_N$, $b_M$, and ${\cal A}_0$ for each considered state. To estimate the BW correction we assume the atomic-nuclear factorization and use the $d_\mathrm{nuc}$ factor. According to our calculations, the $4p_{1/2}$ state of K I is almost free from both BR and BW corrections. Using this fact, we obtain the mean value ${\cal A}_0^\mathrm{mean} (4p_{1/2}) = 106.41(8)$~MHz from the experimental data. The result of our LCC calculations agrees with this value within 1.2\%. We use the ${\cal A}_0^\mathrm{mean}$ value to extract the nuclear magnetic moments of short-lived potassium isotopes from the $A(4p_{1/2})$ constants. Experimentally measured relative hyperfine anomalies provide the relation between the $d_\mathrm{nuc}$ factors of different isotopes. One can consider the configuration of the $^{39}$K nuclear ground state as a single proton hole with respect to the doubly magic $^{40}$Ca. This justifies our choice to use the single-particle $d_\mathrm{nuc}^{(39)} = -2.1$ as a reference to restore the nuclear factors of other isotopes from the RHFA values. The striking similarity of the jump-like behavior of the magnetic moments and $d_\mathrm{nuc}$ parameters in K and Au isotopes supports the assumption of a configuration mixing in light odd Au isotopes with $I^{\pi} = 3/2^+$~\cite{dem20}. \subsection*{Acknowledgments} We thank Prof. M.S. Safronova for helpful discussions and for providing the LCC code.
This research was funded by the Russian Science Foundation Grant \textnumero 20-62-46006. \bibliographystyle{apsrev}
\section{Introduction} General relativity (GR) has proven to be a good theoretical framework to describe many phenomena of gravitational origin in the Universe. In particular, combined with quantum field theory (QFT) in curved backgrounds and notions of standard physics, this framework is able to explain, at least to a certain level of accuracy, the evolution of the primordial perturbations that served as seeds for the present large-scale structures and gave rise to the observed cosmic microwave background (CMB) \cite{mukh,langlois}. Despite this success, there is a common belief in the scientific community that, in regimes where the gravitationally relevant quantities approach the Planck scale, GR should be corrected. This idea is partially supported by the unavoidable appearance of spacetime singularities in the relativistic theory, such as the big bang \cite{HawElli}. In those regimes, one can argue that a framework which incorporates nonperturbative quantum effects of the geometry (and its interplay with matter) might overcome the breakdown of the classical theory. In the search for this elusive theory of quantum gravity, the identification and analysis of observations that might unveil some traces of those quantum gravitational effects is most important, in order to eventually falsify the theoretical predictions. A possible observational window to such quantum phenomena, or to other alternative modifications of GR, might perhaps be found in the power spectra of the CMB. The increasing precision of the most recent observations \cite{planck,planck-inf} seems to indicate the existence of anomalies in the spectra at large scales. Although the observations at the largest scales are inevitably affected by cosmic variance errors \cite{mukh}, there might exist anomalies even for multipoles of moderately large number, around $\ell \sim 22$ \cite{planck-inf}.
This explains the present interest in discussing the potential implications in cosmology of a quantization of the geometry. One of the most promising candidates for a theory of quantum gravity is the formalism of loop quantum gravity (LQG). This is a nonperturbative canonical quantization of GR which, in a background-independent way, adopts the strategy followed by Yang-Mills theories, adapted to the description of the gravitational degrees of freedom \cite{lqg}. With an eye on possible tests of the physical consequences that this nonperturbative quantization implies, the techniques of LQG have been applied to cosmological spacetimes, giving rise to a field of research known as loop quantum cosmology (LQC) \cite{lqc1,lqc2,ap,lqc3}. One of the most remarkable results reached for homogeneous spacetimes in LQC is the, quite generic, quantum resolution of the big bang singularity, which becomes what has been named the big bounce \cite{ap,singh}. Specifically, when the (so-called) polymeric quantization that is characteristic of LQG is applied to a homogeneous and isotropic cosmology coupled to a massless scalar field, one obtains families of physical states that are peaked on trajectories that only depart from those of GR when they approach the cosmological singularity, which is then replaced with a quantum bounce that connects a contracting branch of the Universe with an expanding one \cite{aps,iaps}. Moreover, these trajectories are the solutions of an effective Hamiltonian dynamics that incorporates quantum corrections, leading to a modification of GR that is often called effective LQC \cite{ap,taveras}. The resolution of the initial singularity achieved in LQC opens new avenues to explore the physics of the Early Universe, extending the study of cosmological perturbations beyond the onset of inflation, back to the epochs when the bounce occurred and the eras previous to it, reaching regimes that may be close to the Planck scale. 
In fact, the effects that the quantum nature of the spacetime may have exerted on the evolution of the cosmological perturbations have been investigated within the context of LQC by a considerable number of authors in recent years, following several different theoretical approaches \cite{hybr-inf1,hybr-inf2,hybr-num, hybr-inf3,hybr-ref,hybr-inf5,hybr-ten,dressed1,dressed2,dressed3,dressed4,effective2,effective3,effective4,effective5}. The possibility that these effects might have left an imprint on the CMB depends on the energy scales associated with the process of inflation and the beginning of the slow-roll regime \cite{Morris,GUI}, details that in turn depend on the type of inflationary spacetimes that are favored in LQC \cite{inflationLQC} (and on some phenomenological parameters). For a recent review of cosmological perturbations in LQC, we refer the reader to Ref. \cite{Edward}. There exist two approaches to LQC that are based on the combination of a polymeric quantization of the geometry with a Fock quantization of the perturbations: the hybrid approach \cite{hybr-inf1,hybr-inf2,hybr-num,hybr-inf3,hybr-ref,hybr-inf5,hybr-ten} and the dressed metric approach \cite{dressed1,dressed2,dressed3,dressed4}. Their common strategy of combining different types of quantization is based on the assumption that there should exist a regime of physical interest where the main quantum gravity effects come from the homogeneous sector of the cosmology, which is then quantized by means of LQG techniques, while the inhomogeneities can be described with more conventional techniques from QFT in curved backgrounds. This type of strategy was, in fact, first introduced for the quantization of some inhomogeneous Gowdy cosmologies \cite{hybrid1,hybrid2}. 
The genuine hybrid approach treats the whole system, composed of the background spacetime and its perturbations, as a constrained canonical system, starting from a Hamiltonian formalism that is obtained by truncating the action at quadratic perturbative order \cite{hybr-ref}. The other approach considers the quantization of the homogeneous sector first, obtains a dressed metric that incorporates the most important quantum corrections within homogeneity, and then lifts the corresponding dynamical trajectories to the truncated phase space that describes the perturbed system at the desired order of approximation \cite{dressed2}. In both approaches, the ultraviolet behavior of the perturbations is standard, inasmuch as one gets field equations for the gauge-invariant perturbations that are hyperbolic in the ultraviolet regime \cite{hybr-inf1,hybr-inf2,dressed2}. These field equations for the tensor perturbations and for the so-called Mukhanov-Sasaki invariant \cite{sasa,kodasasa,mukhanov}, which describes the true degrees of freedom of the scalar perturbations, are in fact equivalent to a(n infinite) collection of harmonic oscillators with a time-dependent mass. The quantization of the geometry only alters the value of that time-dependent mass with respect to the standard value in GR. Moreover, the power spectra of the perturbations resulting from these equations have been computed within both approaches, obtaining results that are compatible with the observations \cite{dressed3,GUI}. The purpose of this paper is to make clear that, in spite of all the common features shared by the two approaches, they lead in fact to different time-dependent masses, both for the tensor and for the scalar perturbations. As we will see, this is so even when backreaction is neglected and the quantum geometry of the background is described in terms of effective LQC, which is the situation best studied in the literature \cite{dressed3,Morris,GUI}. 
This difference is important, because it implies that the predictions for the power spectra of the CMB, although similar, are not completely identical. We will explain the reason for this discrepancy, rooted, as expected, in the distinct procedures followed in the two approaches in order to include quantum geometry corrections in the dynamics of the perturbations. In particular, we will study in detail the difference between the time-dependent masses of the two approaches at the bounce. The value and behavior of the mass at the bounce that replaces the singularity in LQC are especially relevant, because this event marks a privileged instant in the evolution of the Universe, and therefore provides a natural choice of time to set some, physically motivated, initial conditions for the perturbations. In fact, these initial conditions are typically understood in cosmology as the definition of an initial vacuum state, both for the tensor and the scalar perturbations. In this sense, the properties of the mass, and more specifically its positivity, can prove very important for the correct definition of those initial conditions. For instance, this would be the case if one wants to construct initial adiabatic vacua for the gauge-invariant perturbations \cite{dressed2,dressed3,dressed4,Morris,GUI,hybr-pred}. We will study this positivity, paying special attention in particular to background spacetimes in which the energy density of the inflaton at the bounce is dominated by the kinetic contribution. In the two considered approaches, these are the most interesting and most studied backgrounds \cite{inflationLQC,dressed3,GUI}, because they lead to spectra for the CMB that are compatible with observations and may include quantum geometry corrections. However, we will not restrict the discussion exclusively to those backgrounds. 
In fact, for spacetimes in which the inflaton energy density is so highly dominated by the kinetic contribution at the bounce that the potential can be safely ignored, the background solutions display a common behavior that can be calculated analytically, as shown in Refs. \cite{waco,wacopre}. This analytic solution can be used to determine the positivity or negativity of the mass at the bounce, provided our considerations are circumscribed to this sector of extreme kinetic dominance.\footnote{We thank an anonymous referee for remarking on this point and for calling our attention to the two cited references.} Nonetheless, with the aim of specifying in a quantitative way the regions of solutions where one can ensure that the mass is positive at the bounce, here we want to go beyond that regime of extreme kinetic dominance and complete an analysis that includes sectors of backgrounds with a potential energy density that does not need to be totally negligible. According to the numerical simulations presented in Ref. \cite{waco}, this is the case when the parameter that determines the equation of state of the inflaton at the bounce differs from unity by more than a few percent or, equivalently, when the absolute value of the potential is more than a few percent of the inflaton energy density at the bounce. The article is structured as follows. In Sec. \ref{sec1}, we first review some basic results about effective homogeneous LQC. Then, we summarize the main ideas that underlie the derivation of the field equations for the gauge-invariant perturbations in the hybrid and dressed metric approaches to LQC, and show the explicit form of the time-dependent masses that appear in those equations when backreaction is neglected and the background obeys the dynamics of effective LQC. 
At the end of this section, we briefly comment on the difference between the values of these masses in the two considered approaches, identifying the origin of this discrepancy and explaining it on the basis of the different routes to quantization adopted in each case. In Sec. \ref{sec3}, we focus our study on the value of the time-dependent masses at the bounce. We carry out an analytical and numerical study of the properties of these masses, with special attention paid to the discussion of their positivity. We prove that the mass of the tensor perturbations is always negative (or zero) in the dressed metric approach, and positive in the hybrid approach for a set of background solutions that includes those with kinetically dominated bounces. For scalar perturbations, on the other hand, we demonstrate again the positivity for a sector of backgrounds around the kinetically dominated region in the hybrid case, whereas this is not so generically in the dressed metric approach. In particular, within kinetic dominance and around it, the time-dependent scalar mass of the dressed metric scheme is negative in the relevant case of a quadratic potential when the inflaton mass is not extremely far away from the range of values suggested by phenomenological considerations in LQC \cite{inflationLQC}. Finally, we summarize our results and conclude in Sec. \ref{concl}. Throughout the text, we set the Planck constant $\hbar$ and the speed of light equal to one. Planck units are then defined by taking the Newton constant $G$ also equal to one. \section{Field equations for the gauge-invariant perturbations}\label{sec1} We start by considering a homogeneous and isotropic, Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) spacetime with flat spatial hypersurfaces. For simplicity in the exposition and for mathematical convenience, we will assume compact sections, isomorphic to the three-torus $T^3$, with a compactification length given by a parameter $l_0$. 
Actually, if this parameter is chosen sufficiently large (much larger than the corresponding Hubble radius), the relevant analyses of cosmological perturbations carried out here become essentially equivalent to those of the noncompact case, which would be reached in a suitable limit with $l_0$ tending to infinity. The FLRW spacetime metric is characterized by a scale factor $a(t)$ and a homogeneous lapse $N_0(t)$. With coordinates adapted to the homogeneity, it can be written as \begin{align} ds^{2}=-N_0^2(t)dt^2+a^2(t)\,^0h_{ij} dx^i dx^j, \end{align} where $i,j=1,2,3$ denote spatial indices, $^0h_{ij}$ is the Euclidean metric on the compact spatial section, and $x^i$ are periodic Euclidean coordinates, with period equal to $l_0$. As a matter source, we minimally couple a homogeneous scalar field $\phi(t)$, with a potential $V(\phi)$, which will play the role of an inflaton field. Indeed, in the classical theory, this scalar field can serve to drive an inflationary period of the geometry. The quantization of this homogeneous and isotropic model, when restricted to a vanishing potential, has been thoroughly studied in LQC \cite{aps,iaps,mmo}. In particular, it is possible to construct a well-defined operator representing the Hamiltonian constraint of the system (see, e.g., \cite{mmo}). Among the solutions to this quantum constraint, it has been shown in a number of analyses \cite{iaps,mop} that there exist states which are highly peaked on trajectories generated by a certain effective Hamiltonian \cite{taveras}, which differs from the classical one by incorporating quantum corrections. Those trajectories have the remarkable property of avoiding the big bang singularity, which gets replaced with a big bounce that connects a contracting branch of the Universe with an expanding one \cite{iaps}. This bounce of quantum origin sets an upper bound on the energy density of the scalar field \cite{ap}. 
In the case of interest here of an inflationary cosmology with perturbations, the scalar field that serves as a matter source for the homogeneous sector of the system is not massless, but it is subject to a potential $V(\phi)$. Nonetheless, it is generally admitted (supported in part by the numerical simulations; see, e.g., \cite{ads,lambd}) that the influence of the potential will not change the effective behavior found in the case of the massless field. The Hamiltonian $H_{|0}^{\rm eff}$ that would generate the effective dynamics of the peaks of these states is then \begin{align}\label{effh} N_{0}H_{|0}^{\rm eff}=\frac{N_{0}}{2l_0^3 a^3}\left[\pi_\phi^2- \frac{3l_0^{6}a^{6}}{4\pi G\gamma^2 \Delta} \sin^2 \bigg(\frac{4\pi G\gamma\sqrt{\Delta}\pi_a}{3l_0^3 a^2}\bigg) + 2a^6 l_0^6 V(\phi)\right], \end{align} where $\pi_a$ and $\pi_{\phi}$ denote, respectively, the canonically conjugate momenta of $a$ and $\phi$. This effective LQC Hamiltonian, which must vanish on effective solutions, gives rise to the following modified Friedmann and Raychaudhuri equations for the geometry of the inflationary FLRW universe \cite{dressed3,GUI}: \begin{align}\label{LQC} \left(\frac{a'}{a}\right)^{2}=\frac{8\pi G}{3}a^2\rho \left(1-\frac{\rho}{\rho_{\rm max}}\right), \qquad \frac{a''}{a}=\frac{4\pi G}{3}a^2\rho\left(1+2\frac{\rho}{\rho_{\rm max}}\right)-4\pi G a^2 P\left(1-2\frac{\rho}{\rho_{\rm max}}\right). \end{align} Here, the prime denotes derivative with respect to the conformal time, defined by setting the lapse equal to the effective value of the scale factor, and $\rho$ and $P$ are the energy density and the pressure of the scalar field, \begin{equation}\label{density} \rho=\frac{1}{2}\left(\frac{\phi'}{a}\right)^2+V(\phi) ,\qquad P=\rho-2 V(\phi). 
\end{equation} Besides, the quantity $\rho_{\rm max}=3/(8\pi G \gamma^{2}\Delta)$ is the upper bound on the energy density of the inflaton field, where $\gamma$ is the so-called Immirzi parameter \cite{immi} and $\Delta=4\sqrt{3}\pi\gamma G$ is the area gap allowed by the spectrum of the area operator in LQG \cite{lqg}. When the bound $\rho_{\rm max}$ is reached, the Hubble parameter vanishes and the quantum bounce occurs. Finally, the inflaton field and its momentum satisfy the same relation as in GR, \begin{align}\label{scalar} \phi'=\frac{\pi_{\phi}}{l_{0}^3a^{2}}, \qquad \pi_{\phi}'=-l_0^3 a^4 V_{,\phi}. \end{align} In order to study small departures from homogeneity, one can now introduce inhomogeneous perturbations of both the metric and the scalar field to lowest nontrivial order in the Einstein-Hilbert action \cite{langlois,HH,shirai,langlois2,pintoneto1,pintoneto2}. According to their properties under symmetry transformations, these perturbations are typically classified as scalar, vector or tensor. In fact, at the considered perturbative order, the vector inhomogeneities are known to be pure gauge when the matter content is a scalar field. Therefore, we will devote most of our discussion to the scalar and tensor perturbations. It is convenient to expand these perturbations in a complete set of scalar, vector, and tensor harmonics. Specifically, given the Euclidean metric $^0h_{ij}$ and its associated affine connection, one can consider a complete set of eigenfunctions of its Laplacian, which can be understood as plane waves with wave vectors $\vec{k}$ that, in the compact case, are integer tuples multiplied by $2\pi/l_0$. From these eigenfunctions, as well as forming appropriate combinations of the metric $^0h_{ij}$ with its associated connection, one can construct the desired complete set of scalar, vector, and tensor harmonics \cite{Bardeen}. 
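As a purely illustrative aside (not part of the original analysis), the effective background dynamics reviewed above can be checked numerically. The following Python sketch works in units $G=1$ and with an illustrative value $\rho_{\rm max}=1$ (all names and parameter values are our own choices); for a massless scalar field, it integrates the continuity equation together with the effective Friedmann equation of Eq. \eqref{LQC}, verifying that the Hubble rate vanishes exactly at $\rho=\rho_{\rm max}$, that the density never exceeds this bound, and that the evolution reproduces the well-known closed-form solution of the massless case.

```python
import math

# Illustrative sketch (not from the paper): effective LQC background for a
# massless scalar field, in units G = 1, with a purely illustrative value of
# the critical density rho_max (in LQC, rho_max = 3/(8 pi G gamma^2 Delta)).
G = 1.0
RHO_MAX = 1.0

def hubble(rho):
    """|H| from the effective Friedmann equation H^2 = (8 pi G/3) rho (1 - rho/rho_max)."""
    return math.sqrt((8.0 * math.pi * G / 3.0) * rho * max(0.0, 1.0 - rho / RHO_MAX))

def rho_dot(rho):
    """Continuity equation for a massless scalar (P = rho) on the expanding branch."""
    return -6.0 * hubble(rho) * rho

def evolve(rho0, t_end, dt=1.0e-4):
    """Fourth-order Runge-Kutta integration of the energy density in proper time."""
    n_steps = int(round(t_end / dt))
    rho, history = rho0, [rho0]
    for _ in range(n_steps):
        k1 = rho_dot(rho)
        k2 = rho_dot(rho + 0.5 * dt * k1)
        k3 = rho_dot(rho + 0.5 * dt * k2)
        k4 = rho_dot(rho + dt * k3)
        rho += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        history.append(rho)
    return history

# At the bounce the density saturates its upper bound and the Hubble rate vanishes.
assert hubble(RHO_MAX) == 0.0

# Starting slightly below rho_max on the expanding branch, the density decays
# monotonically and never exceeds its bound.
rho0 = 0.999 * RHO_MAX
history = evolve(rho0, t_end=2.0)
assert all(later < earlier for earlier, later in zip(history, history[1:]))
assert all(0.0 < rho <= RHO_MAX for rho in history)

# The numerics reproduce the closed-form massless solution
# rho(t) = rho_max / [1 + 24 pi G rho_max (t + t0)^2].
t0 = math.sqrt((RHO_MAX / rho0 - 1.0) / (24.0 * math.pi * G * RHO_MAX))
rho_exact = RHO_MAX / (1.0 + 24.0 * math.pi * G * RHO_MAX * (2.0 + t0) ** 2)
assert abs(history[-1] - rho_exact) < 1.0e-5
```

In the noncompact or large-$l_0$ regime considered in the text, the fiducial factors of $l_0$ drop out of these background equations, which is why the sketch needs no reference to the compactification length.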
Let us notice at this point that, thanks to the compactness of the spatial sections, in all these harmonic expansions we can and will exclude their zero-modes, and regard them as part of the homogeneous metric and matter variables. \subsection{Hybrid quantization approach} Let us begin by summarizing the main ideas that underlie the hybrid (LQC) approach to the quantization of cosmological perturbations. For specific details about the derivation of the equations discussed here, we refer the reader to Refs. \cite{hybr-inf1,hybr-inf2,hybr-inf3,hybr-ref,hybr-inf5,hybr-ten}. The strategy followed in those works is, in short, to truncate the Einstein-Hilbert action at quadratic order in the perturbations, and then regard the entire truncated system as a constrained symplectic manifold. The Hamiltonian that results from this truncation is a linear combination of constraints, which capture the covariance of the relativistic system up to the order of the truncation. More specifically, this Hamiltonian is the sum of the following terms. On the one hand, the homogeneous lapse function $N_0$ is the Lagrange multiplier of the zero-mode of the Hamiltonian constraint, which is in turn formed by the Hamiltonian constraint of an FLRW model plus an infinite sum of functions that are quadratic in the mode coefficients of the scalar and tensor perturbations. On the other hand, the infinite number of mode coefficients of the perturbations of the lapse and the shift vector serve as the Lagrange multipliers of the linearization of the Hamiltonian constraint and of the momentum constraints of GR, respectively. This truncated cosmological system can be recast in terms of gauge-invariant canonical variables for the perturbations, as shown in Ref. \cite{hybr-ref}. The advantages of a description in terms of quantities that are invariant under the transformations generated by the linear perturbative constraints are evident. The main steps of this reformulation are as follows. 
First, with the considered matter content, the tensor perturbations of the metric are automatically gauge-invariant. On the other hand, by appropriately combining the scalar perturbations of the metric with those of the inflaton field, one can obtain the Mukhanov-Sasaki gauge-invariant field \cite{sasa,kodasasa,mukhanov}. Both the tensor and the scalar invariants remain gauge-invariant if one rescales them with a function of the homogeneous scale factor (and their conjugate momenta with the inverse of this function, up to the addition of terms that depend only on the configuration variables). Among the variables allowed by this freedom in the choice of gauge invariants, the hybrid approach fixes those which have configuration mode coefficients that, classically, obey the second-order equation of a harmonic oscillator with a time-dependent mass and without any friction term \cite{mukh,hybr-ref,hybr-ten}. We will call $\tilde{d}_{\vec k,\epsilon}$ the corresponding configuration mode coefficients for these tensor gauge invariants, where $\epsilon$ is a label that distinguishes the two possible polarizations. On the other hand, let $v_{\vec k}$ be the configuration mode coefficients of the chosen Mukhanov-Sasaki gauge invariant. Apart from the advantages of describing the primordial fluctuations with these specific variables in standard cosmology \cite{mukh} (given the almost-Gaussian distribution of the anisotropies in the CMB), they turn out to be the only ones, among those related by the mentioned rescaling transformations, for which the dynamics can be unitarily implemented when one adopts an adequate Fock quantization for them \cite{unique1,unique2,unique3,unique3b,fmov,uniqueds,uniquesignature,unique4}, in the context of QFT in curved spacetimes. This feature is appealing in view of the hybrid ideas for the later quantization of the system, which employs a Fock representation of the perturbations. 
Once these gauge-invariant canonical variables have been chosen, the rest of the information contained in the perturbative sector of the phase space can be codified in the, conveniently Abelianized, linear perturbative constraints, together with their canonical momenta \cite{hybr-ref}. Now, let us recall that the hybrid scheme demands a canonical formulation of the whole system formed by the homogeneous degrees of freedom and the perturbations. Therefore, since the chosen tensor and Mukhanov-Sasaki gauge invariants involve, in their definition in terms of the metric and inflaton perturbations, the homogeneous canonical variables, these homogeneous quantities must be corrected by terms quadratic in the perturbations so that the new, corrected, homogeneous variables complete the gauge invariants and the linear perturbative constraints, together with their momenta, into a canonical set for the entire cosmological system \cite{hybr-ref,hybr-ten}. The Hamiltonian for the whole cosmology is then expressed in terms of this new set of canonical variables, prior to its quantum representation. To implement the quantization, the hybrid approach combines some quantum-gravity-inspired representation (in this paper it will be LQC) of the homogeneous sector of the phase space with a more conventional Fock representation of the remaining inhomogeneous degrees of freedom \cite{hybrid1,hybrid2,hybrid3}. The kinematic representation space of the quantum theory is the tensor product of the different Hilbert or Fock spaces associated with these sectors. One then constructs a quantum representation of the different constraints that the relativistic system possesses and imposes them following the Dirac procedure \cite{Dirac}. In the studied perturbed cosmological spacetimes, since the linear perturbative constraints are part of the selected canonical variables, their quantum imposition is straightforward: they just restrict the physical states not to depend on their conjugate momenta. 
On the other hand, the quantum imposition of the zero-mode of the Hamiltonian constraint is a highly nontrivial task. In particular, the perturbative contributions to this constraint couple the homogeneous sector with the Mukhanov-Sasaki and tensor perturbations \cite{hybr-ref}. In order to find solutions of cosmological interest to this complicated quantum constraint, the following ansatz has been proposed \cite{hybr-inf3,hybr-ref,hybr-ten}. One considers wave functions in which the dependence on the different sectors of the phase space factorizes, except for the homogeneous inflaton, which then may be viewed as an internal time for these quantum states. In particular, the homogeneous part of these states, $\Gamma (a,\phi)$, where $a$ generically denotes dependence on the homogeneous geometry, may be chosen as an exact solution to the, homogeneous, quantum FLRW model, and in this paper we will take it this way. However, let us comment that this choice is in principle not needed, and in fact one may consider other possibilities for $\Gamma$ that incorporate the presence of some quantum backreaction of the perturbations onto the homogeneous sector of the model \cite{hybr-ref,hybr-ferm}. With this ansatz at hand, and provided that $\Gamma$ is sufficiently peaked on the homogeneous geometry for all values of $\phi$, then the imposition of the zero-mode of the Hamiltonian constraint leads to the requirement that the outcome of certain operators, acting exclusively on the Mukhanov-Sasaki and the tensor parts of the wave function, must be zero. Remarkably, these quantum equations on the gauge-invariant perturbations only depend on the homogeneous geometry via some expectation values of geometric operators taken on $\Gamma$ \cite{hybr-ref,hybr-ten}. On the other hand, as expected, the quantum dependence on the Mukhanov-Sasaki and tensor configuration variables, as well as on their momenta, is quadratic. 
Therefore, it seems reasonable that, for some of our considered states, one can legitimately substitute this quadratic dependence on the Mukhanov-Sasaki and tensor variables by its classical counterpart, and then regard the operators that act on the perturbative parts of the wave function as constraints that incorporate the effect of quantum geometry contributions. In this situation, one can easily obtain the dynamical equations for the Mukhanov-Sasaki and tensor perturbations with quantum geometry corrections \cite{hybr-ref,hybr-ten}. Furthermore, in the considered scenario with negligible backreaction, we may choose $\Gamma$ to be highly peaked on the effective LQC trajectories generated by $H_{|0}^{\rm eff}$, given in Eq. \eqref{effh}. If this is the case, we may substitute that effective behavior in the expectation values of the geometry which appear in the previous dynamical equations, arriving at the following evolution for the tensor and the Mukhanov-Sasaki perturbations \cite{hybr-pred,GUI}, \begin{align}\label{h-efft2} \tilde d_{\vec {k},\epsilon}''+\left[k^2-\frac{4\pi G}{3}a^2(\rho-3P)\right]\tilde d_{\vec {k},\epsilon}=0,\\\label{h-effs2} v_{\vec k}''+\left[k^2-\frac{4\pi G}{3}a^2(\rho-3P)+\mathcal{U}\right]v_{\vec k}=0, \end{align} where the prime denotes again derivative with respect to the conformal time, and \begin{align}\label{Ueff} \mathcal{U}=a^2\left[V_{,\phi\phi}+48\pi G V(\phi)+ 6\frac{a'{\phi}'}{ a^{3}\rho}V_{,\phi}-\frac{48\pi G}{\rho} V^2(\phi)\right]. \end{align} In order to compute this Mukhanov-Sasaki potential $\mathcal{U}$, one needs to provide certain prescriptions for the quantum representation of the functions of the homogeneous variables that couple to the perturbations in the full Hamiltonian constraint. For the effective LQC approximation considered here, the only relevant one of all such prescriptions consists of adjusting the length of the holonomies which encode the information about the Hubble parameter in LQC. 
This adjustment is made to preserve the superselection sectors in which the Hamiltonian constraint of the homogeneous model separates the kinematic Hilbert space \cite{hybr-inf3}. Finally, it is worth noticing that, in those regimes where the effective dynamics of the geometric background approaches classical linearized GR [so that one can ignore quadratic terms in the pressure and density in Eq. \eqref{LQC}], the $k$-independent term that appears in both the tensor and the Mukhanov-Sasaki equations approaches the classical quantity $-a''/a$, thus recovering the well-known classical equations for the gauge-invariant perturbations. \subsection{Dressed metric approach} Let us now summarize the main features of the dressed metric approach to the quantization of cosmological perturbations within the context of LQC. For specific details about the arguments and derivations of the corresponding equations, we refer the reader to Refs. \cite{dressed1,dressed2,dressed3,dressed4}. First of all, let us point out that this approach follows the same hybrid quantization strategy of combining LQC techniques for the homogeneous degrees of freedom with Fock representations for the perturbations. However, there is a major difference between the two approaches: in the dressed metric case, one does not regard the truncated, perturbed cosmological system as a full symplectic and constrained manifold. One treats in a separate way the homogeneous background and the inhomogeneous perturbations, assuming since the very beginning that backreaction effects should be ignorable. In particular, one deals with the phase space evolution in two steps: one first obtains the dynamical trajectories on the homogeneous sector and then lifts them to the truncated phase space \cite{dressed2}. Consequently, in this approach, one lacks a classical Hamiltonian that generates the evolution of both the homogeneous background and the perturbations, at the considered order of truncation. 
Instead, one may understand the dressed metric formalism as if it possessed two different Hamiltonians. The first one is just the standard FLRW Hamiltonian. The second one is the Hamiltonian that, when the homogeneous background is viewed as a fixed entity, gives rise to the linearized equations for the perturbations \cite{dressed2,dressed3}. The perturbative sector of the cosmological system is again given in terms of gauge-invariant quantities, although their description is somewhat different from the one put forward in the hybrid approach. In the dressed metric approach, one solves classically the linear perturbative constraints. The resulting, reduced, phase space for the perturbations is then described with a specific choice of tensor and Mukhanov-Sasaki variables. For the tensor degrees of freedom, we will follow the notation of Ref. \cite{dressed2} and call $T^{(\epsilon)}_{\vec k}/l_{0}^{3}$ the configuration mode coefficients of the tensor perturbations (where $\epsilon$ denotes again the polarization). In turn, $\mathcal{Q}_{\vec k}/l_{0}^{3}$ will denote the configuration mode coefficients of the Mukhanov-Sasaki field variable chosen in the dressed metric approach. As we have commented, the philosophy to quantize the system is to combine an LQC representation for the homogeneous sector of the (truncated) phase space with a Fock representation for the tensor and Mukhanov-Sasaki perturbations, as in the hybrid approach. Again, one also introduces an ansatz for the quantum states in which the dependence on the homogeneous geometry and on the perturbations factorizes. In this ansatz, all partial wave functions are allowed to depend on the inflaton $\phi$, which is viewed as an internal time. However, in the dressed metric case there is no Hamiltonian constraint that affects the perturbations, since the whole of the truncated cosmology is not treated as a constrained symplectic system. 
Instead, one has the Hamiltonian constraint of the homogeneous FLRW model, and the Hamiltonian functions that, classically, generate the dynamics of the perturbations. Accordingly, the approach requires the homogeneous part of the states to be an exact solution of the FLRW model in LQC, and then uses this solution to define the quantum dynamics on the phase space of the gauge-invariant perturbations \cite{dressed2,dressed3}. In this way, the perturbations behave as test fields that see a dressed metric determined by certain expectation values of operators of the homogeneous geometry, which incorporate the most relevant quantum effects. In our case with compact sections, one can construct in this manner, for instance, operators representing the Hamiltonians on the phase space of the perturbations. Associated with these operators, one obtains Schr\"odinger equations in $\phi$ for the partial wave functions that describe the tensor and the Mukhanov-Sasaki perturbations. One may again consider in this approach that the homogeneous part of the wave function is highly peaked on a trajectory dictated by the effective LQC dynamics generated by $H_{|0}^{\rm eff}$. Such an effective description would then also apply to the dressed metric quantities that couple to the scalar and tensor perturbations. The field equations of the tensor and Mukhanov-Sasaki variables propagating on such effective dressed metric are \cite{dressed2,dressed3} \begin{align} T_{\vec k}^{(\epsilon)\prime \prime}+2\frac{a'}{a}T_{\vec k}^{(\epsilon)\prime}+k^{2}T_{\vec k}^{(\epsilon)}=0,\\ \mathcal{Q}_{\vec k}''+2\frac{a'}{a}\mathcal{Q}_{\vec k}'+(k^{2}+\mathcal{V})\mathcal{Q}_{\vec k}=0, \end{align} with the same notation as before for the prime and where all the homogeneous quantities must be evaluated on the effective LQC background [see Eq. \eqref{LQC}]. 
Besides, \begin{align}\label{V} \mathcal{V}=\left[\mathfrak{f}V(\phi)-2\sqrt{\mathfrak{f}} V_{,\phi}+ V_{,\phi\phi}\right]a^2, \qquad \mathfrak{f}=\frac{48\pi G \pi_{\phi}^{2}}{\pi_{\phi}^2 +l_{0}^{6}a^{6}V(\phi)}, \end{align} a function that can be checked to coincide with the hybrid Mukhanov-Sasaki potential $\mathcal{U}$ in an FLRW universe, and therefore at the order of truncation adopted in the dressed metric approach. A caveat is in order here: the coefficient of $V_{,\phi}$ in $\mathcal{V}$, when evaluated on classical FLRW solutions, equals $-12a|\pi_{\phi}|/|\pi_{a}|$ if the square root of $\mathfrak{f}$ is defined as positive, or $12a|\pi_{\phi}|/|\pi_{a}|$ if it is defined as negative. On the other hand, the corresponding coefficient in $\mathcal{U}$, which is in fact the one that appears in the context of linearized GR \cite{mukh}, is given by $-12a\pi_{\phi}/\pi_{a}$. This tension may be resolved by demanding that the sign of the square root of $\mathfrak{f}$ be positive when the signs of $\pi_{\phi}$ and $\pi_{a}$ coincide, and negative otherwise. Alternatively, in recent analyses of the consequences of the dressed metric approach for the CMB \cite{waco,waco1}, the considered coefficient of $V_{,\phi}$ in the potential $\mathcal{V}$ has been taken equal to $2a^{2}\sqrt{24\pi G}\dot{\phi}\rho^{-1/2}$, where the dot denotes derivative with respect to the proper time. One can see that, classically, this last expression coincides for an expanding universe with the result obtained in linearized GR, and thus also with the hybrid result.
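The structure of these dressed mode equations can be illustrated with a toy numerical integration. The sketch below is not based on an actual effective LQC background: it assumes a purely illustrative power-law scale factor $a(\eta)\propto\eta^{p}$ and integrates the tensor mode equation with a simple symplectic Euler scheme. For $p=0$ the equation reduces to a harmonic oscillator, which provides an exact check.

```python
import math

# Toy integration of the dressed tensor mode equation
# T'' + 2(a'/a) T' + k^2 T = 0, on an ASSUMED power-law background
# a(eta) = (eta/eta_0)^p (illustrative only; not an effective LQC solution).
def evolve(k, p, eta0, eta1, n=20000):
    deta = (eta1 - eta0) / n
    T, dT = 1.0, 0.0                  # initial mode amplitude and velocity
    eta = eta0
    for _ in range(n):
        friction = 2.0 * p / eta      # a'/a = p/eta for a power-law a(eta)
        ddT = -friction * dT - k**2 * T
        dT += ddT * deta              # symplectic (semi-implicit) Euler step
        T += dT * deta
        eta += deta
    return T

# For p = 0 (static background) the mode is a pure oscillator,
# T(eta) = cos(k (eta - eta_0)), which the integrator reproduces:
k, eta0, eta1 = 2.0, 1.0, 3.0
assert abs(evolve(k, 0.0, eta0, eta1) - math.cos(k * (eta1 - eta0))) < 1e-2
```

This only exhibits the damped-oscillator structure of the equations; the actual analysis in the text evaluates the friction and mass terms on effective LQC trajectories.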
Now, if one compares the Hamiltonian functions for the perturbations in the dressed metric approach with the ones that generate the evolution of these perturbations in the hybrid approach \cite{hybr-ref,hybr-ten,dressed1,dressed2}, one can see that the choices of variables employed in each of these approaches for the description of the tensor and Mukhanov-Sasaki perturbations can be related by a very specific transformation, which is canonical as far as the perturbations are concerned. In particular, this transformation involves the multiplication of the configuration variables $T_{\vec k}^{(\epsilon)}$ and $\mathcal{Q}_{\vec k}$ by the homogeneous scale factor of the cosmology (up to a constant). So, in order to compare the dressed field equations for the perturbations with the ones obtained in the hybrid quantization approach for the tensor and Mukhanov-Sasaki variables $\tilde d_{\vec {k},\epsilon}$ and $v_{\vec k}$, it is most convenient to consider their analogues for \begin{align} t_{\vec k}^{(\epsilon)}=\frac{a}{\sqrt{32\pi Gl_{0}^{3}}}T_{\vec k}^{(\epsilon)} \qquad \text{and} \qquad q_{\vec k}=\frac{a} {\sqrt{l_{0}^{3}}}\mathcal{Q}_{\vec k}, \end{align} where $a$ corresponds again to the effective, dressed scale factor. Their equations, after evaluating the resulting explicit time derivative $a^{\prime\prime}$ on effective LQC trajectories [via Eq. 
\eqref{LQC}], are \begin{align}\label{d-efft2} t_{\vec{k}}^{(\epsilon)\prime\prime}+\left[k^{2}-\frac{4\pi G}{3} a^2 \rho\left(1+2\frac{\rho}{\rho_{\rm max}}\right)+4\pi G a^2 P\left(1-2\frac{\rho}{\rho_{\rm max}}\right)\right]t_{\vec k}^{(\epsilon)}=0,\\\label{d-effs2} q_{\vec{k}}''+\left[k^{2}-\frac{4\pi G}{3} a^2 \rho\left(1+2\frac{\rho}{\rho_{\rm max}}\right)+4\pi G a^2 P\left(1-2\frac{\rho}{\rho_{\rm max}}\right)+\mathcal{V}\right]q_{\vec k}=0, \end{align} with \begin{align}\label{Veff} {\mathcal{V}}=a^2\left[V_{,\phi\phi}+48\pi G V(\phi)-\text{sign}\left(\sqrt{\mathfrak{f}}\right)\frac{4\sqrt{6\pi G}|\phi'|}{a\rho^{1/2}}V_{,\phi}-\frac{48\pi G}{\rho} V^2(\phi)\right]. \end{align} In those regimes in which the effective dynamics of the background approaches that of linearized GR (and when the sign of the square root of $\mathfrak{f}$ is appropriately taken in $\mathcal{V}$, as we have commented above), these equations for the perturbations coincide with the hybrid ones and, in turn, with the classical tensor and Mukhanov-Sasaki equations. \subsection{Differences in the perturbation dynamics within effective LQC} Owing to the different quantization strategies adopted in the hybrid and dressed metric approaches, as we have explained above, the following differences arise in the perturbation equations derived from them in effective LQC: \begin{itemize} \item The $k$-independent term that appears in both the tensor and the Mukhanov-Sasaki equations, and which equals $-a''/a$ in classical GR, is not the same in the two approaches [see Eqs. \eqref{h-efft2}, \eqref{h-effs2}, \eqref{d-efft2}, and \eqref{d-effs2}]. This can be traced back to the differences in the treatment of the phase space of the perturbed FLRW cosmologies in the hybrid and the dressed metric approaches. In the hybrid case, the whole phase space is treated as a symplectic manifold. The $k$-independent factor is then expressed in terms of canonical variables.
It is the expectation value of the operator representing this canonical expression that is evaluated on trajectories described by the effective dynamics of LQC. On the contrary, in the dressed metric formalism, one does not have a global canonical symplectic structure on the truncated phase space. The dressed term $-a''/a$ is evaluated on effective solutions to LQC, including the computation of the time derivatives, which are calculated along the trajectories of the effective dynamics. The difference then arises because of the departure between the standard classical relation of the time derivatives of the scale factor with its canonical momentum and the corresponding effective relation in LQC. In other words, given that $a''/a=\{a\{a,H_{|0}\},H_{|0}\}$ classically, where $H_{|0}$ is the Hamiltonian constraint of the inflationary FLRW cosmology, the difference appears because \begin{align} \left(\{a\{a,H_{|0}\},H_{|0}\}\right)_{\rm eff}\neq \{a\{a,H^{\rm eff}_{|0}\},H^{\rm eff}_{|0}\}, \end{align} where the subscript ``eff'' indicates evaluation on effective solutions after having computed the Poisson brackets. \item The Mukhanov-Sasaki potentials $\mathcal{U}$ and $\mathcal{V}$ are different as well. Leaving aside a subtlety concerning the sign of the contribution of $V_{,\phi}$ in $\mathcal{V}$, related to the way in which $\sqrt{\mathfrak{f}}$ is chosen and which is present even if the effective dynamics reproduces classical GR, the two potentials indeed display discrepancies in effective LQC. Specifically, if we compare the expression \eqref{Veff} of $\mathcal{V}$ with the hybrid potential $\mathcal{U}$ given in Eq. \eqref{Ueff}, we observe that they differ in the absolute value of the factor that multiplies $V_{,\phi}$; especially remarkable is the absence of the first time derivative of the (effective) scale factor in the dressed metric case.
The discrepancy arises, once more, owing to the different quantization prescriptions followed in the hybrid and the dressed metric approaches. In particular, in the hybrid approach, one is naturally led to adopt a specific prescription that preserves the superselection sectors of the homogeneous geometry, since the potential $\mathcal{U}$ is part of a constraint operator acting on the entire quantum space that describes both the homogeneous cosmology and the perturbations. This is not the case in the dressed metric approach, where one just evaluates $\mathfrak{f}$ [given in Eq. \eqref{V}] on effective LQC trajectories and then takes its square root (as directly proposed in Refs. \cite{dressed2,dressed3}). \end{itemize} It is worth noting, nonetheless, that the latter difference between the two Mukhanov-Sasaki potentials $\mathcal{U}$ and $\mathcal{V}$ is only expected to be relevant in regimes where the energy density of the scalar field is not kinetically dominated, since it is only then that the effect of the potential, and hence the contribution of $V_{,\phi}$, can be important. But kinetic dominance is precisely what happens at the bounce for the most interesting effective solutions, since a physically acceptable period of slow-roll inflation compatible with the persistence of quantum geometry effects on the largest scales observed in the CMB typically requires solutions of this type \cite{GUI}. However, since the contribution of the potential becomes significant on those effective solutions in the passage from the bounce to the inflationary regime, the difference just discussed might not be entirely negligible during some stages of the evolution. In Fig.
\ref{fig:uvpot}, we compare the absolute values of the Mukhanov-Sasaki potentials $\mathcal{U}$ and $\mathcal{V}$, as well as their relative values with respect to the corresponding time-dependent mass of the scalar perturbations (in the hybrid and dressed metric approaches, respectively), for one such kinetically dominated solution in the case of a quadratic potential for the inflaton. The initial conditions and inflaton mass for this solution were considered in Ref. \cite{GUI}, in the numerical study of the consequences of the hybrid approach for the CMB, where they were shown to lead to power spectra compatible with observations while displaying power suppression and certain superposed features at large scales. In this sense, we recall that, using the initial value $a_{\rm B}$ of the scale factor $a$ as a length scale and imposing the effective homogeneous Hamiltonian, we can reduce the set of initial conditions at the bounce (where the time derivative of the scale factor vanishes) just to the value of the inflaton. In addition, we plot in Fig. \ref{fig:tdm} the time-dependent masses for the tensor and the Mukhanov-Sasaki perturbations, both in the hybrid and the dressed metric cases, to show how tiny the effect of the potentials $\mathcal{U}$ and $\mathcal{V}$ is in this particular kinetically dominated solution. In the same figure, we also plot the relative difference between the value in the two approaches of the time-dependent tensor mass and of the scalar one. For the dressed metric, we have chosen the Mukhanov-Sasaki potential $\mathcal{V}$ with the sign prescription of Refs. \cite{waco,waco1} for the term proportional to $V_{,\phi}$.
\begin{figure} \includegraphics[width=0.49\textwidth]{fig11.pdf} \includegraphics[width=0.49\textwidth]{fig12.pdf} \caption{Left panel: Evolution, on the expanding branch of the Universe, of the Mukhanov-Sasaki potentials $\mathcal{U}$ and $\mathcal{V}$, corresponding respectively to the hybrid and dressed metric approaches. Right panel: Relative contribution of the Mukhanov-Sasaki potential to the corresponding time-dependent mass, denoted by $s^{({\rm s})}$ in the hybrid approach and by $\breve{s}^{({\rm s})}$ in the dressed metric approach. Here, we consider a quadratic potential, $V(\phi) = m^{2}\phi^2/2$. In Planck units, the inflaton field at the bounce and the parameters of the model are taken equal to $\phi_{\rm B} = 0.97$, $m =1.20\cdot 10^{-6}$, and $\gamma = 0.2375$. We represent the absolute value of the considered quantities, distinguishing between positive and negative values by employing solid and dashed lines, respectively. The black vertical dashed line marks the onset of the slow-roll phase.} \label{fig:uvpot} \end{figure} \begin{figure} \includegraphics[width=0.49\textwidth]{fig21.pdf} \includegraphics[width=0.49\textwidth]{fig22.pdf} \caption{Left panel: Evolution on the expanding branch of the Universe of the time-dependent masses for the tensor and the Mukhanov-Sasaki perturbations in the hybrid approach, denoted by $s^{({\rm t})}$ and $s^{({\rm s})}$ respectively, and in the dressed metric approach, denoted by $\breve{s}^{({\rm t})}$ and $\breve{s}^{({\rm s})}$, respectively. Right panel: Relative difference between the value of the time-dependent mass in the two approaches, both for the tensor and for the Mukhanov-Sasaki perturbations. In this right panel, solid (dashed) lines correspond to positive (negative) values of the quantities for which we represent the absolute value. Here, we consider a quadratic potential, $V(\phi) = m^{2}\phi^2/2$. 
In Planck units, the inflaton field at the bounce and the parameters of the model are taken equal to $\phi_{\rm B} = 0.97$, $m =1.20\cdot 10^{-6}$, and $\gamma = 0.2375$.} \label{fig:tdm} \end{figure} \section{Time-dependent masses at the Big Bounce}\label{sec3} The equations for the tensor and the Mukhanov-Sasaki perturbations in the hybrid and dressed metric approaches are of the harmonic oscillator type, with different time-dependent masses in each case. In this section we will analyze the properties of these masses when they are evaluated at the big bounce in effective LQC, focusing on their positivity. If one chooses the initial time when the bounce occurs, this positivity may be relevant in the search for well-defined and physically interesting initial conditions for the gauge-invariant perturbations, for instance if one wants to construct sets of initial conditions corresponding to adiabatic states \cite{dressed2,dressed3,dressed4,Morris,hybr-pred,GUI}. Let us call \begin{align}\label{hmass} s^{({\rm t})}=-\frac{4\pi G}{3}a^2(\rho-3P)\qquad \text{and}\qquad s^{({\rm s})}=s^{({\rm t})}+\mathcal{U} \end{align} the time-dependent masses for the tensor and the Mukhanov-Sasaki perturbations in the hybrid approach, respectively. Similarly, we will call \begin{align}\label{dmass} \breve{s}^{({\rm t})}=-\frac{4\pi G}{3}a^2\rho\left(1+2\frac{\rho}{\rho_{\rm max}}\right)+4\pi G a^2 P\left(1-2\frac{\rho}{\rho_{\rm max}}\right)\qquad \text{and} \qquad \breve{s}^{({\rm s})}=\breve{s}^{({\rm t})}+\mathcal{V} \end{align} the corresponding masses in the dressed metric approach. Recall that $\mathcal{U}$ and $\mathcal{V}$ are given in Eqs. \eqref{Ueff} and \eqref{Veff}, and these homogeneous quantities must be evaluated on effective trajectories. In effective LQC, the big bounce occurs when the energy density of the inflaton field $\rho$, given in Eq. \eqref{density}, equals its upper bound $\rho_{\rm max}=3/(8\pi G \gamma^{2}\Delta)$.
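For orientation on the scales involved, the value of $\rho_{\rm max}$ can be evaluated numerically. The following minimal sketch works in Planck units and assumes the standard LQG area gap $\Delta=4\sqrt{3}\pi\gamma\,\ell_{\rm Pl}^{2}$ (an input of this sketch, not derived in the text), together with the value of $\gamma$ used in our figures:

```python
import math

# Planck units (G = hbar = c = 1). The area gap below is the standard
# LQG/LQC choice, Delta = 4 * sqrt(3) * pi * gamma * l_Pl^2, assumed here
# for definiteness.
gamma = 0.2375
Delta = 4.0 * math.sqrt(3.0) * math.pi * gamma

rho_max = 3.0 / (8.0 * math.pi * gamma**2 * Delta)
print(round(rho_max, 2))  # -> 0.41 (in Planck density units)
```

The resulting value, $\rho_{\rm max}\approx 0.41$ in Planck density units, is the familiar critical density of LQC for this choice of $\gamma$.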
When this happens, the modified Friedmann equation \eqref{LQC} results in a vanishing Hubble parameter and the scale factor reaches its minimum possible value $a_{\rm B}$. In what follows, the symbol B as subscript or superscript of any homogeneous variable stands for its evaluation at the bounce. In this situation, it is easy to check that the time-dependent masses adopt the expressions \begin{align}\label{hybformul} \frac{s^{({\rm t})}_{\rm B}}{8\pi G a_{\rm B}^2}=\frac{1}{8\pi G \gamma^2\Delta}- V(\phi_{\rm B})= \frac{\rho_{\rm max}}{3}- V(\phi_{\rm B}), \qquad s^{({\rm s})}_{\rm B}=s^{({\rm t})}_{\rm B}+{\mathcal{U}}_{\rm B}, \end{align} in the hybrid case, whereas for the dressed metric approach one obtains \begin{align} \frac{\breve{s}^{({\rm t})}_{\rm B}}{8\pi G a_{\rm B}^2}=-\frac{3}{8\pi G \gamma^2\Delta}+ V(\phi_{\rm B})= -\rho_{\rm max}+ V(\phi_{\rm B}),\qquad \breve{s}^{({\rm s})}_{\rm B}=\breve{s}^{({\rm t})}_{\rm B}+{\mathcal{V}}_{\rm B}. \end{align} In these formulas, the Mukhanov-Sasaki potentials at the bounce are \begin{align} &\mathcal{U}_{\rm B}=a_{\rm B}^2\left[V_{,\phi\phi}^{\rm B}+48\pi G V(\phi_{\rm B})-128\pi^2 G^2 \gamma^2 \Delta V^{2}(\phi_{\rm B})\right],\\&\label{Vbd}\mathcal{V}_{\rm B}=a_{\rm B}^2\left[V_{,\phi\phi}^{\rm B}+48\pi G V(\phi_{\rm B})-\text{sign}\left(\sqrt{\mathfrak{f}}\right)16\pi G \gamma \sqrt{\Delta}\frac{|{\phi}'_{\rm B}|}{a_{\rm B}}V_{,\phi}^{\rm B}-128\pi^2 G^2 \gamma^2 \Delta V^{2}(\phi_{\rm B})\right]. \end{align} The most relevant differences between the time-dependent masses for the two quantization approaches, commented in the previous section, show up here. In particular, these differences are important when one analyzes the positivity of these masses at the bounce. As we have discussed in the Introduction, in the regime of very high kinetic dominance, where the effects of the potential can be safely ignored, one can use e.g. the analytic solution obtained in Ref. 
\cite{waco} to elucidate the sign of the masses at the bounce for each of the two considered approaches. Nevertheless, as we have explained, our goal is to go beyond this regime and carry out a more general analysis in which the influence of the potential on the positivity of the mass can be quantified. Of course, the conclusions that we will reach in this way will, in particular, reproduce in the limit of vanishing potential the results for solutions with an inflaton energy density that is totally dominated by its kinetic contribution. For the tensor perturbations, it is straightforward to deduce from Eq. \eqref{hybformul} that, in the hybrid approach, the mass at the bounce, $s^{({\rm t})}_{\rm B}$, is positive if and only if $V(\phi_{\rm B})< \rho_{\rm max}/3$. Notice that this upper bound on the potential is compatible with kinetic dominance at the bounce, since it suffices that the kinetic contribution to the energy density of the inflaton is larger than $2\rho_{\rm max}/3$, and hence always larger than twice the potential. Obviously, this kinetic contribution is bounded from above by $\rho_{\rm max}$ for non-negative potentials. In the case of the dressed metric approach, on the other hand, the mass at the bounce for the tensor perturbations is never positive, because the potential is necessarily smaller than, or equal to, the upper bound for the energy density. This nonpositivity was certainly expected, because the analyzed tensor mass is known to coincide with the effective value of $-a''/a$ in the dressed metric approach. Since the dressed scale factor has a minimum at the bounce, its second derivative is non-negative and then, trivially, the studied ratio cannot be positive. The analysis of the Mukhanov-Sasaki masses is more involved, because it depends on the explicit form of the inflaton potential $V(\phi)$. In order to perform this analysis, we will treat the value of $V_{,\phi\phi}^{\rm B}$ as a parameter, which we will assume to be non-negative.
This is what happens in fact for the massive scalar field, for which the second derivative of the potential is just a positive constant, namely the square mass of the inflaton. Besides, we will restrict our attention to potentials that are non-negative, like the one for the massive scalar field. Therefore, at the bounce we must have $0\leq V(\phi_{\rm B})\leq \rho_{\rm max}$. In the hybrid approach, the time-dependent mass for the Mukhanov-Sasaki perturbations takes the form of a quadratic polynomial in $V(\phi_{\rm B})$. The roots of this polynomial are \begin{align} x_{\pm}=\frac{5\pm\sqrt{33+8\gamma^2\Delta V_{,\phi\phi}^{\rm B}}}{12}\rho_{\rm max}, \end{align} and the polynomial decreases for large values of $V(\phi_{\rm B})$. Given our assumption of a non-negative second derivative of the inflaton potential, $x_{-}$ is clearly negative, and $x_+$ positive. Consequently, for non-negative potentials, the mass $s^{({\rm s})}_{\rm B}$ is positive if and only if $V(\phi_{\rm B})< x_+$. The availability of a sector of kinetic dominance at the bounce with positive time-dependent mass is then directly guaranteed, since this positivity holds for sufficiently small potentials and the kinetic contribution to the energy density at the bounce is simply $\rho_{\rm max}-V(\phi_{\rm B})$. Moreover, since $x_+$ is never smaller than its value for vanishing $V_{,\phi\phi}^{\rm B}$, which equals $(5+\sqrt{33})\rho_{\rm max}/12$, the interval $[x_+,\rho_{\rm max}]$ of potentials at the bounce below the upper bound for the energy density is either empty or included in $[(5+\sqrt{33})\rho_{\rm max}/12,\rho_{\rm max}]$. Therefore, since $(5+\sqrt{33})/12\approx 0.895$, it is only at most in a relatively restricted interval of large potentials away from the kinetically dominated sector that the mass might not be positive in the effective regime of hybrid LQC.
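As a numerical sanity check of the hybrid positivity conditions just derived (a minimal sketch; densities are measured in units of $\rho_{\rm max}$, the overall positive factor $8\pi G a_{\rm B}^2$ is normalized to one since only signs matter, and the sampled values of $\gamma^{2}\Delta V^{\rm B}_{,\phi\phi}$ are purely illustrative):

```python
import math

rho_max = 1.0  # densities in units of the critical density

def s_t_hybrid(V_B):
    # Hybrid tensor mass at the bounce (up to 8 pi G a_B^2 > 0):
    # positive iff V(phi_B) < rho_max / 3.
    return rho_max / 3.0 - V_B

def x_roots(g2dV):
    # Roots x_-, x_+ of the hybrid Mukhanov-Sasaki mass, viewed as a
    # quadratic polynomial in V(phi_B); g2dV = gamma^2 * Delta * V''(phi_B).
    r = math.sqrt(33.0 + 8.0 * g2dV)
    return (5.0 - r) / 12.0 * rho_max, (5.0 + r) / 12.0 * rho_max

for V_B in (0.0, 0.2, 0.5, 1.0):
    assert (s_t_hybrid(V_B) > 0) == (V_B < rho_max / 3.0)

for g2dV in (0.0, 1e-12, 0.1, 10.0):
    x_minus, x_plus = x_roots(g2dV)
    # x_- is negative and x_+ positive, so small potentials always give a
    # positive scalar mass; x_+ never drops below (5 + sqrt(33))/12.
    assert x_minus < 0.0 < x_plus
    assert x_plus >= (5.0 + math.sqrt(33.0)) / 12.0 * rho_max

print(round((5.0 + math.sqrt(33.0)) / 12.0, 3))  # -> 0.895
```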
In the dressed metric approach, the Mukhanov-Sasaki mass at the bounce, $\breve{s}^{({\rm s})}_{\rm B}$, contains the new factor $V_{,\phi}^{\rm B}$, something that adds an extra complication to the analysis of the positivity. In order to complete the study analytically, apart from the already assumed non-negativity of the inflaton potential and of its second derivative at the bounce, we will suppose that the first derivative at the bounce can be bounded in the form $|V_{,\phi}^{\rm B}|\leq C \sqrt{2 V^{\rm B}_{,\phi\phi} V(\phi_{\rm B})}$, where $C\equiv C(V^{\rm B}_{,\phi\phi})$ may be any positive bounded function of order unity. This assumption of a bound is not too restrictive, and in particular it covers the relevant case of the quadratic potential, which satisfies the functional relation $|V_{,\phi}|=\sqrt{2V_{,\phi\phi}V(\phi)}$ for all values of the inflaton at all instants of time, and not only at the bounce. Thus, in this case, one can take $C=1$. On the other hand, given the relation of the energy density with the time derivative of the inflaton and its potential [see Eq. \eqref{density}], evaluated at the bounce where $\rho=\rho_{\rm max}$, one obtains that $|\phi'_{\rm B}| \sqrt{2V(\phi_{\rm B})}/a_{\rm B}\leq \rho_{\rm max}$. With all this information, it is easy to deduce that \begin{align} 16\pi G \gamma\sqrt{\Delta}\frac{|\phi'_{\rm B} V^{\rm B}_{,\phi}|}{a_{\rm B}}\leq\frac{6 C}{\gamma\sqrt{\Delta}}\sqrt{V_{,\phi\phi}^{\rm B}}.
\end{align} Noting that $V^{\rm B}_{,\phi}$, and hence the term proportional to it in $\mathcal{V}_{\rm B}$, may take both signs, one concludes that the Mukhanov-Sasaki mass at the bounce in the dressed metric approach is bounded from below and from above by $P_{-}\leq \breve{s}^{({\rm s})}_{\rm B} \leq P_{+}$, where $P_{-}$ and $P_{+}$ are the following quadratic polynomials in $V(\phi_{\rm B})$: \begin{align}\label{u} P_{\pm}=\breve{s}_{\rm B}^{({\rm t})}+a_{\rm B}^2\left[V_{,\phi\phi}^{\rm B}+48\pi G V(\phi_{\rm B})\pm\frac{6C}{\gamma\sqrt{\Delta}}\sqrt{V_{,\phi\phi}^{\rm B}}-128\pi^2 G^2 \gamma^2 \Delta V^{2}(\phi_{\rm B})\right]. \end{align} Both polynomials $P_{\pm}$ decrease for large $|V(\phi_{\rm B})|$. Their roots, $y_{\pm}(P_{+})$ and $y_{\pm}(P_{-})$, respectively, are given by \begin{align}\label{roots} y_{\pm}(P_{\pm})=\frac{7\pm\sqrt{25+8\gamma^2\Delta V_{,\phi\phi}^{\rm B}\pm 48 \gamma C (\Delta V^{\rm B}_{,\phi\phi})^{1/2}}}{12}\rho_{\rm max}. \end{align} For non-negative $V^{\rm B}_{,\phi\phi}$, the two roots of $P_+$ are always real. Hence, the bound $\breve{s}^{({\rm s})}_{\rm B} \leq P_{+} $ implies that the mass is negative outside of the interval $[y_{-}(P_{+}),y_{+}(P_{+}) ]$ for $V(\phi_{\rm B})$ in the sector of physical interest $[0,\rho_{\rm max}]$. It is clear that $y_{-}(P_{+})< \rho_{\rm max}$. Then, the Mukhanov-Sasaki mass of the dressed metric approach is not positive in the subinterval $[0,y_{-}(P_{+})]$ provided that $y_{-}(P_{+})>0$. This last condition is satisfied for $\gamma^{2}\Delta V_{,\phi\phi}^{\rm B}<(\sqrt{9 C^2+3} -3 C)^2$. In particular, this includes the sector of small $\gamma^{2}\Delta V^{\rm B}_{,\phi\phi}$. 
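The corresponding checks for the dressed metric bounds can be sketched in the same spirit (a minimal sketch in units of $\rho_{\rm max}$, with $C=1$ as for the quadratic potential and purely illustrative values of $u=\gamma\sqrt{\Delta V^{\rm B}_{,\phi\phi}}$):

```python
import math

rho_max, C = 1.0, 1.0  # units of the critical density; C = 1 (quadratic potential)

def y_roots(poly_sign, u):
    # Roots (y_-, y_+) of P_+ (poly_sign = +1) or P_- (poly_sign = -1),
    # with u = gamma * sqrt(Delta * V''(phi_B)); None if they are complex.
    disc = 25.0 + 8.0 * u**2 + poly_sign * 48.0 * C * u
    if disc < 0.0:
        return None
    r = math.sqrt(disc)
    return (7.0 - r) / 12.0 * rho_max, (7.0 + r) / 12.0 * rho_max

# The dressed tensor mass at the bounce is proportional to
# V(phi_B) - rho_max, hence never positive for bounded potentials:
assert all(V_B - rho_max <= 0.0 for V_B in (0.0, 0.5, 1.0))

# y_-(P_+) is positive precisely when u^2 < (sqrt(9 C^2 + 3) - 3 C)^2,
# so for small V'' there is an interval [0, y_-(P_+)] of nonpositive mass:
threshold = (math.sqrt(9.0 * C**2 + 3.0) - 3.0 * C) ** 2
for u2 in (1e-12, 0.1, 0.2, 0.3, 4.0):
    y_minus = y_roots(+1, math.sqrt(u2))[0]
    assert (y_minus > 0.0) == (u2 < threshold)

# In the limit V'' -> 0, that interval tends to [0, rho_max/6]:
assert abs(y_roots(+1, 0.0)[0] - rho_max / 6.0) < 1e-12
```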
It is worth commenting that, for the quadratic potential and with the typical values of the inflaton mass that are favored phenomenologically in order to get power spectra compatible with the observations of the CMB and still presenting power suppression at large scales \cite{GUI}, one has that $\gamma^2 \Delta V^{\rm B}_{,\phi\phi}$ is as small as $10^{-12}$. Let us also point out that the interval $[0,y_{-}(P_{+})]$ in which the mass becomes nonpositive grows as large as $[0,\rho_{\rm max}/6]$ in the limit in which $V_{,\phi\phi}^{\rm B}$ tends to zero. So, in those cases where the root $y_{-}(P_+)$ is positive, something that occurs if $V_{,\phi\phi}^{\rm B}$ is not very large and certainly in the physically interesting region of very small values of $V_{,\phi\phi}^{\rm B}$, the studied mass is inevitably negative for $V(\phi_{\rm B})$ in a nonempty neighborhood of zero, which is precisely the region containing the solutions that are kinetically dominated. Finally, if not only the roots of $P_+$, but also those of $P_-$ are real, one can straightforwardly check that \begin{equation} y_{-}(P_{+})\leq y_{-}(P_{-}) \leq y_{+}(P_-)\leq y_{+}(P_+). \end{equation} This reality of all roots occurs if $\gamma^2 \Delta V^{\rm B}_{,\phi\phi}\geq (3C +\sqrt{9 C^2-25/8}\, )^2 $ or if $\gamma^2 \Delta V^{\rm B}_{,\phi\phi}\leq (3C -\sqrt{9 C^2-25/8}\, )^2 $, which includes the region of small values of $V^{\rm B}_{,\phi\phi}$. Taking then into account the bound $P_-\leq \breve{s}^{(s)}_B $, we can be sure that the mass is non-negative at least in the intersection of $[y_{-}(P_{-}),y_{+}(P_{-}) ]$ with the interval $[0,\rho_{\rm max}]$, to which $V(\phi_{\rm B})$ is restricted. In Fig. \ref{fig:msdm} we plot the value of the time-dependent mass for the Mukhanov-Sasaki perturbations in the dressed metric approach particularized to the case of the quadratic potential, with the same value of the inflaton mass as in Figs. \ref{fig:uvpot} and \ref{fig:tdm}. 
We zoom in on the regions where the mass changes from positive to negative values, comparing those points with the roots of the polynomials $P_{\pm}$. Again, we have employed the prescription adopted in Refs. \cite{waco,waco1} for the sign of $\sqrt{\mathfrak{f}}$. \begin{figure} \includegraphics[width=0.49\textwidth]{fig31.pdf} \includegraphics[width=0.49\textwidth]{fig32.pdf} \caption{Left panel: The time-dependent Mukhanov-Sasaki mass at the bounce in the dressed metric approach as a function of the value of the potential at the bounce, for the case of the quadratic potential, $V(\phi) = m^{2}\phi^2/2$, with $m =1.20\cdot10^{-6}$ in Planck units. Again, we take $\gamma=0.2375$. We consider the two possibilities of positive and negative values of the inflaton at the bounce. Right panel: Zoom of the two regions in which the time-dependent mass changes sign. In these regions, we also plot the polynomials $P_{\pm}$ and their zeros.} \label{fig:msdm} \end{figure} \section{Conclusions}\label{concl} We have considered the field equations for the gauge-invariant scalar and tensor perturbations arising from two approaches to LQC: the hybrid and the dressed metric approaches. In order to apply the loop quantization of the geometry, they both combine a polymeric quantization of the background with a Fock quantization of the cosmological perturbations. Moreover, in both cases, one obtains field equations with a standard hyperbolic behavior in the ultraviolet regions, the quantum effects on the geometry being incorporated as a modification of the time-dependent mass of the perturbations with respect to GR. We have focused our discussion on the scenarios that have received the most attention in the literature, namely, the situations without backreaction of the perturbations and with background geometries that are describable in terms of effective LQC. We have seen that, in spite of the great similarities between the two approaches, the time-dependent masses deduced with them differ.
In the studied scenarios, this is the only noticeable difference, and it can be understood as a consequence of the distinct quantization procedures that are followed in each of the approaches. The hybrid approach treats the whole system composed of the background and the perturbations as a symplectic constrained system, after truncating its action at quadratic order in the perturbations. The formulation is canonical, and the effective description of LQC is incorporated only at the end, once all background quantities are expressed in terms of the basic variables. The dressed metric approach, on the other hand, deals first with the background, and lifts its effective trajectories to the truncated phase space that contains the perturbations, incorporating the effects of the quantization of the geometry precisely by replacing the classical metric with a dressed one. In this framework, it is the time derivatives of the dressed metric that are incorporated in the equations for the perturbations. Since these derivatives are computed within the effective dynamics, their relation with the canonical variables of the geometry departs from the standard one in GR. This is the main reason behind the difference between the time-dependent masses of the two approaches. The part of those masses that is common for the tensor and scalar perturbations equals $- a^{\prime\prime}/a$ in GR. The two distinct procedures by which one quantizes the second derivative of the background scale factor lead to the noticed discrepancy between the considered masses. There is a related but more subtle difference between the time-dependent masses in the two studied cases, precisely in the additional term that appears for the Mukhanov-Sasaki perturbations, which contains the dependence on the inflaton potential and its two first derivatives. More specifically, this difference affects the contribution that is proportional to the first derivative of the potential.
The distinction is due to the quantization prescription adopted for a factor of the form $1/b$, where $b$ is a classical variable proportional to the homogeneous Hubble parameter (see, e.g., Ref. \cite{hybr-inf5}). In the hybrid quantization, the requirement of preserving the superselection sectors of the background geometry at the moment of defining the action of the Hamiltonian constraint of the entire system (i.e., the background plus the perturbations), leads to an effective counterpart of the type $\sin{(2b)}/(2 \sin^2{b})$. In the dressed metric approach, on the other hand, the truncated phase space is not constrained as a whole, and this factor is made to correspond to the square root of $1/\sin^2{b}$ in the effective description. Nonetheless, given that the term where this discrepancy appears is proportional to the derivative of the inflaton potential, this additional difference turns out to be generically negligible for solutions where the energy density of the inflaton is clearly dominated by the kinetic contribution, at least compared to the other differing part of the time-dependent masses that we have found. We have studied in detail the properties of the masses of the two approaches at the big bounce experienced by the effective background. This bounce marks a special instant of time, when the Hubble parameter vanishes. It seems reasonable to consider that instant as a natural choice of initial time, at which one can fix initial conditions for the perturbations. In the definition of those initial conditions, the properties of the time-dependent mass, and in particular its positivity, can be very important, for instance, if one wants to determine data that correspond to adiabatic states \cite{dressed2,dressed3,dressed4,Morris,hybr-pred,GUI} away from the ultraviolet region. We have seen, however, that the mass of the tensor perturbations is never positive in the dressed metric approach.
For the hybrid approach, on the contrary, we have proven the positivity of the tensor mass in the sector of background solutions for which the inflaton energy density is dominated by the kinetic term. Furthermore, non-negativity is guaranteed if the kinetic contribution lies in the interval $[2 \rho_{{\rm max}}/3, \rho_{{\rm max}} ]$, where $\rho_{{\rm max}}$ is the upper bound on the inflaton energy density, saturated at the bounce. The analysis at the bounce of the positivity of the time-dependent mass for the Mukhanov-Sasaki gauge-invariant perturbations is more involved, owing to the appearance of the inflaton potential and its derivatives in the corresponding expression. Restricting the study to non-negative potentials with a non-negative second derivative at the bounce, and treating this second derivative as a parameter, we have proven that the mass is not negative in the hybrid approach at least for all background solutions with a kinetic energy of the inflaton at the bounce in $[ (7-\sqrt{33})\rho_{\rm max}/12,\rho_{\rm max}]$, an interval which clearly contains the region of kinetic dominance. In particular, the result is valid for the quadratic potential $V(\phi)=m^2\phi^2/2$, which is non-negative and has a positive second derivative equal to the constant $m^2$, i.e., the squared mass of the inflaton. For the dressed metric, on the other hand, the time-dependent gauge-invariant scalar mass includes an additional term that is proportional to the first derivative of the inflaton potential. To deal with it without introducing unnecessary complications, we have further restricted the discussion to potentials that satisfy the relation $|V^{\rm B}_{,\phi}|\leq C \sqrt{2 V^{\rm B}_{,\phi\phi}V(\phi_{\rm B})}$, where $C$ can be any positive bounded function of $V^{\rm B}_{,\phi\phi}$ of order unity. The analysis includes again the interesting case of the quadratic potential, for which this relation holds as an equality with $C=1$.
We have then demonstrated that, if $\gamma^{2}\Delta V_{,\phi\phi}^{\rm B}<(\sqrt{9 C^2+3} -3 C)^2$, where $\gamma$ is the Immirzi parameter and $\Delta$ the area gap allowed by LQG, there exists an interval of kinetic energies for the inflaton at the bounce for which the Mukhanov-Sasaki mass is negative. This interval always contains a neighborhood of $\rho_{\rm max}$, and therefore includes the sector of kinetically dominated solutions. For the case of the quadratic potential and values of the inflaton mass favored phenomenologically in LQC, in order to derive power spectra compatible with observations that nonetheless contain traces of quantum effects at large scales, the value of $\gamma^{2}\Delta V_{,\phi\phi}^{\rm B}$ is very small, around $10^{-12}$ or less, so that the above condition is clearly satisfied. Furthermore, for such almost negligible values of the second derivative of the potential at the bounce, the interval of kinetic energies for which the mass is negative is very approximately equal to $[5 \rho_{\rm max}/6, \rho_{\rm max}]$. \acknowledgments The authors are grateful to J. Olmedo for very enlightening discussions. This work was supported by Project No. MINECO FIS2014-54800-C2-2-P from Spain and its continuation Project No. MINECO FIS2017-86497-C2-2-P.
Q: How does one go about making a hangman game where the computer shows you your progress in Python?

I'm trying to make a Hangman game in Python but it seems I lack the knowledge of several things. The code below is a rough draft of what I have so far. The idea is to read words from a text file, separate them into a list, and then take a random entry from that list to use as the word for the game. After that, it takes a letter as input from the user: if the letter is in the word, it should print the word, but with the letters that haven't been guessed yet turned into "_". So, for example: ____a _. The problem is, I don't know how to do this and it's very confusing. As you can see below, I was trying to use a "for" loop.

import random
import string

# Opening the text file that contains the words that are going to be used in the game
with open('words.txt') as file:
    text = file.read()

words = list(map(str, text.split()))
result = random.choice(words)
list_of_letters = list(result)
attempts = 1
for attempts in range(6):
    pick = input("Pick a letter: ")
    if pick in list_of_letters:
        print(pick)
    else:
        print("_")
        # Here is supposed to be what you get as a result when you lose. I want to
        # keep track of the progress like __a__a__, for example.
else:
    print("You lost! The word was", result, "\n Here's your progress: ")

A: You should do a few more tutorials in Python; you have made some minor errors in terms of efficiency and methodology. But to answer your question, I made this example of a hangman game. Hopefully you can learn from it. My words.txt file looks like this:

these are some words

# String or str() is a primitive type in python. You don't need to import it.
import random

# Opening the text file that contains the words that are going to be used in the game
with open('words.txt') as file:
    # Read the file, replace newlines ('\n') with a space and then split the entire thing.
    words = file.read().replace('\n', ' ').split(' ')

# Function to grab a new word and its list of letters
def new_choice(words):
    res = random.choice(words)
    l_of_letters = list(res)
    return res, l_of_letters

# Function to evaluate the pick and manage progress
def check_pick(l_of_letters, pick, progress):
    new_progress = []
    match = False
    # If you have already guessed the letter it returns immediately
    if pick in progress:
        print('You have already guessed that letter!')
        return progress, False, True
    for letter in l_of_letters:
        # If the letter has not been guessed previously
        if letter not in progress:
            # See if it matches the pick
            if pick == letter:
                # If it does, record a successful match and add it to new_progress
                match = True
                new_progress.append(pick)
            else:
                # Otherwise add the default underscore
                new_progress.append('_')
        # If it has been guessed before then keep it
        else:
            new_progress.append(letter)
    # You win if new_progress is all letters
    w = '_' not in new_progress
    return new_progress, w, match

# Function to continually ask for input until it is appropriate
def pick():
    p = input("Pick a letter: ").strip(' ')
    if p == '':
        print('Guess cannot be blank!')
        return pick()  # return the recursive call so the retried guess is used
    return p

# Beginning of the main loop. This will loop forever, starting a new word after each game.
while True:
    # Get a word choice
    result, list_of_letters = new_choice(words)
    print('_' * len(list_of_letters))
    # Progress starts as a list of underscores the length of the word
    progress = ['_'] * len(list_of_letters)
    win = False
    attempts = 6
    # This nested while loop asks the user for a choice, evaluates it and tracks
    # progress until you run out of attempts.
    while attempts != 0:
        print('Attempts: ', attempts)
        # Ask for a pick
        p = pick()
        # Check the pick and update the progress
        progress, win, match = check_pick(list_of_letters, p[0], progress)
        # If check_pick() returns win == True the game is over
        if win:
            print(f'You win! The word was "{result}"')
            # This breaks out of the nested while loop
            break
        # If you haven't won it prints your progress
        print(''.join(progress))
        # If your pick was not a match you lose an attempt
        if not match:
            attempts -= 1
    # If the loop finishes and you haven't won it prints this message
    if not win:
        print(f"You lost! The word was \"{result}\"\nHere's your progress: {''.join(progress)}")
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="donations__button_close">Close</string>
    <string name="donations__flattr">Flattr</string>
    <string name="donations__description">Do you find this application useful?\nSupport its development by sending a donation to the developer!</string>
    <string name="donations__flattr_description">Flattr charges a fee of 10%</string>
    <string name="donations__google_android_market">Google Play Store</string>
    <string name="donations__google_android_market_not_supported_title">In-App Donations are not supported.</string>
    <string name="donations__google_android_market_not_supported">In-App Donations are not supported. Is Google Play Store installed correctly?</string>
    <string name="donations__google_android_market_description">Google charges a fee of 30%</string>
    <string name="donations__google_android_market_donate_button">Donate!</string>
    <string name="donations__google_android_market_text">How much?</string>
    <string name="donations__paypal">PayPal</string>
    <string name="donations__paypal_description">You can choose how much you want to donate after clicking the button!</string>
    <string name="donations__thanks_dialog_title">Thanks!</string>
    <string name="donations__thanks_dialog">Thanks for donating!\nI really appreciate this!</string>
    <string name="donations__alert_dialog_title">Error occurred</string>
    <string name="donations__alert_dialog_no_browser">No browser was found to open a website!</string>
    <string name="donations__bitcoin">Bitcoin</string>
    <string name="donations__bitcoin_description">Send any amount of bitcoin to the developer. Press and hold button to copy address.</string>
    <string name="donations__bitcoin_send_bitcoin_button">Send bitcoin!</string>
    <string name="donations__bitcoin_toast_copy">Bitcoin address has been copied to the clipboard!</string>
</resources>
Q: google place autocomplete entry in mysql issue

I wish to use Google Place Autocomplete for one of my projects, but the issue is that I need to store the country, state and city as separate entries in MySQL. I.e., when someone types "mum" and the result is displayed as Mumbai (city), Maharashtra (state), India (country), then all 3 entries should be stored separately in MySQL. Is it possible to do this?

A: See this page and enjoy.

<!DOCTYPE HTML>
<html>
<head>
  <title>Place Autocomplete Address and type</title>
  <meta name="viewport" content="initial-scale=1.0, user-scalable=no">
  <meta charset="utf-8">
  <link type="text/css" rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500">
  <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?libraries=places&sensor=false&language=it"></script>
</head>
<body>
  <input id="autocomplete" placeholder="Enter your address" type="text">
  <br>
  <br>
  <div id="divToPrint" style="clear: both; float: left; border: solid 1px black; height: 32px;"></div>
  <script>
    /* set a default Bounds */
    var defaultBounds = new google.maps.LatLngBounds(
        new google.maps.LatLng(52.207606672865225, -0.9448242187499911),
        new google.maps.LatLng(52.464377026041284, -0.2746582031249841));
    /* get the input tag with id = autocomplete */
    var input = document.getElementById('autocomplete');
    /* set the options */
    var options = {
      bounds: defaultBounds,
      types: ['geocode'],
    };
    /* register an autocomplete object in a variable named autocomplete */
    var autocomplete = new google.maps.places.Autocomplete(input, options);
    /* listen for the place_changed event on the autocomplete variable and do: */
    google.maps.event.addListener(autocomplete, 'place_changed', function() {
      /* reset defaultBounds on the variable */
      autocomplete.setBounds(defaultBounds);
      /* get the response place */
      var place = autocomplete.getPlace();
      /* initialize componentsTypePlace */
      var componentsTypePlace = "";
      /* get the div tag with id = divToPrint */
      var divToPrint = document.getElementById('divToPrint');
      /* get the length of the returned address */
      var addressLengthplace = place.address_components.length;
      /* cycle through the address_components */
      for (var i = 0; i < addressLengthplace; i++) {
        /* get the type of the address_component */
        componentsTypePlace = place.address_components[i].types[0];
        /* get the value of address_component[i] */
        var val = place.address_components[i].long_name;
        /* insert into the div the value of the address component and its type */
        divToPrint.innerHTML += "//" + componentsTypePlace + " : " + val + " //";
      }
    });
  </script>
</body>
</html>

A:

<link type="text/css" rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500">
<script src="https://maps.googleapis.com/maps/api/js?v=3.exp&sensor=false&libraries=places"></script>
<script>
  var placeSearch, autocomplete;
  var componentForm = {
    street_number: 'short_name',
    route: 'long_name',
    locality: 'long_name',
    administrative_area_level_1: 'long_name',
    country: 'long_name',
    postal_code: 'short_name'
  };

  function initialize() {
    autocomplete = new google.maps.places.Autocomplete(
        /** @type {HTMLInputElement} */(document.getElementById('autocomplete')),
        { types: ['geocode'] });
    google.maps.event.addListener(autocomplete, 'place_changed', function() {
      //fillInAddress();
    });
  }

  function fillInAddress() {
    var place = autocomplete.getPlace();
    for (var component in componentForm) {
      document.getElementById(component).value = '';
      document.getElementById(component).disabled = false;
    }
    for (var i = 0; i < place.address_components.length; i++) {
      var addressType = place.address_components[i].types[0];
      if (componentForm[addressType]) {
        var val = place.address_components[i][componentForm[addressType]];
        document.getElementById(addressType).value = val;
      }
    }
  }

  function geolocate() {
    if (navigator.geolocation) {
      navigator.geolocation.getCurrentPosition(function(position) {
        var geolocation = new google.maps.LatLng(
            position.coords.latitude, position.coords.longitude);
        autocomplete.setBounds(new google.maps.LatLngBounds(geolocation, geolocation));
      });
    }
  }
</script>
<body onload="initialize()">
  <div id="locationField">
    <input id="autocomplete" placeholder="Enter your address" onFocus="geolocate()" type="text"
           value=<?php if(isset($location)): echo $location; else: echo '""'; endif; ?>
           name="data[Location][location]" class="input-block-level" required="required">
  </div>
</body>
Christian Defaye, born Christian Large in August 1934 in Villefranche-sur-Saône and died on 23 July 1997 in Geneva, was a French journalist, host for 23 years of the weekly program Spécial Cinéma and then of the program Tout va bien on the TSR.

Biography

Youth and education

Born in Villefranche-sur-Saône, Christian Defaye moved to Lyon with his mother in 1939 and spent the entire war in a religious boarding school, up to his baccalauréat, followed by a degree in political science in Paris. Noticed alongside Philippe Labro and Pierre Bouteiller in a competition launched by Europe 1, in 1956, at age 22, he was given Europe 1's weather segment by Maurice Siegel. He held it until 1959, when he became a print reporter at Le Progrès de Lyon and covered the investigation into the Deveaux affair with the writer Bernard Clavel, Frédéric Pottecher and Daniel Sarne, published by Denoël in Jacques Lanzmann's collection. In 1963 he founded the movement of international journalists there and joined Le Matin (Tribune de Lausanne), then carried out the investigation Les nazis parmi nous in 1967 with Max Syfrig. He wrote about gastronomy for La Suisse and for Plaisir Gastronomie Magazine.

Television journalist

In 1968 he joined the Télévision suisse romande, beginning his career on the news program Carrefour, confined to the commentary booth for regional news items presented by the stars of the day. He shared an office with Eric Lehmann, who would later become President of the SSR. Christian Defaye and Eric Lehmann were chosen to present it. This was the starting signal for a series of reportage programs made with various TSR crews. With his friend Jean Claude Chanel, he devised the program filmed at the Bochuz penitentiary with Johnny Hallyday, joined by Raymond Devos. In April 1972, with Eric Lehmann, he launched Bon dimanche Mr X, a program that went out to meet the public to discover personalities of French-speaking Switzerland and their stories.

In 1974, with Christian Zender and Christiane Cusin, he launched a pirate program, Spécial cinéma. With no resources at all, the program occupied the Monday slot from 1974 and quickly established itself among the most popular programs of the Télévision Suisse Romande. With more than 950 programs presented and guests received, Christian Defaye is one of the journalist-producers with one of the richest careers in the Swiss audiovisual landscape. Claudette Defaye-Cottagnoud joined him in 1975 to cover film news and publications in the Cinérama segment. They formed a couple in private life as on television, until Christian Defaye's death in 1997. She continued to present the film-news segment of the Télévision Suisse Romande in the program, renamed Vive le Cinéma, until March 2004.

Television channel director

A first mandate was given to Christian Defaye in 1990 to improve the quality of the productions of Télécinéromandie, a pay-TV channel specializing in film-related content. In 1991 the investors asked him to take over as general manager of Télécinéromandie with the aim of breathing new life into it. However, the channel's shaky financial health gave him no opportunity to revive it, and in June 1991 he had to resign himself to giving up. In parallel, he continued his career as a producer and presenter for the TSR.

Television host

From 1991 to 1997 he continued to present Spécial Cinéma, until illness definitively took him off the air. In 1992 he commented on the César awards ceremony for the TSR. In September 1993 he also launched a new interview and debate program entitled Tout va bien, on which he received many guests during its four years on air, including the ousted Federal Councillor Elisabeth Kopp, Nicolas Hayek, Roger Pfund, Henri Dès, and Frédy Girardet. For 24 years he presented Spécial Cinéma and produced more than 950 weekly programs. From 1986 to 1988, with the filmmaker Simon Edestein and the producer Claude Richardet, he ran corporate training courses in on-camera expression for major Swiss companies.

Private life

He married for the first time in 1964, to Heidy Stauffer, in Pully. They divorced two years later. In 1972, in the television studios, he met the continuity announcer Claudette Cottagnoud, who became his wife.

Death

He died on 23 July 1997, taken by illness, at his home in Geneva. Alain Delon paid tribute to him.

Notes and references

External links

Official "Christian Defaye" website
Christian Defaye in the archives of the Radio télévision suisse.
A remarkable interview of Abbé Pierre by Christian Defaye, broadcast on Spécial cinéma on 28 May 1990 (RTS archive).

Born in August 1934. 20th-century French journalist. Died in July 1997. Geneva television personality. Died at age 62.
Q: Creating/Accessing new database

I have created a new database in SQL Server 2008 R2 on my local machine. The problem is that for some reason I CAN'T seem to get it to let me use SQL Server Authentication. I am only able to log in with Windows Authentication. Would someone describe the steps so that I can access a database that I created with a newly created account?

A: This will happen when you do not install SQL Server under Mixed Mode Authentication. To change this:

* open SQL Server Management Studio (SSMS)
* right click the server
* click Properties
* click Security
* under "Server Authentication" select "SQL Server and Windows Authentication mode"

See the docs for complete details: http://msdn.microsoft.com/en-us/library/ms188670(v=sql.105).aspx
\section{Introduction} The direction of spectral energy transfer is a crucial feature of a turbulent system. In contrast to isotropic two- and three-dimensional turbulence, where the spectral energy transfer is predominantly to large or small scales \citep{Kraichnan1967,Kolmogorov1941}, some systems such as rotating stratified turbulence and magnetohydrodynamical turbulence \cite[cf.][]{Alexakis2018} exhibit more complex behavior, where the energy transfer can be bidirectional\footnote{In \cite{Alexakis2018} this scenario is termed a ``split energy cascade'', but we think ``bidirectional energy transfer'' is more intuitive.}. To quantify the magnitude and the direction of energy transfer, theories that link the measurable third-order structure functions to energy fluxes have been developed in the inertial ranges, which are away from both the dissipation and forcing scales for isotropic turbulent systems. E.g., in three-dimensional (3D) isotropic turbulence, where energy transfers downscale, \citet{Kolmogorov1941} found that the longitudinal third-order structure function $\ovl{\delta u_L^3}$ and the energy input rate $\epsilon$, which equals the magnitude of energy flux in a statistically steady state, are exactly related by $\ovl{\delta u_L^3}=-\frac{4}{5}\epsilon r<0$, where $r$ is the distance between the two measured points in the inertial range. In contrast, energy transfers to large scales in two-dimensional (2D) turbulence, and the corresponding relation in the energy inertial range becomes $\ovl{\delta u_L^3}=\frac{3}{2}\epsilon r>0$ \citep{Bernard1999,Lindborg1999,Yakhot1999}. It has since become commonplace to use local fits to power laws of observed third-order structure functions to detect spectral energy transfer directions in a variety of systems \citep{Lindborg2007,Kurien2006,Deusebio2014}, including the solar wind \citep{Sorriso-Valvo2007} and atmospheric flow \citep{Cho2001}. 
Indeed, sometimes just the \textsl{sign} of the observed third-order structure function has been used to estimate the direction of the energy flux at some scale $r$. This is not a robust diagnostic once we consider the shortcomings of the local theories. First, they are valid only in local inertial ranges, which are far away from both the forcing and dissipation scales. Thus, when applying to measured data, one has to determine where the inertial range is in the first place, i.e., in order to use the \citet{Kolmogorov1941} theory one needs to find where the third-order structure function is linear in $r$. But it is possible that different researchers choose different data ranges, and inertial ranges might be short and hard to identify using imperfect measured data, all of which leads to uncertainties. Second, the local, inertial-range theories by definition fail at the forcing scales, which prevents the important detection of forcing scales, e.g., for geophysical flows. Third, previous theories were developed for scenarios with unidirectional energy transfer, but there is good evidence that in natural turbulence, e.g., in the atmosphere and oceans, energy transfers simultaneously to both large and small scales \citep{Marino2015,Pouquet2017}. The direction of energy flux is essential for these structure-function theories, e.g., \citet{Xie2018} illustrate how the 3D \citep{Kolmogorov1941} and 2D \citep{Kraichnan1967} turbulence must be treated differently when taking the infinite Reynolds number limit in the K\'arm\'an-Howarth-Monin (KHM) \cite[cf.][]{Monin1975,Frisch1995} equation, because of their opposite directions of energy transfer. So it is questionable to directly apply the previous theories to scenarios with bidirectional energy transfer. 
Thus, we want to obtain a forcing-resolving global-in-scale theory that not only captures different inertial ranges in one formula but also applies to bidirectional energy transfer, allowing us to make use of the measured data over a wide range that includes the forcing scales. Here, we derive such a theory for isotropic 2D turbulence and test it against a numerical simulation of 2D MHD turbulence with bidirectional energy transfer \citep{Seshasayanan2014,Seshasayanan2016}, which is a limiting case of 3D MHD with strong background magnetic field \citep{Gallet2015}. We also show how to adapt our theory to turbulent flows in 1D or 3D. \section{Theoretical framework} We start from the generic K\'arm\'an-Howarth-Monin (KHM) equation for two-point correlations, in which the nonlinear terms appear via the divergence of a third-order vector field: \begin{equation} \frac{1}{2}\frac{\partial}{\partial t}C - \frac{1}{4}\nabla\!\cdot\!\bV = D + P. \label{abs_eq} \end{equation} Here $C$ is the second-order correlation function, $\bV$ is a vector of third-order structure functions if the system has a quadratic nonlinearity, and $D$ and $P$ describe the effects of dissipation and external forcing, respectively. For example, in the case of 2D homogeneous isotropic turbulence studied by \cite{Xie2018}, \begin{subequations} \begin{align} C &= \ovl{\bu\!\cdot\!\bu'},\\ \bV &= \ovl{\delta\bu |\delta\bu|^2},\\ D &= -\alpha\ovl{\bu\!\cdot\!\bu'}+\nu\nabla^2\ovl{\bu\!\cdot\!\bu'}, \label{2d_damp}\\ P &= \frac{1}{2}\br{ \ovl{\bF\!\cdot\!\bu'} + \ovl{\bF'\!\cdot\!\bu} }, \end{align} \end{subequations} where $\bu'=\bu(\bx+\bor)$ with $\bor$ the displacement between two measurement points, $\delta\bu=\bu'-\bu$, $\alpha$ is a Rayleigh damping rate, $\nu$ is the viscosity, $\bF$ is the external forcing and the overline denotes the ensemble average. 
For statistically steady turbulent states, (\ref{abs_eq}) simplifies to \begin{equation} -\frac{1}{4}\nabla\!\cdot\!\bV = D + P \label{abs_eq_stea}. \end{equation} The Fourier transform of $C$ yields the power spectrum as a function of wavenumber $\bk$ so by applying the Fourier transform to (\ref{abs_eq}) and integrating over the wavenumber shell $|\bk| < K$ it follows that (cf.\ \S 6 in \cite{Frisch1995}) \begin{equation} F(K) = -\int_{|\bk|\leq K}^{} \frac{1}{4}\widehat{\nabla\!\cdot\!\bV} \dd \bk \label{F_1} \end{equation} is the nonlinear spectral energy transfer rate across the wavenumber shell with radius $K$, i.e., a positive $F(K)>0$ measures the downscale energy transfer from larger scales ($|\bk| < K$) to smaller scales ($|\bk| > K$) in spectral space. Under the assumption of isotropy, the third-order structure-function vector is \begin{equation} \bV = V(r) \hat{\bor}, \end{equation} where $r=|\bor|$ and $\hat{\bor}$ is a unit vector pointing in the direction of $\bor$. Thus, in two dimensions (\ref{F_1}) can be expressed as (cf.\ (5.8) in \cite{Xie2018}) \begin{equation} F(K) = -\frac{K^2}{4}\int_{0}^{\infty} V(r) J_2(Kr) \dd r, \label{F_V} \end{equation} where $J_2$ is the second-order Bessel function. Equivalently, using the orthogonality of Bessel functions, we can invert (\ref{F_V}) to obtain \begin{equation} V(r) = -4r\int_{0}^{\infty}\frac{1}{K}F(K)J_2(Kr) \dd K. \label{V_F} \end{equation} \subsection{Non-dissipative theory} \label{sec:non-diss-energy} We now consider first an idealized non-dissipative scenario where the external forcing is sharply localized at some wavenumber $k_f$ with corresponding length scale $l_f=1/k_f$ whilst the dissipation at small and large scales has been pushed to $K\to \infty$ and $K\to 0$, respectively. Corrections due to finite-scale dissipation are deferred until \S~\ref{simba}. 
Therefore, at finite $K$ we can argue that $F(K)$ must take the piecewise constant form \begin{equation} F(K) =-\epsilon_\mathrm{u} + (\epsilon_\mathrm{u}+\epsilon_\mathrm{d}) H(K-k_f). \label{F_heavi} \end{equation} Here $\epsilon_\mathrm{u} $ and $\epsilon_\mathrm{d}$ are the magnitudes of upscale and downscale energy fluxes, $H$ is the Heaviside function, and \begin{equation} \label{eq:1} \epsilon=\epsilon_\mathrm{u}+\epsilon_\mathrm{d} \end{equation} is the total energy input rate. Substituting (\ref{F_heavi}) into (\ref{V_F}) yields the corresponding non-dissipative expression \begin{equation} V(r) = 2\epsilon_\mathrm{u} r - 4 \frac{\epsilon}{k_f}J_1(k_fr), \label{V_theo} \end{equation} which extends Kolmogorov's classical inertial range theory by including the forcing scale as well as bidirectional energy transfer. In contrast to the classic definition of inertial range, we do not assume that the considered scale $r$ is far away from the forcing scale $l_f$, thus, (\ref{V_theo}) is a forcing-scale-resolving expression and we call it a global solution. We illustrate its behavior in Figure~\ref{fig_V_illus} using $k_f=1$, $\epsilon=1$, and various values of the fractional upscale flux \begin{equation} \label{eq:2} R = \frac{\epsilon_\mathrm{u}}{\epsilon}. \end{equation} In the limit $R=1$ of completely upscale energy flux the present (\ref{V_theo}) reduces to the second equality in (4.9) already derived in \cite{Xie2018}. Notably, $V(r)$ is sign-definite and positive \textsl{only} if $R=1$, i.e., for all values $R<1$ the sign of $V(r)$ changes at least once. Also, the case with $R=0.02$ is almost indistinguishable from the limiting case $R=0$ for downward-only energy flux, but only if $k_fr\ll 10$. Otherwise their difference becomes obvious as $k_fr\gg10$. 
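To make this sign behavior concrete, the short script below (our illustration; the helper name \texttt{V\_global} is ours, and SciPy is assumed) evaluates (\ref{V_theo}) and checks both the sign-definiteness claims and the leading asymptotic behaviors in the two limits.

```python
import numpy as np
from scipy.special import j1

def V_global(r, eps, R, kf):
    """Global third-order structure function of eq. (V_theo):
    V(r) = 2*eps_u*r - 4*(eps/kf)*J1(kf*r), with eps_u = R*eps."""
    return 2.0 * R * eps * r - 4.0 * (eps / kf) * j1(kf * r)

r = np.linspace(0.01, 100.0, 100000)

# Sign-definiteness: V(r) > 0 everywhere only for purely upscale transfer (R = 1);
# for R < 1 the sign alternates, checked here for R = 0.02.
sign_definite = bool(np.all(V_global(r, 1.0, 1.0, 1.0) > 0.0))
changes_sign = bool(np.any(np.diff(np.sign(V_global(r, 1.0, 0.02, 1.0))) != 0.0))

# Asymptotics: V ~ -2*eps_d*r + eps*kf^2*r^3/4 for kf*r << 1,
# and V ~ 2*eps_u*r (with an O((kf*r)^{-1/2}) correction) for kf*r >> 1.
eps, R, kf = 1.0, 0.6, 1.0
r_s, r_l = 1e-2, 1e3
err_small = abs(V_global(r_s, eps, R, kf)
                - (-2.0 * (1.0 - R) * eps * r_s + 0.25 * eps * kf**2 * r_s**3))
rel_err_large = abs(V_global(r_l, eps, R, kf) - 2.0 * R * eps * r_l) / (2.0 * R * eps * r_l)
```

The small-$r$ residual is of order $(k_fr)^5$ and the large-$r$ relative error of order $(k_fr)^{-3/2}$, consistent with the expansions quoted in the text.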
In the intermediate range ($k_fr\sim10$), where $r$ is larger than the forcing scale $1/k_f$ and almost all energy transfers upscale, the structure function $V$ with $R=0.02$ still has alternating signs, which illustrates once more that one cannot safely read off the direction of the spectral transfer just from the sign of the third-order structure function. Naturally, in the limits of large and small $k_fr$ the global expression (\ref{V_theo}) recovers the classic local results \cite[cf.][]{Bernard1999,Lindborg1999,Yakhot1999} asymptotically, i.e., \begin{equation}\label{V_expan} V(r) =\left\{\begin{matrix} \underset{\mathrm{downscale\,energy}}{\underbrace{-2\epsilon_\mathrm{d} r}} + \underset{\mathrm{``enstrophy''}}{\underbrace{\dfrac{1}{4}\epsilon k_f^2r^3}} + O\br{(k_fr)^5}, \quad \mathrm{when} \quad k_fr\ll 1, \\ \underset{\mathrm{upscale\,energy}}{\underbrace{2\epsilon_\mathrm{u} r}} + O\br{(k_fr)^{-1/2}}, \quad \mathrm{when} \quad k_fr\gg 1. \end{matrix}\right. \end{equation} Interestingly, the small-scale ``enstrophy'' term recovers the classical enstrophy-cascade result of 2D turbulence when $\epsilon_\mathrm{d}=0$; if $\epsilon_\mathrm{d}\neq0$ there may not even be any enstrophy conservation in the turbulent system, yet this term nonetheless arises in the expansion of $V(r)$ in all cases. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{V_illus} \caption{Theoretical expression (\ref{V_theo}) with $k_f=1$, $\epsilon=1$ and different values of $R\equiv\epsilon_\mathrm{u}/\epsilon$. At the right end ($k_fr=10^2$), the curves align with descending $R$ from above to below. Solid and dashed lines denote positive and negative values, respectively. The black lines illustrate classical power laws.} \label{fig_V_illus} \end{figure} \subsection{Dissipative corrections} \label{simba} For realistic turbulence, dissipation brings about corrections to (\ref{V_theo}) at large and small $r$.
E.g., in 2D turbulence a linear Ekman damping introduces a logarithmic correction to the energy spectrum in the enstrophy inertial range \citep{Kraichnan1971}, and it bounds the range of the inverse energy cascade \cite[e.g.][]{SmithKS2002}. We do not want to introduce a closure that links second- and third-order structure functions in order to calculate their shapes; instead, we simply derive an exact relation that links them diagnostically. The derivation starts by distinguishing the large- and small-scale damping terms, which dominantly absorb the upscale and downscale energy fluxes, respectively. This distinction is necessary because the two types of damping influence the inertial range differently in the limit of zero viscosity: the large-scale damping makes a leading-order contribution, while the effect of the small-scale damping is of higher order compared with that of the external forcing, as shown in \cite{Xie2018}. Let us write the dissipation term in (\ref{abs_eq}) as \begin{equation} \label{eq:3} D=\mc{L}_{D} C= D_l + D_s = \mc{L}_{Dl} C + \mc{L}_{Ds}C \end{equation} where the operator $\mc{L}_{D}$ is the sum of large- and small-scale parts $\mc{L}_{Dl}$ and $\mc{L}_{Ds}$, respectively. For example, \cite{Xie2018} used $\mc{L}_{Dl}=-\alpha$ and $\mc{L}_{Ds}=\nu\nabla^2$ for Rayleigh damping and Navier--Stokes diffusion. The large- and small-scale net dissipation rates are then $\epsilon_\mathrm{u} =D_l|_{r=0}$ and $\epsilon_\mathrm{d}=D_s|_{r=0}$, respectively. For two-dimensional isotropic turbulence, integrating (\ref{abs_eq_stea}) over a disk of radius $r$ yields \begin{equation}\label{V_damp0} V_{d}(r) = -\frac{4}{r}\int_{0}^{r}sD_s(s)\dd s -\frac{4}{r}\int_{0}^{r}s\br{D_l(s)+\epsilon_\mathrm{u}}\dd s + 2\epsilon_\mathrm{u}r -\frac{4}{r}\int_{0}^{r}sP(s)\dd s.
\end{equation} If the external forcing is white-noise in time and centered at wavenumber $k_f$, then (\ref{V_damp0}) becomes \begin{equation}\label{V_damp1} \begin{aligned} V_{d} = -\frac{4}{r}\int_{0}^{r}sD_s(s)\dd s -\frac{4}{r}\int_{0}^{r}s\br{D_l(s)+\epsilon_\mathrm{u}}\dd s + 2\epsilon_\mathrm{u}r -4\frac{\epsilon}{k_f}J_1(k_fr). \end{aligned} \end{equation} This is the sought-after dissipative correction to (\ref{V_theo}). We note that for a general 2D turbulent system we cannot rigorously prove that the finite damping effect tends to zero in the limit of zero viscosity and is therefore negligible compared with the limiting result; to do so, one must consider a specific turbulent system with a prescribed damping term, one such example being the 2D turbulence studied by \citet{Xie2018}. Note that the derivation requires distinguishing large- and small-scale dissipation, and the smallness of the finite damping effect in the zero-viscosity limit is consistent with the derivation starting from the idealized spectral energy flux (\ref{F_heavi}). In the next section we check both the non-dissipative result (\ref{V_theo}) and its dissipative correction (\ref{V_damp1}) in an MHD example. \section{Application to two-dimensional MHD turbulence} \label{sec_MHD} To test our heuristic theory we performed numerical simulations of a 2D MHD turbulent flow in which the velocity $\bv$ and the magnetic field $\bB$ are coplanar. This is an ideal test system because \citet{Seshasayanan2014} found bidirectional energy transfer in this 2D system, and its KHM equation has the generic form (\ref{abs_eq}) with a third-order structure-function vector \cite[cf.][]{Podesta2008} defined by \begin{equation} \bV = \ovl{\delta\bu \br{\delta\bu\!\cdot\!\delta\bu}} + \ovl{\delta\bu \br{\delta\bB\!\cdot\!\delta\bB}} -2\ovl{\delta\bB \br{\delta\bB\!\cdot\!\delta\bu}}.
\label{V_MHD} \end{equation} Here the magnetic field is normalized to have velocity units such that $C= \ovl{\bu\!\cdot\!\bu'} + \ovl{\bB\!\cdot\!\bB'}$. The numerical simulation uses a Fourier pseudospectral method with 2/3 dealiasing in space, a resolution of $512\times512$, and a fourth-order explicit Runge--Kutta scheme in time, in which the nonlinear terms are treated explicitly and the linear terms implicitly using an integrating-factor method. We take the forcing wavenumber to be $k_f=32$; the momentum and magnetic equations are forced by random forces that are white-noise in time, and we control the kinetic energy input rate to be 100 times that of the magnetic energy, a case found to exhibit bidirectional energy transfer \citep{Seshasayanan2016}. We add hypoviscosity with operator $\nabla^{-2}$ and hyperviscosity with operator $\nabla^6$ to both the velocity and magnetic fields to dissipate the energy transferred to large and small scales, respectively, so that the turbulence reaches a statistically steady state. We show in the left panel of Figure~\ref{fig_energy_trans_V} the spectral transfer $F(K)$ of total energy, which is the sum of the kinetic and magnetic energies. Here the spectral energy transfer is calculated directly in Fourier space from the pseudospectral code, without making use of the third-order structure function in physical space. As expected, bidirectional energy transfer is observed: around $60 \%$ of the total energy transfers upscale and is mainly dissipated by the hypoviscosity, while the other $40 \%$ transfers downscale and is mainly dissipated by the hyperviscosity; this corresponds to $R\approx0.6$. In Table~\ref{Table} the value of $\epsilon_\mathrm{u}$ is obtained by calculating the amount of energy dissipated by the hypoviscosity, and the value of $\epsilon$ is calculated from the white-noise forcing applied in the numerical simulation.
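The integrating-factor treatment of the stiff linear terms mentioned above can be illustrated on a single mode amplitude (a minimal Python sketch under our own assumptions; the coefficient $L$, nonlinearity, and step size are illustrative and not taken from the simulation):

```python
import math

def rk4_integrating_factor(u0, L, N, dt, nsteps):
    """Advance du/dt = L*u + N(u) for a single (real) mode amplitude.
    The stiff linear term L is removed exactly by the substitution
    v(t) = exp(-L*t) * u(t); classical RK4 is then applied to
    dv/dt = exp(-L*t) * N(exp(L*t) * v)."""
    u = u0
    for _ in range(nsteps):
        # measure time from the start of the step, where v = u
        f = lambda s, v: math.exp(-L * s) * N(math.exp(L * s) * v)
        k1 = f(0.0, u)
        k2 = f(0.5 * dt, u + 0.5 * dt * k1)
        k3 = f(0.5 * dt, u + 0.5 * dt * k2)
        k4 = f(dt, u + dt * k3)
        v_end = u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        u = math.exp(L * dt) * v_end  # map back to u at the end of the step
    return u

# With N = 0 the scheme integrates the stiff linear part exactly for any dt,
# which is the point of treating the viscous terms with an integrating factor:
u = rk4_integrating_factor(1.0, -50.0, lambda x: 0.0, dt=0.5, nsteps=4)
```

In the actual pseudospectral code the same substitution is applied mode by mode, with $L$ given by the (hypo/hyper)viscous symbols of $\nabla^{-2}$ and $\nabla^6$.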
Now, the right panel of Figure \ref{fig_energy_trans_V} shows the comparison of structure functions obtained in several different ways. The blue curve shows the structure function directly measured from the statistics of the velocity and magnetic fields. The black curve is the theoretical formula (\ref{V_theo}) using the observed value of $\epsilon_\mathrm{u}$ as well as the forcing wavenumber $k_f=32$ and the total energy transfer $\epsilon$ known from the numerical setup. The red curve is a least-squares fit of the theoretical result (\ref{V_theo}) using only the four measured points from the blue curve marked by green squares. We choose these four points as a test because we need to capture the sign transition of the third-order structure function, and we intentionally avoid choosing points in the classic inertial ranges to distinguish our theory from previous ones: the three leftmost points lie around the region of sign change and the last point lies around the forcing scale. The parameters used in the fits are also shown in Table~\ref{Table}. This comparison shows that the fit based on our global theory, using only four measured structure-function values, works well in determining the bidirectional energy flux rate to within a 5\% error. \begin{table} \centering \setlength{\tabcolsep}{0.5em} {\renewcommand{\arraystretch}{1.5} \begin{tabular}{c | c c c} & $\epsilon$ & $R=\epsilon_\mathrm{u}/\epsilon$ & $k_f$ \\ \hline $V_{fit1}$ & $1.000\times 10^{-2}$ & $0.5845$ & $32.00$ \\ $V_{fit2}$ & $0.958\times 10^{-2}$ & $0.5786$ & $32.06$ \\ \end{tabular} } \caption{Comparison of the coefficients of the two fitting curves shown in the right panel of Figure \ref{fig_energy_trans_V}.} \label{Table} \end{table} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{energy_flux_MHD_2} \includegraphics[width=0.49\linewidth]{V1D_MHD_4} \caption{Left panel: total observed energy transfer normalized by the total energy input rate $\epsilon$.
The circle marks the forcing wavenumber $k_f=32$. Right panel: comparison of third-order structure functions obtained from the statistics of the numerical data (blue), two zero-viscosity fitting curves (black and red), and the finite-damping fitting curve (yellow). The four green boxes mark the four points used for fitting 2 (red). In the legend, the symbols ``$+$" and ``$-$" denote positive and negative values, respectively.} \label{fig_energy_trans_V} \end{figure} The right panel of Figure \ref{fig_energy_trans_V} also shows that the dissipation at large scales due to hypoviscosity brings about a non-negligible discrepancy between the theory (\ref{V_theo}) in the zero-viscosity limit and the numerical data. To capture this large-scale dissipative correction we include the hypoviscosity $\mc{L}_{Dl}=\alpha\nabla^{-2}$ but omit the hyperviscosity effect in (\ref{V_damp1}), obtaining the viscous expression of the third-order structure function \begin{equation}\label{V_dampMHD} \begin{aligned} V_{d} &= -\frac{4}{r}\int_{0}^{r} s\sbr{\alpha\nabla^{-2}\br{\ovl{\bu\cdot\bu'}(s)+\ovl{\bB\cdot\bB'}(s)}-\epsilon_\mathrm{u}} \dd s + 2 \epsilon_\mathrm{u}r - \frac{4\epsilon}{k_f}J_1(k_fr)&\\ &= - \frac{2\alpha}{r}\int_{0}^{r} s\br{\ovl{\delta\psi^2}(s)+\ovl{\delta A^2}(s)} \dd s + 2 \epsilon_\mathrm{u}r - \frac{4\epsilon}{k_f}J_1(k_fr),& \end{aligned} \end{equation} where $\psi$ and $A$ are the stream functions for $\bu$ and $\bB$, respectively, and we have used the identity $\nabla^2\ovl{AA'} = -\ovl{\nabla A\cdot\nabla'A'}$, which holds for arbitrary scalar fields $A$ with isotropic statistics. The excellent match between (\ref{V_dampMHD}) and the numerical data verifies the validity of (\ref{V_damp1}). Thus, if the damping form is known and the corresponding second-order structure function can be measured, we can use them to fit the data over a broader range to detect the energy transfer.
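The three-parameter fitting procedure described above can be sketched with a standard nonlinear least-squares routine (a minimal sketch; the sampled radii and ``measured'' values below are synthetic, generated from the zero-viscosity 2D prediction $V(r)=2\epsilon_\mathrm{u}r-4(\epsilon/k_f)J_1(k_fr)$ itself rather than taken from the simulation data):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j1  # Bessel function of the first kind, order 1

def V_theo(r, eps, R, kf):
    # Zero-viscosity 2D prediction: V = 2*R*eps*r - 4*(eps/kf)*J1(kf*r),
    # with R = eps_u/eps the fraction of the energy input transferred upscale.
    return 2.0 * R * eps * r - 4.0 * eps / kf * j1(kf * r)

# Synthetic "measurements" straddling the sign change and the forcing scale
eps_true, R_true, kf_true = 1.0e-2, 0.6, 32.0
r_pts = np.array([0.02, 0.04, 0.07, 0.1, 0.15, 0.2])
V_pts = V_theo(r_pts, eps_true, R_true, kf_true)

# Recover (eps, R, kf) from the handful of sampled points
popt, _ = curve_fit(V_theo, r_pts, V_pts, p0=[5e-3, 0.5, 30.0])
```

Because the global expression spans the sign change and the forcing scale, a few well-placed points suffice to constrain all three parameters, as in the four-point fit shown in the figure.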
\section{Discussion} To test our global results (\ref{V_theo}) and (\ref{V_damp1}), we deliberately used a relatively low-resolution 2D MHD simulation, which provides imperfect inertial ranges. This severely limits the applicability of classic local theories, but not of the new global theory. Indeed, due to the limited resolution, the directly measured structure function (blue) shows no clear energy inertial range with $V\sim r$ behavior. Similarly, because of the non-negligible influence of the forcing scale, a straight line corresponding to $V\sim r^3$ is also not evident. These features make the traditional procedure, based on classic local theories, of fitting straight lines in a log-log plot to obtain the energy flux impossible, whereas our global theory can achieve it. In addition, since our global theory contains only three parameters and applies over a broader range containing the forcing scale, we can make use of more of the data and thereby detect the forcing scale. The sublimits of our global expression (\ref{V_theo}) match the classic inertial-range results (cf. (\ref{V_expan})), implying that our theory captures the transitions between inertial ranges. It also implies that simply ``gluing" together the theories of different inertial ranges for turbulence with unidirectional energy transfer to obtain a global theory is fallacious, because the constant in front of the $r^3$ term depends on the total energy input rather than on the upscale-transferred energy alone. This expansion also brings a new perspective on the ``enstrophy" range. In \citet{Kraichnan1967}'s argument, the simultaneous conservation of both energy and enstrophy results in an upscale energy transfer and a downscale enstrophy transfer, and correspondingly the third-order structure function has an $r^3$ dependence in the enstrophy inertial range.
However, our theory shows that as long as there exists a nonzero upscale energy transfer, an $r^3$ dependence of the third-order structure function arises as a natural consequence of the asymptotic expansion; a constant downscale flux of ``enstrophy" (a quantity conserved in the absence of external forcing and dissipation) is not necessary, which is the case for 2D MHD turbulence. In this paper, we present a general framework for a global inertial-range theory of the third-order structure function that captures bidirectional energy transfer and resolves the forcing scale in homogeneous isotropic turbulence. This theory has three parameters, $\epsilon_\mathrm{u}$, $\epsilon_\mathrm{d}$ and $l_f$, which describe the upscale energy flux magnitude, the downscale energy flux magnitude, and the forcing scale. The classic local theories that are applicable away from the forcing scale are recovered as sublimits of this global theory, which captures the transitions as well. In the present theory we assumed that the energy input is $\delta$-centered at one wavenumber $k_f$; however, since a $\delta$-centered external forcing corresponds to solving for a Green's function of equation (\ref{abs_eq_stea}), the third-order structure function for a general distribution of energy input rate can be obtained by convolution. Thus, our theory can be used to detect the unknown distribution of energy input for a 2D turbulent system. As to the finite damping effect, it is shown in \cite{Xie2018} that for 2D turbulence with damping operator $\mc{L}_D=-\alpha+\nu\nabla^2$ the damping effect in (\ref{V_damp1}) tends to zero as $\alpha$ and $\nu$ tend to zero. But the corresponding smallness of the damping effect in (\ref{V_damp1}) remains to be studied carefully in other turbulence systems.
It is also important to justify that the distinct treatment of the large- and small-scale damping effects is general, i.e., that in the limit of zero viscosity the large-scale damping impacts the third-order structure function at leading order while the influence of the small-scale damping is negligible. In the main text of this paper we show only the 2D theory, because it is the case we can test numerically. We close the paper by presenting the third-order structure-function expressions analogous to \eqref{V_theo} for 1D (Burgers) and 3D isotropic turbulence with bidirectional energy transfer: \begin{equation}\label{1D_V} \begin{aligned} \textrm{1D:} \quad V&= 4\epsilon_\mathrm{u}r - 4 \epsilon \frac{\sin\br{k_fr}}{k_f}\\ & = \left\{\begin{matrix} -4\epsilon_\mathrm{d} r + \dfrac{2}{3}\epsilon k_f^2r^3 + O\br{(k_fr)^5}, \quad \br{k_fr\ll 1}, \\ 4\epsilon_\mathrm{u} r+ O\br{1}, \quad \br{k_fr\gg 1}. \end{matrix}\right. \end{aligned} \end{equation} \begin{equation}\label{3D_V} \begin{aligned} \textrm{3D:} \quad V&= \frac{4}{3}\epsilon_\mathrm{u}r - 4 \epsilon \frac{\sin\br{k_fr}-k_fr\cos\br{k_fr}}{k_f^3r^2}\\ & = \left\{\begin{matrix} -\dfrac{4}{3}\epsilon_\mathrm{d} r + \dfrac{2}{15}\epsilon k_f^2r^3 + O\br{(k_fr)^5}, \quad \br{k_fr\ll 1}, \\ \dfrac{4}{3}\epsilon_\mathrm{u} r+ O\br{\br{k_fr}^{-1}}, \quad \br{k_fr\gg 1}. \end{matrix}\right. \end{aligned} \end{equation} Note that the 3D result for small $k_fr$ gives $V = -\frac{4}{3}\epsilon_\mathrm{d}r$ at leading order. For classic 3D turbulence, considering the relation between $V$ and the longitudinal third-order structure function \begin{equation} V = \ovl{\delta u_L^3} + \frac{1}{3}\frac{\dd }{\dd r}(r\ovl{\delta u_L^3}) \end{equation} we recover the $-4/5$ law of \citet{Kolmogorov1941}'s theory: $\ovl{\delta u_L^3}=-\frac{4}{5}\epsilon_\mathrm{d}r$. The detailed derivations of the 1D and 3D expressions are given in \S \ref{sec_1D3D}. \vspace{1em} We are grateful to Andrew Majda and Shafer Smith for discussions that helped to improve this paper.
We gratefully acknowledge financial support from the United States National Science Foundation grant DMS-1312159 and Office of Naval Research grant N00014-15-1-2355.
\section{INTRODUCTION} \label{s_intro} The nature of the second parameter, aside from metallicity, that determines the morphology of the horizontal branch (HB) in globular clusters (GCs) is one of the most longstanding problems of modern astrophysics. In fact, a lower metallicity favors the formation of hotter and bluer HB stars, but clusters with the same metallicity can show very different HB morphologies \citep{Sandage67}, and an HB extended far toward the blue is observed even in some metal-rich GCs \citep[e.g.,][]{Rich97}. The helium abundance was proposed early on, among other candidates, as this second parameter (\citealt{Sweigart97,DAntona02}; see \citealt{Catelan09b} for a review), because, during the He-burning phase, helium-rich stars are expected to be hotter than objects of canonical composition. This model has recently drawn much attention, triggered by the discovery of multiple stellar populations in GCs \citep{Piotto05,Piotto07}. In fact, \citet{Piotto05} showed that a different metallicity is not the cause of the main-sequence split observed in $\omega$\,Centauri \citep{Bedin04}, and the only explanation is that the bluer sequence is greatly enriched in helium, about 50\% more He-rich (Y=0.38) than normal metal-poor GC stars \citep{Norris04,Piotto05}. In this scenario, the blue HB stars observed in many GCs could be the progeny of the He-enriched second stellar generation. Unfortunately, diffusion processes completely alter the surface chemical abundances of hot HB stars \citep[e.g.,][]{Behr99}, preventing a direct demonstration of the connection between multiple populations and HB morphology. Nevertheless, an increased helium content can be indirectly deduced from other observable quantities, because He-enriched HB stars are predicted to be brighter \citep{Sweigart87} and to occupy different loci in the temperature--gravity plane \citep{Moehler03}.
In this Letter, we present the results of our investigation aimed to search for an indirect indication of helium enrichment among blue HB stars in $\omega$\,Centauri, to test the He-enrichment scenario and its predicted effects on the HB morphology. This cluster is the ideal target for our purpose, because it hosts a very complex stellar population, comprising three known MSs and six sub-giant branches \citep{Bellini10}. \section{OBSERVATIONS AND DATA ANALYSIS} \label{s_obs} \begin{figure} \epsscale{1.} \plotone{fig1.ps} \caption{{\it Upper panel}: distribution of $\omega$\,Cen stars in the temperature-gravity plane. The Zero-Age and Terminal-Age HB (ZAHB and TAHB, respectively), for both canonical and He-enriched models are also indicated. {\it Lower panel}: comparison of stars in $\omega$\,Cen (full dots) and members of three other clusters (open circles). The vertical coordinate is the difference between the stellar gravity and the corresponding value of the canonical ZAHB at the same temperature. The plot is thus analogous to the T$_\mathrm{eff}$--$\log{\mathrm (g)}$ space of the upper panel, but the horizontal axis coincides with the canonical ZAHB.} \label{f_tg} \end{figure} We selected 116 target stars from the ground-based photometry of \citet{Bellini09}. They span a wide portion of the cluster HB, from the blue edge of the RR Lyrae gap to the Blue Hook objects (T$_\mathrm{eff}\geq$ 33\,000~K) 5 mag fainter. In this Letter, we will focus on the comparison of $\omega$\,Cen stars with other clusters and theoretical models. We therefore limit the analysis to T$_\mathrm{eff}\leq$33\,000~K, because hotter Blue Hook stars are not included in the canonical models and are not present in the comparison clusters. A full analysis of the results, including the Blue Hook, will be presented in a forthcoming paper. The data were collected at Paranal Observatory in service mode between 2006 January and April, with FORS2@UT1 in MXU mode. 
The selected 600B grism, coupled with 0$\farcs$5-wide slits, gave spectra of resolution R$\approx1600$ from the atmospheric cutoff to approximately 5900\,\AA. Three 45 minute spectra for faint stars and two 45 minute spectra for bright ones were acquired. Data were reduced with the FORS pipeline\footnote{\small{www.eso.org/sci/data-processing/software/pipelines/index.html}}, and the spectra were extracted under IRAF\footnote{\small{IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.}}, subtracting the nearby sky spectrum within the same slit. Finally, the spectrum of the standard star LTT4816 \citep{Hamuy92}, secured during observations, was used to flux-calibrate the object spectra, whose resulting signal-to-noise ratio was between 50 and 150. Heliocentric radial velocities (RV) were measured with the IRAF {\it fxcor} task, cross-correlating \citep{Tonry79} the spectra with synthetic templates of adequate parameters as estimated from the stellar position in the color-magnitude diagram (CMD). Considering the internal velocity dispersion of the cluster ($\sim$13\,km\,s$^{-1}$, \citealt{Sollima05}) and the errors of measurements (about 30~km~s$^{-1}$), the RV of all the observed stars is consistent with cluster membership. The atmospheric parameters of target stars were measured fitting the observed Balmer and helium lines with stellar model atmospheres, computed with ATLAS9 \citep{Kurucz93}. We used Lemke's version\footnote{\small{For a description see http://a400.sternwarte.uni-erlangen.de/$\sim$ai26/linfit/linfor.html}} of the LINFOR program (developed originally by Holweger, Steffen, and Steenbock at Kiel University) to compute a grid of synthetic spectra. 
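The cross-correlation technique behind the RV measurement described above can be illustrated with a toy example (a minimal sketch, not the IRAF {\it fxcor} implementation; the Gaussian absorption line, grid spacing, and velocity shift are hypothetical):

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

# A uniform grid in ln(lambda) turns a Doppler shift into a constant
# pixel shift, which is what the cross-correlation exploits.
dlnl = 1e-5                      # ~3 km/s per pixel
lnl = np.arange(0.0, 2e-2, dlnl)

def line(x, center, depth=0.6, width=3e-4):
    """Normalized continuum with a Gaussian absorption line."""
    return 1.0 - depth * np.exp(-0.5 * ((x - center) / width) ** 2)

template = line(lnl, 1e-2)
rv_true = 90.0                                # km/s, hypothetical
spectrum = line(lnl, 1e-2 + rv_true / C_KMS)  # shift in ln(lambda) is v/c

# Cross-correlate the continuum-subtracted signals and convert the
# peak lag (in pixels) back to a velocity.
xc = np.correlate(spectrum - 1.0, template - 1.0, mode="full")
lag = xc.argmax() - (len(lnl) - 1)
rv_est = lag * dlnl * C_KMS
```

The recovered velocity is accurate to about one pixel ($\sim$3 km/s here); the quoted 30 km/s errors of the actual measurements are dominated by the lower spectral resolution and noise.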
Stars showing iron lines in the 4450--4600 \AA\ region, indicating active atmospheric diffusion \citep{Moehler99}, or being hotter than 13\,000~K (as deduced from the position in the CMD), were fitted with metal-rich models ([M/H]=+0.5) with varying surface helium abundance, to account for the effects of radiative levitation of heavy elements \citep[e.g.,][]{Moehler00}. This was done even for five stars hotter than 11\,500~K not satisfying these criteria, because the observed helium lines were clearly too weak compared to the fitted models when diffusion was not taken into account. The cooler stars were fitted with cluster-metallicity models ([M/H]=$-$1.5) with helium abundance fixed at the solar value, because their He lines are very weak and not observed at our resolution. In a few cases of doubt about the correct model to use, we adopted the set of model spectra that returned the lower $\chi^2$ of the fit. The best fit to the observed spectra was obtained by means of the routines developed by \citet{Bergeron92} and \citet{Saffer94}, as modified by \citet{Napiwotzki99}, which employ a $\chi^2$ test. The spectral lines used in the procedure included the Balmer series from H$_\beta$ to H$_{12}$, except for H$_\epsilon$ to avoid the blended Ca\,II~H line, four He\,I lines (4026 \AA, 4388 \AA, 4471 \AA, and 4921 \AA) for stars whose helium abundance was not kept fixed, and the He\,II lines 4542 \AA\ and 4686 \AA\ when visible in the spectra of the hottest stars. The routines estimate the errors on the derived parameters from the $\chi^2$ of the fit \citep[see][]{Moehler99}, but neglect all other sources of errors (e.g., defects in normalization, flat-field correction, sky subtraction). We therefore obtained a better estimate of the true errors multiplying the output values by 3 (R. Napiwotzki, private communication). 
Stellar masses were calculated from the measured temperatures and gravities, through the relation: \begin{equation} \log{\frac{M}{M_{\sun}}}=\log{\frac{g}{g_{\sun}}}-4\cdot \log{\frac{T}{T_{\sun}}}+\log{\frac{L}{L_{\sun}}}, \label{eqmass1} \end{equation} where \begin{equation} \log{\frac{L}{L_{\sun}}}=-0.4\cdot(V - (m-M)_0 - 3.1\cdot E(B-V) + BC - 4.74). \label{eqmass2} \end{equation} We assumed T$_{\sun}$=5777 K, $\log{\mathrm{g}_{\sun}}$=4.44, (m-M)$_0$=13.75$\pm$0.13 \citep{Vandeven06}, and E($B-V$)=0.12$\pm$0.01 \citep[][2010 December Web version]{Harris96}. The bolometric correction (BC) was derived from the effective temperatures through the empirical calibration of \citet{Flower96}. Errors on masses were derived from error propagation. \section{RESULTS} \label{s_results} Our results are plotted in the upper panel of Figure~\ref{f_tg}, where we show the position of the program stars in the temperature-gravity space, superimposed on the theoretical zero-age and terminal-age HB (ZAHB and TAHB, respectively) from \citet{Moehler03}, for canonical (Y=0.23) and He-enriched (Y=0.33) models. In the same figure we include 78 stars from \citet{Moehler11}, who measured the atmospheric parameters with the same procedure as in the present work, but based on medium-resolution FLAMES spectra and a different set of model spectra for stars above 20\,000~K. The two datasets behave very similarly in the T$_\mathrm{eff}$-$\log{\mathrm (g)}$ plane. The comparison of the eight stars in common confirms the good agreement between the two works: the mean difference in gravity is null ($\leq0.01$ dex), while the difference in temperature is small (155~K) and becomes negligible (25~K) after the exclusion of two stars with very large errors ($\geq$1500~K). Our mass estimates are on average higher by 0.035 M$_\sun$, an offset accounted for by the fainter magnitudes (mean difference 0.09 mag) of the \citet{Castellani07} catalog used by Moehler et al.
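Equations~(\ref{eqmass1}) and (\ref{eqmass2}) can be combined into a short numerical routine (a minimal sketch; the adopted constants are those quoted above, while the example star's parameters are hypothetical):

```python
import math

# Constants adopted in Section 2: solar values, omega Cen distance modulus
# and reddening
T_SUN, LOGG_SUN, MBOL_SUN = 5777.0, 4.44, 4.74
DIST_MOD, E_BV = 13.75, 0.12  # (m-M)_0 and E(B-V)

def log_mass(teff, logg, v_mag, bc):
    """log10(M/M_sun) from Eqs. (1)-(2).
    teff [K], logg [cgs dex], v_mag = apparent V, bc = bolometric correction."""
    log_lum = -0.4 * (v_mag - DIST_MOD - 3.1 * E_BV + bc - MBOL_SUN)  # Eq. (2)
    return (logg - LOGG_SUN) - 4.0 * math.log10(teff / T_SUN) + log_lum  # Eq. (1)

# Hypothetical blue-HB star: T_eff = 12000 K, log g = 4.0, V = 16.0, BC = -0.75
m = 10.0 ** log_mass(12000.0, 4.0, 16.0, -0.75)  # ~0.54 M_sun
```

The strong sensitivity to $\log{g}$ is evident from Equation~(\ref{eqmass1}): a 0.1 dex underestimate of the gravity alone lowers the derived mass by about 20\%, which is why low gravities and low masses appear together in our results.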
Given the excellent agreement, we will merge the two datasets. \begin{figure} \epsscale{1.05} \plotone{fig2.ps} \caption{{\it Upper panel}: stellar masses, calculated through Equation~(\ref{eqmass1}). The line indicates the canonical model expectation. {\it Lower panel}: absolute magnitudes, estimated from the distance moduli and reddening given in Section~\ref{s_obs} and \citet{Moni07,Moni09}. Different symbols are used for $\omega$\,Cen and other cluster stars, as in the lower panel of Figure~\ref{f_tg}.} \label{f_masabs} \end{figure} The surface gravities of stars cooler than $\sim$18\,000~K are systematically lower by about 0.2-0.3 dex with respect to canonical models, while they closely follow the trend of He-enhanced models at all temperatures. In the lower panel of Figure~\ref{f_tg} we compare these results with similar measurements obtained in three GCs, namely NGC\,6752 \citep{Moni07}, and M80 and NGC\,5986 \citep{Moni09}. We adopt as vertical coordinate the difference between the measured $\log{\mathrm (g)}$ and the value of the canonical ZAHB at the corresponding temperature. The comparison reveals that the HB stars in $\omega$\,Cen and those in the other GCs behave very differently. We note that the stars of the comparison clusters do not completely agree with canonical models either, which predict that the majority of the objects should lie near the ZAHB, whereas they are instead observed near the TAHB. Even so, $\omega$\,Cen stars clearly show lower gravities -- at a given effective temperature -- with respect to stars in other GCs. Observational errors tend to mask the general trend, but there is an offset of 0.15 dex at the cool end, which decreases at higher temperatures and fades out around 18\,000~K, only to reappear even larger ($\geq$0.2 dex) among hot stars at 25\,000-28\,000~K.
However, we find a problem in the estimate of stellar masses, summarized in the upper panel of Figure~\ref{f_masabs}: while the results in the comparison clusters roughly agree with expectations \citep[see][for a complete discussion]{Moni07,Moni09}, the masses of $\omega$\,Cen stars are constantly underestimated at all temperatures. Interestingly enough, \citet{Moehler06} found very similar results for HB stars in NGC\,6388. This is a very peculiar cluster, as it has an extended, blue HB \citep{Rich97} despite its high metallicity ([Fe/H]=$-$0.6). This has been interpreted in terms of an extreme He-enrichment \citep[up to Y=0.40,][]{Caloi07}, as in the case of its twin cluster, NGC\,6441. While \citet{Moehler06} cast doubts on their results due to the large uncertainties caused by stellar crowding in this compact, bulge cluster, these results are very similar to what we find now in $\omega$\,Cen. \section{DISCUSSION} \label{s_discussion} \begin{figure} \epsscale{0.7} \plotone{fig3.ps} \caption{Comparison of the H$_\beta$ lines of one of our program star (full line) and a HB star in M\,80 at the same temperature. Derived temperatures and gravities are given.} \label{f_spec} \end{figure} In the temperature-gravity plane, the $\omega$\,Cen HB stars match the expectations of He-enriched models rather than canonical ones. However, the resulting underestimate of their mass prevents us from straightforwardly concluding that this is evidence of helium enrichment. In fact, the progenies of He-rich stars in the HB phase should not be noticeably less massive than stars of canonical composition \citep{DAntona04}, and the difference at any temperature is expected to be tiny \citep[$\leq$0.03~M$_\sun$,][]{Moehler03}. Moreover, the derived masses are on average well below the value required to ignite helium in the core ($\sim$0.45~M$_\sun$). 
The easiest interpretation of our observations is the presence of a systematic error biasing the results toward lower gravities and, as a consequence, lower masses. However, in this work we used the same instrument, software, and models as \citet{Moni07,Moni09}, finding a clear difference between $\omega$\,Cen and the other clusters, while our measurements agree well with \citet{Moehler11}, who also investigated $\omega$\,Cen but with a different instrument, higher resolution, different models for stars hotter than 20\,000~K, and only a subset of the Balmer lines. Therefore, even if we cannot completely exclude an observational bias with respect to theoretical expectations, the difference between $\omega$\,Cen and the three comparison clusters must be real: {\it HB stars in $\omega$\,Cen are intrinsically different from their analogs in other GCs}. The same conclusion can be drawn even if the offset is a product of the inadequacy of the employed models: in this case, they would reproduce sufficiently well the atmospheric structure of the HB objects in the comparison clusters, but not in $\omega$\,Cen, hence a physical difference would still be present. The observed trends cannot be due to a wrong (hotter) temperature scale. In fact, for each star we translated the measured temperature into a reddening estimate, comparing the observed ($B-V$) color to the theoretical value obtained by interpolating the \citet{Kurucz93} grid, for the same metallicities as the model spectra used in the fits. The average value is E($B-V$)=0.114, in perfect agreement with the literature, and with no significant trend along the HB. A temperature scale hotter by 10\% (5\%) would have caused an overestimate of the reddening by 0.04 (0.02) mag at 10\,000~K.
In Figure~\ref{f_spec} we compare the H$_\beta$ line profiles of two stars with similar temperature in M\,80 and $\omega$\,Cen: the line core and depth are very similar, indicating no noticeable difference in temperature, but the star in M\,80, whose measured gravity is 0.40~dex higher, shows wider wings. This comparison indicates that the peculiar stellar gravities reflect a real difference in the spectra of the target stars. We are aware that some stellar parameters unaccounted for in our study, such as stellar wind and rotation, can cause wider line wings mimicking a difference in gravity, and this degeneracy cannot be avoided at our resolution. The underestimate of gravities could also be caused by the stratification of elemental abundances due to diffusion processes \citep{Leblanc10}. However, a systematic difference in one of these parameters, affecting $\omega$\,Cen but not the other clusters, would be even more puzzling, and it is harder to postulate at this stage. It must also be noted that the mass underestimate does not completely follow the trend of the difference in gravity: while the latter seems to vary with temperature as noted before, the masses in $\omega$\,Cen are constantly underestimated by $\sim$0.15 M$_\sun$ for T$_\mathrm{eff}\geq$10\,000~K. For example, at T$_\mathrm{eff}\sim$16\,000-18\,000~K there is a clear offset in masses, but not in gravities. This indicates that low gravities may play a role, but at least one other effect must be at work, causing the observed mass underestimate. All the comparison clusters have similar metallicity \citep[{[Fe/H]}$\approx -$1.5,][]{Harris96}, while $\omega$\,Cen shows a large spread up to [Fe/H]=$-$0.6 \citep[e.g.,][]{Sollima05}. Nevertheless, it is unlikely that the higher metallicity is the origin of the peculiar results in $\omega$\,Cen: the largest differences are found for stars hotter than $\sim$11\,500~K, whose surface abundances are altered by diffusion processes.
\citet{Behr03} and \citet{Pace06} showed that, in the presence of diffusion, the surface abundance patterns are very similar in clusters of very different initial metallicity. The stars in all the clusters should therefore show the same behavior independently of their primordial metal content, especially at 15\,000-16\,000~K where diffusion reaches its maximum strength \citep{Moni09}. Moreover, \citet{Moehler00} found no peculiarity in the measured gravity and mass of two stars in 47\,Tuc ([Fe/H]=$-$0.7), although using low-metallicity models. We repeated the measurements assuming different values of the model metallicity, to test how this parameter can affect the results. We found small differences in the stellar parameters, but the general behavior was unaltered: a higher model metallicity indeed returned slightly higher gravities, but higher temperatures too. As a consequence, the points were shifted almost parallel to the theoretical tracks in the T$_\mathrm{eff}$-$\log{\mathrm (g)}$ plane, while the masses were increased by less than 0.05~M$_\sun$. The blanketing effect should be lower in metal-poor stars than in the solar-metallicity stars used to calibrate the adopted T$_\mathrm{eff}$-BC relation, and this could cause the underestimate of the BC and of the mass. The adopted BC should be a good approximation for the stars hotter than the Grundahl jump \citep{Grundahl99}, because radiative levitation increases their surface abundances to super-solar values. As already noted, the effect should be independent of their primordial metallicity, thus the BC does not explain the offset of $\omega$\,Cen with respect to the other GCs. At cooler temperatures, the offset could be explained by the BC if $\omega$\,Cen stars were more metal-poor than in the other GCs, thus decreasing their BC by $\sim$0.4 mag. 
Indeed, $\omega$\,Cen hosts a metal-poor sub-population at [Fe/H]=$-$2 \citep[e.g.,][]{Pancino11}, but the BC varies by less than 0.15~mag for stars at 8\,000-10\,000~K in the whole range between solar metallicity and [Fe/H]=$-$2 \citep{Cassisi99,Alonso99}. A wrong distance modulus or reddening could also cause wrong mass estimates, but the required correction to increase the masses by 0.1~M$_\sun$ is huge ($\geq$0.4 mag in distance modulus, $\geq$0.13 mag in reddening). The recent literature estimates agree on these quantities within 0.1 and 0.01 mag, respectively, allowing only negligible variations of the mass estimates. Even an offset in the photometric zero-point could cause the observed offset, but comparing the $V$ mag of \citet{Bellini09} with the values of \citet{Castellani07} and \citet{Momany04} we found only a small difference ($\leq$0.1 mag) in the direction opposite to that required, with the magnitudes used here being brighter than in the other catalogs. In conclusion, none of the photometric parameters (stellar magnitude, BC, distance modulus, and reddening) entering in the calculation of the stellar mass through Equation~(\ref{eqmass2}) offers a viable explanation of our results. He-enriched HB stars are expected to be brighter than analogous objects of canonical composition \citep[e.g.,][]{Caloi07,Catelan09}, and the increased luminosity should balance the lower gravities in Equation~(\ref{eqmass1}), returning a similar mass. All other parameters being the same, M$_\mathrm{V}$ should be brighter by 0.25-0.38 mag to compensate a decrease of 0.10-0.15 dex in gravity. On the contrary, in the lower panel of Figure~\ref{f_masabs} the perfect match between the absolute magnitudes of the HB of $\omega$\,Cen and the other clusters is clear. The too low values obtained for the stellar masses could therefore also be interpreted as due to the lack of increased flux, instead of too low gravity estimates. 
It could be argued that He-enriched stars are not necessarily brighter in the $V$ band, because the bolometric luminosity is the quantity entering into Equation~(\ref{eqmass1}). Thus, the detailed spectral energy distribution (SED) of He-enriched and canonical stars is required to properly deduce their luminosity from the $V$ magnitude through Equation~(\ref{eqmass2}). However, great differences are not expected for stars hotter than 12\,000~K, because the diffusion processes decrease the atmospheric He-abundance well below solar values in both cases. In fact, we find no difference in surface helium abundance between $\omega$\,Cen and the other clusters, and $\log{\mathrm (N_{He}/N_H)}\leq -$1.5 for all the stars. With atmospheres of very similar chemical composition, their SED should not be very different, and even the known UV-enhanced flux of these stars \citep{Grundahl99} should have the same effects irrespective of the primordial helium content. \section{CONCLUSIONS} \label{s_conclusions} Blue HB stars in $\omega$\,Cen show lower gravities with respect to both canonical models and analogous stars in other GCs, their stellar masses are underestimated, and their visual absolute magnitudes are very similar to those of the comparison clusters. Neither the low gravities nor the other parameters involved in the calculation can explain the anomalously low masses. We can firmly conclude that these results reveal an intrinsic difference between the blue HB stars in $\omega$\,Cen and their analogs in other GCs, but its interpretation is not straightforward. The lower gravities follow the expectations for He-rich stars, but the magnitudes and masses do not. \acknowledgments C.M.B. acknowledges support from the Chilean projects {\sl Centro de Astrof\'\i sica} FONDAP No. 15010003 and the Chilean Centro de Excelencia en Astrof\'\i sica y Tecnolog\'\i as Afines (CATA) BASAL PFB/06. G.P.
acknowledges support from MIUR under the program PRIN2007 (prot.\ 20075TP5K9) and PRIN-INAF 2009. The authors are grateful to the anonymous referee for a helpful report.
\section{Introduction} One overarching objective of science is to further our understanding of the universe, from its early stages to its current state and future evolution. This depends on gaining insight into the universe's most macroscopic components, for example galaxies and stars, as well as describing its smallest components, namely elementary particles and nuclei and their interactions. It is clear that this endeavor requires combined expertise from the fields of astroparticle physics, particle physics and nuclear physics. A number of the contributions and discussions at the recent Granada meeting for the update of the European Strategy of Particle Physics, as well as the contribution at the EPS-HEP ECFA Open Session summarized in this newsletter, highlighted a growing wish for closer collaboration between ECFA and the astrophysics (APPEC, \url{https://www.appec.org}) and nuclear physics (NuPECC, \url{http://www.nupecc.org}) communities. Many physics problems where synergies between particle physics, astrophysics and nuclear physics are required are discussed in the APPEC and NuPECC strategy documents (see links at the bottom of this piece). Among those, this contribution focused on the challenge of elucidating the nature of 27\% of the matter-energy content of the universe, commonly called \textit{dark matter}. Pursuing these scientific goals also requires mastering challenges related to instrumentation (e.g. beams and detectors), data acquisition, selection and analysis, and making data and results available to the broader science communities. Joint work and recognition of these \textit{foundational} topics, also covered in detail in the contributions by C. Biscari, A. Cattai and G. A. Stewart in the ECFA newsletter~\cite{ECFANewsletter}, will help all communities grow towards their individual and common scientific goals.
This contribution presented one of the many common challenges faced by particle physics and astrophysics: the necessity of dealing with large, sometimes heterogeneous datasets and of deriving insight from them in short periods of time. \section{New physics discoveries and dark matter} The Large Hadron Collider has yielded the discovery of a new particle, the Higgs boson. Precision measurements and fits of other quantities in the Standard Model of Particle Physics guided a search lasting decades after the conception of the Higgs mechanism. With the European Strategy Update, the particle physics community is currently deciding what are the best tools to employ to test the Standard Model: \textit{how/where to look next for new physics} is a relevant question in this process, given that hints coming from the Standard Model itself are not as telling as in the case of the Higgs boson. For this reason, research directions for physics beyond the Standard Model can be found in open problems in astrophysics that need a systematic exploration, for example the determination of the nature of dark matter. One of the many explanations for this dark matter is that it is composed of new massive particles that interact only weakly with ordinary matter particles, or Weakly Interacting Massive Particles (WIMPs). These new particles can be produced at colliders, as well as detected by direct and indirect detection astrophysics experiments in space and underground (see Ref.~\cite{DarkMatterFeature} and links for a basic summary). By producing new particles in the lab, colliders are well placed to understand the nature of these particles' interactions with ordinary matter. The necessary confirmation that these new particles also have a cosmological origin comes from complementary observations in direct and indirect detection experiments.
\begin{figure}[!htb] \center{\includegraphics[width=\textwidth] {DMCombinationDD}} \caption{\label{fig:DMCombinationDD} Comparison of sensitivities of future collider and direct detection experiments within a simplified model scenario of a WIMP where the interaction between Standard Model quarks and the dark matter is mediated by a new scalar particle. If dark matter is composed of particles within this model with a mass between 10 GeV and 1 TeV, future colliders and direct detection experiments can confirm each other's discoveries in the next decades. Adapted from Ref.~\cite{Strategy:2019vxc}, with the addition of the ATLAS results line.} \end{figure} The WIMP scenario shown in Figure~\ref{fig:DMCombinationDD} only represents a very simple benchmark in the landscape of theories on the nature of dark matter. Many other compelling explanations exist: for example the WIMP can be identified with the lightest, stable and invisible particle included in many supersymmetric models that also answer other outstanding questions of the Standard Model. Alternatives to the WIMP paradigm also exist, for example models where the dark matter particle is much lighter and has a mass below the GeV scale, see Fig.~\ref{fig:PBC} for a set of constraints of current and planned experiments. In these cases, searches at collider and direct detection experiments are complementary to searches at other planned dedicated accelerator experiments (e.g. beam dump experiments such as SHIP~\cite{Anelli:2015pba}, NA64~\cite{Banerjee:2016tad}, and LDMX~\cite{Akesson:2018vlm} among others), as well as underground experiments using novel sensor technologies (e.g. SENSEI \cite{Abramoff:2019dfb} and DAMIC~\cite{Aguilar-Arevalo:2019wdi}).
\begin{figure}[!htb] \center{\includegraphics[width=\textwidth] {pbc_bc2}} \caption{\label{fig:PBC} Comparison of sensitivities of future collider and non-collider experiments for a model of light dark matter being produced via a new dark boson, showing the complementarity between different kinds of experiments in different ranges of dark matter particle mass and couplings with the dark boson. Taken from Ref.~\cite{Strategy:2019vxc,Beacham:2019nyx}.} \end{figure} Axions (and axion-like particles)~\cite{PDGAxions, AxionEPNewsletter} may also be connected to solutions of the dark matter problem, being the DM particle candidate themselves or the mediators of the SM-DM interaction. Synergies between many different experiments and theoretical frameworks are evident in the case of those particles. Depending on the mass range and coupling of those particles, a discovery of such particles may occur at the high-luminosity LHC, at lepton colliders (e.g. Belle II~\cite{Abe:2010gxa}) or at high-precision experiments that search for these particles directly (e.g. IAXO~\cite{Armengaud:2019uso} and ADMX~\cite{Braine:2019fqb}), or measure fundamental constants sensitive to fifth forces. Interactions between the members of these different experimental communities, as well as with theory and astrophysics, are needed to shape the future search program of these complementary experiments. In general, connecting results and potential discoveries from different experiments within a coherent framework requires both particle and astroparticle physics theory involvement. This effort has been started by the LHC Dark Matter Working Group \cite{DMWG}, where the Astroparticle community wishes to be further involved (see Ref.~\cite{APPECNews}). A parallel effort for non-WIMP, non-collider dark matter and dark sector searches is ongoing within the Physics Beyond Colliders Working Group. A connection to nuclear physics in these and other (e.g.
beam dump) dark matter experiments is needed to fully understand instrumental and beam backgrounds, as well as simulation. Particle and astroparticle experiments searching for dark matter also benefit from cross-talk in terms of instrumentation (e.g. sensors and cryogenics) and interpretation of results. \section{\textit{Data firehoses} and shared solutions in high energy physics and astrophysics} Another example of a common challenge for different fields is the ever-increasing volume of data available to different fields of research. Examples of current \textit{data firehoses} are the LHC, especially in light of the planned high-luminosity upgrade, and upcoming astrophysics surveys such as LSST~\cite{Ivezic:2008fe} and SKA~\cite{Bull:2018lat} to name but two. Similar challenges in data acquisition and recording are present in neutrino physics experiments, in the case of their supernova detection data streams. In all these cases, a fast and close-to-real-time analysis of the data is necessary, so that events of interest can be recorded or investigated further in a timely and cost-effective way, and common real-time analysis solutions are being investigated and deployed by multiple experiments. Another point of contact is when software solutions are shared across fields, for example in the case of gravitational waves and high energy physics with the CernVM software appliance~\cite{CERNVMEPNewsletter} and the RUCIO distributed data management system that is in use by LHC experiments and will be adopted by neutrino experiments as well~\cite{RucioCERNNews}. \section{Collaborative efforts} A number of platforms and fora exist at CERN and in Europe to facilitate cross-talk among different communities. 
In addition to the already-mentioned Dark Matter Working Group, there are (just to name a few) the recently inaugurated European Center for Astroparticle Physics currently hosted by CERN~\cite{EuCAPTCERNNews}, the European Science Cluster for Astronomy and Particle physics ESFRI research infrastructures project, the HEP Software Foundation to facilitate cooperation and common effort in software and computing, as well as the very successful CERN neutrino platform. \section{Conclusions} The examples brought forward in this EPS-HEP contribution are only a very limited subset of how the particle, astroparticle and nuclear physics communities can work together to answer challenging scientific questions. Other topics mentioned during the session where synergies exist were axion-like particles, the theory and experimental efforts bridging the gap between nuclear and high energy physics, and the opportunities offered by astrophysics experiments (e.g. Auger) spanning a much higher and complementary energy regime with respect to nuclear and particle physics experiments. Since detector technologies are often common to different communities, the CERN expertise stemming from the current world-leading collider program can be reused. Moreover, data collection and analysis benefit from becoming faster, more efficient and more open: using versatile computing strategies and tools to solve diverse problems encourages common expertise that lasts beyond a single experiment. In conclusion, there is the common wish that the European Strategy process will facilitate closer collaboration between the particle, astroparticle and nuclear physics communities, in a context where the design of detectors, data acquisition systems and computing are an integral part of our quest to understand the universe.
\section{Appendix A} \label{AppendixA} \subsection{Implementation Details for Review Generators} \label{generators_implementation_details} \textit{Recurrent Neural Networks} (RNNs) directly model the generation process of text sequences, and provide an end-to-end solution to learning the generating function from large quantities of data. These networks maintain a hidden layer of neurons with recurrent connections to their own previous values, which in theory gives them the potential to model long span dependencies. For an input sequence $x = x_1, x_2, \ldots, x_T$, the hidden state $h_t$ which summarizes the information of the entire sequence up to timestep $t$ is recursively updated as $h_t = f(h_{t-1}, x_t)$, where $f(.,.)$ denotes a non-linear transformation function. The overall probability of the sequence is calculated as: \begin{equation} p(x) = \prod_{t=1}^{T}p(x_t|h_{t-1}), \end{equation} and the probability of generating the next word $x_{t+1}$ given its low dimensional continuous representation $O_{x_{t+1}}$ and the input sequence $x_{\le t}$ is defined as: \begin{equation} p(x_{t+1} \mid x_{\le t}) = p(x_{t+1}|h_t) \propto \exp(O_{x_{t+1}}^T h_t) \end{equation} However, in practice the gradient computation is difficult to propagate back in time due to exploding or vanishing gradients \cite{hochreiter2001gradient}, \cite{bengio1994learning}, making the learning of arbitrarily long phenomena challenging in RNNs. Long Short Term Memory networks (LSTMs) \cite{hochreiter1997long} effectively address these limitations by relying on a memory state and gating functions to control the flow of the information throughout the network -- and in particular what information is written to the memory state, what information is read from the memory state, and what information is removed (or forgotten) from the memory state.
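To make the factorization of $p(x)$ concrete, the following toy NumPy sketch (purely illustrative; the vocabulary size, hidden size, and random weights are arbitrary choices, not the models used in our experiments) scores a token sequence as a product of per-step conditionals produced by a simple recurrent cell:

```python
import numpy as np

# Toy Elman-style RNN: h_t = tanh(W_h h_{t-1} + W_x x_t), with next-token
# probabilities from a softmax readout, so p(x) factorizes as the product
# of p(x_t | h_{t-1}). All sizes/weights below are illustrative.
rng = np.random.default_rng(0)
V, H = 5, 4                          # vocabulary size, hidden size
W_h = rng.normal(0, 0.1, (H, H))
W_x = rng.normal(0, 0.1, (H, V))
W_o = rng.normal(0, 0.1, (V, H))

def softmax(z):
    z = z - z.max()                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def sequence_log_prob(tokens):
    """log p(x) = sum_t log p(x_t | h_{t-1}) for a token-id sequence."""
    h = np.zeros(H)
    logp = 0.0
    for t in tokens:
        probs = softmax(W_o @ h)     # p(. | h_{t-1})
        logp += np.log(probs[t])
        x = np.eye(V)[t]             # one-hot input for token t
        h = np.tanh(W_h @ h + W_x @ x)   # recurrent update
    return logp

print(sequence_log_prob([1, 3, 0, 2]))
```

Each factor is a valid probability, so the log-probability of any sequence is finite and negative; training would adjust the weights to raise it on observed text.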
The mathematical formulation of LSTM units can be expressed as follows: \begin{equation} \label{lstm_equations} \begin{split} i^{(t)} &= \sigma(W^{(i)}x^{(t)} + U^{(i)}h^{(t-1)} ) \qquad \text{(Input gate)} \\ f^{(t)} &= \sigma(W^{(f)}x^{(t)} + U^{(f)}h^{(t-1)}) \qquad \text{(Forget gate)} \\ o^{(t)} &= \sigma(W^{(o)} x^{(t)} + U^{(o)} h^{(t-1)}) \qquad \text{(Output gate)} \\ \widetilde{c}^{(t)} &= \text{tanh}(W^{(c)} x^{(t)} + U^{(c)}h^{(t-1)}) \qquad \text{(New memory cell)} \\ c^{(t)} &= f^{(t)} \circ c^{(t-1)} + i^{(t)} \circ \widetilde{c}^{(t)} \qquad \text{(Final memory cell)} \\ h^{(t)} &= o^{(t)} \circ \text{tanh}(c^{(t)}) \\ \end{split} \end{equation} In the above set of equations, the input word $x^{(t)}$ and the past hidden state $h^{(t-1)}$ are used to generate new memory $\widetilde{c}^{(t)}$ which includes features of the new word $x^{(t)}$ without prior determination of whether $x^{(t)}$ is important and worth keeping. The role of the input gate is to check whether it is sensible to store the new input word given the word $x^{(t)}$ itself and the past hidden state $h^{(t-1)}$; the input gate produces $i^{(t)}$ as output, which encapsulates the worthiness decision of preserving the input information. Similarly to the input gate, the forget gate also determines the usefulness of a word by inferring whether the past memory cell is used to compute the current memory cell by looking at the input word $x^{(t)}$ itself and the past hidden state $h^{(t-1)}$; it produces $f^{(t)}$ as output, which encapsulates the worthiness decision of preserving the past memory cell. In the final memory generation stage, the advice of the input gate $i^{(t)}$ to gate the new memory $\widetilde{c}^{(t)}$ and the advice of the forget gate $f^{(t)}$ to forget the past memory cell $c^{(t-1)}$ are both considered, and the two results are summed up to produce the final memory $c^{(t)}$.
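A single step of the standard LSTM recurrence (with the final memory cell built from the previous cell state $c^{(t-1)}$) can be sketched in NumPy as follows; the weight shapes, dimensions, and helper names are our own illustrative choices:

```python
import numpy as np

# One LSTM step following the standard gate formulation:
# gates i, f, o from sigmoid, candidate memory from tanh,
# c_t = f * c_{t-1} + i * c_tilde,  h_t = o * tanh(c_t).
rng = np.random.default_rng(1)
D, H = 3, 4                                   # input and hidden dimensions

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative random weights for the four transforms (i, f, o, c).
W = {g: rng.normal(0, 0.1, (H, D)) for g in "ifoc"}
U = {g: rng.normal(0, 0.1, (H, H)) for g in "ifoc"}

def lstm_step(x, h_prev, c_prev):
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev)        # input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev)        # forget gate
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev)        # output gate
    c_tilde = np.tanh(W["c"] @ x + U["c"] @ h_prev)  # new memory cell
    c = f * c_prev + i * c_tilde                     # final memory cell
    h = o * np.tanh(c)                               # hidden state
    return h, c

h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H))
print(h.shape, c.shape)
```

Because $o^{(t)} \in (0,1)$ and $\tanh$ is bounded, every component of the hidden state stays in $(-1, 1)$, while the memory cell $c^{(t)}$ is free to grow, which is what lets gradients flow across long spans.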
The output gate is used to separate the hidden state $h^{(t)}$ from the final memory of the network $c^{(t)}$. Given that every step of the LSTM relies on the hidden state and that the final memory $c^{(t)}$ contains a lot of information not necessarily required to be saved in the hidden state, the output gate discriminatively assesses which parts of the memory $c^{(t)}$ should be kept inside the hidden state $h^{(t)}$. In our experiments we employ an LSTM generative model trained at word level. Sampling from a trained word language model can be done in two ways: beam search \cite{bahdanau2014neural} and random sampling \cite{graves2013generating}. Following \cite{tang2016context}, we use random sampling with different values for the temperature parameter. Sampling from the LSTM model with a high temperature results in the model generating diverse samples at the cost of introducing some mistakes, while small temperatures generate conservative samples without a lot of content diversity. In our experiments, we empirically set the temperatures to the following values: 1.0, 0.7 and 0.5. RNNs, and LSTMs in particular, have become the standard for modeling machine learning problems that involve temporal and sequential data including text. The data is modeled via a fully-observed directed graphical model, where the distribution over a discrete time sequence $y_1, y_2, \dots, y_T$ is decomposed into an ordered product of conditional distributions over tokens: \begin{equation} P(y_1, y_2, \dots, y_T) = P(y_1)\prod_{t=1}^{T}P(y_t|y_1, \dots, y_{t-1}) \end{equation} For models with recurrent connections from their outputs leading back into the model, \textit{teacher forcing} \cite{williams1989learning} is the most popular training strategy.
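The temperature-controlled random sampling described above (with values 1.0, 0.7, 0.5) amounts to rescaling the logits before the softmax; a small illustrative NumPy sketch (not the code used in our experiments) is:

```python
import numpy as np

# Temperature sampling: divide logits by T before the softmax.
# T = 1.0 keeps the model distribution; smaller T sharpens it,
# giving more conservative (less diverse) samples.
def sample_with_temperature(logits, temperature, rng):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                         # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(probs), p=probs), probs

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]                 # illustrative next-word logits
for T in (1.0, 0.7, 0.5):                # the temperatures used above
    token, probs = sample_with_temperature(logits, T, rng)
    print(T, probs.round(3))
```

Lowering the temperature concentrates probability mass on the highest-scoring token, which is why small temperatures yield conservative samples.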
This procedure emerges from the maximum likelihood criterion, in which at training step $t+1$ the model receives as input the ground-truth output $y^{(t)}$: \begin{equation} \label{maxlikelihood} \begin{split} \log p(y^{(1)}, y^{(2)} | x^{(1)}, x^{(2)}) &= \log p (y^{(2)}| y^{(1)}, x^{(1)}, x^{(2)}) \\ &+ \log p (y^{(1)} | x^{(1)}, x^{(2)}) \end{split} \end{equation} Equation \ref{maxlikelihood} illustrates the conditional maximum likelihood criterion at timestep $t=2$. The model is trained to maximize the conditional probability of $y^{(2)}$ given the sequence $x$ generated so far and the previous $y^{(1)}$ value. Therefore, maximum likelihood specifies that at training time the previous token generated by the model is replaced with ground-truth examples $y_t$ that are fed back into the model for predicting outputs at later time steps. Feeding back ground truth samples at training time forces the RNN to stay close to the ground-truth sequence. However, at inference time, the ground truth sequence is no longer available for conditioning, and each $y_t$ is generated by the model itself (i.e. sampled from its conditional distribution over the sequence given the previously generated samples). This discrepancy between training time and inference time causes errors in the model predictions that accumulate and amplify quickly over the generated sequence as the model is in a part of the state space it has never seen during training time. Small prediction errors compound in the RNN's conditioning context, and as the generated sample starts to diverge from sequences it has seen during training, the prediction performance of the RNN worsens \cite{lamb2016professor}. To alleviate this problem, Bengio et al \cite{bengio2015scheduled} propose \textit{Scheduled Sampling (SS)}, a learning strategy for training RNNs which mixes inputs from the ground-truth sequence with inputs generated by the model itself at training time.
SS relies on curriculum learning \cite{bengio2009curriculum} to change the training process from a fully guided scheme using the true previous token to a less guided scheme mostly using the generated token. The choice of replacing the ground truth with the model's prediction is determined by a coin flip with some probability, independently for each token. The probability of using the ground truth is set to a high value initially. As the model gradually keeps improving, samples from the model become more frequent and the model is partially fed with its own synthetic data as prefix in a similar way to inference mode. Therefore, the training objective is slowly changed from an easy task where the previous token is known, to a realistic task where the previous token is provided by the model itself. The scheduled sampling training scheme is meant to make the model more robust and forces it to deal with its own mistakes at training time, in a similar way to inference time. However, as the model generates several consecutive tokens $y_t$-s, it is not clear whether the correct target distribution remains the same as in the ground truth sequence. The authors propose two solutions: \textit{i)} make the self-generated sequences short, and \textit{ii)} anneal the probability of using self-generated vs. ground-truth samples to 0, according to some schedule. Despite its impressive empirical performance, Huszar et al \cite{huszar2015not} show that SS is an inconsistent training strategy which pushes models towards memorising the distribution of symbols conditioned on their position in the sequence instead of on the prefix of preceding symbols. According to the authors, SS pays no attention to the content of the sequence prefix, and uses the hidden states to implement a simple counter which makes the model likely to recover from its own mistakes. 
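The per-token coin flip with an annealed ground-truth probability can be sketched as follows; this is an illustrative snippet (the linear decay is one of the schedules proposed by Bengio et al., and the step budget is an arbitrary choice):

```python
import numpy as np

# Scheduled sampling: at each timestep, use the ground-truth token with
# probability eps(step), otherwise the model's own previous prediction.
# eps is annealed from 1 toward 0 (linear decay shown here).
def epsilon(step, k=1000):
    return max(0.0, 1.0 - step / k)      # fully guided -> fully free-running

def choose_input(ground_truth_tok, model_tok, step, rng):
    use_truth = rng.random() < epsilon(step)
    return ground_truth_tok if use_truth else model_tok

rng = np.random.default_rng(0)
# Early in training the ground truth dominates; late in training the
# model mostly conditions on its own samples, as at inference time.
early = [choose_input("gt", "model", 10, rng) for _ in range(1000)]
late = [choose_input("gt", "model", 990, rng) for _ in range(1000)]
print(early.count("gt"), late.count("gt"))
```

The coin flip is independent per token, so even early in training the model occasionally sees its own predictions as context.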
Moreover, it is possible that the good performance of the model on image captioning datasets is either due to the algorithm not running until convergence, or to a lucky combination of factors including the model structure, early stopping, random restarts, and the annealing schedule. The authors recommend adversarial training strategies as a much better choice for generative models. Tang et al \cite{tang2016context} study the problem of NLG in particular contexts or situations. The authors focus on user review data due to its richness of context, sentiments and opinions expressed. They propose two approaches built on top of the encoder-decoder framework to generate user reviews as text sequences from user product contexts. In the first approach, \textit{Contexts to Sequences}, the authors encode the product context information $\overrightarrow{C}=\{\overrightarrow{c}_i\}_{i=1,\ldots, K}$, where $\overrightarrow{c}_i$ denotes a type of context and $K$ the number of context types, into a continuous semantic representation, which is fed into an LSTM decoder to generate text sequences. Despite promising results shown by the method, the authors consider that for long generated sequences the information from contexts is not propagated to distant words. In their second approach, \textit{Gated Contexts to Sequences}, the authors add skip-connections to directly build the dependency between contexts $h_C$ and each word when predicting the next word $x_{t+1}$ in a sequence. When a new word in a sequence is generated, it does not only depend on the current hidden state $h_t$, but it also depends on the context representation $h_C$. Similar to the first model, the decoder is a vanilla recurrent neural network with LSTM units. Focusing on the same problem as Tang et al \cite{tang2016context}, Dong et al \cite{dong2017learning} propose \textit{Attention Enhanced Attribute to Sequence Model}.
The model learns to encode product attributes into vectors by means of an encoder network, and then generates reviews by conditioning on the encoded vectors inside a sequence decoder, and an attention mechanism \cite{bahdanau2014neural}, \cite{xu2015show} which learns soft alignments between the input attributes and the generated words. The product review generation problem is formally defined as follows. Given input attributes $a=(a_1, \ldots, a_{|a|})$, generate a product review $r=(y_1, \ldots, y_{|r|})$ which maximizes the conditional probability $p(r|a)$: \begin{equation} p(r|a) = \prod_{t=1}^{|r|}p(y_t| (y_1, \ldots, y_{t-1}), a) \end{equation} While the number of attributes $|a|$ is fixed for each product, the review text $r$ is a sequence of variable length. In our experiments we use the two models proposed by Tang et al \cite{tang2016context} and Dong et al \cite{dong2017learning} to generate user product reviews given the context information and the review text of each product in the Amazon dataset. In addition to the already mentioned models, we also employ a pre-trained model released by Google, commonly referred to as Google LM \cite{jozefowicz2016exploring}. The model is an important contribution to the field of neural language modeling which emphasizes large scale recurrent neural network training. The model was trained on the One Billion Word Benchmark \cite{chelba2013one}, a publicly available dataset containing mainly news data and used as a reference standard for measuring the progress of statistical language modeling. The dataset includes 1 billion words in total with a vocabulary of 800,000 unique words. While for count-based language models it is considered a medium-sized dataset, for neural network based language models the benchmark is regarded as a very large dataset.
In terms of the model architecture, the GoogleLM model is a 2-layer LSTM neural network with 8,192 and 1,024 hidden units in its two layers, respectively, the largest model Google was able to fit into GPU memory. The model uses Convolutional Neural Network (CNN) character embeddings as input, and makes predictions one character at a time, which presents the advantage that the model does not need to learn long-term dependencies in the data. We employ GoogleLM to generate sentences whose topic matches the three categories (books, electronics, and movies) present in the Amazon dataset we used. Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} represent a training methodology for generative models via an adversarial process, and are aimed at generating synthetic data which resembles the real data. The GAN framework works through the interplay between two feedforward neural network models, a generative model $G$ and a discriminative model $D$, trained simultaneously by competing against each other. The generative model $G$ aims to capture the data distribution and generate high quality synthetic data, while the discriminative model $D$ estimates the probability a sample comes from the real training data and not from the synthetic data generated by $G$. Concretely, the generator $G$ takes as input a vector of random numbers $z$, and transforms it into the form of the data we are interested in imitating; the discriminator $D$ takes as input either the real data $x$ or generated data $G(z)$, and outputs probability $P(x)$ of the respective data being real.
The GAN framework is equivalent to a minimax two-player game between the two models $G$ and $D$: \begin{equation} \label{GAN_equation} \begin{split} \min_G \max_D V(D, G) &= \mathbb{E}_{x \sim p_{\text{data}}(x)} [\log D(x)] \\ &+ \mathbb{E}_{z \sim p_z(z)}[\log(1-D(G(z)))] \end{split} \end{equation} Adversarial learning algorithms iteratively sample batches from the data and noise distributions, and use noisy gradient information to simultaneously ascend in the parameters $\theta_d$ of $D$, while descending in the parameters $\theta_g$ of $G$. The discriminator $D$ is optimized to increase the likelihood of assigning a high probability to the real data $x$ and a low probability to the fake generated data $G(z)$. The gradient for the discriminator can be expressed as follows: \begin{equation} \label{discriminator_optimization} \triangledown_{\theta_d} \frac{1}{m} \sum_{i=1}^{m} \big[\log D (x^{(i)}) + \log(1-D(G(z^{(i)})))\big] \end{equation} Alternatively, the generator $G$ is optimized to increase the probability the generated data $G(z)$ is rated highly: \begin{equation} \label{generator_optimization} \triangledown_{\theta_g} \frac{1}{m} \sum_{i=1}^{m} \big[\log(1-D(G(z^{(i)})))\big] \end{equation} The goal of the generator $G$ is to maximize the probability of discriminator $D$ making a mistake by generating highly realistic data, while the discriminator $D$ is learnt to distinguish whether a given data instance is real or not. The gradient of the training loss from the discriminator $D$ is used as guidance for updating the parameters of the generator $G$. Gradient optimization is alternated between the two networks $D$ and $G$ as illustrated in Equations \ref{discriminator_optimization} and \ref{generator_optimization} on batches of real and generated data until GAN converges, at which point the data produced by GAN is the most realistic the network is capable of modeling.
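The two minibatch objectives being ascended and descended can be made concrete with a small sketch; the discriminator outputs below are placeholder probabilities rather than a trained network, so this only illustrates the quantities in the gradient expressions, not a full training loop:

```python
import numpy as np

# Minibatch estimates of the GAN objectives:
# D ascends  (1/m) sum[ log D(x) + log(1 - D(G(z))) ],
# G descends (1/m) sum[ log(1 - D(G(z))) ].
def discriminator_objective(d_real, d_fake):
    return np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_objective(d_fake):
    return np.mean(np.log(1.0 - d_fake))

rng = np.random.default_rng(0)
# Placeholder D outputs: D is fairly confident about both batches.
d_real = rng.uniform(0.6, 0.9, size=32)   # D(x) on a real minibatch
d_fake = rng.uniform(0.1, 0.4, size=32)   # D(G(z)) on a generated minibatch
print(discriminator_objective(d_real, d_fake))
print(generator_objective(d_fake))
```

As $G$ improves, $D(G(z))$ rises and $\log(1-D(G(z)))$ falls, which is exactly the direction $G$'s gradient descent pushes toward; a confident discriminator attains a higher value of its own objective than a confused one.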
However, GANs' applicability to discrete data is limited, despite the great success at generating realistic real-valued synthetic samples in many computer vision tasks, e.g. image generation \cite{brock2016neural}, \cite{zhu2016generative}, \cite{taigman2016unsupervised}, image style transfer \cite{luan2017deep}, \cite{zhu2017unpaired} and semantic segmentation \cite{luc2016semantic}, \cite{souly2017semi}. Training generative models of text using GANs is challenging due to the discrete nature of text data, which makes it difficult to backpropagate the gradient from the discriminator $D$ to the generator $G$. GANs are designed for generating real-valued, continuous data, and the gradient of the loss from discriminator $D$ w.r.t. the output of generator $G$ is used to guide $G$ to slightly change the generated value to make it more realistic (i.e. the gradient of the output of the discriminator network with respect to the synthetic data indicates how to slightly change the synthetic data to make it more plausible). Changes can be made to the synthetic data if it is based on real numbers, however for discrete tokens the slight change guidance is not a useful signal, as it is very likely that there is no corresponding token to the slight change given the limited vocabulary space\footnote{\url{https://www.reddit.com/r/MachineLearning/comments/40ldq6/generative_adversarial_networks_for_text/}}. In addition, a further reason why GANs cannot be applied to text data is that the discriminator $D$ can only assess a complete sequence. When having to provide feedback for partially generated sequences, it is non-trivial to balance the current score of the partially generated sequence with the future score after the entire sequence has been generated \cite{yu2017seqgan}.
In the literature there are two approaches to dealing with the problem of non-differentiable output and finding the optimal weights in a neural network: the REINFORCE algorithm, and Gumbel-Softmax reparameterization. We present each method below. \textit{REINFORCE} \cite{williams1992simple} algorithms, also known as \textit{REward Increments, score-function estimators}, or \textit{likelihood-ratio methods} adjust the weights of a neural network based on the log derivative trick in a direction that lies along the gradient of expected reinforcement without explicitly computing gradient estimates. It is a policy gradient method which uses the likelihood ratio trick $\big(\frac{\triangledown_\theta p(X, \theta)}{p(X, \theta)} = \triangledown_{\theta} \log p(X, \theta); \frac{\partial}{\partial x} \log f(x)=\frac{f'(x)}{f(x)} \big)$ to update the parameters of an agent and increase the probability that the agent's policy will select a rewarding action given a state. Given the trajectory $\tau_t = (u_1, \ldots, u_{t-1}, x_0, \ldots, x_t)$ made up of a sequence of states $x_k$ and control actions $u_k$, the goal of policy gradient is to find policy $\pi_{\vartheta}$ which takes as input trajectory $\tau_t$ and outputs a new control action that maximizes the total reward after $L$ time steps.
$\pi_{\vartheta}$ is a parametric randomized policy which assumes a probability distribution over actions: \begin{equation} p(\tau; \vartheta) = \prod_{t=0}^{L-1}p(x_{t+1} | x_t, u_t) \pi_{\vartheta}(u_t|\tau_t) \end{equation} If we define the reward of a trajectory as: \begin{equation} R(\tau) = \sum_{t=0}^{L-1} R_{t}(x_t, u_t), \end{equation} the reinforcement learning optimization problem becomes: \begin{equation} \begin{split} \max_{\vartheta} J(\vartheta) = \max_{\vartheta} \mathbb{E}_{p(\tau|\vartheta)} [R(\tau)] \end{split} \end{equation} Then policy gradient can be derived as follows: \begin{equation} \label{policy_gradient} \begin{split} \triangledown_{\vartheta} J(\vartheta) &= \int R(\tau) \triangledown_{\vartheta} p(\tau; \vartheta)d\tau \\ &= \int R(\tau) \frac{\triangledown_{\vartheta} p(\tau; \vartheta)}{p(\tau; \vartheta)}p(\tau; \vartheta)d\tau \\ &= \int (R(\tau)\triangledown_{\vartheta} \log p(\tau; \vartheta))p(\tau; \vartheta) d\tau \\ &= \mathbb{E}_{p(\tau;\vartheta)} [R(\tau) \triangledown_{\vartheta} \log p(\tau; \vartheta) ] \end{split} \end{equation} From Equation \ref{policy_gradient} we have that the gradient of $J$ w.r.t. $\vartheta$ is equal to the expected value of the function $G(\tau, \vartheta) = R(\tau) \triangledown_{\vartheta} \log p(\tau; \vartheta)$. This function provides an unbiased estimate of the gradient of $J$ and can be computed by running policy $\pi_\vartheta$ and sampling a trajectory $\tau$ without knowing the dynamics of the system, since $p(x_{t+1}|x_t, u_t)$ does not depend on parameter $\vartheta$. Following this direction is equivalent to running stochastic gradient ascent on $J$.
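The score-function estimator derived above can be illustrated with a minimal NumPy sketch on a toy three-armed bandit (a one-step trajectory); the reward function, learning rate, and number of iterations are illustrative assumptions, not part of the original REINFORCE formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Toy 3-armed bandit (assumed for illustration): only action 2 is rewarded.
def reward(action):
    return 1.0 if action == 2 else 0.0

theta = np.zeros(3)   # policy parameters (logits of a categorical policy)
lr = 0.5
for _ in range(500):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)
    r = reward(a)
    # Gradient of log softmax w.r.t. the logits: one_hot(a) - probs.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    # REINFORCE update: theta <- theta + alpha * R(tau) * grad log pi(a).
    theta += lr * r * grad_log_pi

print(softmax(theta))   # probability mass concentrates on the rewarded action
```

After training, the policy assigns most of its probability to the rewarding action, without the reward function ever being differentiated.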
\begin{equation} \triangledown_{\vartheta} \log p(\tau; \vartheta) = \sum_{t=0}^{L-1} \triangledown_{\vartheta} \log \pi_{\vartheta}(u_t|\tau_t) \end{equation} The policy gradient algorithm can be summarized as follows: \begin{enumerate} \item Choose $\vartheta_0$, stepsize sequence $\alpha_k$, and set $k=0$; \item Run the simulator with policy $\pi_{\vartheta_k}$ and sample $\tau_k$; \item $\vartheta_{k+1} = \vartheta_k + \alpha_k R(\tau_k) \sum_{t=0}^{L-1} \triangledown_{\vartheta} \log \pi_{\vartheta}(u_{tk}|\tau_t)$; \item $k = k + 1$, go to step 2. \end{enumerate} The policy gradient algorithm can be run on any problem if sampling from $\pi_{\vartheta}$ can be done efficiently. Policy gradient is simple as it optimizes over a parametric family $p(u; \vartheta)$ instead of optimizing over the space of all probability distributions. However, there are constraints regarding the probability distribution, which should be easy to sample from, easy to search by gradient methods, and rich enough to approximate delta functions. In addition, the complexity of the method depends on the dimensionality of the search space, and convergence can be slow. Finally, the policy gradient update is noisy, and its variance increases proportionally with the simulation length $L$. The other solution to the problem of dealing with non-differentiable output is to use the \textit{Gumbel-Softmax} \cite{jang2016categorical} approach, and replace the non-differentiable sample from the categorical distribution with a differentiable sample from a Gumbel-Softmax distribution. The Gumbel-Softmax distribution is a continuous distribution on the simplex that can approximate categorical samples.
Parameter gradients can be easily computed by applying the reparameterization trick \cite{kingma2013auto}, a popular technique used in variational inference and adversarial learning of generative models in which the expectation of a measurable function $g$ of a random variable $\epsilon$ is calculated by integrating $g(\epsilon)$ with respect to the distribution of $\epsilon$: \begin{equation} \mathbb{E}(g(\epsilon)) = \int g(\epsilon) dF_{\epsilon} \end{equation} Therefore, in order to compute the expectation of $z=g(\epsilon)$ we do not need to know the distribution of $z$ explicitly, but only $g$ and the distribution of $\epsilon$. This can alternatively be expressed as: \begin{equation} \mathbb{E}_{\epsilon \sim p(\epsilon)}(g(\epsilon)) = \mathbb{E}_{z \sim p(z)}(z) \end{equation} If the distribution of variable $z$ depends on parameter $\phi$, i.e. $z \sim p_{\phi}(z)$, and if we can assume $z=g(\epsilon, \phi)$ for a known function $g$ of parameters $\phi$ and noise distribution $\epsilon \sim \mathcal{N} (0,1)$, then for any measurable function $f$: \begin{equation} \label{reparameterization} \begin{split} \mathbb{E}_{\epsilon \sim p(\epsilon)}(f(g(\epsilon, \phi))) &= \mathbb{E}_{z \sim p_{\phi}(z)}(f(z)) \\ \mathbb{E}_{\epsilon \sim p(\epsilon)}(\triangledown_{\phi} f(g(\epsilon, \phi))) &= \triangledown_{\phi} \mathbb{E}_{\epsilon \sim p(\epsilon)}(f(g(\epsilon, \phi))) \\ &= \triangledown_{\phi} \mathbb{E}_{z \sim p_{\phi}(z)}(f(z)) \end{split} \end{equation} In Equation \ref{reparameterization}, $z$ has been conveniently expressed such that functions of $z$ can be defined as integrals w.r.t. a density that does not depend on the parameter $\phi$.
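Equation \ref{reparameterization} can be checked numerically in a small sketch for a Gaussian $z = \mu + \sigma\epsilon$ and $f(z) = z^2$ (both chosen here purely for illustration, since the analytic answer $\triangledown_{\mu}\mathbb{E}[z^2] = 2\mu$ is known), estimating the expectation by simple averaging over samples of $\epsilon$:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.5, 0.7
# Illustrative choice: f(z) = z^2, so E[f(z)] = mu^2 + sigma^2
# and the exact gradient w.r.t. mu is 2 * mu = 3.0.

# Reparameterize: z = g(eps, mu) = mu + sigma * eps, eps ~ N(0, 1).
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

# Pathwise gradient: d f(g(eps, mu)) / d mu = f'(z) * dz/dmu = 2 * z * 1.
grad_mu = np.mean(2.0 * z)

print(grad_mu)   # close to the analytic gradient 2 * mu = 3.0
```

The randomness enters only through $\epsilon$, so the gradient flows through the deterministic map $g$ rather than through a stochastic node.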
Constructing unbiased estimates of the gradient is done using Monte Carlo methods: \begin{equation} \triangledown_{\phi} \mathbb{E}_{z \sim p_{\phi}(z)}(f(z)) \sim \frac{1}{M}\sum_{i=1}^{M}\triangledown f(g(\epsilon^{i}, \phi)) \end{equation} The reparameterization trick aims to make the randomness of a model an input to that model instead of letting it happen inside the model. Given this, the network model is deterministic and we can differentiate with respect to sampling from the model. An example of applying the reparameterization trick is to rewrite samples drawn from the normal distribution $z \sim \mathcal{N}(\mu, \sigma)$ as $z=\mu + \sigma \epsilon $, with $\epsilon \sim \mathcal{N}(0,1)$. In this way stochastic nodes are avoided during backpropagation. However, the reparameterization trick cannot be directly applied to discrete-valued random variables, e.g., text data, as gradients cannot backpropagate through discrete nodes in the computational graph. The Gumbel-Softmax trick attempts to overcome the inability to apply the reparameterization trick to discrete data. It parameterizes a discrete distribution in terms of a Gumbel distribution, i.e. even though the corresponding function is not continuous, it is made continuous by applying a continuous approximation to it. A random variable $G$ has a standard Gumbel distribution if $G=-\log(-\log(U)), U \sim \text{Unif}[0,1]$. Any discrete distribution can be parameterized in terms of Gumbel random variables as follows. If $X$ is a discrete random variable with $P(X=k) \propto \alpha_k$, and $\{G_{k}\}_{k \le K}$ is an i.i.d. sequence of standard Gumbel random variables, then: \begin{equation} \label{sampling_gumbel} X = \arg \max_k(\log \alpha_k +G_k) \end{equation} Equation \ref{sampling_gumbel} illustrates sampling from a categorical distribution: draw Gumbel noise by transforming uniform samples, add it to $\log \alpha_k$, then take the value of $k$ that yields the maximum.
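The Gumbel-max sampling procedure of Equation \ref{sampling_gumbel} can be verified empirically: the empirical frequencies of the $\arg\max$ recover the normalized weights $\alpha_k$ (the particular weights and sample count below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = np.array([1.0, 3.0, 6.0])     # unnormalized class weights (illustrative)
probs = alpha / alpha.sum()           # target categorical probabilities

# Gumbel-max: X = argmax_k (log alpha_k + G_k), with G_k = -log(-log U_k).
n = 200_000
U = rng.uniform(size=(n, 3))
G = -np.log(-np.log(U))
samples = np.argmax(np.log(alpha) + G, axis=1)

freq = np.bincount(samples, minlength=3) / n
print(freq)   # approximately [0.1, 0.3, 0.6]
```

Only uniform noise and elementwise transformations are needed, which is what makes the subsequent softmax relaxation possible.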
The $\arg \max$ operation applied to the Gumbel samples is not continuous; however, discrete random variables can be expressed as one-hot vectors and take values in the probability simplex: \begin{equation} \Delta^{K-1}= \{ x \in \mathbb{R}^{K}_{+}, \sum_{k=1}^{K}x_k=1 \} \end{equation} A one-hot vector corresponds to a discrete category, and since the $\arg \max$ function is not differentiable, a softmax function can be used instead as a continuous approximation of $\arg \max$: \begin{equation} f_{\tau}(x)_k = \frac{\exp(x_k/\tau)}{\sum_{k=1}^{K}\exp(x_k/ \tau)} \end{equation} Therefore, the sequence of simplex-valued random variables $X^{\tau}$ is: \begin{equation} \begin{split} \label{eq_GumbelSoftmax_Distribution} X^{\tau} = (X_k^{\tau})_k &= f_{\tau}(\log \alpha + G) \\ &= \frac{\exp((\log \alpha_k + G_k)/ \tau)}{\sum_{i=1}^{K}\exp((\log \alpha_i + G_i)/\tau)} \end{split} \end{equation} Equation \ref{eq_GumbelSoftmax_Distribution} is known as the Gumbel-Softmax distribution and can be evaluated exactly for different values of $x$, $\alpha$ and $\tau$, where $\tau$ is a temperature parameter that controls how closely the samples from the Gumbel-Softmax distribution approximate those from the categorical distribution. When $\tau \rightarrow 0 $, the softmax function becomes an $\arg \max$ function and the Gumbel-Softmax distribution becomes the categorical distribution. At training time $\tau$ is set to a value greater than 0, which allows gradients to backpropagate past the sample, and is then gradually annealed to a value close to 0. The Gumbel-Softmax trick is important as it allows for the inference and generation of discrete objects. A direct application of this technique is generating text via GANs. In summary, GANs have shown impressive performance at generating natural images nearly indistinguishable from real images, however applying GANs to text generation is a non-trivial task due to the special nature of the linguistic representation.
According to Dai et al.~\cite{dai2017towards}, the two main challenges to overcome when using GANs with textual input are: \textit{i)} text generation is a sequential non-differentiable sampling procedure which samples a discrete token at each time step (vs. image generation, where the transformation from the input random vector to the produced output image is a deterministic continuous mapping); the non-differentiability of text makes it difficult to apply back-propagation directly, and to this end, classical reinforcement learning methods such as Policy Gradient \cite{sutton2000policy} have been used. In policy gradient the production of each word is considered an action for which the reward comes from the evaluator, and gradients can be back-propagated by approximating the stochastic policy with a parametric function. \textit{ii)} in the GAN setting the generator receives feedback from the evaluator only when the entire sample is produced, which for sequence generation causes difficulties during training, such as vanishing gradients and error propagation. To allow the generator to get early feedback when a text sequence is partly generated, Monte Carlo rollouts are used to calculate the approximated expected future reward. This has been found empirically to improve the efficiency and stability of the training process. Unlike in conventional GAN settings that deal with image generation, the production of sentences is a discrete sampling process, which is also non-differentiable. A natural question that arises is how the feedback can be back-propagated from the discriminator to the generator under such a formulation. Policy gradient considers a sentence as a sequence of actions, where each word $w_t$ is an action and the choices of such actions are governed by a policy $\pi_{\theta}$.
The generative procedure begins with an initial state $S_{1:0}$, the empty sentence, and at each time step $t$ the policy $\pi_{\theta}$ takes as input the previously generated words $S_{1:t-1}$ up until time $t-1$, as well as the noise vector $z$, and yields a conditional distribution $\pi_{\theta}(w_t | z, S_{1:t-1})$ over the vocabulary words. The computation is done one step at a time moving along the LSTM network, sampling an action $w_t$ from the conditional distribution until $w_t$ equals the end-of-sentence indicator, in which case the sentence is terminated. The reward for the generated sequence of actions $S$ is a score $r$ calculated by the discriminator. However, this score can be computed only after the sentence has been completely generated, and in practice this leads to difficulties such as vanishing gradients and very slow training convergence. Early feedback is used to evaluate the expected future reward when the sentence is partially generated, and the expectation can be approximated using Monte Carlo rollouts. The Monte Carlo rollout method is applicable when a part of the sentence $S_{1:t}$ has already been generated: the remaining words of the sentence are sampled from the LSTM network until the end-of-sentence token is encountered. The conditional simulation is conducted $n$ times, which results in $n$ sentences. For each sentence an evaluation score is computed, and the rewards obtained by the simulated sentences are averaged to approximate the expected future reward of the current sentence. In this way updating the generator is possible with feedback coming from the discriminator. The utility of the policy gradient method is that by using the expected future reward the generator is provided with early feedback and becomes trainable with gradient descent.
Yu et al. propose SeqGAN \cite{yu2017seqgan}, a GAN-based sequence generation framework with policy gradient, which is the first work to employ GANs for generating sequences of discrete tokens to overcome the limitations of GANs on textual data. SeqGAN treats the sequence generation procedure as a sequential decision making process \cite{bachman2015data}. A discriminator is used to evaluate the generated sequence and provide feedback to the generative model to guide its learning. It is a well-known problem of GANs that for text data (discrete outputs) the gradient cannot be passed back from the discriminator to the generator. SeqGAN addresses this problem by treating the generator as a stochastic parameterized policy trained via policy gradient \cite{sutton2000policy} and optimized by directly performing gradient policy updates, therefore avoiding the differentiation difficulty for discrete data. The reinforcement learning reward comes from the discriminator based on the likelihood that it would be fooled, judged on a complete sequence of tokens, and is passed back to the intermediate state-action steps using Monte Carlo search \cite{browne2012survey}. The sequence generation problem is defined as follows. Given a dataset of human-written sequences, train a generative model $G_{\theta}$ parameterized by $\theta$ to output sequence $Y_{1:T} = (y_1, \ldots, y_t, \ldots, y_T), y_t \in Y$, where $Y$ is the word vocabulary. The current state is the sequence of tokens $(y_1, \ldots, y_{t-1})$ generated until timestep $t$, and the action $a$ taken from this state is the selection of the next token $y_t$. The policy model $G_{\theta}(y_t|Y_{1:t-1})$ is stochastic and will select an action according to the learnt probability distribution of the input tokens. The state transition from the current state $s=Y_{1:t-1}$ to the next state $s^{'} = Y_{1:t}$ after choosing action $a=y$ is deterministic, i.e.
$\delta_{s,s^{'}}^{a}=1$ for next state $s^{'}$, and $\delta_{s,s^{''}}^{a}=0$ for other next states $s^{''}$. The discriminative model $D_{\phi}(Y_{1:T})$ is used to guide the generator $G_{\theta}$, and outputs a probability indicating how likely it is that a sequence $Y_{1:T}$ produced by $G_{\theta}$ comes from real sequence data. $D_\phi$ is trained with both real examples from the real sequence data and fake examples from the synthetic data generated by $G_{\theta}$. The objective of the generator model (policy) $G_{\theta}(y_t| Y_{1:t-1})$ is to maximize its expected end reward $R_T$, which comes from the discriminator $D_{\phi}$ for a sequence generated starting from initial state $s_0$: \begin{equation} J(\theta) = \mathbb{E}[R_T|s_0, \theta] = \sum_{y_1 \in Y}G_{\theta}(y_1|s_0)Q_{D_{\phi}}^{G_{\theta}}(s_0, y_1) \end{equation} The action-value function $Q_{D_{\phi}}^{G_{\theta}}(s, a)$ for a sequence represents the expected cumulative reward starting from state $s$, taking action $a$ and then following policy $G_{\theta}$. The action-value function $Q_{D_{\phi}}^{G_{\theta}}(s, a)$ is calculated as the estimated probability (reward) the discriminator $D_{\phi}(Y_{1:T}^{n})$ assigns to the generated sample being real: \begin{equation} Q_{D_{\phi}}^{G_{\theta}}(a=y_T, s=Y_{1:T-1}) = D_{\phi}(Y_{1:T}^{n}) \end{equation} In the GAN setup, the discriminator $D_{\phi}$ can only provide a reward at the end of a finished sequence. In order to evaluate the action-value function $Q_{D_{\phi}}^{G_{\theta}}(s, a)$ for an intermediate state $s$, Monte Carlo search with roll-out policy $G_{\beta}$ (identical to the generator $G_{\theta}$ policy) is used to sample the unknown remaining $T-t$ tokens that result in a complete sentence.
The roll-out policy $G_{\beta}$ starts from the current state $s$ and is run $N$ times to get an accurate assessment of the action-value function $Q_{D_{\phi}}^{G_{\theta}}(s, a)$ through a batch of $N$ output samples, thus reducing the variance of the estimation: \begin{equation} \begin{split} \{Y_{1:T}^{1}, \ldots, Y_{1:T}^{N}\}&=MC^{G_{\beta}}(Y_{1:t}; N) \\ Q_{D_{\phi}}^{G_{\theta}}(a=y_t, s=Y_{1:t-1}) &=\begin{cases} \frac{1}{N} \sum_{n=1}^{N}D_{\phi}(Y_{1:T}^n), \\ \text{if } Y_{1:T}^n \in MC^{G_{\beta}}(Y_{1:t}; N), t < T \\ D_{\phi}(Y_{1:t}), \text{if } t = T \end{cases} \end{split} \end{equation} The generator starts with random sampling at first, but once more realistic samples have been generated, the discriminator $D_{\phi}$ is updated (which will in turn improve the generator model iteratively): \begin{equation} \min_{\phi}-\mathbb{E}_{Y \sim p_{\text{data}}}[\log D_{\phi}(Y)] - \mathbb{E}_{Y \sim G_{\theta}}[\log (1 - D_{\phi}(Y))] \end{equation} The generator $G_{\theta}$ is updated every time a new discriminator $D_{\phi}$ has been obtained.
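The Monte Carlo roll-out estimate of $Q_{D_{\phi}}^{G_{\theta}}$ above can be sketched with toy stand-ins; the uniform policy and the zero-counting "discriminator" below are illustrative assumptions, not SeqGAN's actual LSTM generator or CNN discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, T = 4, 6   # toy vocabulary size and fixed sequence length (assumptions)

def generator_step(prefix, rng):
    """Stand-in for policy G_theta / roll-out policy G_beta: uniform next token."""
    return int(rng.integers(VOCAB))

def discriminator(seq):
    """Stand-in for D_phi: 'realness' score; here, the fraction of zero tokens."""
    return float(np.mean(np.array(seq) == 0))

def rollout_q(prefix, n_rollouts, rng):
    """Estimate Q(s = prefix[:-1], a = prefix[-1]) by completing the sequence
    N times with the roll-out policy and averaging the discriminator rewards."""
    rewards = []
    for _ in range(n_rollouts):
        seq = list(prefix)
        while len(seq) < T:
            seq.append(generator_step(seq, rng))
        rewards.append(discriminator(seq))
    return float(np.mean(rewards))

q = rollout_q([0, 0, 1], n_rollouts=500, rng=rng)
print(q)   # expected value: (2 + 3 * 1/4) / 6 = 0.4583...
```

The averaging over $N$ completions is exactly what turns the end-of-sequence-only reward into an intermediate training signal for partial sequences.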
The gradient of the generator's objective function $J(\theta)$ w.r.t the generator's parameters $\theta$ is expressed as follows: \begin{equation} \begin{split} \nabla_{\theta} J(\theta) = \sum_{t=1}^{T}\mathbb{E}_{Y_{1:t-1} \sim G_{\theta}}\bigg[\sum_{y_t \in Y} \nabla_{\theta}G_{\theta}(y_t|Y_{1:t-1}) \cdot \\ \cdot Q_{D_{\phi}}^{G_{\theta}}(Y_{1:t-1}, y_{t})\bigg] \end{split} \end{equation} The expectation $\mathbb{E}$ can be approximated by sampling methods, and the generator's parameters are updated: \begin{equation} \theta \leftarrow \theta + \alpha_{h}\nabla_{\theta}J(\theta), \text{where } \alpha_{h} \text{ is the learning rate} \end{equation} In the initial stages of training, the generator $G_{\theta}$ is pre-trained via maximum likelihood estimation, and the discriminator $D_{\phi}$ is pre-trained via minimizing the cross-entropy between the ground truth label and the predicted probability; after the pre-training stage is over, the generator and the discriminator are trained alternately. The SeqGAN authors chose an LSTM \cite{schmidhuber1997long} architecture for the generator in order to avoid the vanishing and exploding gradient problems of back-propagation through time, and a CNN \cite{lecun1998gradient}, \cite{kim2014convolutional} architecture with highway networks \cite{srivastava2015highway} as discriminator. The evaluation metric is set to minimize the average negative log-likelihood between the generated data and an oracle considered as the human observer: \begin{equation} \text{NLL}_{\text{oracle}} = -\mathbb{E}_{Y_{1:T} \sim G_{\theta}}\bigg[\sum_{t=1}^{T} \log G_{\text{oracle}}(y_t|Y_{1:t-1}) \bigg] \end{equation} Lin et al.~\cite{lin2017adversarial} consider that GANs restrict the discriminator too much by forcing it to be a binary classifier. Because of this setup, the discriminator is limited in its learning capacity, especially for tasks with a rich structure, such as generating natural language expressions.
The authors propose a generative adversarial framework called RankGAN, which is able to capture the richness and diversity of language by learning a relative ranking model between the machine-written and human-written sentences in an adversarial framework. The adversarial network consists of two neural network models, a generator $G_{\theta}$ and a ranker $R_{\phi}$, where $\theta$ and $\phi$ are parameters. The RankGAN discriminator $R_{\phi}$, instead of performing a binary classification task as in conventional GANs, is trained to rank the machine-written sentences lower than human-written sentences w.r.t. a human-written reference set. Conversely, the generator $G_{\theta}$ is trained to confuse the ranker $R_{\phi}$ in such a way that machine-written sentences are ranked higher than human-written sentences with regard to the reference set. The authors consider that by viewing a set of samples collectively (instead of just one sample) and evaluating their quality through relative ranking, the discriminator can make better judgements regarding the quality of the samples, which in turn helps the generator learn to generate realistic sequences. The problem can be expressed mathematically as $G_{\theta}$ and $R_{\phi}$ playing a minimax game with the objective function $\mathcal{L}$: \begin{equation} \label{RankGANobj} \begin{split} \min_{\theta}\max_{\phi} \mathcal{L}(G_{\theta}, R_{\phi}) = \mathbb{E}_{s \sim P_h}[\log R_{\phi}(s|U, C^{-})] + \\ \mathbb{E}_{s \sim G_{\theta}}[\log(1-R_{\phi}(s|U, C^{+}))] \end{split} \end{equation} The ranker $R_{\phi}$ is optimized to assign a high probability to a real sentence $s$ and a low probability to fake data generated by $G_{\theta}$. $s \sim P_h$ denotes that sentence $s$ is sampled from human-written sentences, while $s \sim G_{\theta}$ denotes that sentence $s$ is sampled from machine-written sentences. $U$ is a reference set which is used for estimating relative ranks.
$C^{+}$ and $C^{-}$ are comparison sets with regard to the input sentence. When the input sentence $s$ is sampled from the real data, $C^{-}$ is sampled from the generated data; conversely, when the sentence $s$ is sampled from the synthetic data generated by $G_{\theta}$, $C^{+}$ is sampled from human-written data. Similar to SeqGAN, the authors use policy gradient to overcome the non-differentiability problem of text data. However, unlike SeqGAN, the binary-classification discriminator is replaced with a ranker and a new learning objective function. The generative model $G_{\theta}$ is an LSTM network, while the ranker $R_{\phi}$ is a CNN network. The rewards for training the model are encoded with relative ranking information. When a sequence is incomplete, an intermediate reward is computed using Monte Carlo rollout methods. The expected future reward $V$ for partial sequences is defined as: \begin{equation} \label{rankgan_reward} V_{\theta, \phi}(s_{1:t-1}, U) = \mathbb{E}_{s_r \sim G_{\theta}}[ \log R_{\phi}(s_r | U, C^{+}, s_{1:t-1}) ] \end{equation} In Equation \ref{rankgan_reward} above, $s_r$ denotes a complete sequence sampled by using rollout methods starting from sequence $s_{1:t-1}$. A total of $n$ different paths are sampled, and their corresponding ranking scores are computed.
The average ranking score is used to approximate the expected future reward for the current partially generated sequence $s_{1:t-1}$; the ranking score of an input sentence $s$ given reference sentence $u$ and comparison set $C$ (where $C=C^{+}$ if sentence $s$ is machine-generated, $C=C^{-}$ otherwise) is computed using a softmax-like formula: \begin{equation} \label{rankgan_ranking} \begin{split} P(s|u,C) &=\frac{\exp(\gamma \alpha(s|u))}{\sum_{s^{'} \in C^{'}} \exp(\gamma \alpha (s^{'}|u))}, \text{where }\\ \alpha(s|u) &=\cos(y_s,y_u)=\frac{y_s y_u}{||y_s|| ||y_u||} \end{split} \end{equation} In Equation \ref{rankgan_ranking}, $y_s$ is the embedded feature vector of the input sentence, and $y_u$ is the embedded feature vector of the reference sentence. The gradient of the objective function for generator $G_{\theta}$ for start state $s_0$, vocabulary $V$, and generator policy $\pi_\theta$ is computed as: \begin{equation} \begin{split} \triangledown_{\theta} \mathcal{L}_{\theta}(s_0) = \mathbb{E}_{s_{1:T}\sim G_{\theta}} \bigg[\sum_{t=1}^{T}\sum_{w_t \in V} \triangledown_{\theta} \pi_{\theta} (w_t | s_{1:t-1}) \cdot \\ \cdot V_{\theta, \phi} (s_{1:t}, U)\bigg] \end{split} \end{equation} Therefore, RankGAN deals with the gradient vanishing problem of GANs by replacing the original binary-classifier discriminator with a ranking model in a learning-to-rank framework. The ranking score is computed by taking a softmax over the expected cosine distances from the generated sequences to the real data. Guo et al.~\cite{guo2017long} find that a limitation of current GAN frameworks for text generation \cite{yu2017seqgan}, \cite{lin2017adversarial}, \cite{rajeswar2017adversarial}, \cite{che2017maximum}, \cite{li2017adversarial}, \cite{zhang2017adversarial} is that they are only capable of generating short texts, within a limited length of around 20 tokens.
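The cosine-softmax ranking score of Equation \ref{rankgan_ranking} can be sketched as follows; the embeddings are toy vectors, and the assumption (made here for illustration) is that the softmax denominator runs over the input sentence together with the comparison set:

```python
import numpy as np

def cosine(a, b):
    """alpha(s|u) = cos(y_s, y_u)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_score(y_s, y_u, comparison_feats, gamma=1.0):
    """P(s|u, C): softmax over gamma-scaled cosine similarities to the
    reference embedding y_u, over the input plus the comparison set."""
    feats = [y_s] + list(comparison_feats)
    sims = np.array([gamma * cosine(f, y_u) for f in feats])
    e = np.exp(sims - sims.max())
    return float(e[0] / e.sum())

y_u = np.array([1.0, 0.0])     # reference sentence embedding (toy)
y_real = np.array([0.9, 0.1])  # embedding close to the reference
y_fake = np.array([0.1, 0.9])  # embedding far from the reference
comparison = [np.array([0.5, 0.5])]

print(rank_score(y_real, y_u, comparison))  # higher score
print(rank_score(y_fake, y_u, comparison))  # lower score
```

A sentence whose embedding lies closer to the reference receives a higher relative rank, which is the signal the ranker uses in place of a binary real/fake decision.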
Generating longer sequences is a less studied but more challenging research problem with many useful applications, such as the auto-generation of news articles or product descriptions. Nevertheless, long text generation faces the issue that the binary guiding signal from discriminator $D$ is sparse and non-informative: it does not provide useful information regarding the intermediate syntactic structure and semantics of the generated text from which the generator $G$ could learn. Besides that, this signal is only available after the entire sequence has been generated, and the final reward value does not provide much guidance on how to alter the parameters of $G$ at training time. Moreover, the approach of relying on binary feedback from the discriminator requires a very large number of real and generated samples to improve $G$. Aiming to make the guiding signal coming from the discriminator $D$ more informative, the authors propose LeakGAN \cite{guo2017long}, a GAN approach for adversarial text generation in which the discriminative model $D$ is allowed to leak its own high-level extracted features (in addition to providing the final reward value) to better guide the training of the generative model $G$. The authors pick a hierarchical generator for $G$, which is made up of two distinct modules: a \textit{high-level manager} module, and a \textit{low-level worker} module. The high-level manager module (or mediator) receives the feature map representation of the discriminator $D$; this is not normally allowed in the conventional GAN setup, as this feature map is internally maintained by the discriminator. The manager embeds this feature map representation coming from the discriminator and passes it over to the worker module. The worker first encodes the current generated sequence, and combines the resulting encoding with the embedding produced by the manager to decide what action to take at the current state.
Therefore, LeakGAN ``leaks'' guiding signals from the discriminator $D$ to the generator $G$ more frequently and more informatively throughout the sequence generation process, not only at the end, helping $G$ improve faster. The discriminator $D_{\phi}$ is made up of a feature extractor $\mathcal{F}(.; \phi_f)$ and a final sigmoid classification layer. For input sequence $s$, $D_{\phi}$ is defined as: \begin{equation} D_{\phi}(s) = \text{sigmoid}(\phi_l^T \mathcal{F} ( s; \phi_f)) = \text{sigmoid}(\phi_l^T f) \end{equation} The feature vector in the last layer of $D_\phi$ is denoted as $f = \mathcal{F} ( s; \phi_f) $, and it will be leaked to the generator $G_\theta$. A natural implication of this approach is that the reward the generator $G_\theta$ receives for a partially generated sequence is directly related to the quality of the features extracted by the discriminator $D_{\phi}$. Therefore, for the discriminator $D_{\phi}$ to yield a high reward, it is necessary to find a highly rewarding region in the extracted feature space. The authors consider that compared to a scalar signal, the feature vector $f$ is more informative as it captures the position of the generated words in the extracted feature space. $D_{\phi}$ is implemented as a CNN network.
The manager module $\mathcal{M}(f_t, h_{t-1}^{M}; \theta_m)$ of the hierarchical generator $G_\theta$ receives as input the extracted feature vector $f_t$, which it combines with its internal hidden state to produce the goal vector $g_t$: \begin{equation} \begin{split} g_t^{'} &= \mathcal{M}(f_t, h_{t-1}^{M}; \theta_m)\\ g_t &= \frac{g_t^{'}}{||g_t^{'}||} \end{split} \end{equation} The goal vector embedding $w_t$ of goal $g_t$ is computed by applying a linear transformation $\psi$ with weight matrix $W_{\psi}$ to the sum of the recent $c$ goals: \begin{equation} w_t = \psi (\sum_{i=1}^{c}g_{t-i}) = W_{\psi} (\sum_{i=1}^{c}g_{t-i}) \end{equation} $w_t$ is fed to the worker module $\mathcal{W}(.;\theta_w)$, which is in charge of generating the next token. The worker module takes the current word $x_t$ as input and outputs matrix $O_t$; this matrix is then combined through a softmax with the goal vector embedding $w_t$: \begin{equation} \begin{split} O_t, h_t^W &= \mathcal{W}(x_t, h_{t-1}^{W}; \theta_w) \\ G_{\theta}(.|s_t) &= \text{softmax}(O_t w_t / \alpha) \end{split} \end{equation} At training time, the manager and the worker modules are trained separately -- the manager is trained to predict which are the most rewarding positions in the discriminative feature space, while the worker is rewarded to follow these directions. The gradient for the manager module is defined as: \begin{equation} \triangledown _{\theta_m}^{\text{adv}}g_t = -Q_{\mathcal{F}}(s_t, g_t)\triangledown_{\theta_m} d_{\cos}(f_{t+c}-f_t, g_t(\theta_m)) \end{equation} $Q_{\mathcal{F}}(s_t, g_t)$ defines the expected reward under the current policy and can be approximated using Monte Carlo search. $d_{\cos}$ computes the cosine similarity between the goal vector $g_t(\theta_m)$ produced by the manager and the change in feature representation $f_{t+c} -f_t$ after $c$ transitions.
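The manager-to-worker data flow above (unit-norm goal vector, linear transform $\psi$ of the last $c$ goals, and the final softmax over $O_t w_t$) can be sketched with toy linear stand-ins; the dimensions, the linear manager, and the fixed output matrix are illustrative assumptions replacing LeakGAN's actual LSTM modules:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, VOCAB, C = 8, 5, 4   # goal dimension, vocabulary size, lookback c (toy)

W_m = rng.standard_normal((DIM, DIM))     # stand-in manager weights
W_psi = rng.standard_normal((DIM, DIM))   # linear transformation psi
W_o = rng.standard_normal((VOCAB, DIM))   # stand-in worker output matrix O_t

def manager(f_t):
    """Stand-in for M(f_t, h; theta_m): map the leaked feature vector f_t
    to a unit-norm goal vector g_t (a linear map replaces the LSTM here)."""
    g = W_m @ f_t
    return g / np.linalg.norm(g)

# Goals produced for the last c steps from (random) leaked feature vectors.
goals = [manager(rng.standard_normal(DIM)) for _ in range(C)]

# w_t = W_psi * (sum of the recent c goals)
w_t = W_psi @ np.sum(goals, axis=0)

# G_theta(.|s_t) = softmax(O_t w_t / alpha)
alpha = 1.0
logits = W_o @ w_t / alpha
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)   # next-token distribution over the toy vocabulary
```

The worker's token distribution is thus explicitly conditioned on the direction the manager extracted from the leaked discriminator features.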
In order to achieve a high reward, the loss function tries to force the goal vector to match the transition in feature space. Before the adversarial training takes place, the manager undergoes a pre-training stage with a separate training scheme which mimics the transition of real text samples in the feature space: \begin{equation} \triangledown_{\theta_m}^{\text{pre}} = -\triangledown_{\theta_m}d_{\cos}(f_{t+c}^{'} - f_t^{'}, g_t(\theta_m)) \end{equation} The worker uses the REINFORCE algorithm during training to maximize the reward when taking action $x_t$ given that the previous state is $s_{t-1}$: \begin{equation} \begin{split} \triangledown_{\theta_w}\mathbb{E}_{s_{t-1} \sim G} \bigg[ \sum_{x_t} r_{t}^{I} \mathcal{W}(x_t|s_{t-1}; \theta_w)\bigg] &= \\ \mathbb{E}_{s_{t-1} \sim G, x_t \sim \mathcal{W} (x_t | s_{t-1})}\bigg[ r_t^{I} \triangledown_{\theta_w} \log \mathcal{W} (x_t|s_{t-1}; \theta_w) \bigg] \\ r_t^{I} = \frac{1}{c} \sum_{i=1}^{c}d_{\cos}(f_t - f_{t-i}, g_{t-i}) \end{split} \end{equation} During the adversarial training process, the generator $G_\theta$ and the discriminator $D_\phi$ are trained in alternating stages. When the generator $G_\theta$ is trained, the worker $\mathcal{W}(.;\theta_w)$ and the manager $\mathcal{M}(.;\theta_m)$ modules are trained alternately, with the other module fixed. Mode collapse \cite{goodfellow2016nips} is a common problem when training GAN models, in which the generator learns to produce samples with extremely low variety, limiting the usefulness of the learnt GAN model. In mode collapse the generator network learns to output samples from only a few modes of the data distribution, missing out on many other modes even though samples from these missing modes can be found throughout the training data. Mode collapse can range from complete collapse, when the generated samples are entirely identical, to partial collapse, when the generated samples share some common properties \cite{srivastava2017veegan}, \cite{salimans2016improved}.
Several attempts have been made to address the problem, which include: \textit{i)} directly encouraging the generator cost function to account for the diversity of the generated batches by comparing these samples across a batch in order to determine whether the entire batch is real or fake, \textit{ii)} anticipating counterplay, in which the generator learns to fool the discriminator before the discriminator has a chance to respond (thereby taking counterplay into account), \textit{iii)} experience replay, which minimizes the switching between modes by showing old fake generated samples to the discriminator every now and then, and \textit{iv)} using multiple GANs, in which a GAN is trained for each different mode so that when combined, the GANs altogether cover all modes. In LeakGAN, in order to address mode collapse, the authors propose an interleaved training scheme, which combines supervised training using maximum likelihood estimation with GAN adversarial training (instead of carrying out only GAN adversarial training after the pre-training stage). Blending the two training schemes is considered useful by the authors as it helps LeakGAN overcome local minimums, alleviates mode collapse, and acts as an implicit regularizer on the generative model. \subsection{Samples produced by the review generators} \label{appendix_user_study_samples} \begin{figure*}[!htbp] \centering \includegraphics[width=\textwidth]{figures/AMT_instructions.png} \caption{Screenshot of the instructions presented to Amazon Mechanical Turk workers.} \label{user_instructions} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=6.5in]{figures/Interface.png} \caption{Screenshot of the Amazon Mechanical Turk user study interface.} \label{user_interface} \end{figure*} Figure \ref{user_instructions} shows the instructions given to the AMT workers who participated in this study. In Figure \ref{user_interface} we include a screenshot of the user interface when annotating reviews.
In what follows we present samples generated by the review generators on which human annotators disagree most on whether these are human-written or machine-generated. \begin{itemize} \item Word LSTM temp 1.0 \begin{enumerate}[label=\alph*)] \item i so enjoyed this book . i felt \underline{\hspace{0.4cm}} though . i especially like loving horses in the \underline{\hspace{0.4cm}} . and the story is well written . \item one of a different type on locked paranormal / vacation book . i enjoyed the characters and the plot . great mixture of historical fiction . \item this first edition of the complete series 8 years over six episodes just makes you laugh . the original tv is by far my cup of tea ! \item works out of the box ! wouldn ' t spend the money for a better keyboard . use this with the matching kindle screen as well . \end{enumerate} \item Word LSTM temp 0.7 \begin{enumerate}[label=\alph*)] \item i am looking forward to the next book . i am a \underline{\hspace{0.4cm}} \underline{\hspace{0.4cm}} and i enjoyed the story . i like books where the characters are real . \item this is an exciting book i could n ' t put down . i will probably read more books by this author . this is a must read . \item okay , that ' s how i expected this movie . it was okay but it was so boring . i was bored and was disappointed . \item this cable is not bad . it is so cheap and it works great . i ' ve used this for a couple of months now and \underline{\hspace{0.4cm}} on the ipad \end{enumerate} \item Word LSTM temp 0.5 \begin{enumerate}[label=\alph*)] \item this book was a great read ! the story was exciting and a bit \underline{\hspace{0.4cm}} . i really enjoyed the characters and the story line . \item this is a great cable for the price . i would recommend this product to anyone needing a cable for a great price . \item this is a great series . it is a must see for anyone who loves period dramas . i love the \underline{\hspace{0.4cm}} . 
\item these batteries seem to be working as expected . i have had no problems with this product . i would recommend this to anyone . \end{enumerate} \item Scheduled Sampling \begin{enumerate}[label=\alph*)] \item like most of the ones i have ! the tablet that came starts working properly . \item i have had any almost using keyboards with an iphone case and kept it nicely and time . and it works well . \item have got to watch it many times again and the seasons of \underline{\hspace{0.4cm}} each episode we can all watch it . \item very interesting characters and likable characters that grow when you gave me \underline{\hspace{0.4cm}} of the \underline{\hspace{0.4cm}} because of the dog . what can i say is i absolutely loved it . \end{enumerate} \item Google LM \begin{enumerate}[label=\alph*)] \item \underline{\hspace{0.4cm}} systems generally require less bandwidth and \underline{\hspace{0.4cm}} with operating systems , \underline{\hspace{0.4cm}} users to write and edit data nearly anywhere . \item seems all but impossible to access . \underline{\hspace{0.4cm}} is all a \underline{\hspace{0.4cm}} and gets a bad \underline{\hspace{0.4cm}} on every \underline{\hspace{0.4cm}} . \item \underline{\hspace{0.4cm}} is based in \underline{\hspace{0.4cm}} \underline{\hspace{0.4cm}} , \underline{\hspace{0.4cm}} , with a commercial office in \underline{\hspace{0.4cm}} \item oved this clip and the \underline{\hspace{0.4cm}} and \underline{\hspace{0.4cm}} apps were about so much fun that \underline{\hspace{0.4cm}} paid a big price . \underline{\hspace{0.4cm}} 2 and 3 like crazy . \end{enumerate} \item Attention Attribute to Sequence \begin{enumerate}[label=\alph*)] \item i am always waiting for the next book to come out . i am a big fan of sean black and will . \item purchased this to use with my macbook pro . it worked out perfectly , as described . no complaints . \item great book all of the great mystery books . i enjoyed all of them and was sad when the book ended . 
\item this is a great product . i ' ve had it for over a year now and it ' s still going strong . i ' m very happy with this purchase . \end{enumerate} \item Contexts to Sequences \begin{enumerate}[label=\alph*)] \item i love this series . i love the characters and the story . i love the characters and the story line . \item a great book and a great read . i love the characters and the story . i would recommend this book to anyone . \item i enjoyed the story . it was a good read . i would recommend it to anyone who likes a good read . \item i love this book and i love the characters . i love this book and i was not disappointed . \end{enumerate} \item Gated Contexts to Sequences \begin{enumerate}[label=\alph*)] \item this is the first book i have read by this author . would recommend to anyone who likes a good romance book . \item one of the best books i have ever read . the chemistry between the two main characters was a good read . \item this book is awesome . lots of action and intrigue . i ' m glad i bought this book . thank you for sharing \item great story and plot . sometimes a little slow at times but overall a good read . \end{enumerate} \item MLE SeqGAN \begin{enumerate}[label=\alph*)] \item you will like this movie - get this set \ldots better than expected award for the characters . bad ending . \item this switch converter works fine with all games and works perfect , sturdy program to zero manual products . nice feel . \item i could not put it down . it was an interesting clean book , but i was expecting many more individuals in this story so i read in a long time . \item great story . in college kids has been lost the \underline{\hspace{0.4cm}} mysteries , chris \underline{\hspace{0.4cm}} son is not better . \end{enumerate} \item SeqGAN \begin{enumerate}[label=\alph*)] \item it was slow he kept me interested , and i think i thoroughly enjoyed the story . \item i enjoyed this book and look forward to getting to \underline{\hspace{0.4cm}} larson . 
\item received in excellent condition . i thought it was great but didn ' t know that movies were more than high ratings which i am my cup of tea . \item awesome cute story . kudos to mr much \underline{\hspace{0.4cm}} of the sookie ' s story . \end{enumerate} \item RankGAN \begin{enumerate}[label=\alph*)] \item robin williams is ok . just a great movie with \underline{\hspace{0.4cm}} now . \underline{\hspace{0.4cm}} is a great film with three stars ! wonderful video for a very good movie . \item i have loved this movie so i could like the dvd sort of info . hot slow . love the old ford shows to though . \underline{\hspace{0.4cm}} a great actor . \item this was a very amazing . \underline{\hspace{0.4cm}} laws and oh fact she became \underline{\hspace{0.4cm}} and \underline{\hspace{0.4cm}} is very unlikely together on the case . \item i say so i would that originally arrived so i love the circular inch screen . i am sad how it works . \end{enumerate} \item LeakGAN \begin{enumerate}[label=\alph*)] \item i really enjoyed reading this book . the author did an excellent job in delivering for all his writing books into us as business . a great summer read . \item just loved it , so much could read more of this series , i like it but it was not written in a book that is well written , but very interesting . \item i love hockey - baseball movie coming meets hockey ' s et addicted fear the birds feature so popular films have developed far worse reviews . \item a very good book with a lot of twists in this book . i will be checking out more of this author next book . \end{enumerate} \end{itemize} \subsection{Results} \subsubsection{Human Evaluators} \label{appendix_human_evaluators} We chose the task of distinguishing machine-generated from real reviews because it is a straightforward surrogate of a Turing test. Moreover, how much their generated content can fool humans has been a key claim of many artificial intelligence models recently. 
The low inter-rater agreement suggests that this is a difficult task even for humans, which we hope will prompt the community to rethink these claims. There are indeed finer-grained, perhaps more agreeable aspects of text quality (including semantic coherence, syntactic correctness, fluency, adequacy, diversity and readability). We decided not to include them in this experiment for two reasons: 1) as this is the first study, we were not sure which aspects human raters would consider when judging the realism of a review; 2) we wanted to keep the experiment design simple, and many of these aspects are harder to define. In the post-experiment survey, the raters commented on the reasons why they considered reviews as fake. The low inter-rater agreement (0.27) reflects the difficulty and subjectivity of the task: identifying individual reviews as human-written or machine-generated. Low human agreement is commonly reported in subjective evaluation tasks. Since our goal is to evaluate the \textbf{evaluators} instead of the competing algorithms, it is important to use a task neither too easy nor too hard, so that there are distinguishable differences among the performances of competitors (including humans). When using the majority vote of human judgements, the accuracy of humans improved to a reasonable 72.63 \%. \subsubsection{Discriminative Evaluators} \label{appendix_discriminative_evaluators_results} \begin{table}[!bp] \caption{Accuracy of deep (LSTM) and shallow (SVM) meta-adversarial evaluators. \textbf{The lower the better.} Meta-adversarial evaluators do better than humans on individual reviews, with less bias between the two classes. 
GAN-based generators are considered to be the best by meta-adversarial evaluators.} \centering \scalebox{0.8}{ \begin{tabular}{ l | c | c } \hline \hline \textbf{Generators} & \textbf{LSTM} & \textbf{SVM} \\ \hline \hline Word LSTM temp 1.0 & 48.29 \% & \textbf{50.31} \% \\ Word LSTM temp 0.7 & 92.58 \% & 78.69 \% \\ Word LSTM temp 0.5 & 99.31 \% & 94.74 \% \\ Scheduled Sampling & 50.09 \% & 51.31 \% \\ Google LM & 84.58 \% & 78.59 \% \\ Attention Attribute to Sequence & 90.08 \% & 74.37 \% \\ Contexts to Sequences & 100.00 \% & 100.00 \% \\ Gated Contexts to Sequences & 98.37 \% & 96.26 \% \\ MLE SeqGAN & \textbf{41.45} \% & 52.35 \% \\ SeqGAN & 50.05 \% & 56.20 \% \\ RankGAN & 66.28 \% & 70.17 \% \\ LeakGAN & 87.03 \% & 77.55 \% \\ \hline D-test (all) & 77.58 \% & 74.50 \% \\ D-test (human-written) & 80.12 \% & 75.98 \% \\ D-test (machine-generated) & 75.04 \% & 73.01 \% \\ \hline \hline \end{tabular}} \label{meta_discriminator_accuracy_rankings} \end{table} In Table \ref{meta_discriminator_accuracy_rankings} and Table \ref{meta_discriminator_accuracy_rankings_all} we present comprehensive results for the meta-adversarial evaluators. \begin{table*}[!h] \caption{Accuracy of deep (LSTM, CNN, CNN \& LSTM) and shallow (SVM, RF, NB, XGBoost) meta-adversarial evaluators. \textbf{The lower the better.} Meta-adversarial evaluators do better than humans on individual reviews, with less bias between the two classes. 
GAN-based generators are considered best by meta-adversarial evaluators.} \centering \scalebox{0.7}{ \begin{tabular}{ l | c | c | c | c | c | c | c } \hline \hline \textbf{Generators} & \textbf{LSTM} & \textbf{CNN} & \textbf{CNN \& LSTM} & \textbf{SVM} & \textbf{RF} & \textbf{NB } & \textbf{XGBoost} \\ \hline \hline Word LSTM temp 1.0 & 48.29 \% & 55.22 \% & 45.68 \% & \textbf{50.31} \% & 53.63 \% & 32.77 \% & 48.97 \% \\ Word LSTM temp 0.7 & 92.58 \% & 93.14 \% & 91.02 \% & 78.69 \% & 81.05 \% & 79.92 \% & 80.49 \% \\ Word LSTM temp 0.5 & 99.31 \% & 99.35 \% & 99.08 \% & 94.74 \% & 94.29 \% & 96.86 \% & 94.71 \% \\ Scheduled Sampling & 50.09 \% & 48.77 \% & 43.37 \% & 51.31 \% & 52.88 \% & \textbf{20.97} \% & 44.12 \% \\ Google LM & 84.58 \% & 74.03 \% & 74.85 \% & 78.59 \% & 82.71 \% & 48.28 \% & 82.41 \% \\ Attention Attribute to Sequence & 90.08 \% & 91.78 \% & 89.94 \% & 74.37 \% & 77.29 \% & 80.02 \% & 71.68 \% \\ Contexts to Sequences & 100.00 \% & 100.00 \% & 99.97 \% & 100.00 \% & 99.98 \% & 100.00 \% & 99.98 \% \\ Gated Contexts to Sequences & 98.37 \% & 99.06 \% & 98.38 \% & 96.26 \% & 95.35 \% & 98.63 \% & 93.62 \%\\ MLE SeqGAN & \textbf{41.45} \% & \textbf{47.54} \% & \textbf{41.91} \% & 52.35 \% & \textbf{51.14} \% & 21.83 \% & \textbf{43.71} \%\\ SeqGAN & 50.05 \% & 52.91 \% & 47.35 \% & 56.20 \% & 54.91 \% & 25.60 \% & 48.11 \% \\ RankGAN & 66.28 \% & 67.23 \% & 59.37 \% & 70.17 \% & 61.94 \% & 35.98 \% & 61.23 \% \\ LeakGAN & 87.03 \% & 80.28 \% & 79.57 \% & 77.55 \% & 67.74 \% & 46.80 \% & 63.80 \% \\ \hline D-test (all) & 77.58 \% & 74.72 \% & 75.18 \% & 74.50 \% & 70.31 \% & 70.74 \% & 73.79 \% \\ D-test (human-written) & 80.12 \% & 73.54 \% & 77.99 \% & 75.98 \% & 68.59 \% & 83.53 \% & 79.10 \%\\ D-test (machine-generated) & 75.04 \% & 75.90 \% & 72.38 \% & 73.01 \% & 72.04 \% & 57.95 \% & 68.48 \% \\ \hline \hline \end{tabular}} \label{meta_discriminator_accuracy_rankings_all} \end{table*} \subsubsection{Text-Overlap Evaluators} 
\label{appendix_text_overlap_evaluators_results} In Figure \ref{word_overlap_eval_compressed_version} we present detailed results for all word overlap evaluators we used in this study. \begin{figure*}[!htbp] \centering \includegraphics[width=5in]{figures/word_overlap_accuracy_h2.png} \caption{Text-Overlap Evaluators (BLEU, ROUGE, METEOR and CIDEr) scores for individual generators. \textbf{The higher the better.} The rankings are overall similar, as GAN-based generators are ranked low.} \label{word_overlap_eval_compressed_version} \end{figure*} \subsubsection{Comparing Evaluators} \label{appendix_comparing_evaluators} In Table \ref{table_correlation_results} we present correlation results between the evaluators included in this work. \begin{table*}[htbp] \centering \scalebox{0.8}{ \begin{tabular}{ l | c | c | c | c | c | c } \hline \hline \textbf{Evaluation Method} & \textbf{Kendall tau-b} & \textbf{Spearman} & \textbf{Pearson} & \textbf{Kendall tau-b} & \textbf{Spearman} & \textbf{Pearson} \\ & \textbf{(H1)} & \textbf{(H1)} & \textbf{(H1)} & \textbf{(H2)} & \textbf{(H2)} & \textbf{(H2)} \\ \hline \textbf{SVM} Individual-discriminators & -0.4545* & -0.6294* & -0.6716* & -0.5455* & -0.6783* & -0.6823* \\ \hline \textbf{LSTM} meta-discriminator & -0.5455* & -0.7552* & -0.7699* & -0.6364* & -0.8042* & -0.7829* \\ \hline \textbf{CNN} meta-discriminator & -0.6363* & -0.8112* & -0.8616* & -0.7273* & -0.8741* & -0.8766* \\ \hline \textbf{CNN \& LSTM} meta-discriminator & -0.6060* & -0.7902* & -0.8392* & -0.6970* & -0.8462* & -0.8507* \\ \hline \textbf{SVM} meta-discriminator & -0.4545* & -0.6573* & -0.7207* & -0.5455* & -0.6993* & -0.7405 \\ \hline \textbf{RF} meta-discriminator & -0.5455* & -0.7273* & -0.7994* & -0.6364* & -0.7832* & -0.8075* \\ \hline \textbf{NB} meta-discriminator & -0.6364* & -0.8112* & -0.9290* & -0.7273* & -0.8741* & -0.9388* \\ \hline \textbf{XGBoost} meta-discriminator & -0.5455* & -0.7413* & -0.7764* & -0.6364* & -0.8042* & -0.7878* \\ \hline 
\textbf{BLEU} evaluator & 0.7576* & 0.8601* & 0.8974* & 0.6666* & 0.8182* & 0.9060* \\ \hline \textbf{ROUGE} evaluator & 0.6060* & 0.7692* & 0.8054* & 0.5758* & 0.7483* & 0.8073* \\ \hline \textbf{METEOR} evaluator & 0.5758* & 0.7762* & 0.8225* & 0.5455* & 0.7622* & 0.8231* \\ \hline \textbf{CIDEr} evaluator & 0.5455* & 0.7413* & 0.8117* & 0.4545* & 0.6643* & 0.8203* \\ \hline \hline \end{tabular}} \caption{Kendall tau-b, Spearman and Pearson correlation coefficients between human evaluators $H1$ and $H2$, discriminative evaluators, and word-overlap evaluators (* denotes statistically significant results with $p \le 0.05$).} \label{table_correlation_results} \end{table*} \subsubsection{Diversity Analysis} \label{appendix_diversity_analysis} In Table \ref{diversity_results} we present results for the Self-BLEU metric, while in Table \ref{table_correlation_diversity} we present the correlation of Self-BLEU with the other evaluators. In addition, in Table \ref{table_correlation_BLEUGTrain} we present correlation results for BLEU G-Train and the rest of the evaluators. \begin{table}[!htbp] \centering \scalebox{0.7}{ \begin{tabular}{ l | c | c } \hline \hline \textbf{Generative Text Model} & \textbf{Self-BLEU} & \textbf{Lexical diversity}\\ \hline Word LSTM temp 1.0 & 0.1886 & 0.6467 \\ \hline Word LSTM temp 0.7 & 0.4804 & 0.2932 \\ \hline Word LSTM temp 0.5 & 0.6960 & 0.1347 \\ \hline Scheduled Sampling & 0.1233 & 0.7652 \\ \hline Google LM & 0.1706 & \textbf{0.7745} \\ \hline Attention Attribute to Sequence & 0.5021 & 0.2939 \\ \hline Contexts to Sequences & 0.8950 & 0.0032 \\ \hline Gated Contexts to Sequences & 0.7330 & 0.1129 \\ \hline MLE SeqGAN & 0.1206 & 0.7622 \\ \hline SeqGAN & 0.1370 & 0.7330 \\ \hline RankGAN & \textbf{0.1195} & 0.7519 \\ \hline LeakGAN & 0.1775 & 0.7541 \\ \hline \hline \end{tabular}} \caption{Self-BLEU diversity scores per generator (the lower the more diverse), and lexical diversity scores (the higher the more diverse). 
There is high correlation between the two metrics with respect to the rankings of the generative text models.} \label{diversity_results} \end{table} \begin{table}[!htbp] \centering \scalebox{0.6}{ \begin{tabular}{ l | c | c | c } \hline \hline \textbf{Self-BLEU} & \textbf{Kendall tau-b} & \textbf{Spearman} & \textbf{Pearson} \\ \hline \textbf{H1} evaluator & -0.8788* & -0.9301* & -0.8920* \\ \hline \textbf{H2} evaluator & -0.7879* & -0.8881* & -0.9001* \\ \hline \textbf{LSTM} meta-discriminator & 0.6667* & 0.8252* & 0.7953* \\ \hline \textbf{CNN} meta-discriminator & 0.7576* & 0.8811* & 0.8740* \\ \hline \textbf{CNN \& LSTM} meta-discriminator & 0.7273* & 0.8601* & 0.8622* \\ \hline \textbf{SVM} meta-discriminator & 0.5758* & 0.7413* & 0.8518* \\ \hline \textbf{RF} meta-discriminator & 0.6667* & 0.8112* & 0.8944* \\ \hline \textbf{NB} meta-discriminator & 0.7576* & 0.8811* & 0.9569* \\ \hline \textbf{XGBoost} meta-discriminator & 0.6667* & 0.8252* & 0.8693* \\ \hline \textbf{BLEU} evaluator & -0.8788 & -0.9301* & -0.9880* \\ \hline \textbf{ROUGE} evaluator & -0.7273* & -0.8392* & -0.9299* \\ \hline \textbf{METEOR} evaluator & -0.6967* & -0.8462* & -0.8955* \\ \hline \textbf{CIDEr} evaluator & -0.5455* & -0.7413* & -0.7987*\\ \hline \hline \end{tabular}} \caption{Kendall tau-b, Spearman and Pearson correlation coefficients between Self-BLEU diversity rankings and the three evaluation methods - human evaluators $H1$, $H2$, discriminative evaluators and word-overlap based evaluators (* denotes statistically significant results with $p \le 0.05$). Meta-discriminators have been trained on D-train, D-valid sets and tested on the \textbf{annotated D-test set with ground-truth test labels}.} \label{table_correlation_diversity} \end{table} \begin{table}[t!] 
\centering \scalebox{0.6}{ \begin{tabular}{ l | c | c | c } \hline \hline \textbf{BLEU G-train} & \textbf{Kendall tau-b} & \textbf{Spearman} & \textbf{Pearson} \\ \hline \textbf{H1} evaluator & 0.7176* & 0.8511* & 0.9111* \\ \hline \textbf{H2} evaluator & 0.6260* & 0.8091* & 0.9209* \\ \hline \textbf{LSTM} meta-discriminator & -0.5649* & -0.7461* & -0.7091* \\ \hline \textbf{CNN} meta-discriminator & -0.6565 & -0.7951* & -0.8213* \\ \hline \textbf{CNN \& LSTM} meta-discriminator & -0.6260* & -0.7811* & -0.7951* \\ \hline \textbf{SVM} meta-discriminator & -0.4428* & -0.6130* & -0.7442*\\ \hline \textbf{RF} meta-discriminator & -0.5038* & -0.6340* & -0.7864*\\ \hline \textbf{NB} meta-discriminator & -0.6260* & -0.7601* & -0.9164* \\ \hline \textbf{XGBoost} meta-discriminator & -0.5649* & -0.6550* & -0.7586*\\ \hline \textbf{BLEU} evaluator & 0.9619* & 0.9912* & 0.9936* \\ \hline \textbf{ROUGE} evaluator & 0.5954* & 0.7496* & 0.8717* \\ \hline \textbf{METEOR} evaluator & 0.6260* & 0.7636* & 0.8477* \\ \hline \textbf{CIDEr} evaluator & 0.6565* & 0.8371* & 0.8318* \\ \hline \hline \end{tabular}} \caption{Kendall tau-b, Spearman and Pearson correlation coefficients between BLEU G-train rankings and the three evaluation methods - human evaluators $H1$, $H2$, discriminative evaluators and word-overlap based evaluators (* denotes statistically significant results with $p \le 0.05$). Meta-discriminators have been trained on D-train, D-valid sets and tested on the \textbf{annotated D-test set with ground-truth test labels}.} \label{table_correlation_BLEUGTrain} \end{table} \section{Discussion} \subsection{User Study} \label{appendix_user_study} A more detailed list of major clusters of reasons is as follows: \begin{enumerate} \item Grammar/ typo/ misspelling: the language does not flow well. \item Too general/ too generic/ vagueness: generated reviews are vague and lack details. \item Word choice (wording): lack of slang, wrong word choices. 
\item Flow (not fluent)/ structured/ logical: sentence-level language errors. \item Contradictory arguments: some arguments support opposite opinions. \item Emotion: lack of emotion and personality in the comments. \item Repeated text: using words/ phrases repetitively. \item Overly similar to human writing: too advertisement-like, too formal, too likely to be real. \end{enumerate} \subsection{Granularity of Judgements} \label{appendix_granularity_of_judgments} We asked the Turkers to label individual reviews as either fake or real. Each human judge only annotates 20 reviews, and they do not know which reviews are generated by the same generator. Compared to an adversarial discriminator, a human judge has not seen many ``training'' examples of fake reviews or generators. That explains why the meta-adversarial evaluators are better at identifying fake reviews. In this context, humans are likely to judge whether a review is real based on how ``similar'' it appears to the true reviews they are used to seeing online. That is probably why their decisions are better correlated with text-overlap metrics that measure the similarity between a review and a set of references. This hypothesis is supported by a post-experiment survey of the human judges. Please see Appendix \ref{appendix_user_study_samples} for user study samples. This finding provides interesting implications for the selection of evaluation methods for different tasks. In tasks that are set up to judge individual pieces of generated text (e.g., reviews, translations, summaries, captions, fake news) where there exists human-written ground-truth, it is better to use word-overlap metrics instead of adversarial evaluators. Indeed, when the audience is not trained by reading lots of bot-generated texts, it is more reasonable to use an evaluator that mimics their decision-making process. 
In some scenarios, the task is to make judgments in the context of a longer conversation or a set of documents (e.g., conversation agents, dialogue systems, social bots). The difference is that human subjects are exposed to machine-generated text, so they may be better trained to distinguish fake from real. Moreover, when judgments are made at the agent/ system level (e.g., whether a Twitter account is a bot), signals like how similar the agent outputs are or how much the agent memorizes the training examples may become more useful than word usage, and a discriminative evaluator may be more effective than text-overlap metrics. Our experiment also provides implications for improving NLG models: adversarial accuracy might not be the optimal objective for NLG if the goal is to generate documents that humans consider real. Indeed, a fake review that fools humans does not necessarily need to fool a machine that has seen everything. In contrast, GAN-based models may perform better when judged as a whole system instead of on individual items, or in a conversational context. When the human judges have seen enough examples from the same generator, the next example had better be somewhat different. \subsection{Imperfect Ground-truth} \label{appendix_imperfect_ground_truth} One important thing to note is that all discriminative evaluators are trained using natural labels (i.e., treating all examples from the Amazon review dataset as positive and examples generated by the candidate models as negative) instead of human-annotated labels. It is possible that if they were trained with human labels, the discriminative evaluators would have been more consistent with the human evaluators. Indeed, some reviews posted on Amazon may have been generated by bots, and if that is the case, treating them as human-written examples may bias the discriminators. One way to verify this is to consider an alternative ``ground-truth''. 
We apply the already trained meta-discriminators to the human-annotated subset (3,600 reviews) instead of the full \textit{D-test} set, and we use the majority vote of human judges (whether a review is fake or real) as a surrogate for the ``ground-truth'' labels (whether a review is generated or sampled from Amazon). \begin{figure}[!htbp] \centering \includegraphics[width=2.5in]{figures/hbarchart_accuracy_annotatedDtest_majority_vote_labels.png} \caption{Accuracy of deep (LSTM) and shallow (SVM) meta-discriminators when tested on the \textbf{annotated subset of \textit{D-test}}, with \textit{majority votes} as ground-truth. The lower the better.} \label{fig::heatmap_annotated_Dtest_majority_vote_test_labels} \end{figure} Surprisingly, when the meta-adversarial evaluators are tested using human majority votes as ground-truth, both the accuracy numbers and the rankings of the generators are significantly different from Table~\ref{meta_discriminator_accuracy_rankings} and Table~\ref{meta_discriminator_accuracy_rankings_all} (which used natural labels as ground-truth). We note that the scores and rankings are more in line with the human evaluators. To confirm this intuition, we calculate the correlations between the meta-discriminators and the human evaluators using the annotated subset only. Replacing the natural ground-truth with human-annotated labels, the meta-discriminators become positively correlated with the human evaluators (Figure~\ref{fig::hbarchart_annotatedDtest_majority_vote_test_labels}), although BLEU still appears to be the best evaluator. These results indicate that when the ``ground-truth'' used by an automated Turing test is questionable, the decisions of the evaluators may be biased. Discriminative evaluators suffer the most from this bias, as they were directly trained using the imperfect ground-truth. Text-overlap evaluators are more robust, as they only take the most relevant parts of the test set as references (which are more likely to be of high quality). 
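The majority-vote surrogate labels described above can be sketched as follows; the per-review judgments and evaluator predictions here are hypothetical toy data, not the actual study annotations:

```python
from collections import Counter

def majority_vote(labels):
    """Collapse multiple human judgments ('real'/'fake') for one review
    into a single surrogate ground-truth label; ties default to 'real'."""
    counts = Counter(labels)
    return "fake" if counts["fake"] > counts["real"] else "real"

def accuracy(predictions, surrogate_labels):
    """Fraction of evaluator predictions agreeing with the majority vote."""
    correct = sum(p == s for p, s in zip(predictions, surrogate_labels))
    return correct / len(surrogate_labels)

# toy example: three judges per review (hypothetical data)
judgments = [["real", "fake", "real"],
             ["fake", "fake", "real"],
             ["fake", "fake", "fake"]]
surrogate = [majority_vote(j) for j in judgments]
preds = ["real", "real", "fake"]  # hypothetical meta-discriminator output
acc = accuracy(preds, surrogate)
```

Swapping the natural labels for these surrogate labels changes only the reference column; the evaluator predictions stay fixed, which is why the accuracy numbers and rankings can shift so strongly.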
Our results also suggest that when adversarial training is used, the selection of training examples must be done with caution. If the ``ground-truth'' is hijacked by low-quality or ``fake'' examples, models trained by GANs may be significantly biased. This finding is related to the recent literature on the robustness and security of machine learning models. \subsection{Role of Diversity} \label{appendix_role_of_diversity} We also assess the role diversity plays in the rankings of the generators. To this end, we measure the lexical diversity \cite{bache2013text} of the samples produced by each generator as the ratio of unique tokens to the total number of tokens. We compute in turn the lexical diversity of unigrams, bigrams and trigrams, and observe that the generators that produce the least diverse samples are easily distinguished by the meta-discriminators, while they confuse human evaluators the most. Conversely, samples produced by the most diverse generators are hardest to distinguish for the meta-discriminators, while human evaluators show higher accuracy at classifying them. As reported in \cite{kannan2017adversarial}, the lack of lexical richness can be a weakness of the generators, making them easily detected by a machine learning classifier. Meanwhile, a discriminator's preference for rarer language does not necessarily mean it is favouring higher-quality reviews. In addition to lexical diversity, Self-BLEU \cite{zhu2018texygen} is an interesting measurement of the diversity of a set of texts (the average BLEU score of each document using the same collection as reference; the lower, the more diverse). In Figure \ref{fig::self_bleu_lexical_diversity} we present Self-BLEU scores for each generator, applied to their generated text in \textit{D-test fake}. We also compute the correlation coefficients between the rankings of generators by Self-BLEU and the rankings by the evaluators (please see Figure \ref{fig::correlation_SelfBLEU_BLEUGTrain}). 
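The lexical diversity measure above can be sketched in a few lines of Python; the sample reviews below are hypothetical:

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def lexical_diversity(documents, n=1):
    """Ratio of unique n-grams to the total number of n-grams across a
    generator's samples (higher = more diverse)."""
    all_ngrams = []
    for doc in documents:
        all_ngrams.extend(ngrams(doc.split(), n))
    if not all_ngrams:
        return 0.0
    return len(set(all_ngrams)) / len(all_ngrams)

# hypothetical generator samples
samples = ["i love this book", "i love this movie", "great book great price"]
uni = lexical_diversity(samples, n=1)  # unigram diversity
bi = lexical_diversity(samples, n=2)   # bigram diversity
```

Repeating the computation for unigrams, bigrams and trigrams, as in the study, simply means calling the function with $n = 1, 2, 3$.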
Results obtained indicate that Self-BLEU presents negative correlation with human evaluators and word-overlap evaluators, and positive correlation with discriminative evaluators. This result confirms the findings in literature \cite{kannan2017adversarial} that discriminators in adversarial evaluation are capturing known limitations of the generative models such as lack of diversity. \begin{figure}[!htbp] \centering \includegraphics[width=\columnwidth]{figures/hbarchart_correlation_SelfBleu_BleuGTrain.png} \caption{Kendall $\tau$-b correlation coefficients between BLEU G-train and Self-BLEU rankings, and the three evaluation methods - human evaluators $H1$, $H2$, discriminative evaluators and word-overlap based evaluators (* denotes $p \le 0.05$). Meta-discriminators have been trained on D-train, D-valid sets and tested on the \textbf{annotated D-test set with ground-truth test labels}.} \label{fig::correlation_SelfBLEU_BLEUGTrain} \end{figure} Following this insight, an important question to answer is to what extent the generators are simply memorizing the training set \textit{G-train}. 
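A minimal sketch of this kind of n-gram overlap check is a single-reference modified n-gram precision, i.e., the core of BLEU without the brevity penalty or the geometric mean over n-gram orders; the toy corpus is hypothetical:

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Multiset of contiguous n-grams of a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def clipped_precision(candidate, reference, n=2):
    """Modified n-gram precision: fraction of candidate n-grams that also
    occur in the reference, clipped by the reference counts."""
    cand = ngram_counts(candidate.split(), n)
    ref = ngram_counts(reference.split(), n)
    if not cand:
        return 0.0
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / sum(cand.values())

def overlap_with_train(generated, train_corpus, n=2):
    """Score a generated review against its nearest neighbor in the training
    set: the maximum clipped precision over all training reviews. High
    values hint at memorization."""
    return max(clipped_precision(generated, ref, n) for ref in train_corpus)

# hypothetical training reviews and generated review
train = ["i love this book it is great", "this cable works great for the price"]
score = overlap_with_train("i love this book so much", train, n=2)
```

Averaging such scores over all generated reviews gives a memorization estimate in the spirit of the BLEU G-train numbers reported below.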
To this end, we assess the degree of n-gram overlap between the generated reviews and the training reviews using the BLEU evaluator. In Table \ref{results_BLEUGTrain} we present the average BLEU scores of generated reviews, using their nearest neighbors in \textit{G-train} as references. We observe that the generators generally do not memorize the training set, and that GAN models generate reviews that have less overlap with \textit{G-train}. In Figure \ref{fig::correlation_SelfBLEU_BLEUGTrain} we include the correlation between this divergence from the training set and the ratings by the evaluators in the study. BLEU w.r.t. \textit{G-train} is strongly positively correlated with BLEU w.r.t. \textit{D-test real}, and it is also positively correlated with the human evaluators $H1$ and $H2$. \begin{table}[t!] \centering \scalebox{0.7}{ \begin{tabular}{ l | c } \hline \hline \textbf{Generative Text Model} & \textbf{BLEU G-Train} \\ \hline Word LSTM temp 1.0 & 0.2701 \\ \hline Word LSTM temp 0.7 & 0.4998 \\ \hline Word LSTM temp 0.5 & 0.6294 \\ \hline Scheduled Sampling & 0.1707 \\ \hline Google LM & 0.0475 \\ \hline Attention Attribute to Sequence & 0.5122 \\ \hline Contexts to Sequences & 0.7542 \\ \hline Gated Contexts to Sequences & 0.6240 \\ \hline MLE SeqGAN & 0.1707 \\ \hline SeqGAN & 0.1751 \\ \hline RankGAN & 0.1525 \\ \hline LeakGAN & 0.1871 \\ \hline \hline \end{tabular}} \caption{BLEU results when evaluating the generated reviews using G-train as the reference corpus (a lower score indicates fewer n-grams in common between the training set G-train and the generated text). GAN models present low similarity with the training set.} \label{results_BLEUGTrain} \end{table} The effects of diversity are perhaps not hard to explain. In the particular task of distinguishing fake reviews from real ones, all decisions are made on individual reviews. Because a human judge was not exposed to many fake reviews generated by the same generator, whether a fake review is sufficiently different from the other generated reviews is not a major factor in their decision.
Instead, the major factor is whether the generated review looks similar to the reviews they have seen in reality. A discriminative evaluator, by contrast, makes its decision after seeing many positive and negative examples, and a fake review that fools an adversarial classifier has to be sufficiently different from all the other fake reviews the classifier has encountered; the diversity of a generator is therefore a major indicator of its ability to pass an adversarial judge. \section{Conclusion} \label{Conclusion} In summary, our findings represent a preliminary foundation for proposing more solid and robust metrics for the evaluation of natural language generation. First, we find that in the context of judging individual documents, discriminative evaluators are not as realistic as word-overlap evaluators with respect to how they correlate with a simulated Turing test (human evaluators). This implies that adversarial accuracy might not be the optimal objective for natural language generation if the goal is to generate individual documents that humans consider real. As a result, simple LSTM models or attention models may generate surprisingly competitive results. In contrast, GAN based models may more easily pass a Turing test on a bot level (when judgments are made on a system as a whole instead of on individual items), or in a conversational context. That is, when the judges have seen enough examples from the same generator, the next example had better be somewhat different. Our results also suggest that when adversarial training is used, the selection of training examples must be done with caution.
We also find that when humans are distinguishing fake reviews from real ones, they tend to focus more on the usage of words, expressions, emotions, and other details. This may affect the design of objectives for the next generation of natural language generation models. In the future we plan to carry out additional experiments that include a wider range of generative models and discriminator architectures, and, inspired by the current results, we aim to design more meaningful objectives for natural language generation. \section{Discussion} \label{Discussion} We carried out a systematic experiment that evaluates the evaluators for NLG. The results have intriguing implications for both the evaluation and the construction of natural language generators. We conduct in-depth analysis to discover possible explanations. \subsection{Granularity of Judgments} We charged the Turkers to label individual reviews as either fake or real instead of evaluating each generator as a whole. Compared to an adversarial discriminator, a human judge has not seen many ``training'' examples of \textit{fake} reviews or generators. That explains why the meta-adversarial evaluators are better at identifying fake reviews. In this context, humans are likely to judge whether a review is real based on how ``similar'' it appears to the true reviews they are used to seeing online. This finding provides interesting implications for the selection of evaluation methods for different tasks. In tasks that are set up to judge individual pieces of generated text (e.g., reviews, translations, summaries, captions, fake news) where human-written ground truth exists, it is better to use word-overlap metrics instead of adversarial evaluators.
When judgments are made on the agent/system level (e.g., whether a Twitter account is a bot), signals like how similar the agent outputs are or how much the agent memorizes the training examples may become more useful than word usage, and a discriminative evaluator may be more effective than word-overlap metrics. Our finding also implies that adversarial accuracy might not be the optimal objective for NLG if the goal is to generate documents that humans consider real. Indeed, a fake review that fools humans does not necessarily need to fool a machine that has seen everything. In Appendix \ref{appendix_granularity_of_judgments} we provide more details. \subsection{Imperfect Ground Truth} One important thing to note is that all discriminative evaluators are trained using natural labels (i.e., treating all examples from the Amazon review dataset as positive and examples generated by the candidate models as negative) instead of human-annotated labels. Some reviews posted on Amazon may have been generated by bots, and if that is the case, treating them as human-written examples may bias the discriminators. To verify this, we apply the already trained meta-discriminators to the human-annotated subset (3,600 reviews) instead of the full \textit{D-test} set, and we use the majority vote of human judges (whether a review is fake or real) as a surrogate for the natural ``ground-truth'' labels (whether a review is generated or sampled from Amazon).
\begin{figure}[!h] \centering \includegraphics[width=2.5in]{figures/hbarchart_correlation_evaluators_annotatedDtest_majority_vote_test_labels.png} \caption{Kendall $\tau$-b correlation coefficients between human evaluators and automated evaluators, tested on the \textbf{annotated subset of \textit{D-test}} with \textit{majority votes} as ground-truth ($^*$ denotes $p \le 0.05$).} \label{fig::hbarchart_annotatedDtest_majority_vote_test_labels} \end{figure} When the meta-adversarial evaluators are tested using human majority votes as ground truth, the scores and rankings of these discriminative evaluators are more in line with the human evaluators, although still not as highly correlated as BLEU; please see Figure \ref{fig::hbarchart_annotatedDtest_majority_vote_test_labels}. Indeed, discriminative evaluators suffer the most from low-quality labels, as they were directly trained using the imperfect ground truth. Word-overlap evaluators are more robust, as they only take the most relevant parts of the test set as references (more likely to be high quality). Our results also suggest that when adversarial training is used, the selection of training examples must be done with caution. If the ``ground truth'' is hijacked by low-quality or ``fake'' examples, models trained by GAN may be significantly biased. This finding is related to the recent literature on the robustness and security of machine learning models \cite{papernot2017practical}. Appendix \ref{appendix_imperfect_ground_truth} contains further details. \subsection{Role of Diversity} We assess the role diversity plays in ranking the generators. Diversity of a generator is measured by either the lexical diversity \cite{bache2013text} or Self-BLEU \cite{zhu2018texygen} of the samples produced by the generator.
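As a rough illustration of these two diversity measures, the sketch below computes a simplified unigram Self-BLEU (each sample is scored against all other samples as references; the full metric uses the complete BLEU formula with higher-order n-grams and a brevity penalty) and lexical diversity as a type-token ratio. The toy corpus is made up.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, references, n):
    """Clipped n-gram precision of a candidate against a set of references."""
    cand = Counter(ngrams(candidate, n))
    if not cand:
        return 0.0
    max_ref = Counter()
    for ref in references:
        for gram, count in Counter(ngrams(ref, n)).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())

def self_bleu(samples, n=1):
    """Average precision of each sample against all the others (lower = more diverse)."""
    scores = [modified_precision(s, samples[:i] + samples[i + 1:], n)
              for i, s in enumerate(samples)]
    return sum(scores) / len(scores)

def lexical_diversity(samples):
    """Type-token ratio over the whole pool (higher = more diverse)."""
    tokens = [t for s in samples for t in s]
    return len(set(tokens)) / len(tokens)

corpus = [s.split() for s in [
    "great book highly recommended",
    "great book highly recommended",
    "terrible plot and flat characters",
]]
print(self_bleu(corpus), lexical_diversity(corpus))
```

Note how the two measures move in opposite directions: repeated samples drive Self-BLEU up and the type-token ratio down.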
Results obtained (see Figure \ref{fig::self_bleu_lexical_diversity}) indicate that the generators producing the least diverse samples are the most easily distinguished by the meta-discriminators, while confusing humans the most. This confirms that adversarial discriminators capture known limitations of generative models, such as lack of diversity \cite{kannan2017adversarial}. \begin{figure}[!htbp] \centering \includegraphics[width=2.5in]{figures/hbarchart_selfbleu_lexicaldiversity.png} \caption{Self-BLEU scores (the lower the more diverse) and lexical diversity scores (the higher the more diverse) are highly correlated in ranking the generators.} \label{fig::self_bleu_lexical_diversity} \end{figure} Similarly, we measure to what extent the generators are memorizing the training set \textit{G-train} by computing the average BLEU scores of generated reviews using their nearest neighbors in \textit{G-train} as references. We observe that the generators do not memorize the training set, and GAN models generate reviews that have less overlap with \textit{G-train}; this finding is in line with recent theoretical studies on memorization in GANs \cite{theory19}. The effects of diversity indicate that when humans are judging individual reviews as real or fake, whether a fake review is sufficiently different from the other generated reviews is not a major factor in their decision. Instead, they tend to focus on whether the review looks similar to the reviews they have seen in reality. A discriminative evaluator is more powerful in making decisions at a system level (e.g., a dialog system or a bot account), where diversity becomes a major factor. In Appendix \ref{appendix_role_of_diversity} we provide more details. \subsection{User Study} Finally, we are interested in the reasons why human annotators label certain reviews as fake (machine-written). After annotating a batch of reviews, workers are asked to explain their decisions by filling in an optional free-text comment.
This enables us to better understand what differentiates machine-generated from human-written reviews from a human's perspective. Analyzing their comments, we identify the main reasons why human evaluators annotate a review as machine-written. These are mainly related to the presence of grammatical errors in the review text, wrong wording or inappropriate choice of expressions, redundant use of specific phrases, or contradictory arguments in the review. Interestingly, human evaluators' innate biases are also reflected in their decisions: they are likely to categorize a review as fake if it is too formal, lacks emotion and personal pronouns, or is too vague and generic. Please see Appendix \ref{appendix_user_study}. \subsection{Summary} In summary, our findings represent a preliminary foundation for proposing more solid and robust evaluation metrics and objectives for natural language generation. The low inter-rater agreement we observe suggests that judging \textit{individual} pieces of text as machine- or human-generated is a difficult task even for humans. In this context, discriminative evaluators are not as correlated with human judges as word-overlap evaluators. That implies that adversarial accuracy might not be the optimal objective for generating individual documents when realism is the main concern. In contrast, GAN based models may more easily pass a Turing test on a \textit{system} level, or in a conversational context. When the judges have seen enough examples from the same generator, the next example had better be somewhat different. Our results also suggest that when adversarial evaluation is used, the training examples must be carefully selected to avoid false positives. We also find that when humans are distinguishing fake reviews from real ones, they tend to focus more on the usage of words, expressions, emotions, and other details. This may affect the design of objectives for the next generation of NLG models.
\section*{Acknowledgement} We thank Wei Ai for his help on the power analysis, and Yue Wang and Teng Ye for helpful discussions. This work is in part supported by the National Science Foundation under grant numbers 1633370 and 1620319 and by the National Library of Medicine under grant number 2R01LM010681-05. \section{Experiment Design} \label{exp_design} We design a large-scale experiment to systematically analyze the procedures and metrics used for evaluating NLG models. To test the different \textit{evaluators}, the experiment carefully chooses a particular application context and a variety of natural language generators in this context. Ideally, a sound automated evaluator should be able to distinguish good generators from suboptimal ones. Its preferences (on ordering the generators) should be consistent with those of humans in the exact application context. \subsection{Experiment Context and Procedure} We design the experiment in the context of generating online product reviews. There are several reasons why review generation is a desirable task for the experiment: 1) online product reviews are widely available, and it is easy to collect a large number of examples for training/testing the generators; 2) Internet users are used to reading online reviews, and it is easy to recruit capable human judges to assess the quality of reviews; and 3) compared to tasks like image caption generation or dialogue systems, review generation has minimal dependency on the conversation context or on non-textual data, which reduces possible confounds. \begin{figure}[!htbp] \centering \includegraphics[width=3in]{figures/Flowchart.png} \caption{Overview of the Experiment Procedure.} \label{dataset_split} \end{figure} The general experiment procedure is presented in Figure \ref{dataset_split}.
We start from the publicly available Amazon Product Reviews dataset\footnote{\url{http://jmcauley.ucsd.edu/data/amazon/}} and select the three most popular domains: \textit{books}, \textit{electronics}, and \textit{movies}. After filtering rare products, inactive users, and overly long reviews, the dataset is randomly split into three parts, to train, to validate, and to test the candidate review generators (denoted as \textit{G-train}, \textit{G-valid}, and \textit{G-test}). Every generative model is trained and validated using the same datasets, and then charged to generate a number of product reviews (details are included in the next section). These generated reviews, mixed with the real reviews in \textit{G-test}, are randomly split into three new subsets for training, validating, and testing candidate (discriminative) evaluators, denoted as \textit{D-train}, \textit{D-valid}, and \textit{D-test}. Finally, a random sample of reviews from \textit{D-test} is sent for human evaluation. \subsection{Review Generators} Although our goal is to evaluate the evaluators, it is critical to include a wide range of text generators with various degrees of quality. A good evaluator should be able to distinguish the high-quality generators from the low-quality ones. We select a diverse set of generative models from recent literature. The goal of this study is \textit{not} to name the best generative model, and it is infeasible to include all existing models. Our criteria are: (1) the models were published before 2018, when our experiment was conducted; (2) the models represent different learning strategies and quality levels; (3) the models have publicly available implementations, for reproducibility purposes. In Table \ref{table:generators} we list the candidate generators. It is not an exhaustive list of the models currently available. For implementation details of these models please see Appendix \ref{generators_implementation_details}.
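The two-stage splitting procedure described above can be sketched as follows; the split ratios and data sizes are illustrative, not the ones used in the experiment.

```python
import random

def split(items, fractions, seed=0):
    """Shuffle items deterministically, then cut them by the given fractions."""
    items = list(items)
    random.Random(seed).shuffle(items)
    parts, start = [], 0
    for frac in fractions[:-1]:
        end = start + round(len(items) * frac)
        parts.append(items[start:end])
        start = end
    parts.append(items[start:])  # last part takes the remainder
    return parts

reviews = [f"review_{i}" for i in range(1000)]
# Stage 1: train/validate/test splits for the generators.
g_train, g_valid, g_test = split(reviews, [0.7, 0.1, 0.2])
# Stage 2: mix the held-out real reviews with generated ones, then
# split again to train/validate/test the discriminative evaluators.
generated = [f"fake_{i}" for i in range(200)]
d_train, d_valid, d_test = split(g_test + generated, [0.7, 0.1, 0.2], seed=1)
print(len(g_train), len(d_train), len(d_test))
```

Keeping the generator splits and the discriminator splits disjoint in this way ensures the evaluators are never tested on text the generators saw during training.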
\begin{table}[h] \caption{Candidate models for review generation.} \centering \scalebox{0.65}{ \begin{tabular}{ l | c} \hline \hline Generative Model & Adversarial \\ & Framework \\ \hline \hline Word LSTM temp 1.0 \cite{hochreiter1997long} & No \\ Word LSTM temp 0.7 \cite{hochreiter1997long} & No \\ Word LSTM temp 0.5 \cite{hochreiter1997long} & No \\ Scheduled Sampling \cite{bengio2015scheduled} & No \\ Google LM \cite{jozefowicz2016exploring} & No \\ Attention Attribute to Sequence* \cite{dong2017learning} & No \\ Contexts to Sequences* \cite{tang2016context} & No \\ Gated Contexts to Sequences* \cite{tang2016context} & No \\ MLE SeqGAN \cite{yu2017seqgan} & Yes \\ SeqGAN \cite{yu2017seqgan} & Yes \\ RankGAN \cite{lin2017adversarial} & Yes \\ LeakGAN \cite{guo2017long} & Yes \\ \hline \hline \end{tabular}} \scriptsize * indicates that review generation using these models is conditional on context information such as product IDs; other models are context independent. \label{table:generators} \end{table} Every generator (except Google LM) is trained and validated on the \textit{G-train} and \textit{G-valid} datasets, and used to generate the same number of machine-generated (a.k.a., fake) reviews (see Table \ref{discriminator_student_dataset}). We follow best practices in the literature to train these models, although it is possible that the performance of some models might not be optimal due to various constraints. This will not affect the validity of the experiment, as our goal is to evaluate the \textbf{evaluators} instead of the individual generators. Google LM was not trained on reviews, but it provides a sanity check for the experiment - a reasonable evaluator should not rank it higher than those trained for generating reviews.
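The three Word LSTM variants in the table differ only in the softmax temperature applied at sampling time: lower temperatures concentrate probability mass on the highest-scoring tokens, trading diversity for safer, more repetitive output. A minimal sampling sketch (the logits are made up):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1  # guard against floating-point rounding

logits = [2.0, 1.0, 0.1]  # hypothetical next-token scores
counts = [0, 0, 0]
rng = random.Random(0)
for _ in range(1000):
    counts[sample_with_temperature(logits, temperature=0.5, rng=rng)] += 1
print(counts)  # the top-scoring token dominates at low temperature
```

At temperature 1.0 the model samples from its raw distribution; as the temperature drops toward 0, sampling approaches greedy argmax decoding.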
\begin{table}[!htbp] \caption{Number of generated reviews by each model.} \centering \scalebox{0.6}{ \begin{tabular}{ l | c | r | r | r } \hline \textbf{Generative Model} & \textbf{Total} & \textbf{D-Train} & \textbf{D-Valid} & \textbf{D-Test} \\ \hline \hline $\forall$ model in Table \ref{table:generators} except Google LM & 32,500 & 22,750 & 3,250 & 6,500 \\ Google LM & 6,680 & 4,676 & 668 & 1,336 \\ \hline \hline \end{tabular}} \label{discriminator_student_dataset} \end{table} \subsection{Evaluators} \label{eval_methods} We include a comprehensive set of evaluators for the quality of the aforementioned generators: \textit{i)} human evaluators, \textit{ii)} discriminative evaluators, and \textit{iii)} text overlap evaluators. The evaluators are the main subjects of the experiment. \subsubsection{Human evaluators} We conduct a careful power analysis \cite{christensen2007methodology}, which suggests that at least 111 examples per generative model should be human-annotated to infer whether the machine-generated reviews are comparable in quality to human-written reviews, at a significance level of 0.05. Per this calculation, we sample 150 examples for each of the 12 generators for human evaluation. This totals 1,800 machine-generated reviews, to which we add 1,800 human-written reviews, for a total of 3,600 product reviews sent for human annotation. We mark up out-of-vocabulary words in \textit{both} human-written and machine-generated reviews to control for confounds of using certain rare words. There is no significant difference in the proportion of markup tokens between the two classes (2.5\% real vs. 2.2\% fake). We recruit 900 human annotators through the Amazon Mechanical Turk (AMT) platform. Each annotator is presented with 20 reviews, a mixture of 10 real (i.e., human written) and 10 fake (i.e., machine generated), and they are charged to label each review as real or fake based on their own judgment.
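A power analysis of this kind can be approximated with the standard normal-approximation sample-size formula for comparing two proportions. The sketch below is illustrative only: the assumed proportions are made up and the study's actual analysis \cite{christensen2007methodology} is not reproduced here.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Sample size per group for a two-sided two-proportion z-test
    (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g., detecting a drop in the "judged real" rate from 50% to 35%
# at alpha = 0.05 with 80% power (hypothetical proportions)
print(n_per_group(0.50, 0.35))
```

Smaller expected differences between the two proportions require substantially more annotated examples per generator, which is why the required sample size grows quickly as generators approach human quality.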
Clear instructions are presented to the workers that markup tokens are present in both classes and cannot be used to decide whether a review is real or fake. Each page is annotated by 5 distinct human evaluators. The 5 judgments on every review are used to assemble two distinct \textbf{human evaluators}: \textit{H1} - \textbf{individual votes}, treating all human annotations independently, and \textit{H2} - \textbf{majority votes} of the 5 human judgments. For every \textit{annotated} review, the human evaluator ($H1$ or $H2$) makes a call which can be either right or wrong with regard to the ground truth. A generator is high quality if the human evaluator achieves low accuracy in identifying its reviews as fake. \subsubsection{Discriminative evaluators} The inclusion of multiple generators provides the opportunity to create \textbf{meta-adversarial evaluators}, trained using a \textit{pool} of reviews generated by \textit{many} generators, mixed with a larger number of ``real'' reviews (the \textit{D-train} and \textit{D-valid} datasets). Such a ``pooling'' strategy is similar to the standard practice used by the TREC conferences to evaluate different information retrieval systems \cite{harman2006trec}. Compared to individual adversarial evaluators, a meta-evaluator is supposed to be more robust and fair, and it can be applied to evaluate new generators without being retrained. In our experiment, we find that the meta-adversarial evaluators rank the generators in similar orders to the best individual adversarial evaluators.
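The assembly of the two human evaluators $H1$ and $H2$ described above can be sketched as follows; the judgment labels are hypothetical.

```python
from collections import Counter

def h1_accuracy(judgments, truth):
    """H1: every individual vote counts as an independent judgment."""
    votes = [(v, t) for labels, t in zip(judgments, truth) for v in labels]
    return sum(v == t for v, t in votes) / len(votes)

def h2_accuracy(judgments, truth):
    """H2: one majority-vote judgment per review (5 annotators each)."""
    majority = [Counter(labels).most_common(1)[0][0] for labels in judgments]
    return sum(m == t for m, t in zip(majority, truth)) / len(truth)

# 5 annotator labels per review ("real"/"fake"), plus ground truth
judgments = [["fake", "fake", "real", "fake", "fake"],
             ["real", "real", "real", "fake", "real"]]
truth = ["fake", "real"]
print(h1_accuracy(judgments, truth), h2_accuracy(judgments, truth))
```

Because majority voting averages out individual annotator noise, $H2$ typically achieves higher accuracy than $H1$ on the same raw judgments.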
We employ a total of 7 meta-adversarial evaluators: 3 deep (one using an LSTM \cite{hochreiter1997long}, one using a Convolutional Neural Network (CNN) \cite{lecun1998gradient}, and one combining LSTM and CNN architectures) and 4 shallow, based on Naive Bayes (NB) \cite{rish2001empirical}, Random Forest (RF) \cite{liaw2002classification}, Support Vector Machines (SVM) \cite{cortes1995support}, and XGBoost \cite{chen2016xgboost}, with unigrams, bigrams, and trigrams as features, trained on balanced training sets. We find the best hyper-parameters using random search and prevent the models from overfitting by using early stopping. For every review in \textit{D-test} (either annotated or not), a meta-adversarial evaluator makes a judgment call. A generator is considered high quality if the meta-adversarial evaluator makes more mistakes on the reviews it generated. \subsubsection{Word-overlap evaluators} We include a set of 4 text-overlap metrics used for NLG evaluation: BLEU and METEOR (specific to machine translation), ROUGE (used in text summarization), and CIDEr \cite{vedantam2015cider} (used in image description evaluation). These metrics rely on matching $n$-grams in the target text (i.e., generated reviews) to the ``references'' (i.e., human-written reviews). The higher the overlap (similarity), the higher the quality of generated text. For every generated review in \textit{D-test Fake}, we assemble the set of references by retrieving the top-$10$ most similar human-written reviews in \textit{D-test Real} using a simple vector space model. We compute 600-dimensional vector representations of reviews using Sent2Vec \cite{pagliardini2018unsupervised}, pretrained on English Wikipedia, and retrieve the top-k nearest neighbors for each review based on cosine similarity of the embedding vectors.
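The reference-retrieval step amounts to a cosine-similarity nearest-neighbor search over sentence embeddings. A minimal sketch, with toy 3-dimensional vectors standing in for the 600-dimensional Sent2Vec embeddings:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def top_k_neighbors(query, candidates, k=10):
    """Return indices of the k candidates most similar to the query."""
    sims = [(cosine(query, c), i) for i, c in enumerate(candidates)]
    sims.sort(reverse=True)
    return [i for _, i in sims[:k]]

fake_review_vec = [0.9, 0.1, 0.0]        # embedding of a generated review
real_review_vecs = [[1.0, 0.0, 0.0],     # embeddings of real reviews
                    [0.0, 1.0, 0.0],
                    [0.7, 0.7, 0.0]]
print(top_k_neighbors(fake_review_vec, real_review_vecs, k=2))  # [0, 2]
```

The retrieved indices select the real reviews that then serve as the reference set for BLEU, ROUGE, METEOR, and CIDEr.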
The rationale for using the nearest neighbors of each generated review as references is that, to appear ``real'', a generated review only needs to be similar to \textit{some} real reviews rather than to \textit{all} of them. A generator is considered high quality if its generated reviews obtain a high average score from a text-overlap evaluator. In total, we analyze and compare 13 candidate evaluators (2 human evaluators, 7 discriminative evaluators, and 4 text-overlap metrics), based on the \textit{D-test} dataset. \section{Introduction} \label{sec:introduction} Recent developments in neural language models \cite{mikolov2012context}, \cite{reiter2009investigation}, \cite{mikolov2011rnnlm}, \cite{mikolov2011extensions} have inspired the use of neural network based architectures for the task of natural language generation (NLG). Despite the fast development of algorithms, there is an urgent need to fill the huge gap in how NLG systems are evaluated. On the one hand, a rigorous, efficient, and reproducible evaluation procedure is critical for the development of any machine learning technology and for correct interpretation of the state-of-the-art. On the other hand, evaluating the quality of language generation is inherently difficult due to the special properties of text, such as \textit{subjectivity} and \textit{non-compositionality}. Indeed, \textit{``there is no agreed objective criterion for comparing the goodness of texts''} \cite{dale1998towards}, and a clear model of text quality is lacking \cite{hardcastle2008can}. Conventionally, most NLG systems have been evaluated in a rather informal manner. \cite{reiter2009investigation} divide existing evaluation methods commonly employed in text generation into three categories: \textit{i)} evaluations based on task performance, \textit{ii)} human judgments and ratings, where human subjects are recruited to rate different dimensions of the generated texts, and \textit{iii)} evaluations based on comparison to a reference corpus using automated metrics.
\textit{Task based evaluation} considers that the value of a piece of functional text lies in how well it serves the user in fulfilling a specific application. It can be expensive, time-consuming, and often dependent on the goodwill of participants in the study. Moreover, it is hard to tease apart the general quality of text generation from the particular context (and confounds) of the task, or to generalize the evaluation conclusions across tasks. \textit{Human annotation} is able to assess the quality of text more directly than task-based evaluation. However, rigorously evaluating NLG systems with real users can be expensive and time-consuming, and it does not scale well \cite{reiter2001using}. Human assessments also need to be consistent and repeatable for a meaningful evaluation \cite{lopez2012putting}. Alternative strategies that are more effective in terms of cost and time are therefore used more frequently. \textit{Automated evaluation} compares texts generated by the candidate algorithms to human-written texts. Word-overlap metrics and more recent automated adversarial evaluators are widely employed in NLG as they are cheap, quick, repeatable, and do not require human subjects when a reference corpus is already available. In addition, they allow developers to make rapid changes to their systems and automatically tune parameters without human intervention. Despite these benefits, however, the use of automated metrics in the field of NLG is controversial \cite{reiter2009investigation}, and their results are often criticized as not meaningful for a number of reasons. First, these automatic evaluations rely on a high-quality corpus of references, which is often unavailable. Second, comparisons with a reference corpus do not assess the usefulness and the impact of the generated text on the readers as human-based evaluations do.
Third, creating human-written reference texts specifically for the purpose of evaluation can still be expensive, especially if these reference texts need to be created by skilled domain experts. Finally and most importantly, using automated evaluation metrics is sensible only if they correlate with the results of human-based evaluations and if they are accurate predictors of text quality, which is never formally verified at scale. We present a large-scale, systematic experiment that evaluates the \textit{evaluators} for NLG. We compare three types of evaluators, including human evaluators, automated adversarial evaluators trained to distinguish human-written from machine-generated product reviews, and word-overlap metrics (such as BLEU and ROUGE), in a particular scenario: generating online product reviews. The preferences of different evaluators on a dozen representative deep-learning-based NLG algorithms are compared with human assessments of the quality of the generated reviews. Our findings reveal significant differences among the evaluators and shed light on the potential factors that contribute to these differences. The analysis of a post-experiment survey also provides important implications for guiding the development of new NLG algorithms. \section{Related Work} \label{sec:relatedwork} \subsection{Deep Learning Based NLG} \label{e2e} Recently, a considerable number of deep-learning-based models have been proposed for text generation. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) \cite{hochreiter1997long} models, Google LM \cite{jozefowicz2016exploring}, and Scheduled Sampling (SS) \cite{bengio2015scheduled}, are widely used for generating textual data. Generative Adversarial Networks \cite{goodfellow2014generative}, or GANs, train generative models through an adversarial process. Generating text with GANs is challenging due to the discrete nature of text data.
SeqGAN \cite{yu2017seqgan} is one of the earliest GAN-based models for sequence generation; it treats the procedure as a sequential decision-making process. RankGAN \cite{lin2017adversarial} proposes a framework that addresses the quality of a set of generated sequences collectively. Many GAN-based models \cite{yu2017seqgan}, \cite{lin2017adversarial}, \cite{rajeswar2017adversarial}, \cite{che2017maximum}, \cite{li2017adversarial}, \cite{zhang2017adversarial} are only capable of generating short texts. LeakGAN \cite{guo2017long} is proposed for generating longer texts. Deep learning architectures other than LSTM or GAN have also been proposed for text generation. \cite{tang2016context} study NLG given particular contexts or situations and propose two approaches based on the encoder-decoder framework. \cite{dong2017learning} address the same task and employ an additional soft attention mechanism. Pre-training enables better generalization in deep neural networks \cite{erhan2010does}, especially when combined with supervised discriminative fine-tuning to learn universal, robust representations \cite{radford2018improving}, \cite{devlin2018bert}, \cite{radford2019language}. \cite{guu2018generating} use a prototype-then-edit generative language model for sentences. \subsection{Automated Evaluation Metrics} \label{automatic_eval} These NLG models are in turn evaluated with a variety of approaches. Arguably, the most natural way to evaluate the quality of a generator is to involve humans as judges, either through some type of Turing test \cite{machinery1950computing} that asks them to distinguish generated text from human-written text, or by directly comparing the texts generated by different generators \cite{mellish1998evaluation}. Such approaches are hard to scale and have to be redesigned whenever a new generator is included.
Practically, it is critical to find automated metrics to evaluate the quality of a generator independently of human judges or of an exhaustive set of competing generators. \textbf{Perplexity} \cite{jelinek1977perplexity} is commonly used to evaluate the quality of a language model; it has also been employed to evaluate generators \cite{yarats2017hierarchical}, \cite{ficler2017controlling}, \cite{gerz2018language}, even though it is commonly criticized for not being a direct measure of the quality of the generated text \cite{fedus2018maskgan}. Perplexity is a model-dependent metric, and ``how likely a sentence is generated by a given model'' is not comparable across different models. Therefore, we do not include perplexity in this study. \textbf{Discriminative Evaluation} is an alternative way to evaluate a generator: it measures how well its generated text can fool a classifier trained to distinguish generated text from human-written texts. In a way, this is an automated approximation of the Turing test, in which machine judges replace human judges. Discriminative machine judges can be trained either using a data set with explicit labels \cite{ott2011finding}, or using a mixture of texts written by real humans and texts generated by the model being evaluated. The latter is usually referred to as \textit{adversarial evaluation}. \cite{bowman2015generating} present one of the earliest studies that use adversarial error to assess the quality of generated sentences. Notably, maximizing the adversarial error is consistent with the objective of the generator in generative adversarial networks. \cite{kannan2017adversarial} propose an adversarial loss to discriminate a dialogue model's output from human output. The discriminator prefers longer outputs and rarer language over the common responses generated. There is, however, a lack of evidence that a model that obtains a lower adversarial loss is better according to human evaluations.
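For concreteness, the perplexity metric discussed above is simply the exponential of the negative mean token log-likelihood under the model; a minimal, model-agnostic sketch:

```python
import math

def perplexity(token_log_probs):
    """Perplexity of a sequence, given the natural-log probability a
    language model assigns to each token: the exponential of the
    negative mean log-likelihood. Lower is better under that model."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))
```

The formula also makes the model dependence plain: the scores are only comparable when the log-probabilities come from the same model.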
Automatic dialogue evaluation is formulated as a learning problem in \cite{lowe2017towards}, who train an RNN to predict the scores a human would assign to dialogue responses. The RNN predictions correlate with human judgments at the utterance and system levels; however, each response is evaluated in a very specific context, and the system requires substantial human judgments for training. \cite{li2017adversarial} employ a discriminator (analogous to the human evaluator in the Turing test) both in training and testing and define adversarial success. Other work finds the performance of a discriminative agent (e.g., an attention-based bidirectional LSTM binary classifier) comparable with that of human judges at distinguishing between real and fake dialogue excerpts \cite{bruni2017adversarial}. However, the results show that there is limited consensus among humans on what counts as a coherent dialogue passage. \textbf{Word Overlap Metrics}, such as BLEU \cite{papineni2002bleu}, ROUGE \cite{lin2004rouge}, and METEOR \cite{banerjee2005meteor}, are commonly used to measure the similarity between the generated text and human-written references. \cite{liu2016not} find that word-overlap metrics show weak or no correlation with human judgments in non-task-oriented dialogue systems and thus should be used with caution or in combination with user studies. By contrast, it is reported in \cite{sharma2017relevance} that text-overlap metrics are indicative of human judgments in task-oriented dialogue settings, when used on datasets that contain multiple ground-truth references. \cite{dai2017towards} find text-overlap metrics too restrictive, as they focus on fidelity of wording instead of fidelity of semantics. \cite{callison2006re} consider an increase in BLEU insufficient evidence of an actual improvement in the quality of a system and argue in favor of human evaluation. BLEU and its variants (e.g., Self-BLEU) are used to evaluate GAN models \cite{caccia2018language, zhu2018texygen}.
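To make the word-overlap family concrete, the following is a toy BLEU-style score: clipped $n$-gram precision against multiple references, combined by a geometric mean with a brevity penalty. It is an illustration only; real evaluations should use a standard implementation, and the simplified brevity penalty here uses the shortest reference rather than the closest-length one.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def simple_bleu(candidate, references, max_n=2):
    """Toy BLEU: geometric mean of clipped n-gram precisions times a
    brevity penalty. `candidate` is a token list; `references` a list
    of token lists."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand.items())
        total = max(1, sum(cand.values()))
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0
    log_p = sum(math.log(p) for p in precisions) / max_n
    ref_len = min(len(r) for r in references)  # simplified brevity reference
    bp = 1.0 if len(candidate) >= ref_len else math.exp(1 - ref_len / len(candidate))
    return bp * math.exp(log_p)
```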
\cite{shi2018towards} compare frameworks for text generation, including MLE, SeqGAN, LeakGAN, and Inverse Reinforcement Learning, using a simulated Turing test. A benchmarking experiment with GAN models is conducted in \cite{lu2018neural}; the results show that LeakGAN achieves the highest BLEU scores on the test data. Similarly, BLEU and METEOR show the highest correlations with human judgments \cite{callison2008further}, \cite{graham2014re}. However, evaluation metrics are not robust across conditions, and no single metric consistently outperforms the others across all correlation levels \cite{DBLP:journals/mt/PrzybockiPBS09}. Conventional neural language models trained with maximum likelihood can be on par with or better than GANs \cite{caccia2018language}, \cite{semeniuta2018accurate}, \cite{tevet2018evaluating}. However, log-likelihood is often computationally intractable \cite{theis2015note}. Models with good likelihood can produce bad samples, and vice versa \cite{goodfellow2016nips}. Generative models should be evaluated with regard to the task they are intended for, over the full quality-diversity spectrum \cite{cifka2018eval}, \cite{hashimoto2019unifying}, \cite{montahaei2019jointly}. While many generators have been proposed and evaluated with various metrics, no existing work has systematically evaluated the different evaluators at scale, especially in the context of online review generation. Our work fills this gap. \section{Results} \label{sec:results} First, we are interested in the accuracy of individual evaluators -- how well they can distinguish ``fake'' (machine-generated) reviews from ``real'' (human-written) reviews. Second, we are interested in how an evaluator assesses the quality of the 12 generators rather than individual reviews. The absolute scores an evaluator gives to the generators are not as informative as how it ranks them: a good evaluator should be able to rank good generators above bad generators. Last but not least, we are interested in how the rankings by different evaluators correlate with each other. Intuitively, an automated evaluator that ranks the generators in a similar order to the human evaluators is more reasonable and can potentially be used as a surrogate for humans. \subsection{Results of Individual Evaluators} \subsubsection{Human evaluators} Every review is annotated by 5 human judges as either ``fake'' or ``real.'' The inter-annotator agreement (Fleiss' kappa \cite{fleiss2013statistical}) is $\kappa=0.2748$.
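Fleiss' kappa can be computed from a matrix of per-item category counts (items in rows, categories in columns, each row summing to the number of raters); a minimal sketch on invented toy data, not our annotation matrix:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a list of per-item category counts; every item
    must be rated by the same number of raters."""
    n_items = len(counts)
    r = sum(counts[0])          # raters per item
    n_cats = len(counts[0])
    total = n_items * r
    # Marginal proportion of each category over all ratings.
    p = [sum(row[j] for row in counts) / total for j in range(n_cats)]
    # Mean per-item agreement, then chance agreement.
    P_bar = sum((sum(c * c for c in row) - r) / (r * (r - 1))
                for row in counts) / n_items
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)
```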
This suggests that \textit{distinguishing machine-generated reviews from real ones is in general a hard task even for humans}; there is limited consensus on what counts as a realistic review. The low agreement also implies that any automated evaluator that mimics human judges is not necessarily the most ``accurate.'' \begin{figure}[!h] \includegraphics[width=\linewidth, height = 2in]{figures/human_accuracy.png} \small \caption{Accuracy of human evaluators on individual reviews: \textit{H1} - individual votes; \textit{H2} - majority votes. } \label{fig::human_eval} \end{figure} In Figure \ref{fig::human_eval} we present the accuracy of the two human evaluators on individual annotated reviews, based on either all 5 annotations or their majority votes for each review. Compared to the ground truth (whether a review is machine-generated or collected from Amazon), individual human decisions are 66.61\% accurate, while their majority votes reach 72.63\%. Neither is close to perfect. \textit{We observe that human evaluators generally do better at correctly labeling human-written reviews as real (true positive rate of 78.96\% for $H1$ and 88.31\% for $H2$), and they are confused by machine-generated reviews in close to half of the cases (true negative rate of 54.26\% for $H1$ and 56.95\% for $H2$)}. This trend confirms previous observations \cite{tang2016context}. We then look at how the human evaluators rank the 12 generators, according to their accuracy on all (fake) reviews generated by each generator. The lower the accuracy, the more likely the human evaluator is confused by the generated reviews, and thus the better the generator. We observe substantial variance in the accuracy of both human evaluators across generators, which suggests that human evaluators are able to distinguish between generators. The generator ranked the highest by both human evaluators is \textit{Gated Contexts to Sequences}.
Google LM is ranked on the lower side, which makes sense, as the model is not trained to generate reviews. Interestingly, humans tend not to be fooled by reviews generated by the GAN-based models (MLE SeqGAN, SeqGAN, RankGAN, and LeakGAN), even though their very objective is to make fake reviews indistinguishable from real ones. GAN-generated reviews tend to be easily distinguishable from real reviews by human judges. \subsubsection{Discriminative evaluators} We then analyze the 7 meta-adversarial evaluators. Unlike the human evaluators, which are applied only to the 3,600 annotated reviews, the discriminative evaluators are applied to \textit{all} reviews in \textit{D-test}.
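To make the shallow evaluators concrete, the sketch below is a tiny from-scratch multinomial Naive Bayes detector over word $n$-grams (up to trigrams, as in our feature set). It is an illustration only: the actual evaluators use standard library implementations, tuned hyper-parameters, and far larger balanced training sets, and the toy corpus here is invented.

```python
import math
from collections import Counter

def ngram_feats(text, max_n=3):
    """Word n-gram counts (unigrams through max_n-grams) for one review."""
    toks = text.lower().split()
    feats = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(toks) - n + 1):
            feats[tuple(toks[i:i + n])] += 1
    return feats

class NaiveBayesDetector:
    """Minimal multinomial Naive Bayes with add-one smoothing.
    Labels: 1 = machine-generated, 0 = human-written."""
    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.priors = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for t, y in zip(texts, labels):
            self.counts[y].update(ngram_feats(t))
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, text):
        feats = ngram_feats(text)
        V = len(self.vocab)
        best, best_lp = None, -math.inf
        for c in self.classes:
            total = sum(self.counts[c].values())
            lp = math.log(self.priors[c])
            for g, f in feats.items():
                # Laplace-smoothed per-n-gram log-likelihood.
                lp += f * math.log((self.counts[c][g] + 1) / (total + V))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

A review is assigned the class whose smoothed $n$-gram likelihood is higher, mirroring how the shallow discriminators vote "fake" or "real" on each review.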
\textbf{Meta-adversarial Evaluators.} On individual reviews, the three deep-learning-based evaluators and the SVM-based evaluator achieve higher accuracy than the two human evaluators, indicating that adversarial evaluators can distinguish a single machine-generated review from a human-written one better than humans can (Figure \ref{fig::H1_H2_LSTM_SVM} and Table \ref{meta_discriminator_accuracy_rankings_all} in Appendix \ref{appendix_discriminative_evaluators_results}). Their true positive rates and true negative rates are more balanced than those of the human evaluators. Meta-discriminators commonly rank GAN-based generators the highest. This makes sense, as the GAN objective is consistent with the (reversed) evaluator accuracy. Interestingly, simply setting the temperature parameter of Word LSTM to 1.0 yields performance comparable to the GANs. \begin{figure}[!htbp] \centering \includegraphics[width=\columnwidth]{figures/H1_H2_LSTM_SVM_accuracy.png} \caption{Accuracy of human (H1, H2) and meta-adversarial evaluators (LSTM, SVM) on reviews generated by individual generators. \textbf{The lower the accuracy, the better the generator.} } \label{fig::H1_H2_LSTM_SVM} \end{figure} \subsubsection{Word-Overlap Evaluators} The generators are ranked based on the average scores of their generated reviews. In Figure \ref{word_overlap_eval} we present the average scores of the 12 generators by each evaluator. Different word-overlap evaluators also tend to rank the generators in similar orders. Interestingly, the top-ranked generator according to three evaluators is \textit{Contexts to Sequences}, while CIDEr scores the \textit{Gated Contexts to Sequences} model highest. GAN-based generators are generally ranked low; please also see Appendix \ref{appendix_text_overlap_evaluators_results}. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{figures/word_overlap_accuracy_h2_compressed_version.png} \caption{Text-Overlap Evaluators (BLEU and CIDEr) scores for individual generators.
\textbf{The higher the better.} The rankings are overall similar, and GAN-based generators are ranked low.} \label{word_overlap_eval} \end{figure} \subsection{Comparing Evaluators} To what degree do the evaluators agree on the ranking of generators? We are most interested in how the automated evaluators compare to the human evaluators, and whether there is any suitable automated surrogate for human judges at all. To this end, we compute the correlations between $H1$, $H2$, and each discriminative evaluator, and between $H1$, $H2$, and the text-overlap evaluators, based on their decisions on individual reviews, their scores for the generators (by Pearson's coefficient \cite{fieller1957tests}), and their rankings of the generators (by Spearman's $\rho$ \cite{spearman1904proof} and Kendall's $\tau$ \cite{daniel1978applied}). The patterns of the three correlation metrics are similar; please see Figure~\ref{fig::barchart_correlation_evaluators} and Table \ref{table_correlation_results} in Appendix \ref{appendix_comparing_evaluators}. \begin{figure}[!h] \centering \includegraphics[width=2.5in]{figures/hbarchart_correlation_evaluators.png} \caption{Kendall $\tau$-b between human and automated evaluators. The human evaluators' rankings are positively correlated with the text-overlap evaluators and negatively correlated with the adversarial evaluators ($^*$: $p\le 0.05$).} \label{fig::barchart_correlation_evaluators} \end{figure} Surprisingly, none of the discriminative evaluators has a positive correlation with the human evaluators. That is, \textit{generators that fool machine judges easily are less likely to confuse human judges, and vice versa}. \textit{Word-overlap evaluators tend to have a positive correlation with the human evaluators in ranking the generators}. Among them, BLEU appears to be the closest to humans. This pattern is consistent across all three types of correlations.
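As a sketch of the rank-correlation computation, a plain Kendall $\tau$ over two score lists is shown below; it omits the tie correction, whereas the reported coefficients use the tie-adjusted $\tau$-b (available as \texttt{scipy.stats.kendalltau}).

```python
def kendall_tau(a, b):
    """Kendall rank correlation between two equal-length score lists:
    (concordant - discordant) pairs over all pairs. No tie correction."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```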
These two observations are intriguing: they indicate that when identifying fake reviews, humans may focus more on word usage than on mentally constructing a ``decision boundary.'' In summary, we find that 1) human evaluators cannot distinguish machine-generated reviews from real reviews perfectly, and they show a significant bias between the two classes; 2) meta-adversarial evaluators can better distinguish individual fake reviews, but their rankings at the generator level tend to be negatively correlated with those of the human evaluators; and 3) text-overlap evaluators are highly correlated with the human evaluators in ranking generators.
{ "redpajama_set_name": "RedPajamaArXiv" }
3,122
Cheshire coach Bill Mrowka holds up the championship plaque as the Rams celebrate winning the CIAC Class LL championship 1-0 over Ridgefield on Saturday at Palmer Field in Middletown. The Cheshire baseball players gathered around the championship plaque and pointed their fingers skyward letting everyone know they are No. 1. The Rams are not only the newly minted CIAC Class LL state champions, they also close the season as the No. 1 ranked team in the GametimeCT poll. There should be no debate. The No. 4 seed completed a run through the LL tournament by defeating Ridgefield 1-0 in the championship game Saturday afternoon at Palmer Field in Middletown. Winning pitcher Ben DeLaubell tossed a complete game and also drove in the winning run on a groundout to second in the top of the seventh inning. To reach the final, the Rams had to get through defending champions No. 5 Staples (4-3) in the quarterfinals and the No.1 team in the state entering the tournament, Fairfield Prep (5-4), in the semifinals. Cheshire closes the season 20-4 and with the first state championship for the school since 1993. Rams coach Bill Mrowka thinks his team has earned the No. 1 ranking. Once Windsor (22-3) lost to Wethesfield in the Class L championship Friday, it was clear to most the Cheshire-Ridgefield winner would likely be the state's top team. That was further confirmed when Wolcott (21-4) lost to Seymour 13-2 in the Class M final, with the thought being some No. 1 votes may have gone the way of the Eagles.
{ "redpajama_set_name": "RedPajamaC4" }
5,277
{"url":"http:\/\/tex.stackexchange.com\/questions\/41924\/as-an-expert-can-you-always-use-tex-for-nearly-any-kind-of-document\/41932","text":"# As an expert, can you always use TeX for (nearly) any kind of document?\n\nI am just beginning to learn TeX. Coming from web-development, I find separating content from style very sensible, and I like the logic behind it. To me it looks that having control over formatting is very convenient and it makes TeX superior to other Word processing programs like OpenOffice or Word, even when making 'simple' documents. As I am just a beginner, I am wondering what the experts do when having to write a simple letter, or any small document that really does not require any advanced typesetting. Do you fall back to a Word processing program, or do you stick with TeX, even though it might implicate a bit more hassle to get your final document?\n\nIn other words, do you find TeX can replace any type of document creation in everyday use, or is it overkill for simple documents?\n\n-\nSince you qualify your question with the phrase \"(nearly) any kind of document\", one would have to mention that TeX and friends aren't widely used for music (i.e., musical note notation). While there are TeX formats called musictex and musixtex, my personal impression is that other open-source packages are in far greater use. That said, I wholeheartedly concur that TeX is a fabulous tool for nearly all other types of documents, including documents written with non-Latin letters. \u2013\u00a0 Mico Jan 22 '12 at 21:33\n@Mico - Lilypond is very similar to TeX and I'd happily use that for typesetting music. \u2013\u00a0 CJBrew Jan 23 '12 at 9:37\n@CJBrew: Agreed. The fact that Lilypond was written (mainly) by people who were very well versed in musixtex makes the point: TeX is not the method of choice for typesetting music. 
\u2013\u00a0 Mico Jan 23 '12 at 9:48\nIn addition to all the \"No!\"'s, I'll just add that as an academic, sometimes internal proposals - those that are handled within the university itself - are required to be in a .doc format. \u2013\u00a0 cm2 Jan 23 '12 at 17:55\nI think that in fact beamer encourages good (academic) style : focus on the information and forget the shinny effects. And if you need some visual support, it's even possible. \u2013\u00a0 Matsaya Feb 1 '12 at 14:47\n\nFor my personal documents, I use LaTeX for all purposes, since\n\n\u2022 It's easy if you are a routine user, you know the common packages.\n\n\u2022 Basic page layout is quickly done with typearea or geometry.\n\n\u2022 After some time you've got a lot of documents to use as a template or as a start for a similar document.\n\n\u2022 My 16 years old documents still work, such as older articles, letters, CV. If I would have used a word processor with any format other than plain text, I'm sure they wouldn't be usable for me today.\n\nOn rare occasions, where I never made a certain kind of document, I take some minutes to create one, such as recently a leaflet for my girlfriend promoting an event. It took an hour, but looked great - with all the implicit advantages of LaTeX, such as nice justification even in narrow columns, also thanks to microtype.\n\nI don't like to install the huge OpenOffice or LibreOffice suite, or Abiword, for a small purpose. And I don't buy Word - and don't copy it. I must admit, that I have to use it at work.\n\nA TeX distribution is also not small - but I already have it installed, and the experts you asked for sure have it as well. I run it on my netbook, a laptop, a desktop, and have a TeX installation on a server where I can access it via SSH and FTPS from everywhere.\n\n-\nYou mean TeXLive and MikTeX are not small. teTeX used to be moderately large (I would say small comparing to TeXLive) distribution. 
There is a new exciting distribution for Plan 9 and Unix called KerTeX kergis.com/en/kertex.html and it is really tiny compared to TeXLive. – Predrag Punosevac Jan 23 '12 at 4:46
"I don't like to install the huge OpenOffice or LibreOffice suite, or Abiword, for a small purpose." ... of course, if you communicate with other human beings, chances are that you'll need to install an office suite anyway to open the stuff they send you :-) – Martin Jan 23 '12 at 13:47
@Martin: Google Docs also works fairly well for this purpose in a pinch. – Reid Jan 23 '12 at 22:47
@PredragPunosevac: But KerTeX doesn't include pdfTeX/XeTeX/LuaTeX and most likely never will (due to it being BSD-licensed). – Martin Schröder Jan 24 '12 at 10:59
To all: kerTeX already supports latin1 fonts, because of PostScript core fonts handling. Same can be true with CM via virtual fonts---planned in the future, as well as Unicode via utf-8. Since kerTeX has put the needles out of the haystack, it is now easy to understand how things work---for fonts, look at adm/pkg_core.sh to see how tfm are generated from PS AFM with reencoding. PostScript system fonts could be used too, etc. This is the purpose of kerTeX: to ease understanding by simplifying. [BTW, hello and thanks to Predrag for his mention of kerTeX!] – Thierry LARONDE Jan 31 '12 at 18:09

Yes, I use TeX for generally anything that it can be used for. The reasons include:

• Typing a letter is just like 20 TeX commands added to the text.
• You get the best hyphenation ever possible. (I'm Czech, and the Czech language is really complicated as regards word-breaking etc.; TeX handles it correctly, and if not, it can be manually reset.)
• My letters look professional: I get correct, nicely placed, and consistent headers and footers.
• I can easily make automated texts. (Combining MySQL, PHP, the Linux shell and LaTeX is an extremely powerful weapon! And my friends who do not know anything about LaTeX can use it.) Example of automated plot and table
• I don't have Windows and I don't like OpenOffice very much.
• Templates work very reliably and can be easily set up and modified.
-
The "automated texts" sound interesting. What kind of things do you create, for instance? – user Jan 22 '12 at 21:46
I generate: 1) the programme for a scout group, 2) plots and tables for the data on their progress: link – yo' Jan 23 '12 at 0:10

Yes.

I do tend to have to look up how to do letters, as I do so few of them, but I'd have to look up how to do them in a word processing package anyway, so I don't view that as anything extra. Even without the control that TeX affords, just the familiarity means that I'm so much faster writing any document in TeX than anything else.

I even do my kids' birthday invitations in TeX.

Having read the other answers (so far), I thought I'd clarify one point. I took the question to mean "Is there anything where you would use a word processor (or maybe DTP) instead of TeX?". I also use LilyPond for music, and bare text (i.e. no markup) for emails and text files that will never see the light of print, so I don't use LaTeX for absolutely every text document ever. But I do use it for anything someone else would use a word processor or DTP for, and I also use it for producing webpages and blog posts. So my rule seems to be: if I'm going to have formatted text and my document is not music, then I'll use LaTeX.

With 17 votes (at time of counting) for Stefan's comments, I feel I have no choice but to post the following picture:

-
We already got New Year's fireworks and Christmas trees in TikZ for use in TeX -- I look forward to seeing TikZ birthday cakes from your kids' invitations. ;-) – Stefan Kottwitz Jan 22 '12 at 21:23
The recipe for the cake: tex.stackexchange.com/a/42617/5701. – N.N. Jan 29 '12 at 12:11
TeX has the immense advantage that you could compute the number of candles required from the system date, giving reusable cake code. Try that with a word processor. – mabartibin Apr 20 '12 at 8:16
Never thought of that! Neat. – Loop Space Apr 20 '12 at 8:40

No!

You cannot, and you should not, use TeX for everything. There are two things off the top of my head for which TeX should not be used.

1. The first one is Unix man pages. Please use mandoc (BSD systems) or Groff (System V Unix and Unix-like systems, including Linux).

2. Although it is possible to typeset music in TeX (MusicTeX and MusiXTeX), LilyPond is just more beautiful, owing to the inherent asymmetry of music scores. The people who developed LilyPond used to work on MusicTeX as developers.

Disclaimer: I have been using TeX since the early 90s, but I could not be considered an expert by any stretch of the imagination!

-
I don't really consider 1) to be a real "document", similarly to an HTML page, but I agree, TeX isn't a tool for that. And I completely agree on 2), because LilyPond is really great. – yo' Jan 23 '12 at 0:12
+1 for LilyPond. I love it... Do you know, is LilyPond actually based on TeX or just inspired by it? – CJBrew Jan 23 '12 at 9:43
As far as I know, just inspired by it. It was written from scratch by former MusicTeX developers. – Predrag Punosevac Jan 23 '12 at 12:20
@CJBrew LilyPond has two keywords in common with TeX: typography and backslash-braces syntax. However, notice that LilyPond uses this syntax only in the user layer; the programmer layer uses parentheses and apostrophes. – yo' Dec 31 '12 at 10:28
@tohecz I feel that it's worth mentioning that the programmer layer is in fact a LISP variant, Scheme; see lilypond.org/doc/v2.19/Documentation/internals/backend – Sean Allred Mar 23 '14 at 6:02

When creating a one-off document, you should wonder whether it is really one-off.
I continuously recycle existing TeX documents for new scenarios. You will find that it gets easier once you have already produced a number of documents, because you will have encountered more and more typesetting scenarios and problems.

There is one very good reason NOT to use TeX: when you are producing read-write documents. And this happens more often than you think. Suppose you are documenting the website you have developed. If your colleague takes over this project, he will have to learn TeX to be able to correct/adjust your documentation.

There is one very good reason to use TeX in this scenario: when collaborating on a document. The ability for a TeX document to be stored in Subversion is just awesome, and there is no comparison to any binary WYSIWYG editor document.

-
Writing a CV in (La)TeX? You'll have a beautiful CV in a PDF file. But then you'll find you still have to convert it to HTML and then to a Word document, or recruiters won't touch it (since they can't remove your personal details, introduce spelling errors, etc. ;-) – CJBrew Jan 23 '12 at 9:40
Yes, TeX is indeed awesome for folks who know TeX. But you're right about collaboration... unless you're an academic, you may be the only one on your team who can maintain TeX documents. – Adam Monsen Jan 23 '12 at 17:44

No. I use it (well, LaTeX, actually) for any document that needs to look good, and that includes letters and any technical documents. But I use the Markdown format for simpler documents that may also be used in pre-formatted form. Prime among these is the README document that exists in pretty much every working directory on my computer. Markdown is useful because it provides some formatting (lists, font changes, images, etc.) but without adding very much in the way of formatting codes.
If I wish to expand a Markdown document in more detail, I just use pandoc to create a LaTeX file from it, and then proceed with that.

-

I'm a bit late to this party, but here are things that I use my TeX distro for:

• CV: Yes, I send people a PDF; tell them I don't provide source code; and offer to do whatever they want to it (like including their logo and excluding direct contact details). I've generally met with a very positive response, but this is not for internet bodyshops.

• Proposals to clients: as contractor, but also as subcontractor, with requests for tailoring from prime contractors. Again, very positive feedback, in part because "Oh, how did you do this with Word" leads into interesting and engaging conversations.

• Design documents: again very well received, especially the TikZ diagrams.

• Software architecture manuals: cross-referencing and indexing is very productive, and, again, TikZ rules.

• Presentations, with beamer; PDF presentations, projected with Impressive, work well, and beamer's facilities for adding and subtracting things-on-a-slide are highly effective. Of course, one still has to be careful of replacing "Death by PowerPoint" with "Death by Beamer".

• Letters and envelopes of all shapes, sizes and natures.

• Personal and business cards. (OK, I know that's a bit old-fashioned.)

All of this (except for beamer, of course) is built on Peter Wilson's incomparable memoir document class.

TeX needs some investment of your time, but it will be a lifelong friend (though sometimes a little irritating). Of course, in this modern world, largely managed by elderly teenagers, everything is "urgent", but join us all in restoring a bit of elegance to modern document production.

-
You can write letters using the "memoir" class? – Faheem Mitha Jan 30 '12 at 21:06
Well, I do. I knocked together a few commands and built a special-purpose document class which does a \LoadClass{memoir}, and another one for envelopes. They're a bit too "Brent-ish" for CTAN. – Brent.Longborough Jan 30 '12 at 21:34
Ok, I see, you created a custom document class. Thanks. – Faheem Mitha Jan 30 '12 at 21:39
Yes, but using facilities from memoir (for example, the page layout controls, so I can write on 229x178mm stock). So memoir does most of the heavy lifting. – Brent.Longborough Jan 30 '12 at 21:45
+1 for "Oh, how did you do this with Word" :D – yo' Jan 9 '13 at 8:40

I'm far from being an expert (I'd rather call myself an enthusiastic user :)) and my relationship with LaTeX wasn't love at first sight. In the early days, I was typesetting formulas with MS Office and that was a real pain. Then I discovered OpenOffice, which has some (limited) tool for typesetting formulas that is similar to LaTeX. Somebody saw me do that and said: 'Hey, that's like LaTeX! If you prefer typing your formulas, why not use it anyway?' So I gave it a try. In the very beginning I was frustrated with the (seemingly) odd way floats were placed, the various packages I had to load, and so on. So I abandoned it for the time being. Then I had to write a paper in LaTeX anyway and thought I might as well do some research. That's how I discovered the logic behind float placement and became fascinated. As I dug myself in deeper and deeper, I realized that you can indeed do almost anything with LaTeX. Maybe that's not good enough. (For me in any case it isn't. ;)) But then there is Lua(La)TeX and the upcoming LaTeX3, so if there is anything you can't do with LaTeX (and friends), someone is already bound to be working on a solution. :D

As 'further reading', I'd just suggest this question.

-

I'm by no means an expert, but for me it's pretty situational.
I'll use TeX for nearly everything if I can get away with it.

I'm still in school, so if a professor/TA requests that I complete an assignment and send it in .doc(x) format (which is rare), I'm going to be restricted to Word. Of course there are TeX to Word conversion tools, but this usually isn't worth going out of my way for. But if a professor/TA says .pdf is also fine, TeX it is---all day, every day!

Being in a science program, there are usually a lot of abbreviations, numbers, units, figures, reaction schemes, citations, etc. that I deal with when I'm writing, so TeX and Bib(La)TeX are more than infinitely useful here. I'd probably drop out of school if it weren't for both of them (hah).

In the rare case that I want a WYSIWYG environment (though LyX provides this, which I do not use), I'll usually open Word and work in there, but 99.9% of the time I'll just translate that to TeX anyway.

Using TeX for even the simplest documents is fine in my book, because everything just works. If there's a formatting issue or error, you can either find it and fix it right away or turn to great communities for support like we have here. :-) With Word, fixing a formatting issue usually leads to me slamming my head into a wall repeatedly.

-
If a professor asks for a .doc document, I pretend he really meant .pdf and send a PDF. Since a PDF is the same as a .doc in the sense that you can click on it, and the prof's computer can open it, PDF is always fine ;-) – Unapiedra Jan 22 '12 at 21:36
That's certainly true! But in my experience, profs/TAs want .docs so they can use Word's commenting functionality to leave comments at certain places in the document. But then again, I guess they could do that in Acrobat or any other reader with a .pdf! – user2473 Jan 22 '12 at 22:23
I am an assistant professor of mathematics, and I and most of my colleagues will immediately dismiss any documents (in particular job applications) which were not typed in TeX. The point that I am trying to make is that every field has its own standards and preferences, and if you are going to be a good citizen you will have to conform to the rules of the community. For example, my wife is a biologist, and in biology 99% of all documents are written in Word. You can like it or not, but if you are going to be a biologist you will have to use it. – Predrag Punosevac Jan 23 '12 at 0:02

Yes, definitely.

Aside from the fact that, as you mention, you separate form and content, which is both wise and effective in most cases, you also have to consider the fact that you can write LaTeX in nearly every text editor (I say nearly because I usually prefer having syntax colouring on when coding a layout… but pretty much anything is fine for regular text). And dedicated editors usually have just one button, one that works when you click it – which is perfect for dummies like me. Word processors have so many buttons and menus everywhere, so many options to tamper with, and behave in such an unruly way that I feel like banging my head against the closest wall whenever forced to use them. Some of them are even evil enough to randomly swallow your footnotes (or is it just me they hate?).

If the alternative is MS Word, all documents require "advanced" typesetting. One can usually spot a document written in a word processor in about five seconds, mainly because of the default layout and of the horizontal and vertical spacing. As has been said by others above, as long as you define your own basic templates for most texts, writing a LaTeX document will basically require you to copy a file that contains the preamble / layout, or load a package you created for such use.
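(To make the "copy a file that contains the preamble" workflow concrete, here is a minimal sketch of such a reusable letter template, using only the standard letter class; every name and address in it is a placeholder for illustration, not something from this thread.)

```latex
% letter.tex -- minimal reusable letter skeleton (placeholder content).
% A "new" letter is just a copy of this file with the text replaced.
\documentclass{letter}
\signature{A.\ Author}                      % printed under \closing
\address{1 Example Street \\ Example Town}  % sender address
\begin{document}
\begin{letter}{A.\ Recipient \\ 2 Other Street \\ Other Town}
\opening{Dear Sir or Madam,}
The body of the letter goes here.
\closing{Yours faithfully,}
\end{letter}
\end{document}
```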
This is not much of an effort and certainly does not take more time than using a word processor, in which most things cannot be automated.

Regarding automation, as has been said above, Bib(La)TeX does a great job at citations, and I don't know how I would manage to write anything if it weren't for it – writing up bibliographies yourself and making sure they are somewhat consistent is just awful; writing indexes would simply be out of the question. And you can use all sorts of default or custom commands for bits of text, with the advantage that you can redefine them at any time, should you need to. One can even automate most language- and punctuation-related things, and this really saves time for most languages.

That being said, I admit that I do not use LaTeX for everything. For one thing, I do not send my emails with an attached .pdf file because most people would not read them, and it does not really make sense anyway except for very long or very formal emails (in which case I do use LaTeX). I do not use LaTeX syntax either when writing up to-do notes and lists of things, because I would never process them anyway and just read the text as it is (but then again, I usually write these by hand on a piece of paper). And I have some old text documents that I have been too lazy to convert, but that's definitely a mistake on my part.

-

Of course, asking such a question on a TeX site will give you a pretty biased view.

There are several points in favor of using TeX as a general-purpose typesetting tool and a pretty good replacement for programs like MS Word. The main advantages are, in my opinion:

• Division of structure and layout
• Typesetting quality
• Classes that provide you with professional formats within minutes
• Mathematical and other special typesetting is well supported
• Automation (this is hard to explain; as a TeX document is based on plain text files, it is relatively easy to connect it to other tools or programming languages)

TeX, on the other hand, is not as user-friendly as a more visually oriented program. In MS Word, OpenOffice, or Adobe InDesign I can press buttons or use menus and can learn the basic functionality within a few hours. In LaTeX I often struggle to find the right "word" to describe what I am trying to achieve, and then I still have to google for a minute or two to find out how to change the page footer so that it contains the total number of pages on every page.

In the end the final document is often worth the effort, but with a lesser focus on quality I could often have hacked together something faster in Word.

Another limitation is that you can only exchange your documents within a relatively small group of people. Most computer users can somehow edit an MS Word document; this is unfortunately not true for anything TeX-related.

In the end, just try to use the tool that is best for the job. For professional typesetting I can recommend LaTeX and also InDesign, but just throwing something together will often be faster in MS Office, OpenOffice, etc.

(Background: I am familiar with LaTeX and have used it to typeset letters, books, lecture notes and posters, but I do not consider myself an expert by far.)

-
+1 for a balanced answer. A few points, however: 1) Word and Writer both have good separation of content and style, it's just that very few people use it (and it's not forced on users, which is probably a bad thing). 2) Typesetting quality is a function of the renderer, not the format. It's conceivable that OO might implement TeX-quality rendering for odt->pdf. – naught101 Nov 21 '12 at 3:25

It certainly depends on the content. For instance, I use TeX for almost everything that is mostly text. Everything from info posters to business cards. However, when the text is a message in itself, such as price lists or signs, I use vector graphics programs, like Inkscape. I don't see the point in using TikZ for making the entrance sign of a pub. I much rather use graphics programs for that.

There is one thing in using a tool just because you can, and another thing in using what makes the most sense.

-

Another thing to keep in mind is that TeX (any macro package) is designed for static typesetting. It works best if your desired final result is a PDF, slides, or ink on paper. It does not work quite as well if you also want to target reflowable formats like HTML.

There are packages to do that, but they have limitations, and not all other packages work well with them. If reflowable formats like HTML are part of your target, you might be better off using some other tool as the basis for your document, such as DocBook.

I'd still suggest writing an XSLT sheet or similar to create LaTeX from the original for the typeset versions.

-

I wish I could do everything with LaTeX! I agree with the other contributors that it is hardly overkill. Once you've typeset one letter or CV, you can make a thousand more with very little time investment. And with XeTeX, multiple languages and fonts are no longer major issues.

Unfortunately, one common situation that TeX handles poorly is collaboration. When I have to interact with coworkers or family, I find myself collaborating on docs via trac or Google Docs. Often, I'd prefer to be working in TeX, but the system is ill-suited for such exercises--continuously recompiling? broken syntax not rendering at all?
I am not aware of a production-quality online collaborative service for editing TeX documents. For more conventional "edit and return" collaboration too, TeX is limited. Aside from the interoperability problem (because the person you're collaborating with may not know TeX), I don't know of any straightforward way to "track changes" the way you can with Word or comparable software. I applied for a technical writing job in which I would have had to write scientific text in Word so that the team could use "track changes." I'm kind of glad it fell through. :)

So my answer is: any document that I control from start to finish, I do in TeX. But it's rare that I collaborate on a TeX document.

-
The right tool for automatically tracking changes is version control software. The bonus is that it works the same on all purely textual formats, be it LaTeX source, XML documents, program source code, or just plain ASCII text files. – celtschk Jan 23 '12 at 13:47
One might note that LaTeX can actually be superior when collaborating, namely when you put the plain text sources into a source repository (Git, Hg, Subversion, ...) because you can then just merge the differences of concurrent edits. Obviously, this isn't always the case. – Martin Jan 23 '12 at 13:50
Regarding Google Docs and TeX see this paper and this software. – Martin Schröder Jan 24 '12 at 11:10
@MartinSchröder, I'm intrigued but I can't access either of those links (not a TUG member and 404, respectively) – mmdanziger Jan 25 '12 at 8:59

I use LaTeX for everything unless I'm directed otherwise. Since I'm still in college, I suppose I have a little more freedom. Most of my professors love how nice my documents look, especially papers, since they are usually typeset using the IEEEtran document class. The most positive feedback that I receive is in my math classes, where LaTeX really shines.
Sure, it takes about an hour or two longer to typeset my homework in LaTeX rather than handwriting it, but I feel that the process really helps me internalize what I'm doing as I'm doing it.

I have found that typing notes during class with LaTeX is difficult. Even when using vim with a bunch of plugins (SnipMate, for example), I can't really keep up with the lecture. I think my biggest problem is that I focus so much on formatting during class that I fall behind. I'm sure that if I took the time to develop some sort of template I would be able to pull it off. I could also use more practice.

Here's a list of documents that I've used LaTeX for:

• Homework
• Resume
• Cover letter
• Notes (sometimes I take notes while reading through a textbook)
• Presentations (beamer)
• Various graphics with TikZ
• Papers (BibTeX is enough to make me want to use LaTeX for this)

This semester I'm taking a software engineering course where I'm going to attempt to do UML diagrams and a Software Requirements Specification with LaTeX. I'm sure my professor will be very impressed. This is why I try to use LaTeX for everything.

-
Out of curiosity, how did your SRS turn out? Would you be willing to post it as an example? – Sean Allred Jun 5 '13 at 18:24
Unfortunately the rest of my group preferred Microsoft Word... I did find a nice template that I was going to use. I'll see if I can dig it up. – user2485 Jun 13 '13 at 13:12
I'm sorry to hear that :-( my condolences XD Your chances get better by leaps and bounds if you have a TeXnically-inclined friend, though. Might want to find someone to stick close to XD – Sean Allred Jun 13 '13 at 13:51

This is not a real answer, and I am not an expert :o)

I believe that if the document is to be made sufficiently quick and dirty, LaTeX would not be the choice. E.g. a landscape sign saying nothing but 'WC' in huge letters would be easier and faster to do in some office program.

-
I'd use a rattle can ;-) – Psirus Jan 23 '12 at 15:02
@Psirus: I think that my girlfriend will be plenty mad when she wakes with a piece of paper taped to her forehead. – Hans-Peter E. Kristiansen Jan 24 '12 at 8:27

Generally yes. One case when I could use LaTeX, but don't initially, is when designing a new document with a very different layout (such as a poster, or a flyer). I find it easier to do the initial design using Pages (on a Mac), since I need the visual feedback during the design stage. The typographic output of Pages can be very high if you are careful, and by making good use of styles, you can quickly explore changing fonts etc. (The latter can be done very well in XeLaTeX too, but changing layouts is not quick in TeX.)

Once I have answered the design questions to myself, however, I will then try to make a LaTeX style that reproduces it for future use. I find that it is much easier to separate the design from the implementation, and so I use Pages as a preliminary step. For one-off jobs, such as time-sensitive posters that are high on graphics, transparency etc., I often do not bother with the conversion to LaTeX, but for long-term or repeated use, LaTeX all the way.

-

Everything I write at home is written in LaTeX. I mostly use the KOMA classes, which follow European typesetting guidelines. I mostly write letters nowadays; of course I wouldn't want to use Word for that.

-

I would not recommend TeX for any document that requires arbitrary layout or lots of graphics, e.g. magazines or presentations, or for ad-hoc one-off things. I know about beamer and prosper, but IMHO they encourage bad style (boring slides with lots of text and no real visual support of the talk).
If you need arbitrary text blocks and text flowing around images, a nice headache awaits you.

(La)TeX really shines for a few kinds of documents:

• complex structured documents with lots of references, math, an index, deep sectioning, etc. (the typical examples being textbooks, scientific works or technical manuals);

• long texts where you want some sort of version control during the writing process, be it for recording changes or collaborating with several authors;

• fine typography: for novels, there should be minimal markup required in the text itself, so the typographic design can be developed independently of the content;

• small, regular documents (e.g. I use it for letters); the style took some effort to get right, but now making a new letter is just filling in a template.

(answer expanded from my comment on the question)

-
That's an interesting point about beamer. I actually like latex+beamer for presentations more than I like it for documents, because it discourages visual distractions and guff. But I can see how it might make some communication more difficult. Would be great to get some good examples of that. – naught101 Nov 22 '12 at 2:03
Like this perhaps: Bleamer: Beamer + Blender – naught101 Nov 22 '12 at 2:06
It depends what sort of presentations you like to do, and I'm probably suffering from death-by-beamer, having sat through too many bad talks with too much text and the default blue theme with those useless header/footer decorations. I tend to have many one-off slides, sometimes with contrasting colors or 900pt characters just because, annotations or highlights that appear as I discuss a point, and ad-hoc placement of stuff is vital for me. – Damien Pollet Nov 22 '12 at 13:07

NO!

For documents like:

• notes / general text files <-- more work
• music <-- better alternatives exist
• videos <-- (????)

using TeX is more work or, in some cases, impossible!

-
I guess the OP means writing text-oriented documents, not calculations or videos. From the question text: "Do you fall back to a Word processing program, or do you stick with TeX?" – Stefan Kottwitz Jan 23 '12 at 15:00
ok... got it... i missed his drift... updated my answer – kumar_harsh Jan 25 '12 at 14:46
This answer seems a bit @Harsh. I would say that, for the writing of music, tools could be written (I'm thinking Python or LuaTeX) that would even ease the composition of music. And certainly for spreadsheets, TeX shines in data formatting---although it is worth it to maintain data manipulation in some other plain-text context (again, LuaTeX and Python come to mind). That said, I totally would have written this comment just for the pun. – Sean Allred Jun 5 '13 at 18:54
:D I'd have made it too, had I not been the poster :P – kumar_harsh Jun 5 '13 at 19:23

## protected by lockstep Jan 23 '12 at 21:12

Thank you for your interest in this question.
Because it has attracted low-quality answers, posting an answer now requires 10 reputation on this site.
\section{Introduction}
Dynamical chiral symmetry breaking (DCSB) and its connection with the generation of hadron masses was first considered in Ref.\,\cite{Nambu:1961tp}. The effect was represented as a vacuum phenomenon. Two essentially inequivalent classes of ground-state were identified in the mean-field treatment of a meson-nucleon field theory: symmetry preserving (Wigner phase); and symmetry breaking (Nambu phase). Notably, within the symmetry breaking class, each of an uncountable infinity of distinct configurations is related to every other by a chiral rotation. This is arguably the origin of the concept that strongly-interacting quantum field theories possess a nontrivial vacuum. With the introduction of the parton model for the description of deep inelastic scattering (DIS), this notion was challenged via an argument \cite{Casher:1974xd} that DCSB can be realised as an intrinsic property of hadrons, instead of via a nontrivial vacuum exterior to the observable degrees of freedom. This perspective is tenable because the essential ingredient required for dynamical symmetry breaking in a composite system is the existence of a divergent number of constituents, and DIS provided evidence for the existence within every hadron of a divergent sea of low-momentum partons. This view has, however, received scant attention. On the contrary, the introduction of QCD sum rules \cite{Shifman:1978bx} as a method to estimate nonperturbative strong-interaction matrix elements entrenched the belief that the QCD vacuum is characterised by numerous, independent, non-vanishing condensates. Notwithstanding the prevalence of this belief, it does lead to problems; e.g., it entails a cosmological constant that is $10^{46}$-times greater than that which is observed \cite{Turner:2001yu,Brodsky:2009zd}. This unwelcome consequence is partly responsible for reconsideration of the possibility that the so-called vacuum condensates are in fact an intrinsic property of hadrons.
Namely, in a confining theory, condensates are not constant, physical mass-scales that fill all spacetime; instead, they are merely mass-dimensioned parameters that serve a practical purpose in some theoretical truncation schemes but otherwise do not have an existence independent of hadrons \cite{Brodsky:2009zd,Brodsky:2008be,Brodsky:2010xf,Glazek:2011vg}. Regarding the quark condensate, this perspective was recently elucidated for light pseudoscalar mesons \cite{Brodsky:2010xf}. Herein we propose an extension of the concept to all hadrons. We start with Ref.\,\cite{GellMann:1968rz}, which presents the relation
\begin{equation}
\label{gmor}
m_\pi^2 = \lim_{P^\prime \to P \to 0} \langle \pi(P^\prime) | {\cal H}_{\chi{\rm sb}}|\pi(P)\rangle\,,
\end{equation}
where $m_\pi$ is the pion's mass and ${\cal H}_{\chi{\rm sb}}$ is that part of the hadronic Hamiltonian density which explicitly breaks chiral symmetry. It is important to observe that the operator expectation value in Eq.\,(\ref{gmor}) is evaluated between pion states. In terms of QCD quantities, Eq.\,(\ref{gmor}) entails
\begin{eqnarray}
\label{gmor1}
\lefteqn{ \forall m_{ud} \sim 0\,,\; m_{\pi^\pm}^2 = m_{ud}^\zeta \, {\cal S}_\pi^\zeta(0)\,,}\rule{7.2em}{0ex}\\
{\cal S}_\pi^\zeta(0) & = & - \langle \pi(P) | \mbox{\small $\frac{1}{2}$}(\bar u u + \bar d d) |\pi(P)\rangle\,,
\label{gmor1a}
\end{eqnarray}
where $m_{ud}^\zeta = m_u^\zeta+m_d^\zeta$, $m_{u,d}^\zeta$ are the current-quark masses at a renormalisation scale $\zeta$, and ${\cal S}_\pi^\zeta(0)$ is the pion's scalar form factor at zero momentum transfer, $Q^2=0$. The right-hand-side (rhs) of Eq.\,(\ref{gmor1}) is proportional to the pion $\sigma$-term (see, e.g., Ref.\,\cite{Flambaum:2005kc}).
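The $\sigma$-term connection can be made explicit in one line: by the Feynman--Hellmann theorem, the scalar form factor at $Q^2=0$ is the current-quark-mass derivative of the squared pion mass. The following display is merely a restatement of this chain in the notation already introduced, not an additional result:

```latex
% Feynman--Hellmann theorem: S_pi(0) = d(m_pi^2)/d(m_ud),
% combined with m_pi^2 = m_ud * S_pi(0) from above.
\begin{equation*}
m_{\pi^\pm}^2 \;=\; m_{ud}^\zeta \, {\cal S}_\pi^\zeta(0)
\;=\; m_{ud}^\zeta \, \frac{\partial\, m_\pi^2}{\partial m_{ud}^\zeta}\,.
\end{equation*}
```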
Consequently, using the connection between the $\sigma$-term and the Feynman-Hellmann theorem, Eq.\,(\ref{gmor}) is actually the statement \begin{equation} \label{pionmass2} \forall m_{ud} \sim 0\,,\; m_\pi^2 = m_{ud}^\zeta \frac{\partial }{\partial m^\zeta_{ud}} m_\pi^2. \end{equation} Recall now that one may use the axial-vector Ward-Takahashi identity to prove \cite{Maris:1997hd}: for any pseudoscalar meson, $P$, constituted from quarks $q$ and $Q$, whether ground-state, excited-state or hybrid, \begin{equation} \label{gmorR} f_P m_P^2 = (m_q^\zeta + m_Q^\zeta) \rho_P^\zeta , \end{equation} where $m_{q,Q}$ are the current-quark masses and \begin{eqnarray} \label{fpigen} \lefteqn{i f_P K_\mu = \langle 0 | \bar Q \gamma_5 \gamma_\mu q |P \rangle} \\ \nonumber & = & Z_2(\zeta,\Lambda)\; {\rm tr}_{\rm CD} \int_k^\Lambda i\gamma_5\gamma_\mu S_q(k_+) \Gamma_P(k;K) S_Q(k_-)\,, \\ \nonumber \lefteqn{i\rho_P^\zeta = -\langle 0 | \bar Q i\gamma_5 q |P \rangle} \\ & = & Z_4(\zeta,\Lambda)\; {\rm tr}_{\rm CD} \int_k^\Lambda \gamma_5 S_q(k_+) \Gamma_P(k;K) S_Q(k_-) \,.\label{rhogen} \end{eqnarray} ($K^2=-m_P^2$; $k_\pm = k\pm K/2$, without loss of generality in a Poincar\'e covariant approach.) Here, $f_P$ is the pseudoscalar meson's leptonic decay constant and the rhs of Eq.\,(\ref{fpigen}) expresses the axial-vector projection of the $P$-meson's Bethe-Salpeter wavefunction onto the origin in configuration space. Likewise, Eq.\,(\ref{rhogen}) describes the pseudoscalar projection of the $P$-meson's Bethe-Salpeter wavefunction onto the origin. It is therefore just another type of $P$-meson decay constant. Plainly then, both $f_P$ and $\rho_P^\zeta$ are intrinsic properties of the hadron. Moreover, \begin{equation} \label{inpiqbq} \kappa_P^\zeta \equiv -\langle \bar Q q \rangle^\zeta_P := -f_P \langle 0 | \bar Q \gamma_5 q |P \rangle = f_P \rho_P^\zeta \end{equation} is the in-hadron condensate introduced in Ref.\,\cite{Maris:1997tm}. 
We note that $\int_k^\Lambda:=\int^\Lambda \!\! \mbox{\footnotesize $\displaystyle\frac{d^4 k}{(2\pi)^4}$}$ in Eqs.\,(\ref{fpigen}), (\ref{rhogen}) represents a Poincar\'e-invariant regularization of the integral, with $\Lambda$ the ultraviolet regularization mass-scale; $\Gamma_{P}(k;K) $ is the pseudoscalar meson's canonically-normalised Bethe-Salpeter amplitude; viz., \begin{eqnarray} \nonumber \lefteqn{\Gamma_{P}(k;K) = \gamma_5 \left[ i E_P(k;K) + \gamma\cdot K F_P(k;K) \right.}\\ && \left. + \,\gamma\cdot k \, G_P(k;K) - \sigma_{\mu\nu} k_\mu K_\nu H_P(k;K) \right]; \label{genGpi} \end{eqnarray} $S_q$, $S_Q$ are the dressed-propagators of the $q$- and $Q$-quarks; and $Z_{2,4}(\zeta,\Lambda)$ are, respectively, the quark wavefunction and Lagrangian mass renormalisation constants. Using Eq.\,(\ref{gmorR}), one obtains \begin{equation} \label{gmor2} {\cal S}_\pi^\zeta(0) = \frac{\partial }{\partial m^\zeta_{ud}} m_\pi^2 =\frac{\partial }{\partial m^\zeta_{ud}} \left[ m_{ud}^\zeta\frac{\rho_\pi^\zeta}{f_\pi}\right]. \end{equation} Equation~(\ref{gmor2}) is valid for any values of $m_{u,d}$, including the neighbourhood of the chiral limit, wherein \begin{equation} \label{gmor3} \frac{\partial }{\partial m^\zeta_{ud}} \left[ m_{ud}^\zeta\frac{\rho_\pi^\zeta}{f_\pi} \right]_{m_{ud} = 0} = \frac{\rho_\pi^{\zeta 0}}{f_\pi^0} =: B_\pi^{\zeta 0}. \end{equation} The superscript ``0'' indicates that the quantity is computed in the chiral limit. It is well known that $f_\pi^0 \neq 0$ if (and only if) chiral symmetry is dynamically broken in QCD: it is an order parameter for DCSB. Less widely appreciated is that in the chiral limit the numerator is another well-known quantity; viz., using QCD's quark-level Goldberger-Treiman relations, one can prove \cite{Maris:1997hd}: \begin{equation} \label{gmor4} f_\pi^0 \, \rho_\pi^{\zeta 0} = - \langle \bar q q \rangle^{\zeta 0}\,, \end{equation} where the rhs is the so-called vacuum quark condensate.
Thus, as demonstrated previously \cite{Brodsky:2010xf,Maris:1997hd,Maris:1997tm}, the vacuum quark condensate is actually the chiral-limit value of the in-pion condensate; i.e., it describes a property of the chiral-limit pion. Importantly, Ref.\,\cite{Langfeld:2003ye} establishes that the rhs of Eq.\,(\ref{gmor4}) is precisely the same condensate that appears: as a constant in the operator product expansion \cite{Lane:1974he}; via the Banks-Casher formula \cite{Banks:1979yr}; and through the trace of the chiral-limit dressed-quark propagator. With Eqs.\,(\ref{gmor1}), (\ref{gmor2}), (\ref{gmor3}), (\ref{gmor4}), one has shown that in the neighbourhood of the chiral limit \begin{equation} m_{\pi^\pm}^2 = -m_{ud}^\zeta \frac{\langle \bar q q \rangle^{\zeta 0}}{(f_\pi^0)^2} + {\rm O}(m_{ud}^2). \end{equation} Neither PCAC nor soft-pion theorems were employed in analysing the rhs of Eq.\,(\ref{gmor1}). The analysis emphasises anew that what is commonly regarded as the vacuum condensate is truly a property of the pion: it is simultaneously the chiral limit value of the in-pion condensate and proportional to the value of the chiral-limit pion's scalar form factor at zero momentum transfer. Given Eq.\,(\ref{gmorR}), Eq.\,(\ref{gmor2}) is plainly a particular case of a more general statement; viz., \begin{eqnarray} \label{chiPqQ} {\cal S}^\zeta_{P_{qQ}}&:=&-\langle P_{qQ}|\mbox{\small $\frac{1}{2}$} (\bar q q+\bar Q Q) | P_{qQ} \rangle =\frac{\partial}{\partial m_{qQ}^\zeta}m^2_{P_{qQ}} \rule{1em}{0ex}\\ &=& \frac{ \kappa^\zeta_{P_{qQ}} }{f^2_{P_{qQ}}} + m_{qQ}^\zeta \frac{\partial}{\partial m_{qQ}^\zeta} \left[ \frac{ \kappa^\zeta_{P_{qQ}} }{f^2_{P_{qQ}}}\right], \end{eqnarray} where $P_{qQ}$ is any pseudoscalar meson constituted from the current-quarks $q$, $Q$. 
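The chain of identities in Eq.\,(\ref{chiPqQ}) follows in two short steps from the mass formula, Eq.\,(\ref{gmorR}), together with $\kappa_P^\zeta = f_P \rho_P^\zeta$, Eq.\,(\ref{inpiqbq}); spelling the algebra out (no content beyond what is established above, with $m_{qQ}^\zeta = m_q^\zeta + m_Q^\zeta$):

```latex
f_{P_{qQ}}\, m_{P_{qQ}}^2 = m_{qQ}^\zeta\, \rho_{P_{qQ}}^\zeta
\;\;\Rightarrow\;\;
m_{P_{qQ}}^2 = m_{qQ}^\zeta\, \frac{\kappa^\zeta_{P_{qQ}}}{f^2_{P_{qQ}}}\,;
\qquad
{\cal S}^\zeta_{P_{qQ}}
= \frac{\partial m_{P_{qQ}}^2}{\partial m_{qQ}^\zeta}
= \frac{\kappa^\zeta_{P_{qQ}}}{f^2_{P_{qQ}}}
+ m_{qQ}^\zeta\, \frac{\partial}{\partial m_{qQ}^\zeta}
  \left[\frac{\kappa^\zeta_{P_{qQ}}}{f^2_{P_{qQ}}}\right].
```

In the heavy-quark limit, where $\kappa^\zeta_{P_{qQ}}/f^2_{P_{qQ}} = m_Q^\zeta$, both terms on the rhs tend to $m_Q^\zeta$, which is the origin of the factor of two met below.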
The left-hand-side is this meson's scalar form factor at $Q^2=0$, which is here shown to be completely determined by the meson's leptonic decay constant and in-meson condensate, and their evolution with current-quark mass. It is noteworthy that for each quark line within the bound-state, the $Q^2=0$ operator insertion in Eq.\,(\ref{chiPqQ}) acts as a differentiation of the affected dressed-quark propagator with respect to the current-quark mass. On a dressed-quark in isolation, this would produce the vacuum chiral susceptibility \cite{Chang:2008ec} but here the observation establishes a clear connection between ${\cal S}$ and measurement of the chiral susceptibility within the hadron. We have already considered the chiral-limit behaviour of ${\cal S}^\zeta_{P_{qQ}}$; viz., Eq.\,(\ref{gmor3}). An exact result is also obtained in the heavy-quark limit: $m_Q\to \infty$, $m_q/m_Q \to 0$. Following Ref.\,\cite{Ivanov:1998ms} one may demonstrate \begin{equation} \label{mqmQlimit} \kappa^\zeta_{P_{qQ}} \stackrel{m_Q\to\infty}{=} {\cal C}_P^\zeta\,,\; f^2_{P_{qQ}} \stackrel{m_Q\to\infty}{=} \frac{\kappa^\zeta_{P_{qQ}}}{m_Q^\zeta}\,,\; \end{equation} where ${\cal C}_P^\zeta$ is an interaction-dependent constant. Hence $m_{P_{qQ}} = m_q+m_Q$ and \begin{equation} \label{RPqQ} {\cal S}^\zeta_{P_{qQ}} \stackrel{m_Q\to\infty}{=} 2 \frac{ \kappa^\zeta_{P_{qQ}} }{f^2_{P_{qQ}}} =: 2 B_{P_{qQ}}^\zeta . \end{equation} It is notable that whilst for light current-quark masses, $f_P$ is an order parameter for DCSB, its evolution and essence are very different in the heavy-quark limit. A single case remains; namely, pseudoscalar mesons constituted from current-quarks $Q_1$ and $Q_2$, with roughly equal masses, both of which become large: $m_{Q_1} \approx m_{Q_2}$, $m_{Q_2}\to \infty$. Equations~(\ref{mqmQlimit}) are not valid in this instance. Instead, the results depend on the nature of the interaction at short distances. 
However, that is known to be Coulomb-like in QCD, so that one can show \cite{Bhagwat:2006xi} \begin{eqnarray} \label{mqmQlimit1} \kappa^\zeta_{P_{Q_1Q_2}} &\stackrel{m_{Q}\to\infty}{=}& {\cal C}^\zeta_{P_Q}\,(M_{Q_1}+M_{Q_2})^3,\\ \label{mqmQlimit2} f^2_{P_{Q_1Q_2}} &\stackrel{m_Q\to\infty}{=} & \frac{\kappa^\zeta_{P_{Q_1Q_2}}}{M_{Q_1}+M_{Q_2}}, \end{eqnarray} with $M_Q^p:=M(-\mbox{\small $\frac{1}{4}$}m_{P_{Q_1Q_2}}^2)$, where $M(k^2)$ is the renormalisation-point-independent dressed-quark mass-function described, e.g., in Ref.\,\cite{Chang:2011vu}. (In the limit considered here, $M_Q^p$ becomes equivalent to the ``pole-mass'' in the effective field theory for quarkonium systems.) It follows therefore that, in precise analogy with Eq.\,(\ref{RPqQ}), \begin{equation} {\cal S}^\zeta_{P_{Q_1Q_2}} \stackrel{m_{Q_1}\sim m_{Q_2}}{\stackrel{m_{Q_2}\to\infty}{=}} 2 \frac{ \kappa^\zeta_{P_{Q_1Q_2}} }{f^2_{P_{Q_1Q_2}}} =: 2 B_{P_{Q_1Q_2}}^\zeta \,. \end{equation} \begin{figure}[t] \centerline{\includegraphics[clip,width=0.40\textwidth]{Fig1.eps}} \caption{\label{Fig1} Solid curve, ${\cal S}_{P_{qQ}}$; and dashed curve, ${\cal S}_{S_{qQ}}$. The dotted line is the heavy-quark limit: $m_Q\to\infty$, $m_u/m_Q \to 0$ $\Rightarrow {\cal S}_{{qQ}} = m_u+m_Q$. (${\cal S}^0_{P_{qQ}}=1.39\,$GeV, ${\cal S}^0_{S_{qQ}}=2.06\,$GeV; $m_u$ is fixed at 7\,MeV and $m_Q\geq m_u$.)} \end{figure} In order to explain and illustrate the nature of ${\cal S}^\zeta_{P_{qQ}}$, we have computed it using the symmetry-preserving regularisation and rainbow-ladder truncation of a vector$\,\times\,$vector contact-interaction that is described in Ref.\,\cite{Roberts:2011wy}. The result, obtained with the light-quark parameters fixed therein, is depicted in Fig.\,\ref{Fig1}. The behaviour is typical: ${\cal S}^\zeta_{P_{qQ}}$ is a positive-definite, monotonic function, bounded below by its chiral limit value ($B_{P_{qQ}}^{\zeta 0}$) and above by its large current-quark mass value ($2 B_{P_{qQ}}^{\zeta }$). 
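As a quick numerical sanity check of these scales, one can combine the chiral-limit values quoted in the text and figure captions. The sketch below uses only ${\cal S}^0_{P_{qQ}}=1.39\,$GeV, $\kappa^0_{P_{qQ}}=(0.24\,{\rm GeV})^3$ and $m_{ud}=2\times 7\,$MeV; comparing the resulting $m_\pi$ with the physical pion mass assumes those model values carry over to the physical point, which is our assumption, not a statement from the source:

```python
# Hedged numerical sketch: all inputs are model values quoted in the text
# (contact-interaction framework); not a first-principles calculation.
S0 = 1.39           # S^0_{P_qQ} in GeV (chiral-limit scalar form factor)
kappa0 = 0.24**3    # in-pion condensate kappa^0 = (0.24 GeV)^3, in GeV^3
m_ud = 2 * 0.007    # m_u + m_d with m_u = m_d = 7 MeV (model value), in GeV

# Eq. (gmor3): S^0 = kappa^0 / (f_pi^0)^2, so the implied chiral-limit
# decay constant is
f_pi0 = (kappa0 / S0) ** 0.5

# GMOR-like relation: m_pi^2 ~ m_ud * S^0 in the neighbourhood of the
# chiral limit
m_pi = (m_ud * S0) ** 0.5

print(f"f_pi^0 ~ {f_pi0*1000:.0f} MeV, m_pi ~ {m_pi*1000:.0f} MeV")
```

The quoted numbers are mutually consistent: they imply $f_\pi^0 \approx 100\,$MeV and $m_\pi \approx 139\,$MeV, i.e., a decay constant of the expected magnitude and a pion mass close to the physical value.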
With ${\cal S}^\zeta_{P}$, therefore, we have identified a quantity, defined for any and all pseudoscalar mesons, which directly measures the strength of helicity-coupling interactions within the hadron and whose value is between one- and two-times that strength. Moreover, \begin{equation} (f_{P_{qQ}}^0)^2 {\cal S}^{\zeta 0}_{P_{qQ}} = \kappa^{\zeta 0}_{P_{qQ}}\;\mbox{and}\; f_{P_{qQ}}^2 {\cal S}^\zeta_{P_{qQ}} \stackrel{\mbox{\footnotesize heavy}}{\stackrel{\mbox{\footnotesize quark(s)}}{=}} 2 \kappa^{\zeta }_{P_{qQ}}\,, \label{F2eq1} \end{equation} as illustrated in Fig.\,\ref{Fig2}, where $\kappa^{\zeta }_{P}$ is the in-pseudoscalar-meson condensate introduced in Ref.\,\cite{Maris:1997tm}. The matrix element ${\cal S}^\zeta_{P}$ thus appears ideal for use in extending the definition of in-hadron quark condensates to other states. \begin{figure}[t] \leftline{\includegraphics[clip,width=0.22\textwidth,height=0.30\textwidth]{Fig2L.eps}} \vspace*{-40ex} \rightline{\includegraphics[clip,width=0.225\textwidth,height=0.30\textwidth]{Fig2R.eps}} \caption{\label{Fig2} \emph{Left panel} -- solid curve, $[f^2_{P_{qQ}} {\cal S}_{P_{qQ}}]^{1/3}$; dashed curve, $[f^2_{S_{qQ}} {\cal S}_{S_{qQ}}]^{1/3}$; and dotted lines, heavy-quark-limit values of $[2 \kappa_{qQ}]^{1/3}$ computed directly from Eqs.\,(\ref{fpigen}),(\ref{rhogen}) and Eqs.\,(\ref{fsigmagen}), (\ref{rhosigmagen}), respectively. \emph{Right panel} -- $[\kappa_{qQ}]^{1/3}$ computed directly from Eqs.\,(\ref{fpigen}),(\ref{rhogen}) (pseudoscalar, solid curve) and Eqs.\,(\ref{fsigmagen}), (\ref{rhosigmagen}) (scalar, dashed curve). The figure illustrates that $f^2_{{qQ}} {\cal S}_{{qQ}}$ is a smoothly varying measure of DCSB and confirms Eqs.\,(\protect\ref{F2eq1}), (\protect\ref{F2eq2}). (NB.\ $\kappa^0_{P_{qQ}} = (0.24\,$GeV$)^3$; $m_u$ is fixed at 7\,MeV and $m_Q\geq m_u$.) } \end{figure} Further support for expansion of the in-hadron concept via this matrix element is provided by considering scalar mesons. 
Applying the method of Ref.\,\cite{Maris:1997hd} to the vector Ward-Takahashi identity, we have established that \begin{equation} f_{S_{qQ}} m_{S_{qQ}}^2 = - \check{m}_{qQ} \rho_{S_{qQ}}^\zeta, \label{mSqQ} \end{equation} where $\check{m}_{qQ}= m_q - m_Q$ and \begin{eqnarray} f_{S_{qQ}} K_\mu & = & Z_2\, {\rm tr}_{\rm CD}\!\!\! \int_k^\Lambda i\gamma_\mu S_q(k_+) \Gamma_{S_{qQ}}(k;K) S_Q(k_-)\,, \rule{2em}{0ex} \label{fsigmagen} \\ \rho^\zeta_{S_{qQ}} & = & - Z_4\, {\rm tr}_{\rm CD}\!\!\! \int_k^\Lambda S_q(k_+) \Gamma_{S_{qQ}}(k;K) S_Q(k_-) . \rule{2em}{0ex}\label{rhosigmagen} \end{eqnarray} The scalar meson leptonic decay constant changes sign under charge conjugation and vanishes for equal-mass constituents \cite{Maris:2000ig}. Hence, Eq.\,(\ref{mSqQ}) does not reveal much about scalar meson masses in the chiral limit\footnote{The structure of light-quark scalar mesons is a contentious issue \protect\cite{RuizdeElvira:2010cs}. Nevertheless, our result applies to any scalar meson that can be produced via $e^+ e^-$ annihilation. It is not of experimental significance, however, if the pole is deep in the complex plane.} nor those composed of equal-mass heavy constituents. On the other hand, much can be learnt in the heavy-quark limit. Indeed, one can prove analogues of Eq.\,(\ref{mqmQlimit}); viz., \begin{equation} \label{SmqmQlimit} \kappa^\zeta_{S_{qQ}} \stackrel{m_Q\to\infty}{=} {\cal C}_S^\zeta\,,\; f_{S_{qQ}} \stackrel{m_Q\to\infty}{=} \frac{\kappa^\zeta_{S_{qQ}}}{m_Q^\zeta}\,, \end{equation} and hence \begin{equation} \label{HQRSP} {\cal S}_{S_{qQ}} \stackrel{m_Q\to\infty}{=} 2 B_{S_{qQ}}^\zeta \stackrel{m_Q\to\infty}{=} 2 B_{P_{qQ}}^\zeta \stackrel{m_Q\to\infty}{=} {\cal S}_{P_{qQ}}\,, \end{equation} where ${\cal S}_{S_{qQ}}$ is defined by obvious analogy with Eq.\,(\ref{chiPqQ}). We have also computed ${\cal S}^\zeta_{S_{qQ}}$ using the symmetry-preserving treatment of the contact-interaction \cite{Roberts:2011wy}. 
Our result is depicted in Fig.\,\ref{Fig1}. The behaviour is again typical; namely, ${\cal S}^\zeta_{S_{qQ}}$ is a positive-definite function that exceeds ${\cal S}^\zeta_{P_{qQ}}$ for all finite $m_{qQ}$ and approaches its heavy-quark limit from above. Figure~\ref{Fig2} confirms the model-independent prediction in Eq.\,(\ref{HQRSP}); viz., \begin{equation} f_{S_{qQ}}^2 {\cal S}^\zeta_{S_{qQ}} \stackrel{m_Q \to \infty}{=} 2 \kappa^{\zeta }_{S_{qQ}}\,. \label{F2eq2} \end{equation} Quantitatively, the chiral-limit value of ${\cal S}^\zeta_{S_{qQ}}$ is interaction-dependent. Within the framework of Ref.\,\cite{Roberts:2011wy}, the result is ${\cal S}^0_{S_{qQ}}= 4 M^0 (dM/dm)^0=2.06\,$GeV, where $M^0=0.36\,$GeV is the model's chiral-limit dressed-quark mass. On the other hand, the qualitative connection to the dressed-quark mass, a bona-fide order parameter for DCSB which determines the so-called vacuum quark condensate, is model-independent. We have demonstrated unique, model-independent relationships between ${\cal S}^\zeta_{P,S}$ and the in-hadron condensates that appear in mass formulae for pseudoscalar and scalar mesons. Whilst such formulae do not exist for other mesons, the strength of the connections we've exhibited argues for the identification of an in-hadron condensate for each meson, $M$, with the product \begin{eqnarray} \label{chiMzeta} \chi_M^\zeta &:=& {\cal S}^\zeta_{M} f_M^2,\\ {\cal S}^\zeta_{M}&:=&-\langle M|\mbox{\small $\frac{1}{2}$} (\bar q q+\bar Q Q) | M \rangle =\frac{\partial}{\partial m_{qQ}^\zeta} m^2_{M}, \end{eqnarray} where $m_M$ is the meson's mass and $f_M$, its leptonic decay constant. The scalar case shows that a meaningful scale is determined even for systems with small $f_M$. Within the framework of Ref.\,\cite{Roberts:2011wy}, one can readily evaluate results that follow for ground-state vector and axial-vector mesons; viz. 
(in GeV), \begin{equation} \begin{array}{cccccc} {\cal S}_{\rho} & f_\rho & \chi_\rho^{1/3} & {\cal S}_{a_1} & f_{a_1} & \chi_{a_1}^{1/3} \\ 1.33 & 0.129 & 0.281 & 2.30 & 0.089 & 0.263 \end{array}. \end{equation} A comparison with Figs.\,\ref{Fig1} and \ref{Fig2} makes evident a similarity between the: vector and pseudoscalar channels; and axial-vector and scalar channels. This persists all the way to the heavy-quark limit whereat, owing to the suppression of hyperfine interactions, pseudoscalar and vector mesons are indistinguishable, as are scalar and axial-vector mesons, so that \begin{equation} f_{{V,A}_{qQ}}^2 \stackrel{m_Q\to\infty}{=} \frac{\kappa_{{V,A}_{qQ}}^\zeta}{m_Q^\zeta}\,,\; \kappa_{{V,A}_{qQ}}^\zeta \stackrel{m_Q\to\infty}{=} \kappa_{{P,S}_{qQ}}^\zeta. \end{equation} The case of heavy-heavy $J=1$ states can also be argued by analogy with the $J=0$ states. Baryons present a qualitatively different situation. Owing to baryon-number conservation, there are no analogues of the meson decay constants in, e.g., Eqs.\,(\ref{fpigen}), (\ref{rhogen}), and hence no correspondents of the meson mass formulae. Nonetheless, each baryon has a scalar form factor whose value at $Q^2=0$ is a perfect parallel to ${\cal S}_M$; viz., \begin{equation} \label{SBaryon} {\cal S}_{B_{1 2 3}}^\zeta:= -\langle B_{123} | \mbox{\small $\frac{1}{3}$} (\bar q_1q_1+\bar q_2q_2+\bar q_3q_3)|B_{123}\rangle, \end{equation} where $B_{123}$ is a baryon constituted from valence-quarks: $q_1$, $q_2$, $q_3$. For baryons, too, ${\cal S}$ is a direct measure of the strength of helicity-coupling interactions within the hadron. This commonality is a strength of our concept. In the absence of decay constants, one can still identify a DCSB order parameter; viz., the baryon's mass itself. 
This is clear once one appreciates that the nucleon's mass is approximately 1\,GeV because it is composed of three dressed-quarks, each of which has a mass $M\sim 350$\,MeV that owes primarily to DCSB \cite{Chang:2011vu}. ${\cal S}_B$ is thus a dimensionless in-baryon chiral susceptibility: it measures the response to changes in the current-quark mass of a chiral order parameter which is intrinsic to the baryon. \begin{figure}[t] \centerline{\includegraphics[clip,width=0.42\textwidth]{Fig3.eps}} \caption{\label{Fig3} Solid curve, ${\cal S}_{N}$: Eq.\,(\protect\ref{SBaryon}) for the nucleon; and dashed curve, $dM/dm$, where $M$ is the dressed-quark mass. Both results computed using a symmetry-preserving regularisation of a vector$\,\times\,$vector contact interaction \protect\cite{Roberts:2011wy,Roberts:2011cf}: at $m=7\,$MeV, ${\cal S}_{N}=1.42$.} \end{figure} Using the framework of Ref.\,\cite{Roberts:2011wy}, the masses of the nucleon and $\Delta$-resonance, and their evolution with current-quark mass have been computed \cite{Roberts:2011cf}, with the result: for $0<m_\pi^2<0.5\,$GeV$^2$, $m_N \approx 1.03 \times (3 M)$, as a consequence of cancellation between complex binding effects. It should therefore follow that ${\cal S}_N \approx dM/dm$ on this domain; viz., that helicity-coupling within the nucleon is as strong as that within ground state mesons. This expectation is verified in Fig.\,\ref{Fig3}. The quantitative results are interaction dependent. Qualitatively, however, the comparison illustrates and highlights the capacity of ${\cal S}_B$ to serve as a gauge of DCSB within an internally consistent approach: in any theory the contrasting of ${\cal S}_B$ with an analogue of $dM/dm$ will provide a representative measure of the strength of DCSB within the baryon under consideration. The last step is to identify a parallel for baryons of $\chi^\zeta_M$ in Eq.\,(\ref{chiMzeta}). 
This appears problematic because, owing to baryon-number conservation, there is no baryonic analogue of $f_M$. On the other hand, in contrast to ${\cal S}_M$, ${\cal S}_B$ is dimensionless and ${\cal S}_B\to 1$ in the heavy-quark limit. Another inspection of the meson case provides an answer. A homogeneous Bethe-Salpeter equation does not fix the normalisation of meson Bethe-Salpeter amplitudes. An auxiliary condition must be implemented: one requires that an integral involving the amplitude and its conjugate must evaluate to some predetermined number, $N_M^2$. The canonical normalisation condition constrains the bound-state to produce a pole with unit residue in the quark-antiquark scattering matrix. This may be represented as requiring $N_M^2=1$ (dimensionless). One can naturally choose a different convention; e.g., consider the chiral-limit pion and rescale all elements in Eq.\,(\ref{genGpi}) so that $E_\pi(k;0)=B(k^2)$, where the latter function is the scalar piece of the dressed-quark self-energy in the chiral limit. When evaluated now, the normalisation integral yields $(N_\pi^0)^2 = (f_\pi^0)^2$, as a consequence of the axial-vector Ward-Takahashi identity \cite{Maris:1998hc}. Although equality is not maintained away from the chiral limit, $N_\pi$, defined as described, is an order parameter for DCSB and vanishes in the heavy-quark limit. Therefore, $N_{P_{qQ}}$ can mathematically be used to replace $f_{P_{qQ}}$ in Eq.\,(\ref{chiMzeta}). The effect of this is readily illustrated within the framework of Ref.\,\cite{Roberts:2011wy}. Normalising via $E_{P_{qQ}} = 2 \mu_{qQ}$, where $1/\mu_{qQ} = 1/m_q+1/m_Q$, one finds algebraically that $\forall \mu_{qQ}$, \begin{equation} N_{P_{qQ}} \rho_{P_{qQ}} = \tilde \chi_{P_{qQ}} = \mbox{\small $\frac{9}{2}$} \mu_{qQ} m_G^2 = N_{S_{qQ}} \rho_{S_{qQ}}, \end{equation} which grows quickly from a chiral-limit value of $(0.243\,{\rm GeV})^3$ to $(0.307\,{\rm GeV})^3$ in the heavy-quark limit.
(NB.\ $m_G=0.132\,$GeV, fixed in the wide-ranging study of Ref.\,\cite{Roberts:2011wy}.) It follows that $\tilde \chi_M^\zeta$, defined through the mass-normalised Bethe-Salpeter amplitude and $N_{P_{qQ}}$ computed therefrom, produces in-meson quark condensate mass-scales that are recognisably characteristic of DCSB. Similar reasoning can be applied to the Faddeev equation. In this case: the normalisation integral is connected with the value of the proton's Dirac form factor at $Q^2=0$; a mass-normalised baryon Faddeev amplitude produces a normalisation constant $N_B^2$ with dimensions of energy-cubed; and we have \begin{equation} \tilde\chi_{B_{123}}^\zeta := N_{B_{123}}^2\, {\cal S}_{B_{123}}^\zeta. \end{equation} To illustrate, we report that within the framework of Refs.\,\cite{Roberts:2011wy,Roberts:2011cf}, $N_N^2 = 3.40 M^3$ so that, using the value of ${\cal S}_{N}$ in Fig.\,\ref{Fig3}, $\tilde\chi_{N}=(0.623\,{\rm GeV})^3$. The first rigorous demonstration that confinement restricts quark condensates to the interior of hadrons was made in connection with pseudoscalar mesons. The in-pseudoscalar-meson condensate is a quantity with an exact expression in QCD. We have proved that it can equally be represented through the pseudoscalar-meson's scalar form factor at zero momentum transfer, $Q^2=0$. Subsequently, with the aid of a mass formula for scalar mesons, revealed herein, we showed that the in-scalar-meson condensate can be represented in precisely the same way. By analogy, and with appeal to demonstrable results of heavy-quark symmetry, we argued that the $Q^2=0$ values of vector- and pseudovector-meson scalar form factors also determine the in-hadron condensates in these cases. We also demonstrated that this expression for the concept of in-hadron quark condensates is readily extended to the case of baryons. 
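Several of the mass-scales quoted above can be cross-checked arithmetically. The sketch below verifies the vector/axial-vector table entries via $\chi_M = {\cal S}_M f_M^2$, and then the $\tilde\chi$ scales. For the latter, we assume (our reading, not stated explicitly above; current-quark masses would vanish in the chiral limit) that the masses entering $\mu_{qQ}$ are dressed-quark masses, with $M^0=0.36\,$GeV; the nucleon check also uses $M = M^0$, so only loose agreement is expected there, since the dressed mass at $m = 7\,$MeV is slightly larger:

```python
# Arithmetic cross-checks of mass-scales quoted in the text (GeV units throughout).

# chi_M = S_M * f_M^2 for the ground-state vector and axial-vector mesons:
for name, S, f, quoted in [("rho", 1.33, 0.129, 0.281), ("a1", 2.30, 0.089, 0.263)]:
    chi_cbrt = (S * f**2) ** (1.0 / 3.0)
    print(f"chi_{name}^(1/3) = {chi_cbrt:.3f}  (quoted {quoted})")

# tilde-chi for mesons: (9/2) mu m_G^2, with m_G = 0.132 GeV and mu taken as the
# reduced DRESSED-quark mass (an assumption on our part): mu = M0/2 for equal
# light quarks; mu -> M0 (the light-quark dressed mass) in the heavy-quark limit.
m_G, M0 = 0.132, 0.36
chi_chiral = (4.5 * (M0 / 2) * m_G**2) ** (1.0 / 3.0)   # quoted: 0.243 GeV
chi_heavy  = (4.5 * M0 * m_G**2) ** (1.0 / 3.0)         # quoted: 0.307 GeV

# Nucleon: tilde-chi_N = N_N^2 * S_N, with N_N^2 = 3.40 M^3 and S_N = 1.42
# at m = 7 MeV; here M is approximated by its chiral-limit value M0.
chi_N = (3.40 * M0**3 * 1.42) ** (1.0 / 3.0)            # quoted: 0.623 GeV

print(f"tilde-chi^(1/3): {chi_chiral:.3f}, {chi_heavy:.3f}, {chi_N:.3f}")
```

The meson-table entries reproduce the quoted $\chi^{1/3}$ values exactly, and the $\tilde\chi$ scales land within a few per cent of $(0.243\,{\rm GeV})^3$, $(0.307\,{\rm GeV})^3$ and $(0.623\,{\rm GeV})^3$, supporting the dressed-mass reading of $\mu_{qQ}$.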
We therefore contend that via the $Q^2=0$ value of any hadron's scalar form factor, one can readily extract the value for a quark condensate in that hadron which is a reasonable and realistic measure of dynamical chiral symmetry breaking. We acknowledge valuable input from A.~Bashir, S.\,J.~Brodsky, R.~Shrock and D.~J.~Wilson. This work was supported by: U.\,S.\ Department of Energy, Office of Nuclear Physics, contract no.~DE-AC02-06CH11357; and U.\,S.\ National Science Foundation grant no.\ NSF-PHY-0903991, part of which constitutes USA-Mexico collaboration funding in partnership with the Mexican agency CONACyT.
Sir Richard Atwood Glass (1820 – 22 December 1873) was an English telegraph cable manufacturer and a Conservative politician who sat in the House of Commons from 1868 to 1869.

Biography

Glass was born in Bradford-on-Avon, Wiltshire, in Southern England, the son of Francis Glass. He was educated at King's College London. In 1846, with George Elliot, he provided capital for an insolvent wire-rope manufacturer, Heimann & Kuper, and by 1851 the firm was trading as Glass, Elliott & Company. The company produced submarine communications cables and in 1854 ran a circuit from Denmark to Sweden and undertook the manufacture of long cables for the French Mediterranean Telegraph Company of J W Brett. The cables, with a resin-insulated conducting wire protected by an armour of iron wire, proved to be long-lasting, and in the later 1850s the company introduced anti-corrosive compounds to coat the finished cable. The firm merged with the Gutta-Percha Company in 1864, and Glass became managing director of the resulting Telegraph Construction & Maintenance Company. Glass's company provided half of the first transatlantic telegraph cable and all the cable laid by the Great Eastern in 1866. Glass was knighted for these services on 26 November 1867.

In the 1868 general election Glass was elected Member of Parliament for Bewdley. He was unseated on 16 February 1869 when the election was declared void.

Glass lived at Ashurst in Dorking, Surrey. He died on 22 December 1873, aged 53, of chronic Bright's disease at his home at South Stoneham, Hampshire.
Principal Software Product Manager - Identity and Access Control
Sonos

Department: Product Management
Location: Boston, Santa Barbara, Seattle
Req#: 4518

Do you believe that, in the Internet of Things, companies have a responsibility to provide great security alongside great technology and experiences? Do you have a passion for making those security experiences "magical", seamless, and robust, from the gadgets people love having in their homes to the cloud services those gadgets talk to? Do you love working with engineering teams to ship products that people love having in their homes? And do you also constantly think about how to make things better?

Sonos was founded on a belief that high-quality sound delivered seamlessly throughout your day can improve your life. This founding belief - that listening better can mean living better - motivates us now more than ever, as new technologies, particularly smart speakers, open up new experience possibilities. In creating the wireless home audio category over the past decade, we've also created a passionate fan base, one that deserves our assurance that the Sonos system is a personal and shared experience: individualized, and secure as you move through your daily life.

We are looking for a technical Product Manager to lead our core Identity and Authorization technologies and grow the platform that enables us to provide incredibly rich sound experiences in all aspects of our lives, both inside and outside the home, whether intensely personal or shared with friends, family, or even just the people you happen to find yourself around.
Sonos embraces people that come from diverse backgrounds, bring unique perspectives, and that share a common passion for creating sound experiences that become the audio track of people's lives. We are looking for a Product Manager that is as passionate about music as they are about protecting our customers from security threats and protecting customer privacy as technologies, standards and expectations evolve. If you have a deep appreciation for the ever evolving challenge of security and privacy protections; if you love the enduring responsibility of realizing these values using a mix of scaling through automation, software governance and policy, and solving complex problems through human collaboration; if you appreciate the complexity of working across cloud, mobile and desktop client, and hardware engineering boundaries; and your job satisfaction is anchored in what you anticipate and solve before it becomes a security or privacy concern, then we'd love to talk to you about working with us at Sonos. More than a candidate that checks every box, we're looking for people who are excited to work, learn, and grow at Sonos-no matter their background or how they identify. If that's you, we hope you'll apply for this role. You want to be part of a team. You come with new ideas and a unique point of view. You look forward to collaborating with a diverse team of individuals. You assume everyone's best intentions, welcome a healthy debate, and embrace differing opinions. You eagerly seek and give help. Transparency tops your list of values, and you proactively contribute to a culture of respect and inclusion. You enjoy a challenge. Inquisitive and focused, you see every challenge as an opportunity. You're ambitious and comfortable making mistakes because you learn from them and bounce back quickly. You would rather create the future than wait for it. You prioritize long-term value over short-term objectives. You love to listen. 
You approach every interaction with curiosity and a desire to understand. You want to make a positive impact in the world. You're passionate about culture and know the power that music, film, podcasts, games, and stories have to bring people together.

* By working closely with the firmware, cloud, and mobile software teams, you will create a highly scalable, highly secure, and highly usable Identity and Access Control ecosystem for Sonos, its partners, and its customers. You will define engineering investments, influence roadmap priorities, and ensure appropriate resource allocation across the entire software organization.
* You will think in terms of security, scale, user experience, and future-proofing: building the business case for the scale of investment, driving awareness of both the security and privacy model and the user model, and growing expertise throughout the software engineering organization.
* You'll anticipate and identify security and privacy risks, educate our teams, propose policy, and design engineering solutions. You will work across client and cloud infrastructure, including device firmware, our developer platform, and our partner integrations. You'll balance short-term effort with long-range strategy, planning, and execution.

You will be a key leader in ensuring Sonos products and the Sonos brand continue to lead the industry in assuring customers we are the best experience for sound in the home and beyond.

Skills You'll Need

Identity and Access Control Technologies
* You have deep knowledge of modern Identity and Access Control technologies. You are comfortable with encryption at rest and over the wire, and know how to secure sensitive personal information for use in authentication workflows. You are also convinced that Identity and Access Control can be designed in a way that provides a delightful and innovative experience while also providing great security protections.
* You also understand authorization policy and enforcement across a product landscape, providing a robust, flexible, and simple environment for controlling access to devices and services in consumer and business scenarios.

Software Security Risk Assessment and Security Design
* We need an authority on security threats and patterns across client, cloud, IoT devices, and platform APIs. You are clear-eyed about assessing current and prospective risks and execute processes and systems (such as penetration testing) that anticipate and prevent attacks, as well as help design and implement technologies that detect threats and implement self-defense measures.

Influence and Leadership
* As an experienced technology leader, you will apply a range of influence methods to define and embed industry-standard methodologies throughout software engineering. You'll partner with teams across the company to implement the technologies needed to run a highly available, highly secure Identity and Access Control service, and the ancillary and supporting services and infrastructure that requires.
* Your written and verbal communication skills are a significant factor in establishing your credentials as the Product Management Leader for Identity and Access Control within our software organization. You can crisply articulate a strategy and cast issues and trade-offs in language that ensures the broader engineering team understands risks and priorities. You provide insightful explanations of data and lead by example in urgent circumstances with deliberate, organized, thorough, and grounded solutions. You can represent Sonos effectively with customers, the broader security and privacy community, and other outside organizations.
* Your authentic optimism and appreciation for the benefits of highly secure yet seamlessly "magical" experiences are infectious and help Sonos understand the positive impacts of the work.
//---------------------------------------------------------------------------
//	Greenplum Database
//	Copyright 2012 EMC Corp.
//
//	@filename:
//		CExpressionPreprocessor.cpp
//
//	@doc:
//		Expression tree preprocessing routines, needed to prepare an input
//		logical expression to be optimized
//---------------------------------------------------------------------------

#include "gpos/base.h"

#include "gpopt/base/CUtils.h"
#include "gpopt/base/CColRefSetIter.h"
#include "gpopt/base/CColRefTable.h"
#include "gpopt/base/CConstraintInterval.h"
#include "gpos/common/CAutoTimer.h"
#include "gpos/common/CAutoRef.h"
#include "gpopt/exception.h"

#include "gpopt/operators/CWindowPreprocessor.h"
#include "gpopt/operators/CLogicalConstTableGet.h"
#include "gpopt/operators/CLogicalCTEAnchor.h"
#include "gpopt/operators/CLogicalCTEConsumer.h"
#include "gpopt/operators/CLogicalCTEProducer.h"
#include "gpopt/operators/CLogicalGbAgg.h"
#include "gpopt/operators/CLogicalInnerJoin.h"
#include "gpopt/operators/CLogicalLimit.h"
#include "gpopt/operators/CLogicalNAryJoin.h"
#include "gpopt/operators/CLogicalProject.h"
#include "gpopt/operators/CLogicalSequenceProject.h"
#include "gpopt/operators/CLogicalSetOp.h"
#include "gpopt/operators/CLogicalUnion.h"
#include "gpopt/operators/CLogicalUnionAll.h"
#include "gpopt/operators/CPredicateUtils.h"
#include "gpopt/operators/CNormalizer.h"
#include "gpopt/operators/CExpressionUtils.h"
#include "gpopt/operators/CExpressionFactorizer.h"
#include "gpopt/operators/CExpressionPreprocessor.h"
#include "gpopt/operators/CScalarCmp.h"
#include "gpopt/operators/CScalarIdent.h"
#include "gpopt/operators/CScalarNAryJoinPredList.h"
#include "gpopt/operators/CScalarProjectElement.h"
#include "gpopt/operators/CScalarProjectList.h"
#include "gpopt/operators/CScalarSubquery.h"
#include "gpopt/operators/CScalarSubqueryAny.h"
#include "gpopt/operators/CScalarSubqueryExists.h"
#include "gpopt/operators/CScalarSubqueryQuantified.h"
#include "gpopt/optimizer/COptimizerConfig.h"
#include "gpopt/mdcache/CMDAccessor.h"
#include "gpopt/xforms/CXform.h"

#include "naucrates/md/IMDScalarOp.h"
#include "naucrates/md/IMDType.h"
#include "naucrates/statistics/CStatistics.h"
#include "naucrates/traceflags/traceflags.h"

using namespace gpopt;

// maximum number of equality predicates to be derived from existing equalities
#define GPOPT_MAX_DERIVED_PREDS 50

// eliminate self comparisons in the given expression
CExpression *
CExpressionPreprocessor::PexprEliminateSelfComparison(CMemoryPool *mp, CExpression *pexpr)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL != pexpr);

    if (CUtils::FScalarCmp(pexpr))
    {
        return CPredicateUtils::PexprEliminateSelfComparison(mp, pexpr);
    }

    // recursively process children
    const ULONG arity = pexpr->Arity();
    CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp);
    for (ULONG ul = 0; ul < arity; ul++)
    {
        CExpression *pexprChild = PexprEliminateSelfComparison(mp, (*pexpr)[ul]);
        pdrgpexprChildren->Append(pexprChild);
    }

    COperator *pop = pexpr->Pop();
    pop->AddRef();

    return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren);
}

// remove superfluous equality operations
CExpression *
CExpressionPreprocessor::PexprPruneSuperfluousEquality(CMemoryPool *mp, CExpression *pexpr)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL != pexpr);

    if (pexpr->Pop()->FScalar())
    {
        return CPredicateUtils::PexprPruneSuperfluosEquality(mp, pexpr);
    }

    // recursively process children
    const ULONG arity = pexpr->Arity();
    CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp);
    for (ULONG ul = 0; ul < arity; ul++)
    {
        CExpression *pexprChild = PexprPruneSuperfluousEquality(mp, (*pexpr)[ul]);
        pdrgpexprChildren->Append(pexprChild);
    }

    COperator *pop = pexpr->Pop();
    pop->AddRef();

    return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren);
}

// an existential subquery whose inner expression is a GbAgg
// with no grouping columns is replaced with a Boolean constant
//
// Example:
//
//		exists(select sum(i) from X) --> True
//		not exists(select sum(i) from X) --> False
CExpression *
CExpressionPreprocessor::PexprTrimExistentialSubqueries(CMemoryPool *mp, CExpression *pexpr)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL != pexpr);

    COperator *pop = pexpr->Pop();
    if (CUtils::FExistentialSubquery(pop))
    {
        CExpression *pexprInner = (*pexpr)[0];
        if (COperator::EopLogicalGbAgg == pexprInner->Pop()->Eopid() &&
            0 == CLogicalGbAgg::PopConvert(pexprInner->Pop())->Pdrgpcr()->Size())
        {
            GPOS_ASSERT(0 < (*pexprInner)[1]->Arity() &&
                        "Project list of GbAgg is expected to be non-empty");
            BOOL fValue = true;
            if (COperator::EopScalarSubqueryNotExists == pop->Eopid())
            {
                fValue = false;
            }
            return CUtils::PexprScalarConstBool(mp, fValue);
        }
    }

    // recursively process children
    const ULONG arity = pexpr->Arity();
    CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp);
    for (ULONG ul = 0; ul < arity; ul++)
    {
        CExpression *pexprChild = PexprTrimExistentialSubqueries(mp, (*pexpr)[ul]);
        pdrgpexprChildren->Append(pexprChild);
    }

    if (CPredicateUtils::FAnd(pexpr))
    {
        return CPredicateUtils::PexprConjunction(mp, pdrgpexprChildren);
    }

    if (CPredicateUtils::FOr(pexpr))
    {
        return CPredicateUtils::PexprDisjunction(mp, pdrgpexprChildren);
    }

    pop->AddRef();
    return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren);
}

// a quantified subquery with maxcard 1 is simplified as a scalar subquery
//
// Example:
//		a = ANY (select sum(i) from X) --> a = (select sum(i) from X)
//		a <> ALL (select sum(i) from X) --> a <> (select sum(i) from X)
CExpression *
CExpressionPreprocessor::PexprSimplifyQuantifiedSubqueries(CMemoryPool *mp, CExpression *pexpr)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL !=
                pexpr);

    COperator *pop = pexpr->Pop();
    if (CUtils::FQuantifiedSubquery(pop) && 1 == (*pexpr)[0]->DeriveMaxCard().Ull())
    {
        CExpression *pexprInner = (*pexpr)[0];

        // skip intermediate unary nodes
        CExpression *pexprChild = pexprInner;
        COperator *popChild = pexprChild->Pop();
        while (NULL != pexprChild && CUtils::FLogicalUnary(popChild))
        {
            pexprChild = (*pexprChild)[0];
            popChild = pexprChild->Pop();
        }

        // inspect next node
        BOOL fGbAggWithoutGrpCols =
            COperator::EopLogicalGbAgg == popChild->Eopid() &&
            0 == CLogicalGbAgg::PopConvert(popChild)->Pdrgpcr()->Size();

        BOOL fOneRowConstTable =
            COperator::EopLogicalConstTableGet == popChild->Eopid() &&
            1 == CLogicalConstTableGet::PopConvert(popChild)->Pdrgpdrgpdatum()->Size();

        if (fGbAggWithoutGrpCols || fOneRowConstTable)
        {
            // quantified subquery with max card 1
            CExpression *pexprScalar = (*pexpr)[1];
            CScalarSubqueryQuantified *popSubqQuantified =
                CScalarSubqueryQuantified::PopConvert(pexpr->Pop());
            const CColRef *colref = popSubqQuantified->Pcr();
            pexprInner->AddRef();
            CExpression *pexprSubquery = GPOS_NEW(mp) CExpression(
                mp,
                GPOS_NEW(mp) CScalarSubquery(mp, colref, false /*fGeneratedByExist*/,
                                             true /*fGeneratedByQuantified*/),
                pexprInner);

            CMDAccessor *md_accessor = COptCtxt::PoctxtFromTLS()->Pmda();
            IMDId *mdid = popSubqQuantified->MdIdOp();
            const CWStringConst *str = md_accessor->RetrieveScOp(mdid)->Mdname().GetMDName();
            mdid->AddRef();
            pexprScalar->AddRef();

            return CUtils::PexprScalarCmp(mp, pexprScalar, pexprSubquery, *str, mdid);
        }
    }

    // recursively process children
    const ULONG arity = pexpr->Arity();
    CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp);
    for (ULONG ul = 0; ul < arity; ul++)
    {
        CExpression *pexprChild = PexprSimplifyQuantifiedSubqueries(mp, (*pexpr)[ul]);
        pdrgpexprChildren->Append(pexprChild);
    }

    pop->AddRef();
    return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren);
}

// preliminary unnesting of scalar subqueries
// Example:
//		Input:   SELECT k, (SELECT (SELECT Y.i FROM Y WHERE Y.j=X.j)) from X
//		Output:  SELECT k, (SELECT Y.i FROM Y WHERE Y.j=X.j) from X
CExpression *
CExpressionPreprocessor::PexprUnnestScalarSubqueries(CMemoryPool *mp, CExpression *pexpr)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL != pexpr);

    COperator *pop = pexpr->Pop();
    // look for a Project Element with a scalar subquery below it
    if (CUtils::FProjElemWithScalarSubq(pexpr))
    {
        // recursively process scalar subquery
        CExpression *pexprSubq = PexprUnnestScalarSubqueries(mp, (*pexpr)[0]);

        // if the scalar subquery is replaced by the CScalarIdent in the previous
        // recursive call we simply return the CScalarIdent and stop preprocessing
        // at this stage.
        // +--CScalarProjectList
        //    +--CScalarProjectElement "?column?" (2)
        //       +--CScalarIdent "column1" (1)
        if (COperator::EopScalarIdent == pexprSubq->Pop()->Eopid())
        {
            pop->AddRef();
            return GPOS_NEW(mp) CExpression(mp, pop, pexprSubq);
        }

        // check if subquery is defined as a Project on Const Table
        CExpression *pexprSubqChild = (*pexprSubq)[0];
        if (CUtils::FProjectConstTableWithOneScalarSubq(pexprSubqChild))
        {
            CExpression *pexprConstTable = (*pexprSubqChild)[0];
            CExpression *pexprPrjList = (*pexprSubqChild)[1];
            GPOS_ASSERT(1 == pexprPrjList->Arity());
            CExpression *pexprPrjElem = (*pexprPrjList)[0];
            CExpression *pexprInnerSubq = (*pexprPrjElem)[0];
            GPOS_ASSERT(COperator::EopScalarSubquery == pexprInnerSubq->Pop()->Eopid());

            // make sure that inner subquery has no outer references to Const Table
            // since Const Table will be eliminated in output expression
            CColRefSet *pcrsConstTableOutput = pexprConstTable->DeriveOutputColumns();
            CColRefSet *outer_refs = (*pexprInnerSubq)[0]->DeriveOuterReferences();
            if (0 == outer_refs->Size() || outer_refs->IsDisjoint(pcrsConstTableOutput))
            {
                // recursively process inner subquery
                CExpression *pexprUnnestedSubq = PexprUnnestScalarSubqueries(mp, pexprInnerSubq);

                // the original subquery is processed and can be removed now
                pexprSubq->Release();

                // build the new Project Element after eliminating outer subquery
                pop->AddRef();
                return GPOS_NEW(mp) CExpression(mp, pop, pexprUnnestedSubq);
            }
        }

        // otherwise, return a Project Element with the processed outer subquery
        pop->AddRef();
        return GPOS_NEW(mp) CExpression(mp, pop, pexprSubq);
    }
    else if (CUtils::FScalarSubqWithConstTblGet(pexpr))
    {
        const CColRef *pcrSubq = CScalarSubquery::PopConvert(pexpr->Pop())->Pcr();
        CColRefSet *pcrsConstTableOutput = (*pexpr)[0]->DeriveOutputColumns();

        // if the subquery has outer ref, we do not make use of the output columns of constant table get.
        // In this scenario, we replace the entire scalar subquery with a CScalarIdent with the outer reference.
        // Otherwise, the subquery remains unchanged.
        // Input:
        //   +--CScalarSubquery["b" (8)]
        //      +--CLogicalConstTableGet Columns: ["" (16)] Values: [(1)]
        // Output:
        //   +--CScalarIdent "b" (8)
        if (!pcrsConstTableOutput->FMember(pcrSubq))
        {
            CScalarSubquery *pScalarSubquery = CScalarSubquery::PopConvert(pexpr->Pop());
            return CUtils::PexprScalarIdent(mp, pScalarSubquery->Pcr());
        }
    }

    // recursively process children
    const ULONG arity = pexpr->Arity();
    CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp);
    for (ULONG ul = 0; ul < arity; ul++)
    {
        CExpression *pexprChild = PexprUnnestScalarSubqueries(mp, (*pexpr)[ul]);
        pdrgpexprChildren->Append(pexprChild);
    }

    pop->AddRef();
    return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren);
}

// an intermediate limit is removed if it has neither row count nor offset
CExpression *
CExpressionPreprocessor::PexprRemoveSuperfluousLimit(CMemoryPool *mp, CExpression *pexpr)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL != pexpr);

    COperator *pop = pexpr->Pop();
    // if current operator is a logical limit with zero offset, and no specified
    // row count, skip to limit's logical child
    if (COperator::EopLogicalLimit == pop->Eopid() &&
        CUtils::FHasZeroOffset(pexpr) &&
        !CLogicalLimit::PopConvert(pop)->FHasCount())
    {
        CLogicalLimit *popLgLimit = CLogicalLimit::PopConvert(pop);
        if (!popLgLimit->IsTopLimitUnderDMLorCTAS() ||
            (popLgLimit->IsTopLimitUnderDMLorCTAS() &&
             GPOS_FTRACE(EopttraceRemoveOrderBelowDML)))
        {
            return PexprRemoveSuperfluousLimit(mp, (*pexpr)[0]);
        }
    }

    // recursively process children
    const ULONG arity = pexpr->Arity();
    CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp);
    for (ULONG ul = 0; ul < arity; ul++)
    {
        CExpression *pexprChild = PexprRemoveSuperfluousLimit(mp, (*pexpr)[ul]);
        pdrgpexprChildren->Append(pexprChild);
    }

    pop->AddRef();
    return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren);
}

// distinct is removed from a DQA if it has a max or min agg
// e.g. select max(distinct(a)) from tbl -> select max(a) from tbl
CExpression *
CExpressionPreprocessor::PexprRemoveSuperfluousDistinctInDQA(CMemoryPool *mp, CExpression *pexpr)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL != pexpr);

    COperator *pop = pexpr->Pop();
    if (COperator::EopLogicalGbAgg == pop->Eopid())
    {
        const CExpression *const pexprProjectList = (*pexpr)[1];
        GPOS_ASSERT(COperator::EopScalarProjectList == pexprProjectList->Pop()->Eopid());
        const ULONG arity = pexprProjectList->Arity();
        CMDAccessor *md_accessor = COptCtxt::PoctxtFromTLS()->Pmda();

        for (ULONG ul = 0; ul < arity; ul++)
        {
            CExpression *const pexprPrjElem = (*pexprProjectList)[ul];
            if (COperator::EopScalarAggFunc == (*pexprPrjElem)[0]->Pop()->Eopid())
            {
                CScalarAggFunc *popAggFunc =
                    CScalarAggFunc::PopConvert((*pexprPrjElem)[0]->Pop());
                IMDId *agg_child_mdid =
                    CScalar::PopConvert((*pexprPrjElem)[0]->Pop())->MdidType();
                const IMDType *agg_child_type = md_accessor->RetrieveType(agg_child_mdid);
                if (popAggFunc->IsDistinct() && popAggFunc->IsMinMax(agg_child_type))
                {
                    popAggFunc->SetIsDistinct(false);
                }
            }
        }
    }

    // recursively process children
    const ULONG arity =
        pexpr->Arity();
    CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp);
    for (ULONG ul = 0; ul < arity; ul++)
    {
        CExpression *pexprChild = PexprRemoveSuperfluousDistinctInDQA(mp, (*pexpr)[ul]);
        pdrgpexprChildren->Append(pexprChild);
    }

    pop->AddRef();
    return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren);
}

// Remove outer references from order spec inside limit, grouping columns
// in GbAgg, and Partition/Order columns in window operators. Also handle
// cases where we would end up with an empty groupby list and project list,
// which is not supported.
//
// Example, for the schema: t(a, b), s(i, j)
// The query:
//		select * from t where a < all (select i from s order by j, b limit 1);
// should be equivalent to:
//		select * from t where a < all (select i from s order by j limit 1);
// after removing the outer reference (b) from the order by clause of the
// subquery (all tuples in the subquery have the same value for the outer ref)
//
// Similarly,
//		select * from t where a in (select count(i) from s group by j, b);
// is equivalent to:
//		select * from t where a in (select count(i) from s group by j);
//
// Similarly,
//		select * from t where a in (select row_number() over (partition by t.a order by t.b) from s);
// is equivalent to:
//		select * from t where a in (select row_number() over () from s);
CExpression *
CExpressionPreprocessor::PexprRemoveSuperfluousOuterRefs(CMemoryPool *mp, CExpression *pexpr)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL != pexpr);

    // operator, possibly altered below if we need to change the operator
    COperator *pop = pexpr->Pop();
    // expression, possibly altered below if we need to change the children
    CExpression *newExpr = pexpr;

    COperator::EOperatorId op_id = pop->Eopid();
    BOOL fHasOuterRefs = (pop->FLogical() && CUtils::HasOuterRefs(pexpr));

    pop->AddRef();
    if (fHasOuterRefs)
    {
        // special handling for three operator types: Limit, GrbyAgg, Sequence
        if (COperator::EopLogicalLimit == op_id)
        {
            CColRefSet *outer_refs = pexpr->DeriveOuterReferences();

            CLogicalLimit *popLimit = CLogicalLimit::PopConvert(pop);
            COrderSpec *pos = popLimit->Pos();
            COrderSpec *posNew = pos->PosExcludeColumns(mp, outer_refs);

            pop->Release();
            pop = GPOS_NEW(mp) CLogicalLimit(
                mp, posNew, popLimit->FGlobal(), popLimit->FHasCount(),
                popLimit->IsTopLimitUnderDMLorCTAS());
        }
        else if (COperator::EopLogicalGbAgg == op_id)
        {
            CColRefSet *outer_refs = pexpr->DeriveOuterReferences();

            CLogicalGbAgg *popAgg = CLogicalGbAgg::PopConvert(pop);
            CColRefArray *colref_array =
                CUtils::PdrgpcrExcludeColumns(mp, popAgg->Pdrgpcr(), outer_refs);

            CExpression *pExprProjList = (*pexpr)[1];

            // It's only valid to remove the outer reference if:
            // the projection list is NOT empty
            // or
            // the outer references are NOT the ONLY Group By column
            //
            // For example:
            // -- Cannot remove t.b from groupby, because this will produce an invalid plan
            // -- with both groupby list and project list empty, in this case we need to add
            // -- a project node below the GrbyAgg
            // select a from t where c in (select distinct t.b from s)
            //
            // -- remove t.b from groupby is ok, because there is at least one agg function: count()
            // select a from t where c in (select count(s.j) from s group by t.b)
            //
            // -- remove t.b from groupby is ok, because there is other groupby column s.j
            // select a from t where c in (select s.j from s group by t.b, s.j)
            //
            // -- remove t.b from groupby is ok, because outer reference is a
            // -- constant for each invocation of subquery
            // select a from t where c in (select count(s.j) from s group by s.i, t.b)
            //
            if (0 < pExprProjList->Arity() || 0 < colref_array->Size())
            {
                // remove outer refs from the groupby columns list
                CColRefArray *pdrgpcrMinimal = popAgg->PdrgpcrMinimal();
                if (NULL != pdrgpcrMinimal)
                {
                    pdrgpcrMinimal = CUtils::PdrgpcrExcludeColumns(mp, pdrgpcrMinimal, outer_refs);
                }

                CColRefArray *pdrgpcrArgDQA = popAgg->PdrgpcrArgDQA();
                if (NULL != pdrgpcrArgDQA)
                {
                    pdrgpcrArgDQA->AddRef();
                }

                pop->Release();
                pop = GPOS_NEW(mp) CLogicalGbAgg(
                    mp, colref_array, pdrgpcrMinimal, popAgg->Egbaggtype(),
                    popAgg->FGeneratesDuplicates(), pdrgpcrArgDQA);
            }
            else
            {
                // grouping_cols has outer references that can't be removed, because
                // that would make both pExprProjList and grouping_cols empty, which is not allowed.
                // The solution in this case is to add a project node below that will simply echo
                // the outer reference, and to use that newly produced ColRef as groupby column.
                CExpression *child = (*pexpr)[0];
                CExpressionArray *grouping_cols_arr =
                    CUtils::PdrgpexprScalarIdents(mp, popAgg->Pdrgpcr());

                GPOS_ASSERT(0 < grouping_cols_arr->Size());
                child->AddRef();

                // add a project node on top of our child
                CExpression *projectExpr = CUtils::PexprAddProjection(
                    mp, child, grouping_cols_arr,
                    false  // don't add to hash table,
                           // this is done at the end
                           // of preprocessing
                );
                grouping_cols_arr->Release();

                // build a children array for the new GrbyAgg expression
                CExpressionArray *new_children = GPOS_NEW(mp) CExpressionArray(mp);
                new_children->Append(projectExpr);
                for (ULONG ul = 1; ul < pexpr->PdrgPexpr()->Size(); ul++)
                {
                    new_children->Append((*pexpr->PdrgPexpr())[ul]);
                    (*pexpr->PdrgPexpr())[ul]->AddRef();
                }

                // build a new CLogicalGbAgg operator, with a new grouping columns list
                CColRefArray *new_grouping_cols = GPOS_NEW(mp) CColRefArray(mp);
                CExpression *new_projected_cols = (*projectExpr)[1];
                for (ULONG ul = 0; ul < new_projected_cols->Arity(); ul++)
                {
                    new_grouping_cols->Append(CUtils::PcrFromProjElem((*new_projected_cols)[ul]));
                }
                GPOS_ASSERT(NULL == popAgg->PdrgpcrArgDQA());
                pop = GPOS_NEW(mp) CLogicalGbAgg(
                    mp, new_grouping_cols, NULL, popAgg->Egbaggtype(),
                    popAgg->FGeneratesDuplicates(),
                    NULL  // no DQA cols
                );

                // release the previous pop
                popAgg->Release();
                popAgg = NULL;

                // finally, put it all together, our new GrbyAgg now has a project node below
                // it that will turn the outer reference into a produced ColRef that is used
                // as a groupby column
                pop->AddRef();
                newExpr = GPOS_NEW(mp) CExpression(mp, pop, new_children);

                // clean up
                colref_array->Release();
            }
        }
        else if (COperator::EopLogicalSequenceProject == op_id)
        {
            CExpressionHandle exprhdl(mp);
            exprhdl.Attach(pexpr);
            exprhdl.DeriveProps(NULL /*pdpctxt*/);
            CLogicalSequenceProject *popSequenceProject =
                CLogicalSequenceProject::PopConvert(pop);
            if (popSequenceProject->FHasLocalReferencesTo(exprhdl.DeriveOuterReferences()))
            {
                COperator *popNew = popSequenceProject->PopRemoveLocalOuterRefs(mp, exprhdl);
                pop->Release();
                pop = popNew;
            }
        }
    }

    // recursively process children
    const ULONG arity = newExpr->Arity();
    CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp);
    for (ULONG ul = 0; ul < arity; ul++)
    {
        CExpression *pexprChild = PexprRemoveSuperfluousOuterRefs(mp, (*newExpr)[ul]);
        pdrgpexprChildren->Append(pexprChild);
    }

    if (newExpr != pexpr)
    {
        newExpr->Release();
    }
    return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren);
}

// generate a ScalarBoolOp expression or simply return the only expression
// in the array if there is only one.
CExpression *
CExpressionPreprocessor::PexprScalarBoolOpConvert2In(CMemoryPool *mp,
                                                    CScalarBoolOp::EBoolOperator eboolop,
                                                    CExpressionArray *pdrgpexpr)
{
    GPOS_ASSERT(NULL != pdrgpexpr);
    GPOS_ASSERT(0 < pdrgpexpr->Size());

    if (1 == pdrgpexpr->Size())
    {
        // if there is one child, do not wrap it in a bool op
        CExpression *pexpr = (*pdrgpexpr)[0];
        pexpr->AddRef();
        pdrgpexpr->Release();
        return pexpr;
    }

    return GPOS_NEW(mp) CExpression(mp, GPOS_NEW(mp) CScalarBoolOp(mp, eboolop), pdrgpexpr);
}

// checks if the given expression is likely to be simplified by the constraints
// framework during array conversion.
// eboolop is the CScalarBoolOp type
// of the expression which contains the argument expression
BOOL
CExpressionPreprocessor::FConvert2InIsConvertable(CExpression *pexpr,
                                                 CScalarBoolOp::EBoolOperator eboolopParent)
{
    bool fConvertableExpression = false;
    if (CPredicateUtils::FCompareIdentToConst(pexpr))
    {
        fConvertableExpression |=
            IMDType::EcmptEq == CScalarCmp::PopConvert(pexpr->Pop())->ParseCmpType() &&
            CScalarBoolOp::EboolopOr == eboolopParent;
        fConvertableExpression |=
            IMDType::EcmptNEq == CScalarCmp::PopConvert(pexpr->Pop())->ParseCmpType() &&
            CScalarBoolOp::EboolopAnd == eboolopParent;
    }
    else if (CPredicateUtils::FCompareIdentToConstArray(pexpr) ||
             CPredicateUtils::FCompareCastIdentToConstArray(pexpr))
    {
        fConvertableExpression = true;
    }

    if (fConvertableExpression)
    {
        GPOS_ASSERT(0 < pexpr->Arity());
        CScalarIdent *pscid = NULL;
        if (CUtils::FScalarIdent((*pexpr)[0]))
        {
            pscid = CScalarIdent::PopConvert((*pexpr)[0]->Pop());
        }
        else
        {
            GPOS_ASSERT(CScalarIdent::FCastedScId((*pexpr)[0]));
            pscid = CScalarIdent::PopConvert((*(*pexpr)[0])[0]->Pop());
        }
        if (!CUtils::FConstrainableType(pscid->MdidType()))
        {
            fConvertableExpression = false;
        }
    }

    return fConvertableExpression;
}

// converts series of AND or OR comparisons into array IN expressions. For
// example, x = 1 OR x = 2 will convert to x IN (1,2). This stage assumes
// the expression has been unnested using CExpressionUtils::PexprUnnest.
CExpression *
CExpressionPreprocessor::PexprConvert2In(
    CMemoryPool *mp,
    CExpression *pexpr  // does not take ownership
)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL != pexpr);

    COperator *pop = pexpr->Pop();
    if (CPredicateUtils::FOr(pexpr) || CPredicateUtils::FAnd(pexpr))
    {
        // the bool op type of this node
        CScalarBoolOp::EBoolOperator eboolop = CScalarBoolOp::PopConvert(pop)->Eboolop();
        // derive constraints on all of the simple scalar children
        // and add them to a new AND or OR expression
        CExpressionArray *pdrgpexprCollapse = GPOS_NEW(mp) CExpressionArray(mp);
        CExpressionArray *pdrgpexprRemainder = GPOS_NEW(mp) CExpressionArray(mp);

        const ULONG arity = pexpr->Arity();
        for (ULONG ul = 0; ul < arity; ul++)
        {
            CExpression *pexprChild = (*pexpr)[ul];
            if (FConvert2InIsConvertable(pexprChild, eboolop))
            {
                pexprChild->AddRef();
                pdrgpexprCollapse->Append(pexprChild);
            }
            else
            {
                // recursively convert the remainder and add to the array
                pdrgpexprRemainder->Append(PexprConvert2In(mp, pexprChild));
            }
        }

        if (0 != pdrgpexprCollapse->Size())
        {
            // create the constraint, rederive the collapsed expression
            // add the new derived expr to remainder
            CColRefSetArray *colref_array = NULL;
            pop->AddRef();
            CAutoRef<CExpression> apexprPreCollapse(
                GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprCollapse));
            CAutoRef<CConstraint> apcnst(
                CConstraint::PcnstrFromScalarExpr(mp, apexprPreCollapse.Value(), &colref_array));

            GPOS_ASSERT(NULL != apcnst.Value());
            CExpression *pexprPostCollapse = apcnst->PexprScalar(mp);

            pexprPostCollapse->AddRef();
            pdrgpexprRemainder->Append(pexprPostCollapse);
            CRefCount::SafeRelease(colref_array);
        }
        else
        {
            pdrgpexprCollapse->Release();
        }

        GPOS_ASSERT(0 < pdrgpexprRemainder->Size());
        return PexprScalarBoolOpConvert2In(mp, eboolop, pdrgpexprRemainder);
    }

    CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp);
    CExpressionArray *pdrgexprChildren = pexpr->PdrgPexpr();
    for (ULONG ul = 0; ul < pexpr->Arity(); ul++)
    {
        pdrgpexpr->Append(PexprConvert2In(mp, (*pdrgexprChildren)[ul]));
    }

    pop->AddRef();
    return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexpr);
}

// collapse cascaded inner and left outer joins into NAry-joins
CExpression *
CExpressionPreprocessor::PexprCollapseJoins(CMemoryPool *mp, CExpression *pexpr)
{
    // protect against stack overflow during recursion
    GPOS_CHECK_STACK_SIZE;
    GPOS_ASSERT(NULL != mp);
    GPOS_ASSERT(NULL != pexpr);

    COperator *pop = pexpr->Pop();
    const ULONG arity = pexpr->Arity();

    if (CPredicateUtils::FInnerOrNAryJoin(pexpr) ||
        (GPOS_FTRACE(EopttraceEnableLOJInNAryJoin) && CPredicateUtils::FLeftOuterJoin(pexpr)))
    {
        CExpressionArray *newChildNodes = GPOS_NEW(mp) CExpressionArray(mp);
        ULongPtrArray *lojChildPredIndexes = GPOS_NEW(mp) ULongPtrArray(mp);
        CExpressionArray *innerJoinPredicates = GPOS_NEW(mp) CExpressionArray(mp);
        CExpressionArray *lojPredicates = GPOS_NEW(mp) CExpressionArray(mp);

        CollectJoinChildrenRecursively(mp, pexpr, newChildNodes, lojChildPredIndexes,
                                       innerJoinPredicates, lojPredicates);

        if (lojPredicates->Size() > 0)
        {
            // each logical child must have an associated predicate index
            GPOS_ASSERT(newChildNodes->Size() == lojChildPredIndexes->Size());

            // this NAry join involves LOJs; create a CScalarNAryJoinPredList to hold
            // the information which predicates are inner join preds and which ON predicates
            // are associated with the LOJs' right children
            CExpressionArray *naryJoinPredicates = GPOS_NEW(mp) CExpressionArray(mp);

            // create a new CScalarNAryJoinPredList as the last child of the NAry join
            // the first child are all the inner join predicates
            naryJoinPredicates->Append(CPredicateUtils::PexprConjunction(mp, innerJoinPredicates));

            // the remaining children are the LOJ predicates, one by one
            for (ULONG ul = 0; ul < lojPredicates->Size(); ul++)
            {
                CExpression *predicate = (*lojPredicates)[ul];
                predicate->AddRef();
                naryJoinPredicates->Append(predicate);
            }

            CExpression *nAryJoinPredicateList = GPOS_NEW(mp) CExpression(
mp, GPOS_NEW(mp) CScalarNAryJoinPredList(mp), naryJoinPredicates ); newChildNodes->Append(nAryJoinPredicateList); // some sanity checks // Example: t1 join t2 on p12 left outer join t3 on p23 join t4 on p24 left outer join t5 on p35 // results from this call: // newChildNodes: [ t1, t2, t3, t4, t5 ] // lojChildPredIndexes: [ 0, 0, 1, 0, 2 ] (one entry per logical leaf node) // innerjoinPredicates: [ p12, p24 ] (all correspond to child pred index 0 (GPOPT_ZERO_INNER_JOIN_PRED_INDEX)) // lojPredicates: [ p23, p35 ] (p23 corresponds to child pred index 1, p35 corresponds to child pred index 2) // the leftmost child must have a predicate index of // GPOPT_ZERO_INNER_JOIN_PRED_INDEX, since it cannot be the right child of an LOJ GPOS_ASSERT(GPOPT_ZERO_INNER_JOIN_PRED_INDEX == *(*lojChildPredIndexes)[0]); #ifdef GPOS_DEBUG // lojChildPredIndexes must contain the numbers 1 ... lojPredicates->Size() // in ascending order, each number exactly once, with optional additional // GPOPT_ZERO_INNER_JOIN_PRED_INDEX (0) entries in-between entries ULONG highestNumberSeen = 0; for (ULONG ix=1; ix<lojChildPredIndexes->Size(); ix++) { ULONG nextNumber = *((*lojChildPredIndexes)[ix]); if (nextNumber == highestNumberSeen+1) { // child is right child of an LOJ highestNumberSeen = nextNumber; } else { // if we don't see the next number for a child, it must // be associated with the collective inner join predicates GPOS_ASSERT(GPOPT_ZERO_INNER_JOIN_PRED_INDEX == nextNumber); } } GPOS_ASSERT(highestNumberSeen == lojPredicates->Size()); #endif } else { // no LOJs involved, just add the ANDed preds as the scalar child newChildNodes->Append(CPredicateUtils::PexprConjunction(mp, innerJoinPredicates)); lojChildPredIndexes->Release(); lojChildPredIndexes = NULL; } CExpression *pexprNAryJoin = GPOS_NEW(mp) CExpression ( mp, GPOS_NEW(mp) CLogicalNAryJoin(mp, lojChildPredIndexes), newChildNodes ); COptimizerConfig *optimizer_config = COptCtxt::PoctxtFromTLS()->GetOptimizerConfig(); ULONG 
ulJoinArityLimit = optimizer_config->GetHint()->UlJoinArityForAssociativityCommutativity(); // The last child of an n-ary join expression is the scalar expression if (pexprNAryJoin->Arity() - 1 > ulJoinArityLimit) { GPOPT_DISABLE_XFORM(CXform::ExfJoinCommutativity); GPOPT_DISABLE_XFORM(CXform::ExfJoinAssociativity); } lojPredicates->Release(); return pexprNAryJoin; } // current operator is not an inner-join or supported LOJ, recursively process children CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp); for (ULONG ul = 0; ul < arity; ul++) { CExpression *pexprChild = PexprCollapseJoins(mp, (*pexpr)[ul]); pdrgpexprChildren->Append(pexprChild); } pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren); } // collect the children of a join backbone into an array of logical leaf // nodes (leaves of the backbone, that is) and arrays of predicates, such // that we can still associate the correct ON predicates to the children void CExpressionPreprocessor::CollectJoinChildrenRecursively ( CMemoryPool *mp, CExpression *pexpr, CExpressionArray *logicalLeafNodes, ULongPtrArray *lojChildPredIndexes, CExpressionArray *innerJoinPredicates, CExpressionArray *lojPredicates ) { // protect against stack overflow during recursion GPOS_CHECK_STACK_SIZE; GPOS_ASSERT(pexpr->Pop()->FLogical()); if (CPredicateUtils::FInnerOrNAryJoin(pexpr)) { const ULONG arity = pexpr->Arity(); CExpression *pexprScalar = (*pexpr) [arity - 1]; if (COperator::EopScalarNAryJoinPredList != pexprScalar->Pop()->Eopid()) { for (ULONG ul = 0; ul < arity - 1; ul++) { CExpression *child = (*pexpr)[ul]; CollectJoinChildrenRecursively(mp, child, logicalLeafNodes, lojChildPredIndexes, innerJoinPredicates, lojPredicates); } innerJoinPredicates->Append(PexprCollapseJoins(mp, pexprScalar)); } else { // we have collapsed this join before and it already has some non-inner join info, // merge the existing and new lists CLogicalNAryJoin *naryJoin = 
CLogicalNAryJoin::PopConvert(pexpr->Pop()); ULongPtrArray *naryJoinPredIndexes = naryJoin->GetLojChildPredIndexes(); // add all the inner join predicates innerJoinPredicates->Append(PexprCollapseJoins(mp,(*pexprScalar)[0])); // loop over the logical children for (ULONG ul=0; ul<arity-1; ul++) { if (GPOPT_ZERO_INNER_JOIN_PRED_INDEX == *(*naryJoinPredIndexes)[ul]) { // inner join child, collapse recursively CollectJoinChildrenRecursively ( mp, (*pexpr)[ul], logicalLeafNodes, lojChildPredIndexes, innerJoinPredicates, lojPredicates ); } else { // this is the right child of a non-inner join ULONG oldPredIndex = *(*naryJoinPredIndexes)[ul]; CExpression *lojPred = PexprCollapseJoins(mp,(*pexprScalar)[oldPredIndex]); // don't collapse this child into our current join node logicalLeafNodes->Append(PexprCollapseJoins(mp, (*pexpr)[ul])); lojPredicates->Append(lojPred); ULONG newPredIndex = lojPredicates->Size(); lojChildPredIndexes->Append(GPOS_NEW(mp) ULONG(newPredIndex)); } } } } else if (GPOS_FTRACE(EopttraceEnableLOJInNAryJoin) && CPredicateUtils::FLeftOuterJoin(pexpr)) { GPOS_ASSERT(3 == pexpr->Arity()); CExpression *leftChild = (*pexpr)[0]; CExpression *rightChild = (*pexpr)[1]; CExpression *pexprScalar = (*pexpr)[2]; CollectJoinChildrenRecursively(mp, leftChild, logicalLeafNodes, lojChildPredIndexes, innerJoinPredicates, lojPredicates); // stop collecting join children at the right child of the LOJ, // just add the child, regardless of whether it is a join or not logicalLeafNodes->Append(PexprCollapseJoins(mp, rightChild)); // create an entry in lojPredicates... lojPredicates->Append(PexprCollapseJoins(mp, pexprScalar)); // ... 
and point to this new entry in lojChildPredIndexes ULONG *indexOfThisLOJInTheArray = GPOS_NEW(mp)ULONG(lojPredicates->Size()); lojChildPredIndexes->Append(indexOfThisLOJInTheArray); } else { // pexpr is not the right child of a supported LOJ and is not a supported join logicalLeafNodes->Append(PexprCollapseJoins(mp, pexpr)); // this logical "leaf" node is a child of an inner join or it is the left child // of an LOJ, either way it is associated with the inner join predicates ULONG *innerJoinPredIndex = GPOS_NEW(mp)ULONG(GPOPT_ZERO_INNER_JOIN_PRED_INDEX); lojChildPredIndexes->Append(innerJoinPredIndex); } } // collapse cascaded logical project operators CExpression * CExpressionPreprocessor::PexprCollapseProjects ( CMemoryPool *mp, CExpression *pexpr ) { // protect against stack overflow during recursion GPOS_CHECK_STACK_SIZE; GPOS_ASSERT(NULL != mp); GPOS_ASSERT(NULL != pexpr); CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); const ULONG arity = pexpr->Arity(); // recursively process children for (ULONG ul = 0; ul < arity; ul++) { CExpression *pexprChild = PexprCollapseProjects(mp, (*pexpr)[ul]); pdrgpexpr->Append(pexprChild); } COperator *pop = pexpr->Pop(); pop->AddRef(); CExpression *pexprNew = GPOS_NEW(mp) CExpression(mp, pop, pdrgpexpr); CExpression *pexprCollapsed = CUtils::PexprCollapseProjects(mp, pexprNew); if (NULL == pexprCollapsed) { return pexprNew; } pexprNew->Release(); return pexprCollapsed; } // insert a dummy project element below a scalar subquery when (a) the scalar // subquery is below a project and (b) the output column is an outer reference CExpression * CExpressionPreprocessor::PexprProjBelowSubquery ( CMemoryPool *mp, CExpression *pexpr, BOOL fUnderPrList ) { // protect against stack overflow during recursion GPOS_CHECK_STACK_SIZE; GPOS_ASSERT(NULL != mp); GPOS_ASSERT(NULL != pexpr); /* * Consider the following subquery: * SELECT (SELECT foo.b from bar) FROM foo * If bar is empty we should return null.
* * For this query, during DXL->Expr translation, the project element * (SELECT b FROM bar) is represented as a scalar subquery that returns * an output column. To ensure that this scalar subquery under the project * operator returns NULL when bar (or an arbitrary tree in place of bar) is empty, * we insert a dummy project element that points to FOO.b under the * scalar subquery. This dummy project element prevents its incorrect * transformation into a non-correlated plan. * * One of the reasons we add this dummy project is to force the subquery * handler transformation to not produce a de-correlated plan * for queries such as this. * * We want to limit the introduction of such dummy projects to cases where the * following conditions are all satisfied: * a) The scalar subquery is in the project element scalar tree. * Another use case: SELECT (SELECT foo.b from bar) + 1 FROM foo * b) The output of the scalar subquery is a column from the outer expression. * Consider the query: SELECT (SELECT foo.b + 5 from bar) FROM foo. In such cases, * since foo.b + 5 is a new computed column inside the subquery with its own * project element, we do not need to add anything.
*/ BOOL fUnderPrListChild = fUnderPrList; COperator *pop = pexpr->Pop(); if (pop->FLogical()) { if (COperator::EopLogicalProject == pop->Eopid()) { CExpression *pexprRel = (*pexpr)[0]; CExpression *pexprRelNew = PexprProjBelowSubquery(mp, pexprRel, false /* fUnderPrList */); CExpression *pexprPrList = (*pexpr)[1]; CExpression *pexprPrListNew = PexprProjBelowSubquery(mp, pexprPrList, true /* fUnderPrList */); return GPOS_NEW(mp) CExpression(mp, GPOS_NEW(mp) CLogicalProject(mp), pexprRelNew, pexprPrListNew); } fUnderPrListChild = false; } else if (COperator::EopScalarSubquery == pop->Eopid() && fUnderPrList) { CExpression *pexprRel = (*pexpr)[0]; CExpression *pexprRelNew = PexprProjBelowSubquery(mp, pexprRel, false /* fUnderPrList */); const CColRefSet *prcsOutput = pexprRelNew->DeriveOutputColumns(); const CColRef *pcrSubquery = CScalarSubquery::PopConvert(pop)->Pcr(); if (NULL != prcsOutput && !prcsOutput->FMember(pcrSubquery)) { CColumnFactory *col_factory = COptCtxt::PoctxtFromTLS()->Pcf(); CColRef *pcrNewSubquery = col_factory->PcrCreate(pcrSubquery->RetrieveType(), pcrSubquery->TypeModifier()); CExpression *pexprPrEl = CUtils::PexprScalarProjectElement(mp, pcrNewSubquery, CUtils::PexprScalarIdent(mp, pcrSubquery)); CExpression *pexprProjList = GPOS_NEW(mp) CExpression(mp, GPOS_NEW(mp) CScalarProjectList(mp), pexprPrEl); CExpression *pexprProj = GPOS_NEW(mp) CExpression(mp, GPOS_NEW(mp) CLogicalProject(mp), pexprRelNew, pexprProjList); CScalarSubquery *popSubq = GPOS_NEW(mp) CScalarSubquery(mp, pcrNewSubquery, false /*fGeneratedByExist*/, false /*fGeneratedByQuantified*/); CExpression *pexprResult = GPOS_NEW(mp) CExpression(mp, popSubq, pexprProj); return pexprResult; } pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pexprRelNew); } CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); const ULONG arity = pexpr->Arity(); for (ULONG ul = 0; ul < arity; ul++) { CExpression *pexprChild = PexprProjBelowSubquery(mp, (*pexpr)[ul], 
fUnderPrListChild); pdrgpexpr->Append(pexprChild); } pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexpr); } // collapse cascaded union/union all into an NAry union/union all operator CExpression * CExpressionPreprocessor::PexprCollapseUnionUnionAll ( CMemoryPool *mp, CExpression *pexpr ) { // protect against stack overflow during recursion GPOS_CHECK_STACK_SIZE; GPOS_ASSERT(NULL != mp); GPOS_ASSERT(NULL != pexpr); COperator *pop = pexpr->Pop(); const ULONG arity = pexpr->Arity(); CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); // recursively process children for (ULONG ul = 0; ul < arity; ul++) { CExpression *pexprChild = PexprCollapseUnionUnionAll(mp, (*pexpr)[ul]); pdrgpexpr->Append(pexprChild); } pop->AddRef(); CExpression *pexprNew = GPOS_NEW(mp) CExpression(mp, pop, pdrgpexpr); if (!CPredicateUtils::FUnionOrUnionAll(pexprNew)) { return pexprNew; } // array of input children and its column references CExpressionArray *pdrgpexprNew = GPOS_NEW(mp) CExpressionArray(mp); CColRef2dArray *pdrgdrgpcrOrig = CLogicalSetOp::PopConvert(pop)->PdrgpdrgpcrInput(); CColRef2dArray *pdrgdrgpcrNew = GPOS_NEW(mp) CColRef2dArray(mp); BOOL fCollapsed = false; for (ULONG ul = 0; ul < arity; ul++) { if (CPredicateUtils::FCollapsibleChildUnionUnionAll(pexprNew, ul)) { fCollapsed = true; CPredicateUtils::CollectGrandChildrenUnionUnionAll ( mp, pexprNew, ul, pdrgpexprNew, pdrgdrgpcrNew ); } else { CExpression *pexprChild = (*pexprNew)[ul]; pexprChild->AddRef(); pdrgpexprNew->Append(pexprChild); CColRefArray *pdrgpcrInput = (*pdrgdrgpcrOrig)[ul]; pdrgpcrInput->AddRef(); pdrgdrgpcrNew->Append(pdrgpcrInput); } } if (!fCollapsed) { // clean up pdrgdrgpcrNew->Release(); pdrgpexprNew->Release(); return pexprNew; } COperator *popNew = NULL; CColRefArray *pdrgpcrOutput = CLogicalSetOp::PopConvert(pop)->PdrgpcrOutput(); pdrgpcrOutput->AddRef(); if (pop->Eopid() == COperator::EopLogicalUnion) { popNew = GPOS_NEW(mp) CLogicalUnion(mp, pdrgpcrOutput, pdrgdrgpcrNew); 
} else { GPOS_ASSERT(pop->Eopid() == COperator::EopLogicalUnionAll); popNew = GPOS_NEW(mp) CLogicalUnionAll(mp, pdrgpcrOutput, pdrgdrgpcrNew); } // clean up pexprNew->Release(); return GPOS_NEW(mp) CExpression(mp, popNew, pdrgpexprNew); } // transform outer joins into inner joins CExpression * CExpressionPreprocessor::PexprOuterJoinToInnerJoin ( CMemoryPool *mp, CExpression *pexpr ) { // protect against stack overflow during recursion GPOS_CHECK_STACK_SIZE; GPOS_ASSERT(NULL != mp); GPOS_ASSERT(NULL != pexpr); COperator *pop = pexpr->Pop(); const ULONG arity = pexpr->Arity(); if (COperator::EopLogicalSelect == pop->Eopid() && COperator::EopLogicalLeftOuterJoin == (*pexpr)[0]->Pop()->Eopid()) { // a Select on top of LOJ can be turned into InnerJoin by normalization return CNormalizer::PexprNormalize(mp, pexpr); } if (CPredicateUtils::FInnerOrNAryJoin(pexpr)) { // the predicates of an inner join on top of outer join can be used to turn the child outer join into another inner join CExpression *pexprScalar = (*pexpr)[arity - 1]; if (COperator::EopScalarNAryJoinPredList == pexprScalar->Pop()->Eopid()) { // since we have ScalarNAryJoinPredList, it means we have already // converted all possible LOJs to Inner Joins and collapsed them pexpr->AddRef(); return pexpr; } CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp); for (ULONG ul = 0; ul < arity; ul++) { CExpression *pexprChild = (*pexpr)[ul]; BOOL fNewChild = false; if (COperator::EopLogicalLeftOuterJoin == pexprChild->Pop()->Eopid()) { CColRefSet *pcrsLOJInnerOutput = (*pexprChild)[1]->DeriveOutputColumns(); if (!GPOS_FTRACE(EopttraceDisableOuterJoin2InnerJoinRewrite) && CPredicateUtils::FNullRejecting(mp, pexprScalar, pcrsLOJInnerOutput)) { CExpression *pexprNewOuter = PexprOuterJoinToInnerJoin(mp, (*pexprChild)[0]); CExpression *pexprNewInner = PexprOuterJoinToInnerJoin(mp, (*pexprChild)[1]); CExpression *pexprNewScalar = PexprOuterJoinToInnerJoin(mp, (*pexprChild)[2]); CExpression *pexprJoin = 
CUtils::PexprLogicalJoin<CLogicalInnerJoin>(mp, pexprNewOuter, pexprNewInner, pexprNewScalar); pexprChild = PexprCollapseJoins(mp, pexprJoin); pexprJoin->Release(); fNewChild = true; } } // Consider the following join tree: // +--CLogicalNAryJoin // |--CLogicalLeftOuterJoin // | |--CLogicalLeftOuterJoin // | | |--CLogicalGet "t1" // | | |--CLogicalGet "t2" // | |--CLogicalGet "t3" // |--CLogicalGet "t4" // // If the predicate between the CLogicalNAryJoin and first CLogicalLeftOuterJoin // is NULL rejecting, we convert the left join to an inner join and create a new // expression. Then the modified tree would be: // // +--CLogicalNAryJoin // |--CLogicalLeftOuterJoin // | |--CLogicalGet "t1" // | |--CLogicalGet "t2" // |--CLogicalGet "t3" // |--CLogicalGet "t4" // // Note that we can still convert the second CLogicalLeftOuterJoin into an inner join // if the predicate between the CLogicalNAryJoin and the second CLogicalLeftOuterJoin is NULL rejecting. So we need to recurse // into the child and continue checking if we can convert the LOJs into inner joins.
CExpression *pexprChildNew = PexprOuterJoinToInnerJoin(mp, pexprChild); if (fNewChild) { pexprChild->Release(); } pdrgpexprChildren->Append(pexprChildNew); } return GPOS_NEW(mp) CExpression(mp, GPOS_NEW(mp) CLogicalNAryJoin(mp), pdrgpexprChildren); } // current operator is not an NAry-join, recursively process children CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp); for (ULONG ul = 0; ul < arity; ul++) { CExpression *pexprChild = PexprOuterJoinToInnerJoin(mp, (*pexpr)[ul]); pdrgpexprChildren->Append(pexprChild); } pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren); } // generate n*(n-1)/2 equality predicates, up to GPOPT_MAX_DERIVED_PREDS, between // the n columns in the given equivalence class (set). CExpression * CExpressionPreprocessor::PexprConjEqualityPredicates ( CMemoryPool *mp, CColRefSet *pcrs ) { GPOS_ASSERT(NULL != pcrs); CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); ULONG ulPreds = 0; CColRefSetIter crsiRight(*pcrs); while ( crsiRight.Advance() && GPOPT_MAX_DERIVED_PREDS > ulPreds ) { CColRef *pcrRight = crsiRight.Pcr(); CColRefSetIter crsiLeft(*pcrs); while ( crsiLeft.Advance() && GPOPT_MAX_DERIVED_PREDS > ulPreds ) { CColRef *pcrLeft = crsiLeft.Pcr(); if (pcrLeft == pcrRight) { break; } pdrgpexpr->Append(CUtils::PexprScalarEqCmp(mp, pcrLeft, pcrRight)); ulPreds++; } } return CPredicateUtils::PexprConjunction(mp, pdrgpexpr); } // check if all columns in the given equivalence class come from one of the // children of the given expression BOOL CExpressionPreprocessor::FEquivClassFromChild ( CColRefSet *pcrs, CExpression *pexpr ) { GPOS_ASSERT(NULL != pcrs); GPOS_ASSERT(NULL != pexpr); const ULONG ulChildren = pexpr->Arity(); for (ULONG ul = 0; ul < ulChildren; ul++) { CExpression *pexprChild = (*pexpr)[ul]; if (!pexprChild->Pop()->FLogical()) { continue; } CColRefSetArray *pdrgpcrs = pexprChild->DerivePropertyConstraint()->PdrgpcrsEquivClasses(); if (pcrs->FContained(pdrgpcrs)) { return
true; } } return false; } // additional equality predicates are generated based on the equivalence // classes in the constraint properties of the expression. CExpression * CExpressionPreprocessor::PexprAddEqualityPreds ( CMemoryPool *mp, CExpression *pexpr, CColRefSet *pcrsProcessed ) { GPOS_ASSERT(NULL != pcrsProcessed); GPOS_ASSERT(NULL != pexpr); GPOS_ASSERT(pexpr->Pop()->FLogical()); const ULONG ulChildren = pexpr->Arity(); CPropConstraint *ppc = pexpr->DerivePropertyConstraint(); CExpression *pexprPred = NULL; COperator *pop = pexpr->Pop(); if (CUtils::FLogicalDML(pop)) { pexprPred = CUtils::PexprScalarConstBool(mp, true); } else { CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); CColRefSetArray *pdrgpcrs = ppc->PdrgpcrsEquivClasses(); GPOS_ASSERT(NULL != pdrgpcrs); const ULONG ulEquivClasses = pdrgpcrs->Size(); for (ULONG ul = 0; ul < ulEquivClasses; ul++) { CColRefSet *pcrsEquivClass = (*pdrgpcrs)[ul]; CColRefSet *pcrsEquality = GPOS_NEW(mp) CColRefSet(mp); pcrsEquality->Include(pcrsEquivClass); pcrsEquality->Exclude(pcrsProcessed); // if equivalence class comes from any of the children, then skip it if (FEquivClassFromChild(pcrsEquality, pexpr)) { pcrsEquality->Release(); continue; } CExpression *pexprEquality = PexprConjEqualityPredicates(mp, pcrsEquality); pcrsProcessed->Include(pcrsEquality); pcrsEquality->Release(); pdrgpexpr->Append(pexprEquality); } pexprPred = CPredicateUtils::PexprConjunction(mp, pdrgpexpr); } CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp); for (ULONG ul = 0; ul < ulChildren; ul++) { CExpression *pexprChild = (*pexpr)[ul]; if (pexprChild->Pop()->FLogical()) { CExpression *pexprChildNew = PexprAddEqualityPreds(mp, pexprChild, pcrsProcessed); pdrgpexprChildren->Append(pexprChildNew); } else { pexprChild->AddRef(); pdrgpexprChildren->Append(pexprChild); } } pop->AddRef(); return CUtils::PexprSafeSelect ( mp, GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren), pexprPred ); } // generate 
predicates for the given set of columns based on the given // constraint property. Columns for which predicates are generated will be // added to the set of processed columns CExpression * CExpressionPreprocessor::PexprScalarPredicates ( CMemoryPool *mp, CPropConstraint *ppc, CColRefSet *pcrsNotNull, CColRefSet *pcrs, CColRefSet *pcrsProcessed ) { CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); CColRefSetIter crsi(*pcrs); while (crsi.Advance()) { CColRef *colref = crsi.Pcr(); CExpression *pexprScalar = ppc->PexprScalarMappedFromEquivCols(mp, colref); if (NULL == pexprScalar) { continue; } pcrsProcessed->Include(colref); // do not add a NOT NULL predicate if column is not nullable or if it // already has another predicate on it if (CUtils::FScalarNotNull(pexprScalar) && (pcrsNotNull->FMember(colref) || ppc->Pcnstr()->FConstraint(colref))) { pexprScalar->Release(); continue; } pdrgpexpr->Append(pexprScalar); } if (0 == pdrgpexpr->Size()) { pdrgpexpr->Release(); return NULL; } return CPredicateUtils::PexprConjunction(mp, pdrgpexpr); } // process scalar expressions for generating additional predicates based on // derived constraints. 
This function is needed because scalar expressions // can have relational children when there are subqueries CExpression * CExpressionPreprocessor::PexprFromConstraintsScalar ( CMemoryPool *mp, CExpression *pexpr ) { GPOS_ASSERT(NULL != pexpr); GPOS_ASSERT(pexpr->Pop()->FScalar()); if (!CUtils::FHasSubquery(pexpr)) { pexpr->AddRef(); return pexpr; } const ULONG ulChildren = pexpr->Arity(); CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp); for (ULONG ul = 0; ul < ulChildren; ul++) { CExpression *pexprChild = (*pexpr)[ul]; if (pexprChild->Pop()->FScalar()) { pexprChild = PexprFromConstraintsScalar(mp, pexprChild); } else { GPOS_ASSERT(pexprChild->Pop()->FLogical()); CColRefSet *pcrsProcessed = GPOS_NEW(mp) CColRefSet(mp); pexprChild = PexprFromConstraints(mp, pexprChild, pcrsProcessed); pcrsProcessed->Release(); } pdrgpexprChildren->Append(pexprChild); } COperator *pop = pexpr->Pop(); pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren); } // Imply new predicates on LOJ's inner child based on constraints derived // from LOJ's outer child and join predicate CExpression * CExpressionPreprocessor::PexprWithImpliedPredsOnLOJInnerChild ( CMemoryPool *mp, CExpression *pexprLOJ, BOOL *pfAddedPredicates // output: set to True if new predicates are added to inner child ) { GPOS_ASSERT(NULL != pexprLOJ); GPOS_ASSERT(NULL != pfAddedPredicates); GPOS_ASSERT(COperator::EopLogicalLeftOuterJoin == pexprLOJ->Pop()->Eopid()); CExpression *pexprOuter = (*pexprLOJ)[0]; CExpression *pexprInner = (*pexprLOJ)[1]; CExpression *pexprOuterJoinPred = (*pexprLOJ)[2]; // merge children constraints with constraints derived from join's predicate CExpressionHandle exprhdl(mp); exprhdl.Attach(pexprLOJ); CPropConstraint *ppc = CLogical::PpcDeriveConstraintFromPredicates(mp, exprhdl); // use the computed constraint to derive a scalar predicate on the inner child CColRefSet *pcrsInnerOutput = pexprInner->DeriveOutputColumns(); CColRefSet *pcrsInnerNotNull = 
pexprInner->DeriveNotNullColumns(); // generate a scalar predicate from the computed constraint, restricted to LOJ inner child CColRefSet *pcrsProcessed = GPOS_NEW(mp) CColRefSet(mp); CExpression *pexprPred = PexprScalarPredicates(mp, ppc, pcrsInnerNotNull, pcrsInnerOutput, pcrsProcessed); pcrsProcessed->Release(); ppc->Release(); pexprInner->AddRef(); if (NULL != pexprPred && !CUtils::FScalarConstTrue(pexprPred)) { // if a new predicate was added, set the output flag to True *pfAddedPredicates = true; pexprPred->AddRef(); CExpression *pexprSelect = CUtils::PexprLogicalSelect(mp, pexprInner, pexprPred); CExpression *pexprInnerNormalized = CNormalizer::PexprNormalize(mp, pexprSelect); pexprSelect->Release(); pexprInner = pexprInnerNormalized; } CRefCount::SafeRelease(pexprPred); // recursively process inner child CExpression *pexprNewInner = PexprOuterJoinInferPredsFromOuterChildToInnerChild(mp, pexprInner, pfAddedPredicates); pexprInner->Release(); // recursively process outer child CExpression *pexprNewOuter = PexprOuterJoinInferPredsFromOuterChildToInnerChild(mp, pexprOuter, pfAddedPredicates); pexprOuterJoinPred->AddRef(); COperator *pop = pexprLOJ->Pop(); pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pexprNewOuter, pexprNewInner, pexprOuterJoinPred); } // Infer predicate from outer child to inner child of the outer join, // // for LOJ expressions with predicates on outer child, e.g., // // +-LOJ(x=y) // |---Select(x=5) // | +----X // +----Y // // this function implies an equivalent predicate (y=5) on the inner child of LOJ: // // +-LOJ(x=y) // |---Select(x=5) // | +----X // +---Select(y=5) // +----Y // // the correctness of this rewrite can be proven as follows: // - By removing all tuples from Y that do not satisfy (y=5), the LOJ // results, where x=y, are retained. The reason is that any such join result // must satisfy (x=5 ^ x=y) which implies that (y=5). 
// // - LOJ results that correspond to tuples from X not joining with any tuple // from Y are also retained. The reason is that such join results can only be // produced if for all tuples in Y, we have (y!=5). By selecting Y tuples where (y=5), // if we end up with no Y tuples, the LOJ results will be generated by joining X with empty Y. // This is the same as joining with all tuples from Y with (y!=5). If we end up with // any tuple in Y satisfying (y=5), no LOJ results corresponding to X tuples not joining // with Y can be produced. // // to implement this rewrite in a general form, we need to imply general constraints on // LOJ's inner child from constraints that exist on LOJ's outer child. The generated predicate // from this inference can only be inserted below LOJ (on top of the inner child), and cannot be // inserted on top of LOJ, otherwise we may wrongly convert LOJ to inner-join. CExpression * CExpressionPreprocessor::PexprOuterJoinInferPredsFromOuterChildToInnerChild ( CMemoryPool *mp, CExpression *pexpr, BOOL *pfAddedPredicates // output: set to True if new predicates are added to inner child ) { GPOS_ASSERT(NULL != pexpr); GPOS_ASSERT(NULL != pfAddedPredicates); COperator *pop = pexpr->Pop(); if (COperator::EopLogicalLeftOuterJoin == pop->Eopid()) { return PexprWithImpliedPredsOnLOJInnerChild(mp, pexpr, pfAddedPredicates); } // not an outer join, process children recursively CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); const ULONG ulChildren = pexpr->Arity(); for (ULONG ul = 0; ul < ulChildren; ul++) { CExpression *pexprChild = PexprOuterJoinInferPredsFromOuterChildToInnerChild(mp, (*pexpr)[ul], pfAddedPredicates); pdrgpexpr->Append(pexprChild); } pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexpr); } // additional predicates are generated based on the derived constraint // properties of the expression. No predicates are generated for the columns // in the already processed set. 
This set is expanded with more columns // that get processed along the way CExpression * CExpressionPreprocessor::PexprFromConstraints ( CMemoryPool *mp, CExpression *pexpr, CColRefSet *pcrsProcessed ) { GPOS_ASSERT(NULL != pcrsProcessed); GPOS_ASSERT(NULL != pexpr); GPOS_ASSERT(pexpr->Pop()->FLogical()); const ULONG ulChildren = pexpr->Arity(); CPropConstraint *ppc = pexpr->DerivePropertyConstraint(); CColRefSet *pcrsNotNull = pexpr->DeriveNotNullColumns(); CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp); for (ULONG ul = 0; ul < ulChildren; ul++) { CExpression *pexprChild = (*pexpr)[ul]; if (pexprChild->Pop()->FScalar()) { pexprChild = PexprFromConstraintsScalar(mp, pexprChild); pdrgpexprChildren->Append(pexprChild); continue; } // process child CExpression *pexprChildNew = PexprFromConstraints(mp, pexprChild, pcrsProcessed); CColRefSet *pcrsOutChild = GPOS_NEW(mp) CColRefSet(mp); // output columns on which predicates must be inferred pcrsOutChild->Include(pexprChild->DeriveOutputColumns()); // exclude column references on which predicates had been already inferred, // this avoids generating duplicate predicates on the parent node if a // predicate has already been placed on the child. 
pcrsOutChild->Exclude(pcrsProcessed); // generate predicates for the output columns of child CExpression *pexprPred = PexprScalarPredicates(mp, ppc, pcrsNotNull, pcrsOutChild, pcrsProcessed); pcrsOutChild->Release(); if (NULL != pexprPred) { pdrgpexprChildren->Append(CUtils::PexprSafeSelect(mp, pexprChildNew, pexprPred)); } else { pdrgpexprChildren->Append(pexprChildNew); } } COperator *pop = pexpr->Pop(); pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren); } // eliminate subtrees that have a zero output cardinality, replacing them // with a const table get with the same output schema and zero tuples CExpression * CExpressionPreprocessor::PexprPruneEmptySubtrees ( CMemoryPool *mp, CExpression *pexpr ) { GPOS_ASSERT(NULL != pexpr); COperator *pop = pexpr->Pop(); if (pop->FLogical() && !CUtils::FLogicalDML(pop)) { // if maxcard = 0: return a const table get with same output columns and zero tuples if (0 == pexpr->DeriveMaxCard()) { // output columns CColRefArray *colref_array = pexpr->DeriveOutputColumns()->Pdrgpcr(mp); // empty output data IDatum2dArray *pdrgpdrgpdatum = GPOS_NEW(mp) IDatum2dArray(mp); COperator *popCTG = GPOS_NEW(mp) CLogicalConstTableGet(mp, colref_array, pdrgpdrgpdatum); return GPOS_NEW(mp) CExpression(mp, popCTG); } } // process children CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); const ULONG ulChildren = pexpr->Arity(); for (ULONG ul = 0; ul < ulChildren; ul++) { CExpression *pexprChild = PexprPruneEmptySubtrees(mp, (*pexpr)[ul]); pdrgpexpr->Append(pexprChild); } pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexpr); } // eliminate CTE Anchors for CTEs that have zero consumers CExpression * CExpressionPreprocessor::PexprRemoveUnusedCTEs ( CMemoryPool *mp, CExpression *pexpr ) { GPOS_ASSERT(NULL != pexpr); COperator *pop = pexpr->Pop(); if (COperator::EopLogicalCTEAnchor == pop->Eopid()) { ULONG id = CLogicalCTEAnchor::PopConvert(pop)->Id(); if 
(!COptCtxt::PoctxtFromTLS()->Pcteinfo()->FUsed(id)) { GPOS_ASSERT(1 == pexpr->Arity()); return PexprRemoveUnusedCTEs(mp, (*pexpr)[0]); } } // process children CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); const ULONG ulChildren = pexpr->Arity(); for (ULONG ul = 0; ul < ulChildren; ul++) { CExpression *pexprChild = PexprRemoveUnusedCTEs(mp, (*pexpr)[ul]); pdrgpexpr->Append(pexprChild); } pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexpr); } // for all consumers of the same CTE, collect all selection predicates // on top of these consumers, if any, and store them in hash map void CExpressionPreprocessor::CollectCTEPredicates ( CMemoryPool *mp, CExpression *pexpr, CTEPredsMap *phm ) { GPOS_CHECK_STACK_SIZE; if ( COperator::EopLogicalSelect == pexpr->Pop()->Eopid() && COperator::EopLogicalCTEConsumer == (*pexpr)[0]->Pop()->Eopid() && 0 == pexpr->DeriveOuterReferences()->Size() // no outer references in selection predicate ) { CExpression *pexprScalar = (*pexpr)[1]; if (!pexprScalar->DeriveHasSubquery()) { CExpression *pexprChild = (*pexpr)[0]; CLogicalCTEConsumer *popConsumer = CLogicalCTEConsumer::PopConvert(pexprChild->Pop()); ULONG ulCTEId = popConsumer->UlCTEId(); CExpression *pexprProducer = COptCtxt::PoctxtFromTLS()->Pcteinfo()->PexprCTEProducer(ulCTEId); GPOS_ASSERT(NULL != pexprProducer); CLogicalCTEProducer *popProducer = CLogicalCTEProducer::PopConvert(pexprProducer->Pop()); UlongToColRefMap *colref_mapping = CUtils::PhmulcrMapping(mp, popConsumer->Pdrgpcr(), popProducer->Pdrgpcr()); CExpression *pexprRemappedScalar = pexprScalar->PexprCopyWithRemappedColumns(mp, colref_mapping, true /*must_exist*/); colref_mapping->Release(); CExpressionArray *pdrgpexpr = phm->Find(&ulCTEId); if (NULL == pdrgpexpr) { pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); BOOL fInserted GPOS_ASSERTS_ONLY = phm->Insert(GPOS_NEW(mp) ULONG(ulCTEId), pdrgpexpr); GPOS_ASSERT(fInserted); } pdrgpexpr->Append(pexprRemappedScalar); } } // process children 
recursively const ULONG ulChildren = pexpr->Arity(); for (ULONG ul = 0; ul < ulChildren; ul++) { CollectCTEPredicates(mp, (*pexpr)[ul], phm); } } // add CTE predicates collected from consumers to producer expressions void CExpressionPreprocessor::AddPredsToCTEProducers ( CMemoryPool *mp, CExpression *pexpr ) { CTEPredsMap *phm = GPOS_NEW(mp) CTEPredsMap(mp); CollectCTEPredicates(mp, pexpr, phm); CCTEInfo *pcteinfo = COptCtxt::PoctxtFromTLS()->Pcteinfo(); CTEPredsMapIter mi(phm); while (mi.Advance()) { ULONG ulCTEId = *(mi.Key()); CExpression *pexprProducer = pcteinfo->PexprCTEProducer(ulCTEId); GPOS_ASSERT(NULL != pexprProducer); ULONG ulConsumers = pcteinfo->UlConsumers(ulCTEId); CExpressionArray *pdrgpexpr = const_cast<CExpressionArray *>(mi.Value()); // skip the propagation of predicate contains volatile function e.g. random() (value change within a scan) if (CPredicateUtils::FContainsVolatileFunction(pdrgpexpr)) { continue; } if (0 < ulConsumers && pdrgpexpr->Size() == ulConsumers) { // add new predicate to CTE producer only if all consumers have selection predicates, // for example, in the following query // 'with v as (select * from A) select * from v where a > 5 union select * from v where b > 5' // we add the new predicate '(a > 5 or b > 5)' to CTE producer expression, // while for the following query // 'with v as (select * from A) select * from v where a > 5 union select * from v' // we do not add any new predicates to CTE producer expression pdrgpexpr->AddRef(); CExpression *pexprPred = CPredicateUtils::PexprDisjunction(mp, pdrgpexpr); (*pexprProducer)[0]->AddRef(); CExpression *pexprSelect = CUtils::PexprLogicalSelect(mp, (*pexprProducer)[0], pexprPred); COperator *pop = pexprProducer->Pop(); pop->AddRef(); CExpression *pexprNewProducer = GPOS_NEW(mp) CExpression(mp, pop, pexprSelect); pcteinfo->ReplaceCTEProducer(pexprNewProducer); pexprNewProducer->Release(); } } phm->Release(); } // derive constraints on given expression tree, and add new predicates 
by implication CExpression * CExpressionPreprocessor::PexprAddPredicatesFromConstraints ( CMemoryPool *mp, CExpression *pexpr ) { // normalize the tree, push down predicates (since we infer predicates bottom-up, // we want the predicates/constraints to be at the lowest possible point in the tree) CExpression *pexprNormalized = CNormalizer::PexprNormalize(mp, pexpr); // walk the tree and generate additional predicates from constraint properties // based on equivalence classes, e.g. constraint a=1 and equiv class {a,b} adds pred b=1 CColRefSet *pcrsProcessed = GPOS_NEW(mp) CColRefSet(mp); CExpression *pexprConstraints = PexprFromConstraints(mp, pexprNormalized, pcrsProcessed); GPOS_CHECK_ABORT; pexprNormalized->Release(); pcrsProcessed->Release(); // walk the tree again and generate equality predicates for columns in // equivalence classes, e.g. {cr1,cr2,cr3} results in cr1=cr2 and cr1=cr3 and cr2=cr3 pcrsProcessed = GPOS_NEW(mp) CColRefSet(mp); CExpression *pexprAddEqualityPreds = PexprAddEqualityPreds(mp, pexprConstraints, pcrsProcessed); // normalize the tree, push down predicates CExpression *pexprEqualityNormalized = CNormalizer::PexprNormalize(mp, pexprAddEqualityPreds); GPOS_CHECK_ABORT; pcrsProcessed->Release(); pexprConstraints->Release(); pexprAddEqualityPreds->Release(); // remove generated duplicate predicates CExpression *pexprDeduped = CExpressionUtils::PexprDedupChildren(mp, pexprEqualityNormalized); pexprEqualityNormalized->Release(); return pexprDeduped; } // driver for inferring predicates from constraints CExpression * CExpressionPreprocessor::PexprInferPredicates ( CMemoryPool *mp, CExpression *pexpr ) { GPOS_ASSERT(NULL != pexpr); // generate new predicates from constraint properties and normalize the result CExpression *pexprWithPreds = PexprAddPredicatesFromConstraints(mp, pexpr); // infer predicates from outer child to inner child of outer join BOOL fNewPreds = false; CExpression *pexprInferredPreds = 
PexprOuterJoinInferPredsFromOuterChildToInnerChild(mp, pexprWithPreds, &fNewPreds); pexprWithPreds->Release(); pexprWithPreds = pexprInferredPreds; if (fNewPreds) { // if we succeeded in generating new predicates below outer join, we need to // re-derive constraints to generate any other potential predicates pexprWithPreds = PexprAddPredicatesFromConstraints(mp, pexprInferredPreds); pexprInferredPreds->Release(); } return pexprWithPreds; } // Driver for pruning unused computed columns // // The set of columns required by the query is passed to this pre-processing // stage and copied to a new set. This driver function // calls the PexprPruneUnusedComputedColsRecursive function with the copied // required column set. The original required column set is not modified by // this preprocessor. // // An extra copy of the required column set is avoided in each recursive call by // creating a one-time copy and passing it by reference to all the recursive // calls. // // The functional behavior of PruneUnusedComputedCols changed slightly // because we do not delete the required column set at the end of every // call but pass it on to the subsequent recursive calls. However, // it is safe to add required columns at each operator we traverse, because none // of the required columns from one child of the tree will appear on the project // lists of the other children. // // Therefore, the columns added to the required column set by the recursive // calls through the shared reference will not adversely affect // the overall result.
CExpression * CExpressionPreprocessor::PexprPruneUnusedComputedCols ( CMemoryPool *mp, CExpression *pexpr, CColRefSet *pcrsReqd ) { GPOS_ASSERT(NULL != pexpr); if (NULL == pcrsReqd || GPOS_FTRACE(EopttraceDisablePruneUnusedComputedColumns)) { pexpr->AddRef(); return pexpr; } CColRefSet *pcrsReqdNew = GPOS_NEW(mp) CColRefSet(mp); pcrsReqdNew->Include(pcrsReqd); CExpression *pExprNew = PexprPruneUnusedComputedColsRecursive(mp,pexpr,pcrsReqdNew); pcrsReqdNew->Release(); return pExprNew; } // Workhorse for pruning unused computed columns CExpression * CExpressionPreprocessor::PexprPruneUnusedComputedColsRecursive ( CMemoryPool *mp, CExpression *pexpr, CColRefSet *pcrsReqd ) { GPOS_ASSERT(NULL != pexpr); COperator *pop = pexpr->Pop(); // leave subquery alone if (CUtils::FSubquery(pop)) { pexpr->AddRef(); return pexpr; } if (COperator::EopLogicalProject == pop->Eopid() || COperator::EopLogicalGbAgg == pop->Eopid()) { CExpression *pexprProjList = (*pexpr)[1]; CColRefSet *pcrsDefined = pexprProjList->DeriveDefinedColumns(); CColRefSet *pcrsSetReturningFunction = pexprProjList->DeriveSetReturningFunctionColumns(); pcrsReqd->Include(CLogical::PopConvert(pop)->PcrsLocalUsed()); // columns containing set-returning functions are needed for correct query results pcrsReqd->Union(pcrsSetReturningFunction); CColRefSet *pcrsUnusedLocal = GPOS_NEW(mp) CColRefSet(mp); pcrsUnusedLocal->Include(pcrsDefined); pcrsUnusedLocal->Difference(pcrsReqd); if (0 < pcrsUnusedLocal->Size()) // need to prune { // actual construction of new operators without unnecessary project elements CExpression *pexprResult = PexprPruneProjListProjectOrGbAgg(mp, pexpr, pcrsUnusedLocal, pcrsDefined, pcrsReqd); pcrsUnusedLocal->Release(); return pexprResult; } pcrsUnusedLocal->Release(); } if (pop->FLogical()) { // for logical operators, collect the used columns // this includes columns used by the operator itself and its scalar children CExpressionHandle exprhdl(mp); exprhdl.Attach(pexpr); CColRefSet 
*pcrsLogicalUsed = exprhdl.PcrsUsedColumns(mp); pcrsReqd->Include(pcrsLogicalUsed); pcrsLogicalUsed->Release(); } // process children CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); const ULONG ulChildren = pexpr->Arity(); for (ULONG ul = 0; ul < ulChildren; ul++) { CExpression *pexprChild = PexprPruneUnusedComputedColsRecursive(mp, (*pexpr)[ul], pcrsReqd); pdrgpexpr->Append(pexprChild); } pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexpr); } // Construct new Project or GroupBy operator without unused computed // columns as project elements CExpression * CExpressionPreprocessor::PexprPruneProjListProjectOrGbAgg ( CMemoryPool *mp, CExpression *pexpr, CColRefSet *pcrsUnused, CColRefSet *pcrsDefined, const CColRefSet *pcrsReqd ) { GPOS_ASSERT(NULL != pexpr); GPOS_ASSERT(NULL != pcrsUnused); GPOS_ASSERT(NULL != pcrsDefined); GPOS_ASSERT(NULL != pcrsReqd); CExpression *pexprResult = NULL; COperator *pop = pexpr->Pop(); CColRefSet *pcrsReqdNew = GPOS_NEW(mp) CColRefSet(mp); pcrsReqdNew->Include(pcrsReqd); GPOS_ASSERT(COperator::EopLogicalProject == pop->Eopid() || COperator::EopLogicalGbAgg == pop->Eopid()); CExpression *pexprRelational = (*pexpr)[0]; CExpression *pexprProjList = (*pexpr)[1]; // recursively process the relational child CExpression *pexprRelationalNew = NULL; if (pcrsUnused->Size() == pcrsDefined->Size()) { // the entire project list needs to be pruned if (COperator::EopLogicalProject == pop->Eopid()) { pexprRelationalNew = PexprPruneUnusedComputedColsRecursive(mp, pexprRelational, pcrsReqdNew); pexprResult = pexprRelationalNew; } else { GPOS_ASSERT(COperator::EopLogicalGbAgg == pop->Eopid()); CExpression *pexprProjectListNew = NULL; CColRefArray *pdrgpcrGroupingCols = CLogicalGbAgg::PopConvert(pop)->Pdrgpcr(); if (0 < pdrgpcrGroupingCols->Size()) { // if grouping cols exist, we need to maintain the GbAgg with an empty project list pexprProjectListNew = GPOS_NEW(mp) CExpression(mp, GPOS_NEW(mp) CScalarProjectList(mp)); 
pcrsReqdNew->Include(pdrgpcrGroupingCols); } else { // TODO: 10/15/2015: if there are no grouping cols, we could remove the entire GbAgg and plug in a ConstTableGet instead pexprProjList->AddRef(); pexprProjectListNew = pexprProjList; CExpressionHandle exprhdl(mp); exprhdl.Attach(pexpr); CColRefSet *pcrsLogicalUsed = exprhdl.PcrsUsedColumns(mp); pcrsReqdNew->Include(pcrsLogicalUsed); pcrsLogicalUsed->Release(); } pop->AddRef(); pexprRelationalNew = PexprPruneUnusedComputedColsRecursive(mp, pexprRelational, pcrsReqdNew); pexprResult = GPOS_NEW(mp) CExpression(mp, pop, pexprRelationalNew, pexprProjectListNew); } } else { // only remove part of the project elements CExpressionArray *pdrgpexprPrElRemain = GPOS_NEW(mp) CExpressionArray(mp); const ULONG ulPrjEls = pexprProjList->Arity(); CExpressionHandle exprhdl(mp); for (ULONG ul = 0; ul < ulPrjEls; ul++) { CExpression *pexprPrEl = (*pexprProjList)[ul]; CScalarProjectElement *popPrEl = CScalarProjectElement::PopConvert(pexprPrEl->Pop()); if (!pcrsUnused->FMember(popPrEl->Pcr())) { pexprPrEl->AddRef(); pdrgpexprPrElRemain->Append(pexprPrEl); pcrsReqdNew->Include(pexprPrEl->DeriveUsedColumns()); } } GPOS_ASSERT(0 < pdrgpexprPrElRemain->Size()); CExpression *pexprNewProjectList = GPOS_NEW(mp) CExpression(mp, GPOS_NEW(mp) CScalarProjectList(mp), pdrgpexprPrElRemain); pop->AddRef(); pexprRelationalNew = PexprPruneUnusedComputedColsRecursive(mp, pexprRelational, pcrsReqdNew); pexprResult = GPOS_NEW(mp) CExpression(mp, pop, pexprRelationalNew, pexprNewProjectList); } pcrsReqdNew->Release(); return pexprResult; } // reorder the children of a scalar comparison to ensure that the left child is a scalar ident and the right child is a scalar const, if they are not already in that order CExpression * CExpressionPreprocessor::PexprReorderScalarCmpChildren ( CMemoryPool *mp, CExpression *pexpr ) { GPOS_ASSERT(NULL != pexpr); COperator *pop = pexpr->Pop(); if (CUtils::FScalarCmp(pexpr) || COperator::EopScalarIsDistinctFrom == pexpr->Pop()->Eopid()) { GPOS_ASSERT(2 == 
pexpr->Arity()); CExpression *pexprLeft = (*pexpr)[0]; CExpression *pexprRight = (*pexpr)[1]; if (CUtils::FScalarConst(pexprLeft) && CUtils::FScalarIdent(pexprRight)) { CScalarCmp *popScalarCmpCommuted = (dynamic_cast<CScalarCmp *>(pop))->PopCommutedOp(mp, pop); if (popScalarCmpCommuted) { pexprLeft->AddRef(); pexprRight->AddRef(); return GPOS_NEW(mp) CExpression(mp, popScalarCmpCommuted, pexprRight, pexprLeft); } } } // process children CExpressionArray *pdrgpexpr = GPOS_NEW(mp) CExpressionArray(mp); const ULONG ulChildren = pexpr->Arity(); for (ULONG ul = 0; ul < ulChildren; ul++) { CExpression *pexprChild = PexprReorderScalarCmpChildren(mp, (*pexpr)[ul]); pdrgpexpr->Append(pexprChild); } pop->AddRef(); return GPOS_NEW(mp) CExpression(mp, pop, pdrgpexpr); } // converts IN subquery to a predicate AND an EXISTS subquery // Example Algebrized queries: // 1. Without a Project List: // Input: // +--CScalarSubqueryAny(=)["c2" (0)] // |--CLogicalGet "foo" ("foo"), Columns: ["c1" (8) ...] Key sets: {[1,7]} // +--CScalarIdent "c2" (0) // // Output: // +--CScalarBoolOp (EboolopAnd) // |--CScalarCmp (=) // | |--CScalarIdent "c2" (0) // | +--CScalarIdent "c2" (0) // +--CScalarSubqueryExists // +--CLogicalGet "foo" ("foo"), Columns: ["c1" (8) ...] Key sets: {[1,7]} // // 2. With a Project List: // Input: // +--CScalarSubqueryAny(=)["?column?" (16)] // |--CLogicalProject // | |--CLogicalGet "foo" ("foo"), Columns: ["c1" (8) ...] Key sets: {[1,7]} // | +--CScalarProjectList // | +--CScalarProjectElement "?column?" (16) // | +--CScalarOp (+) // | |--CScalarIdent "c2" (0) // | +--CScalarConst (1) // +--CScalarIdent "c2" (0) // // Output: // +--CScalarBoolOp (EboolopAnd) // |--CScalarCmp (=) // | |--CScalarIdent "c2" (0) // | +--CScalarOp (+) // | |--CScalarIdent "c2" (0) // | +--CScalarConst (1) // +--CScalarSubqueryExists // +--CLogicalGet "foo" ("foo"), Columns: ["c1" (8) ...] 
Key sets: {[1,7]} CExpression * CExpressionPreprocessor::ConvertInToSimpleExists ( CMemoryPool *mp, CExpression *pexpr ) { GPOS_ASSERT(COperator::EopScalarSubqueryAny == pexpr->Pop()->Eopid()); COperator *pop = pexpr->Pop(); CExpression *pexprRelational = (*pexpr)[0]; // Example for below variables: // SELECT * FROM bar WHERE // bar.a in (SELECT bar.b FROM foo) <- Input expression (pexpr) // | | // pexprLeft pexprRight // generate scalarOp expression by using column reference of the IN subquery's // inner child's column reference as well as the expression extracted above // from the project element CExpression *pexprLeft = (*pexpr)[1]; if (CUtils::FSubquery(pexprLeft->Pop())) { // don't convert if inner child is a subquery // Example: SELECT * FROM bar WHERE (SELECT 1) IN (SELECT c2 FROM foo); return NULL; } // since Orca doesn't support IN subqueries of multiple columns such as // (a,a) in (select foo.a, foo.a from ...) , // only extract the first expression under the first project element in the // project list and make it as the right operand to the scalar operation. 
CExpression *pexprRight = NULL; CExpression *pexprSubqOfExists = NULL; if (COperator::EopLogicalProject == pexprRelational->Pop()->Eopid()) { pexprRight = CUtils::PNthProjectElementExpr(pexprRelational, 0); pexprRight->AddRef(); pexprSubqOfExists = (*pexprRelational)[0]; } else { pexprRight = CUtils::PexprScalarIdent(mp, CScalarSubqueryAny::PopConvert(pop)->Pcr()); pexprSubqOfExists = pexprRelational; } CMDAccessor *md_accessor = COptCtxt::PoctxtFromTLS()->Pmda(); IMDId *mdid = CScalarSubqueryAny::PopConvert(pop)->MdIdOp(); const CWStringConst *str = md_accessor->RetrieveScOp(mdid)->Mdname().GetMDName(); mdid->AddRef(); pexprLeft->AddRef(); CExpression *pexprScalarOp = CUtils::PexprScalarCmp(mp, pexprLeft, pexprRight, *str, mdid); pexprSubqOfExists->AddRef(); CExpression *pexprScalarSubqExists = GPOS_NEW(mp) CExpression(mp, GPOS_NEW(mp) CScalarSubqueryExists(mp), pexprSubqOfExists); // AND the generated predicate with the EXISTS subquery expression and return. CExpressionArray *pdrgpexprBoolOperands = GPOS_NEW(mp) CExpressionArray(mp); pdrgpexprBoolOperands->Append(pexprScalarOp); pdrgpexprBoolOperands->Append(pexprScalarSubqExists); return CUtils::PexprScalarBoolOp(mp, CScalarBoolOp::EboolopAnd, pdrgpexprBoolOperands); } // rewrite IN subquery to EXIST subquery with a predicate // Example: // Input: SELECT * FROM foo WHERE foo.a IN (SELECT foo.b+1 FROM bar); // Output: SELECT * FROM foo WHERE foo.a=foo.b+1 AND EXISTS (SELECT * FROM bar); CExpression * CExpressionPreprocessor::PexprExistWithPredFromINSubq ( CMemoryPool *mp, CExpression *pexpr ) { // protect against stack overflow during recursion GPOS_CHECK_STACK_SIZE; GPOS_ASSERT(NULL != mp); GPOS_ASSERT(NULL != pexpr); COperator *pop = pexpr->Pop(); // recursively process children const ULONG arity = pexpr->Arity(); pop->AddRef(); CExpressionArray *pdrgpexprChildren = GPOS_NEW(mp) CExpressionArray(mp); for (ULONG ul = 0; ul < arity; ul++) { CExpression *pexprChild = PexprExistWithPredFromINSubq(mp, (*pexpr)[ul]); 
pdrgpexprChildren->Append(pexprChild); } CExpression *pexprNew = GPOS_NEW(mp) CExpression(mp, pop, pdrgpexprChildren); // Check if the operator is a SubqueryAny if (CUtils::FAnySubquery(pop)) { CExpression *pexprLogicalProject = (*pexprNew)[0]; // we do the conversion if the project list has an outer reference and // it does not include any column from the relational child. if (COperator::EopLogicalProject == pexprLogicalProject->Pop()->Eopid()) { // bail out if subquery has an inner reference or does not have any outer reference if (!CUtils::HasOuterRefs(pexprLogicalProject) || CUtils::FInnerRefInProjectList(pexprLogicalProject)) { return pexprNew; } } else { // perform conversion if subquery does not output any of the columns from relational child const CColRef *pcrSubquery = CScalarSubqueryAny::PopConvert(pop)->Pcr(); CColRefSet *pcrsRelationalChild = (*pexpr)[0]->DeriveOutputColumns(); if (pcrsRelationalChild->FMember(pcrSubquery)) { return pexprNew; } } CExpression *pexprNewConverted = ConvertInToSimpleExists(mp, pexprNew); if (NULL != pexprNewConverted) { pexprNew->Release(); pexprNew = pexprNewConverted; } } return pexprNew; } // main driver, pre-processing of input logical expression CExpression * CExpressionPreprocessor::PexprPreprocess ( CMemoryPool *mp, CExpression *pexpr, CColRefSet *pcrsOutputAndOrderCols // query output cols and cols used in the order specs ) { GPOS_ASSERT(NULL != mp); GPOS_ASSERT(NULL != pexpr); CAutoTimer at("\n[OPT]: Expression Preprocessing Time", GPOS_FTRACE(EopttracePrintOptimizationStatistics)); // (1) remove unused CTE anchors CExpression *pexprNoUnusedCTEs = PexprRemoveUnusedCTEs(mp, pexpr); GPOS_CHECK_ABORT; // (2.a) remove intermediate superfluous limit CExpression *pexprSimplifiedLimit = PexprRemoveSuperfluousLimit(mp, pexprNoUnusedCTEs); GPOS_CHECK_ABORT; pexprNoUnusedCTEs->Release(); // (2.b) remove intermediate superfluous distinct CExpression *pexprSimplifiedDistinct = PexprRemoveSuperfluousDistinctInDQA(mp, 
pexprSimplifiedLimit); GPOS_CHECK_ABORT; pexprSimplifiedLimit->Release(); // (3) trim unnecessary existential subqueries CExpression * pexprTrimmed = PexprTrimExistentialSubqueries(mp, pexprSimplifiedDistinct); GPOS_CHECK_ABORT; pexprSimplifiedDistinct->Release(); // (4) collapse cascaded union / union all CExpression *pexprNaryUnionUnionAll = PexprCollapseUnionUnionAll(mp, pexprTrimmed); GPOS_CHECK_ABORT; pexprTrimmed->Release(); // (5) remove superfluous outer references from the order spec in limits, grouping columns in GbAgg, and // Partition/Order columns in window operators CExpression *pexprOuterRefsEleminated = PexprRemoveSuperfluousOuterRefs(mp, pexprNaryUnionUnionAll); GPOS_CHECK_ABORT; pexprNaryUnionUnionAll->Release(); // (6) remove superfluous equality CExpression *pexprTrimmed2 = PexprPruneSuperfluousEquality(mp, pexprOuterRefsEleminated); GPOS_CHECK_ABORT; pexprOuterRefsEleminated->Release(); // (7) simplify quantified subqueries CExpression *pexprSubqSimplified = PexprSimplifyQuantifiedSubqueries(mp, pexprTrimmed2); GPOS_CHECK_ABORT; pexprTrimmed2->Release(); // (8) do preliminary unnesting of scalar subqueries CExpression *pexprSubqUnnested = PexprUnnestScalarSubqueries(mp, pexprSubqSimplified); GPOS_CHECK_ABORT; pexprSubqSimplified->Release(); // (9) unnest AND/OR/NOT predicates CExpression *pexprUnnested = CExpressionUtils::PexprUnnest(mp, pexprSubqUnnested); GPOS_CHECK_ABORT; pexprSubqUnnested->Release(); CExpression *pexprConvert2In = pexprUnnested; if (GPOS_FTRACE(EopttraceArrayConstraints)) { // (9.5) ensure predicates are array IN or NOT IN where applicable pexprConvert2In = PexprConvert2In(mp, pexprUnnested); GPOS_CHECK_ABORT; pexprUnnested->Release(); } // (10) infer predicates from constraints CExpression *pexprInferredPreds = PexprInferPredicates(mp, pexprConvert2In); GPOS_CHECK_ABORT; pexprConvert2In->Release(); // (11) eliminate self comparisons CExpression *pexprSelfCompEliminated = PexprEliminateSelfComparison(mp, 
pexprInferredPreds); GPOS_CHECK_ABORT; pexprInferredPreds->Release(); // (12) remove duplicate AND/OR children CExpression *pexprDeduped = CExpressionUtils::PexprDedupChildren(mp, pexprSelfCompEliminated); GPOS_CHECK_ABORT; pexprSelfCompEliminated->Release(); // (13) factorize common expressions CExpression *pexprFactorized = CExpressionFactorizer::PexprFactorize(mp, pexprDeduped); GPOS_CHECK_ABORT; pexprDeduped->Release(); // (14) infer filters out of components of disjunctive filters CExpression *pexprPrefiltersExtracted = CExpressionFactorizer::PexprExtractInferredFilters(mp, pexprFactorized); GPOS_CHECK_ABORT; pexprFactorized->Release(); // (15) pre-process window functions CExpression *pexprWindowPreprocessed = CWindowPreprocessor::PexprPreprocess(mp, pexprPrefiltersExtracted); GPOS_CHECK_ABORT; pexprPrefiltersExtracted->Release(); // (16) eliminate unused computed columns CExpression *pexprNoUnusedPrEl = PexprPruneUnusedComputedCols(mp, pexprWindowPreprocessed, pcrsOutputAndOrderCols); GPOS_CHECK_ABORT; pexprWindowPreprocessed->Release(); // (17) normalize expression CExpression *pexprNormalized1 = CNormalizer::PexprNormalize(mp, pexprNoUnusedPrEl); GPOS_CHECK_ABORT; pexprNoUnusedPrEl->Release(); // (18) transform outer join into inner join whenever possible CExpression *pexprLOJToIJ = PexprOuterJoinToInnerJoin(mp, pexprNormalized1); GPOS_CHECK_ABORT; pexprNormalized1->Release(); // (19) collapse cascaded inner and left outer joins CExpression *pexprCollapsed = PexprCollapseJoins(mp, pexprLOJToIJ); GPOS_CHECK_ABORT; pexprLOJToIJ->Release(); // (20) after transforming outer joins to inner joins, we may be able to generate more predicates from constraints CExpression *pexprWithPreds = PexprAddPredicatesFromConstraints(mp, pexprCollapsed); GPOS_CHECK_ABORT; pexprCollapsed->Release(); // (21) eliminate empty subtrees CExpression *pexprPruned = PexprPruneEmptySubtrees(mp, pexprWithPreds); GPOS_CHECK_ABORT; pexprWithPreds->Release(); // (22) collapse cascade of 
projects CExpression *pexprCollapsedProjects = PexprCollapseProjects(mp, pexprPruned); GPOS_CHECK_ABORT; pexprPruned->Release(); // (23) insert dummy project when the scalar subquery is under a project and returns an outer reference CExpression *pexprSubquery = PexprProjBelowSubquery(mp, pexprCollapsedProjects, false /* fUnderPrList */); GPOS_CHECK_ABORT; pexprCollapsedProjects->Release(); // (24) reorder the children of scalar cmp operator to ensure that left child is scalar ident and right child is scalar const CExpression *pexrReorderedScalarCmpChildren = PexprReorderScalarCmpChildren(mp, pexprSubquery); GPOS_CHECK_ABORT; pexprSubquery->Release(); // (25) rewrite IN subquery to EXIST subquery with a predicate CExpression *pexprExistWithPredFromINSubq = PexprExistWithPredFromINSubq(mp, pexrReorderedScalarCmpChildren); GPOS_CHECK_ABORT; pexrReorderedScalarCmpChildren->Release(); // (26) normalize expression again CExpression *pexprNormalized2 = CNormalizer::PexprNormalize(mp, pexprExistWithPredFromINSubq); GPOS_CHECK_ABORT; pexprExistWithPredFromINSubq->Release(); return pexprNormalized2; } // EOF
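The column-pruning pass above is tied to ORCA's operator classes and reference-counted memory management, but its core idea — push a growing set of required columns down the expression tree and drop project elements that define columns nobody above needs — can be sketched independently. The `Node`/`prune` names below are illustrative only, not ORCA's API:

```python
# Simplified, library-independent sketch of the unused-computed-column
# pruning pass: walk an expression tree top-down with a growing set of
# required columns, and drop project elements whose defined column is
# never required above them. (Toy model; not ORCA's actual API.)

class Node:
    def __init__(self, kind, children=(), defines=None, uses=None):
        self.kind = kind                      # e.g. 'scan', 'select', 'project'
        self.children = list(children)
        self.defines = dict(defines or {})    # computed col -> set of cols it uses
        self.uses = set(uses or ())           # cols the operator itself uses

def prune(node, required):
    # include columns used by the operator itself (cf. PcrsUsedColumns)
    required = set(required) | node.uses
    if node.kind == 'project':
        # keep only project elements whose target column is required
        kept = {col: used for col, used in node.defines.items() if col in required}
        for used in kept.values():
            required |= used                  # their input columns become required too
        child = prune(node.children[0], required)
        if not kept:
            return child                      # entire project list pruned
        return Node('project', [child], defines=kept)
    return Node(node.kind,
                [prune(ch, required) for ch in node.children],
                uses=node.uses)

# a project computes c = f(a) and d = g(b), but the query only needs c
tree = Node('project', [Node('scan')], defines={'c': {'a'}, 'd': {'b'}})
pruned = prune(tree, required={'c'})
print(sorted(pruned.defines))                 # ['c'] -- 'd' was pruned
```

As in the C++ version, when the whole project list of a Project node turns out to be prunable, the node itself is removed and its relational child is returned directly.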
<?php namespace App\Http\Requests; use Illuminate\Foundation\Http\FormRequest; class ChangePasswordRequest extends FormRequest { /** * Determine if the user is authorized to make this request. * * @return bool */ public function authorize() { return true; } /** * Get the validation rules that apply to the request. * * @return array */ public function rules() { return [ 'oldpassword' => 'required|password', 'password' => 'required|string|min:6|confirmed|different:oldpassword' ]; } }
https://github.com/Greeninfo-Network/IonicMapStarter

Jumpstart your Ionic map-based apps with this starter template. A minimal but functional, standalone mobile app from which to build your own creations. In a few minutes your app will have basemaps from Google Maps, Bing Maps, OSM and other tile services, will have panels for navigating the app, and will support caching tiles onto your device so the map can be browsed offline.

# Getting Started

Using Ionic CLI, you should be able to "ionic start" using this repo directly:

```
ionic start -i com.example.yourapp -a "Your App Name" -t https://github.com/greeninfo/IonicMapStarter YourAppFolder
```

After initializing your Ionic app, add platforms and plugins as needed for your own development. Typically you will want at least these, in addition to the several that Ionic installs by default:

```
ionic platform add android
ionic platform add ios
ionic plugin add cordova-plugin-geolocation
ionic plugin add cordova-plugin-file
ionic plugin add cordova-plugin-file-transfer
ionic plugin add cordova-plugin-inappbrowser
```

# Next Steps

Now you're on your own, writing your Ionic app your way. But some steps come up a lot, so you may want to just knock them off now so you can focus on your application instead of these details.

Edit _config.xml_ to set your app's name, author attribution, initial version number, etc.

Edit _config.xml_ and add these two lines to enable Internet access.

```
<allow-navigation href="*" />
<allow-intent href="*" />
```

If you're building for iOS, enable all orientations by adding this into _config.xml_:

```
<platform name="ios">
    <preference name="Orientation" value="all"/>
</platform>
```

Replace _resources/splash.png_ and _resources/icon.png_ with your own images, then run _ionic resources_ to build your new set of icons and splash screens. You may want to follow up by checking the image folders in your file explorer's thumbnail mode; sometimes it misses one.

Edit _index.js_ to set the initial area of the map (which is also the bounding box used by isLatLngWithinMaxBounds()), pick out which basemap options you want to use, and add your own. Bing requires an API key but has very liberal terms of use, while Google Maps has tighter terms of use but does not require an API key. Of course, you can supply your own basemaps from Mapbox or the like.

Unless you will be using both the Bing and Google APIs, you may want to remove the Bing and/or Google <script> tags from _index.html_ in order to speed up loading and reduce memory usage.

# The Bits and Pieces

This app brings together a few other libraries, and it's only right to mention them.

* Ionic framework. http://ionicframework.com/
* Leaflet. http://leafletjs.com/
* Bing Maps and shramov's Bing-Leaflet plugin. https://github.com/shramov/leaflet-plugins
* Google Maps and shramov's Google-Leaflet plugin. https://github.com/shramov/leaflet-plugins
* angular-leaflet-directive by tombatossals, but this is a modified version to support popups and bounds. https://github.com/tombatossals/angular-leaflet-directive

# Ionic's Official Map Template

Ionic does have an official map starter template, which deserves a word. https://github.com/driftyco/ionic-starter-maps

This template has only one page, and a sidemenu-based slide-in menu on the left. It has a few shortcomings and inflexibilities, which IonicMapStarter addresses:

* It has only one single panel and a sidemenu. If you want to switch to another panel, no mechanism is provided; sidemenu really does restrict you in that regard.
* When you switch panels Leaflet misbehaves and malfunctions (the old "hidden DIV" problem). IonicMapStarter works around that.
* IonicMapStarter supports buttons in both the top-right and top-left corners, and these are customized in each view. Sidemenu hogs the top-left corner, and doesn't allow you to place an icon in the top-right.
* It uses Google Maps, which has usage restrictions and other potential concerns for your use case. This uses Leaflet so you're without restriction, but also provides working code for Bing Maps and Google Maps.
* IonicMapStarter adds caching of tiles for offline use, as well as a UI for intentionally caching areas of the map. This can be extended to cache around an address, to cache the region of a park, etc.

This isn't to disparage the fabulous work that is Ionic, of course! But it demonstrates that for your use case one or the other may be preferable.

# Phonegap Build

The content of the _www_ folder should be a ready-to-run app with Phonegap Build. You should be able to ZIP up just the _www_ content and upload to PGB. I myself do not use Phonegap Build, and cannot provide support for it.
# Properties of Differentials, Smooth Manifolds

(Physics Forums thread: https://www.physicsforums.com/threads/properties-of-differentials-smooth-manifolds.674059/)

1. Feb 24, 2013

### BrainHurts

I'm reading the second edition of John M. Lee's Introduction to Smooth Manifolds, and he has a proposition that I'd like to understand better.

Let M, N, and P be smooth manifolds with or without boundary, let F:M→N and G:N→P be smooth maps, and let p∈M.

Proposition: T_pF : T_pM → T_{F(p)}N is linear.

OK, I know that v∈T_pM means that v:C^∞(M)→ℝ is a derivation, and that T_pM is a vector space.

Does this mean that the image of (av+bw) under T_pF, where v,w ∈ T_pM and a,b ∈ ℝ, is aT_pF(v) + bT_pF(w), which means T_pF is linear?

2. Feb 24, 2013

### micromass (Staff Emeritus)

Yes, that's what it means.
# 2018 AMC 10A Problems/Problem 13

(from https://artofproblemsolving.com/wiki/index.php?title=2018_AMC_10A_Problems/Problem_13)

A paper triangle with sides of lengths 3, 4, and 5 inches, as shown, is folded so that point $A$ falls on point $B$. What is the length in inches of the crease?

$[asy] draw((0,0)--(4,0)--(4,3)--(0,0)); label("A", (0,0), SW); label("B", (4,3), NE); label("C", (4,0), SE); label("4", (2,0), S); label("3", (4,1.5), E); label("5", (2,1.5), NW); fill(origin--(0,0)--(4,3)--(4,0)--cycle, gray); [/asy]$

$\textbf{(A) } 1+\frac12 \sqrt2 \qquad \textbf{(B) } \sqrt3 \qquad \textbf{(C) } \frac74 \qquad \textbf{(D) } \frac{15}{8} \qquad \textbf{(E) } 2$

## Solution 1

First, we need to realize that the crease line is just the perpendicular bisector of side $AB$, the hypotenuse of right triangle $\triangle ABC$. Call the midpoint of $AB$ point $D$. Draw this line and call its intersection point with $AC$ point $E$. Now, $\triangle ACB$ is similar to $\triangle ADE$ by $AA$ similarity. Setting up the ratios, we find that $$\frac{BC}{AC}=\frac{DE}{AD} \Rightarrow \frac{3}{4}=\frac{DE}{\frac{5}{2}} \Rightarrow DE=\frac{15}{8}.$$ Thus, our answer is $\boxed{D}$.

~Nivek

## Solution 2 (if you are already out of time)

Simply make a 3x4x5 inch triangle and then cut it out (using fine rips). Then, make the fold and measure. It will be $\frac{15}{8}$ inches in length, so the answer is $\boxed{D}$.
\section{Introduction} Let $n\geq 2$ be an integer. The braid group $B_n$ on $n$ strings is a finitely presented group generated by $n-1$ elementary braids $\sigma_1,\ldots,\sigma_{n-1}$ subject to the following relations: \begin{itemize} \item $\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$ for all $1\leq i\leq n-2$; \item $\sigma_i\sigma_j=\sigma_j\sigma_i$ for all $1\leq i,j\leq n-1$ such that $|i-j|\geq 2$. \end{itemize} This is the classical Artin presentation of $B_n$ (see e.g. Chapter 10 in \cite{BZ}). The group $B_3$ is closely related to the modular group $\PSL_2({\mathrm{\bf Z}})$. The above presentation shows that the braid $z=(\sigma_1\sigma_2)^3$ is central in $B_3$ and that $B_3/\langle z\rangle$ is generated by the class $u$ of $\sigma_1\sigma_2\sigma_1$ and $v$ of $\sigma_1\sigma_2$, where $u^2=v^3=z$. Thus $B_3/\langle z\rangle=\langle u,v\mid u^2=v^3=1\rangle=\PSL_2({\mathrm{\bf Z}})$. In fact the group $B_3$ admits a proper isometric action with compact quotient on a metric product $T_3\times \RI$, where $T_3$ is a trivalent tree, which is the Bass-Serre tree of $\PSL_2({\mathrm{\bf Z}})$. We are interested here in the 4-string braid groups $B_4$. It was proved by Brady in \cite{Brady} that $B_4$ admits a free isometric action with compact quotient on a CAT(0) simplicial complex $Y$ of dimension 3. The 3-dimensional cells of $Y$ are Euclidean tetrahedra whose faces are right-angled triangles, and the quotient space $Y/B_4$ contains 16 tetrahedra, identified together along a single vertex. It is still true that $Y$ splits as a product $Y=X\times \RI$, where $X$ is now of dimension 2. The complex $X$ can be obtained from a non positively curved complex of groups whose fundamental group is the quotient of $B_4$ by its center (see \cite{crisp}). The existence of a CAT(0) structure on $B_n$ is an open problem for $n\geq 6$. 
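Returning to $B_3$ for a moment, the identity $u^2=v^3=z$ invoked above can be checked directly from the braid relation $\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2$: \[ u^2 = (\sigma_1\sigma_2\sigma_1)(\sigma_1\sigma_2\sigma_1) = (\sigma_1\sigma_2\sigma_1)(\sigma_2\sigma_1\sigma_2) = (\sigma_1\sigma_2)^3 = z, \] while $v^3=(\sigma_1\sigma_2)^3=z$ holds by definition.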
Recall that on $B_4$, the 3-dimensional CAT(0) structures which are minimal (e.g., those whose links are isomorphic to that of $Y$) can be classified, by geometric rigidity results due to Crisp and Paoluzzi \cite{crisp}. On the other hand, Charney \cite{Charney-b4} proved that the Deligne complex \cite{Deligne} of $B_4$ is also a CAT(0) space of dimension 3, with respect to the Moussong metric (we recall that the Deligne action of $B_4$ on this complex is not proper). \subsection{Property RD} Let now $G$ be an arbitrary countable group. A length on $G$ is a map $|\cdot| : G\to \RI_+$ such that $|e|=0$, $|s|=|s^{-1}|$ and $|st|\leq |s|+|t|$ for $s,t\in G$ and $e$ the identity element. We recall that $G$ is said to have \emph{property RD} (\cite{Jol-def}) with respect to a length $|\cdot|$ if there is a polynomial $P$ such that for any $r\in \RI_+$ and $f,g\in \CI G$ with $\supp(f)\subset B_r$ one has \[ \|f*g\|_2\leq P(r)\|f\|_2\|g\|_2 \] where $B_r=\{x\in G,~|x|\leq r\}$ is the ball of radius $r$ in $G$, $\supp(f)$ is the set of $x\in G$ with $f(x)\neq 0$, and $\CI G$ is the complex group algebra of $G$. For an introduction to property RD we refer to Chapter 8 in \cite{Val-bc}. The above convolution inequality, usually referred to as the Haagerup inequality (after Haagerup \cite{Haa}), makes it possible to control the operator norm of $f$ acting by convolution on $\ell^2(G)$ in terms of its $\ell^2$ norm. Hence, some important consequences of property RD are of a spectral nature. When $G$ is finitely generated we have the word length $|\cdot|_S$ associated to any finite generating set $S$. Then property RD with respect to $|\cdot |_S$ is independent of $S$, so we simply speak of property RD for $G$ in that case. Our first main result is the following theorem. \begin{theorem}\label{th1} The braid group $B_4$ on 4 strings, as well as its central quotient $B_4/\langle z\rangle$, have property RD. 
\end{theorem} This gives a partial answer to a question in \cite{questions}, Section 8. The fact that $B_3$ has property RD was shown very early on by Jolissaint in \cite{Jol-def}, and the other cases have remained open since then; in \cite{questions}, the question of property RD is raised more generally for all braid groups $B_n$. \emph{Update.} The problem of showing property RD for $B_n$ has been solved recently by Behrstock and Minsky (see \cite{bermin}). More generally, they established property RD for all mapping class groups (recall that the braid group $B_n$ can be identified with the mapping class group of the $n$-punctured disk). \medskip The proof of Theorem \ref{th1} is divided into two steps. The first step relies on our previous results from \cite{rd}: \begin{theorem}[{\cite[Theorem 5]{rd}}]\label{th3} Let $G$ be a group acting properly on a CAT(0) simplicial complex $\Delta$ of dimension 2 without boundary and whose faces are equilateral triangles of the Euclidean plane. Then $G$ has property RD with respect to the length induced from the 1-skeleton of $\Delta$. \end{theorem} We apply Theorem \ref{th3} to the quotient $B_4/\langle z\rangle$. By results of \cite{Brady,crisp}, this group acts on a simplicial complex $X$ with the required properties. The second step uses automaticity of $B_4$, and more precisely, the Thurston normal forms for braids in $B_4$, which allows one to go back to $B_4$ from its central quotient. Details of the proof are in Section \ref{s2}, after a brief survey on property RD in Section \ref{s1'}. It would be interesting to implement the above approach, solving first the case of the central quotient, for higher braid groups. \medskip As a corollary of Theorem \ref{th1}, we obtain the following result (compare \cite{questions}, Section 8, where the question of property RD is raised in general for all $\Aut(F_n)$, $n \geq 2$): \begin{corollary}\label{c3} The automorphism group $\Aut(F_2)$ of the free group on 2 generators has property RD. 
\end{corollary} Indeed, $\Aut(F_2)$ is isomorphic to $\Aut(B_4)$, itself containing $\Inn(B_4)$ as a subgroup of index 2 (see \cite{dyer,Kram}). Thus property RD for $\Aut(F_2)$ follows from the corresponding result for $\Inn(B_4)$, which is isomorphic to the central quotient of $B_4$. Note that in \cite{prw}, a faithful action of $\Aut(F_2)=\Aut(B_4)$ on the complex $X$ is constructed. \subsection{The braid group $B_4$ as a group of intermediate rank} Groups and simplicial complexes appearing in Theorem \ref{th3} provide us with a large pool of objects satisfying \emph{intermediate rank} properties. See \cite{rd} for definitions and concrete examples. We discuss here the intermediate rank properties of $B_4$ and its central quotient (denoted $G$ below). We introduced in \cite{rd} a notion of \emph{mesoscopic rank} for a CAT(0) space $X$, which reflects the presence in $X$ of maximal flat portions (where maximal refers to the dimension, hence the rank terminology) which are (much) larger than ``germs of flats" in $X$ (say, flats of tangent cones) but \emph{are not actually contained in genuine flats of $X$} (i.e. copies of the Euclidean space $\RI^n$ inside $X$). We recall the precise definitions of mesoscopic rank and exponential mesoscopic rank in Section \ref{s3}. Following \cite{rd} we say that a group $G$ is of (exponential) mesoscopic rank when there is a proper action of $G$ with compact quotient on some CAT(0) space which is of (exponential) mesoscopic rank at some point. Our second main result is as follows. \begin{theorem}\label{meso} The braid group $B_4$ on 4 strings is of exponential mesoscopic rank. \end{theorem} For the proof, we first establish that the quotient $G$ of $B_4$ by its center is of exponential mesoscopic rank, and then extend the result to $B_4$.
Note that $B_3$ is an example of a group acting freely and cocompactly on a simplicial complex as in Theorem \ref{th3} (see \cite{BradyCam}) but it is not of mesoscopic rank; more precisely, for any action with compact quotient on a 2-dimensional CAT(0) space $X$, the space $X$ cannot be of mesoscopic rank. In the course of proving Theorem \ref{meso} we will see that the central quotient $G$ of $B_4$ is, at the local level, closely related to affine Bruhat-Tits buildings of type $\tilde A_2$ (which actually creates some complications in the proof of Theorem \ref{meso}, since the latter are not of mesoscopic rank by \cite{rd}). We will prove however that these connections cannot be extended beyond the local level (and specifically beyond the sphere of radius 1, see the last section of the paper). Related to this, we also show that being of exponential mesoscopic rank cannot serve as an obstruction to being embeddable in an affine building, and in particular, in spaces which are not of mesoscopic rank. \bigskip \emph{Acknowledgments.} We thank Jason Behrstock for communicating to us his recent preprint \cite{bermin} with Yair Minsky, as well as for the reference \cite{prw}. The second author thanks JSPS for support. \section{Property of rapid decay}\label{s1'} In \cite{Haa} Haagerup proved that, for any finitely supported functions $f,g: F_n \to \RI$ defined on the free group $F_n$ on $n$ generators, the convolution product satisfies the inequality \[ \|f*g\|_2\leq (r+1)\|f\|_2\|g\|_2 \] where $r$ is the radius of the support of $f$, with respect to the usual word-length metric of $F_n$. In other words, $f$, viewed as a convolution operator from $\ell^2(F_n)$ to itself, is bounded with operator norm at most $(r+1)\|f\|_2$.
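The convolution inequality above is easy to observe numerically; the following sketch (not part of the paper's argument) checks it on $F_2$, with $\supp(f)$ taken in the sphere of radius $r$, the case covered by Haagerup's lemma, and $g$ of arbitrary finite support.

```python
import math, random

# Reduced words over {a, A, b, B} model elements of the free group F_2.
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def mult(u, v):
    # concatenate two reduced words and freely reduce
    out = list(u)
    for ch in v:
        if out and out[-1] == INV[ch]:
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def words_up_to(r):
    # reduced words of length <= r, grouped by length
    levels = [[""]]
    for _ in range(r):
        nxt = []
        for w in levels[-1]:
            for ch in "aAbB":
                if not w or w[-1] != INV[ch]:
                    nxt.append(w + ch)
        levels.append(nxt)
    return levels

random.seed(0)
r = 3
levels = words_up_to(r)
f = {w: random.uniform(-1, 1) for w in levels[r]}                  # supp(f) in S_r
g = {w: random.uniform(-1, 1) for lvl in levels[:3] for w in lvl}  # finite support

# (f*g)(x) = sum of f(y) g(w) over pairs with x = y w
conv = {}
for y, fy in f.items():
    for w, gw in g.items():
        x = mult(y, w)
        conv[x] = conv.get(x, 0.0) + fy * gw

l2 = lambda h: math.sqrt(sum(v * v for v in h.values()))
lhs, rhs = l2(conv), (r + 1) * l2(f) * l2(g)
print(lhs <= rhs)  # True by Haagerup's inequality
```

Any other random choice of coefficients gives the same verdict, the bound being a theorem; the point of the sketch is only to make the quantities $\|f*g\|_2$ and $(r+1)\|f\|_2\|g\|_2$ concrete.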
Groups satisfying the above inequality with some polynomial $P(r)$ instead of $r+1$ are said to have property RD (the precise definition of which we recalled in the introduction), see \cite{Jol-def}, where Jolissaint showed that (with respect to the word length): \begin{itemize} \item a finitely generated amenable group has property RD if and only if it is of polynomial growth; \item uniform lattices in a rank 1 Lie group have property RD. \end{itemize} The latter has been extended to all hyperbolic groups in the sense of Gromov by de la Harpe \cite{Harpe-rd}, and subsequently to groups which are hyperbolic relative to polynomial growth subgroups by Chatterji and Ruane \cite{Chat-Ruane}, thereby establishing property RD for all lattices (uniform or not) in rank 1 Lie groups. The situation is different for groups of rank $\geq 2$. Non-uniform lattices in a higher rank Lie group, typically $\mathrm{SL}_3({\mathbf{Z}})$, are prominent examples of groups without property RD (cf. \cite{Jol-def}). Valette conjectured that all uniform lattices in higher rank Lie groups have property RD. This is known to hold for uniform lattices in $\mathrm{SL}_3(\QI_p)$ (and other groups acting on triangle buildings), by a well-known theorem of Ramagge--Robertson--Steger \cite{RRS} (see also \cite{Laf-rd}), which was the first occurrence of property RD in higher rank situations. Their results were extended by Lafforgue \cite{Laf-rd} to cover all uniform lattices in $\mathrm{SL}_3(\RI)$ and $\mathrm{SL}_3(\CI)$. Chatterji \cite{Chat-quat} then showed that lattices in $\mathrm{SL}_3({\mathbf{H}})$ and $E_{6(-26)}$ behave similarly.
We refer the interested reader to \cite{Val-bc,questions} for more information. A well-known application of property RD concerns the Baum--Connes conjecture without coefficients: by a theorem of Lafforgue \cite{Laf-bc}, groups which satisfy property RD together with some non-positive curvature assumption (called strong bolicity) also satisfy the Baum--Connes conjecture without coefficients. For groups with property (T), including most hyperbolic groups or cocompact lattices in $\mathrm{SL}_3(\RI)$ (for instance), this is the only known approach to the Baum--Connes conjecture. (The Baum--Connes conjecture is open for $\mathrm{SL}_3({\mathbf{Z}})$.) In \cite{rd} we studied ``rank interpolation" for countable groups, that is, interpolation of the rank in between the usual $\mathrm{rk}= 1,2,\ldots$ integer values. The main applications presented in \cite{rd} are $C^*$-algebraic in nature; in particular, we established property RD for many groups of intermediate rank. This provided new examples where Lafforgue's approach to the Baum--Connes conjecture could be applied (in fact for many of these groups---e.g. for groups of rank ${7\over 4}$---this is also the only approach that is presently known to work, and the Baum--Connes conjecture with coefficients is open). See also \cite{notewise} and \cite{bs} for other results on intermediate rank and property RD. The accent in \cite{rd} is on interpolating the rank between 1 and 2, which includes a large class of groups of interest. In the present paper we will see that $B_4$ is also a group of intermediate rank, which interpolates the rank between 2 and 3. \section{Proof of Theorem \ref{th1}}\label{s2} The group $B_4$ admits the following presentation: \[ B_4=\langle a,b,c\mid aba=bab, bcb=cbc, ac=ca\rangle.
\] The pure braid group $P_4$ is the kernel of the surjective homomorphism to the symmetric group on 4 letters, \[ B_4\to S_4, \] mapping a braid to the corresponding permutation of its endpoints. It is well-known that the center of both $B_4$ and $P_4$ is the cyclic group generated by the element $z=(bac)^4$, which consists of a full twist of the 4 strings (see \cite[Section 10.B]{BZ} for instance; this is known to hold for more general Artin groups \cite{BS,Deligne}). In other words $B_4$ is a central extension of the group \[ G=B_4/\langle z\rangle \] by the group of integers ${\mathbf{Z}}=\langle z\rangle$, which gives an exact sequence \[ 1\longrightarrow {\mathbf{Z}}\longrightarrow B_4\longrightarrow G\longrightarrow 1, \] and in the same way, \[ 1\longrightarrow {\mathbf{Z}}\longrightarrow P_4\longrightarrow H\longrightarrow 1, \] where $H=P_4/{\mathbf{Z}}$ is a finite index subgroup of $G$. The torsion in $G$ corresponds to the conjugacy classes of the elements $x=bac$ and $y=bac^2$ and their powers, where we have $x^4=y^3=z$ (see \cite[p. 139]{crisp} for a geometric proof of this fact; recall that $B_4$ itself is torsion free). It follows that $H$ is torsion free. We will need some results of Brady \cite{Brady} and their extensions in Crisp--Paoluzzi \cite[Section 3]{crisp}. Let $Y$ be the universal cover of the classifying space of $B_4$ constructed in \cite{Brady}. As recalled in the introduction, $Y$ is a CAT(0) simplicial complex of dimension 3 whose 3-dimensional faces are Euclidean tetrahedra. The authors of \cite{crisp} consider the projection in $Y$ along the $z$-axis and obtain a 2-dimensional complex $X$ (called the Brady complex there) together with an action of $G$ (called the standard action, in view of \cite[Theorem 1]{crisp}) which is compatible with the action of $B_4$ on $Y$ under the projection.
The complex $Y$ splits metrically as a product: \[ Y=X\times \RI \] and $X$ is endowed with an action of $G$ (in Section \ref{s4} we will give more details on these constructions). As a CAT(0) space, $X$ is a triangle polyhedron, i.e. its faces are equilateral triangles of the Euclidean plane (\cite[p. 140]{crisp}), and the action of $G$ on $X$ is proper with compact quotient. Thus $H$ acts freely with compact quotient on $X$, so $H$ appears as the fundamental group of the complex \[ V=X/H \] (it can be shown that $V$ has 6 vertices and 32 faces). It follows then from Theorem \ref{th3} that $H$ has property RD with respect to some (hence any) finite generating set. As $H$ is a finite index normal subgroup of $G$ this implies, by Proposition 2.1.4 in \cite{Jol-def}, that $G$ itself has property RD. Further results of Jolissaint \cite{Jol-def} (in particular Proposition 2.1.9 of that paper, see also Chatterji--Pittet--Saloff-Coste \cite[Proposition 7.2]{CPSC}) show that property RD is stable under certain types of central extensions. We will prove that these results can be applied to the present situation, and this will conclude the proof of Theorem \ref{th1}. Consider the section \[ \kappa : G\to B_4 \] of the quotient map $\pi : B_4\to G$, which identifies $G$ with the subset of braids in $B_4$ whose central part is trivial. Since $B_4$ is a central extension of $G$, we can decompose it as a product \[ B_4={\mathbf{Z}}\times_c G \] where the value at a point $(g,h)\in G\times G$ of the cocycle \[ c : G\times G \to {\mathbf{Z}} \] defining the extension is the exponent of $z$ in the central element \[ \kappa(g)\kappa(h)\kappa(gh)^{-1} \] of $B_4$. Our goal is to find a symmetric finite generating set of $G$ such that, for the corresponding Cayley graph $Y_G$ of $G$, we have \[ |c(g,h)|\leq n \] for all elements $g,h\in G$ at distance at most $n$ from the identity in $Y_G$.
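The central-extension formalism can be made concrete on a toy example. The braid cocycle itself requires the Garside normal form machinery, so the following hedged sketch uses a hypothetical stand-in instead: the integer Heisenberg group realized as ${\mathbf{Z}}\times_c {\mathbf{Z}}^2$ with $c((a,b),(a',b'))=ab'$. It checks the 2-cocycle identity and measures the growth of $|c|$ on balls; note that for this stand-in the bound is quadratic in the radius, whereas the argument below achieves a linear bound $|c(g,h)|\leq n$ for $B_4$.

```python
from itertools import product

# Toy 2-cocycle on Z^2 defining the integer Heisenberg group as Z x_c Z^2.
# This is NOT the braid cocycle; it only illustrates the formalism.
def c(g, h):
    return g[0] * h[1]

def mult(x, y):
    # multiplication in the central extension Z x_c Z^2
    (m, g), (n, h) = x, y
    return (m + n + c(g, h), (g[0] + h[0], g[1] + h[1]))

pts = [(a, b) for a in range(-2, 3) for b in range(-2, 3)]
for g, h, k in product(pts, repeat=3):
    gh = (g[0] + h[0], g[1] + h[1])
    hk = (h[0] + k[0], h[1] + k[1])
    # 2-cocycle identity, equivalent to associativity of `mult`
    assert c(g, h) + c(gh, k) == c(h, k) + c(g, hk)
    assert mult(mult((0, g), (0, h)), (0, k)) == mult((0, g), mult((0, h), (0, k)))

# growth of |c| on balls of the word metric |(a, b)| = |a| + |b|:
# here the maximum is n**2, i.e. quadratic in the radius n
for n in (1, 2, 3, 4):
    ball = [(a, b) for a in range(-n, n + 1) for b in range(-n, n + 1)
            if abs(a) + abs(b) <= n]
    m = max(abs(c(g, h)) for g in ball for h in ball)
    assert m == n * n
```

The design point is that the same data (a section and a cocycle bounded on balls) is exactly what the proof extracts from the Thurston normal form.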
That this implies property RD for $B_4$ follows from \cite[Proposition 7.2]{CPSC}. Let us fix some notation regarding the Thurston normal form for elements of $B_4$ (see \cite[Chapter 9]{thurston} and \cite{Charney-biaut}). In what follows we write $\Delta=(bac)^2$ for the half twist of the four strings. The braid group $B_4$ can be generated by a set $S$ of 23 elements, which are in bijective correspondence with the non-trivial elements of the symmetric quotient $S_4$. The half-twist $\Delta$ belongs to $S$. Furthermore in this presentation, the monoid $B_4^+$ of positive braids is the submonoid of $B_4$ generated by $S$, and every element $s\in B_4^+$ can be written in a canonical way \[ s=s_1\ldots s_n, \] called the greedy form of $s$, where $s_i\in S$ (see \cite{Garside,thurston}; for instance, one can consider the right greedy form, where the element $\Delta$ appears only on the right side of the expression $s_1\ldots s_n$). This decomposition can be extended to $B_4$: by \cite{thurston}, every $x\in B_4$ can be written as $x=s^{-1}t$ with $s,t\in B_4^+$, in a unique way (after obvious cancellation in case both $s$ and $t$ start with the same element $r\in B_4^+$). Thus any element $x\in B_4$ can be written in a canonical form \[ x=s_n^{-1}\ldots s_1^{-1}t_1\ldots t_m, \] where $s_i,t_j\in S$. The latter decomposition is called the Thurston normal form (or the Garside normal form) of $x$. Following \cite{charney-n}, we let \[ |x|=n+m, \] where $n$ and $m$ are given by the normal form. The language associated to this normal form turns out to give a geodesic biautomatic structure on $B_4$ (see \cite{thurston,charney-n}), and if we denote by $Y$ the Cayley graph of $B_4$ with respect to $S\cup S^{-1}$, then $|x|$ is the length of a simplicial geodesic in $Y$ from $e$ to $x\in B_4$. In particular for $x,y\in B_4$ we have \[ |xy|\leq |x|+|y| \] (see \cite[Lemma 3.4]{charney-n}).
Let $Y_G$ be the Cayley graph of $G$ with respect to the generating set $\pi(S\cup S^{-1})$. It is easily seen that \[ |\kappa(gh)|\leq |\kappa(g)|+|\kappa(h)| \] since $\kappa(gh)$ is obtained from the product $\kappa(g)\kappa(h)$ by cancellation of the central factor. In particular \[ \kappa(\mathrm{Ball}_n(Y_G))\subset \mathrm{Ball}_n(Y), \] where $\mathrm{Ball}_n(\cdot)$ is the ball of radius $n$ in the corresponding Cayley graph. On the other hand, since $\Delta\in S$ and $z=\Delta^2$, the absolute value of the exponent of $z$ in the central part of an $x\in B_4$ is at most $|x|/2$ by construction of the normal form of $x$. Let $g,h\in G$ be at distance at most $n$ from the identity in $Y_G$. By definition, the value of $c(g,h)$ is the exponent of $z$ in the central part of $\kappa(g)\kappa(h)$. Thus \[ |c(g,h)|\leq |\kappa(g)\kappa(h)|/2\leq (|\kappa(g)| + | \kappa(h)|)/2\leq n. \] This concludes the proof of Theorem \ref{th1}. \section{Some classical applications of property RD}\label{s4} We present below two classical applications of property RD. The first one concerns the Baum--Connes conjecture and the second one is about random walks, giving us some useful information on random walks on $B_4$. For further consequences of property RD we refer to Valette's book \cite{Val-bc} and to the references there. \subsection{Braid groups and the Baum--Connes conjecture} As is well-known, the Baum--Connes conjecture \emph{with coefficients} holds for the $n$-string pure braid group $P_n$, as well as for its finite extension $B_n$ (see \cite{oyono,schick}). On the other hand, in the case $n=4$, we have property RD and thus the Banach $KK$-theory techniques of Lafforgue \cite{Laf-bc} apply as well. Hence: \begin{corollary} The groups $B_4$, $P_4$ and their respective central quotients, $G$ and $H$, satisfy the Baum--Connes conjecture without coefficients. \end{corollary} The Baum--Connes conjecture (even without coefficients) has a number of applications.
See \cite{Val-bc} for more details. The problem of showing the Baum--Connes conjecture with coefficients for groups acting freely isometrically with compact quotient on a CAT(0) space satisfying the assumption of Theorem \ref{th3} is open. As far as we know, the Baum--Connes conjecture for the central quotients of $B_n$ and $P_n$ is open for $n\geq 5$. \subsection{$\ell^2$ spectral radius of random walks on $B_4$} Another application of property RD concerns random walks on groups, see Grigorchuk and Nagnibeda \cite{Grigo} and the end of Section 2.2 in \cite{rd} for more details and references. If $G$ is a countable group endowed with a length, one considers the \emph{operator growth function of $G$}, \[ F_r(z)=\sum_{n} a_n z^n \] where the coefficients $a_n$ are bounded operators on $\ell^2(G)$ defined by \[ a_n =\sum_{|x|=n} u_x \] with $u_x$, $x\in G$, the canonical family of unitary operators corresponding to $G$ in $C^*_r(G)$ under the regular representation. The radius of convergence $\rho_r$ of $F_r$, defined by \[ {1\over {\rho_r}}=\limsup_{n\to \infty} \|a_n\|_r^{1/n}, \] is no lower than the radius of convergence $\rho$ of the usual growth series of the group $G$ with respect to the given length. Conjecture 2 in \cite{Grigo} states that $G$ is amenable if and only if $\rho=\rho_r$. For groups with property RD (in fact ``radial subexponential" property RD is sufficient, see \cite[Proposition 23]{rd} and references) we have $\rho_r=\sqrt{\rho}$ and thus the above Conjecture 2 holds. One can also deduce the $\ell^2$ spectral radius property for every element in the group algebra of $G$ provided $G$ has (subexponential) property RD, i.e., the fact that the spectral radius of every element $a\in \CI G$ acting by convolution on $\ell^2(G)$ is equal to \[ \lim_{n\to\infty} \|a^{*n}\|_2^{1/n} \] (which also has some important applications, again see the references in \cite{rd}).
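The limit $\lim_{n\to\infty} \|a^{*n}\|_2^{1/n}$ can be observed numerically in the simplest amenable case (a hedged illustration, outside the paper's scope): on $G={\mathbf{Z}}$ with $a=\delta_1+\delta_{-1}$, the spectral radius is $2$, and since $\|a^{*n}\|_2^2=\binom{2n}{n}$ the estimates converge to $2$ from below.

```python
import math

# a = δ_1 + δ_{-1} in C[Z]: the simple random walk generator on Z
coeffs = {1: 1.0, -1: 1.0}

def convolve(f, g):
    # convolution of finitely supported functions on Z
    out = {}
    for i, fi in f.items():
        for j, gj in g.items():
            out[i + j] = out.get(i + j, 0.0) + fi * gj
    return out

l2 = lambda f: math.sqrt(sum(v * v for v in f.values()))

# estimates ||a^{*n}||_2^{1/n}, which increase toward the spectral radius 2
p = dict(coeffs)
ests = []
for n in range(1, 31):
    if n > 1:
        p = convolve(p, coeffs)
    ests.append(l2(p) ** (1.0 / n))
est = ests[-1]
print(round(est, 2))  # close to (but strictly below) 2
```

The slow convergence (the correction is of order $(\pi n)^{-1/(4n)}$) is visible in the printed value; the point is only to make the limit formula concrete.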
Thus we obtain: \begin{corollary}\label{cor} The groups $B_4$, $P_4$ and their respective central quotients, $G$ and $H$, satisfy the $\ell^2$ spectral radius property. Furthermore for these four groups the reduced spectral radius $\rho_r$ and the radius of convergence $\rho$ of the usual growth series are related as follows: \[ \rho_r=\sqrt{\rho}<1, \] and thus these groups satisfy Conjecture 2 in \cite{Grigo}. \end{corollary} \section{Mesoscopic rank}\label{s3} Let $X$ be a piecewise Euclidean CAT(0) simplicial complex of dimension $n\geq 2$, without boundary, and let $A$ be a point of $X$ (see \cite{BH} for a general reference on CAT(0) spaces). We call the \emph{mesoscopic rank profile} of $X$ at $A$ the function \[ \varphi_A : \RI_+\to \NI \] which associates to each $r\in \RI_+$ the number of distinct flat balls of radius $r$ in $X$ which are centered at $A$ and which are not included in a flat of $X$. By a flat in $X$ (resp. flat subset of $X$) we mean an isometric embedding of the Euclidean space $\RI^n$ in $X$ (resp. of a subset of a Euclidean space $\RI^n$ with the induced metric). We then have the following. \begin{proposition}[see \cite{rd}]\label{p7} Let $X$ be a piecewise Euclidean CAT(0) simplicial complex without boundary and let $A$ be a point of $X$. Then, \begin{enumerate} \item if $X$ is hyperbolic, $\varphi_A$ is compactly supported; \item if $X$ is an affine Bruhat-Tits building, $\varphi_A$ vanishes identically. \end{enumerate} \end{proposition} We refer to \cite[Section 6]{rd}, where this result is stated for triangle polyhedra; the proof extends to the above general situation (in the first case there are no flats at all, while in the second, every flat ball is in fact included in uncountably many flats). According to Proposition \ref{p7}, the mesoscopic rank profile trivializes when the rank takes the usual $\mathrm{rk}=1,2,3,\ldots$ integer values.
The following property detects spaces of intermediate rank where, more precisely, intermediate rank occurs (exponentially) between the local and the asymptotic scale in $X$: \begin{definition} The space $X$ is said to have \emph{exponential mesoscopic rank} at $A$ if the function $\varphi_A$ converges exponentially fast to infinity at infinity. \end{definition} Mere \emph{mesoscopic rank} refers to the fact that the support of $\varphi_A$ contains a neighbourhood of infinity. Thus for spaces of mesoscopic rank at a point $A$, one can continuously rescale the radius of balls of center $A$ from some constant $C$ up to $\infty$, in such a way that all the balls in this family are flat but not included in flats. When the mesoscopic rank is exponential, the number of possible choices for these balls varies exponentially with the radius. \begin{definition} A group $G$ is said to be of \emph{exponential mesoscopic rank} if it admits a proper isometric action with compact quotient on a CAT(0) space which is of exponential mesoscopic rank at least at one point (and thus at infinitely many points). \end{definition} The following groups are known to be of exponential mesoscopic rank: \begin{itemize} \item[(a)] The group denoted $\Gamma_{\bowtie}$ in \cite{rd}, and called the group of frieze there (see \cite[Section 6.1]{rd}); \item[(b)] The group of rank ${7\over 4}$ which is the fundamental group of the complex denoted $V_0^1$ in \cite{rd} (see \cite[Section 6.1]{rd}); \item[(c)] D. Wise's non-Hopfian group (see \cite{notewise,bs}). \end{itemize} In the present paper we add further groups to this list, namely $B_4$ and its central quotient (as well as the group $G_0$ of Section \ref{more}). \begin{remark} Most of the groups of rank ${7\over 4}$ (see \cite[Section 4]{rd}) might be of exponential mesoscopic rank.
Besides the one of Item (b) above, one can get more examples from the classification of transitive orientable groups of rank ${7\over 4}$ given in \cite[Theorem 4]{rd}, but we presently have no general (say local or semi-local) criterion ensuring exponential mesoscopic rank (compare Section \ref{more} below). Another interesting problem is to prove or disprove the existence of groups of mesoscopic rank for which the mesoscopic rank profile at some vertex grows faster than polynomials but slower than exponential functions. \end{remark} \section{Proof of Theorem \ref{meso}}\label{s3'} We prove that the braid group $B_4$ and its central quotient $G=B_4/\langle z \rangle$ are of exponential mesoscopic rank, respectively, in Section \ref{63} and Section \ref{62}. \subsection{A closer look at the 4-string complexes $Y$ and $X$.} Let us first recall in some more detail the description of the Brady action of $B_4$ on $Y$ and its quotient action of $G$ on $X$, following \cite{Brady} and \cite{crisp}. Consider the following presentation of $B_4$, \begin{align*} B_4=\langle a,b,c,d,e,f\mid &ba=ae=eb,\, de=ec=cd,\\ &bc=cf=fb,\, df=fa=ad,\\ &ca=ac,\, ef=fe\rangle, \end{align*} and let us keep the notations $x=bac$ and $y=bac^2$, so that $x^4=y^3=z$ generates the center of $B_4$. There are exactly sixteen ways to write $x$ as a product of three of the generators $a,\ldots, f$. These can be expressed as the length 3 subwords, read cyclically, of the following two words of length 12: \[ W_1=bcadefbacdfe;\hskip1.5cm W_2=faecfaecfaec, \] which are representatives of the central element $x^4=y^3=z$ in $B_4$ (see \cite[page 139]{crisp}). To each of these expressions $x=a_1a_2a_3$ one associates a Euclidean tetrahedron whose faces are right-angled triangles, and whose edges have lengths \[ |x|=\sqrt3;~~ |a_i|=1;~~ |a_1a_2|=|a_2a_3|=\sqrt2. \] The corresponding labelled tetrahedra can be assembled to form a compact complex $V$ such that $\pi_1(V)=B_4$.
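The metric data above determines the tetrahedron up to isometry. A hedged sanity check, assuming the standard orthoscheme coordinates $P_0$, $P_0+e_1$, $P_0+e_1+e_2$, $P_0+e_1+e_2+e_3$ (a hypothetical coordinate model consistent with the stated lengths, not taken from the paper), verifies the edge lengths and shows that projecting along the long diagonal yields a Euclidean equilateral triangle, in line with the splitting $Y=X\times \RI$ used below:

```python
import math

# Hypothetical coordinate model: an orthoscheme realizing the edge lengths
P = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# |a_i| = 1 for the three consecutive edges
assert all(math.isclose(dist(P[i], P[i + 1]), 1.0) for i in range(3))
# |a_1 a_2| = |a_2 a_3| = sqrt(2)
assert math.isclose(dist(P[0], P[2]), math.sqrt(2))
assert math.isclose(dist(P[1], P[3]), math.sqrt(2))
# |x| = sqrt(3) for the long diagonal
assert math.isclose(dist(P[0], P[3]), math.sqrt(3))

# orthogonal projection along the diagonal direction (the "z-axis")
n = [1 / math.sqrt(3)] * 3
def proj(v):
    t = sum(a * b for a, b in zip(v, n))
    return tuple(a - t * b for a, b in zip(v, n))

Q = [proj(v) for v in P]
# the endpoints of the diagonal project to the same point ...
assert dist(Q[0], Q[3]) < 1e-12
# ... and the image is an equilateral triangle
sides = [dist(Q[0], Q[1]), dist(Q[1], Q[2]), dist(Q[2], Q[0])]
print([round(s, 3) for s in sides])  # [0.816, 0.816, 0.816]
```

In this model the projected triangle has side $\sqrt{2/3}$; only the shape (equilateral), not the normalization, matters for the discussion that follows.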
Then $Y=\tilde V$ is the universal cover of $V$ with the corresponding deck-transformation action of $B_4$. The CAT(0) space $Y$ splits as a metric product $Y=X\times \RI$, where $X$ is the range of a projection of $Y$ along the $z$-axis. The image of each tetrahedron in $Y$ under this projection is a Euclidean equilateral triangle in $X$, and the action of $B_4$ descends to a simplicial action of $G=B_4/\langle z\rangle$ on $X$, which is proper and cocompact. The Cayley graph of $G$ with respect to the generating set $S=\{a,\ldots, f\}$ (where the above $a,\ldots, f$ are viewed as elements of $G$ under a slight abuse of notation) is a 4-to-1 cover of the 1-skeleton of $X$. Links at vertices in $X$ are represented in Figure \ref{fig1} below, where the right-hand side representation corresponds to Figure 3 in \cite{Brady} and Figure 6 in \cite{crisp}. The equivalent left-hand side representation is included for future reference (see Section \ref{more}). \begin{figure}[htbp] \centerline{\includegraphics[width=13cm]{B4=immeuble.eps}} \caption{The link $L$ and its labelling}\label{fig1} \end{figure} The labellings in this figure correspond to edges of the generating set $S$ entering and leaving the given vertex (this depends on the choice of a representative of the coset of $\langle x\rangle$ in $G$, but a different choice will simply relabel the link according to the action of an element of the stabilizer of the vertex, see \cite[p. 141]{crisp}). We call a \emph{lozenge} in $X$ the union of two triangles glued along an edge of valence 2, and restrict from now on the terminology \emph{triangle of $X$} to those equilateral triangles in $X$ which are not included in a lozenge. According to the description given on p. 160 of \cite{crisp}, the complex $X$ is built out of triangles and lozenges, all of whose edges are trivalent in $X$ and labelled in the same way.
There are three types of corners: in triangles, all angles are labelled 1, while in lozenges the angles are labelled 2 or 3 depending on whether the angle equals $\pi/3$ or $2\pi/3$. Then triangles and lozenges in the complex $X$ are arranged in such a way that the labelled link at each vertex matches that given in the following Fig. \ref{figX} (our notations differ slightly from those of \cite{crisp}; in particular our label 3 corresponds to $2T_3^-$ and $2T_3^+$ in \cite[Fig. 19]{crisp}). \begin{figure}[htbp] \centerline{\includegraphics[width=13.5cm]{complexe.eps}} \caption{Description of the complex $X$}\label{figX} \end{figure} \subsection{Exponential mesoscopic rank for $X$ and $G=B_4/\langle z\rangle$}\label{62} The proof will follow the strategy of \cite{rd} (see Sections 6.1 and 6.2), with the additional difficulty that the link $L$ embeds into the incidence graph of the Fano plane (compare Section \ref{more}). Let us first derive a few elementary lemmas regarding the local structure of $X$. \begin{lemma}\label{loz} Let $R$ be a lozenge of $X$. Any boundary edge of $R$ is incident to exactly one triangle and one lozenge $R'\neq R$ of $X$. Furthermore, $R\cup R'$ is isometric to a parallelogram, and we will say that $R$ and $R'$ are \emph{aligned} in $X$. \end{lemma} \begin{proof} Let $R$ be a lozenge of $X$ and let $e=[A,B]$ be a boundary edge of $R$, where $A$ is the vertex of $e$ whose internal angle in $R$ is $2\pi/3$. The geometry of $L$ shows that there are two faces incident to $e$ which are not included in $R$. Inspecting the link at $B$, we see that one of these faces is a triangle of $X$, while the other is a lozenge whose internal angle at $B$ is $2\pi/3$. This follows from the fact that every vertex of valence 3 in $L$ is adjacent to a vertex of valence 2, and the vertices of valence 2 are at distance $\geq 3$ from one another. Hence the lemma is proved.
\end{proof} \begin{lemma}\label{hexa} Let $R$ be a lozenge of $X$ and $A$ be a vertex of $R$ of internal angle $2\pi/3$. There are exactly two lozenges $R_1$ and $R_2$ in $X$ such that $R\cap R_1=R\cap R_2=\{A\}$ and such that both $R\cup R_1$ and $R\cup R_2$ are included in a flat hexagon of $X$. (This hexagon contains $R$ and $R_1$ (resp. $R$ and $R_2$) and the two triangles of $X$ containing $A$ and completing $R$ and $R_1$ (resp. $R_2$) to a local flat at $A$.) \end{lemma} \begin{proof} The assertion follows from the fact that, given a vertex $x$ of valence 2 in $L$, there are exactly two vertices $y$ and $z$ of valence 2 in $L$ such that $(x,y)$ on the one hand, and $(x,z)$ on the other, are at distance $\pi$ in a cycle of $L$ of length $2\pi$. \end{proof} \begin{lemma}\label{Lpi} Let $x$ and $y$ be two trivalent vertices at distance $\pi$ in $L$. Then there are precisely three distinct paths of length $\pi$ with extremities $x$ and $y$ in $L$. Depending on the position of $x$ and $y$ in $L$, these paths are labelled in either one of the following two ways (with the labelling given by Fig. \ref{figX}): \begin{itemize} \item Case I: 2-3, 3-2, and 1-2-1; \item Case II: 1-3, 3-1, and 2-1-2. \end{itemize} \end{lemma} \begin{proof} It is easily seen from the geometry of the link (Fig. \ref{figX}) that trivalent points at distance $\pi$ in $L$ can be joined by simplicial paths whose edge labellings are: \begin{itemize} \item[(a)] 1-3 (or 3-1) \item[(b)] 2-3 (or 3-2) \item[(c)] 1-2-1 \item[(d)] 2-1-2 \end{itemize} and furthermore that there are precisely three distinct paths between any two such points. Indeed, the group of labelling-preserving automorphisms of $L$ acts transitively on trivalent vertices, so we can assume for instance that $x=a^+$ (in the notation of Fig. \ref{fig1}), in which case there are two possibilities for $y$, namely $y=a^-$ and $y=e^-$. The lemma follows, where case I corresponds to $y=a^-$ and case II to $y=e^-$.
\end{proof} We call a \emph{singular geodesic} in $X$ a CAT(0) geodesic of $X$ which is included in the 1-skeleton of $X$ (viewed with respect to the triangle/lozenge simplicialization; in particular, all edges of singular geodesics are of valence 3). It is easy to see that for $u=a$ or $u=c$, and every vertex $A$ in $X$, the vertices $u^iA$, $i\in {\mathbf{Z}}$, belong to a singular geodesic of $X$. Indeed, since the labellings of the link at each vertex $u^iA$ are given by permuting letters of $L$, it is sufficient to show that the points of $L$ with labels $u^-$ and $u^+$ are trivalent vertices at distance $\pi$ in $L$, which is straightforward. We will denote this geodesic by $u^{\mathbf{Z}} A$. Recall that a subset $S$ of $X$ is called a (flat) \emph{strip} if it is isometric to a product $I\times \RI\subset \RI^2$ where $I$ is a compact interval of $\RI$. The boundary of $S$ is the union of two parallel geodesics of $X$, say $d$ and $d'$, and is denoted $(d,d')$. The \emph{height} of $S$ is the CAT(0) distance between $d$ and $d'$. The following lemma asserts that singular geodesics in $X$ all appear as branching loci of flat strips of $X$. This property is reminiscent of affine Bruhat-Tits buildings (say, of dimension 2), where it is true in a somewhat stronger form (in particular strips may be extended arbitrarily there). \begin{lemma}\label{smallstrip} Let $d$ be a singular geodesic of $X$. There are precisely three flat strips in $X$ of height at least $\sqrt 3/2$ whose pairwise intersections are reduced to $d$. \end{lemma} \begin{proof} For each edge $e$ of $d$ consider the three faces $T_e^i$, $i=1,2, 3$, whose boundary contains $e$. Two of these faces are lozenges and one of them, say $T_e^1$, is a triangle of $X$ (see Lemma \ref{loz}). Let $f$ be an edge of $d$ adjacent to $e$ and let $A$ be their intersection point.
The points corresponding to $e$ and $f$ in the link $L_A$ of $X$ at $A$ are trivalent, and it can easily be checked that they are at distance $\pi$ from each other in $L_A$. Thus Lemma \ref{Lpi} applies. In case I, the faces $T_e^1$ and $T_f^1$ correspond to a path of length $\pi$ of the form 1-2-1, and, up to permutation of the indices $i$, the faces $T_e^i$ and $T_f^i$ (for $i=2,3$) correspond to paths of length $\pi$ of the form 2-3 and 3-2. In case II, and again up to permutation of the indices $i$, the faces $T_e^1$ and $T_f^2$ correspond to a path of length $\pi$ of the form 1-3, the faces $T_f^1$ and $T_e^2$ correspond to a path of length $\pi$ of the form 3-1, while the faces $T_e^3$ and $T_f^3$ correspond to a path of length $\pi$ of the form 2-1-2. Then the lemma follows by iterating this on both sides of the geodesic $d$, starting from a fixed edge $e$. The height of each strip may be taken to be at least $\sqrt 3/2$. \end{proof} We say that a vertex of a singular geodesic is of type I (resp. of type II) depending on whether case I (resp. case II) applies in the proof of the above lemma, and we call a geodesic of type I (resp. of type II) if all its vertices are of type I (resp. of type II). For instance the geodesics $a^{\mathbf{Z}} A$ and $c^{\mathbf{Z}} A$ are of type I for any vertex $A$ of $X$. \begin{lemma}\label{strip} Let $d$ be a singular geodesic of type I in $X$. There are precisely three flat strips in $X$ of minimal height whose pairwise intersections are reduced to $d$ and whose boundary geodesics are singular geodesics of type I in $X$. Two of them have height $\sqrt 3/2$, and are unions of aligned lozenges (see Lemma \ref{loz}); the other one has height $\sqrt 3$, and is a union of hexagons as described in Lemma \ref{hexa}, and of triangles of $X$ which are the unique triangles completing these hexagons to a flat strip.
\end{lemma} \begin{proof} Let $d$ be a singular geodesic of type I in $X$ and let $S_1$, $S_2$ and $S_3$ be the strips of height $\sqrt 3/2$ given by Lemma \ref{smallstrip}, whose pairwise intersections are reduced to $d$. We may assume that at each vertex $A$ of $d$ the path of length $\pi$ in $L_A$ corresponding to $S_1$ is of the form 1-2-1. Then the paths corresponding to $S_2$ and $S_3$ are either of the form 2-3 or 3-2. \begin{figure}[htbp] \centerline{\includegraphics[width=13cm]{parallelogramme.eps}} \caption{Parallelograms and strips on type I geodesics of $X$}\label{figpar} \end{figure} Let us first consider the strip $S_1$ and, for each vertex $A$ of $d$, denote by $R_A$ the lozenge of $X$ corresponding to the index 2 in the path 1-2-1 of $L_A$. It is easily seen that if $A$ and $B$ are consecutive vertices on $d$, then the lozenges $R_A$ and $R_B$ are in the configuration described in Lemma \ref{hexa}, and they can be completed by a unique triangle of $X$ (besides the one in $S_1$) to form a hexagon $H_{AB}$. The union $S_1'$ of all hexagons $H_{AB}$, where $A$ and $B$ run over the pairs of adjacent vertices on $d$, is a flat strip of $X$ of height $\sqrt 3$. Furthermore, it is a simple matter to check (with Lemma \ref{Lpi}) that all the vertices of the boundary geodesic of this strip distinct from $d$ are of type I, which proves the assertion of the lemma in that case. A parallelogram of the strip $S_1'$ is represented on Fig. \ref{figpar} on the left. Consider now the strip $S_2$, which is of height $\sqrt 3/2$. The boundary geodesic of this strip distinct from $d$ contains only vertices whose link intersects $S_2$ along a path of the form 2-3 or 3-2. By Lemma \ref{Lpi} again, these vertices are of type I. The case of $S_3$ being identical to that of $S_2$, this proves the lemma. Parallelograms of the strips $S_2$ and $S_3$ are represented on Fig. \ref{figpar}.
\end{proof} \begin{theorem}\label{mesoGth} The complex $X$ is of exponential mesoscopic rank at every vertex. More precisely, let $O$ be a vertex of $X$ and let $k$ be a sufficiently large integer (in fact $k\geq 32$ is sufficient for our purpose). Then the mesoscopic profile $\varphi_O$ of $X$ satisfies \[ \varphi_O\geq \left ({3\over 2}\right)^{2\mu_k-4} \] on the interval $[k-1, k]$ of $\RI_+$, where \[ \mu_k= \left \lceil k({2\over \sqrt3 }-1) + ({2\over \sqrt 3}-3)\right \rceil. \] In particular the group $G$ is of exponential mesoscopic rank. \end{theorem} \begin{proof} Let $\Pi$ be the flat containing the origin $O=O_0$ of $X$ and generated by the subgroup $\langle a,c\rangle\simeq {\mathrm{\bf Z}}^2$ of $G$. Denote $O_1=(ac)^{-1}(O_0)$ and let $d=[O_0,\infty)$ be the semi-line of $\Pi$ with origin $O_0$ containing $O_1$. Hence the vertices of $d$ are the points \[ O_k=(ac)^{-k}(O_0) \] for $k\in \NI$. Let $\Pi_0$ be the sector of $X$ of extremity $O_0$, of angle $2\pi/3$ at $O_0$, which is bisected by the semi-line $d$ (see Figure \ref{mesoG}). The boundary of $\Pi_0$ is included in the union of the singular geodesics $d_1$ and $d_2$ which intersect at $O_0$; the first one contains the vertices $a^{-k}(O_0)$ and the second one the vertices $c^{-k}(O_0)$, $k\in \NI$. Both $d_1$ and $d_2$ are of type I. Consider the vertex $A=a(O_0)$ of the flat $\Pi$. By Lemma \ref{strip}: \begin{enumerate} \item There is a unique strip $S_1$ of height $\sqrt 3$ whose intersection with $\Pi$ is reduced to $a^{{\mathrm{\bf Z}}}(O_0)$, and whose other boundary $d_1'$ is a singular geodesic of type I in $X$. We then consider on $d_1'$ the unique strip of height 1, say $S_1'$, which corresponds in the links of the vertices of $d_1'$ to paths of the form 3-2 (see Fig. \ref{mesoG}). Let $d_1''$ be the other boundary of $S_1'$.
\item Consider the strip $S_2$ in $X$ of height $\sqrt 3/2$ on $c^{{\mathrm{\bf Z}}}(O_0)$ which contains $A$. (This strip is included in $\Pi$ and its other boundary $d_2'=c^{{\mathrm{\bf Z}}}(A)$ is a singular geodesic of type I in $X$.) There is on $d_2'$ a unique strip $S_2'$ of height $\sqrt 3$ whose other boundary $d_2''$ is a singular geodesic of type I in $X$. \end{enumerate} The strips $S_1$, $S_1'$, $S_2$ and $S_2'$ are represented on Fig. \ref{mesoG}, together with the labellings given by the links at their vertices. \begin{figure}[htbp] \centerline{\includegraphics[width=14cm]{mesoscopB4.eps}} \caption{Exponential mesoscopic rank of the complex $X$}\label{mesoG} \end{figure} \begin{lemma}\label{c13} Let $k\in \NI$ and let $D$ be a flat disk in $X$ of center $O_k$ such that $D\backslash (X\backslash \Pi_0) =D\cap \Pi_0$. If the intersections $D\cap S_i$ and $D\cap S_i'$, $i=1,2$, are non-empty open sets, then $D$ is not included in a flat of $X$. \end{lemma} \begin{proof}[Proof of Lemma \ref{c13}] As we see on the link $L_A$ of $A$, there is a unique lozenge $R$ which corresponds to a label 2 in $L_A$ and which extends the strips $S_1$ and $S_2'$ at the point $A$ to a flat disk in $X$ containing $A$ as an interior point. This lozenge contains a vertex $B$ at distance $\pi$ from $c^{-1}(A)$ in $L_A$, and in turn there is a unique way to extend the resulting configuration to a flat disk in $X$ containing $B$ as an interior point. This disk corresponds to a circle of length $2\pi$ which is labelled 1-3-2-1-2 in $L_B$. Let $R'$ be the lozenge of $X$ distinct from $R$ which corresponds to the label 2 in this circle. It is easy to see that, if $D$ is a flat disk as in the statement of the lemma, then any flat disk $D'$ of center $O_k$ and radius $> k+1$ which contains $D$ must contain the points $A$ and $B$ as interior points and must intersect the lozenge $R'$ along a non-empty open subset.
On the other hand $D$, and a fortiori $D'$, intersects the strip $S_1'$ along a non-empty open set. Thus $D'$ intersects along a non-empty open set the lozenge of $S_1'$ which contains $C$ and whose internal angle at $C$ is $2\pi/3$. But this shows that $D'$ cannot be extended beyond the point $C$, since this would give a cycle of length $2\pi$ in the link $L_C$ containing two successive edges of length $2\pi/3$. Thus neither $D'$ nor $D$ is included in a flat of $X$. \end{proof} We can now conclude the proof of Theorem \ref{mesoGth}. We proceed as in Lemma 59 of \cite{rd}, to which we refer for more details. For $k\geq 32$ let $\mu_k$ be the integer defined in the statement of the theorem and let $\nu_k=\left ({3\over 2}\right )^{\mu_k}$. (Since $k\geq 32$ we have $\mu_k\geq 3$.) Using Lemma \ref{strip}, we can construct, for $i=1,2$, (at least) $\nu_k$ distinct flat strips \[ S_i^1,\ldots, S_i^{\nu_k} \] in $X$ of height ${\sqrt 3\over 2}\mu_k$, the intersection of each of which with $S_i'$ is reduced to $d_i''$. (The lower bound $\nu_k$ is estimated by examining transverse trees in the sets $\cup_{j=1}^{\nu_k} S_i^j$; sharper bounds can easily be obtained, but $\nu_k$ is enough to show exponential growth of the mesoscopic profile.) So let $i=(i_1,i_2)\in \{1,\ldots,\nu_k\}^2$ and consider the subset $\Pi_i$ of $X$ defined by \[ \Pi_i=\Pi_0\cup S_1^{i_1}\cup S_2^{i_2}. \] Then the set $D_i$ of points of $\Pi_i$ at distance $\leq k+1$ from $O_k$ in $\Pi_i$ is a flat disk in $X$ whose boundary contains $B$. Furthermore the disks $D_i$ are pairwise distinct when $i$ varies in $\{1,\ldots,\nu_k\}^2$. For $r\in [0,k+1]$ write $D_i^r$ for the concentric disk of radius $r$ in $D_i$. Then for any fixed $r\in [k,k+1]$ the family of disks \[ \{D_i^r,\ i\in\{1,\ldots,\nu_k\}^2\} \] contains at least $\left ({3\over 2}\right) ^{2\mu_k -4}$ distinct elements. Furthermore all these disks satisfy the assumption of Lemma \ref{c13} and thus are not included in a flat of $X$.
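For instance, at the smallest admissible value $k=32$ one finds, by a direct computation,
\[
\mu_{32}=\left\lceil 32\left({2\over\sqrt 3}-1\right)+\left({2\over\sqrt 3}-3\right)\right\rceil
        =\left\lceil 4.9504\ldots-1.8453\ldots\right\rceil
        =\lceil 3.1051\ldots\rceil = 4\geq 3,
\]
\[
\nu_{32}=\left({3\over 2}\right)^{\mu_{32}}=\left({3\over 2}\right)^{4}={81\over 16},
\qquad
\varphi_O\geq\left({3\over 2}\right)^{2\mu_{32}-4}=\left({3\over 2}\right)^{4}={81\over 16}>5
\quad\text{on }[31,32],
\]
and the lower bound grows exponentially with $k$ since $\mu_k$ grows linearly in $k$.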
Since the vertices $O_k$ are all equivalent under the group $G$, this proves the theorem. \end{proof} \subsection{Exponential mesoscopic rank for $Y$ and the braid group $B_4$.}\label{63} We conclude this section with the proof of Theorem \ref{meso}. \begin{theorem} Let $O$ be a vertex of $Y$ and consider the CAT(0) projection $\pi : Y\to X$ associated to the metric decomposition $Y\simeq X\times \RI$. Then the mesoscopic profile $\varphi_O^Y$ of $Y$ satisfies \[ \varphi_O^Y\geq \varphi_{\pi(O)}^X \] where $\varphi_{O'}^X$ is the mesoscopic profile of $X$ at a vertex $O'\in X$. In particular the braid group $B_4$ is of exponential mesoscopic rank. \end{theorem} \begin{proof} For $r\in \RI$, let $k=\varphi_{O'}^X(r)$, where $O'=\pi(O)$, and consider $k$ distinct flat disks $D_1,\ldots, D_k$ of center $O'$ and radius $r$ in $X$ which are not included in a flat of $X$. Let \[ C_i=\pi^{-1}(D_i)\simeq D_i\times \RI \] be the cylinder of $Y$ corresponding to the decomposition $Y\simeq X\times \RI$. These cylinders are isometric to cylinders in the Euclidean space $\RI^3$, and in particular the balls $B_i$ of center $O$ and radius $r$ in the $C_i$ are all flat balls of $Y$. Furthermore, these balls are not included in flats of $Y$. Indeed, if $B_i\subset \Pi$ where $\Pi$ is a flat of $Y$ isometric to the Euclidean space $\RI^3$, then the projection $\pi(\Pi)$ is a convex subset of $X$ which is isometric to the Euclidean plane $\RI^2$. But this shows that $D_i$ is included in a flat of $X$, a contradiction. Finally, as the balls $B_i$ are pairwise distinct (since the disks $D_i$ are), we obtain that $\varphi_O^Y(r)\geq k$ as claimed. The last assertion follows from Theorem \ref{mesoGth}. \end{proof} \begin{remark} It would be interesting to give examples of groups which act properly with compact quotient on a CAT(0) space of dimension $\geq 3$ of exponential mesoscopic rank, and which do not split as a metric product in which some factor is of exponential mesoscopic rank.
In view of Tits' classification of affine buildings, it seems plausible that CAT(0) simplicial complexes ``whose rank is close to their dimension'' become sparse when the dimension gets strictly greater than 2. Recall here that affine Bruhat-Tits buildings of dimension $\geq 3$ are completely classified by work of Tits \cite{Tits74}, and that this is far from being possible in dimension 2, which offers a great degree of freedom \cite{henri}. We also refer to the paper of Ballmann and Brin \cite{BB} concerning rank rigidity results in dimension 3. \end{remark} \section{More on mesoscopic rank}\label{more} In the present section we investigate possible relations between the Brady complex $X$ and triangle buildings of order 2. Furthermore, we present a group which acts freely and isometrically with compact quotient on a polyhedron of exponential mesoscopic rank that is embeddable into a triangle building. Let us observe first that the link of $X$ (represented on Fig. \ref{fig1}) obviously embeds (simplicially) into the incidence graph $L_2$ of the Fano plane. Such an embedding is made explicit on Fig. \ref{fano} below; the graph $L$ is obtained from $L_2$ by removing a tree $T$ with 5 edges (indicated in dots on Fig. \ref{fano}). We call the \emph{center-edge} of $T$ the only non-extremal edge of $T$---this tree $T$ appears often in \cite{poisson}, where it is called a fishy edge (or a fish bone, depending on the translation). \begin{figure}[htbp] \centerline{\includegraphics[width=5.5cm]{fano.eps}} \caption{The incidence graph $L_2$ of the Fano plane and the link of $X$}\label{fano} \end{figure} The graph $L_2$ is a spherical building and can be identified with the link of triangle buildings of order 2 (see e.g. \cite{Ronan}; triangle buildings are also called affine Bruhat-Tits buildings of type $\tilde A_2$).
By \cite{henri} there are uncountably many such buildings, and their automorphism groups are generically trivial (generic is taken here in the sense of Baire with respect to some appropriate topology). In view of the above embedding $L\hookrightarrow L_2$, it is natural to ask whether the complex $X$ itself can be simplicially embedded into one of these triangle buildings. It turns out that this problem has an elementary answer. \begin{proposition}\label{prop20} Let $X$ be the Brady complex and let $\Delta$ be a triangle building of order 2. There is no simplicial embedding $X\hookrightarrow \Delta$. More generally, any CAT(0) complex $X$ of dimension 2 whose faces are equilateral triangles and whose links at each vertex are isomorphic to $L$ does not embed simplicially into a triangle building. \end{proposition} \begin{proof} Let $X$ be a CAT(0) complex of dimension 2 whose faces are equilateral triangles and whose links at each vertex are isomorphic to $L$, and assume that we are given a simplicial embedding $X\hookrightarrow \Delta$. For a vertex $D$ of $X$ we write $T_D$ for the removed tree, \[ T_D=L_{\Delta,D}\backslash L_{X,D}, \] where $L_{Z,D}$ denotes the link of $D$ in the complex $Z$. Fix some vertex $A\in X$. Then there is a unique triangle in $\Delta$, say $(ABC)$, which corresponds to the center-edge of the tree $T_A$ at the point $A$. Denote by $(ABB')$ and $(ABB'')$ the two other triangles in $\Delta$ adjacent to the edge \begin{figure}[htbp] \centerline{\includegraphics[width=6cm]{plongement.eps}} \caption{Embedding $X$ into a triangle building}\label{fig6} \end{figure} $[A,B]$. In the link $L_{\Delta,B'}$ (resp. $L_{\Delta,B''}$), the edge corresponding to the triangle $(ABB')$ (resp. $(ABB'')$) is extremal in the tree $T_{B'}$ (resp. $T_{B''}$), for otherwise $L_{\Delta,A}\backslash L_{X,A}$ would contain more than 5 edges. Thus the three triangles of $\Delta$ adjacent to the edge $[B,B']$ (resp. $[B,B'']$) do not belong to $X$.
But then the graph $L_{\Delta,B}\backslash L_{X,B}$ contains at least six edges, contradicting our assumptions. Therefore there is no simplicial embedding $X\hookrightarrow \Delta$. \end{proof} \begin{remark} The proof of Proposition \ref{prop20} shows more: namely, it shows that the obstruction to an embedding $X\hookrightarrow \Delta$ is \emph{local}: for $X$ and $\Delta$ as in the proposition, there is no simplicial embedding of simplicial balls of radius 2 in $X$ into simplicial balls of radius 2 in $\Delta$. In other words, the embedding $L\hookrightarrow L_2$ is the best we can do. \end{remark} We will conclude with a discussion of the following question, which is natural in view of the above. \medskip Does (exponential) mesoscopic rank of a CAT(0) complex prevent simplicial embeddings of this complex into an affine Bruhat-Tits building? \medskip It turns out that the answer is negative. It can be shown that the group $G_0$ defined by the presentation \[ G_0=\langle s,t\mid s^{-2}ts^2t=t^2st^{-2}\rangle \] admits a free and isometric action with compact quotient on a CAT(0) simplicial complex $X_0$ of dimension 2 such that: \begin{enumerate} \item there is a simplicial embedding $X_0\hookrightarrow \Delta$ where $\Delta$ is a triangle building of order 2; \item $X_0$ is of exponential mesoscopic rank. \end{enumerate} Since the construction from which $G_0$ is derived is not related to braid groups and would take us too far from the subject of the present paper, we omit the proofs of the above two statements. Let us simply describe the local geometry of $X_0$, which can be instructively compared to that of the Brady complex $X$.
\begin{figure}[htbp] \centerline{\includegraphics[width=5.5cm]{fanomeso.eps}} \caption{The link of a complex of mesoscopic rank that can be embedded into a triangle building}\label{fanomeso} \end{figure} The links at each vertex of $X_0$ (which necessarily embed simplicially into $L_2$) are all isometric (in fact $G_0$ acts transitively on the vertices of $X_0$). They are obtained from the graph $L_2$ by removing 3 edges. Note that these edges do not form a tree; rather, they are irregularly positioned on the graph $L_2$, as was the case for the link $L$ of the Brady complex $X$. The complex $X_0$ has exponential asymptotic rank in the sense of \cite{rd}. The figure above represents the links in $X_0$. The complex $X_0$ itself contains two types of faces: equilateral triangles, and parallelograms of size $2\times 1$ in the Euclidean plane; a representation of these faces and their labellings can be found on Fig. \ref{x0}. \begin{figure}[htbp] \centerline{\includegraphics[width=11cm]{quasifrise.eps}} \caption{Description of the complex $X_0$}\label{x0} \end{figure}
/* Produces a minimal ELF file for espcoredump tests */

/* TASK_NAME_OFFSET is not defined in this file; it is presumably supplied
 * externally (e.g. through a -DTASK_NAME_OFFSET=<n> compiler flag) so that
 * pcTaskName lands at the offset the test expects. */
typedef struct {
    char stuff[TASK_NAME_OFFSET];
    char pcTaskName[16];
} TCB_t;

TCB_t foo;

int main(void)
{
    return 0;
}
TTM Success, Heath Justin Bell For all those of you who have kids that watch Nickelodeon, tell them that Heath Bell is the cousin of Drake Bell from Drake & Josh. Heath Bell was drafted by Tampa Bay in the 69th round of the 1997 draft but did not sign. He was signed in 1998 by the Mets as an amateur free agent. After 6 years in the minors he made his major league debut on August 24, 2004 against the Padres. He pitched three years for the Mets, bouncing back and forth between the Mets and AAA as an injury fill-in. On November 15, 2006 he was traded to the Padres with Royce Ring for Jon Adkins and Ben Johnson. During '07 and '08 he pitched in 171.7 innings, going 12-10 and often acting as the set-up man for Trevor Hoffman. "Hells Bells" may not be playing at Petco Park anymore, but the Bell will toil in the 9th inning this year. I got this back in the mail yesterday from Arizona. It only took 10 days to get there and come back. Heath signed with the green Sharpie that I sent him, and it really pops on this card; he also signed a '08 Heritage short print that I sent him. That is a 50% return rate after 10 days, which leaves Glenn Hoffman, Chip Ambres and Baek to go. TTM Success Another day of work, another couple thousand books and a self-addressed stamped envelope waiting for me when I got home. The envelope was postmarked Phoenix (Arizona, not Oregon) so I knew it had to be one of the Padres I had sent out. It was the two Edgar Gonzalez cards that I had sent him and a thank you note. These days I am usually able to buy a lot of a certain player when they come out, so I keep a couple at home, ask them to sign a couple and send the rest along for them to keep. I think I sent Edgar about 15 of the Bowman card and 1 Topps card. When I mail the request out I try to make it as easy as possible for the player. I include a stamped self-addressed envelope, a Sharpie, and some extra cards for them to keep.
Since I started doing that, I think David Wells is the only one who hasn't signed for me. It took 8 days for the cards to travel from Portland to San Diego to Peoria and back again. Yesterday evening, after too long a day at work and a parent-teacher conference and multiple bus rides, I got home to find my first through-the-mail autograph request of the year returned. I had sent Dirk 9 copies of this wonderful Punk Rock Paint creation. He signed 4 (I only asked him to sign 2) and sent them back in only 7 days. I hope he manages to stick with the Jays, although in his email he said something about maybe playing in Japan this year. Matt Bush Matt, we hardly know you, and what we knew of you wasn't real positive. Matt was the number one pick overall of the 2004 draft. He signed for a bonus of $3,150,000, and then was suspended before he even played one game. He and his brother snuck into a bar and got in a fight, while Matt was still a minor. He played in the Padres minor league system and got hurt, hit south of the Mendoza line, and it was decided that he would become a pitcher. It's been done before; if I remember correctly, Trevor Hoffman was originally drafted as a shortstop. That didn't work out too well, as Bush had to have Tommy John surgery. At the beginning he was dropped from the Padres 40-man roster to make room for Cliff Floyd. Part of the reason for this was that he was allegedly involved in an assault on some high schoolers. The Padres seemed to want to rid themselves of him: they traded him to the Blue Jays for a player to be named later or cash considerations. The Blue Jays bumped fan favorite (at least mine) Dirk Hayhurst off their 40-man roster. This is a case of trash bumping class. I got this signed through the mail in 2005 or 2006. Homer Giles Bush Homer Bush, one of the all-time great baseball names, was drafted in the 7th round of the 1991 draft by the Padres; he never played for the Padres but did win a World Series ring at their expense.
He was traded to the Yankees with Gordon Amerson, Hideki Irabu and Vernon for Rafael Medina, Ruben Rivera and $3 million in April 1997. He made his major league debut in August with the Yankees against the Texas Rangers. In February of 1999 he was traded to the Blue Jays with Graeme Lloyd and David Wells for Roger Clemens. In May of 2002 he signed as a free agent with the Marlins; he signed in December of 2002 with the Padres, but was released shortly thereafter and signed as a free agent with the Yankees. Homer won a World Series ring in the 1998 series, when the Yankees swept the Padres. He went 0-2 in the series, but he was there and won the ring. I got the card off of ebay. Matt Buschmann Matt Buschmann was drafted in the 15th round of the 2006 draft. He has played at Eugene, Lake Elsinore and San Antonio. Hopefully he will be here in Portland this year. I got this signed through the mail last year. An interesting response to a TTM request A few days ago I sent out 6 cards to 4 Padres and 2 former Padres. One of them was Dirk Hayhurst, who had been a career minor leaguer and finally made his major league debut last year with the Padres. At the end of the season he was placed on waivers and the Blue Jays claimed him. I had met him when he pitched for the Portland Beavers; I loaned him some books and we would talk about all kinds of things but baseball. This morning when I got up and checked my email there was an email from Dirk Hayhurst, telling me that he had received the cards, telling me what he had been doing, and that he had landed a book deal and been working on that. Lest you think it is just because he is a baseball player, you should read some of the stuff he wrote for Baseball America and the Canton Rep. I am looking forward to reading whatever he writes. It was awesome to hear from him.
Beyond Belief, Finding the Strength to Bounce Back Beyond Belief, Finding The Strength to Come Back by Josh Hamilton with Tim Keown; 2008; Faith Works, New York, NY; 256 pages; 978-1-59995-161-4; 2/18/09-2/20/09 This is one of the best autobiographies I have read in awhile. I would put it in my top three autobiographies: Charlton Heston, Roddy Piper and Josh Hamilton. These are books that are like sitting down with the authors and sharing stories with them over a cup of coffee. Josh Hamilton was a phenom at age 6, being advanced to a higher age league because he was so good. He was playing with 11 and 12 year olds at 6. He was the BMOC in high school, being stopped between classes his senior year to sign autographs. He was drafted first by the Tampa Bay Rays and signed for a $4 million bonus. He was from a close-knit family who always took care of one another; his parents even lived with him when he started his minor league career. After awhile they moved home and Josh started hanging out in some strange places, getting multiple tattoos. He had never even had a drink before this but now began drinking, then moved on to cocaine, and finally to crack. He was an addict who did not want to acknowledge his addiction even when confronted with positive tests. He was confronted with his addiction and turned to Jesus Christ, and through his relationship with him he was able to make a comeback, first as a man, husband, father, son, friend and finally as a ballplayer. The Reds claimed him in the Rule 5 draft and then after a season traded him to the Rangers to bolster their pitching. Then he put on a hitting performance at last year's Home Run Derby. This is a great book that I have learned a lot from. RRRR Sean Patrick Burroughs Sean Burroughs had an impressive bloodline and resume before major league baseball. His father, Jeff Burroughs, was an American League MVP, and Sean won two Little League World Series in 1992 and 1993 and an Olympic Gold Medal in 2000.
After winning the Little League World Series, at age 12, he appeared on David Letterman and told Dave he wanted to be a gynecologist when he grew up. Sean was drafted by the Padres in 1998 with the 9th overall pick. He made his major league debut on April 2, 2002 against the Diamondbacks. He played parts of 4 seasons with the Padres before being traded to the Rays for Dewon Brazelton. This trade was seen as a trade of two former number one picks needing a change of scenery; alas, it didn't really help either one of them. In August of 2006 he was released by the Rays and signed as a free agent in January of 2007 with the Seattle Mariners. He only played four games at AAA Tacoma before being released by the M's. Hopefully Sean has decided to go to college; he is only 28 and maybe he is fulfilling that dream he shared with David Letterman. I got the card signed through the mail when Sean was with the Padres. I can't come up with a punderful title for this post This card doesn't actually exist but I sent it to Dirk anyway; we'll see if it comes back. Today I sent out cards to Padres who haven't had cards before. I sent to Dirk Hayhurst, now with the Blue Jays, this wonderful card produced by Travis over at Punk Rock Paint. I met Dirk last year when he was pitching for Portland; I provided him books to read and we shared some great conversations. I also sent a card to third base coach Glenn Hoffman, a card that I made from MLB headshots. Then I sent '08 and '09 cards to Edgar Gonzalez, Heath Bell, Chip Ambres, and Cha Seung Baek. I hope to get these back soon, and then in early March I will send to the minor leaguers, the guys in the Bowman Draft Picks and Prospects. My packages always include a letter, a SASE, a Sharpie (for the player to keep) and extra cards for them to keep. I always buy extras and put them in a package with a note explaining that they are for them to keep. Casey Burns Casey Burns was drafted in the first round of the 1999 draft by the Padres.
He played one season in Rookie League at Idaho Falls in 1999, then in 2000 and 2001 he played A ball for the Ft Wayne Wizards. Then he was gone. I got the card from ebay. This concludes today's multiple entries of players that got to A or AA but had Bowman cards. This is the biggest problem with the Bowman rookies: too many of them disappear. Kyler Brandon Burke Kyler Burke was the 35th overall pick in the 2006 draft by the Padres. He played one and a half seasons for the AZL Padres and the Fort Wayne Wizards before he, Rob Bowen and cash considerations were traded to the Cubs for Michael Barrett. He has played for the Boise Hawks and Peoria Chiefs in the Cubs organization. I got this one from Heaven Sent Sports Cards in Tualatin. Posted by Rod (Padrographs) at 12:16 PM No comments: Brian Burgamy Brian Burgamy was drafted in the 9th round of the 2002 draft by the Padres. He played 4 seasons with the Padres minor league teams, getting as high as AA Mobile, then he went to the Phillies organization for two years. In 2008 he was signed by the Mets off the roster of the independent Newark Bears and sent to St. Lucie. I remember finding this card on ebay and saying "who is this?" but getting the card because it is a player pictured in a Padres uniform. Kevin Burford Kevin Burford was drafted in the 15th round of the 1997 draft by the Padres. He played two seasons in the Padres organization before moving to the Rockies organization. He played here in Portland when the team was the short season rookie league team, the Portland Rockies. He played 5 seasons in the Rockies organization, never rising higher than AA. In 2004 he played for the Clearwater Phillies in the Florida State League. And then he seems to have left baseball. I got the card off of ebay.
Posted by Rod (Padrographs) at 11:57 AM No comments: Alonza Benjamin Bumbry Al Bumbry played all but one year of his career with the Baltimore Orioles, but before he even got to the majors he had accomplished more than most of us. His minor league service was interrupted by a tour of duty in Vietnam. He was a platoon leader during 1969 and 1970, and was awarded the Bronze Star, which is given for "heroic or meritorious achievement or service". Al Bumbry was drafted by the Orioles in the 11th round of the 1968 draft and then served in Vietnam in 1969 and '70 before returning to the minors. He made his major league debut on September 5, 1972, and in 1973 was named the American League Rookie of the Year. He was an All Star in 1980 and won a World Series with the Orioles in 1983. He signed as a free agent with the Orioles in 1978 and played for them through 1984; they released him and he signed as a free agent with the Padres for the 1985 season. He played in 68 games, getting 95 at bats and 19 hits and hitting 1 HR. He played his final game October 5, 1985. I believe I got the card signed through the mail. Brian James Buchanan I sure do like Mother's Cookies, especially the frosted animals, and I must confess I really like the pink ones better than the white ones. Weirdness is afoot. By the way, their card sets were pretty cool also; I wish teams still did them. This card is from one of the Keebler sets, who replaced Mother's around the turn of the century. I sent it to Brian when he was with the Padres and got it back pretty quickly. Brian Buchanan was drafted in the first round, 24th pick overall, of the 1994 draft by the New York Yankees. In February of 1998 he was traded to the Twins with Christian Guzman, Eric Milton, Danny Mota and cash for Chuck Knoblauch. He made his major league debut with the Twins against the A's on May 19, 2000. He was traded in July of 2002 to the Padres for Jason Bartlett, playing part of '02, all of '03 and part of '04 for the Padres.
He played in 203 games, coming to bat 353 times with 91 hits and 16 home runs, before being released in 2004. In 2004 he played with the Mets; he signed in 2005 as a free agent with the Rockies and Twins. He signed in 2006 as a free agent with the Reds. In 2007 he played mainly as a DH in Japan with the Fukuoka Softbank Hawks, hitting .285 with 11 homers and 48 RBIs. Last year he played in the minor leagues for the Royals, and in January of this year he re-signed with the Royals. The Code, Baseball's Unwritten Rules and Its Ignore-at-Your-Own-Risk Code of Conduct The Code, Baseball's Unwritten Rules and Its Ignore-at-Your-Own-Risk Code of Conduct by Ross Bernstein; 2008; Triumph Books, Chicago, IL; 240 pages; 978-1-60078-3; 2/8/09-2/11/09 Remember during the first OJ trial, when Marcia Clark or Johnny Cochran would ask for a sidebar, and they got to be a joke? Well, that is what this book suffers from: way too many sidebars. There are boxes with anecdotes everywhere, breaking the flow of the narrative. Bernstein didn't really write a book; he talked to a bunch of players and coaches about the different aspects of the code and then transcribed what they said. He did some research and then cut and pasted it into book form. I was looking forward to this but it was a real disappointment. Some pages were nothing but sidebars, which I think should have been included in the narrative. The book is choppy because of all the different ways it is broken up: chapters on stealing signs, on charging the mound, on running up the lead and the like. I think this could have been told in chronological order, how things have changed over the years. RR James Scott Bruske Jim Bruske was drafted by the Padres in the 7th round of the 1985 draft but did not sign. He was drafted by the Mariners in the 3rd round of the secondary phase of the 1985 draft but did not sign. In 1986 he was drafted by the Indians in the 1st round of the draft.
In 1992 he signed as a free agent with the Astros, and in 1995 he signed as a free agent with the Dodgers. He made his major league debut with the Dodgers against the Phillies on August 25. In 1996 he signed with the Padres, and in 1997 he pitched in 28 games, going 4-1 with 44.7 innings pitched. Later that year he was picked up via the waiver wire by the Dodgers, who traded him in July of 1998 to the Padres for Widd Workman (great name). He went 0-0 in 4 games for the Padres before being traded to the Yankees with Brad Kaufman for Ray Ricken and Shea Morenz. In 2000 he signed as a free agent with the Brewers, where he pitched his final game on May 13, 2000. I got the card from ebay. Julio Cesar Bruno Julio Bruno is the fourth consecutive card of a player who never actually played for the Padres. Julio played in the Padres minor league system from 1990-1996, getting as far as AAA in both 1995 and 1996. In 1997 he played for the Tigers at AA, and from 1998-2000 he played for Tabasco of the Mexican League. From 2001-2006 he managed the Dominican Summer League Royals, in 2007 he was the hitting coach for the Arizona League Royals, and in 2008 he was promoted to manager of the Arizona League Royals. I got the card in a lot on ebay. Anthony Michael Brumley This is the third card in a row to feature a player who never actually played in the major leagues for the Padres. However, Mike Brumley actually had a decent major league career, just none of it with the Padres. Mike Brumley was drafted by the Phillies in the 16th round of the 1980 draft but did not sign; three years later he was drafted by the Red Sox in the 2nd round. A year after being drafted he was traded to the Cubs with Dennis Eckersley for Bill Buckner. Red Sox fans, how about that trade? He made his major league debut on June 16, 1987 against the Phillies. In February of 1988 the Cubs traded Mike and Keith Moreland to San Diego for Goose Gossage and Ray Hayward.
At least Brumley is less than six degrees away from the Hall of Fame. He played at Las Vegas for the Padres in 1988 in 113 games. In March of '89 he was traded to the Tigers for Luis Salazar, a player who bounced between San Diego and the rest of the league. In January of '90 he was traded to the O's for Larry Sheets. After that he was an annual free agent signing with the Mariners, Angels, A's, Marlins and Astros, before retiring in 1995. He has gone on to manage around the minor leagues; in 2008 he managed the Ogden Raptors. I believe I got the card off of ebay. Matt Bruback Matt Bruback was a minor league pitcher, but now he is known as the inventor of the Balance Pro SportBelt. Matt was drafted by the Chicago Cubs in the 47th round of the 1997 draft. He played in the Cubs minor league organization, working his way up to AAA Iowa; then he was traded to the Pirates with Jose Hernandez and Bobby for Aramis Ramirez and Kenny Lofton. He pitched in four games for AAA Nashville before he was claimed off waivers by the Padres. He pitched 10 innings, saving two games, for AAA Portland. The 2004 season was split between the Padres organization and Baltimore, AAA Portland and AA Bowie, pitching in 14 games for the Beavers. In 2005 he pitched at AA and AAA for the Orioles, and in 2006 he pitched in A and AA for the Orioles. I believe that I got the card in an ebay lot. Ray L. Brown Ray Brown never made it to the majors, but he got paid for playing for 12 seasons. He was drafted by the Reds in the 28th round of the 1994 draft. In his first professional season he was named MVP of the Pioneer League. In 1997 he was traded to the Padres for Joey Eischen; he played at Mobile and Las Vegas. In 1998 he was chosen by the Royals from the Padres in the minor portion of the Rule V Draft. In 1999 the Royals loaned him to Tabasco of the Mexican League. He was traded to the Orioles for Jeff Reboulet, but he was released during Spring Training.
He played for the Chico Heat of the Western League during the 2000 season; he signed with the Cubs and went to Spring Training with them in 2001, but was released before the season. He played for Chico again in 2001, and then the Astros invited him to spring training for 2002, but again he was released before the season. 2002 was a season of travel for Ray: he started the season with Tabasco of the Mexican League, then returned to Chico, and then had his contract purchased by the Mariners and played at San Antonio before being released and returning to Chico. From 2003 to 2005 he played with the Kansas City T-Bones of the Northern League. I got the card through ABC Unlimited. Ollie Lee (Downtown) Brown Ollie Brown is the first Padre; he was drafted with the first pick of the 1968 expansion draft. He comes from an athletic family: his brother Oscar played five seasons with the Braves between 1969 and 1973, and his older brother Willie played running back at USC (University of Spoiled Children) and then played for the Rams and Eagles. Ollie pitched and played outfield in the minor leagues; he actually pitched a no-hitter on August 23, 1963 for Decatur of the Midwest League. Ollie was signed by the Giants as an amateur free agent in 1962, and he made his major league debut September 10, 1965. Then he was chosen by the Padres in the expansion draft. He played three and a half seasons with the Padres and hit 52 home runs, 43 of them in the first two seasons. In May of 1972 he was traded to the A's for Curt Blefary, Mike Kilkenny, & Greg Schubert. In June of '72 he was plucked off the waiver wire by the Brewers, and in October of 1973 the Brew Crew traded Ollie, along with Joe Lahoud, Skip Lockwood, Ellie Rodriguez and Gary Ryerson, to the Angels for Clyde Wright, Steve Barber, Ken Berry, Art Kushyner and cash. The Houston Astros purchased him from the Angels.
In June of 1974 he was chosen off the waiver wire by the Phillies, and three years later he played his final game, on September 27, 1977. I got this great 1969 Topps card from ebay. James Kevin Brown There have been three men who have played in Major League baseball that were known as Kevin Brown, although only one of them has a World Series ring. Wikipedia lists a whole batch of people named Kevin Brown, including Tommy Lee Jones' Agent Kay in the Men in Black movies. The Kevin Brown that we will be focusing on this evening was drafted with the 4th pick of the 1986 draft. He made his major league debut on September 30 of 1986 against Oakland. He pitched for Texas until he signed as a free agent with the Orioles in early 1995; in November of 1995 he signed as a free agent with the Marlins. After helping the Marlins win the 1997 World Series, he was traded to the Padres for Derek Lee, Steve Hoff and Rafael Medina. He pitched for the Padres for one year, going 18-7 with a 2.39 ERA. After the 1998 season Brown signed with the Dodgers, where he pitched for five years before being traded to the Yankees for Jeff Weaver, Brandon Weeden, Yhency Brazoban and cash. He played two years for the Yankees before being granted free agency at the end of the 2005 season. Brown was a six-time All-Star who was named the NL Sporting News Pitcher of the Year for the 1998 season. I think I got this card signed through the mail. Jarvis Ardel Brown The only other person I have ever known named Ardel is my brother-in-law's father-in-law, Ardel Dock. A very uncommon name; I wonder how Jarvis came by it. Jarvis was drafted with the 9th pick in the 1986 draft by the Minnesota Twins. He made his major league debut in 1991 against the Toronto Blue Jays, and he won a World Series ring that same year with the Twins. In November of 1992 he signed as a free agent with the Padres. He played in 47 games batting .233 as a Padre. In November of 1993 he was selected off waivers by the Braves.
In December of 1994 he signed as a free agent with the Mets. On June 12, 1995 he signed as a free agent with the Reds; two days later he was sent to the Orioles as part of a conditional deal. He played his final game on October 1, 1995. I got the card through ABC Limited in Arizona. John Christopher Brown Chris Brown left this planet way too soon, a talent that flared brightly but briefly. He suffered burns in a fire at a vacant house he owned on November 30, 2006, and passed away as a result of those injuries on December 26, 2006. Police have never determined whether it was a homicide, suicide or accident. Chris Brown attended Crenshaw High School in Los Angeles, where he was a high school teammate of Darryl Strawberry. He was drafted by the Giants in the second round in 1979 and made his major league debut with them on September 3, 1984 against the Cincinnati Reds. He was named to the National League All-Star Team in 1986. On July 7, 1987 he was traded along with Keith Comstock, Mark Davis and Mark Grant to San Diego for Dave Dravecky, Craig Lefferts and Kevin Mitchell. He spent the last half of the '87 season and all of the 1988 season in San Diego, playing in 124 games, with 402 AB, hitting 8 home runs. On October 28, 1988 he was traded from San Diego with Keith Moreland for Walt Terrell. In June of 1989 he was signed as a free agent by the Pirates. He played his final game April 16, 1989. In 2004 he worked as a truck driver for Halliburton in Iraq, at one point being in a convoy during 2006 in which several drivers and a soldier were killed. I got the card signed sometime while Chris was with the Padres through the mail.
Q: Understanding the central limit theorem I am not understanding the central limit theorem. From wikipedia: ...suppose that a sample is obtained containing a large number of observations, each observation being randomly generated in a way that does not depend on the values of the other observations, and that the arithmetic average of the observed values is computed. If this procedure is performed many times, the central limit theorem says that the computed values of the average will be distributed according to the normal distribution What I'm confused about is... if we have a sample of n observed values, then the average of the sample will be the sum of all the observed values divided by the total number of observed values. So we will have an average... THE average, meaning ONLY one average, so how can ONE value have a "distribution"? Obviously I'm missing something or misinterpreting what the definition is saying, so can somebody help me out? Edit: Should I think of this as like... let's say we have 1 value. It will have an average. Then we have another value, and take the average of the two values. Then a third value, and find the average of the three. Eventually, as you get larger and larger numbers, the "distribution" of all these separate averages will be normal, with the average value eventually equaling the expected value mu? A: You have a bunch of data consisting of independent observations or executions of a random experiment. Each of these is a random variable with a certain distribution (before it is observed, it is not a fixed value but a quantity with a distribution). The key point is that the whole procedure is repeated many times: each repetition produces one sample of observations and therefore one sample average. Collecting an average is itself like making a single observation of a new random variable, so the collection of averages has a distribution of its own. What the CLT says is that when the number of observations in each sample is very large (tends to infinity), this distribution of sample averages is approximately normal.
To state it better: You have independent, identically distributed random variables $X_1,X_2,X_3,\dotsc,X_n$. Let's define a random variable $$Y_n=\frac{X_1+\dotsb+X_n}{n}.$$ Then, suitably standardized, $Y_n$ has an approximately normal distribution as $n\to \infty$: more precisely, $\sqrt{n}\,(Y_n-\mu)/\sigma$ converges in distribution to a standard normal, where $\mu$ and $\sigma^2$ are the common mean and variance of the $X_i$. $Y_n$ is not the average of averages; it is the random variable whose observed values are the sample averages of $X_1,\dotsc,X_n$.
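To see this numerically, here is a quick simulation using only the Python standard library (my own illustration, with an arbitrary choice of distribution, sample size and number of repetitions): each repetition draws one sample and records its average. The collection of averages clusters around the true mean $0.5$ with spread close to $\sigma/\sqrt{n}=\sqrt{1/12}/\sqrt{100}\approx 0.029$, and roughly $68\%$ of the averages fall within one spread of the mean, as a normal shape predicts.

```python
import random
import statistics

random.seed(0)

n = 100              # observations per sample
repetitions = 5000   # number of sample averages collected

# Each repetition: draw n Uniform(0, 1) values and record their average.
averages = [
    statistics.fmean(random.random() for _ in range(n))
    for _ in range(repetitions)
]

# Uniform(0, 1) has mean 0.5 and standard deviation sqrt(1/12) ~ 0.2887,
# so the CLT predicts the averages concentrate near 0.5 with
# spread ~ 0.2887 / sqrt(100) ~ 0.029.
mean_of_averages = statistics.fmean(averages)
spread_of_averages = statistics.stdev(averages)

# For a normal distribution, about 68% of values lie within one spread
# of the mean.
within_one_sigma = sum(
    abs(a - mean_of_averages) < spread_of_averages for a in averages
) / repetitions

print(mean_of_averages, spread_of_averages, within_one_sigma)
```

Increasing `n` shrinks the spread like $1/\sqrt{n}$ without changing the mean, which is exactly the "one average per repetition" picture described above.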
\section{Introduction} \label{sect:Intro} Groups and clusters are commonly viewed as sites where environmental influences can affect the colors, star formation histories and morphologies of their member galaxies. One of the first pieces of empirical evidence supporting this claim was the observation by Butcher \& Oemler that clusters of galaxies contain a higher fraction of blue galaxies at progressively higher redshift, the so-called Butcher-Oemler effect \citep{ButcherOemler1978,ButcherOemler1984}. Their result provided direct observational evidence of strong, rapidly evolving galaxy population colors inside cluster cores with redshift. Since these early papers, the Butcher-Oemler effect has been confirmed photometrically \citep{Rakos1995, Margoniner2000, Margoniner2001, Kodama2001, Goto2003}, spectroscopically \citep{DresslerGunn1982, DresslerGunn1992, Lavery1986, Lavery1988, Fabricant1991, Poggianti1999, Poggianti2006, Ellingson2001}, has been extended to groups \citep{Allington-Smith1993, Wilman2005b, Gerke2007, Cucciati2009a}, and critically discussed in the context of selection biases \citep{Andreon1999, Andreon2004, Andreon2006}. In parallel to these studies, evidence has emerged that the Universe as a whole formed stars more actively in the past than today \citep{Lilly1996, Madau1998, Hopkins2004, Schiminovich2005} and that the typical mass of galaxies where the bulk of star formation occurs is higher in the past than today, the so-called downsizing effect \citep{Cowie1996, Gavazzi1996}. These observations raised the question of whether the Butcher-Oemler phenomenon is caused by physical mechanisms typical of dense environments, which significantly alter the trends displayed by the galaxy population in the coeval field, or simply reflects the evolution of the galaxy population as a whole.
Interestingly, with growing evidence that denser environments only suppress star formation \citep{Balogh2004a, Balogh2004b}, we have started to test whether groups and/or clusters at higher redshifts contain a fraction of blue galaxies that is higher than in their local counterparts, but nevertheless lower than in the coeval field. Since groups contain a large fraction of galaxies in the nearby Universe, nearly $50\%$ \citep{HuchraGeller1982, Eke2004, Berlind2006}, while only a few percent of galaxies are contained in the denser cluster cores, group-related transformations may drive the observed strong decrease in star formation with cosmic time, at least since $z \sim 1$, when these structures started to become predominant according to the hierarchical structure scenarios. In the cores of rich clusters phenomena such as ram pressure stripping have been widely documented in the literature, as for well studied galaxies in the Virgo cluster \citep{Kenney2004, Vollmer2004}, and observed in simulations \citep{Bruggen2008}. In contrast, similar environment-dependent effects in groups are less clearly defined, although possibilities have been presented in the literature, including gradual cessation of star formation induced either by gentle gas stripping and starvation by a diffuse intragroup medium, or by slow group-scale harassment \citep{Larson1980, Moore1999, Gnedin2003, Roediger2005, Kawata2008}. From a theoretical perspective, numerical simulations incorporating the standard cosmological paradigm suggest that galaxy properties (e.g.,~ colors, spin, etc.) are primarily determined by the mass of the dark matter halo in which the galaxy resides \citep{CooraySheth2002}, and that, at a given mass, dark matter haloes in overdense environments assemble at higher redshifts than in underdense environments \citep{Gao2005}.
This framework could provide a simple way of explaining the observed trends in colors with galaxy luminosity, mass and environment, at low \citep{DePropris2004, BlantonBerlind2007} and also at high redshifts \citep{Wilman2005b, Balogh2007}, without resorting to any specific mechanisms acting in groups. Two large recent redshift surveys, VVDS and DEEP2, have addressed this problem by studying both groups \citep{Gerke2007, Cucciati2009a} and local density field measurements \citep{Cucciati2006, Cooper2006, Cooper2007}, although both studies considered only luminosity-selected samples, a choice that, as we discuss later, offers only partial insight into the problem. The question of which variables are needed to fully define galaxy evolution therefore remains unanswered, and is usually considered in terms of either {\it nature} or {\it nurture} processes. This corresponds to asking whether galaxy evolution is driven mainly by internal processes, imprinted at galaxy birth, that operate inside the average galaxy, or whether the group environment has a specific effect on shaping galaxy evolution, because of specific mechanisms taking place in dense, possibly virialized regions, where secular influences have better chances to affect galaxy evolution. To distinguish between the effects of environment and trends related to galaxy evolution with redshift, one needs homogeneous and sizeable group and field galaxy samples, covering a wide redshift range and with reliable measurements of galaxy rest-frame colors, luminosities and masses. These data would allow us to monitor with look-back time the evolutionary histories of galaxies located in different group/field environments, and to disentangle the different dependencies and their relative importance. The advantage of the data-set used in our analysis is that it satisfies all of these requirements.
zCOSMOS is a survey tailored for studying the large scale structure and detecting groups up to $z \sim 1$ \citep{Lilly2007, Lilly2009}. Its large volume coverage and small errors in galaxy redshift measurements enable the production, even for the first batch of $\sim 10000$ measured redshifts, of a large group catalogue containing $102$ groups with $N \geq 5$ spectroscopically confirmed members and a further $\sim 700$ going down to pairs \citep{Knobel2009}. Furthermore this catalogue, because of the precise fine tuning of the algorithm used for group detection, is remarkably free from contamination and incompleteness, especially at the low richness, low velocity dispersion end and, most importantly, its quality is stable as a function of redshift \citep{Knobel2009}. Last but not least, the large amount of precise photometric ancillary data available from the COSMOS survey \citep{Scoville2007} provides robust estimates of the fundamental properties of each galaxy, such as rest-frame luminosities, colors and masses. We are therefore in the best position with our data-set to investigate in detail which processes are the most influential in shaping galaxy evolution. Complementary analyses of the same \10K data-set have been carried out in other papers. \citet{Kovac2009b} in a parallel paper study the influence of group environment in shaping galaxy morphologies. Using the density field measured in \citet{Kovac2009a}, \citet{Zucca2009} and \citet{Bolzonella2009} study the galaxy luminosity and mass functions respectively as a function of environment, while \citet{Cucciati2009b} and \citet{Tasca2009} investigate the dependencies of galaxy colors and morphologies, respectively, on the general density field. Finally \citet{Silverman2009} and \citet{Vergani2009} study how environment plays a role in triggering active galactic nuclei activity and in quenching star formation, respectively. For more details, we refer the interested reader to those papers.
A concordance cosmology is adopted throughout our paper, with $h_{70} = H_0/70$ km s$^{-1} Mpc^{-1}$, $\Omega_{m} = 0.25$ and $\Omega_{\Lambda } = 0.75$. All magnitudes are always quoted in the AB system. \section{Samples used in the analysis} \subsection{The zCOSMOS \10K} \label{sect:sample10K} The zCOSMOS survey is a large spectroscopic survey undertaken in the COSMOS field \citep{Scoville2007}, using 600 hours of observations with the VIMOS spectrograph at the VLT. It consists of two parts: zCOSMOS-bright and zCOSMOS-deep. zCOSMOS-bright is a survey purely magnitude limited in the I-band; when complete it will provide a sample of $\sim 20000$ galaxies in the range $15.0 \leq I_{AB} \leq 22.5$ from the HST ACS imaging \citep{Koekemoer2007} over the whole area of 1.7 deg$^2$ of the COSMOS field. zCOSMOS-deep targets $\sim 10000$ galaxies, selected through color criteria to have $1.4 \la z \la 3.0$, within the central 1 deg$^2$. At completion it will provide redshifts for $\sim 10000$ galaxies with magnitudes $B_{AB} \leq 25.0$. \begin{figure*} \centering \includegraphics[width=17cm,angle=0]{AI_fig1.eps} \caption{The left panel shows the $ra-dec$ distribution of the $\sim 10000$ objects observed in the first half of the zCOSMOS-bright survey. The right panel shows the $ra-dec$ distribution of the ratio of the number of objects with reliable spectroscopic redshift to the total number of potential targets, i.e.,~ non-stellar objects in the parent bright photometric catalogue ($15.0 \leq I_{AB} \leq 22.5$). The color scale provides the legend for the range of values displayed on the plot. In both panels the red rectangle indicates the restricted area used for the analysis presented in this paper, corresponding to $\sim 0.83$ deg$^2$. Within this restricted area the mean value of the sampling rate is around $40$\%, i.e.,~ two objects out of five have a reliable redshift measured (see text for more details).
} \label{fig:radec_distr_map} \end{figure*} The analysis presented in this paper uses the sample of $10644$ objects for which we obtained spectra during the first half of the zCOSMOS-bright observational campaign. This number corresponds to a total of 83 pointings of the VIMOS spectrograph, observed during ESO periods P75, P76, and P77, and includes compulsory targets, i.e.,~ objects with forced slit positioning, and secondary targets, i.e.,~ objects other than the primary target serendipitously falling inside the slit. zCOSMOS-bright observations use the $R = 600$ MR grism and 1 hour integrations to secure redshifts with a high success rate. The wavelength range covered is $5500 \leq \lambda \leq 9700$ \AA. For more details of the survey strategy and characteristics we refer the reader to \citet{Lilly2007} and Lilly et al. (2009). The distribution on the sky of the $\sim 10000$ objects observed in the first half of the zCOSMOS-bright survey is illustrated in the left panel of Fig.~\ref{fig:radec_distr_map}. The vertical banding visible in the external, less finely sampled, regions reflects the quadrant design of VIMOS and an additional pattern introduced by the slit positioning software SPOC \citep{Bottini2005}. This pattern should almost completely disappear at survey completion, since the observational strategy foresees an eight-pass coverage with two mask designs at each pointing. The expected final sampling rate is around $\sim 70$\%. The redshift distribution of the observed galaxies covers the range $0.1 \leq z \leq 1.2 $ and peaks at redshift $\sim 0.7$. The error in the redshift measurement, as determined from repeated observations, is around 100 km s$^{-1}$, an accuracy well suited to the original survey scientific goals of the investigation of large scale structure and the detection of groups. For each measured redshift, we adopted a ranking scheme reflecting our confidence in its correctness.
It is based on six broad confidence classes (0-1-2-3-4-9) reflecting the quality of the redshift measurement as obtained from the spectra. This scheme is similar to that originally adopted in the CFRS \citep{LeFevre1995} and VVDS \citep{LeFevre2005}, but with some further refinements taking advantage of the wealth of photometric information available for each targeted object. The large, exquisite-quality ancillary photometric database provided by the COSMOS survey (from HST data, to Spitzer, Galex, Chandra, CFHTLS, and Subaru data, see \citet{Scoville2007}) has enabled us to derive reliable photometric redshifts for all objects in the zCOSMOS-bright parent photometric catalogue, with an uncertainty as low as $\Delta z \sim 0.01 \times (1+z)$ \citep{Ilbert2009}. The photometric redshift information was used to incorporate in our analysis objects whose spectroscopic redshift, although less secure, was consistent with its photometric value and therefore deemed reliable ($\delta z$ smaller than $\sim 0.08 \times (1+z)$, see Lilly et al., 2009, for more details). In this way one can use $\sim 85$\% of the observed sample, totalling $\sim 8600$ galaxies up to $z=2.0$ ($\sim 9200$ including stars and with no high redshift cut-off), with a nominal spectroscopic confirmation rate of $\sim 98.5$\% as found by duplicate observations. This sample represents roughly half of the final zCOSMOS sample, and when we talk of the \10K we always refer to this subset of objects, the same used to perform group searches \citep{Knobel2009}. For the \10K galaxies, absolute rest-frame magnitudes and stellar masses were obtained using standard multicolor spectral energy distribution (SED) fitting analysis. Rest-frame absolute magnitudes were obtained using the ZEBRA code, for which a detailed description is provided in \citet{Feldmann2006} and Oesch et al. (in prep.).
We note here that the templates used by ZEBRA are the standard CWW templates \citep{Coleman1980} and starburst templates from \citet{Kinney1996}, and the best fit template is normalized to each galaxy's photometry and spectroscopic redshift. Stellar masses in units of solar masses were obtained by fitting stellar population synthesis models to the multicolor SED of the observed magnitudes using the {\it Hyperzmass} code \citep{Bolzonella2009, Pozzetti2009}. In the subsequent analysis, we use stellar masses calculated adopting the \citet{BruzualCharlot2003} libraries, and assuming a Chabrier initial mass function \citep{Chabrier2003}. More details about the {\it Hyperzmass} code can be found in \citet{Bolzonella2009}. \begin{figure*} \centering \includegraphics[width=17cm,angle=0]{AI_fig2.ps} \caption{Panel (a) shows the distribution along the line of sight of the 6204 galaxies used in our analysis, the subset of the \10K galaxies sample located within the central, high sampling rate, region of the survey and in the redshift range $ 0.1 \leq z \leq 1.0 $. Panel (b) shows the distribution along the line of sight of the 1966 group galaxies, while panel (c) shows the same for the sample of 1146 isolated galaxies. In each panel we collapsed the $dec$ axis and expanded by a factor of $\times 10$ the distances corresponding to the $ra$ axis to make the plot easier to read. Labels on the top of the cone indicate the line-of-sight distance in units of comoving $h^{-1}_{70}~Mpc$~, while those on the bottom indicate the redshift.
The transverse dimension at redshift $z=0.1$ is $\sim 7$ $h^{-1}_{70}~Mpc$~, while that at redshift $z=1.0$ is $\sim 50$ $h^{-1}_{70}~Mpc$~, corresponding to the dimension of $\Delta ra \sim 0.87 ~deg$ of our restricted sample.} \label{fig:all_cones} \end{figure*} Panel (a) of Fig.~\ref{fig:all_cones} shows the distribution along the line of sight of the galaxies within the boundaries defined by $149.55 \leq ra \leq 150.42 $, $ 1.75 \leq dec \leq 2.70 $ and $ 0.1 \leq z \leq 1.0 $. These boundaries (see Section~\ref{sect:IncomplCorr}) are those adopted in our analysis to avoid being affected by the inhomogeneous coverage of the \10K. The number of galaxies surviving within these $ra-dec-z$ boundaries equals 6204, of which 1966 are in groups, while 1146 define the so-called isolated galaxy sample (see Section~\ref{sect:sample10Kisolated}). The transverse dimension of this restricted sample along $ra$ is $\sim 7$ $h^{-1}_{70}~Mpc$~ at $z=0.1$, and $\sim 50$ $h^{-1}_{70}~Mpc$~ at $z=1.0$. In the same redshift interval the total contiguous comoving volume sampled is $ \sim 3.3 \times 10^6$ $h^{-1}_{70}~Mpc$~$^3$. \subsection{The 10K group catalogue} \label{sect:sample10Kgroups} In our analysis, we use the catalogue of groups presented in \citet{Knobel2009} and refer the reader to that paper for a detailed presentation of both the group finding algorithm and the group catalogue. Here we summarize the main points and advantages of the adopted group finding algorithm and briefly discuss the resulting group catalogue. \citet{Knobel2009} introduced a novel method, defined as a ``multi-pass procedure'', to achieve an impressive quality in group reconstruction as tested using realistic mock catalogues. This method, when combined with the standard friends-of-friends (FOF) algorithm, yields values of completeness and purity for the group catalogue obtained that are extremely stable with both redshift and number of members observed in the reconstructed groups.
Typical values of these two quantities for groups reconstructed with more than five observed members are around $\sim 80$\% at all redshifts, and do not decrease substantially for groups with a lower number of members. Correspondingly, the interloper fraction always remains below $\sim 20$\% at all redshifts for groups reconstructed with more than five observed members, with only a slight increase for groups with a lower number of members \citep{Knobel2009}. These results provide reassurance that the group catalogue that we use in our subsequent analysis is highly homogeneous up to $z \sim 1$, a fundamental prerequisite, since the aim of this paper is to explore redshift trends in group galaxy colors. If our results are to be reliable, we need to be confident that the group catalogue we use is almost entirely free from redshift dependent biases. The presence of a significantly higher interloper fraction with increasing redshift could surreptitiously increase the fraction of blue (field) galaxies observed in our group catalogue and be mistakenly interpreted as evidence of evolution. The extensive tests performed in \citet{Knobel2009} place the analysis that we perform in the following sections on a solid basis. Panel (b) of Fig.~\ref{fig:all_cones} shows the distribution along the line of sight of the group galaxy population (1966 galaxies in total) within the boundaries defined by $149.55 \leq ra \leq 150.42 $, $ 1.75 \leq dec \leq 2.70 $ and $ 0.1 \leq z \leq 1.0 $. The presence of large structures is clearly delineated by the group galaxy sample. In fact there are quite a few conspicuous structures visible in this plot, e.g.,~ those around redshifts $\sim 0.35$ and $\sim 0.7$, while there are, on the other hand, regions devoid of large structures, e.g.,~ in the redshift range $ 0.4 \leq z \leq 0.6$ \citep[see][for a detailed description of the density field structures in the \10K field]{Kovac2009a}.
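The friends-of-friends linking that the multi-pass procedure builds on can be illustrated with a minimal sketch (a toy implementation of plain FOF only, not of the multi-pass algorithm of \citet{Knobel2009}; the points and the linking length below are arbitrary illustrative values):

```python
import math

def fof_groups(points, linking_length):
    """Minimal friends-of-friends: link every pair of points closer than
    the linking length and return the connected components (the groups)."""
    n = len(points)
    parent = list(range(n))  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= linking_length:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy configuration: two tight pairs and one isolated point.
pts = [(0, 0, 0), (0.4, 0, 0), (10, 0, 0), (10, 0.3, 0), (50, 50, 50)]
print(sorted(len(g) for g in fof_groups(pts, linking_length=1.0)))  # [1, 2, 2]
```

Real group finders replace the $O(N^2)$ pair loop with a spatial index and, for redshift-space catalogues, use different transverse and line-of-sight linking lengths; the union-find bookkeeping stays the same.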
We also note that our survey does not contain any single rich cluster, for example comparable to the Coma cluster in the local Universe. This is not unexpected: because of the size of the volume of the Universe explored by zCOSMOS-bright, the probability of one such cluster being observed is negligible \citep[see also][]{Finoguenov2007}. \subsection{The isolated galaxy sample} \label{sect:sample10Kisolated} We complemented the analysis performed on the sample of group galaxies with a parallel one on a sample of isolated galaxies, i.e.,~ a sample of galaxies located in low density regions. This comparative analysis should highlight the differences -- if any -- in properties (rest-frame colors in our analysis) with respect to the group galaxy sample, and therefore allow us to quantify more reliably the environmental dependencies of the properties explored. To define the isolated galaxy sample, we use the Voronoi Tessellation method \citep{Voronoi1908}. Voronoi Tessellation divides the space occupied by the survey into a set of unique polyhedral sub-volumes, each containing exactly one galaxy and all points in space that are closer to that galaxy than to any other. As a consequence, while galaxies with many neighbors (e.g.,~ those in groups and high density environments) have small Voronoi volumes, relatively isolated galaxies have larger Voronoi volumes. Voronoi Tessellation has been used in the literature as a basis for group-finding algorithms \citep{Marinoni2002, Gerke2005, Cucciati2009a, Knobel2009}. It is quite straightforward to use Voronoi volumes to select a sample of isolated galaxies, defined as galaxies occupying the largest Voronoi volumes. This strategy has the advantage of being non-parametric, i.e.,~ it avoids any arbitrarily chosen smoothing/window profile in defining low density regions.
However, proper care must be taken to exclude galaxies that are close to the survey borders and to correct for the progressive increase in the typical size of Voronoi volumes between low and high redshifts in our flux-limited galaxy sample. To avoid the first problem, i.e.,~ galaxies near the survey boundaries entering the isolated galaxy sample because of their apparently large Voronoi volumes, we decided to restrict the volume of the search for isolated galaxies within the boundaries defined by $149.57 \leq ra \leq 150.41 $, $ 1.76 \leq dec \leq 2.68 $ and $ 0.1 \leq z \leq 1.0 $, which is slightly more restrictive than the limits adopted for the group analysis indicated by the red lines in Fig.~\ref{fig:radec_distr_map}. Furthermore, in all the subsequent analysis we decided to reject all isolated galaxies located in areas of lower sampling, that is galaxies with mean correction factor $~\psi(\alpha,\delta) \ge 5$ (see Section~\ref{sect:IncomplCorr}). For these galaxies, a large measured Voronoi volume could be the result of the low spectroscopic sampling rate in the surrounding area. \begin{figure} \includegraphics[width=9cm,angle=0]{AI_fig3.ps} \caption{Top panel: Distribution of the logarithm of normalized Voronoi volumes as a function of redshift. The stripes extending towards lower Voronoi volume values are due to the presence of groups. Points marked with a cyan cross correspond to galaxies removed from the isolated galaxy sample because their Voronoi volume exceeded by a factor of $100$ that of the median at the corresponding redshift. Bottom panel: histogram of the total distribution of normalized Voronoi volumes.
The shaded blue area corresponds to the last quartile of the distribution, chosen to select our isolated galaxy sample, while the long tail in cyan extending to higher values -- also indicated by the vertical line -- is the one clipped from the sample.} \label{fig:isol_selection} \end{figure} To avoid the second possible problem, of being biased in the definition of isolated galaxies by the progressive decrease in the galaxy density in our flux-limited sample, we computed the median value of the logarithm of the Voronoi volume sizes as a function of redshift using running bins of size $\Delta z \leq 0.2$ in redshift steps of $0.05$. A simple linear fit to this quantity (as deemed reasonable by visual inspection) was then used to normalize all measured Voronoi volumes, correcting for the progressive increase with redshift in the mean inter-galaxy separation. We then selected the highest quartile of the normalized volume distribution obtained in this way, after taking the simple precaution of further rejecting galaxies (148 in total, 80 at $ z \leq 0.25$) whose normalized Voronoi volume was more than 100 times larger than the median one: these are mostly galaxies located too close to the survey borders, as suggested by the large predominance of low redshift objects and by their general distribution on the sky. Figure~\ref{fig:isol_selection} illustrates the method adopted to select isolated galaxies. The final number of isolated galaxies obtained this way is 1146, after the removal of the galaxies (206, 128 of which are located in pairs) listed in our group catalogue. We checked the reliability of our approach by selecting isolated galaxies in simulations. We used the 24 COSMOS mock light-cones kindly provided by M. Kitzbichler \citep{KitzbichlerWhite2007}, based on the Millennium DM N-body simulations \citep{Springel2005}.
We applied to these cones the same observational strategy used to select the \10K: we chose the same pointings observed in the \10K, used SPOC to select {\it observed} targets, and included the same redshift success rate as for the real data. Out of the sample of isolated galaxies obtained from the mocks using the procedure described above, $\sim 60$\% are truly isolated galaxies, with a variance of a few percent from cone to cone, i.e.,~ galaxies that in the mock light-cones are inside a halo that contains only one galaxy down to the $R=26$ magnitude limit. However, when considering the 10K mock samples -- limited to $I_{AB} = 22.5$ and with our sampling rate applied -- this number increases to $\sim 90$\%. In other words, only $\sim 10$\% of the galaxies in the isolated sample selected using our strategy are located in groups with at least two members in the 10K mock samples. Most of this remaining contamination should be removed by the last step of our procedure: the final trimming of the galaxies listed in our real group catalogue. Panel (c) of Fig.~\ref{fig:all_cones} shows the distribution along the line of sight of the isolated galaxy sample, whose uniformity is evident. \section{Measuring $\it{F_{blue}}$~} \label{sect:sampleanalysis} We use in this paper the diagnostic tool introduced in the literature in the seminal work by \citet{ButcherOemler1978}. These authors were the first to note that the fraction of blue galaxies ($\it{F_{blue}}$~ from now onwards) in clusters seems to increase with redshift. Their work started a long-lasting wave of observational and theoretical papers, which is still far from being completed (see the short literature review presented in the introduction). After thirty years, the value of $\it{F_{blue}}$~ is still a valuable and effective empirical tool in studying the dependence of galaxy evolution on the environment in which galaxies reside.
Galaxy color is the easiest parameter to measure among those that exhibit a distinctive bi-modality: spectral class, morphology, star formation rate and metallicity \citep[see][]{Strateva2001, Mignoli2009}. Therefore it is the simplest to adopt in parametrizing the differences between the evolution of group and field or isolated galaxies. As far as its physical meaning is concerned, the rest-frame $(U-B)$ color adopted in our analysis, bracketing the 4000\AA ~break, can be used to study average star formation histories over longer time-scales than emission-line indicators such as, e.g.,~ [OII]. This choice could therefore provide clearer insights into mechanisms that operate on longer time-scales, such as those possibly at work in dense environments like groups, where member galaxies have resided for a significant fraction of their lifetime. Despite the apparent simplicity of this parameter, the origin of the physical mechanisms responsible for the variations in $\it{F_{blue}}$~ in the group/cluster population still remains to be fully explained. In particular, we are still unable to determine the relative influences of processes related to the environment and those that are intrinsic to the galaxy itself, and therefore the dichotomy between ab-initio/internal and external mechanisms responsible for the variation of $\it{F_{blue}}$~ is still an open one. The fraction of galaxies on either side of the bimodality in $(U-B)$ colors has been shown to depend strongly on galaxy luminosity and stellar mass \citep[see, e.g.,~][]{Baldry2004, Baldry2006}. Therefore, in studying the dependence of $\it{F_{blue}}$~ on the group environment, we define and adopt both luminosity and mass volume-limited samples. In this Section, we discuss the strategy adopted to correct for the \10K incompleteness when measuring $\it{F_{blue}}$, the cut-off adopted in defining $\it{F_{blue}}$, and how we estimate errors on this quantity.
\subsection{Correcting for survey incompleteness} \label{sect:IncomplCorr} The left panel of Fig.~\ref{fig:radec_distr_map} shows that the coverage in $ra-dec$ of the \10K remains very uneven. While the mean sampling rate of the \10K is around $\sim 30$\%, this number varies significantly as a function of position: in the central regions the sampling rate is as high as $\sim 70$\%, while it is as low as $\sim 10$\% in the regions near the borders. This unevenness can create problems when defining groups of homogeneous numerosity/richness irrespective of their position in the sky (see Section~\ref{sect:GroupRich}). To correct for this problem we adopted for each galaxy a weighting scheme consisting of two factors: $~\phi(m)$ and $~\psi(\alpha,\delta)$. The first factor, $\phi(m)$, is similar to the one adopted for the luminosity and mass function estimates (see Zucca et al., 2009, for more details). It is obtained by a parabolic fit to the product {\it W} of the inverse of the target sampling rate ($TSR$) and the inverse of the spectroscopic sampling rate ($SSR$): \begin{equation} {\it W } =(1/TSR)*(1/SSR) \end{equation} \noindent $TSR$ is defined as ${TSR} = N_{obs}/N_{phot}$, the ratio of the total number of objects observed to the total number of potential targets, i.e.,~ non-stellar objects in the parent bright photometric catalogue \citep[see][]{Lilly2009}. For the few compulsory targets observed in our survey (i.e.,~ with forced slit positioning) $TSR$ was defined to equal 1. $SSR$ is defined as ${SSR(m)} = N_{spec}(m)/N_{obs}(m)$, the ratio, calculated in bins of apparent magnitude, of the number of observed objects whose redshift was reliably measured to the total number of observed objects. The apparent magnitude dependence takes into account the progressive difficulty of measuring a redshift when moving toward fainter magnitudes.
A more complex scheme, which includes the redshift dependence of $SSR$, does not appreciably alter the final results (see Bolzonella et al., 2009). The second factor $~\psi(\alpha,\delta)$ corrects for the variation, as a function of $ra-dec$, of the mean correction factor expressed by $~{\phi}(m)$. We estimated $~\psi(\alpha,\delta)$ in two passes. In a grid of steps equal to $30\arcsec$ in right ascension and declination and in squares of $2\arcmin\times2\arcmin$, we computed the ratio of the number of observed objects whose redshift was reliably measured to the total number of potential targets, as defined above, within the same area. We then obtained $~\psi(\alpha,\delta)$ by normalizing to unity the mean value of this ratio over the full $ra-dec$ coverage of the \10K survey. The right panel of Fig.~\ref{fig:radec_distr_map} shows in color-scale the resulting function $~\psi(\alpha,\delta)$ before normalization. The parameters chosen in calculating this function allow us to reproduce well the inhomogeneities in the survey, even the vertical banding visible in the left panel of Fig.~\ref{fig:radec_distr_map}. To each galaxy we therefore assigned a weight $w_{i} = \phi(m)\times ~\psi(\alpha_i,\delta_i)$, which is the galaxy weighting scheme used in the following analysis. At the borders of the survey, sampling is lower than average, resulting both in higher galaxy weights and in higher incompleteness in group detection. To alleviate this problem, we decided to restrict the analysis to the central area of the survey, where the inhomogeneity in sampling rate is significantly lower. This region is indicated by the red lines in Fig.~\ref{fig:radec_distr_map} and corresponds to galaxies within the following boundaries: $149.55 \leq ra \leq 150.42 $, $ 1.75 \leq dec \leq 2.70$. We note that our results are relatively insensitive to changes in the strategy used to define the weights, for example larger smoothing boxes in defining $\psi(\alpha_i,\delta_i)$.
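A schematic version of this two-factor weighting could look as follows; this is a toy sketch, with purely illustrative numbers standing in for the survey's $TSR$, $SSR$, and local sampling map:

```python
import numpy as np

# toy per-galaxy inputs (illustrative values only, not survey data)
mag = np.array([20.0, 20.8, 21.3, 21.9, 22.4])   # apparent magnitudes
TSR = np.array([0.35, 0.30, 0.28, 0.25, 0.22])   # target sampling rate
SSR = np.array([0.98, 0.95, 0.92, 0.85, 0.75])   # spectroscopic success rate

# phi(m): parabolic fit to W = (1/TSR) * (1/SSR) as a function of magnitude
W = 1.0 / (TSR * SSR)
phi = np.polyval(np.polyfit(mag, W, deg=2), mag)

# psi(ra, dec): local sampling ratio, normalized to a mean of one over the field
psi_raw = np.array([1.3, 1.1, 1.0, 0.7, 0.4])    # toy local ratios per galaxy
psi = psi_raw / psi_raw.mean()

# final per-galaxy weight w_i = phi(m) * psi(ra_i, dec_i)
w = phi * psi
```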
Even when no weights at all are used, our results are almost unchanged. A weighting scheme is needed when estimating group richness in a homogeneous way (for example when exploring trends of $\it{F_{blue}}$~ as a function of group richness). When dealing with the galaxy group population as a whole, the impact of the use of weights is minimal. \subsection{Computing the blue fraction} \label{sect:CompBlueFrac} We divided galaxies into red and blue sub-samples taking advantage of the observed bimodality in galaxy $(U-B)$ rest-frame colors, visible in Fig.~\ref{fig:colmag} (see also Cucciati et al., 2009). Accordingly, we defined blue galaxies as those with rest-frame colors $(U-B) \leq 1.0 $. This value agrees both with the value chosen by \citet{Gerke2007} in their analysis of $\it{F_{blue}}$~ in the DEEP2 group sample and with the value adopted in a parallel analysis to our own by \citet{Cucciati2009a}. We did not allow this value to vary with galaxy luminosity, as suggested for example by \citet{vanDokkum2000} and \citet{Blanton2006a}. Given the relatively small variation in $M_B$ of the bulk of our galaxy sample (roughly 3 magnitudes), the color-magnitude relationship quoted by these authors would imply a corresponding variation in the cut-off color value of $\leq 0.1$ mag, which we deemed to be negligible. From our data, there is no obvious evidence of evolution to redshift $\sim 1$ in the adopted cut-off value, and in our analysis we therefore decided to keep its value fixed with redshift. After defining the cut-off value between red and blue galaxies, we obtained a set of ${N}_{b}$ blue galaxies from the total sample of ${N}_t$ galaxies, each with a weight $w_i$.
The corrected blue fraction was then given by: \begin{equation} \it{F_{blue}} = {\cal N}_{b} / {\cal N}_{t} \end{equation} \noindent where the number of blue galaxies ${\cal N}_{b}$ and the total number of galaxies ${\cal N}_{t}$ are defined to be: \begin{equation} {\cal N}_b = \sum_{j=1}^M w_j , ~~ {\cal N}_t = \sum_{i=1}^N w_i \end{equation} \noindent where the index $j$ runs over all the blue galaxies, while the index $i$ runs over the full galaxy sample. \subsection{Estimating errors in $\it{F_{blue}}$~} \label{sect:ErrBlueFrac} To estimate errors in the values computed for $\it{F_{blue}}$, we adopted a bootstrap re-sampling strategy. We randomly resampled, with replacement, the entire data set under consideration, e.g.,~ all isolated galaxies in a given volume-limited sample. The error in $\it{F_{blue}}$~ was then estimated as the standard deviation of the $\it{F_{blue}}$~ distribution over 1000 such Monte Carlo samples. We also used the approximate analytical formulas provided by \citet{Gehrels1986} to estimate the error in $\it{F_{blue}}$, but the differences in value with respect to the bootstrapping technique are minimal. In our plots we always show bootstrap errors. Another source of errors and noise in our plots is cosmic variance. At lower redshifts, the volume sampled by the zCOSMOS survey is not large enough to be considered a fair representation of the universal matter distribution. It is therefore possible that the presence of large scale structures introduces large fluctuations in the trends of $\it{F_{blue}}$~ as a function of redshift, lowering significantly $\it{F_{blue}}$~ at the redshifts where these structures are located. Our survey shows quite a few of these prominent structures, for example those located at $z \sim 0.35$ and $z \sim 0.7$, readily visible in the top two panels of Fig.~\ref{fig:all_cones} (see also Kovac et al., 2009).
To alleviate this problem, in our analysis we tried to adopt redshift bins large enough to smooth out this effect as much as possible. \section{Defining luminosity volume-limited samples} \label{sect:VolLim} The zCOSMOS survey provides a unique data-set for measuring the evolution of the blue fraction up to $z \sim 1$. The excellent quality of the observed spectra prevents any possible bias against red, absorption-line only spectra \citep{Lilly2009}, while the simple $I_{AB} \leq 22.5$ magnitude limit used to select survey targets translates into a selection in the rest-frame $B$-band at $z \sim 0.8$. Therefore, when a rest-frame $B$-band selection is adopted, the zCOSMOS galaxy sample is free from significant color-dependent incompleteness in $(U-B)$ rest-frame colors up to the highest redshift bin explored. However, the reader should be warned that completeness in a $B$-band rest-frame selection does not imply completeness in, e.g.,~ a mass selection, as we will discuss at length in Section~\ref{sect:RedefBO} and following. As a consequence, any trend observed in rest-frame $B$-band selected samples needs to be re-examined when the selection criterion of the sample changes \citep[see also, e.g.,~][]{DePropris2004}. The absence of $(U-B)$ color incompleteness in zCOSMOS $B$-band volume-limited samples can be visually appreciated in Fig.~\ref{fig:colmag}, where we plot for different redshift bins (as indicated in each panel) the rest-frame $(U-B)$ color versus~ the rest-frame $B-$band absolute magnitude $M_{B}$. \begin{figure} \includegraphics[width=9cm,angle=0]{AI_fig4.ps} \caption{Rest-frame $(U-B)$ colors plotted vs $B-$band rest-frame magnitudes. In each panel redshift bins of width $\Delta z = 0.1$ have been considered, as indicated by the labels, which also list the total number of galaxies located in each redshift bin. The points in red show galaxies located in groups according to the catalogue obtained from the full \10K.
The vertical dashed line in each panel indicates the absolute magnitude limit corresponding to the volume-limited sample that was chosen for galaxies contained in each redshift range.} \label{fig:colmag} \end{figure} In each panel distinctive red and blue populations of galaxies are visible, with loci that are separated approximately at $(U-B) = 1 $ at all redshifts. The cut-off in the galaxy population distribution visible on the left hand side of each panel is a consequence of the purely $I-$band flux-limited zCOSMOS target-selection strategy and moves towards brighter magnitudes as the redshift increases. However, this progressively brighter cut-off does not introduce obvious biases against red galaxies, as indicated by the cut-off line being nearly vertical in all panels, with the possible exception of the last redshift bin, where the observed $I-$band begins moving blue-ward of the rest-frame $B-$band and a slight slanting of the cut-off in the galaxy population distribution starts becoming appreciable. Therefore, in this last redshift bin we need to be more conservative in the definition of the cut-off in absolute $B$-band rest-frame magnitude to avoid biases against red galaxies, even if this choice results in a smaller number of objects for our analysis. Another factor to consider in our definition of volume-limited samples is that the typical galaxy luminosity evolves with redshift. We need to include an evolutionary term in our definition of the cut-off magnitudes for the volume-limited samples because we aim to select a population of galaxies that is similar with respect to $M^*_{B}$ at all redshifts.
As suggested by the results obtained for the global luminosity function evolution of our sample (see Zucca et al., 2009, for more details), the evolution in $M^*_{B}$ can be parametrized linearly by the equation \begin{equation} M^*_{B \it ev} = -20.3 - 5\times log~(h_{70}) -1.1\times z, \end{equation} \noindent which includes an evolution with redshift of roughly 1 magnitude between $z\sim 0.1$ and $z\sim 1$ for $M^*_{B}$. The galaxy luminosities quoted from now on are always evolutionary--corrected present--day luminosities, to ensure that galaxies of similar luminosity are being compared in different redshift bins. We defined four different luminosity volume-limited samples, from sample I to sample IV, each covering a progressively higher range of redshift, and defined by evolving the cut-off magnitudes $M_{cut-off} = M^*_{B \it ev} + 2.1/+1.5/+0.8/+0.2 $, as illustrated in Fig.~\ref{fig:span_lum}. Table~\ref{tab:vollimnumb} summarizes the properties of these four different volume-limited samples: the different redshift ranges covered and the total numbers of galaxies and isolated/group galaxies contained within the $ra-dec$ limits described in Section~\ref{sect:IncomplCorr}. From now onwards, when we talk of the field population, we always mean the total galaxy sample, i.e.,~ the full galaxy population including group/isolated galaxies. We note that while the full group catalogue was obtained using the entire \10K galaxy catalogue, for each of the volume-limited samples defined above we selected a corresponding uniform sample of groups possessing at least two member galaxies brighter than the $B$-band rest-frame $M_{cut-off}$ considered (Group galaxies I). This strategy avoids the redshift inhomogeneity introduced in our group catalogue by the progressive brightening of the rest-frame $B-$band magnitudes sampled by the survey as redshift increases.
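The evolving cut-off and the resulting sample membership can be sketched as follows; this is a minimal illustration of the definitions above, using a hypothetical test galaxy rather than survey data:

```python
import numpy as np

def m_star_ev(z, h70=1.0):
    """Evolving characteristic magnitude: M*_B(z) = -20.3 - 5 log10(h70) - 1.1 z."""
    return -20.3 - 5.0 * np.log10(h70) - 1.1 * z

def in_sample(MB, z, offset, zmax):
    """True if MB is brighter than the evolving cut-off and z is in range."""
    return (MB <= m_star_ev(z) + offset) & (z >= 0.1) & (z <= zmax)

# (offset, zmax) pairs for samples I-IV, as listed in the text
samples = {"I": (2.1, 0.45), "II": (1.5, 0.6), "III": (0.8, 0.8), "IV": (0.2, 1.0)}

# hypothetical galaxy: MB = -20.2 at z = 0.5 enters samples II and III only
# (outside sample I's redshift range, too faint for sample IV's brighter cut)
MB, z = -20.2, 0.5
membership = {name: bool(in_sample(MB, z, off, zmax))
              for name, (off, zmax) in samples.items()}
```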
A given group will have a different number of members in each volume-limited sample, but within each volume-limited sample each group's numerosity/richness will be measured consistently at all redshifts. Unless explicitly mentioned, when we talk of group galaxies we refer to Group galaxies I. We also introduced a further set of galaxy groups: those that possess at least two members in sample IV (Group galaxies II). By studying the variation in $\it{F_{blue}}$~ for galaxies of different luminosities that are members of this group sample one can hope to disentangle the effect of galaxy luminosity on $\it{F_{blue}}$~ from that of group richness: this is because the groups in this sample should be homogeneous in terms of richness as a function of redshift, irrespective of the magnitude of the member galaxies considered in the analysis (see Section~\ref{sect:VolLimSamples}). For the sake of robustness, the value of the group observed line-of-sight velocity dispersion $\sigma$, whenever used in our analysis, is always estimated using all observed group members, irrespective of their absolute magnitude. \begin{figure} \includegraphics[width=9cm,angle=0]{AI_fig5.ps} \caption{Redshift distribution of the \10K zCOSMOS galaxies. Red points represent galaxies located in groups. We assumed $M^*_{\it ev} = -20.3 - 5\times log~(h_{70}) -1.1\times z$, and the different labels and lines drawn correspond to the four different volume-limited samples discussed in the text. } \label{fig:span_lum} \end{figure} \begin{table*} \caption{Summary of the four volume-limited data samples.
We assume $M^*_{\it ev} = -20.3 - 5~log~h_{70} -1.1~z$.} \label{tab:vollimnumb} \centering \begin{tabular}{c c c c c } \hline\hline & Sample I & Sample II & Sample III & Sample IV \\ $M_{B}$ range & $M_{B}\leq M_{\it ev}^*+2.1$ & $M_{B}\leq M_{\it ev}^*+1.5$ & $M_{B}\leq M_{\it ev}^*+0.8$ & $M_{B}\leq M^*_{\it ev}+0.2$ \\ $z$ range & $0.1\leq z\leq0.45$ & $0.1\leq z\leq0.6$ & $0.1\leq z\leq0.8$ & $0.1\leq z\leq 1.0$ \\ \hline \hline All galaxies & 1798 & 2122 & 2616 & 2182 \\ Isolated galaxies & 315 & 442 & 431 & 326 \\ Group galaxies I & 676 & 670 & 709 & 447 \\ Group galaxies II & 218 & 237 & 412 & 447 \\ \hline \end{tabular} \end{table*} After defining galaxy weights, volume-limited samples and the corresponding group/isolated subsets, we proceeded to estimate $\it{F_{blue}}$, the fraction of blue galaxies, for each galaxy sample and its dependence on group properties, galaxy luminosity, and redshift. \section{Blue fraction as a function of galaxy luminosity and environment up to $z \sim 1$} \label{sect:VolLimSamples} In the local Universe, the correlation between galaxy luminosities and colors is a well-known observational result: more luminous galaxies have typically redder colors than less luminous galaxies \citep[see][and references therein]{Baldry2004}. A similar color segregation has been observed between local groups and field samples: redder galaxies are preferentially located in galaxy groups and clusters \citep[see][and references therein]{DePropris2004}. It is therefore interesting to use our sample to check whether these trends survive at higher redshifts and if they show weakening or even any visible reversal. A similar analysis was performed using DEEP2 data for the redshift range $0.75 \leq z \leq 1.3$ by \citet{Gerke2007}, and using VVDS data for the range $0.25 \leq z \leq 1.2$ by \citet{Cucciati2009a}. 
The VVDS and DEEP2 surveys were the first to use in their investigation a homogeneous dataset from the lowest to the highest redshift bins explored, and a group sample spanning a wide range of richnesses, down to the poorest systems, in contrast to previous work that mainly considered higher richness, and more easily detectable, systems. With respect to these two pioneering large high-redshift surveys, zCOSMOS presents some non-negligible advantages. We have a larger volume coverage than VVDS, enabling us to complete more robust statistical analyses, and smaller errors in galaxy redshift measurements -- around 275 km s$^{-1}$ for VVDS \citep[see][]{LeFevre2005} -- which allows us to compile a group catalogue that is less prone to contamination and incompleteness, especially for low richness and low velocity dispersion systems. We are also less plagued by the color incompleteness (and, more importantly for the subsequent analysis, mass incompleteness) that affects DEEP2 data in the redshift range covered by their analysis, and have the ability to cover the complete redshift range $0.2\leq z \leq 1.0$, monitoring the redshift evolution in $\it{F_{blue}}$~ in a continuous way. As a first step, we explored how $\it{F_{blue}}$~ varies with galaxy luminosity. We defined four independent redshift bins as shown in Fig.~\ref{fig:fbflum}: [0.25:0.45], [0.45:0.6], [0.6:0.8], [0.8:1.0]. Within each of these redshift intervals and using the volume-limited samples defined in Table~\ref{tab:vollimnumb}, we defined sub-samples of galaxies in independent bins of galaxy luminosity. The binning in galaxy luminosity was chosen in such a way as to ensure a sizeable number of galaxies in each environment and redshift bin considered. $\it{F_{blue}}$~ and its error bar were estimated using the procedures described in Section~\ref{sect:sampleanalysis}, while the error bars drawn along the luminosity axis link the upper and lower quartiles of the luminosity distribution of galaxies within each bin.
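The weighted blue fraction and its bootstrap error, as defined in Section~\ref{sect:sampleanalysis}, reduce to a few lines of code. The following sketch uses a synthetic bimodal color distribution and uniform weights purely for illustration:

```python
import numpy as np

def blue_fraction(UB, w, cut=1.0):
    """Weighted blue fraction: sum of blue-galaxy weights over the total."""
    blue = UB <= cut
    return w[blue].sum() / w.sum()

def bootstrap_error(UB, w, n_boot=1000, cut=1.0, seed=1):
    """Standard deviation of F_blue over bootstrap resamples (with replacement)."""
    rng = np.random.default_rng(seed)
    n = len(UB)
    fb = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, size=n)
        fb[k] = blue_fraction(UB[idx], w[idx], cut)
    return fb.std()

# synthetic bimodal (U-B) colors: a blue cloud and a red sequence (toy values)
rng = np.random.default_rng(0)
UB = np.concatenate([rng.normal(0.7, 0.15, 300), rng.normal(1.3, 0.10, 200)])
w = np.ones_like(UB)          # uniform weights for this illustration
fb = blue_fraction(UB, w)
err = bootstrap_error(UB, w)
```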
\begin{figure*} \centering \includegraphics[width=12cm,angle=270]{AI_fig6.ps} \caption{The four panels show $\it{F_{blue}}$~ in different redshift bins, as indicated at the bottom of each panel, as a function of absolute luminosity, evolution corrected to $z \sim 0$ to ensure that similar galaxies are being compared across different redshift bins. Different colors refer to different galaxy samples: red circles refer to group galaxies, blue triangles to isolated galaxies, and black squares to the total galaxy population. The errors on $\it{F_{blue}}$~ are obtained using bootstrapping, while the error bars along the luminosity axis link the upper and lower quartiles of the luminosity distribution of galaxies within the bin considered. At all magnitudes and at all redshifts groups contain fewer blue galaxies than the field and the isolated galaxy population. For all redshift bins considered, and irrespective of the environment, fainter galaxies are always bluer than brighter galaxies. For all environments $\it{F_{blue}}$~ increases with redshift.} \label{fig:fbflum} \end{figure*} Figure~\ref{fig:fbflum} shows the results obtained for the different galaxy samples: red circles for group galaxies, blue triangles for isolated galaxies, and black squares for the total galaxy population. In each redshift bin all the different galaxy populations display a decrease in the fraction of blue galaxies with increasing rest-frame galaxy luminosity, while at fixed luminosity blue galaxies are always less common in the group environment than in the field, and most common among the isolated galaxy population. Figure~\ref{fig:fbflum} therefore suggests that at all redshifts explored the color of galaxies at a given luminosity becomes redder earlier in groups than in the field or in lower density regions.
Furthermore, the differences between the galaxy populations of the three different environments seem to increase at higher luminosities in each of the four panels of Fig.~\ref{fig:fbflum}, and this result echoes a similar one in \citet{Cucciati2006}. Towards redshift $z \sim 1$ the differences among the three environments progressively decrease. However, up to the highest redshift bin explored we do not see any hint of a possible reversal of the trend of $\it{F_{blue}}$~ as a function of luminosity, a robust result since our sample is free from significant color-dependent incompleteness up to $z\sim1$ (see Section~\ref{sect:VolLim}). Such a possible trend reversal was tentatively detected by \citet{Gerke2007}, albeit with large error bars, for the redshift bin $0.7 \leq z \leq 1.0$ and for magnitudes brighter than $M_{B} \sim -21.5$. We used the four volume-limited samples and the three galaxy samples defined in Table~\ref{tab:vollimnumb} to explore in greater detail the redshift trends implied by Fig.~\ref{fig:fbflum}. For each of these samples, we plotted $\it{F_{blue}}$~ as a function of redshift in Fig.~\ref{fig:fbz}, to help determine directly whether the rate of variation in $\it{F_{blue}}$~ differs significantly in groups compared to the field/isolated galaxy population. Each panel refers to a volume-limited sample defined by the labels at its bottom, where red circles indicate $\it{F_{blue}}$~ for group galaxies, while black squares and blue triangles show the same quantity for field and isolated galaxies, respectively. \begin{figure*} \sidecaption \includegraphics[width=12cm,angle=0]{AI_fig7.ps} \caption{The four panels show $\it{F_{blue}}$~ as a function of redshift for each of the different volume-limited samples defined in Table~\ref{tab:vollimnumb}. The label in each panel indicates the range in evolution-corrected, present-day, absolute magnitude for the galaxies plotted (we assumed $M^*_{\it ev} = -20.3 - 5~log~h_{70} -1.1~z$).
Red circles refer to group galaxies, blue triangles to isolated galaxies, and black squares to the total galaxy population. Brown stars correspond, for each volume-limited sample, to the population of galaxies in groups with at least two members in Sample IV, that is, sample Group galaxies II in Table~\ref{tab:vollimnumb}. Color segregation is already in place at redshift $\sim 1$ and increases appreciably moving from higher to lower redshifts for all the volume-limited samples considered. See text for more details.} \label{fig:fbz} \end{figure*} The first piece of information conveyed by Fig.~\ref{fig:fbz} is that color segregation appears to be already in place at $z \sim 1$: panel (d) shows that even in the highest redshift bin explored there is a small, but significant, difference in $\it{F_{blue}}$~ among the different galaxy samples, mirroring the information provided by panel (d) of Fig.~\ref{fig:fbflum}. Furthermore, for each of the luminosity bins explored color segregation increases with cosmic time, as the differences in $\it{F_{blue}}$~ between the group, field and isolated galaxy populations increase significantly moving from high to low redshifts. These results are in good agreement with those from the VVDS survey presented by \citet{Cucciati2009a}. However, we seem to detect evolution in $\it{F_{blue}}$~ across the range $0.75 \leq z \leq 1.0$, in contrast with the results obtained by \citet{Gerke2007} using the DEEP2 data-set. We note that when comparing directly our panel (d) of Fig.~\ref{fig:fbz} with the first panel of their Fig.~7, where the magnitude ranges explored are quite similar and the sample analyzed is purely volume-limited as in our analysis, the disagreement is not so evident. We chose to parametrize the evolution in $\it{F_{blue}}$~ with redshift with a law of the form $\it{F_{blue}}$~ $ \propto (1+z)^{\beta}$.
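Such a power-law fit can be sketched with a standard weighted least-squares routine; the data points below are illustrative only, not the survey measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def fb_model(z, fb0, beta):
    """F_blue(z) = F_blue(0) * (1 + z)**beta."""
    return fb0 * (1.0 + z) ** beta

# toy measurements of F_blue(z) with errors (illustrative values only)
z = np.array([0.35, 0.55, 0.70, 0.90])
fb = np.array([0.45, 0.52, 0.58, 0.64])
fb_err = np.array([0.04, 0.03, 0.03, 0.04])

# weighted fit; pcov gives the covariance of (F_blue(0), beta)
popt, pcov = curve_fit(fb_model, z, fb, p0=(0.4, 1.0),
                       sigma=fb_err, absolute_sigma=True)
fb0, beta = popt
fb0_err, beta_err = np.sqrt(np.diag(pcov))
```

The extrapolated $F_{blue}(z=0)$ and the slope $\beta$, together with their uncertainties, then come directly from `popt` and the diagonal of `pcov`.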
The results of the best-fit solutions obtained with this parametrization are given in Table~\ref{tab:vollimfit} and shown as dashed lines in Fig.~\ref{fig:fbz}. These lines tend to diverge between high and low $z$. A possible interpretation is that toward higher redshift, irrespective of the environment considered, most galaxies in the luminosity ranges explored resided in the blue cloud, while the red sequence remained more or less empty. As cosmic time increases, the blue cloud may then become progressively depleted, and the rate at which this depletion occurs seems to be higher in higher density environments, implying that the star-formation rate is declining more rapidly in groups and clusters. While the extrapolated values of $\it{F_{blue}}$~ at $z \sim 0$ vary as a function both of environment and of the luminosity cut-off considered, the values of $\beta$, within each environment, do not exhibit any appreciable dependence on the chosen luminosity cut-off. On the other hand, there is a noticeable increase in the value of $\beta$ moving from isolated to group galaxies, although the error bars are quite large. Since $\beta$ parametrizes the fractional decrease in $\it{F_{blue}}$~ with cosmic time (or, alternatively, the fractional increase of the percentage of red galaxies with cosmic time), this result suggests that we detect the signature of an environmental dependence of the variations in $\it{F_{blue}}$~ with cosmic time. However, the mechanisms responsible for the environmental trends that we witness cannot be accurately constrained. Our results could be the consequence of physical mechanisms operating in the denser group environment or simply the result of an {\it ab initio} bias relating galaxy luminosity/mass and its environment.
In other words, we could be witnessing the more rapid quenching of star formation - and, as a consequence, the faster build-up of the red sequence - in denser environments, or the delayed and more efficient replenishing of the blue cloud in lower density environments. We will return to this point in the following sections. \begin{table*} \caption{Summary of fit results for $\it{F_{blue}}$~ as a function of redshift in the volume-limited samples defined in Table~\ref{tab:vollimnumb}. We parametrized the evolution of $\it{F_{blue}}$~ with a fit of the form $\it{F_{blue}}$~ $ \propto (1+z)^{\beta}$. We assume $M^*_{\it Ev} = -20.3 - 5\times \log (h_{70}) -1.1\times z$.} \label{tab:vollimfit} \centering
\begin{tabular}{c c c c c c c c c }
\hline\hline
& \multicolumn{2}{c}{Sample I} & \multicolumn{2}{c}{Sample II} & \multicolumn{2}{c}{Sample III} & \multicolumn{2}{c}{Sample IV} \\
$M_{B}$ range & \multicolumn{2}{c}{$M_{B}\leq M_{B \it Ev}^*+2.1$} & \multicolumn{2}{c}{$M_{B}\leq M_{B \it Ev}^*+1.5$} & \multicolumn{2}{c}{$M_{B}\leq M_{B \it Ev}^*+0.8$} & \multicolumn{2}{c}{$M_{B} \leq M_{B \it Ev}^*+0.2$} \\
$z$ range & \multicolumn{2}{c}{$0.1\leq z\leq0.45$} & \multicolumn{2}{c}{$0.1\leq z\leq0.6$} & \multicolumn{2}{c}{$0.1\leq z\leq0.8$} & \multicolumn{2}{c}{$0.1\leq z\leq 1.0$} \\
\hline \hline
& $\it{F_{blue}}$~($z=0)$ & $\beta$ & $\it{F_{blue}}$~($z=0)$ & $\beta$ & $\it{F_{blue}}$~($z=0)$ & $\beta$ & $\it{F_{blue}}$~($z=0)$ & $\beta$ \\
All galaxies & 0.51$\pm$0.06 & 0.88$\pm$0.40 & 0.45$\pm$0.04 & 0.92$\pm$0.27 & 0.36$\pm$0.03 & 0.99$\pm$0.20 & 0.32$\pm$0.03 & 0.96$\pm$0.18 \\
Isolated galaxies & 0.73$\pm$0.15 & 0.16$\pm$0.67 & 0.58$\pm$0.10 & 0.69$\pm$0.44 & 0.48$\pm$0.08 & 0.85$\pm$0.36 & 0.60$\pm$0.12 & 0.23$\pm$0.36 \\
Group galaxies I & 0.29$\pm$0.08 & 2.01$\pm$0.87 & 0.24$\pm$0.06 & 2.05$\pm$0.67 & 0.19$\pm$0.05 & 1.97$\pm$0.50 & 0.16$\pm$0.05 & 1.87$\pm$0.53 \\
Group galaxies II & 0.21$\pm$0.12 & 2.09$\pm$1.85 & 0.19$\pm$0.08 & 1.75$\pm$1.27 & 0.14$\pm$0.05 & 2.20$\pm$0.61 & 0.16$\pm$0.05 & 1.87$\pm$0.53 \\
\hline
\end{tabular}
\end{table*} The group points in Fig.~\ref{fig:fbz} do not correspond to a group population that is homogeneous across each of the four different panels but to the samples indicated as Group galaxies I in Table~\ref{tab:vollimnumb}, i.e.,~ galaxies in groups of more than two members observed within the volume-limited sample plotted in each panel. Moving from panel (a) to panel (d) in Fig.~\ref{fig:fbz}, we consider groups that are intrinsically richer, since they possess two or more members at progressively brighter cut-off magnitudes. Accordingly, the observed decrease in $\it{F_{blue}}$~ for the group galaxy population between the first and the last panel of Fig.~\ref{fig:fbz} is the result of two different effects: the brightening of the galaxy population, an effect easily visible also for the isolated and field samples, and the increasing richness of the groups observed in the brighter volume-limited samples. It is therefore interesting to remove the richness-dependent effect from this plot and to compare galaxies residing in groups of homogeneous richness across the different volume-limited samples, using the sample of groups with two or more members satisfying $M_{B} \leq M_{B \it Ev}^*+0.2$, i.e.,~ the brightest absolute magnitude cut-off in our sample. This way, we select for each panel, except panel (d), whose group sample remains unchanged by definition, a set of groups richer than those considered before. Obviously, once a group survives in the catalogue defined by the more stringent luminosity cut-off, we then plot in the appropriate panel all its members observed within the volume-limited sample under study. The result of this exercise is shown by the brown stars in Fig.~\ref{fig:fbz} and by the last row of Table~\ref{tab:vollimfit}, labeled Group galaxies II. In this case, the groups considered are homogeneous in richness up to $z \sim 1$.
The difference compared to the field population (and the isolated galaxies) increases significantly for these richer groups. In contrast, the dependence of $\it{F_{blue}}$~ on the rest-frame B magnitude of the group population is reduced significantly. Table~\ref{tab:vollimfit} shows that the slopes of the fits to the Group galaxies II points are virtually indistinguishable from one another, irrespective of the luminosity limits adopted in each of the four different panels. \section{Blue fraction as a function of group properties to $z \sim 1$} \label{sect:GroupsSamples} The results obtained in the previous section using the Group galaxies II samples suggest that group richness is an important ingredient in setting the value of $\it{F_{blue}}$, possibly more influential than the galaxy rest-frame B magnitude. Group richness can be considered, albeit with a large scatter, a proxy for the mass of the halo where the group resides (see Knobel et al. 2009). Therefore, the results just obtained echo, at higher redshifts, findings obtained in the local Universe by \citet{Weinmann06}. Using both galaxy colors and specific star formation rate indicators to define samples of early/late type galaxies, these authors showed that at fixed halo mass the dependence of the galaxy type fraction on luminosity is quite weak. Concerning the dependence of $\it{F_{blue}}$~ on group richness and/or velocity dispersion - another possible proxy of halo mass - many conflicting observational results, however, exist in the literature. Some authors have claimed the presence of a relationship between group and galaxy properties \citep[see, e.g.,~][to quote a few]{Biviano1997, ZabludoffMulchaey1998a, Margoniner2001, Martinez2002a, Goto2003, Tanaka2004, Poggianti2006, DeLucia2007, Gerke2007, Koyama2007}, while other authors have claimed that such relationships are not present \citep[see, e.g.,~][]{Smail1998, Ellingson2001, Fairley2002, DePropris2004, Goto2005c, Wilman2005a, Popesso2007}.
In the following, we explore with our sample the dependence of $\it{F_{blue}}$~ on group richness and velocity dispersion. \subsection{Blue fraction as a function of group richness} \label{sect:GroupRich} The use of the term ``richness'' for galaxy clusters dates back to Abell, who introduced a broad classification of clusters into richness classes based on counting galaxies between $m_3$ and $m_3 + 2$ mag, where $m_3$ is the magnitude of the third-brightest galaxy \citep[see][]{Abell1958}. In this paper, we use the term richness for each group simply to indicate the number of members observed in each of the different volume-limited samples. As such, the richness of a group is not an absolute number, but varies depending on the absolute magnitude cut-off chosen to define the sample of groups. A better name for this quantity might be group {\it numerosity}. In estimating group richness/numerosity, however, even within this more limited definition, one has to consider corrections to the number of galaxy members observed, to properly account for the large-scale variations in the mean sampling rate of the \10K. While the mean sampling rate of the \10K is $\sim 30$\% and increases up to $\sim 40$\% in the restricted central area adopted for our analysis, there are large spatial variations in this number (see Fig.~\ref{fig:radec_distr_map}). To correct for this problem, we estimated richness using the weighting scheme discussed in Section~\ref{sect:IncomplCorr}. For each volume-limited sample we simply added galaxy weights to count the group members, writing richness as ${\cal N}_{corr} = \sum_{j=1}^M w_j$, where $M$ is the number of members observed in each volume-limited sample. Figure~\ref{fig:rich_frac} shows the dependence of the value of $\it{F_{blue}}$~ on group richness.
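In code, the weighted richness count ${\cal N}_{corr}$ and the richness bins used below can be sketched as follows; the weight values are hypothetical stand-ins for the sampling-rate weights of Section~\ref{sect:IncomplCorr}, and this is a minimal illustration rather than the actual pipeline:

```python
# Corrected group richness N_corr = sum_j w_j, where each w_j up-weights
# an observed member for the local (spatially varying) sampling rate.
def corrected_richness(member_weights):
    """Sum of the sampling-rate weights of the observed group members."""
    return sum(member_weights)

def richness_bin(n_corr, edges=(4, 10)):
    """Assign a group to one of the richness bins used in the first three
    panels (N_corr <= 4, 4 < N_corr <= 10, N_corr > 10); panel (d) would
    use a single edge at 8 instead."""
    if n_corr <= edges[0]:
        return "poor"
    if n_corr <= edges[1]:
        return "intermediate"
    return "rich"

# Example: three observed members, each weighted by a hypothetical
# inverse sampling rate for its region of the field.
weights = [1.0 / 0.30, 1.0 / 0.40, 1.0 / 0.35]
n_corr = corrected_richness(weights)
```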
In each panel, we consider different redshift ranges, corresponding to the four volume-limited samples defined in Table~\ref{tab:vollimnumb}, and divide the corresponding group sample according to richness. In all panels, the red dashed line indicates the fit to the global galaxy population obtained in Section~\ref{sect:VolLimSamples}. In the first three panels, the yellow points correspond to groups of observed richness ${\cal N}_{corr} \leq 4$, the orange points to groups of observed richness $4 < {\cal N}_{corr} \leq 10$, and the brown points to groups of observed richness ${\cal N}_{corr} > 10$. In panel (d), yellow points correspond to groups of observed richness ${\cal N}_{corr} \leq 8$ and brown points to groups of observed richness ${\cal N}_{corr} > 8$. The limits chosen to divide the groups into bins of richness are arbitrary but ensure that each richness bin contains a sizeable number of group member galaxies (always above $\sim 30$). The main result presented in Fig.~\ref{fig:rich_frac} is that, for all the redshift ranges and volume-limited samples explored, $\it{F_{blue}}$~ decreases monotonically from higher to lower redshifts, and richer groups have a lower value of $\it{F_{blue}}$. This result is in agreement with similar trends observed in the local Universe \citep[see][]{Margoniner2001, Goto2003}. We noted in Section~\ref{sect:sample10Kgroups} that our sample does not contain massive relaxed clusters, especially at low redshift; instead, we probe mainly the poorer clusters and the group environment. For our lower richness groups, the results obtained -- especially in the lowest redshift bin, where we are dealing with the poorest groups of the sample in absolute terms -- blend with those observed for the global field population. We may be concerned that these results are just the by--product of a higher interloper fraction for these (poorer) groups. However, as discussed in detail in Knobel et al.
(2009), the interloper fraction of our group catalogue shows only a minimal increase when the number of detected members decreases. Furthermore, in the local Universe extremely poor groups are known to be dominated by spiral galaxies \citep{ZabludoffMulchaey1998a}, so what we are witnessing is most probably a real physical trend. \begin{figure} \includegraphics[width=9cm,angle=0]{AI_fig8.ps} \caption{Dependence of the blue fraction $\it{F_{blue}}$~ on group redshift and richness. In each panel, the redshift range indicated by the label is considered. In panels (a), (b) and (c), we plot the value of $\it{F_{blue}}$~ for groups with ${\cal N}_{corr} \leq 4$ (yellow points), $4 < {\cal N}_{corr} \leq 10$ (orange points) and ${\cal N}_{corr} > 10$ (brown points), in samples I, II and III respectively. In panel (d) we plot the value of $\it{F_{blue}}$~ for groups in sample IV with ${\cal N}_{corr} \leq 8$ (yellow points) and ${\cal N}_{corr} > 8$ (brown points). The points have been slightly offset in redshift for the sake of clarity. In all panels, the red dashed line corresponds to the fits obtained for the entire galaxy group population, as shown in Fig.~\ref{fig:fbz}. There is a consistent trend for all the samples and all the redshift bins explored: groups with higher ${\cal N}_{corr}$ tend to have a lower fraction of blue galaxies, a trend superimposed on the global decrease of $\it{F_{blue}}$~ moving from high to low redshift.} \label{fig:rich_frac} \end{figure} \subsection{Blue fraction as a function of group velocity dispersion} \label{sect:GroupVeldisp} We also binned our group sample using the observed line-of-sight velocity dispersion $\sigma$, another possible proxy for group mass. The estimate of $\sigma$ is difficult, especially when one is dealing, as in our case, with groups of only a few members with measured redshifts \citep[see][]{Beers1990}.
However, restricting the analysis to groups with an observed number of members equal to or greater than 5 allows us to observe a reasonable correlation between $\sigma$ and group halo mass (Knobel et al. 2009). We therefore used only groups with at least 5 members with measured redshifts in the \10K to explore how $\it{F_{blue}}$~ depends on the measured $\sigma$ in different redshift bins and for different volume-limited samples. Our results are shown in Fig.~\ref{fig:slos_frac}. In panel (a), the yellow point indicates $\it{F_{blue}}$~ for groups with $\sigma \leq 250$ km s$^{-1}$, the orange point for groups with $250 < \sigma \leq 550$ km s$^{-1}$, and the brown point for groups with $\sigma > 550$ km s$^{-1}$. In the three remaining panels, the yellow points show $\it{F_{blue}}$~ for groups with $\sigma \leq 350$ km s$^{-1}$, the orange points for groups with $350 < \sigma \leq 650$ km s$^{-1}$, and the brown points for groups with $\sigma > 650$ km s$^{-1}$. In all panels, the red dashed line corresponds to the fits obtained for the entire galaxy group population in the volume-limited sample, as in Fig.~\ref{fig:rich_frac}, while the red points correspond to the sample of groups detected with at least 5 members in the flux-limited sample, irrespective of their velocity dispersion. Figure~\ref{fig:slos_frac} shows that there is a consistent trend in all panels: groups with higher velocity dispersion tend to have a lower fraction of blue galaxies. Limiting the analysis to groups of at least 5 observed members produces the systematic offset towards lower $\it{F_{blue}}$~ observed in each of the panels when comparing the red points to the dashed red lines: with the requirement of $N \geq 5$ we remove from the sample the poorer groups in each volume-limited sample. We conclude this section with a word of caution.
One would be tempted to compare points as a function of redshift across the last three panels, since the $\sigma$ cut-off chosen is the same for each of them, especially after showing in Section~\ref{sect:isolgrfield} that galaxy luminosity is much less important than group richness in determining $\it{F_{blue}}$. This comparison would apparently result in a statement of {\it non-evolution} of $\it{F_{blue}}$~ as a function of redshift for groups of similar velocity dispersion. However, one needs to be extremely careful when comparing results obtained at different redshifts. As cosmic time increases, a system will experience an increase in its velocity dispersion, as its halo mass grows due to structure growth (see, e.g.,~ the prescriptions obtained using semi-analytic models by \citet{Wechsler2002}, and \citet{Poggianti2006} for an application). Furthermore, one should keep in mind that the imposed cut-off on the number of members observed in the flux-limited sample is expected to introduce a strong bias favoring richer groups moving from panel (a) to panel (d), as the absolute luminosity of the observed galaxies becomes brighter. What appears as an absence of evolution of $\it{F_{blue}}$~ as a function of redshift at a fixed velocity dispersion is thus at least partly caused by the progressive bias against lower richness groups moving from panel (a) to panel (d). Estimates of group richness and group velocity dispersion often have large error bars, due to the paucity of the member-galaxy samples available, and group richness and group velocity dispersion are properties that have a large scatter in their relationship to more fundamental quantities, such as the mass of the halo where the group resides. As a consequence, it is unsurprising to observe a large scatter in the trends relating group richness and group velocity dispersion to the value of $\it{F_{blue}}$.
To avoid producing biased results on evolution, proper care has to be taken to compare properties of group samples that are truly homogeneous in the different redshift bins explored. \begin{figure} \includegraphics[width=9cm,angle=0]{AI_fig9.ps} \caption{Dependence of the blue fraction $\it{F_{blue}}$~ on redshift and on the line-of-sight velocity dispersion of groups. In each panel, the redshift range indicated by the label is considered. In all panels only groups detected with at least 5 members in the flux-limited sample used by the group detection algorithm are plotted, to avoid the large uncertainty in the velocity dispersion measurement when poorer structures are considered. In panel (a), the yellow point shows $\it{F_{blue}}$~ for groups with $\sigma \leq 250$ km s$^{-1}$, the orange point for groups with $250 < \sigma \leq 550$ km s$^{-1}$, and the brown point for groups with $\sigma > 550$ km s$^{-1}$. In the three remaining panels, the yellow points show $\it{F_{blue}}$~ for groups with $\sigma \leq 350$ km s$^{-1}$, the orange points for groups with $350 < \sigma \leq 650$ km s$^{-1}$, and the brown points for groups with $\sigma > 650$ km s$^{-1}$. In all panels, the red dashed line corresponds to the fits obtained for the entire galaxy group population in the volume-limited sample, as in Fig.~\ref{fig:rich_frac}, while the red points correspond to the sample of groups detected with at least 5 members in the flux-limited sample, irrespective of their measured line-of-sight velocity dispersion. In each plot, the points are located in redshift at the median value of the sample considered, with a small offset among the different $\sigma$-limited samples for the sake of clarity. There is a consistent trend for all the redshift bins explored: groups with higher line-of-sight velocity dispersion tend to have a lower fraction of blue galaxies.
} \label{fig:slos_frac} \end{figure} \section{Moving from luminosity to stellar mass: redefining the Butcher-Oemler effect} \label{sect:RedefBO} The results obtained in the previous section can all be interpreted in the framework of the classical Butcher-Oemler effect \citep{ButcherOemler1978}, in its wider context extended to the group population \citep[see][]{Allington-Smith1993}. Taking advantage of the wide redshift and galaxy/group population coverage of the \10K galaxy/group catalogue, we have been able to show that the blueing of the galaxy population in groups/clusters when moving to higher redshift, originally observed by these authors thirty years ago, is a real effect, which differs from that observed for the global galaxy population. It also exhibits specific trends as a function of both galaxy B-band rest-frame luminosity and group properties. These trends are present as a function of environment in all luminosity and redshift bins explored and seem to become progressively more conspicuous moving from $z \sim 1$ to lower redshifts and from lower to higher luminosities. However, almost all galaxy properties depend strongly on galaxy stellar mass, and this is particularly true for galaxy colors. Galaxy stellar mass in turn is known to correlate with environment and can be a key player in determining galaxy properties and in linking them to the environment in which galaxies reside. It is therefore important to check whether the strong effects evident in luminosity-selected samples are still present when the analysis is repeated using stellar-mass-selected samples. In this way, we probe the possibility that these effects are the distorted/amplified reflection -- related to the biased view imposed by the luminosity selection -- of more fundamental relationships either between masses and environment, or between masses and galaxy colors.
In the following Sections, we re-examine the original Butcher-Oemler results using mass-limited samples instead of volume-limited samples. What becomes of the observed strong trends in $\it{F_{blue}}$~ as a function of environment and redshift, shown in, e.g.,~ Fig.~\ref{fig:fbz}, when one utilizes samples complete in mass? Are we able to confirm the existence of a higher proportion of blue galaxies in higher redshift groups with respect to their lower redshift counterparts when using mass-limited samples? Can we still see an excess of red galaxies in groups with respect to the field/isolated galaxy population even when analyzing mass bins? Obviously, analyzing volume-limited, stellar-mass selected samples involves a significant reduction in the galaxy sample size available to our study, as the selection of mass-complete samples implies the rejection of a large number of low-mass galaxies for which our $I_{AB}$-selected redshift survey is incomplete. However, this is an unavoidable step in clarifying the key mechanisms determining the relationships observed for luminosity-selected samples. Galaxy stellar mass has the further advantage of being more {\it stable} than luminosity. The B-band galaxy rest-frame luminosity may indeed change dramatically during a galaxy's lifetime because of bursts of star formation. Even in the absence of such bursts, the rest-frame B-band luminosity evolves with redshift, possibly in different ways for different galaxy populations, and one needs to introduce - as we have - an average evolution correction term to sample homogeneous galaxy populations in the different redshift bins explored.
On the other hand, stellar mass varies to a far lesser extent during a galaxy's life; it also increases due to star formation and mergers, but by a smaller percentage, as confirmed both by observational evidence showing that up to $z\sim1$ the mass function evolves only mildly (see \citet{Pozzetti2007} and references therein), and by numerical simulations \citep{DeLucia2006}. As a consequence, the selection of mass-limited samples eases the task of tracing the same population of galaxies in the different redshift bins explored. In the subsequent Sections, we will investigate the impact of the use of mass-selected samples on our analysis. \section{Defining stellar-mass, volume-limited samples} \label{sect:masscompl} To construct volume-limited, stellar-mass selected samples, we followed a simple approach. For each of the four redshift bins adopted in the previous analysis, we estimated the limiting mass at which even the oldest/reddest galaxies (i.e.,~ those with the maximum possible stellar mass-to-light ratio) would be observable given the magnitude limit of our survey. To estimate this limiting mass, we first calculated the limiting stellar mass of each galaxy, i.e.,~ the stellar mass it would have, at its spectroscopic redshift, if its apparent magnitude were equal to the limiting magnitude of our survey ($I_{AB} = 22.5$). We then used these estimated limiting masses to define, in bins of $(U-B)$ rest-frame color for each redshift bin, the mass ${\cal M}_{cut-off}$ below which $85$\% of the limiting masses of galaxies of that color lie. The value of ${\cal M}_{cut-off}$ for the reddest galaxies in each redshift bin is the one that we use as the limiting mass. \begin{figure} \includegraphics[width=9cm,angle=0]{AI_fig10.ps} \caption{Rest-frame $(U-B)$ colors plotted versus galaxy stellar mass, in solar mass units, for the samples defined in Table~\ref{tab:vollimnumb}. The redshift bin considered is indicated at the top of each panel.
For each panel, only galaxies contained in the corresponding volume-limited sample have been plotted (sample I to sample IV). The points in red show galaxies located in groups, while those in black correspond to the total population of galaxies. The blue dashed line in each panel indicates the color-dependent $85$\% completeness mass-limit (see text for more details on how it is computed). The shaded area indicates the mass bins that we used in our analysis, and its lower boundary in mass equals the limiting mass above which all galaxies, irrespective of their color, are observed in our flux-limited survey.} \label{fig:masslim} \end{figure} Figure~\ref{fig:masslim} shows the distribution of $(U-B)$ colors versus stellar masses for each of the volume-limited samples defined in Table~\ref{tab:vollimnumb}. The points in red show galaxies located in groups, while those in black correspond to the total population of galaxies. The blue dashed line in each panel indicates the fit to the values of the color-dependent $85$\% completeness mass-limit, estimated as described above. The lower boundary of the shaded rectangular area in each panel indicates the limiting mass above which all galaxies, even those with redder colors, are observed in our magnitude-limited survey. Figure~\ref{fig:masslim} confirms that adopting mass volume-limited samples excludes a large number of lower-mass, bright, blue galaxies that were included in the B-band luminosity volume-limited samples. For each of the mass volume-limited samples, Table~\ref{tab:masslimnumb} summarizes its lower mass limit and the number of galaxies contained in the full galaxy sample and in the isolated/group galaxy samples. \begin{table*} \caption{Summary of the four mass volume-limited data samples. {\it Mass} is in units of $\log({\cal M}_*/({\cal M}_{\odot} \times h_{70}^{-2}))$.
} \label{tab:masslimnumb} \centering
\begin{tabular}{c c c c c }
\hline\hline
& Sample M-I & Sample M-II & Sample M-III & Sample M-IV \\
Stellar mass range & \it{Mass}$\geq 10.0 $ & \it{Mass}$\geq 10.3 $ & \it{Mass}$\geq 10.6 $ & \it{Mass}$\geq 10.9 $ \\
$z$ range & $0.1\leq z\leq0.45$ & $0.1\leq z\leq0.6$ & $0.1\leq z\leq0.8$ & $0.1\leq z\leq 0.9$ \\
\hline \hline
All galaxies & 883 & 914 & 1033 & 491 \\
Isolated galaxies & 119 & 141 & 131 & 55 \\
Group galaxies I & 386 & 355 & 330 & 137 \\
Group galaxies II & 155 & 165 & 230 & 137 \\
\hline
\end{tabular}
\end{table*} \begin{figure*} \centering \includegraphics[width=10cm,angle=270]{AI_fig11.ps} \caption{Top panels: normalized histograms of galaxy mass for different galaxy populations. Isolated galaxies are shown in blue, group galaxies in red, and the full sample in black. Labels on top indicate the redshift bin considered and the cut-off mass adopted. Bottom panels: histograms of the normalized differences with respect to the full galaxy sample for the group population (shaded in red) and the isolated galaxy population (shaded in blue). With respect to the whole galaxy population, there is a visible and statistically significant excess of low/high mass galaxies in the isolated/group galaxy sample.} \label{fig:histo_masses} \end{figure*} Comparing the numbers in Table~\ref{tab:masslimnumb} with those in Table~\ref{tab:vollimnumb}, it is clear that the samples that become most depleted in moving from a luminosity to a mass selection are those of isolated galaxies. On average, these samples decrease in number by a factor of $\sim 3$. The group samples, instead, are at most halved. The blue low-mass galaxies that are excluded when moving from luminosity- to mass-limited samples constitute a larger fraction of the galaxies residing in low-density environments.
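In terms of the quantities introduced in Sect.~\ref{sect:masscompl}, and under the simplifying assumption of a fixed stellar mass-to-light ratio (a sketch of the scaling, not the full SED-based estimate), the limiting stellar mass of a galaxy observed at magnitude $I_{AB}$ follows from rescaling its mass by the flux ratio to the survey limit,
\[
\log {\cal M}_{\rm lim} = \log {\cal M}_* - 0.4\,\bigl(I_{AB}^{\rm lim} - I_{AB}\bigr), \qquad I_{AB}^{\rm lim} = 22.5 ,
\]
so that, at fixed mass, brighter galaxies have correspondingly lower limiting masses.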
It is natural to conclude that at least part of the strong trends of $\it{F_{blue}}$~ as a function of environment observed in our volume-limited samples is driven by the large population of lower mass, bright blue galaxies for which we miss the redder, equally low mass, counterparts \citep[see][for a similar suggestion]{DePropris2003}. In other words, the trends that we witness in Fig.~\ref{fig:fbz} are at least partly due to the bias of B-band magnitude volume-limited samples against red, low-mass galaxies, which are too faint to be included given the adopted luminosity cut-off. It remains to be seen whether these trends are still observed when adopting mass volume-limited samples for the analysis, or whether mass is all that is needed to predict galaxy colors, irrespective of environment, a possibility still compatible with our results so far. Such a possibility is the one expected in simple, pure {\it nature} galaxy formation models, where the characteristics of a galaxy (e.g.,~ colors, spin) are primarily determined by the mass of the dark matter halo in which it resides, which is in turn closely related to the galaxy stellar mass on one side, and to the density field on a $\sim 1$~Mpc scale on the other side \citep[see, e.g.,~][]{CooraySheth2002}. In this framework, the color segregation simply mirrors the change in the distribution of galaxy stellar masses as a function of environment. \subsection{Mass segregation in groups up to $z \sim 1$} The variation in the galaxy stellar mass function between different environments has been observed in the local Universe \citep[see, e.g.,~][]{Baldry2006} and, at higher redshifts, in DEEP2 data \citep{Bundy2006}, in VVDS data (Scodeggio et al. 2009), in COSMOS data \citep{Scoville2007}, and in zCOSMOS \10K data (Bolzonella et al. 2009). In this section, we check whether mass segregation is detectable using our group, field, and isolated galaxy samples.
\begin{table*} \caption{Summary of the four stellar-mass binned data samples. {\it Mass} is in units of $\log({\cal M}_*/({\cal M}_{\odot}\times h_{70}^{-2}))$.} \label{tab:massbinnumb} \centering
\begin{tabular}{c c c c c }
\hline\hline
& Sample MM-I & Sample MM-II & Sample MM-III & Sample MM-IV \\
Stellar mass range & $10.0 \leq {\it Mass}\leq 10.5 $ & $10.3 \leq {\it Mass}\leq 10.8 $ & $10.6 \leq {\it Mass}\leq 11.1 $ & $ 10.9 \leq {\it Mass}\leq 11.4 $ \\
$z$ range & $0.1\leq z\leq0.45$ & $0.1\leq z\leq0.6$ & $0.1\leq z\leq0.8$ & $0.1\leq z\leq 0.9$ \\
\hline \hline
All galaxies & 437 & 617 & 885 & 477 \\
Isolated galaxies & 64 & 101 & 117 & 45 \\
Group galaxies I & 174 & 221 & 330 & 132 \\
Group galaxies II & 56 & 95 & 187 & 132 \\
\hline
\end{tabular}
\end{table*} Figure~\ref{fig:histo_masses} shows in its top panels the normalized histograms of the mass distribution for the first three mass volume-limited samples of Table~\ref{tab:masslimnumb}, plotted in red, blue and black for the group, isolated, and all galaxy samples, respectively. The bottom panels show, shaded in red, the difference between the group and all-galaxy normalized histograms, and, shaded in blue, the difference between the isolated and all-galaxy normalized histograms. There is a visible excess of both low-mass galaxies in the isolated galaxy sample and high-mass galaxies in the group galaxy sample, and the significance of this trend, estimated using a K-S test, is always at least $\sim 2.3 \sigma$ for all the mass/redshift ranges considered. For sample M-IV of Table~\ref{tab:masslimnumb}, there is no significant difference between the mass distributions of isolated and group galaxies, a result possibly caused by both the lower number statistics and the narrower mass range considered. We therefore have not plotted the corresponding histograms.
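The significance quoted above rests on the two-sample K-S statistic, i.e.,~ the maximum distance between the empirical cumulative mass distributions of the two populations. A minimal sketch with hypothetical log-mass arrays follows; in practice a library routine such as scipy.stats.ks_2samp, which also returns the significance, would be used:

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_s, x):
        # Fraction of values in sorted_s that are <= x.
        return bisect.bisect_right(sorted_s, x) / len(sorted_s)

    # The ECDFs can only change at observed values, so it suffices
    # to evaluate the difference there.
    grid = a + b
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in grid)

# Hypothetical log-mass samples: the 'group' sample shifted to higher mass.
isolated = [10.1, 10.2, 10.3, 10.4, 10.5]
group = [10.4, 10.5, 10.6, 10.7, 10.8]
d = ks_statistic(isolated, group)  # a large D signals differing distributions
```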
Given these differences in the mass distribution in different environments, we need to define mass bins that are narrow enough for mass segregation to become negligible before we can disentangle the mass/environment influence in determining galaxy colors. Only in this way shall we be able to check whether environment truly has some influence on galaxy colors beyond being the by--product of mass segregation. This is the approach we adopt in the following section, using as mass bins those indicated by the gray shaded areas in Fig.~\ref{fig:masslim}. A K-S test applied to the mass distribution within these bins confirms that there is no residual significant difference in mass among galaxies located in different environments. \subsection{Blue fraction as a function of galaxy mass and environment up to $z \sim 1$} \label{sect:isolgrfield} \begin{figure*} \sidecaption \includegraphics[width=12cm,angle=0]{AI_fig12.ps} \caption{The four panels show $\it{F_{blue}}$~ as a function of redshift for each of the mass-limited samples defined in Table~\ref{tab:masslimnumb}, as indicated by the labels. Red circles refer to group galaxies, blue triangles to isolated galaxies, and black squares to the total galaxy population. Brown stars correspond, for each of the mass-limited samples considered, to the population of galaxies in groups with at least two members in sample IV. While for the lowest mass bin explored there is still a significant residual difference in color as a function of environment, this difference progressively disappears moving to higher masses.} \label{fig:fbz_mass} \end{figure*} We explore how $\it{F_{blue}}$~ changes as a function of environment in bins of mass volume-limited samples. For our analysis, we use the logarithmic mass bins shown by the shaded rectangles in Fig.~\ref{fig:masslim}, whose numbers of galaxies are listed in Table~\ref{tab:massbinnumb}.
Because of the large reduction in our sample when adopting mass volume-limited samples, we used bins that partially overlap in mass, a choice dictated by the desire to have a sufficient number of galaxies in each bin for our findings to be statistically significant. As a consequence, the results shown for the various mass bins are not completely independent. Needless to say, the completion of zCOSMOS bright will enable a much more detailed analysis. Figure~\ref{fig:fbz_mass} shows the fraction of blue galaxies as a function of redshift for the four mass bins and the three galaxy populations listed in Table~\ref{tab:massbinnumb}. In each of the four panels, the red circles show $\it{F_{blue}}$~ for group galaxies, while the black squares and the blue triangles show the same quantity for field and isolated galaxies, respectively. The labels in each panel indicate the mass range under inspection. The first information that this plot conveys is that color segregation is still present in the lowest mass bin explored, the one shown in panel (a), while in the intermediate-mass bins, i.e.,~ in panels (b) and (c), it is barely detectable, and only at the upper ends of the redshift ranges explored. For the highest mass bin, in panel (d), there is no hint of color segregation up to $z \sim 1$, and no evolution with redshift is detectable. We parametrized the evolution in $\it{F_{blue}}$~ with a fit of the form $\it{F_{blue}}$~ $ \propto (1+z)^{\beta}$, and the results are indicated by the dashed lines in Fig.~\ref{fig:fbz_mass}. When dealing with samples defined by bins in mass, however, at odds with what we discussed in Section~\ref{sect:VolLimSamples}, the dashed lines obtained from the fit to the data points seem, if anything, to indicate that the relative differences in $\it{F_{blue}}$~ between the group, field, and isolated galaxy samples progressively disappear moving from high to low redshift.
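The $(1+z)^{\beta}$ parametrization reduces to a straight line in log-log space, so the fit can be sketched with plain least squares; this is an illustrative stand-in, not the actual fitting and error estimation behind the tabulated values:

```python
import math

def fit_power_law(z_values, f_blue_values):
    """Least-squares fit of F_blue = F0 * (1+z)**beta, performed as a
    linear fit of log(F_blue) against log(1+z)."""
    x = [math.log1p(z) for z in z_values]
    y = [math.log(f) for f in f_blue_values]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope of the log-log regression line is beta; the intercept gives F0.
    beta = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
            / sum((xi - mean_x) ** 2 for xi in x))
    f0 = math.exp(mean_y - beta * mean_x)
    return f0, beta

# Noiseless synthetic data: the fit recovers F0 and beta essentially exactly.
z = [0.2, 0.4, 0.6, 0.8]
fb = [0.3 * (1 + zi) ** 1.5 for zi in z]
f0, beta = fit_power_law(z, fb)
```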
One can imagine a time when, irrespective of the environment considered, most galaxies in each of the mass bins explored reside on the red sequence, having exhausted their fuel for star formation, while the blue cloud becomes more or less empty. This seems to be already the case for the highest mass bin in our plot. Panel (d) indicates that the majority of red-sequence galaxies in the mass range $10.9\leq log({\cal M}_*/{\cal M}_{\odot})\leq 11.4$ were already in place, irrespective of the environment, at the highest redshift bin we can explore ($ z \sim 0.9$). In contrast, for the lower mass bins explored, our data display a significant decrease in $\it{F_{blue}}$~ between high and low redshifts. Extrapolating the observed trends further back in time up to $z \sim 1$, one can speculate that there must have been a time when most galaxies resided in the blue cloud, irrespective of their environment. Panel (a) of Fig.~\ref{fig:fbz_mass} clearly suggests that the time when blue galaxies were in the majority ended earlier for galaxies in groups than for those in the field or in isolation, and a similar trend is present, albeit at far lower significance, also for galaxies in panels (b) and (c). Unfortunately, as shown by Table~\ref{tab:massvollimfit}, the error bars on the slopes of the fits are quite large, and it is difficult to draw definitive conclusions on the fractional rate of change in $\it{F_{blue}}$~ for the different environments of each mass bin considered. In each mass bin, all the values obtained for $\beta$ in the three environments considered are compatible with each other, given their large error bars. In parallel with the analysis completed in Section \ref{sect:VolLimSamples}, for the mass volume-limited samples we proceeded to plot in Fig.~\ref{fig:fbz_mass} results for galaxies residing in groups homogeneous in richness across the different panels. 
We again used the sample of groups with two or more members in the brighter absolute magnitude cut-off sample, and then repeated our measurements of $\it{F_{blue}}$~ for the group members satisfying the mass-bin limits. The numbers of galaxies in the group samples defined in this way are those indicated by the entry Group galaxies II in Table~\ref{tab:massbinnumb}. With the possible exception of panel (a), moving to richer groups does not seem to significantly affect the value of $\it{F_{blue}}$. This result is consistent with the previous one: only for galaxies of lower stellar masses do we still see color segregation as a function of environment, and therefore only for these masses can we expect to see a significant dependence of $\it{F_{blue}}$~ on group richness. \begin{table*} \caption{Summary of fit results for the mass bins of Table~\ref{tab:massbinnumb}. We parametrized the evolution of $\it{F_{blue}}$~ with a fit of the form $\it{F_{blue}}$~ $ \propto (1+z)^{\beta}$. {\it Mass} is in units of $log({\cal M}_*/({\cal M}_{\odot} \times h_{70}^{-2}))$.} \label{tab:massvollimfit} \centering \begin{tabular}{c c c c c c c c c } \hline\hline & \multicolumn{2}{c}{Sample MM-I} & \multicolumn{2}{c}{Sample MM-II} & \multicolumn{2}{c}{Sample MM-III} & \multicolumn{2}{c}{Sample MM-IV} \\ Stellar mass range & \multicolumn{2}{c} {$10.0\leq{\it Mass}\leq 10.5$} & \multicolumn{2}{c} {$10.3\leq{\it Mass}\leq 10.8$} & \multicolumn{2}{c} {$10.6\leq{\it Mass}\leq 11.1$} & \multicolumn{2}{c} {$10.9\leq{\it Mass}\leq 11.4$} \\ $z$ range & \multicolumn{2}{c}{$0.1\leq z\leq0.45$} & \multicolumn{2}{c}{$0.1\leq z\leq0.6$} & \multicolumn{2}{c}{$0.1\leq z\leq0.8$} & \multicolumn{2}{c}{$0.1\leq z\leq 1.0$} \\ \hline \hline & $\it{F_{blue}}$~($z=0)$ & $\beta$ & $\it{F_{blue}}$~($z=0)$ & $\beta$ & $\it{F_{blue}}$~($z=0)$ & $\beta$ & $\it{F_{blue}}$~($z=0)$ & $\beta$ \\ All galaxies & 0.27$\pm$0.10 & 1.9$\pm$1.2 & 0.22$\pm$0.06 & 1.3$\pm$0.8 & 0.12$\pm$0.04 & 1.4$\pm$0.7 & 0.06$\pm$0.04 & 
1.0$\pm$1.2 \\ Isolated galaxies & 0.32$\pm$0.18 & 2.4$\pm$1.7 & 0.11$\pm$0.06 & 3.7$\pm$1.4 & 0.11$\pm$0.06 & 2.3$\pm$1.1 & 0.14$\pm$0.20 & -0.5$\pm$2.6 \\ Group galaxies I & 0.15$\pm$0.11 & 2.7$\pm$2.4 & 0.16$\pm$0.08 & 1.5$\pm$1.5 & 0.10$\pm$0.05 & 1.6$\pm$1.1 & 0.10$\pm$0.07 & -0.3$\pm$1.5 \\ Group galaxies II & 0.13$\pm$1.18 & 2.0$\pm$9.9 & 0.20$\pm$0.19 & 0.6$\pm$3.1 & 0.12$\pm$0.08 & 1.2$\pm$1.3 & 0.10$\pm$0.06 & -0.3$\pm$1.5 \\ \hline \end{tabular} \end{table*} An interesting trend suggested by Fig.~\ref{fig:fbz_mass} is that for more massive galaxies the predominance of the redder galaxy population started earlier in cosmic time than for lower mass galaxies. We decided to investigate this trend directly by plotting, at fixed redshift, $\it{F_{blue}}$~ as a function of mass. The results are shown in Fig.~\ref{fig:fixedz}. The label on each of the three panels shows the redshift range adopted, while the color code is, as usual, blue, red, and black for isolated, group, and field galaxies, respectively. We performed this analysis in the following three redshift bins: [0.25:0.45], [0.45:0.60], and [0.60:0.80]. For these bins, we had already defined complete mass-limited samples, as listed in Table~\ref{tab:masslimnumb}. However, to increase the range of masses probed in each redshift bin, we decided to extend the analysis down to masses where, according to the procedure described in Section~\ref{sect:masscompl}, the completeness for the reddest galaxies of our sample in the redshift bin considered was lower than $85\%$. Obviously, such a strategy can be adopted only if one is sure that a representative (in color) sample of the lower mass galaxies under scrutiny is observed in a large fraction of the volume under consideration, so that it can be statistically reconstructed, e.g.,~ by applying a correction using the $V/V_{max}$ technique. 
We therefore lowered our mass limit to masses such that the completeness, even for the reddest galaxies, was always around $100\%$ at the lowest limit of the redshift bin considered. We then weighted each observed galaxy with its corresponding volume correction, estimated as the ratio of the volume contained within the [$z_{min}$:$z_{max}$] bin and the actual volume up to which the galaxy can be observed within the survey's $I_{AB} \leq 22.5$ selection. Filled points in Fig.~\ref{fig:fixedz} refer to mass bins where we are complete, while empty points refer to the lower mass bins where the $V/V_{max}$ corrections discussed above have been applied. \begin{figure*} \centering \includegraphics[width=10cm,angle=0,angle=270]{AI_fig13.ps} \caption{The three panels show, in the redshift bin indicated by the top labels, $\it{F_{blue}}$~ as a function of galaxy stellar mass. Red circles refer to group galaxies, blue triangles to isolated galaxies, and black squares to the total galaxy population. Error bars along the y-axis are those obtained by bootstrap, and those along the x-axis indicate the interquartile ranges of the mass distribution in the bin under scrutiny.} \label{fig:fixedz} \end{figure*} On the x-axis we plot the median mass value for each mass bin, while the y-axis shows the value of $\it{F_{blue}}$~ for the same bin. The error bars along the y-axis are those obtained with a bootstrap analysis, and the ones along the x-axis indicate the interquartile ranges of the mass distribution within the mass bin considered. The choice of the mass bins is somewhat arbitrary, as for the isolated galaxy sample we were forced to split the sample into fewer bins for statistical reasons. The trends displayed in the lowest redshift bin agree qualitatively well with similar trends detected at $z \sim 0$ \citep[e.g.,~][]{Kauffmann2004, Baldry2006}, showing a clear dependence of $\it{F_{blue}}$~ both on mass and on environment. 
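The $V/V_{max}$ weighting described above can be sketched as follows. The `volume` function here is a deliberately crude Euclidean toy (distance $\propto z$) standing in for the comoving volumes of the adopted cosmology, and `z_lim` is a hypothetical placeholder for the redshift out to which a given galaxy remains above the $I_{AB} \leq 22.5$ cut.

```python
def volume(z_lo, z_hi):
    # Toy "volume" between two redshifts: Euclidean, distance ~ z.
    # A real analysis would use comoving volumes for the cosmology.
    return z_hi**3 - z_lo**3

def vmax_weight(z_min, z_max, z_lim):
    """Weight = V(bin) / V_max for a galaxy observable only out to
    z_lim within the [z_min, z_max] redshift bin (z_lim hypothetical,
    set by the magnitude selection)."""
    z_top = min(z_max, z_lim)
    if z_top <= z_min:
        raise ValueError("galaxy not observable inside the bin")
    return volume(z_min, z_max) / volume(z_min, z_top)

# A galaxy visible through the whole bin gets weight 1; one visible
# only through part of the bin is up-weighted accordingly.
w_full = vmax_weight(0.25, 0.45, 0.60)
w_part = vmax_weight(0.25, 0.45, 0.35)
```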
However, there are some new interesting points, observed thanks to the unprecedented wide redshift/mass ranges covered by our dataset. On one hand, in all redshift bins more massive galaxies always display a lower $\it{F_{blue}}$, with values near zero, irrespective of the environment they live in, while for lower mass galaxies the value of $\it{F_{blue}}$~ rises towards unity, again irrespective of the environment they live in. Therefore, in each redshift range, and more clearly in the first two panels plotted, where the mass coverage is wider, we witness a progressive saturation of $\it{F_{blue}}$~ towards high/low values at the extremes of the mass ranges studied. However, there is a restricted range of masses for which the colors of galaxies show a visible dependence on environment. This mass range is the one where both sides of the bimodal distribution of galaxy colors are well populated and we can detect a considerable environment-dependent variation of $\it{F_{blue}}$. This result echoes a similar one obtained by \citet{Kauffmann2004} in the local Universe. On the other hand, moving from lower to higher redshifts we witness a progressive increase of $\it{F_{blue}}$~ in each mass bin, with the possible exception of the highest masses, as already observed in the previous section. Such a decrease of $\it{F_{blue}}$~ as cosmic time goes by seems to be accompanied by a progressive {\it opening} along the x-axis of the difference between the different environments, most prominent in the mass ranges for which $\it{F_{blue}}$~ $\sim 0.5$. \subsection{Detection of the possible signature of environmental effects} \label{sect:signature} It is interesting to quantify the trend discussed at the end of the previous section using a simple parameter: the value, for each mass bin and environment, of the redshift at which $\it{F_{blue}}$~ $= 0.5$. 
We can call this quantity {\it t$_{50-50}$}, to indicate that it corresponds to the time when the galaxies in the environment and mass bin considered were equally partitioned between blue and red colors. Although obtained through a slightly different type of analysis, this quantity is equivalent to the transitional mass m$_{tr}$ identified by various authors, both in the low redshift regime by \citet{Baldry2004} and \citet{Kauffmann2004}, and at higher redshifts by \citet{Bundy2006} and, for the \10K, by Bolzonella et al. (2009). Figure~\ref{fig:down} shows the value of {\it t$_{50-50}$}, expressed in units of time on the left-hand scale and redshift on the right-hand scale, for different galaxy stellar masses. The triangles, circles, and squares refer to the samples of isolated, group, and field galaxies, respectively. Filled points are estimated from Fig.~\ref{fig:fbz_mass} using the fits to the points plotted in each mass bin, as shown in Table~\ref{tab:massvollimfit}. The values obtained in this way for {\it t$_{50-50}$} do not need any incompleteness correction, since they are observed directly in our \10K sample in a mass range where we are complete, but they cover only a limited range of masses and environments. Empty points are obtained using a $V/V_{max}$ correction and Fig.~\ref{fig:fixedz}. These values of {\it t$_{50-50}$} are therefore more uncertain, since they are based on incompleteness corrections. \begin{figure*} \sidecaption \includegraphics[width=12cm,angle=0]{AI_fig14.ps} \caption{The time {\it t$_{50-50}$}, at which $\it{F_{blue}}$~$ \sim 0.5$, is plotted as a function of galaxy stellar mass. The left scale is in units of cosmic time, in Gyr, while the right scale is in redshift. The triangles, circles, and squares refer to the samples of isolated, group, and field galaxies, respectively. Filled points are those corresponding to mass bins where we are complete, while empty points refer to the lower mass bins, where the $V/V_{max}$ corrections are needed. 
The shaded boxes have been obtained from \citet{Baldry2006}. See text for more details.} \label{fig:down} \end{figure*} For filled points, the error bars along the x-axis link the upper and lower quartiles of the mass distribution in the mass bin considered, while the error bars along the y-axis are obtained from the {\it r.m.s.} of the value of {\it t$_{50-50}$} obtained by bootstrapping the sample of galaxies that enters the corresponding panels of Fig.~\ref{fig:fbz_mass}. For empty points, the error bars along the y-axis indicate the redshift bin where the value of {\it t$_{50-50}$} was estimated, while the error bars along the x-axis show the upper and lower interval for the masses as obtained from the error bars in Fig.~\ref{fig:fixedz}. For the sake of comparison, we have added boxes indicating the masses for which {\it t$_{50-50}$} corresponds to $z \sim 0.05$, as obtained from the relationship between mass, galaxy colors, and environment in the local Universe determined by \citet{Baldry2006}. These values are only indicative and extracted from their Fig. 11, panel b, where curves of the fraction of red galaxies versus~ stellar mass are shown in 12 different bins of galaxy density. We chose to use as representative of the global population the two central curves of their plot, corresponding to $-0.2 \leq log(\Sigma) \leq 0.2$, while for isolated/group galaxies in the local Universe we used the curves covering densities $-0.8 \leq log(\Sigma) \leq -0.4$ and $+0.4 \leq log(\Sigma) \leq +0.8$. This last choice was made considering that our total span in densities is not as wide as theirs: the bulk of the galaxy population in our groups/isolated galaxies is located in regions that are roughly a factor of 3 above/below the median densities (see Fig. 15, panel a in Kovac et al., 2009, and a similar plot for our isolated galaxy population). 
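For the filled points described above, the redshift entering {\it t$_{50-50}$} follows directly from inverting the fitted power law $F_{blue}(z) = F_{blue}(0)\,(1+z)^{\beta}$ at $F_{blue} = 0.5$. A minimal sketch, with illustrative fit values of the right order of magnitude rather than the actual table entries:

```python
# Inverting F_blue(z) = F_blue(0) * (1+z)^beta for F_blue = 0.5:
#   z_50 = (0.5 / F_blue(0)) ** (1/beta) - 1
# The fit values below are illustrative placeholders only.
f0, beta = 0.22, 1.3

z_50 = (0.5 / f0) ** (1.0 / beta) - 1.0

# Sanity check: the power law evaluated at z_50 returns 0.5.
check = f0 * (1.0 + z_50) ** beta
```

Converting $z_{50}$ into a cosmic time (the left-hand scale of the figure) then requires the adopted cosmology's age-redshift relation, which is omitted here.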
The vertical size of the boxes plotted in Fig.~\ref{fig:down} corresponds to the redshift range of the sample used in \citet{Baldry2006}, while the horizontal box size corresponds to the range of galaxy stellar mass values where the fraction of galaxies in the red sequence equals 0.5 for the three different environments defined by the curves mentioned above. It should be noted that the masses in \citet{Baldry2006} were estimated using a \citet{Kroupa2001} initial mass function and recurrent bursts of star formation superimposed on continuous star-formation models. As a consequence, the \citet{Baldry2006} masses are systematically offset towards higher values with respect to our masses by a factor that can be as high as 0.15 dex, possibly explaining the slight offset of the local points with respect to the trends displayed by our high-z points. Figure~\ref{fig:down} highlights the main result of our paper. The first visible trend is that, as cosmic time goes by, the typical mass at which the galaxy population is equally partitioned between red and blue galaxies moves progressively to lower values. This is another way of expressing the well-known downsizing pattern observed in galaxy population evolution. In our plot, the decrease with decreasing redshift of the K-band luminosity of galaxies dominated by star formation, as originally reported by \citet{Cowie1996}, translates into a progressive increase of {\it t$_{50-50}$} when considering galaxies of lower masses. But this global behavior displays differences depending on the subset of the galaxy population we are considering. A consistent trend emerges, despite the large error bars: for each mass considered, {\it t$_{50-50}$} is progressively delayed moving from groups to the field and to the isolated galaxy population. 
In other words, the downsizing of the galaxy population is further modulated by the environment: galaxies located in more massive halos (groups) become red earlier in cosmic time, a trend that again shows a downsizing behavior on the larger scales now considered. The trends displayed by our data match well those observed in the local Universe by \citet{Baldry2006}. Last but not least, another interesting trend suggested by Fig.~\ref{fig:down} is the convergence, visible at higher masses, of the value of {\it t$_{50-50}$}, irrespective of the environment considered. We are aware that this interpretation is plagued by uncertainties, as for these high masses the redshifts are correspondingly higher and are those most affected by various incompleteness/contamination effects. Two possible biases can be at play at higher redshifts: the progressive degradation in the efficiency of the group/isolated galaxy algorithms and the progressive incompleteness towards the red galaxy population. Both biases act in the direction of reducing the differences between the color properties of the galactic populations we are studying. However, we expect these two biases to be minimal, as discussed at length in the previous sections. One should also consider that galaxies residing in more extreme density regimes, such as those represented by rich cluster cores and not observed in our sample, could display a residual difference from the general galaxy population even at redshift $\sim 1$ \citep[see, e.g.,~][]{Tanaka2008}. As already suggested in the introduction, however, the physical mechanisms responsible for these differences are presumably not the same as those at play in the group environment. Our results parallel those obtained, although by a different kind of analysis, by Bolzonella et al. (2009), and those shown for galaxy morphologies by \citet{Kovac2009b}. 
We note that the evidence we presented, i.e.,~ the faster shutdown of star formation in the group environment (as the color transition from blue to red galaxies can broadly be interpreted), was obtained thanks to the unprecedented wide redshift, mass, and environment ranges covered by our survey, and cannot be interpreted using only {\it ab initio}/internal mechanisms. The different value of {\it t$_{50-50}$} for galaxies of different stellar masses, irrespective of environment, can be explained by resorting to internally driven mechanisms shutting down star formation. The presence of active galactic nucleus (AGN) feedback and shock-heating physics can be enough to explain the anti--hierarchical nature of the relation between stellar mass and stellar age of galaxies, because these mechanisms can be more efficient in more massive galaxies \citep[see][]{Birnboim2003, Bower2006, Bundy2006, Croton2006, Cattaneo2008}. In a similar way, the detection of an offset in the value of {\it t$_{50-50}$} between samples of group/isolated galaxies at fixed stellar mass does not necessarily imply that nurture mechanisms are at work. It could be explained by a different time of assembly of galaxies in haloes of different masses, a {\it nature} mechanism that results in a more evolved galaxy population in groups and clusters, at fixed stellar mass, in a given redshift bin \citep[see][]{Gao2005, Balogh2007}. In contrast, the trend suggested by Fig.~\ref{fig:down} indicates that the migration of galaxies from the blue cloud to the red sequence is a process more efficient/faster in groups than for isolated/field galaxies, and is therefore the signature of environmental processes at play in groups in shaping galaxy evolution. Interestingly, such mechanisms seem to become progressively more relevant moving to lower galaxy stellar masses, while they seem to be irrelevant for galaxies of higher stellar masses, at least in the redshift range we explored (see also Bolzonella et al., 2009). 
We can therefore distinguish between two different channels for the production of red galaxies, corresponding, respectively, to use a common nomenclature, to {\it nature} red galaxies and {\it nurture} red galaxies. Our results suggest that galaxies with masses $\approx 10.8$ in logarithmic solar units are already in place by $z \sim 1$ and their origin could be due primarily to so-called {\it nature}/internal mechanisms, as no strong environmental dependency is visible up to $z \sim 1$. In contrast, for masses below this value and at redshifts lower than $z\sim 1$, we witness the emergence in groups of an additional contribution of red galaxies. This is what we can call {\it nurture} red galaxies: galaxies that deviate slightly from the trend of the downsizing scenario as displayed by the global galaxy population. This nurture population is the one responsible for the earlier value of {\it t$_{50-50}$} in groups, and its importance grows as cosmic time goes by, causing the steady growth of the difference in {\it t$_{50-50}$} moving to lower galaxy masses. There are various mechanisms that occur in groups and that are more efficient for less massive galaxies, including the gradual cessation of star formation induced by gentle gas stripping and starvation by a diffuse intragroup medium, or by slow group-scale harassment \citep[see, e.g.,~][]{Larson1980, Moore1999, Gnedin2003, Roediger2005, Kawata2008}. These mechanisms could be natural candidates for explaining the trends we observe. Their increasing importance after $z\sim1$ most probably mirrors the progressive emergence of structures, as predicted by the hierarchical clustering growth scenario, where such mechanisms can effectively take place. 
\section{Summary and conclusions} \label{sect:Concl} Taking advantage of the large coverage both in redshift and in galaxy/group properties of the 10K galaxy/group catalogue, we revisited the blueing of the galaxy population in groups toward higher redshift, originally observed by Butcher and Oemler (1978), gaining some interesting new insights that can be summarized as follows. 1. We have shown that, using rest-frame B-band volume-limited samples, the group galaxy population becomes bluer as redshift increases, but maintains a systematic difference with respect to the global galaxy population, and an even larger difference with respect to the isolated galaxy population. Superimposed on this global effect, we detected additional trends as a function of both galaxy B-band rest-frame luminosity and group properties: more luminous galaxies exhibit stronger variations in $\it{F_{blue}}$~ among group, field, and isolated environments, and richer groups, or those with higher $\sigma$, show a lower $\it{F_{blue}}$. 2. The difference between the three different environments increases from high to low redshift. At the highest redshift bin explored ($z \sim 1$), there is a small but still significant difference in $\it{F_{blue}}$~ among the group, field, and isolated samples. This gradual increase in the $\it{F_{blue}}$~ difference with cosmic time is a clear signature of an environmental dependence, but not necessarily of the existence of environmental effects at work. It could be the result of an {\it ab initio} bias that favors later formation of lower-mass galaxies in lower density environments, causing the delayed and more efficient replenishing of the blue cloud in lower density environments. 3. 
Moving to mass-selected samples, a necessary step in clarifying the key mechanisms determining the relationships observed using luminosity-selected samples, allowed us to realize almost immediately that at least part of the strong trends observed when using rest-frame evolving B-band volume-limited samples is caused by the large population of lower mass, bright blue galaxies for which we miss the redder, equally low-mass counterparts. In other words, the biased view imposed by the B-band luminosity selection amplifies the findings obtained using B-band volume-limited samples. 4. Another effect has to be taken into consideration if one wants to disentangle the influence of mass and environment on galaxy colors. The existence of different mass functions in different environments (see Bolzonella et al. 2009) forces us to work in mass bins narrow enough that any color segregation cannot be attributed simply to the different mass distributions. 5. The first outcome of this careful analysis is that there is still a significant residual difference in color as a function of environment only for the lowest mass bin explored (${\it Mass}\leq 10.6 $, solar masses in logarithmic scale), while this difference progressively disappears moving to higher masses. 6. By using a $V/V_{max}$ correction, we can extend our analysis to lower masses, witnessing, in all the redshift ranges we explore, a progressive saturation of $\it{F_{blue}}$~ towards high/low values at the extremes of the mass ranges studied. At each redshift, there is a restricted range of masses for which the colors of galaxies show a visible dependence on environment, and as cosmic time increases the typical mass at which the galaxy population is equally partitioned between red and blue galaxies moves progressively to lower values. 
This pattern, consistent with the well-known downsizing pattern observed in galaxy population evolution, is further modulated by environment: galaxies located in more massive halos (groups) become red earlier in cosmic time. 7. Finally, our most interesting finding is evidence that the color transition from blue to red galaxies seems to be faster in groups as cosmic time increases. In other words, we seem to witness the slow emergence of an environmental/nurture effect on galaxy evolution, which causes the faster migration of galaxies from the blue cloud to the red sequence in groups (with respect to isolated/field galaxies), an effect that becomes more relevant moving from higher to lower galaxy stellar masses \citep[see also][for a parallel analysis using galaxy morphologies]{Kovac2009b}. 8. Our results suggest that galaxies of ${\it Mass} \approx 10.8$ solar masses in logarithmic scale are already in place by $z \sim 1$ and their origin could be due primarily to so-called {\it nature}/internal mechanisms, since no strong environmental dependency is detectable up to $z \sim 1$. 9. In contrast, for masses below this value and at redshifts lower than $z\sim 1$, we witness the emergence of what we call {\it nurture} red galaxies: galaxies that deviate slightly from the trend of the downsizing scenario displayed by the global galaxy population, and increasingly so as cosmic time progresses. There are various mechanisms that occur in groups and are more efficient for less massive galaxies (gradual cessation of star formation induced by gentle gas stripping and starvation by a diffuse intragroup medium, or by slow group-scale harassment). These mechanisms could be the natural candidates to explain the trends observed after $z\sim1$, a timing that could simply mirror that of the progressive emergence of structures where these mechanisms can effectively take place. 
The completion of zCOSMOS bright, and the subsequent availability of the \20K, will enable us to place this result, which indicates that environment starts playing an active role in shaping galaxy evolution after $z \sim 1$, on a more robust basis. \begin{acknowledgements} We acknowledge support from INAF contract PRIN-2007/1.06.10.08 and ASI grant ASI/COFIS/WP3110 I/026/07/0. \end{acknowledgements} \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
9,491
\section*{Data Availability} The data that support the findings of this study are available from the corresponding authors upon reasonable request. \section*{Supplementary Material} See supplementary material for details on the effects of finite bulk sizes, and for extended data on the results shown in Fig. 5. \begin{acknowledgments} This work was supported by AFOSR grant FA9550-16-1-0093. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
1,078
{"url":"http:\/\/alivedise.logdown.com\/posts\/92050-javascript-memorization","text":"Javascript memorization\n\nAs you know, function in javascript is also an object,\nyou could use a property of the object to keep function result.\nThis is called Memorization.\n\nToday I have a chance to utilize this skill.\n\ngetOffOrigin has to use some logic to compare the two arguments and return a string.\nThe result wouldn't change if the arguments are the same.\nTherefore, to avoid calling the function many times, the result could be stored in the function itself.\n\n\u2022 If we only have one arguments we could use it as the key to cache directly.\nBut we have multiple arguments in this case, we then use stringilized JSON to be the key.\n\n\u2022 Hence we don't need to do the same thing every time we enter this function if the arguments are used before.\n\n\u2022 Have fun with your function cache!","date":"2019-10-16 21:39:25","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.32686081528663635, \"perplexity\": 516.7980895788216}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570986670928.29\/warc\/CC-MAIN-20191016213112-20191017000612-00212.warc.gz\"}"}
null
null
Staples Interns See The Real World. And Rock It. It's late June, and summer is already in full swing. A few newly minted Staples graduates are doing actual jobs: caddying and working at restaurants. Some are taking summer courses, to get ahead for college (or make sure their acceptances are not rescinded). Many recent grads are interning. In 2014, internships are the way to get jobs after graduating from college in 2018. (Although, even then, they might need a few internships before landing a full-time, paying gig.) But these are not the first internships for the Class of '14. For a month — from mid-May until right before commencement — 94% of all Staples seniors took part in what has become one of the most important, highly valued and intriguing parts of their entire education. This year's interns were too busy working to take photos. So the images here are from years past. In 2009, Matt Takiff (above) worked at Sport Hill Farms. The Staples Senior Internship program is several years old. But this year it exploded, with 426 of the 463 class members taking part. (The ones who did not had their reasons, including academic or disciplinary ineligibility.) Forget senioritis. Instead of sitting around for the last month of school, burned out and bored out of their skulls, the Future of Our Country headed to offices, other schools, even farms, to learn about the Real World before actually entering into it. Thanks to the incredible work of program director Lee Saveliff, every intern has a site, a supervisor and a Staples staff mentor. Each intern must complete 95 verified hours of work — and each week, must write an in-depth "reflection" on the experience so far. The reflections provide great insight into the world of work — and the minds of today's teenagers. Four interns went to New York with MLB.com — the online arm of Major League Baseball. They worked on social media projects, and enjoyed devising ideas for GoPros at every different stadium. 
(For example: a "tour" of Fenway's Green Monster.) But they also had to make a presentation to top executives, including CEO Bob Bowman. One intern was amazed at the vast difference between standing up in a classroom, and a boardroom. (MLB execs were quite impressed, fortunately.) Several interns worked with the Himes for Congress campaign. (Hold your fire. Republicans had interns too. One traveled often to Hartford with State Representative Gail Lavielle.) The Himes interns slogged through mundane tasks, like stuffing envelopes. But they also learned the ins and outs of campaigning. They met the Congressman — and Governor Malloy. And they had to do something most folks older than 25 or so take for granted: talking on the phone. The interns followed up with constituents. They called likely and uncertain voters. For a generation raised on texting, that aspect of the job was "terrifying." But they did it. And their weekly reflections show their confidence in going outside comfort zones, gratitude for learning an important life skill, and pride in doing something tangible, with results that can be measured. In 2009, Carolyn Ross worked at Taylor's Floral Arts. She even arranged flowers for her own baccalaureate and graduation ceremonies. The internships spanned nearly every job imaginable. Some seniors worked in Westport schools (where teachers and — especially — young students adored them). Others worked at Wakeman Town Farm. Tauck World Discovery. Voices of September 11. Marinas. Wealth management firms. Contractors. WPKN. Country Clubs. Restaurants. CLASP Homes. Harbor Watch. The police. Norwalk Hour. Auto body shops. Discovery Museum. Terex. Jewish Home for the Elderly. Verizon. The public defender. Longshore. Priceline. Law and medical offices. The Westport-Weston Health District. Westport Arts Center. Winged Monkey. Veterinarians. The Bridgeport Bluefish. Yale University. Mitchells. 
Many internships — like this from last year at WEBE — involve something new for teenagers: interacting with the public. Interns were exposed to everything: The tedium of some jobs. Bosses who don't always explain things clearly. Commuting. (A number of interns freaked when problems at the South Norwalk bridge threw Metro-North into chaos. They instantly gained new appreciation for what their parents go through every day.) "We know our kids are hard-working, polite, creative problem-solvers," says Staples principal John Dodig — one of the internship's driving forces. "It's nice for the community to see that too." It certainly is. But that's just a side benefit. The main reason the program is such a success is seen in the nuanced reflections the interns write. The strength of their voices as they describe how much they've learned and grown in just one month. The confidence they display as they return to Staples, for one final week, to graduate. And the ease with which they go on to their next steps in life: College. Travel. The next internship.
Browse profiles of members who have joined Meet Atheist Singles and are tagged with Democrat. Meeting others who have like-minded interests is a perfect way to find things to do once you are dating. Register for a Totally Free Profile to Date Tonight! If you wish to contact any member on Meet Atheist Singles, you need to create a Totally Free Account to verify you are real. After you have verified you are real, you can start contacting members to find out if they like your profile. It's easy to find the right one. Just click to send a wink or quick message and patiently wait for a response. Once there is a mutual interest, you can schedule a meetup to find out if there happens to be a genuine connection. So what are you waiting for? Go ahead and open up a free profile today!
Ontology (ONT) — 0.61117667 USD (2020-01-14)

Average ONT/USD Rate — the average rate of ONT/USD, based on ONT rates on exchanges indexed by COR.

Computational method: the sum of analyzed exchange rates, divided by their quantity. The average ONT/USD rate is the value (price or cryptoasset rate) equal to the sum of all exchange rates, divided by their number, at any given point in time. In simple terms, it is the average value of ONT quotes, expressed in USD.

Formula: Average ONT/USD rate = (Rate from exchange 1 + rate from exchange 2 + … + rate from exchange N) / N, where N is the total number of exchanges indexed by COR Index.

0.61117667 USD (+0.00557667, +0.92%); 1 USD = 1.63618812 ONT

Emission — the amount and percentage of released, mined or created cryptoassets in circulation. Calculations are based on data listed in available cryptoasset documents, public nodes, and white papers.

637,351,169 issued (64%)
• Form: Cryptocurrency
• Group: Digital currencies
• Unit: 1 ONT
• Total emission: 1,000,000,000 ONT

## Market Dynamics

Monitoring of ONT/USD and ONT/USDT exchange rates on the main exchanges. Display items for rate-change ranges: High — maximum rate for the chosen period; Low — minimum rate; Open — opening rate; Close — closing rate. Periods: 5m, 15m, 30m, 1h, 4h, 1d, 1w, 1mo, max.

## Key ONT Exchange Data

A range of data from crypto exchanges, including rates and information about the price and volume of ONT trading, calculated from data released by the main exchanges.

- Average weighted ONT/USD rate — analyzes ONT/USD exchange rates, factoring in the volume of performed transactions and the rates those transactions are performed at. Formula: Average weighted rate = [(Rate t1 × volume t1) + (Rate t2 × volume t2) + … + (Rate tN × volume tN)] / (volume t1 + volume t2 + … + volume tN). In other words, it shows the prices at which the bulk of transactions were performed on exchanges.
- Volatility — the percent change in value in the analyzed period. Formula: Volatility = (% change in the average ONT/USD rate at t1) − (% change at t0). In other words, volatility displays the dynamics of change in ONT exchange rates.
- ONT transaction volume — the sum of ONT/cryptocurrency transactions in the analyzed period on the main exchanges: V = V(exchange 1) + V(exchange 2) + … + V(exchange N), i.e., the amount of ONT that changed owners in the period being analyzed.
- ONT refined transaction volume — the same sum, restricted to ONT/USD and ONT/USDT pairs.
- ONT/USD transaction volume — the volume of ONT/USD transactions for the analyzed period, i.e., the sum of USD used in completing transactions, expressed in USD.
- ONT/USD refined transaction volume — the volume of ONT/USD and ONT/USDT transactions for the analyzed period, expressed in USD.
- ONT/USD market capitalization — the total cost of emitted ONT, based on the current average market price. Formula: Market capitalization = (Average ONT/USD rate) × (Emission). In other words, the value of the entire ONT market, based on recent ONT/USD exchanges.
- ONT share in total cryptoasset market capitalization — Formula: ONT share = (ONT/USD market capitalization) / (sum of all cryptoasset market capitalizations), i.e., the part of the market taken up by ONT compared to all other cryptoassets.
- ONT operations share — percentage of ONT operations compared to other cryptoassets: (ONT/USD operations × 100%) / (sum of operations for all cryptoassets).
- Percentage of ONT exchange operations compared to ONT emission — (ONT/USD transaction volume × 100%) / (ONT emission); shows what part of total emitted ONT was bought and sold on exchanges in the period being analyzed.

## The popularity in the regions

Based on search requests in Google over the last 12 months (Google Trends), by world region.

## Ontology Markets

# | Name     | Pair     | Volume (24h) | Price    | Updated
1 | binance  | ONT/USDT | 0 ONT        | $0.61    | 2020-01-14 00:39:00
2 | okex     | ONT/USDT | 0 ONT        | Old data | 2019-11-11 10:15:00
3 | coinex   | ONT/USDT | 0 ONT        | $0.61    | 2020-01-14 00:39:00
4 | kucoin   | ONT/BTC  | 0 ONT        | $0.61    | 2020-01-14 00:39:00
5 | gateio   | ONT/USDT | 0 ONT        | $0.61    | 2020-01-14 00:39:00
6 | huobipro | ONT/USDT | 0 ONT        | $0.61    | 2020-01-14 00:39:00
7 | liquid   | ONT/USD  | 0 ONT        | Old data | 2020-01-14 00:34:00
8 | coinegg  | ONT/BTC  | 0 ONT        | Old data | 2019-05-07 16:58:00
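The arithmetic behind the simple-average, volume-weighted, and market-capitalization figures above can be sketched in a few lines. This is a minimal illustration of the formulas as stated; the function names are illustrative and not part of the COR Index service:

```python
def average_rate(rates):
    """Simple average: sum of exchange rates divided by their count."""
    return sum(rates) / len(rates)

def weighted_average_rate(rates, volumes):
    """Volume-weighted average: each rate weighted by its transaction volume."""
    return sum(r * v for r, v in zip(rates, volumes)) / sum(volumes)

def market_cap(avg_rate, emission):
    """Market capitalization = average rate x circulating emission."""
    return avg_rate * emission
```

For the figures on this page, `market_cap(0.61117667, 637_351_169)` gives roughly 3.9 × 10^8 USD, and the weighted average shifts toward whichever exchange carries most of the volume.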
# A circle graph is not a function?

I'm a little confused by the rule: if you draw a vertical line that intersects the graph at more than one point, then it is not a function. Because then a circle like $y^2 + x^2 = 1$ is not a function? And indeed, if I rewrite it as $f(x) = \sqrt{1 - x^2}$, then Wolfram Alpha doesn't draw a circle. I guess I'm missing the intuition as to why this is, though.

– Now you got several similar answers — I hope it helps! – AD. Oct 27 '11 at 7:32

**Answer.** The definition of a function is so important. In addition to the other answers, a picture (taken from "What is a function") may help: the left-hand side is your $X$ and the right-hand side is the value $Y$. [image omitted]

**Answer.** A function is a rule that assigns, uniquely, to a member of the domain set a member of the image set. The key word is "uniquely". So if you assign, say, both 2 and −2 to the number 1, then you have a rule, but not a function. That is the logic behind the vertical line test: if you draw a vertical line and it intersects the graph in two distinct points, then both of these points have been assigned to the point where the vertical line crosses the $x$-axis. An example of this is the circle. However, a semicircle is a legitimate function: the upper half is the positive square root ($y = +\sqrt{1-x^2}$) and the bottom half is the negative square root ($y = -\sqrt{1-x^2}$).

**Answer.** Functions need to be well-defined as part of their definition, so for a given input there can only be one output. $f(x,y) = x^2 + y^2 - 1$ is a function of two variables, and the set of points at which this function equals $0$ is the unit circle. However, writing $y^2 + x^2 = 1$ as a function of $x$ alone cannot be done, as $x = \dfrac12$ has two solutions ($y = \pm\sqrt{\dfrac34}$).

**Answer.** If you want a function that "draws" a circle with radius $r$ and center $P = (x_0, y_0)$ in the Cartesian plane, you can use the function $f : [0, 2\pi] \rightarrow \mathbb{R} \times \mathbb{R}$ defined by $$f(\varphi) = (x_0 + r \cos \varphi,\; y_0 + r \sin \varphi).$$ But, of course, this is not a function from $\mathbb{R}$ to $\mathbb{R}$. Also, you can define a curve in the plane by means of an equation in two variables $x$ and $y$. If you have a (continuous) function $f : A \subseteq \mathbb{R} \rightarrow \mathbb{R}$, you can get an equation $y = f(x)$ from it, which defines a curve. But you cannot always transform an equation containing two variables into an equivalent equation $y = f(x)$. The equation $x^2 + y^2 = r^2$, $r \in \mathbb{R}$, is an example of this fact.

– +1 for "…from $\mathbb{R}$ to $\mathbb{R}$"; the OP's equation is indeed a function, just one from reals to pairs of reals, like your parametric version. – Jon Purdy Oct 27 '11 at 14:13
– I would not say that an equation is a function: an equation can define a function. Both an equation and a function can define a curve in the plane, but some equations do not have an associated function. – Giorgio Oct 27 '11 at 15:00

**Answer.** A function $f(x_1, \ldots, x_n)$ has the property that for one set of values $(v_1, \ldots, v_n)$ there is at most one result. Compare: $f(0) = 1$, but there are two values of $y$ such that $y^2 + x^2 = 1$ with $x = 0$, namely $\{1, -1\}$.

**Answer.** The standard definition of a function $f$ is that it takes one value $f(x)$ for each $x$ (where it is defined). In particular, the square root is a single-valued function: for a real number $x$, $\sqrt{x^2} = |x|$. In your example, when solving for $y$ in the circle equation $y^2 + x^2 = 1$ there are two possibilities, $$y = \sqrt{1-x^2} \qquad \text{or} \qquad y = -\sqrt{1-x^2},$$ which are two different functions, and the union of their graphs is the circle.

**Answer.** $y^2 + x^2 = 1$ is an implicit definition of $y$. An equivalent explicit definition is $y = \pm\sqrt{1-x^2}$, with the condition $x \in [-1, 1]$.

– Yes, but $y = \pm\sqrt{1-x^2}$ is not a function. – Giorgio Oct 27 '11 at 15:03
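The vertical line test described in these answers can be checked mechanically: collect every $y$ that the relation $x^2 + y^2 = r^2$ assigns to a given $x$ and count them. A minimal sketch (the helper name is illustrative):

```python
import math

def circle_y_values(x, r=1.0):
    """All y satisfying x^2 + y^2 = r^2: zero, one, or two values."""
    if abs(x) > r:
        return []              # vertical line misses the circle entirely
    y = math.sqrt(r * r - x * x)
    # At x = +-r the line is tangent (one y); otherwise it hits twice.
    return [y] if y == 0.0 else [-y, y]
```

At any interior $x$ the list has two entries, so the relation fails the vertical line test and is not the graph of a function of $x$; restricting to one sign of the square root gives the semicircle functions from the answers above.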
# LAMMPS pair_style granular damping velocity and mass_velocity unable to simulate polydisperse granular systems

Dear LAMMPS users and developers,

I have a query relating to damping while simulating polydisperse (in size) granular systems.

In `pair_style granular`, we have 4 types of damping: velocity, mass_velocity, viscoelastic, tsuji (pair_style granular command — LAMMPS documentation). I will concentrate on the first two (velocity, mass_velocity) in this query.

As per the documentation, the user has to specify \eta_{n0}.

In the case of damping velocity, the \eta_{n0} specified by the user should be in mass/time units, and the damping force is evaluated as
F_{damp} = \eta_{n0} \times v
(v = normal or tangential velocity, as the case may be).

In the case of damping mass_velocity, the \eta_{n0} specified by the user should be in 1/time units, and the damping force is evaluated as
F_{damp} = \eta_{n0} \times m_{eff} \times v,
where m_{eff} = m_i m_j / (m_i + m_j) is the effective mass of the colliding particles.

(See lines 363-374 here: lammps/pair_granular.cpp at a4ceda9706e56920f9168ef5987c87d7a327244d · lammps/lammps · GitHub.)

Evaluation of \eta_{n0} in both of the above cases is as follows. Following Silbert (2001):

if we are to use damping velocity, then \eta_{n0} = \frac{2 \ln(e_n) \sqrt{m_{eff} k}}{\sqrt{\pi^2 + \ln^2(e_n)}} (units: mass/time) [Eqn. 1]

and, if we are using damping mass_velocity, then \eta_{n0} = \frac{2 \ln(e_n) \sqrt{k/m_{eff}}}{\sqrt{\pi^2 + \ln^2(e_n)}} (units: 1/time) [Eqn. 2]

(Some more details about [Eqn. 1, 2] are given towards the end, in the footnote.)

With the above expressions for \eta_{n0}, we have \eta_{n0} in the units required/expected by LAMMPS as given in the documentation (see also: [Update documentation] pair_style granular command damping mass velocity · Issue #3016 · lammps/lammps · GitHub).

My concern is that the term m_{eff} appears in the expression used to evaluate \eta_{n0}. Hence, these damping styles will work only for a monodisperse system.

In a polydisperse system (e.g. a 10000-particle system whose diameters are uniformly distributed in a range, say 0.8 to 1.2), the m_{eff} of the colliding particles is not known a priori before the simulation starts. Particles of any two sizes can possibly collide. In such a scenario, we do not know m_{eff} of the colliding particles and, as a consequence, the corresponding \eta_{n0} cannot be evaluated and specified in the input script by the user.

Kindly confirm if I am correct so far. Please correct me if I am missing something here. I'd be very grateful.

If my understanding detailed above is indeed correct, the code needs to be modified to take care of polydisperse collisions. This can be achieved in the following manner:

1. The m_{eff} term should be pulled out of \eta_{n0} in 'damping velocity' and/or 'damping mass_velocity'. In such a case
   F_{damp} = \eta_{n0} \times \sqrt{m_{eff}} \times v,
   where the user-specified \eta_{n0} = \frac{2 \ln(e_n) \sqrt{k}}{\sqrt{\pi^2 + \ln^2(e_n)}} (units: mass^{0.5}/time).

2. Or an entirely new damping style can be included, if we wish to ensure that the old input scripts of other users are not affected by these changes in newer versions of LAMMPS. In that case, we can in fact go one step further and ask the user to just specify the normal (and tangential) coefficients of restitution (e_n and e_t), since this is more physical. Then F_{damp} can be internally calculated as
   F_{damp} = \frac{2 \ln(e_n) \sqrt{m_{eff} k}}{\sqrt{\pi^2 + \ln^2(e_n)}} \times v.

We have to ensure that the above changes are made for:

1. both the normal and tangential damping cases for particle-particle collisions (src/GRANULAR/pair_granular.cpp);
2. both the normal and tangential damping cases for particle-wall collisions (src/GRANULAR/fix_wall_gran.cpp).

Footnote:

Q: Why should we believe your expressions for \eta_{n0} [Eqn. 1, 2]?
A: I have obtained them by solving the linear spring-damper-mass system (Silbert, 2001).

Furthermore, I have checked and confirmed the LAMMPS behaviour by running a few test simulations. I considered the case of a two-particle collision in which one particle is initially at rest (v = 0) and the other impacts it with velocity v = 1. We can obtain the final velocities of the particles and evaluate e = (v2 - v1)_{final} / (v2 - v1)_{initial} = (v2 - v1)_{final} / 1 = v2 - v1.

Case 1: damping velocity.
Choose any kn value. Choose any e value. Choose the diameters of the colliding particles. Choose the density of the particles. Calculate the masses of the two particles and m_{eff}. Calculate the \eta_{n0} value as per [Eqn. 1]. Specify this \eta_{n0} value in the input script. Run the simulation and obtain the final velocities of the particles. Calculate e = (v2 - v1)_{final}. This should be the same as our chosen e, thus confirming the correctness of [Eqn. 1] for evaluating \eta_{n0} for damping velocity.

The same can be carried out with damping mass_velocity: comment/uncomment the appropriate line in the input script, and use [Eqn. 2] to evaluate \eta_{n0} for this case.

I am copying below three files (new user — can't upload files):

1. in.dampingcheck.txt: input file
2. grains.txt: read_data file
3. etan0_calculator.py: to calculate the \eta_{n0} value

Input script:

```
units           si
dimension       2
atom_style      sphere
boundary        p p p
newton off
comm_modify     vel yes
atom_modify map yes

neighbor        0.2 bin
neigh_modify    delay 0

variable knpp equal 100000
variable ktpp equal (2/7)*${knpp}
variable xgammat equal 1     # tgt_damp = xgammat * normal_damp
variable mupp equal 0.5      # coeff. of friction between particle-particle
# gammanpp = eta_n0; can use etan0_calculator.py to get this value.

pair_style granular
# comment/uncomment one of the two pair_coeff commands; use the appropriate gammanpp value.
#variable gammanpp equal 64.025      # 64.025 --> en=0.5, damping velocity
#pair_coeff * * hooke ${knpp} ${gammanpp} tangential linear_history ${ktpp} ${xgammat} ${mupp} damping velocity #limit_damping

variable gammanpp equal 290.0136     # 290.0136 --> en=0.5, damping mass_velocity
pair_coeff * * hooke ${knpp} ${gammanpp} tangential linear_history ${ktpp} ${xgammat} ${mupp} damping mass_velocity #limit_damping

fix 1 all nve/sphere disc
group id1 id 1
group id2 id 2
group id3 id 3
compute 1 id1 property/atom vy
compute v1 id1 reduce sum c_1
compute 2 id2 property/atom vy
compute v2 id2 reduce sum c_2
thermo 10000
thermo_style custom step time c_v1 c_v2
variable dumpfreq equal 1000000
dump depositions all custom ${dumpfreq} test2.dump.* id x y z vx vy vz type diameter radius fx fy fz mass omegaz

fix 2dfix all enforce2d
timestep 0.000001

run 5000000

print "Simulation Complete."
```

read_data file:

```
2 atoms
10 atom types
0.0 10.0 xlo xhi
0.0 20.0 ylo yhi
-0.5 0.5 zlo zhi

Atoms

1 6 1.0 1.0 3.0 11.2 0.0
2 6 0.9 1.0 3.0 9.1 0.0

Velocities

1 -0.0 -1.0 0.0 0 0 0
2 0.0 0.0 -0.0 0 0 0
```

Script to calculate eta_n0 values:

```python
# etan0_calculator.py
import numpy as np
pi = np.pi

dia1 = 1.0
dia2 = 0.9
density = 1.0
en = 0.5        # normal coeff. of restitution
kn = 10**5      # Hookean spring

m1 = density * (4/3) * pi * ((dia1/2)**3)
m2 = density * (4/3) * pi * ((dia2/2)**3)
meff = m1*m2/(m1+m2)

eta_n0_vel = -(2 * np.log(en) * np.sqrt(meff*kn)) / np.sqrt(pi**2 + np.log(en)**2)
eta_n0_massvel = -(2 * np.log(en) * np.sqrt(kn/meff)) / np.sqrt(pi**2 + np.log(en)**2)

print('eta_n0_vel = ', eta_n0_vel)
print('eta_n0_massvel = ', eta_n0_massvel)
```

Thank you.

Vikas K
IIT Kanpur
India.

---

Hey Vikas,

May I know what is the update on this discussion? Thank you.

Regards,
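The restitution check described in the footnote can also be reproduced without LAMMPS by integrating the reduced one-body contact problem m_eff x'' = -k_n x - eta_{n0} x' directly: if [Eqn. 1] is right, the outgoing/incoming relative speed ratio should match the chosen e_n. This is a minimal sketch under the linear spring-dashpot model, not LAMMPS code; the function name is illustrative, and the default parameters match the two-sphere example above:

```python
import math

def measured_restitution(en=0.5, kn=1.0e5, d1=1.0, d2=0.9, rho=1.0,
                         v0=1.0, dt=1.0e-7):
    # Masses of the two spheres and their effective mass
    m1 = rho * (4 / 3) * math.pi * (d1 / 2) ** 3
    m2 = rho * (4 / 3) * math.pi * (d2 / 2) ** 3
    meff = m1 * m2 / (m1 + m2)
    # Eqn. 1: eta_n0 in mass/time units (the "damping velocity" convention)
    eta = (-2 * math.log(en) * math.sqrt(meff * kn)
           / math.sqrt(math.pi ** 2 + math.log(en) ** 2))
    # Reduced contact problem: meff * x'' = -kn*x - eta*x', x = overlap
    x, v = 0.0, v0          # enter contact with relative speed v0
    x += v * dt
    while x > 0.0:          # integrate until the particles separate
        a = (-kn * x - eta * v) / meff
        v += a * dt         # semi-implicit Euler step
        x += v * dt
    return abs(v) / v0      # outgoing / incoming relative speed
```

With the default parameters this gives eta_n0 close to the 64.025 quoted in the input script comments, and the measured ratio comes out close to the chosen en = 0.5, consistent with the footnote's claim.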
\section{Introduction} The Cox ring ${\rm Cox}(X)$ of an algebraic variety $X$ captures the geometry of the variety and all the line bundles on it~\cite{ADHL15}. This ring is also known as the universal torsor in arithmetic geometry~\cite{CTS76,CTS77}. The Cox ring construction generalizes the classic description of toric varieties as quotients of (big open subsets of) affine spaces~\cite{Cox95}. Whenever the $\mathbb{C}$-algebra ${\rm Cox}(X)$ is finitely generated, it controls all birational models of $X$ via GIT~\cite{HK00}. In this case, we say that $X$ is a {\em Mori dream space}, since the variety $X$ behaves optimally with respect to the minimal model program (also known as the Mori program). The use of Cox rings to study varieties has become a standard technique in algebraic geometry. An explicit presentation of the Cox ring in terms of generators and relations often illuminates the geometry of the variety. In this direction, there are several results on weak del Pezzo surfaces~\cite{BP02,HT02}. Many results about Cox rings have been obtained for K3 surfaces~\cite{AHL10,ACL21}. In higher dimensions, the Cox rings of some Fano manifolds have been described explicitly~\cite{DHHKL15,HKL16,HLM19}. The Cox rings of certain moduli spaces are considered in~\cite{Cas09,MR17,BM17}. The study of Cox rings has also become a central topic in $\mathbb{T}$-varieties~\cite{AH06,AIPSV12}, especially in the case of {\em complexity one}, i.e., $n$-dimensional algebraic varieties with an effective action of an $(n-1)$-dimensional torus~\cite{HaSu10, AP12, BHHN16}. In this case, the Cox ring can be described combinatorially. More generally, horospherical varieties can be described using Cox rings~\cite{LT17,Vez20}. In many cases, computations of intersection theory can be carried out in the Cox ring of an algebraic variety. The most general setting in which Cox rings are known to be well-defined is that of algebraic stacks~\cite{HM15, HMT20}.
In the present work, we will mostly deal with the class of integral, noetherian, normal schemes~\cite[Sec 2.3]{HMT20}. We refer the reader to~\cite{LV09,ADHL15} for a systematic study of Cox rings. Cox rings have also been used to study singularities. In this case, the definition of the Cox ring is often applied to a certain resolution of singularities. In~\cite{FGL11}, the authors study the Cox ring of the minimal resolution of a surface Du Val singularity. In~\cite{Don16}, the author provides two different descriptions of the Cox ring of the minimal resolution of a quotient singularity. Further results have been obtained towards the computation of the Cox rings of some minimal (or crepant) resolutions of singularities~\cite{DG17,DK17,Gra18}. In~\cite{ABHW18}, a slightly different approach to studying Kawamata log terminal (klt) singularities via Cox rings is proposed by the authors. Instead of looking at the Cox ring of a resolution of singularities, the authors use the definition of the Cox ring on the germ itself. Then, if possible, this process is iterated to simplify the singularity (possibly increasing the dimension). This construction generalizes the presentation of surface klt singularities as quotients of factorial canonical singularities by solvable finite groups. In~\cite{ABHW18}, the iteration of Cox rings is performed for singularities of complexity one. The iteration has at most four steps and can be read off directly from the first Cox ring. Furthermore, the last variety in this sequence, the so-called \emph{master Cox ring}, is factorial and can be listed explicitly. In~\cite{HW18}, the authors characterize all varieties with a torus action of complexity one that admit a finite iteration of Cox rings. In~\cite{Gag19}, it is shown that for spherical varieties, the iteration of Cox rings has at most two steps.
More generally, in the works~\cite{Vez20,Vez20a}, Vezier considers the iteration of Cox rings for $G$-varieties of complexity one and determines bounds on the number of iterations. In order to define an iteration of Cox rings, we must check that (the spectrum of) the Cox ring ${\rm Cox}(X)$ of our variety $X$ is itself a Mori dream space. It is known that Fano type varieties are a special class of Mori dream spaces~\cite{BCHM10}. Furthermore, the Cox ring of a Fano type variety is an affine Gorenstein canonical quasi-cone~\cite{GOST15, Bro13, Bra19}. In particular, it is an affine model of a klt singularity. A klt singularity in turn is a local version of a Fano type variety. Indeed, a klt singularity is a relative Mori dream space over itself, i.e., when considering the identity as the structure morphism. Thus, it is natural to iterate the Cox construction for Fano type varieties, or more generally, for klt singularities. The first author made this observation in~\cite{Bra19}, where he proves the existence and termination of the iteration of Cox rings for Fano type varieties and klt quasi-cones. \begin{introthm} [Cf.~\cite{Bra19}] \label{introthm:braun} Let $X$ be a Fano type variety. Then, for each $k\geq 0$ the $k$-th iteration of Cox rings ${\rm Cox}^{(k)}(X)$ exists. Furthermore, the iteration stabilizes for $k$ large enough. \end{introthm} We recall that the iteration of Cox rings can lead to different outcomes: it could stop, with either a factorial master Cox ring or an affine variety which is not a Mori dream space, or it could lead to an infinite sequence of Cox rings. In this article, we recover Theorem~\ref{introthm:braun} for klt singularities in the general setting. This means that the iteration of Cox rings always exists for klt singularities and terminates after finitely many iterations. We also generalize the concept of Cox rings to log pairs, leading to our first result.
\begin{introthm}\label{introthm2-existence-iteration-local} Let $(X,\Delta;x)$ be a Kawamata log terminal singularity. Then, for each $k\geq 0$ the $k$-th iteration of Cox rings ${\rm Cox}^{(k)}(X,\Delta;x)$ exists. Furthermore, the iteration stabilizes for $k$ large enough. \end{introthm} The main tool used to prove the above theorem is the finiteness of the regional fundamental group of a klt singularity. Two natural questions arise from the two theorems above. First, we can ask how many times we need to iterate the Cox construction before it stabilizes. A natural way to study the iteration of Cox rings is to quotient (in each step) by the connected component of the solvable group acting on each model ${\rm Cox}^{(k)}(X,\Delta;x)$. In this way, we obtain a sequence of finite solvable Galois covers of the starting singularity (or Fano variety). This method was initiated in~\cite{Bra19}. Using the Jordan property for the regional fundamental group of klt singularities~\cite{BFMS20}, we prove that the number of iterations is bounded from above by a constant which only depends on the dimension. This means that the iteration of Cox rings is controlled by the topology of the variety (or singularity). The following theorem has a projective and a local version. For simplicity of the exposition, we just state the local version in the introduction. \begin{introthm}\label{introthm3-bounded-iteration-local}\label{introthm:bounded-iteration} There exists a constant $c(n)$, only depending on $n$, satisfying the following. Let $(X,\Delta;x)$ be an $n$-dimensional Kawamata log terminal singularity. Then, the $k$-th iteration of Cox rings ${\rm Cox}^{(k)}(X,\Delta;x)$ stabilizes for $k\geq c(n)$. \end{introthm} Secondly, we can ask whether it is possible to control the dimension of the iteration of Cox rings. For instance, we can ask if there is any invariant of the singularity which gives an upper bound for the dimension of the master Cox ring.
Note that, in general, the iteration of Cox rings could have arbitrarily large dimension. Indeed, the spectrum of the Cox ring of an affine toric variety of dimension $n$ whose class group has rank $\rho$ is isomorphic to the affine space $\mathbb{A}^{n+\rho}$. On the other hand, even if the class group rank of the singularity is bounded, it could happen that the Cox ring itself (or any of the higher iterated Cox rings) has unbounded class group rank. Thus, in general, the class group rank of the initial germ does not control the dimension of the master Cox ring. This leads to our third result, which answers the above question in terms of the second homotopy group of the smooth locus. \begin{introthm} \label{introthm4-bounded-dim-it-local} Let $n$ and $o$ be positive integers. Let $(X,\Delta;x)$ be an $n$-dimensional Kawamata log terminal singularity. Assume that $\pi_2^{\rm reg}(X,\Delta;x) \otimes \mathbb{Q}$ has rank $o$. Then, the master Cox ring of $(X,\Delta;x)$ has dimension at most $n+o$. \end{introthm} We will prove Theorem~\ref{introthm2-existence-iteration-local} in two different settings, for two different definitions of the iteration of Cox rings. We will define the iteration of Cox rings for the (Zariski) local ring and the Henselization of the local ring (i.e., the local ring in the \'etale topology) of a Kawamata log terminal singularity. The first one will be called the {\em affine iteration}, while the second will be called the {\em Henselian iteration}. The advantage of the affine iteration is that the outcome of the iteration is an affine klt variety with a distinguished point. Thus, techniques of affine geometry can be applied to the master Cox ring in this case. On the other hand, the Henselian iteration captures the local topology of the singularity. This is the main property that we will use for our next theorem. We prove that klt singularities admit factorial canonical simply connected covers.
This cover can be understood as a cover that encompasses all the good properties of the universal cover and the iteration of Cox rings. \begin{introthm} \label{introthm-5-existence-scf-cover} Let $(X,\Delta;x)$ be a Kawamata log terminal singularity. Let $X^h$ be the spectrum of the Henselization of the local ring of $X$ at $x$. There exists a Henselian local ring $R_Y$ so that $Y={\rm Spec}(R_Y)$ satisfies: \begin{enumerate} \item $Y$ is canonical and factorial, \item $\pi_1^{\rm reg}(Y,y)$ is trivial, \item $Y$ admits the action of a reductive group $G$, and \item we have an isomorphism $Y/G\cong X^h$. \end{enumerate} Furthermore, $G$ is an extension of $\pi_1^{\rm reg}(X,\Delta;x)$ by a solvable reductive group. \end{introthm} Throughout this article, reductive groups are not assumed to be connected. In particular, a solvable reductive group is an extension of a finite solvable group by a torus. We call the germ $(Y,y)$ constructed in Theorem~\ref{introthm-5-existence-scf-cover} the {\em simply connected factorial canonical cover} of the klt singularity, or {\em scfc} cover for short. Note that the name of this cover is idiosyncratic since the condition on the regional fundamental group is stronger than being simply connected. However, in the context of singularities, it is natural to consider the fundamental group of the smooth locus instead of the fundamental group of the germ itself. The scfc cover is a generalization of both the universal cover and the iteration of Cox rings of a singularity. Furthermore, it dominates both of the aforementioned covers. Our next result says that the scfc cover of a Kawamata log terminal singularity dominates any sequence of pointed abelian covers and pointed finite covers. \begin{introthm} \label{introthm-6-univ-scf-cover} Let $(X,\Delta;x)$ be a Kawamata log terminal singularity. Let $X^h$ be the spectrum of the Henselization of the local ring of $X$ at $x$. Let $(Y,y)$ be the scfc cover of $(X,\Delta;x)$.
Let \[ (X^h,x) \leftarrow (X_1,x_1) \leftarrow (X_2,x_2) \leftarrow \dots \leftarrow (X_n,x_n) \] be a sequence of pointed finite covers and pointed abelian covers. Let $X_n^h$ be the spectrum of the Henselization of the local ring of $X_n$ at $x_n$. Then, there is a quotient morphism $Y\rightarrow X_n^h$. \end{introthm} In view of the above theorem, the scfc cover of a Kawamata log terminal singularity can be regarded as the best singularity that can be obtained from $(X,\Delta;x)$ by taking sequences of finite covers and abelian covers or, more generally, by taking solvable-finite covers, i.e., covers by finite extensions of solvable reductive groups. So far, we have four different covers of klt singularities (see Appendix~\ref{appendix}): the universal cover, the Cox ring, the iteration of Cox rings, and the scfc cover. The universal cover of a klt singularity is smooth if and only if the singularity is the quotient of $\mathbb{C}^n$ by a finite group acting linearly. Furthermore, we know that the Cox ring of a klt singularity is smooth if and only if the singularity is formally toric. The following theorem characterizes when the iteration of Cox rings of a klt singularity is smooth. \begin{introthm} \label{introthm7-smooth-it} Let $(X,\Delta;x)$ be a klt singularity. Then, the following statements are equivalent: \begin{enumerate} \item The spectrum of the iteration of Cox rings ${\rm Cox}^{\rm it}(X,\Delta;x)$ is smooth, and \item $(X,\Delta;x)$ is a finite quasi-\'etale solvable quotient of a toric singularity. \end{enumerate} \end{introthm} The following theorem characterizes when the scfc cover of a klt singularity is smooth. \begin{introthm} \label{introthm8-smooth-scfc} Let $(X,\Delta;x)$ be a klt singularity. Then, the following statements are equivalent: \begin{enumerate} \item The simply connected factorial canonical cover of $(X,\Delta;x)$ is smooth, and \item $(X,\Delta;x)$ is a finite quasi-\'etale quotient of a toric singularity.
\end{enumerate} \end{introthm} In Appendix~\ref{appendix}, we show a diagram with all the natural morphisms among the covers considered in this article. In this direction, it is also natural to compare the iteration of Cox rings with the scfc cover. We prove that they coincide if and only if the regional fundamental group of the klt singularity is solvable. \begin{introthm} \label{introthm9-equal-it-scfc} Let $(X,\Delta;x)$ be a klt singularity. Then, the following are equivalent: \begin{enumerate} \item The spectrum of the iteration of Cox rings coincides with the simply connected factorial canonical cover, and \item the regional fundamental group $\pi_1^{\rm reg}(X,\Delta;x)$ is solvable. \end{enumerate} \end{introthm} All the theorems in this article are also proved for Fano type varieties. In many cases, we also prove the statements for Fano type morphisms. As mentioned above, the boundedness of iterations is a consequence of the Jordan property for the regional fundamental group of klt singularities~\cite{BFMS20}. To generalize to the relative setting, we will need the following relative version of the Jordan property. \begin{introthm} \label{introthm10-jordan-relative} Let $n$ be a positive integer. There exists a constant $c(n)$, only depending on $n$, satisfying the following. Let $\phi\colon X \rightarrow Z$ be a projective contraction so that $X$ has dimension $n$. Let $(X,\Delta)$ be a log pair of Fano type over $Z$. Let $z\in Z$ be a closed point. Then, the fundamental group $\pi_1^{\rm reg}(X/Z,\Delta;z)$ is finite. Furthermore, there exists a normal abelian subgroup $A\leqslant \pi_1^{\rm reg}(X/Z,\Delta;z)$ of rank at most $n$ and index at most $c(n)$. \end{introthm} Here, the group $\pi_1^{\rm reg}(X/Z,\Delta;z)$ consists of loops over a small punctured analytic neighborhood of the closed point $z\in Z$. In this direction, we also prove an enhanced version of the Jordan property for klt $\mathbb{T}$-singularities.
Recall that the complexity of a $\mathbb{T}$-variety is the dimension of the variety minus the dimension of the torus acting on it. In this direction, we prove the following theorem. \begin{introthm}\label{introthm11-jordan-t-var} Let $r$ be a positive integer. There exists a constant $c(r)$, only depending on $r$, satisfying the following. Let $(X,\Delta;x)$ be an $n$-dimensional klt $\mathbb{T}$-singularity of complexity $r$. Then, there exists a normal abelian subgroup $A\leqslant \pi_1^{\rm reg}(X,\Delta;x)$ of rank at most $n$ and index at most $c(r)$. \end{introthm} In particular, for complexity one klt $\mathbb{T}$-singularities, the quotient of the regional fundamental group by its normal abelian subgroup is bounded by a constant which is independent of the dimension. As a consequence of Theorem~\ref{introthm11-jordan-t-var} and the proof of Theorem~\ref{introthm3-bounded-iteration-local}, we conclude the following statement about the iteration of Cox rings of klt $\mathbb{T}$-singularities. \begin{introthm}\label{introthm12-it-t-var} Let $r$ be a positive integer. There exists a constant $c(r)$, only depending on $r$, satisfying the following. Let $(X,\Delta;x)$ be a klt $\mathbb{T}$-singularity of complexity $r$. Then, the $k$-th iteration of Cox rings ${\rm Cox}^{(k)}(X,\Delta;x)$ stabilizes for $k\geq c(r)$. \end{introthm} Note that the constant $c(r)$ only depends on the complexity and not on the dimension of the germ. In Subsection~\ref{subsec:compl-one}, we conclude the article with an extensive study of the regional fundamental group, the iteration of Cox rings, and the scfc covers of klt $\mathbb{T}$-singularities of complexity one. The iteration of Cox rings of these singularities has already been considered in the works~\cite{ABHW18, HW18}. The present article gives a good understanding of the solvable-finite covers of klt singularities and Fano type varieties. It is natural to try to extend the above results to general reductive groups.
However, to do so, a better understanding of semi-simple covers of Fano varieties is required. It is also interesting to consider the opposite question: whether the quotient of an affine klt singularity by a reductive group is of klt type. The authors will settle this question in a forthcoming article. \subsection*{Structure of the paper} The structure of the paper is as follows: in Section~\ref{sec:prel}, we give some preliminaries about Cox rings, graded-local rings, and the minimal model program. In Section~\ref{sec:gen-cox}, we define Cox rings in a variety of settings: for morphisms of log pairs, over local rings, and over Henselian rings. In Section~\ref{sec:bounded}, we prove the existence and boundedness of the iteration of Cox rings for klt singularities. In Section~\ref{sec:scfc}, we prove the existence of the simply connected factorial canonical cover of a klt singularity. In Section~\ref{sec:smoothit}, we give a characterization of Fano type varieties with a smooth iteration of Cox rings and Fano type varieties with a smooth scfc cover. Finally, in Section~\ref{sec:ex}, we give several examples, including a complete classification of the iteration of Cox rings of klt complexity one $\mathbb{T}$-singularities. \subsection*{Acknowledgements} The authors would like to thank Karl Schwede, Stefano Filipazzi, Christopher Hacon, Burt Totaro, and J\'anos Koll\'ar for many useful comments. \section{Preliminaries}\label{sec:prel} Throughout this article, we work over the field of complex numbers $\mathbb{C}$. The rank of a finite group is the least number of generators. As usual, we may denote by $1$ (resp. $0$) the trivial multiplicative (resp. additive) group. In this section, we collect some preliminary results and definitions. In Subsection~\ref{subsec:cox-ring}, we recall the concept of Cox rings and Mori dream spaces. In Subsection~\ref{subsec:grocal-rings}, we prove some properties about the class groups of gr-local rings.
In Subsection~\ref{subsec:gr-Henselian-rings}, we recall the concept of gr-Henselian rings. Then, in Subsection~\ref{subsec:shvs-gr-local}, we define sheaves of gr-local rings. The Cox sheaves considered in this article will be sheaves of gr-local rings. In Subsection~\ref{subsec:covers-grocal-rings}, we bring together the concepts of regional fundamental groups and Cox rings. Finally, in Subsection~\ref{subsec:mmp}, we recollect some notions of singularities of the minimal model program. \subsection{Cox rings and Mori dream spaces}\label{subsec:cox-ring} In this subsection, we recall the concept of Cox rings and Mori dream spaces. \begin{definition} {\em Let $X$ be a normal algebraic variety with free finitely generated class group ${\rm Cl}(X)$. We can define the {\em Cox ring} of $X$ to be \[ {\rm Cox}(X):= \bigoplus_{[D]\in {\rm Cl}(X)} \Gamma(X,\mathcal{O}_X(D)). \] Here, the multiplication of sections is computed in the function field of $X$. We say that a normal algebraic variety $X$ is a {\em Mori dream space} (or {\em MDS} for short) if its Cox ring is finitely generated over $\mathbb{C}$. In this case, we write $\overline{X}:=\operatorname{Spec}\, {\rm Cox}(X)$ for the resulting affine variety and call it the {\em total coordinate space} of $X$. We get $X$ back as a good quotient of the big open subset $\hat{X} \subseteq \overline{X}$. This big open subset $\hat{X}$ is called the {\em characteristic space}. The diagonalizable group (also called a {\em quasi-torus}) $H_X:=\operatorname{Spec} \mathbb{C}[\operatorname{Cl}(X)]$ is called the {\em characteristic quasi-torus} of $X$. } \end{definition} The name Mori dream space is given to these varieties because they behave optimally with respect to the minimal model program. For any divisor $D$ on a Mori dream space $X$, we can run a $D$-MMP that will terminate with either a Mori fiber space or a good minimal model for $D$ (i.e., a model on which the strict transform of $D$ is a semiample divisor).
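The above objects can be written down explicitly for projective space; the following computation is standard (see, e.g.,~\cite{ADHL15}). \begin{example}{\em Let $X=\mathbb{P}^n$. Then ${\rm Cl}(X)\cong \mathbb{Z}$, generated by the class of a hyperplane $H$, and $\Gamma(X,\mathcal{O}_X(dH))$ is the space of homogeneous polynomials of degree $d$ in $x_0,\ldots,x_n$, which vanishes for $d<0$. Hence \[ {\rm Cox}(\mathbb{P}^n)\cong \bigoplus_{d\in\mathbb{Z}} \Gamma(\mathbb{P}^n,\mathcal{O}_{\mathbb{P}^n}(dH))\cong \mathbb{C}[x_0,\ldots,x_n], \] so $\mathbb{P}^n$ is a Mori dream space with total coordinate space $\overline{X}=\mathbb{A}^{n+1}$ and characteristic space $\hat{X}=\mathbb{A}^{n+1}\setminus\{0\}$. The characteristic quasi-torus is $H_X=\operatorname{Spec}\mathbb{C}[\mathbb{Z}]\cong \mathbb{C}^*$, acting by scalar multiplication, and the good quotient $\hat{X}\to \hat{X}/\mathbb{C}^*$ recovers $\mathbb{P}^n$.} \end{example}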
Toric varieties are known to be Mori dream spaces. It is known that the Cox ring of an $n$-dimensional smooth projective toric variety of Picard rank $\rho$ is a polynomial ring in $\rho+n$ variables (see, e.g.,~\cite[Corollary 2.10]{HK00}). Furthermore, in~\cite[Corollary 1.9]{BCHM10} it is proved that smooth Fano varieties are Mori dream spaces. The quotient morphism $\hat{X} \xrightarrow{/H_X} X$ restricted to the preimage of the smooth locus of $X$ is a torsor, i.e., a principal $H_X$-bundle. Following~\cite[Definition 1.6.4.1]{ADHL15}, we say that the action of an affine algebraic group $G$ on a variety $Y$ is \emph{strongly stable} if $Y$ admits an open subset $W$ such that \begin{enumerate} \item the complement $Y \setminus W$ is of codimension at least two in $Y$, \item $G$ acts freely on $W$, \item the orbit $G\cdot w$ is closed in $Y$ for every $w \in W$. \end{enumerate} In particular, we see that the action of $H_X$ on $\hat{X}$ is strongly stable~\cite[Section 1.6.4]{ADHL15}. \subsection{Graded-local rings}\label{subsec:grocal-rings} In this subsection, we recall the concept of gr-local rings and prove some preliminary results about their class groups. Let $K$ be a finitely generated abelian group and $A$ be a $K$-graded algebra containing the field of complex numbers $\mathbb{C}$. In this subsection, we aim to find a suitable category in which certain generalizations of Cox rings fit. Later on, we define the Cox ring for pair structures on projective varieties and quasi-cones, i.e., affine varieties with a $\mathbb{C}^*$-action such that all orbit closures meet in one distinguished point, the vertex. Obviously, quasi-cones share similarities with spectra of local rings. In particular, the Picard group is trivial. Thus, it is natural to extend the definition of Cox rings to spectra of local rings. This will be done in the next section.
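A basic quasi-cone to keep in mind, which also illustrates the iteration of Cox rings discussed in the introduction, is the Du Val singularity of type $A_1$; the following computation is standard. \begin{example}{\em Consider the quadric cone $X=\operatorname{Spec}\mathbb{C}[x,y,z]/(xy-z^2)$ with the $\mathbb{C}^*$-action of weight one on all coordinates; it is a quasi-cone with vertex the origin. Its Picard group is trivial, while ${\rm Cl}(X)\cong \mathbb{Z}/2$ is generated by the class of the Weil divisor $D=V(x,z)$. Writing $x=u^2$, $y=v^2$, and $z=uv$, one computes \[ {\rm Cox}(X)=\Gamma(X,\mathcal{O}_X)\oplus \Gamma(X,\mathcal{O}_X(D))\cong \mathbb{C}[u,v], \] where the characteristic quasi-torus $\operatorname{Spec}\mathbb{C}[\mathbb{Z}/2]\cong \mathbb{Z}/2$ acts via $(u,v)\mapsto (-u,-v)$ and recovers $X\cong \mathbb{A}^2/(\mathbb{Z}/2)$. Since $\mathbb{C}[u,v]$ is factorial, the iteration of Cox rings of this singularity stops after one step.} \end{example}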
It turns out that the Cox rings of all these objects will be graded-local rings in the sense of~\cite[Definition 1.1.6]{GW78}. These rings are graded by a finitely generated abelian group $K$, such that the set of graded ideals has a unique maximal element. This ideal does not need to be a maximal ideal in the usual sense. We will see that, in our context, this unique maximal element is always a maximal ideal. The equivalent notion of ${\mathbb{C}}^*$-local rings is considered in~\cite[Definition 1.5.13]{BH93}. In~\cite[Theorem 2.5]{Hui12} it is proved that being a graded-local ring is equivalent to the degree zero part being a local ring in the classical sense. In particular, the graded maximal ideal is generated by all homogeneous non-units. As the graded maximal ideal is a maximal ideal, it is straightforward to see that in fact $\mathfrak{m}= \mathfrak{m}_0 \oplus \bigoplus_{k \neq 0} A_k$. We will use the following definition, including the restriction that the ring is finitely generated over the degree zero part. \begin{definition}{\em Let $K$ be a finitely generated abelian group. Let \[ A^{(K)}= \bigoplus_{k \in K} A^{(K)}_k \] be a $K$-graded noetherian integral domain. Then, we call $A^{(K)}$ a \emph{gr-local} ring if \begin{enumerate} \item the set of graded ideals of $A^{(K)}$ has a unique maximal element $\mathfrak{m}$, which is a maximal ideal in the usual sense, and \item the degree-zero part $A^{(K)}_0$ is a local ring with maximal ideal $\mathfrak{m}_0=\mathfrak{m} \cap A^{(K)}_0$ and $A^{(K)}$ is finitely generated as an algebra over $A^{(K)}_0$. \end{enumerate}} \end{definition} \begin{example}{\em Let $X$ be a quasi-cone. Then $A:=\mathcal{O}_X(X)$ is a gr-local ring with $A_0=\mathbb{C}$. If in addition $X$ is a Mori dream space, then the Cox ring ${\rm Cox}(X)$ has a $(\operatorname{Cl}(X)\times \mathbb{Z})$-grading that endows it with the structure of a gr-local ring.
Note that the ring $A=({\rm Cox}(X)^{(\operatorname{Cl}(X))})_0$ is a gr-local but not a local ring. On the other hand, if $X$ is a projective Mori dream space, then ${\rm Cox}(X)^{(\operatorname{Cl}(X))}$ is a gr-local ring. Indeed, in this case $\left({\rm Cox}(X)^{(\operatorname{Cl}(X))}\right)_0$ is just the ground field and thus a local ring.} \end{example} \begin{remark}{\em We remark that since $A$ is finitely generated over $A_0$, every homogeneous component $A_k$ is a finite $A_0$-module. This can be seen by taking a finite homogeneous set $\{f_i\}_{i \in I}$ of $A_0$-algebra generators of $A$. Then, there are only finitely many monomials in the $f_i$ that do not differ by a monomial lying in $A_0$. } \end{remark} Similar to the process of localizing at a prime ideal, we can graded-localize at a graded prime ideal. This process gives us a gr-local ring. \begin{definition}{\em Let $A$ be a $K$-graded ring and $\mathfrak{p}$ be a graded prime ideal. Let $S$ be the set of \emph{homogeneous} elements of $A \setminus \mathfrak{p}$. Then \[ A_{(\mathfrak{p})}:=S^{-1}A \] is the \emph{gr-localization} of $A$ at $\mathfrak{p}$. It is a graded-local ring. It is not necessarily finitely generated over $\left(A_{(\mathfrak{p})}\right)_0$. Furthermore, the unique maximal graded ideal is not necessarily maximal in the usual sense. } \end{definition} It is straightforward to see that the unique graded maximal ideal of $A_{(\mathfrak{p})}$ is maximal if and only if $\mathfrak{p}$ is maximal. \begin{example} {\em Consider $\mathbb{C}[x,y]$ with the $\mathbb{Z}$-grading given by ${\rm deg}(x)=1$ and ${\rm deg}(y)=-1$. We consider gr-localizations at different graded prime ideals $\mathfrak{p}$. We study them by understanding which scheme points of $\mathbb{A}^2$ define scheme points of $\operatorname{Spec} \mathbb{C}[x,y]_{(\mathfrak{p})}$. 
When gr-localizing at $\mathfrak{p}=\langle x,y \rangle$, all closed points on the coordinate axes of $\mathbb{A}^2$ define scheme points of $\operatorname{Spec} \mathbb{C}[x,y]_{(\mathfrak{p})}$. Among the curves, the coordinate axes are the only closures of $\mathbb{C}^*$-orbits that define points of $\operatorname{Spec} \mathbb{C}[x,y]_{(\mathfrak{p})}$. If, instead, we gr-localize at the coordinate axis $\langle x \rangle$, which is not even graded maximal since it is contained in the graded ideal $\langle x,y \rangle$, only the closed points (different from the origin) on this axis survive. The other axis, together with its points, vanishes, as do all other closures of $\mathbb{C}^*$-orbits. Moreover, those curves that do not meet the axis $\langle x \rangle$ become closed points. If, finally, we gr-localize at the ideal $\langle xy-1 \rangle$ of a closed orbit, which is graded maximal but not maximal, the surviving closed points are exactly those that lie on this curve. } \end{example} We remark that localization at a prime $\mathfrak{p}$ factors through gr-localization at $\mathfrak{p}$. In the following, we collect some useful properties that gr-local rings possess. First, we note that our definition of gr-local rings contains the finite-generation property over the degree-zero part $A_0$. Hence, if $A_0$ is essentially of finite type over $\mathbb{C}$, then so is $A$ (see, e.g.,~\cite[Proposition 1.3.9]{EGA4}). The second advantage of the finite-generation property is that the arguments of~\cite[Sec 1.2]{ADHL15} apply. Indeed, we have a surjection $A_0 \otimes \mathbb{C}[x_1,\ldots,x_n] \twoheadrightarrow A$. In particular, we have an equivalence of categories between gr-local rings and affine schemes of finite type over the spectrum of a local ring with a quasi-torus action. \begin{example} {\em We consider $\mathbb{C}[x,y]$ with the $\mathbb{Z}$-grading given by the weight $(1,-1)$ and the maximal graded ideal $\mathfrak{m}=\langle x, y \rangle$.
The degree zero part is $\mathbb{C}[xy]$ and the set of homogeneous elements of $\mathbb{C}[x,y] \setminus \mathfrak{m}$ is just $S:=\mathbb{C}[xy] \setminus \langle xy \rangle$. On the other hand, gr-localizing gives the gr-local ring $\mathbb{C}[x,y]_{(\mathfrak{m})}$, which has degree-zero elements of the form $\frac{f}{g}$, where $f \in \mathbb{C}[x,y]$, $g \in S$, and the degree of $f$ equals the degree of $g$. Since the elements of $S$ are of degree zero, both $f$ and $g$ are of degree zero. We conclude that there is an isomorphism \[ (\mathbb{C}[x,y]_{(\mathfrak{m})})_0 \cong \mathbb{C}[xy]_{\langle xy \rangle}. \] This means that we can see the local ring at the origin of the $\mathbb{C}^*$-quotient $\mathbb{C}=\operatorname{Spec} \mathbb{C}[x,y]_0$ as a $\mathbb{C}^*$-quotient of the gr-local ring $\mathbb{C}[x,y]_{(\mathfrak{m})}$. Furthermore, this quotient parametrizes orbits in a special way: closed points parametrize closed orbits. Non-closed points parametrize orbits that are not closed, but whose closures consist of closures of orbits. In this sense, it is a good quotient. } \end{example} In the following, we try to formalize this notion of good quotient. The classical one for varieties is not sharp enough here, since it does not stress what is parametrized by non-closed points. \begin{definition}{\em Let $X$ be an affine scheme over $\mathbb{C}$. Let an affine algebraic group $G$ act on $X$. A morphism $\varphi \colon X \to Y$ to a scheme $Y$ over $\mathbb{C}$ is a \emph{good quotient} if: \begin{enumerate} \item $\varphi$ is $G$-invariant, surjective, and affine, \item for $U \subseteq Y$ open, the morphism $\mathcal{O}_Y(U) \to \mathcal{O}_X(\varphi^{-1}(U))$ is an isomorphism onto $\mathcal{O}_X(\varphi^{-1}(U))^G$, \item the image of a $G$-invariant closed subset of $X$ is closed in $Y$, and \item the images of disjoint $G$-invariant closed subsets are disjoint.
\end{enumerate} Let $x \in X$ be a scheme point with closure $\overline{x}$. We denote by $Gx$ the orbit of $x$. We call the set \[ G\overline{x}:=\{g\cdot x' \mid g \in G \text{ and } x' \in \overline{x} \} \] the \emph{scheme-orbit} of $x$. We say that $\varphi$ is a \emph{good scheme quotient} if in addition to (1)-(4) the following hold: \begin{enumerate} \item[(1')] The image of an orbit $Gx$ (not necessarily closed) with a \emph{closed} scheme-orbit $G\overline{x}$ is a point in $Y$, \item[(2')] this point is closed if and only if $Gx$ is closed, \item[(3')] if this point is not closed, then its closure consists of the image $\varphi(G\overline{x})$ of the scheme-orbit, and \item[(4')] in the pre-image of any $y \in Y$ lies exactly one orbit with a closed corresponding scheme-orbit. \end{enumerate} We say that a good scheme quotient is a \emph{geometric scheme quotient} if the pre-image of any $ y \in Y$ is an orbit with a closed corresponding scheme-orbit. } \end{definition} In particular, we will see that for a gr-local ring $A$, the morphism $\operatorname{Spec} A \to \operatorname{Spec} A_0$ is a good scheme quotient, which is even geometric if the grading group is finite. We also remark that since we work in characteristic zero, categorical GIT-quotients by reductive groups are {\em universally categorical}, i.e., they behave well under base change, see~\cite[Theorem 1.1]{MFK94}. \begin{lemma} Let $K$ be a finite abelian group and $A$ be a $K$-graded gr-local ring. Then $A$ is local. \end{lemma} \begin{proof} We have to show that the unique maximal graded ideal $\mathfrak{m}$ is the only maximal ideal. Assume that there is another maximal ideal; by the uniqueness property, it is not graded. Consider the corresponding closed point $x \in \operatorname{Spec} A$. Let $G \cong K$ be the finite abelian group acting on $\operatorname{Spec} A$.
The orbit $Gx=G\overline{x}$ is closed, hence its image $y$ under the quotient morphism $\operatorname{Spec} A \to \operatorname{Spec} A_0$ is a closed point. Since $A_0$ is local, the point $y$ coincides with the point corresponding to the maximal ideal $\mathfrak{m}_0=\mathfrak{m} \cap A_0$. But since $Gx$ is closed, it coincides with the orbit of the point corresponding to $\mathfrak{m}$. This leads to a contradiction. \end{proof} In particular, we see that gr-local rings as defined above are also graded-local in the classical sense, where only free abelian grading groups $K$ are allowed. Indeed, the degree-zero part with respect to the free part of the grading group is local as well, by the above lemma. We finish this subsection by proving that the class group of the spectrum of a gr-local ring is concentrated at the unique maximal graded ideal. In particular, the following holds. \begin{lemma} \label{lem:pic0} Let $A$ be a gr-local ring, $X:=\operatorname{Spec} A$, and $x\in X$ be the closed point corresponding to the unique graded maximal ideal $\mathfrak{m}$. Then \[ \operatorname{Cl}(X)\cong \operatorname{Cl}(X,x) \cong \operatorname{Cl}(X_x) \quad {\rm and} \quad \operatorname{Pic}(X)\cong \operatorname{Pic}(X_x) \cong 0. \] \end{lemma} \begin{proof} By~\cite[Prop. 7.1]{Sam64}, the class group $\operatorname{Cl}(X)$ is isomorphic to the group of graded divisorial ideals modulo the subgroup of principal graded ideals. Thus, we only have to show that for a graded divisorial ideal $I \subseteq A$, if $IA_\mathfrak{m}$ is principal, then $I$ is already principal. Let $a \in A_{\mathfrak{m}}$ be a generator of $IA_\mathfrak{m}$, which we can assume to be a graded element of $A$. Then, for a graded $x \in I$, there are $p \in A$ and $q \in S=A \setminus \mathfrak{m}$, such that $x=\frac{p}{q}a$ in $A_{\mathfrak{m}}$. So $xq=pa$ holds in $A$. Writing $q_i$ for the graded components of $q$, we know that $q_0$ is nonzero and a unit. Thus, there is a homogeneous component $p_k$ of $p$, such that $x q_0=p_k a$.
Hence, $I$ is generated by $a$ in $A$. The argument for triviality of the Picard group is the same as in~\cite[Lemma 5.1]{Mur69}. \end{proof} The advantage of the notion of gr-local rings is that it not only encompasses the local Cox rings of singularities, but also stresses the grading. Note that this provides us with a meaningful notion of finite generation for Cox rings of (spectra of) local rings: the Cox ring should be finitely generated (as an algebra) over the local ring itself. \begin{remark}{\em \label{rem:nongrocalMDS} If $X$ is an affine Mori dream space, then the Cox ring ${\rm Cox}(X)$ may have no grading that makes it a gr-local ring. However, we will see later that the Cox sheaf $\Cox{X}$ is always a sheaf of gr-local rings (see Definition~\ref{def:sheaf-gr-local-rings}).} \end{remark} \subsection{Graded-Henselian rings}\label{subsec:gr-Henselian-rings} In this subsection, we recall the concept of gr-Henselian rings and prove some preliminary results about their class groups. The local rings in the Zariski topology are too coarse to capture the local topology at a singularity well. Thus, in the following, we also consider the local rings in the \'etale topology, which are Henselian local rings. The resulting Cox rings are gr-local rings with a Henselian degree-zero part. Such rings were studied in~\cite{Cae83} and are called \emph{gr-Henselian} rings. \begin{definition}[Cf.~\cite{Cae83}] {\em Let $A$ be a gr-local ring. Then, we say that $A$ is \emph{gr-Henselian} if it satisfies one of the following equivalent conditions: \begin{enumerate} \item $A_0$ is Henselian, and \item every graded $A$-algebra is a direct sum of gr-local rings. \end{enumerate} } \end{definition} In fact,~\cite[Theorem 4.6]{Cae83} contains several more equivalent characterizations analogous to those for Henselian rings. For us, perhaps the most important property of gr-Henselian rings is the following.
\begin{theorem} \label{thm:Cl-gr-Hens} Let $A$ be an excellent rational $\mathbb{Z}^k$-graded gr-Henselian ring. Let $A_\mathfrak{m}^h$ be the Henselization of the local ring at the unique maximal graded ideal $\mathfrak{m}$ and $\hat{A}$ be the $\mathfrak{m}$-adic completion. Assume the graded prime ideals $\mathfrak{p}$ of height one in $A$ are in one-to-one correspondence with the height one prime ideals in $A_0$ via $\mathfrak{p}=\mathfrak{p}_0A$. Furthermore, we assume the same property holds for the base change $\tilde{A}:=A\otimes_{A_0} \widehat{A_0}$. Then, we have isomorphisms \[ \operatorname{Cl}(A) \cong \operatorname{Cl}(A_\mathfrak{m}^h) \cong \operatorname{Cl}(\hat{A}). \] \end{theorem} The assumptions on the height one prime ideals are not as restrictive as they may seem. They are fulfilled for gr-local rings if the morphisms $\operatorname{Spec}(A) \to \operatorname{Spec}(A_0)$ and $\operatorname{Spec}(\tilde{A}) \to \operatorname{Spec}(\widehat{A_0})$ are locally trivial fiber bundles in codimension one. To prove the theorem, we follow the line of argument of~\cite[Sec 1 \& 2]{Fl81}, where an analogous result is proved for $\mathbb{N}$-graded rational rings. Before we can use the results from~\cite{Fl81}, we have to prove the following lemma. \begin{lemma} \label{le:componentcompletion} Let $A$ be a $\mathbb{Z}^K$-graded gr-local ring with maximal graded ideal $\mathfrak{m}=\mathfrak{m}_0 \oplus \bigoplus_{k \neq 0} A_k$. Then the degree $k$ piece of the $\mathfrak{m}$-adic completion $\hat{A}=\prod_{k \in \mathbb{Z}^K} \underleftarrow{\lim}\, A_k/\left(\mathfrak{m}^j\right)_k$ is isomorphic to the $\mathfrak{m}_0$-adic completion of the $A_0$-module $A_k$. This means that we have isomorphisms \[ \underleftarrow{\lim}\, A_k/\left(\mathfrak{m}^j\right)_k \cong \underleftarrow{\lim}\, A_k/\left(\mathfrak{m}_0\right)^j\!A_k =: \widehat{A_k}.
\] \end{lemma} \begin{proof} Let $g_{01},\ldots,g_{0n_0},g_{i_11},\ldots, g_{i_1n_{i_1}},\ldots,g_{i_m1},\ldots, g_{i_mn_{i_m}}$ be a finite set of $A$-module generators of $\mathfrak{m}$, where $g_{ij} \in A_i$ for $1\leq j \leq n_i$. In the following, by degree, we mean the standard degree of a monomial $m(g_{ij})$. Otherwise, we speak of the $\mathbb{Z}^K$-degree. Since $\mathfrak{m}^l$ is generated as an $A$-module by all monomials in the $g_{ij}$ of degree $l$, we know that $(\mathfrak{m}^l)_0$ is generated as an $A_0$-module by all monomials in the $g_{ij}$ of $\mathbb{Z}^K$-degree zero and degree at least $l$. Indeed, a monomial of degree $l$ and nonzero $\mathbb{Z}^K$-degree $k$ must have an $A$-coefficient in $A_{-k}$ in order to lie in $A_0$, and expanding it in the $A_0$-module generators of $A_k$ leads to monomials of degree greater than $l$. On the other hand, we know that there are only finitely many monomials $m_1,\ldots,m_M$ in the $g_{ij}$ such that any other monomial in the $g_{ij}$ is in turn a monomial in these. This follows from standard monomial combinatorics. We set $\mu:=\max_{i=1,\ldots,M}\deg(m_i)$ and get $$ (\mathfrak{m}_0)^{\nu \mu} \subseteq \left(\mathfrak{m}^{\nu \mu}\right)_0 \subseteq (\mathfrak{m}_0)^{\nu} $$ for $\nu \geq 1$. So the claim follows for $k=0$. The argument for the $A_0$-modules $A_k$ is similar. There are only finitely many monomials in the $g_{ij}$ of $\mathbb{Z}^K$-degree $k$, such that all others of $\mathbb{Z}^K$-degree $k$ differ from them by multiplication with a monomial of $\mathbb{Z}^K$-degree $0$. Set $\mu_k$ to be the maximal degree of these finitely many monomials. Then we get $$ (\mathfrak{m}_0)^{\nu \mu + \mu_k-1} A_k \subseteq \left(\mathfrak{m}^{\nu \mu + \mu_k}\right)_k \subseteq (\mathfrak{m}_0)^{\nu} A_k $$ and the claim is proved. \end{proof} We need two additional lemmas and use the following definitions. Let $B$ be the $\mathfrak{m}$-adic completion of $A[t_1,t_1^{-1},\ldots,t_k,t_k^{-1}]$.
Denote $\hat{A}\llbracket \mathbf{x} \rrbracket:=\hat{A}\llbracket x_1,\ldots,x_k\rrbracket$ and let $p \colon \hat{A} \to \hat{A}\llbracket \mathbf{x} \rrbracket$ and $p_0 \colon \hat{A}\to B$ be the canonical injections. Let $q \colon \hat{A} \to \hat{A}\llbracket \mathbf{x} \rrbracket$ be the homomorphism defined by \[ A_{(z_1,\ldots,z_k)} \ni f \mapsto f\cdot (x_1+1)^{z_1}\cdots (x_k+1)^{z_k}, \] where $(x_i+1)^{-1}=\sum_{j=0}^{\infty} (-x_i)^j$. Further, let $q_0 \colon \hat{A}\to B$ be the homomorphism defined by \[ A_{(z_1,\ldots,z_k)} \ni f \mapsto f\cdot t_1^{z_1}\cdots t_k^{z_k}, \] and $g \colon B \to \hat{A}\llbracket \mathbf{x} \rrbracket$ the $\hat{A}$-homomorphism mapping $t_i$ to $x_i+1$. Observe that the equalities $g \circ p_0=p$ and $g \circ q_0=q$ hold. We prove the following lemma. \begin{lemma} \label{le:cl-injective} The map $g_* \colon \operatorname{Cl}(B) \to \operatorname{Cl}(\hat{A}\llbracket \mathbf{x} \rrbracket)$ is injective. \end{lemma} \begin{proof} Let $\mathfrak{b}$ be a divisorial ideal of $B$ in the kernel of $g_*$. We note that $\hat{A}\llbracket \mathbf{x} \rrbracket$ is the $\langle t_1-1,\ldots,t_k-1\rangle$-adic completion of $B$. For any prime $\mathfrak{p} \subseteq \hat{A}$, the ring extension $B_{\mathfrak{p}B} \to \hat{A}\llbracket \mathbf{x} \rrbracket_{\mathfrak{p}\hat{A}\llbracket \mathbf{x} \rrbracket}$ is faithfully flat. Thus, since $\mathfrak{b}\otimes_B \hat{A}\llbracket \mathbf{x} \rrbracket$ is principal, also $\mathfrak{b}\cdot B_{\mathfrak{p}B}$ is principal for any prime $\mathfrak{p} \subseteq \hat{A}$. We want to show that $\mathfrak{b}$ is locally principal, i.e.\ that $\mathfrak{b}_\mathfrak{p}$ is principal for any prime $\mathfrak{p}$. So let $\mathfrak{n} \subseteq B$ be a maximal ideal and let $\mathfrak{m}=\hat{A} \cap \mathfrak{n}$ be the unique maximal ideal of $\hat{A}$.
Now, we have an isomorphism $\hat{A}/\mathfrak{m} \cong B/\mathfrak{n}$ and the local homomorphism $\hat{A}\to B_{\mathfrak{n}}$ of local rings is formally smooth. Then~\cite[II, Corollaire 9.8]{Bou78} implies that $B_\mathfrak{q}$ is parafactorial for any prime $\mathfrak{q} \subseteq \mathfrak{n}$ with $\mathfrak{q} \not\subseteq \mathfrak{m}B$ and $\dim(B_\mathfrak{q})\geq 2$. Due to normality of $B_\mathfrak{n}$ and by induction on $\dim(B_\mathfrak{q})$, we get that $\mathfrak{b}_\mathfrak{q}$ is principal. Thus $\mathfrak{b}$ is locally principal. But since $B$ is $\mathfrak{m}B$-adically complete and $\hat{A}/\mathfrak{m}$ is a field, we get $$ \operatorname{Pic}(B) \cong \operatorname{Pic}(B/\mathfrak{m}B) \cong \operatorname{Pic}((\hat{A}/\mathfrak{m})[t_1,t_1^{-1},\ldots,t_k,t_k^{-1}]) \cong 0. $$ So $\mathfrak{b}$ is principal. \end{proof} \begin{lemma} \label{le:cl-equalizer} Under the assumptions of Theorem~\ref{thm:Cl-gr-Hens}, the sequence \[ \xymatrix{ 0 \ar[r] & \operatorname{Cl}(A) \ar[r] & \operatorname{Cl}(\hat{A}) \ar@<-.5ex>[r]_{p_{0*}} \ar@<.5ex>[r]^{q_{0*}} & \operatorname{Cl}(B) } \] is exact. \end{lemma} \begin{proof} Recall that \[ \tilde{A}= A \otimes_{A_0} \widehat{A_0} = \bigoplus_{k \in \mathbb{Z}^K} \left( A_k \otimes_{A_0} \widehat{A_0} \right) = \bigoplus_{k \in \mathbb{Z}^K} \widehat{A_k} \cong \bigoplus_{k \in \mathbb{Z}^K} \underleftarrow{\lim}\, A_k/\left(\mathfrak{m}^j\right)_k , \] where hats denote $\mathfrak{m}_0$-adic completion, the second identity is due to the fact that the $A_k$ are finitely generated $A_0$-modules, and the isomorphism is due to Lemma~\ref{le:componentcompletion}. The $A_0$-algebra-homomorphism $A \to \hat{A}$ factors through $A \to \tilde{A}$ and $\tilde{A} \to \hat{A}$.
By~\cite[Sec 2]{Fl81}, it follows that for any height one prime ideal $\bar{\mathfrak{a}}$ of $\hat{A}$ such that $q_{0*}(\bar{\mathfrak{a}})=p_{0*}(\bar{\mathfrak{a}})$ in $\operatorname{Cl}(B)$, there is a graded height one ideal $\tilde{\mathfrak{a}} \subseteq \tilde{A}$ such that $\hat{\tilde{\mathfrak{a}}}=\bar{\mathfrak{a}}$. Then, $\tilde{\mathfrak{a}}_0$ is an ideal of $\widehat{A_0}$ of height one. In particular, $\tilde{\mathfrak{a}}_0 \tilde{A}=\tilde{\mathfrak{a}}$ by the assumptions of the theorem. Since $A_0$ is rational, by~\cite[Theorem (6.2)]{BF84}, there is a height one prime ideal $\mathfrak{a}_0$ of $A_0$ such that $\widehat{\mathfrak{a}_0}=\tilde{\mathfrak{a}}_0$. Then $\mathfrak{a}_0A$ is a height one prime of $A$ such that $\mathfrak{a}_0A \otimes_{A_0} \widehat{A_0}=\tilde{\mathfrak{a}}$. So the equalizer of $q_{0*}$ and $p_{0*}$ indeed equals the image of $\operatorname{Cl}(A)$ in $\operatorname{Cl}(\hat{A})$. This concludes the proof of the lemma. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Cl-gr-Hens}] Since $A$ is excellent and rational, the completion $\hat{A}$ has the DCG property. This means that $\pi_*\colon \operatorname{Cl}(\hat{A}\llbracket x \rrbracket) \to \operatorname{Cl}(\hat{A})$ induced by $\pi \colon \hat{A}\llbracket x \rrbracket \to \hat{A}$ mapping $x$ to $0$ is a bijection, see~\cite[p. 128]{Fl81}. But then $\hat{A}\llbracket x_1,\ldots,x_k \rrbracket$ has the DCG property. Moreover, $\omega \colon \hat{A}\llbracket x_1,\ldots,x_k \rrbracket \to \hat{A}$ mapping all the $x_i$ to $0$ induces a bijection $\omega_*$ between the divisor class groups by induction. But since $\omega \circ p = \omega \circ q$, $p=g \circ p_0$, and $q= g \circ q_0$, by Lemma~\ref{le:cl-injective} and Lemma~\ref{le:cl-equalizer}, we get $p_{0*}=q_{0*}$ and $\operatorname{Cl}(A) \to \operatorname{Cl}(\hat{A})$ is surjective and hence bijective.
Since this map factors through $\operatorname{Cl}(A) \to \operatorname{Cl}(A_\mathfrak{m}^h)$, which is injective, the claimed isomorphisms between the three divisor class groups follow. \end{proof} \subsection{Sheaves of gr-local rings} \label{subsec:shvs-gr-local} In this subsection, we define sheaves of gr-local rings on algebraic varieties. Throughout this subsection, we consider the case where $X$ is only locally a Mori dream space, that is, its Cox sheaf $\Cox{X}$ is locally of finite type in the sense of~\cite[Constr. 1.3.2.1]{ADHL15}. This means that every $x \in X$ has an open affine neighbourhood $U$ such that $\Cox{X}(U)$ is a finitely generated $\mathbb{C}$-algebra. This makes it possible to define the relative spectrum $\widehat{X} := \operatorname{Spec}_X \Cox{X}$ of the Cox sheaf, the so-called characteristic space of $X$. However, it may happen that the ring of global sections ${\rm Cox}(X)$ is not a finitely generated $\mathbb{C}$-algebra. We show that a Cox sheaf is locally of finite type in the aforementioned sense if and only if it is a sheaf of gr-local rings. This means, in particular, that this property has to be checked only locally at the singularities whenever the divisor class group $\operatorname{Cl}(X)$ is finitely generated. \begin{definition}\label{def:sheaf-gr-local-rings}{\em Let $X$ be a normal variety. Let $\mathcal{S}$ be a quasi-coherent sheaf of $\mathcal{O}_X$-algebras. If the stalk $\mathcal{S}_x$ of $\mathcal{S}$ at every point $x \in X$ is a gr-local ring, then we call $\mathcal{S}$ a \emph{sheaf of gr-local rings}.} \end{definition} We recall from~\cite[Def 1.3.1.1]{ADHL15} that the \emph{sheaf of divisorial algebras} associated to a finitely generated subgroup $K \subseteq \operatorname{WDiv}(X)$ is the quasi-coherent sheaf \[ \mathcal{S}:= \bigoplus_{D \in K} \mathcal{O}_X(D). \] \begin{definition}{\em Let $X$ be a normal variety.
Let $\mathcal{S}$ be a quasi-coherent sheaf of $\mathcal{O}_X$-algebras. We say that $\mathcal{S}$ is {\em locally of finite type} if for every point $x\in X$ there is an open affine neighborhood $x\in U$ with $\mathcal{S}(U)$ a finitely generated $\mathbb{C}$-algebra. } \end{definition} \begin{lemma} \label{le:fg-local} Let $X$ be a normal algebraic variety and $\mathcal{S}$ a sheaf of divisorial algebras associated to the finitely generated subgroup $K \subseteq \operatorname{WDiv}(X)$. Then the stalk $\mathcal{S}_x$ is a finitely generated $\mathcal{O}_{X,x}$-algebra for any $x \in X$ if and only if $\mathcal{S}$ is locally of finite type. \end{lemma} \begin{proof} First let $\mathcal{S}$ be a sheaf of divisorial algebras locally of finite type and $U \subseteq X$ be affine. It follows that for some $k \in \mathbb{N}$, there is a surjection $\mathcal{O}_X(U)[x_1,\ldots,x_k] \to \mathcal{S}(U)$. Since surjectivity of a module homomorphism is a local property, this induces a surjection $\mathcal{O}_{X,x}[x_1,\ldots,x_k] \to \mathcal{S}_x$ and thus $\mathcal{S}_x$ is a finitely generated $\mathcal{O}_{X,x}$-algebra for every $x \in U$. Now, fix $x \in X$ and assume that $\mathcal{S}_x$ is a finitely generated $\mathcal{O}_{X,x}$-algebra. We fix a set of generators $D_1,\ldots,D_m$ of $K$. Then, there is an open affine neighbourhood $x\in U \subseteq X$ such that $x$ lies in every irreducible component of $D_i\cap U$ for any $i$. Let $f_1,\ldots,f_k \in \mathcal{S}_x$ be a finite set of $K$-homogeneous $\mathcal{O}_{X,x}$-algebra-generators of the stalk $\mathcal{S}_x$. By shrinking $U$ if necessary, we can lift these germs to sections $f_1,\ldots,f_k \in \mathcal{S}(U)$ such that $x$ lies in every irreducible component of $\operatorname{supp}(f_i)$ for any $i$. Now let $D \in K$. We have a primary decomposition of the divisorial ideal $\mathcal{S}_D(U)=\mathfrak{q}_1 \cap \cdots \cap \mathfrak{q}_r$, such that the associated primes $\mathfrak{p}_i$ all lie in $\mathfrak{m}_x$.
In particular, $\mathrm{sat}_{\mathfrak{m}_x}(\mathcal{S}_D(U))=\mathcal{S}_D(U)$, see e.g.~\cite[Prop. 4.9]{AM69}. The localization $S_{x,D}$ of $\mathcal{S}_D(U)$ is generated as an $\mathcal{O}_{X,x}$-module by monomials $p_1,\ldots,p_l$ in the $f_i$. In particular, $x$ lies in every irreducible component of $\operatorname{supp}(p_i)$ for any $i$. So the $\mathcal{O}(U)$-module $J:=\sum_{i=1}^{l} \mathcal{O}(U) p_i$ has localization $S_{x,D}=\sum_{i=1}^{l} \mathcal{O}_{U,x} p_i$ and saturation $\mathrm{sat}_{\mathfrak{m}_x}(J)=J$. Thus $J=\mathcal{S}_D(U)$ and $\mathcal{S}(U)$ is generated as an $\mathcal{O}(U)$-algebra by the $f_i$. The proof is finished. \end{proof} \begin{corollary} \label{cor:sheaf-groc-sheaf-finite-type} Let $X$ be a normal algebraic variety such that $\operatorname{Cl}(X)$ is finitely generated. Then $\Cox{X}$ is a sheaf of gr-local rings if and only if it is locally of finite type. \end{corollary} \begin{example}{\em If $X$ is a point, then a sheaf of gr-local rings over $X$ is a gr-local ring.} \end{example} \subsection{Coverings of gr-local rings}\label{subsec:covers-grocal-rings} In this subsection, we bring together the concepts of fundamental groups and Cox rings. There are different notions for the regional fundamental groups of singularities. In the case of a klt singularity, they all agree. Let $x \in (X,\Delta)$ be a klt singularity. Then the regional fundamental group $\pi_1^{\operatorname{reg}} (X,\Delta;x)$ is the inverse limit of the orbifold fundamental groups $\pi_1^{\operatorname{reg}} (U_{\operatorname{reg}},\Delta;x)$, where $U$ runs through analytic open neighborhoods of $x$. The regional fundamental group is computed by some neighborhood $U$, which can be chosen to be the intersection of $X$ with a small Euclidean ball around $x$ in some complex manifold $M \supseteq X$.
It equals the fundamental group of the regional link of $x$, the intersection of $X_{\operatorname{reg}}$ with a small Euclidean sphere, which is just a deformation retract of $U_{\operatorname{reg}}$. However, when we work in the algebraic category, we deal with \'etale neighborhoods of local rings. In the case of klt singularities, this makes no difference. This follows from the fact that the regional fundamental group is finite by~\cite[Theorem 1]{Bra20}. In particular, $\pi_1^{\operatorname{reg}}(X,\Delta;x)$ equals the \'etale fundamental group of the smooth locus of the spectrum of the holomorphic local ring $\mathcal{O}_{X,x}^{\rm hol}$. Since this ring is Henselian, by~\cite[Cor. p. 579]{Elk73}, we have \[ \pi_1^{\operatorname{reg}} (X,\Delta;x) \cong \pi_1^{\rm et}(X^h_{x,\operatorname{reg}}, \Delta^h_{\rm reg}) \cong \pi_1^{\rm et}(\widehat{X}_{x,\operatorname{reg}}, \widehat{\Delta}_{\rm reg}). \] Here, $X_x^h$ and $\widehat{X_x}$ denote the spectra of the \'etale and complete local rings. The subscript (or superscript) ${\rm reg}$ means that we consider the regular locus. Furthermore, the divisor $\Delta^h_{\rm reg}$ (resp. $\widehat{\Delta}_{\rm reg}$) is the pull-back of $\Delta$ to $X_{x,{\rm reg}}^h$ (resp. $\widehat{X}_{x,{\rm reg}}$). Since $\pi_1^{\operatorname{reg}} (X,\Delta;x)$ is finite, it is computed by an affine \'etale neighborhood $V_x \to X$ of $x$. Moreover, by~\cite[Sec 6]{BF84}, we know that $\operatorname{Cl}(X_x^h)$ and $\operatorname{Cl}(\widehat{X_x})$ are finitely generated and isomorphic. Thus, we can find an affine \'etale neighborhood $U_x \to X$ of $x$ that computes both the regional fundamental group and the local divisor class group. We will use these facts often throughout the article. \subsection{Minimal Model Program}\label{subsec:mmp} In this subsection, we recall the definition of the singularities of the minimal model program. We also recall some basic constructions such as the purely log terminal blow-up.
\begin{definition} {\em A projective morphism $f\colon X\rightarrow Z$ is called a {\em contraction} if $f_*\mathcal{O}_X=\mathcal{O}_Z$. In particular, if $X$ is normal and $X\rightarrow Z$ is a contraction, then $Z$ is normal as well. } \end{definition} On the other hand, if $g\colon X \to Z$ is an affine morphism, then $X$ is isomorphic to the relative spectrum over $Z$ of the direct image sheaf $g_* \mathcal{O}_X$, i.e., $ X \cong \operatorname{Spec}_Z g_* \mathcal{O}_X$. So if $h= f \circ g \colon X \to Y \to Z$ is an affine morphism $g$ composed with a contraction $f$, then $h_*\mathcal{O}_X$ is a quasi-coherent sheaf of $\mathcal{O}_Z$-modules that is locally of finite type. Morphisms of this kind will become important in the following. \begin{definition} \label{def:aff-contraction} {\em A morphism $h\colon X \to Z$ that factors through an affine morphism $g\colon X \to Y$ and a contraction $f \colon Y \to Z$ is called an {\em aff-contraction}. } \end{definition} \begin{example} {\em Let $X$ be a projective Mori dream space with structure morphism $\phi \colon X \to {\rm Spec}(\mathbb{C})$. Let $\psi \colon \widehat{X}=\operatorname{Spec}_X \Cox{X} \to X$ be the structure morphism of its characteristic space. Then $h:= \phi \circ \psi$ is an aff-contraction. } \end{example} \begin{definition} {\em Let $X$ be a normal quasi-projective variety. A {\em log pair} $(X,\Delta)$ consists of $X$ and an effective divisor $\Delta\geq 0$ so that $K_X+\Delta$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor. } \end{definition} \begin{definition} {\em Let $(X,\Delta)$ be a log pair. A {\em prime divisor over $X$} is a prime divisor on a normal quasi-projective variety $Y$ that admits a projective birational morphism to $X$. This means that there exists a projective birational morphism $\pi\colon Y\rightarrow X$ and a prime divisor $E\subset Y$. The {\em log discrepancy} of $(X,\Delta)$ at $E$ is defined to be \[ a_E(X,\Delta) := 1+{\rm coeff}_E(K_Y-\pi^*(K_X+\Delta)).
\] A {\em log resolution} of a log pair $(X,\Delta)$ is a projective birational morphism $\pi\colon Y\rightarrow X$ so that $Y$ is a regular variety, the exceptional locus $E$ is purely divisorial, and $E_{\rm red}+\pi^{-1}_*\Delta$ has simple normal crossing support. Any log pair admits a log resolution by Hironaka's resolution of singularities. } \end{definition} \begin{definition} {\em A pair $(X,\Delta)$ is said to be {\em Kawamata log terminal} (or {\em klt} for short) if all its log discrepancies are positive. This means that $a_E(X,\Delta)>0$ for every prime divisor $E$ over $X$. A pair $(X,\Delta)$ is said to be {\em log canonical} (or {\em lc} for short) if all its log discrepancies are non-negative. This means that $a_E(X,\Delta)\geq 0$ for every prime divisor $E$ over $X$. In both cases, it suffices to check all the prime divisors which appear on an arbitrary log resolution of the pair. A {\em non-klt} center of a pair $(X,\Delta)$ is the image on $X$ of a prime divisor $E$ over $X$ for which $a_E(X,\Delta)\leq 0$. In particular, if $(X,\Delta)$ is a log canonical pair, a non-klt center is the image on $X$ of a divisor with log discrepancy zero. } \end{definition} \begin{definition} {\em A variety $X$ is said to be of {\em klt type} if there exists a boundary $\Delta$ so that $(X,\Delta)$ is a klt pair. Analogously, we say that a germ $(X,x)$ is of {\em klt type} if there exists a boundary $\Delta$ through $x$ so that $(X,\Delta)$ is a klt germ. } \end{definition} \begin{definition} {\em A pair $(X,\Delta)$ is called {\em divisorially log terminal} or {\em dlt} if there exists an open subset $U\subset X$ satisfying the following conditions: \begin{enumerate} \item $U$ is smooth and $\Delta|_U$ has simple normal crossing support, \item the coefficients of $\Delta$ are at most one, \item all the non-klt centers of $(X,\Delta)$ intersect $U$ and are given by strata of the divisor $\lfloor \Delta \rfloor$.
\end{enumerate} A pair $(X,\Delta)$ is said to be {\em purely log terminal} or {\em plt} if it is dlt and it has at most one non-klt center. } \end{definition} \begin{definition}{\em Let $X\rightarrow Z$ be a contraction and $(X,\Delta)$ be a log pair. We say that $(X,\Delta)$ is of {\em Fano type} over $Z$ if there exists a boundary $\Delta'$ on $X$ that is big over $Z$, so that $(X,\Delta+\Delta')$ is klt and $K_X+\Delta+\Delta' \sim_{\mathbb{Q},Z}0$. } \end{definition} \begin{definition} {\em Let $(X,\Delta;x)$ be a klt singularity. A {\em purely log terminal blow-up} of $(X,\Delta)$ at $x$ (or a {\em plt blow-up} for short) is a projective birational morphism $\pi\colon Y\rightarrow X$ satisfying the following conditions: \begin{enumerate} \item $\pi$ is an isomorphism on the complement of $x$, \item the pre-image of $x$ on $Y$ is a unique prime divisor $E$, \item the pair $(Y,E+\Delta_Y)$ is plt, where $\Delta_Y:=\pi_*^{-1}(\Delta)$, and \item $-E$ is ample over $X$. \end{enumerate} In particular, the log pair $(E,\Delta_E)$ obtained by adjunction of $(Y,E+\Delta_Y)$ to $E$ is of Fano type. } \end{definition} In this article, we will be concerned with orbifold structures on Fano type varieties and klt singularities. Therefore, we will need the following definitions. \begin{definition} \label{def:standard-approx} {\em We say that the coefficients of $\Delta$ are {\em standard} if they have the form $1-\frac{1}{n}$, where $n$ is a positive integer. Given a boundary $\Delta$ on $X$, we define its {\em standard approximation} to be the effective divisor $\Delta_s$ on $X$ with largest standard coefficients such that $\Delta\geq \Delta_s$. For instance, the standard approximation of $\frac{3}{5}P+\frac{7}{10}Q$ is $\frac{1}{2}P+\frac{2}{3}Q$. Note that if $(X,\Delta)$ is of Fano type over $Z$, then $(X,\Delta_s)$ is of Fano type over $Z$ as well.} \end{definition} The following is the definition of one of the main kinds of covers that we will consider in this article. \begin{definition} {\em Let $(X,\Delta;x)$ be a klt singularity.
We say that $\phi\colon Y\rightarrow X$ is a {\em finite Galois quasi-\'etale cover} if the following conditions are satisfied: \begin{enumerate} \item There exists a finite group $G$ acting on $Y$, \item $X$ is the quotient of $Y$ by $G$, and \item the pull-back of $K_X+\Delta$ equals $K_Y+\Delta_Y$, where $\Delta_Y$ is effective. \end{enumerate} Note that $Y\rightarrow X$ may not be unramified in codimension one. However, it is unramified in codimension one when considering $(X^{\rm reg},\Delta^{\rm reg})$ as an orbifold. This justifies the quasi-\'etale property. We say that $(Y,y)$ is a {\em pointed finite Galois quasi-\'etale cover} of $(X,\Delta;x)$ if $y\in Y$ is a point whose image on $X$ is $x$. To shorten the notation, we may say that $Y\rightarrow X$ is a {\em pointed finite cover} of $(X,\Delta;x)$. } \end{definition} \section{Generalized Cox rings} \label{sec:gen-cox} In this section, we generalize the concept of Cox rings to different settings and prove some basic properties. In subsection~\ref{subsec:proj-log-cox}, we will define the Cox ring of a log pair and study its properties. In subsection~\ref{subsec:rel-log-cox} and subsection~\ref{subsec:local-cox}, we introduce the relative Cox ring and the local Cox ring, respectively. In subsection~\ref{subsec:prop-gen-cox}, we prove some properties of the above generalizations. For instance, we prove that the Cox ring of a relatively Fano type variety admits the structure of a klt type singularity (Theorem~\ref{thm:klt-Cox-ring}). Finally, in subsection~\ref{subsec:local-Hensel-cox}, we will define the local Henselian Cox ring of a singularity. This is one of the main objects considered in this article. \subsection{The Cox ring of a log pair}\label{subsec:proj-log-cox} In this subsection, we generalize the Cox ring and related notions to log pairs $(X,\Delta)$, where $X$ is a normal algebraic variety and $\Delta$ is an effective divisor on $X$.
We prove some basic properties of the Cox ring of a log pair and show that these objects become interesting even for log pair structures on $\mathbb{P}^1$. In the case that $\Delta$ has standard coefficients, such pairs can be viewed as geometric orbifolds in the sense of Campana~\cite{Cam11}. We proceed to define the class group ${\rm Cl}(X,\Delta)$ of a log pair $(X,\Delta)$. We set ${\rm Cl}(X,\Delta):={\rm Cl}(X,\Delta_s)$, where $\Delta_s$ is the standard approximation. Hence, it suffices to define the class group for standard pairs. \begin{definition}{\em Let $(X,\Delta)$ be a log pair. We denote by $(X^{\rm reg},\Delta^{\rm reg})$ the orbifold smooth locus. As usual, the divisor $\Delta^{\rm reg}$ denotes the restriction of $\Delta$ to $X^{\rm reg}$. Then, for an analytic neighborhood $U$ of any point $x\in X^{\rm reg}$, we have canonical orbifold charts $V:=\mathbb{A}^n\rightarrow \mathbb{A}^n\cong U\subset X^{\rm reg}$, which are quotients by abelian reflection groups ramifying over $\Delta|_U$. For such smooth orbifolds, there is a notion of orbifold Weil divisors and orbifold Picard group ${\rm Pic}^{\rm orb}$ (see, e.g.,~\cite[Sec 4.4.3]{BG08}). Then, we can define $$ \operatorname{Cl}(X,\Delta):=\operatorname{Pic}^{\rm orb} \left(X^{\operatorname{reg}},\Delta^{\operatorname{reg}}\right). $$ The group ${\rm Cl}(X,\Delta)$ is essentially the group of orbifold Weil divisors ${\rm WDiv}(X,\Delta)$ modulo linear equivalence of $\mathbb{Q}$-divisors. } \end{definition} Now, we define the sheaves of sections $\mathcal{O}_{(X,\Delta)} (D)$ for orbifold Weil divisors $D$ on pairs $(X,\Delta)$. \begin{definition} {\em Let $(X,\Delta)$ be a log pair and let $D \in \operatorname{WDiv}(X,\Delta)$.
Then, we define the sheaf $\mathcal{O}_{(X,\Delta)}(D)$ by \[ \Gamma(U,\mathcal{O}_{(X,\Delta)}(D)) := \langle D' \in \operatorname{WDiv}^{\mathrm{eff}}(U,\Delta) \mid D- D' \in \operatorname{PDiv}(U) \rangle \] for any open $U \subseteq X$. In particular, $\mathcal{O}_{(X,\Delta)}(D)$ is a coherent sheaf of $\mathcal{O}_{X}$-modules for any $D \in \operatorname{WDiv}(X,\Delta)$. When $X=U$, we may write $\Gamma(X,\Delta,\mathcal{O}_X(D))$ or simply $\Gamma(X,\Delta,D)$. } \end{definition} Proceeding as in~\cite[Sec 3.1,3.2]{ADHL15}, we first define the sheaves of divisorial algebras for subgroups $N \subseteq \operatorname{WDiv}(X,\Delta)$, before defining Cox sheaves and Cox rings. \begin{definition}{\em Let $(X,\Delta)$ be a log pair. Let $N \subseteq \operatorname{WDiv}(X,\Delta)$ be a subgroup. Then the sheaf of divisorial algebras associated to $N$ is \[ \mathcal{S}^{(N)}:=\bigoplus_{D \in N} \mathcal{S}^{(N)}_D \text{ where } \mathcal{S}^{(N)}_D:=\mathcal{O}_{(X,\Delta)}(D). \] } \end{definition} Now, if $\operatorname{Cl}(X,\Delta)$ is torsion free, we can define the Cox sheaf to be the sheaf of divisorial algebras associated to any $N \subseteq \operatorname{WDiv}(X,\Delta)$ such that $N \to \operatorname{Cl}(X,\Delta)$ is an isomorphism. If $\operatorname{Cl}(X,\Delta)$ has torsion, we proceed similarly to~\cite[Constr. 1.4.2.1]{ADHL15} in the case of ordinary Cox rings. The idea is to take the sheaf of divisorial algebras associated to a subgroup $N \subseteq \operatorname{WDiv}(X,\Delta)$ projecting onto $\operatorname{Cl}(X,\Delta)$, and then to take the quotient by a certain ideal sheaf identifying homogeneous components $\mathcal{S}^{(N)}_D$ and $\mathcal{S}^{(N)}_{D'}$ whenever $D$ and $D'$ are linearly equivalent. \begin{definition} \label{def:logCox} {\em Let $(X,\Delta)$ be a log pair with finitely generated log divisor class group $\operatorname{Cl}(X,\Delta)$. Let $N \subseteq \operatorname{WDiv}(X,\Delta)$ be a finitely generated subgroup.
Assume that \[ c: N \to \operatorname{Cl}(X,\Delta), \qquad D \mapsto [D] \] is onto and denote its kernel by $N^0$. Let $\mathcal{S}$ be the sheaf of divisorial algebras associated to $N$. Let $\chi: N^0 \to \mathbb{C}(X,\Delta)^*$ be a group homomorphism yielding \begin{equation} \label{chi-eq} \operatorname{div}(\chi(E))=E \end{equation} for all $E \in N^0$. Denote by $\mathcal{I}$ the sheaf of ideals of $\mathcal{S}$ locally generated by the sections $1-\chi(E)$, where $E$ runs through $N^0$. We define the \emph{log Cox sheaf} of $(X,\Delta)$ to be the quotient sheaf $\Cox{(X,\Delta)}:=\mathcal{S}/\mathcal{I}$, graded by \[ \Cox{(X,\Delta)}:=\bigoplus_{[D] \in \operatorname{Cl}(X,\Delta)} (\Cox{(X,\Delta)})_{[D]}, \text{ where } (\Cox{(X,\Delta)})_{[D]}:=\pi \left( \bigoplus_{D' \in c^{-1}([D])} \mathcal{S}_{D'} \right), \] and $\pi:\mathcal{S} \to \Cox{(X,\Delta)}$ is the projection. The ring of global sections \[ {\rm Cox}(X,\Delta):=\Gamma(X,\Cox{(X,\Delta)}) \] of this sheaf is called the \emph{log Cox ring} of $(X,\Delta)$. In what follows, we may need to consider the Cox ring with respect to a finitely generated subgroup $N\leqslant {\rm WDiv}(X,\Delta)$ which may not surject onto ${\rm Cl}(X,\Delta)$. Analogously, in this case we have a homomorphism $N\rightarrow {\rm Cl}(X,\Delta)$ with kernel $N^0$ and we choose a group homomorphism $\chi\colon N^0\rightarrow \mathbb{C}(X,\Delta)^*$ satisfying the equality~\eqref{chi-eq}. In this case, we denote the Cox ring by \[ {\rm Cox}(X,\Delta)_{N,\chi}. \] } \end{definition} \begin{remark}{\em It is clear from the construction that $N^0$ is always a subgroup of $\operatorname{PDiv}(X)$, so $\chi$ is a group homomorphism $\chi: N^0 \to \mathbb{C}(X)^*$ to the field of rational functions on $X$. Thus, the assertions from~\cite[Sec 1.4.2]{ADHL15} hold.
In particular, if $\Gamma(X,\mathcal{O}^*)=\mathbb{C}^*$, then the above definition of the log Cox sheaf and log Cox ring does not depend on the choice of $N$ and $\chi$ up to isomorphism, see~\cite[Prop. 1.4.2.2]{ADHL15}. Note that the requirement $\Gamma(X,\mathcal{O}^*)=\mathbb{C}^*$ is fulfilled for projective varieties and quasi-cones.} \end{remark} \begin{proposition}\label{prop:fg-log-cox} Let $(X,\Delta)$ be a log pair. The Cox ring ${\rm Cox}(X,\Delta)$ is finitely generated if and only if ${\rm Cox}(X)$ is finitely generated. \end{proposition} \begin{proof} Note that we have an inclusion of groups ${\rm Cl}(X)\leqslant {\rm Cl}(X,\Delta)$ of finite index. Furthermore, we have a monomorphism of rings ${\rm Cox}(X)\hookrightarrow {\rm Cox}(X,\Delta)$ obtained by coarsening the grading. By~\cite[Corollary 1.2.5]{ADHL15}, applied to generators $D_1,\dots,D_k$ of ${\rm Cl}(X,\Delta)$ and multiples $m_1D_1,\dots,m_kD_k$ whose classes lie in ${\rm Cl}(X)$, we conclude that $\mathcal{R}_{(X,\Delta)}(D_1,\dots,D_k)$ is finitely generated over $\mathbb{C}$ if and only if $\mathcal{R}_{X}(m_1D_1,\dots,m_kD_k)$ is finitely generated over $\mathbb{C}$. \end{proof} \begin{corollary} \label{cor:CoxlogCoxfg} Let $X$ be a Mori dream space. For any log pair structure $(X,\Delta)$, the Cox ring ${\rm Cox}(X,\Delta)$ is finitely generated. \end{corollary} The following proposition says that the only case in which the Cox ring of a log pair $(X,\Delta)$ may be non-isomorphic to the Cox ring of $X$ is when there is at least one coefficient of $\Delta$ which is equal to or larger than one half. \begin{proposition} Let $(X,\Delta)$ be a log pair so that ${\rm coeff}_P(\Delta)<\frac{1}{2}$ for every prime divisor $P$ on $X$. Then ${\rm Cox}(X,\Delta)\cong {\rm Cox}(X)$. \end{proposition} \begin{proof} Note that ${\rm coeff}_P(\Delta)<\frac{1}{2}$ holds for every prime divisor $P$ if and only if $\Delta_s$, the standard approximation of $\Delta$, equals the zero divisor. The above condition is equivalent to ${\rm WDiv}(X,\Delta)\cong {\rm WDiv}(X)$.
Furthermore, any section of an orbifold Weil divisor of $(X,\Delta)$ is just a section of a Weil divisor on $X$. Hence, we have that $\mathcal{R}_{(X,\Delta)} \cong\mathcal{R}_{X}$. This implies the desired isomorphism. \end{proof} We are interested in the universal abelian covering space that the log Cox ring provides us with. We can also study other abelian covers of $X$. In analogy to the case of the ordinary Cox ring, they should correspond to quotients of ${\rm Cox}(X,\Delta)$ by subgroups of $\operatorname{Cl}(X,\Delta)$, see~\cite[Thm 4.2.1.4]{ADHL15}. We explore the interplay in the following example. \begin{example}{\em Consider the $D_4$-cone singularity $X$ given by the equation $x_3^2+x_1^2x_2+x_2^2x_1=0$ in $\mathbb{A}^3$. Then \[ \operatorname{Cl}(X)= \langle D_1, D_2 \mid 2D_1=2D_2=0 \rangle \cong (\mathbb{Z}/2\mathbb{Z})^2, \] where $D_1=V(x_1)$, $D_2=V(x_2)$. Then, the spectrum of the Cox ring ${\rm Cox}(X)$ is the $A_1$-singularity $Y$ given by the equation $y_1^2+y_2^2+y_3^2=0$, where $(\mathbb{Z}/2\mathbb{Z})^2$ acts via \[ (a,b)\cdot (y_1,y_2,y_3) := ((-1)^{a}y_1,(-1)^{b}y_2,(-1)^{a+b}y_3). \] The generating invariants for this action are \[ x_1:=y_1^2, \quad x_2:=y_2^2, \quad x_3:=y_1y_2y_3, \text{ and} \quad x_4:=y_3^2. \] They satisfy the relation $x_3^2-x_1x_2x_4=0$. Furthermore, the relation of the $A_1$-singularity gives us $x_1+x_2+x_4=0$. Eliminating $x_4$ gives us back our initial relation. Now, consider pair structures $(X,\Delta)$, with $\Delta=(1-\frac{1}{m_1})D_1 + (1-\frac{1}{m_2})D_2$. We have that \[ \operatorname{Cl}(X,\Delta)=\left\langle \frac{1}{m_1}D_1,\frac{1}{m_2}D_2 \mid 2 D_1=2D_2=0 \right\rangle. \] Also note that for the ordinary Cox cover $\pi:Y \to X$, we have \[ \pi^{-1}(D_1)=V(y_2+iy_3) \cup V(y_2-iy_3), \quad \pi^{-1}(D_2)=V(y_1+iy_3) \cup V(y_1-iy_3). \] Since $\pi:Y \to X$ does not ramify over divisors, we have \[ \pi^*(\Delta)=\left(1-\frac{1}{m_1}\right)(V(y_2+iy_3) + V(y_2-iy_3)) + \left(1-\frac{1}{m_2} \right)(V(y_1+iy_3) + V(y_1-iy_3)).
\] That is, when $m_1$ and $m_2$ are different from one, the pull-back does not have normal crossings.} \end{example} To finish this subsection, we describe the Cox rings of log Fano structures on $\mathbb{P}^1$. In this case, the standard approximation has at most three non-trivial coefficients. In the case that there are two non-trivial coefficients, the Cox ring is isomorphic to $\mathbb{A}^2$ with a characteristic quasi-torus action. In the case that there are three non-trivial coefficients, the Cox ring may not be isomorphic to $\mathbb{A}^2$. \begin{example}{\em Let $\Delta$ be an effective divisor on $\mathbb{P}^1$ so that $-(K_{\mathbb{P}^1}+\Delta)$ is ample. Assume that $\Delta_s$ has two non-trivial coefficients. Then $\Delta_s=\left(1-\frac{1}{n}\right)p+\left(1-\frac{1}{m}\right)q$ for some positive integers $n$ and $m$. In this case, we have that ${\rm Cl}(\mathbb{P}^1,\Delta)=\langle \frac{1}{n}p,\frac{1}{m}q\mid p=q \rangle$. The above group is isomorphic to $\mathbb{Z}\oplus \mathbb{Z}/\gcd(n,m)\mathbb{Z}$. Let $g=\gcd(n,m)$. We conclude that the Cox ring is isomorphic to $\mathbb{A}^2$ with the characteristic quasi-torus action given by \[ t\cdot (x,y) = \left(t^{\frac{m}{g}}x,t^{\frac{n}{g}}y \right) \] and \[ \mu\cdot (x,y) = \left(\mu^{\frac{n}{g}}x,\mu^{-\frac{m}{g}}y\right), \] where $\mu$ is a $g$-th root of unity. } \end{example} \begin{example} {\em Let $\Delta$ be an effective divisor on $\mathbb{P}^1$ so that $-(K_{\mathbb{P}^1}+\Delta)$ is ample. Assume that $\Delta_s$ has three non-trivial coefficients. In this case, the coefficients of $\Delta_s$ correspond to Platonic triples (see, e.g.,~\cite{LS13,LLM19}). We have that ${\rm Cl}(\mathbb{P}^1,\Delta)=\langle \frac{1}{n}p,\frac{1}{m}q,\frac{1}{s}r \mid p=q=r\rangle$. Let $g=\gcd(ms,ns,nm)$. We may assume that the points $p$, $q$, and $r$ are $0$, $\infty$, and $1$, respectively.
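Unpacking the presentation (the generator names $a$, $b$, and $c$ are ours): writing $a=\frac{1}{n}p$, $b=\frac{1}{m}q$, and $c=\frac{1}{s}r$, the relations $p=q=r$ read $na=mb=sc$, so that \[ {\rm Cl}(\mathbb{P}^1,\Delta)\cong \mathbb{Z}^3/\langle (n,-m,0),(0,m,-s)\rangle, \] a group of free rank one whose torsion subgroup has order $g$.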
In this case, the class group is isomorphic to $\mathbb{Z} \oplus T_{n,m,s}$, where $T_{n,m,s}$ is the finite abelian group associated with the root system of the fork Dynkin diagram with three branches of lengths $n$, $m$, and $s$. The Cox ring is isomorphic to \[ \mathbb{C}[x,y,z]/\langle x^n+y^m+z^s\rangle. \] The characteristic quasi-torus action is given by \[ t\cdot (x,y,z) =\left( t^{\frac{ms}{g}}x, t^{\frac{ns}{g}}y, t^{\frac{nm}{g}}z \right) \] and $T_{n,m,s}$ acts on $(x,y,z)$ in the usual way (see, e.g.,~\cite{Muk04}). } \end{example} \begin{remark}{\em By taking all the possible Cox rings of log Fano pairs on $\mathbb{P}^1$ and quotienting by the finite part of the characteristic quasi-torus action, we recover surface klt singularities. These singularities are quotients of smooth points by finite groups. For the classification of surface klt singularities see, e.g.,~\cite{Ale93}. } \end{remark} \subsection{The relative Cox ring of a log pair}\label{subsec:rel-log-cox} In this subsection, we define the relative Cox ring of a log pair, prove some basic properties, and give some examples. \begin{definition}\label{def:rel-cox}{\em Let $(X,\Delta)$ be a log pair and $\phi\colon X\rightarrow Z$ be a contraction. We define the \emph{relative log Cox sheaf of $X/Z$} to be the direct image sheaf \[ \Cox{(X/Z,\Delta)}:=\phi_* \Cox{(X,\Delta)}, \] where $\Cox{(X,\Delta)}$ is the log Cox sheaf of $(X,\Delta)$ as in Definition~\ref{def:logCox}. If $\Cox{(X/Z,\Delta)}$ is a sheaf of finitely generated $\mathcal{O}_Z$-algebras, we say that $X \to Z$ is a \emph{relative Mori dream space} for the log pair $(X,\Delta)$. The \emph{relative affine log Cox ring} is defined to be \[ {\rm Cox}^{\rm aff}(X/Z,\Delta):=\Gamma(Z,\mathcal{R}_{(X/Z,\Delta)}). \] We write ${\rm aff}$ on top of the relative Cox ring to stress that, in this case, we are working with an affine base $Z$. Later on, we will be interested in the local behaviour around some special point of the base.
} \end{definition} More generally, we can make the above definitions if $h \colon X \to Z$ is an aff-contraction, see Definition~\ref{def:aff-contraction}. \begin{remark}\label{rem:gen-stalks}{\em If $Z$ is a point, we can identify the relative log Cox sheaf $\Cox{(X/Z,\Delta)}$ with the log Cox ring ${\rm Cox}(X,\Delta)$. More generally, when $Z$ is affine, $\Cox{(X/Z,\Delta)}$ is a sheaf of finitely generated $\mathcal{O}_Z$-algebras if and only if the algebra of global sections ${\rm Cox}^{\rm aff}(X/Z,\Delta)$ is finitely generated over $\mathcal{O}_Z(Z)$ and thus over $\mathbb{C}$ by the same argument as in~\cite[Prop. 4.3.1.3]{ADHL15}. More generally still, if $\Cox{(X/Z,\Delta)}$ is a sheaf of finitely generated $\mathcal{O}_Z$-algebras, then any fiber $X_z:=\phi^{-1}(z)$ of $\phi: X \to Z$ has an open neighbourhood $X_z \subseteq U \subseteq X$, such that $\Cox{(X,\Delta)}(U)$ is a finitely generated $\mathcal{O}_X(U)$-algebra.} \end{remark} \subsection{The local Cox ring}\label{subsec:local-cox} In this subsection, we define the local Cox ring for germs $(X,\Delta;x)$, where $(X,\Delta)$ is a pair and $x \in X$ is a closed point. More generally, when $\phi: X \to Z $ is a contraction, we define the relative local Cox ring for closed points $z \in Z$. Here, it makes sense to consider different local models depending on the needs. A priori, we consider points on algebraic varieties $X$, where we can realize $\operatorname{Cl}(X,\Delta,x)$ as a quotient of $\operatorname{Cl}(X,\Delta)$. The approach is to define the local Cox ring at $x \in X$ to be $$ \bigoplus_{[D] \in \operatorname{Cl}(X,\Delta,x)} \Gamma(X,\Delta,\mathcal{O}_X(D)). $$ This definition amounts to choosing a subgroup $N$ of the orbifold Weil divisors of $(X,\Delta)$ surjecting onto $\operatorname{Cl}(X,\Delta,x)$ with kernel $N^0$ and a character $\chi: N^0 \to \mathbb{C}(X)^*$.
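To illustrate the construction in the simplest non-trivial case (the singularity and the choice of $N$ below are ours, for illustration): let $X=\{y_1y_2=y_3^2\}\subseteq \mathbb{A}^3$ be the $A_1$-singularity, $\Delta=0$, and $x$ the vertex. Then $\operatorname{Cl}(X,\Delta,x)=\operatorname{Cl}(X)\cong \mathbb{Z}/2\mathbb{Z}$, generated by the ruling $D=V(y_1,y_3)$. Taking $N=\langle D\rangle\cong \mathbb{Z}$, we have $N^0=\langle 2D\rangle$ and $2D=\operatorname{div}(y_1)$, so the choice $\chi(2D)=y_1$ yields \[ \bigoplus_{[D'] \in \operatorname{Cl}(X,\Delta,x)} \Gamma(X,\mathcal{O}_X(D'))\cong \mathbb{C}[u,v], \qquad y_1=u^2, \quad y_2=v^2, \quad y_3=uv, \] recovering the smooth cover $\mathbb{A}^2\rightarrow X$, graded by $\mathbb{Z}/2\mathbb{Z}$.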
Note that by~\cite[Theorem 2.3]{HMT20}, the set of isomorphism classes of Cox rings defined in this way is in bijection with \[ \mathrm{Ext}^{1}(\operatorname{Cl}(X,\Delta,x),\mathcal{O}(X)^{*}). \] This construction only makes sense if $X$ is affine, so we will assume this in the following. Moreover, we assume that the group $N$ consists of Weil divisors going through $x$ and we fix a character $\chi: N^0 \to \mathbb{C}(X)^*$. Then, we can define the affine local Cox ring (or aff-local Cox ring for short) as above. If it is finitely generated over $\mathcal{O}(X)$, then its spectrum is an affine scheme of finite type. We denote by $X_x$ the spectrum of the local ring of $X$ at $x$. We have $\operatorname{Cl}(X_x,\Delta_x) \cong \operatorname{Cl}(X,\Delta,x)$ and we can uniquely identify the group $N$ with a subgroup of $\operatorname{WDiv}(X_x,\Delta_x)$. Here, $\Delta_x$ is the pull-back of $\Delta$ to $X_x$. Moreover, since $X$ and $X_x$ are birational, we can use the character $\chi$ from above in order to define the Cox ring $$ \bigoplus_{[D] \in \operatorname{Cl}(X_x,\Delta_x)} \Gamma(X_x,\Delta_x,\mathcal{O}_{X_x}(D)). $$ This is a gr-local ring, finitely generated over the degree-zero part $\mathcal{O}_{X,x}$, which is why we call it the \emph{gr-local Cox ring} of $x \in X$. By localizing at the unique graded maximal ideal, we get a local ring. \begin{definition} {\em Let $X$ be an affine variety, $(X,\Delta)$ a log pair, and $x \in X$ a closed point. Fix a subgroup $N \subseteq \operatorname{WDiv}(X,\Delta)$ of orbifold Weil divisors going through $x$ such that the induced homomorphism $\varphi\colon N \to \operatorname{Cl}(X,\Delta,x)$ is surjective. Fix a character $\chi \colon \ker(\varphi) \to \mathbb{C}(X)^*$. Let $\mathcal{S}$ be the sheaf of divisorial algebras on $X$ associated to $N$ and $\mathcal{I}$ the ideal subsheaf generated by sections $1-\chi(E)$, where $E \in \ker(\varphi)$.
Then, we define the \emph{aff-local Cox ring} of $x \in (X,\Delta)$ to be \[ {\rm Cox}(X,\Delta;x)^{\rm aff}_{N,\chi}:= \bigoplus_{[D] \in \operatorname{Cl}(X,\Delta,x)} \frac{ \bigoplus_{D' \in \varphi^{-1}([D])} \mathcal{S}_{D'}(X)}{\mathcal{I}(X)}. \] Similarly, writing $X_x:=\operatorname{Spec} \mathcal{O}_{X,x}$, we define the \emph{gr-local Cox ring} of $x \in (X,\Delta)$ to be \[ {\rm Cox}(X,\Delta;x)^{\operatorname{gr}}_{N,\chi}={\rm Cox}(X_x,\Delta_x)_{N,\chi}:= \bigoplus_{[D] \in \operatorname{Cl}(X_x,\Delta_x)} \frac{ \bigoplus_{D' \in \varphi^{-1}([D])} \mathcal{S}_{D',x}}{\mathcal{I}_{x}}. \] Finally, we define the \emph{local Cox ring} of $x \in (X,\Delta)$ to be the localization \[ {\rm Cox}(X,\Delta;x)^{\operatorname{loc}}_{N,\chi}:=\left({\rm Cox}(X,\Delta;x)^{\operatorname{gr}}_{N,\chi}\right)_{\mathfrak{m}} \] at the unique homogeneous maximal ideal of the gr-local Cox ring. We denote the spectra of these rings by \[ \overline{X}^{\operatorname{aff}}_{N,\chi}, \overline{X}^{\operatorname{gr}}_{N,\chi}, \text{ and } \overline{X}^{\operatorname{loc}}_{N,\chi} \] respectively. The isomorphism classes of the Cox rings just defined depend on the choice of $N$ and $\chi$, but having made such a choice, we will usually omit them in the notation. } \end{definition} In particular, ${\rm Cox}(X,\Delta;x)^{\operatorname{gr}}$ is the stalk of the quotient sheaf $\mathcal{S}/\mathcal{I}$ at $x$ or, equivalently, the gr-localization of the aff-local Cox ring at the unique pre-image of $x$ in $\overline{X}^{\operatorname{aff}}$. Thus, since localization factors through gr-localization, we have the following commutative diagram.
\[ \xymatrix{ \overline{X}^{\operatorname{loc}} \ar[rr] \ar@/^1pc/[rrrr] \ar[dd] && \overline{X}^{\operatorname{gr}} \ar[rr] \ar[dd] \ar[rrddd] && \overline{X}^{\operatorname{aff}} \ar[dd] \ar[rrddd] \ \\ \ \\ \overline{X}^{\operatorname{loc}}_{\rm fin} \ar[rr]^{\rm \operatorname{id}} && \overline{X}^{\operatorname{gr}}_{\rm fin} \ar[rr]|!{[uu];[drr]}\hole \ar[rrd] && \overline{X}^{\operatorname{aff}}_{\rm fin} \ar[rrd] \\ &&&& X_x \ar[rr] && X. } \] In particular, there is still a morphism $\overline{X}^{\operatorname{loc}} \to X_x$, but it may not be a quotient by the characteristic quasi-torus (at least in a strict sense). For instance, for an open orbit with the unique fixed point of $\overline{X}^{\operatorname{gr}}$ in its closure, localization will remove all closed points and only keep the generic point of the orbit. Note that $\operatorname{Cl}(\overline{X}^{\operatorname{gr}})\cong \operatorname{Cl}(\overline{X}^{\operatorname{loc}})$, since the class group of the gr-local ring ${\rm Cox}(X,\Delta;x)^{\operatorname{gr}}$ is concentrated at the unique graded maximal ideal. This essentially means that we can iterate Cox rings in a unique way, since ${\rm Cox}(\overline{X}^{\operatorname{loc}})$ can be obtained from ${\rm Cox}(\overline{X}^{\operatorname{gr}})$ via base change. The iteration of Cox rings is defined in Definition~\ref{def:cox-iteration}. \begin{definition} \label{def:rel-loc-cox} {\em Let $(X,\Delta)$ be a log pair and $\phi\colon X\rightarrow Z$ be a contraction. Let $z\in Z$ be a closed point. Let $Z_z$ be the spectrum of the local ring $\mathcal{O}_{Z,z}$. Let $X_z\rightarrow Z_z$ be the projective morphism obtained by the base change $Z_z\rightarrow Z$. We denote by $\Delta_z$ the pull-back of $\Delta$ to $X_z$.
Analogously to the local case, we can define the {\em relative gr-local Cox ring} at $z\in Z$ to be \[ {\rm Cox}(X/Z,\Delta;z)^{\rm gr}:= \bigoplus_{[D] \in \operatorname{Cl}(X_z,\Delta_z)} \frac{ \bigoplus_{D' \in \varphi^{-1}([D])} \mathcal{S}_{D',\phi^{-1}(z)}}{\mathcal{I}_{\phi^{-1}(z)}}. \] Note that the relative gr-local Cox ring comes with a natural maximal graded ideal $\mathfrak{m}$, i.e., the ideal generated by homogeneous regular functions in the Cox ring which correspond to Weil divisors on $X$ that intersect the fiber $\phi^{-1}(z)$ non-trivially. The {\em relative local Cox ring} of $(X/Z,\Delta)$ at $z$ is then defined to be \[ {\rm Cox}(X/Z,\Delta;z)^{\rm loc}:= \left( {\rm Cox}(X/Z,\Delta;z)^{\rm gr} \right)_{\mathfrak{m}}. \] The {\em local Cox ring} comes as the special case where $X \to X$ is the identity and $x \in X$: \[ {\rm Cox}(X,\Delta;x)^{\rm loc} \cong {\rm Cox}(X_x/X_x,\Delta)^{\rm loc}. \] } \end{definition} \subsection{Properties of generalized Cox rings} \label{subsec:prop-gen-cox} In this subsection, we prove some properties of the Cox rings defined in the previous subsections. First, we prove two general statements concerning relative Mori dream spaces, then we focus on the case of klt pairs. \begin{proposition} \label{prop:rel-log-Cox-sheaf-groc} Let $X \to Z$ be a relative Mori dream space. Then $\Cox{(X/Z,\Delta)}$ is a sheaf of gr-local rings. In particular, if $Z$ is the spectrum of a local ring essentially of finite type, then the local Cox ring ${\rm Cox}^{\rm loc}(X/Z,\Delta;z)$ is a local ring essentially of finite type. \end{proposition} \begin{proof} Since $X \to Z$ is a relative Mori dream space, we know that the stalks $\left(\Cox{(X/Z,\Delta)}\right)_z$ are graded rings, with zero graded piece $\mathcal{O}_{Z,z}$, which is a local ring. Finite generation over $\mathcal{O}_{Z,z}$ follows as in Corollary~\ref{cor:sheaf-groc-sheaf-finite-type}. The last assertion follows from the definition. 
\end{proof} The following set of statements shows that klt singularities and weakly Fano pairs behave optimally with respect to the Cox construction.
This is known for the classical Cox ring of weakly Fano pairs and klt quasi-cones (see, e.g.,~\cite{GOST15}). \begin{theorem}\label{thm:relative-fano} Let $(X,\Delta)$ be of Fano type over $Z$, where $Z$ is either projective, a quasi-cone, or the spectrum of a local ring essentially of finite type. Then $X\rightarrow Z$ is a relative Mori dream space for the log pair $(X,\Delta)$. \end{theorem} \begin{proof} By Remark~\ref{rem:gen-stalks} and Lemma~\ref{le:fg-local}, it suffices to check that for every point $z\in Z$ the stalk $\left(\Cox{(X/Z,\Delta)}\right)_z$ is a finitely generated $\mathcal{O}_{Z,z}$-algebra. We denote by $\phi_X \colon X\rightarrow Z$ the contraction morphism. By~\cite[Corollary 1.4.3]{BCHM10}, we can take a small $\mathbb{Q}$-factorialization $\pi\colon Y\rightarrow X$ of $X$. Note that $Y$ is still of Fano type over $Z$ (see, for instance,~\cite[Lemma 3.1]{GOST15}). Let $K_Y+\Delta_Y=\pi^*(K_X+\Delta)$. We denote by $\phi_Y\colon Y\rightarrow Z$ the contraction morphism. By~\cite[Corollary 1.3.2]{BCHM10} and Lemma~\ref{le:fg-local}, we know that $\left(\Cox{(Y/Z)}\right)_z$ is a finitely generated algebra over $\mathcal{O}_{Z,z}$. Since $\pi$ is small, we conclude that $\left(\Cox{(X/Z)}\right)_z$ is a finitely generated algebra over $\mathcal{O}_{Z,z}$. By Proposition~\ref{prop:fg-log-cox}, we deduce that $\left(\Cox{(X/Z,\Delta)}\right)_z$ is a finitely generated algebra over $\mathcal{O}_{Z,z}$. Hence, $\Cox{(X/Z,\Delta)}$ is a sheaf of finitely generated $\mathcal{O}_Z$-algebras. Then, $X\rightarrow Z$ is a relative Mori dream space for the log pair $(X,\Delta)$. \end{proof} \begin{corollary} \label{cor:klt-mds} Let $x \in (X,\Delta)$ be a klt singularity. Then $(X_x,\Delta_x)$ is a Mori dream space at $x\in X$, that is, ${\rm Cox}^{\rm gr}(X,\Delta;x)$ is a gr-local ring, finitely generated as an algebra over $\mathcal{O}_{X,x}$.
\end{corollary} \begin{proof} This follows by setting $X=Z=X_x$ in the statement of Theorem~\ref{thm:relative-fano}. \end{proof} \begin{corollary} \label{cor:klt-mds-glob} Let $(X,\Delta)$ be a klt pair with finitely generated log divisor class group $\operatorname{Cl}(X,\Delta)$. Then the Cox sheaf is a sheaf of gr-local rings. In particular, it is a sheaf of finitely generated $\mathcal{O}_X$-algebras. \end{corollary} \begin{proof} Since $(X,\Delta)$ is klt, Corollary~\ref{cor:klt-mds} tells us that for any $x \in X$, the gr-local Cox ring ${\rm Cox}^{\rm gr}(X,\Delta;x)$ is finitely generated over $\mathcal{O}_{X,x}$. By~\cite[Remark 1.3.1.4]{ADHL15}, we have a surjective homomorphism \[ {\rm Cox}^{\rm gr}(X,\Delta;x)[x_1^{\pm 1},\ldots,x_k^{\pm 1}] \to \left(\Cox{(X,\Delta)}\right)_{x}, \] where $\operatorname{Cl}(X,\Delta,x)=\operatorname{Cl}(X,\Delta)/\operatorname{Cl}_x(X,\Delta)$ and the subgroup $\operatorname{Cl}_x(X,\Delta)$ has $k$ generators. Hence, we have that \[ \left(\Cox{(X,\Delta)}\right)_{x} \] is a gr-local ring, finitely generated as an algebra over $\mathcal{O}_{X,x}$. The last assertion follows from Lemma~\ref{le:fg-local}. \end{proof} In view of~\cite[Proposition 4.3.1.4]{ADHL15}, we have the stronger result that affine klt varieties with finitely generated class group are Mori dream spaces. \begin{corollary} \label{cor:klt-aff-mds} Let $(X,\Delta)$ be an affine klt pair with finitely generated log divisor class group $\operatorname{Cl}(X,\Delta)$. Then the Cox ring ${\rm Cox}^{\rm aff}(X,\Delta)$ is finitely generated. \end{corollary} \begin{proof} This follows directly from Corollary~\ref{cor:klt-mds-glob} and~\cite[Prop. 4.3.1.3]{ADHL15}. \end{proof} \begin{lemma}\label{lem:one-dim-cone} Let $X$ be a $\mathbb{Q}$-factorial $\mathbb{T}$-variety. Let $X\rightarrow Z$ be a projective contraction. Let $(X,\Delta)$ be a log pair which is of Fano type over $Z$. Let $D$ be a $\mathbb{T}$-invariant $\mathbb{Q}$-divisor on $X$.
Assume that for each prime $P\subset X$ we have that \[ {\rm coeff}_P(\Delta) \geq 1-\frac{1}{i_P(D)}, \] where $i_P(D)$ is the Cartier index of $D$ at the generic point of $P$. Then, the spectrum $Y$ of the ring \[ \bigoplus_{m\in \mathbb{Z}} H^0(X/Z,\mathcal{O}_X(mD)) \] is klt type. \end{lemma} \begin{proof} Note that if neither $-D$ nor $D$ is effective over the base $Z$, then there is nothing to prove. Without loss of generality, we may assume that $D$ is effective. We run a $D$-MMP over the base, $X\dashrightarrow X'$. This $D$-MMP terminates since $X$ is of Fano type over the base. Furthermore, $X'$ is also of Fano type over the base. Since $\mathbb{T}$ is connected, this MMP is $\mathbb{T}$-equivariant. The induced divisor $D'$ on $X'$ is still $\mathbb{T}$-invariant. Let $X''$ be the ample model of $D'$ over $Z$. Hence, $X''$ is of Fano type over $Z$, being the image of a Fano type variety over $Z$. Let $X^{(3)}$ be a small $\mathbb{Q}$-factorialization of $X''$. Replacing $X$ by $X^{(3)}$, we may assume that $D$ is ample over $Z$. Let $\phi\colon \tilde{X}\rightarrow X$ be the relative spectrum of the divisorial sheaf $\bigoplus_{m\in \mathbb{Z}} \mathcal{O}_X(mD)$. Hence, we have a projection morphism $r\colon \tilde{X}\rightarrow Y$ which contracts at most one horizontal divisor over $X$. We denote such a divisor (if it exists) by $F$. Note that $(X,\Delta)$ is $\mathbb{Q}$-complemented over $Z$, hence $(X,\Delta_s)$ is $\mathbb{Q}$-complemented over $Z$ as well. Hence, we can find $\Delta'$ so that $(X,\Delta_s+\Delta')$ is klt and log Calabi-Yau over $Z$. Since $\Delta'$ is big over the base, we can find a general ample divisor $A$ and $E\geq 0$ so that $\Delta' \sim_{\mathbb{Q},Z} A+E$. We can assume that $(X,\Delta_s+A+E)$ is a klt pair which is trivial over $Z$. Let $\epsilon>0$ be small enough so that $A-\epsilon D$ is an ample divisor over $Z$.
We can write $A-\epsilon D \sim_{\mathbb{Q},Z} \Gamma \geq 0$ general enough so that the pair $(X,\Delta_s+E+\Gamma)$ remains klt. Note that we have a $\mathbb{Q}$-linear equivalence over the base \[ -(K_X+\Delta_s + E +\Gamma) \sim_{\mathbb{Q},Z} \epsilon D. \] Hence, writing $\phi^*(K_X+\Delta_s+E+\Gamma)=K_{\tilde{X}}+\Gamma_{\tilde{X}}$, the pair $(\tilde{X},\Gamma_{\tilde{X}}+(1-\epsilon)F)$ is klt and $\mathbb{Q}$-trivial over $Y$. Define $\Gamma_Y:=r_*\Gamma_{\tilde{X}}$. Then, we have that $(Y,\Gamma_Y)$ is crepant equivalent to $(\tilde{X},\Gamma_{\tilde{X}}+(1-\epsilon)F)$, so it is a klt pair. \end{proof} \begin{theorem}\label{thm:klt-Cox-ring} Let $\phi\colon X \rightarrow Z$ be a contraction. Assume that $(X,\Delta)$ is a log pair which is of Fano type over $Z$. Let $K\leqslant {\rm WDiv}(X,\Delta)$ be a finitely generated subgroup. Consider the induced homomorphism $\pi\colon K\rightarrow {\rm Cl}(X/Z,\Delta)$ and let $K_0$ be its kernel. Let $\chi\colon K_0\rightarrow \mathbb{C}(X)^*$ be a character. Then, the spectrum of the Cox ring ${\rm Cox}(X/Z,\Delta)_{K,\chi}$ is klt type. \end{theorem} \begin{proof} Observe that replacing $X$ with a small $\mathbb{Q}$-factorialization does not change the Cox ring, so we may assume that $X$ is $\mathbb{Q}$-factorial. Let $D_1,\dots,D_s,D_{s+1},\dots,D_r$ be a finite set of Weil divisors so that $\langle D_1,\dots,D_s \rangle$ maps isomorphically to $\pi(K)_{\rm free}$ and $\langle D_{s+1},\dots,D_r \rangle$ surjects onto $\pi(K)_{\rm tor}$. We denote the spectrum of the Cox ring ${\rm Cox}(X/Z,\Delta)_{K,\chi}$ by $Y'$. Note that we have a natural splitting $\mathbb{T}\cong \mathbb{T}_0 \times A$, where $A$ is a finite abelian group and $\mathbb{T}_0$ is a torus. Let $Y$ be the quotient of $Y'$ by $A$ and $X'$ the quotient of $Y'$ by $\mathbb{T}_0$.
Then, we have a commutative diagram as follows \[ \xymatrix{ Y'\ar[r]^-{/A}\ar[d]_-{/\mathbb{T}_0} & Y\ar[d]^-{/\mathbb{T}_0} \\ X' \ar[r]^-{/A} & X.} \] We denote the finite quasi-\'etale Galois morphism $X'\rightarrow X$ by $p$. We have a natural isomorphism \[ {\rm Cox}(X/Z,\Delta)_{K,\chi} \cong \bigoplus_{(m_1,\dots,m_s)\in \mathbb{Z}^s} H^0(X'/Z, \mathcal{O}_{X'}(m_1p^*(D_1)+ \dots +m_sp^*(D_s))). \] The above isomorphism is induced by the isomorphism \[ p_*\mathcal{O}_{X'} \cong \bigoplus_{D\in \pi(K)_{\rm tor}} \mathcal{O}_X(D). \] The finite morphism $X'\rightarrow X$ ramifies with multiplicity at most $m$ at prime divisors of $X$ with coefficient at least $1-\frac{1}{m}$. Then, the log pull-back of $K_X+\Delta$ is a klt pair $K_{X'}+\Delta'$. Since $(X,\Delta)$ is of Fano type over $Z$, we can find a boundary $B$ on $X$ so that $(X,\Delta+B)$ is klt, $K_X+\Delta+B\sim_{\mathbb{Q},Z} 0$, and $B$ is big over $Z$. We may assume that $B$ contains no component of the branch locus of $p$. Then, the pull-back $B':=p^*(B)$ satisfies that $(X',\Delta'+B')$ is klt, $K_{X'}+\Delta'+B'\sim_{\mathbb{Q},Z} 0$, and $B'$ is big over $Z$. Hence, $(X',\Delta')$ is of Fano type over $Z$. Hence, it suffices to prove the statement for $\pi(K)$ free. Thus, we may replace $X$ with $X'$ and assume $s=r$. We reduce to the case in which each $D_i \sim_{\mathbb{Q},Z} K_X+B_i$, where $(X,B_i)$ is klt and $B_i\geq A$ for some fixed effective divisor $A$ ample over $Z$. Without loss of generality, we may assume that each $D_i$ is effective. Furthermore, we may assume that $(X,B'+D_i)$ is klt, where $B':=\Delta+B$ is big over $Z$. For this purpose, it suffices to replace $D_i$ with $D_i/n_i$ with $n_i$ large enough. Note that $K_X+B'+D_i\sim_{\mathbb{Q},Z} D_i$. Since $B'$ is big over $Z$, we can write $B'\sim_{\mathbb{Q},Z} A+E$ where $A$ is ample over $Z$ and $E$ is effective. By choosing a very general section of $A$, we may replace $B'$ with $A+E$ and set $B_i=D_i+A+E$.
In this step, we reduce to the case in which there is a single divisor $D_1$. For each $D_i$, we can find $k_i>0$ so that $k_iD_i$ is Cartier. Consider the orbifold projective bundle \[ X_1:=\mathbb{P}_X(\mathcal{O}_X(D_1)\oplus \dots \oplus \mathcal{O}_X(D_s)) \] over $X$, and the projective bundle \[ X_2:=\mathbb{P}_X (\mathcal{O}_X(k_1D_1)\oplus \dots \oplus \mathcal{O}_X(k_sD_s)) \] over $X$. Note that we have a finite morphism $X_1\rightarrow X_2$. We denote by $\pi_1\colon X_1\rightarrow X$ and $\pi_2\colon X_2\rightarrow X$ the corresponding morphisms. We claim that $X_2$ is of Fano type over $Z$. Let $H_1,\dots,H_s$ be the coordinate hyperplane sections of $X_2$ over $X$ and $H:=H_1+\dots+H_s$. Let $\Delta_{X_2}=\pi_2^*(B')$. By inversion of adjunction, we conclude that the pair $(X_2,H+\Delta_{X_2})$ is dlt and $K_{X_2}+H+\Delta_{X_2}$ is $\mathbb{Q}$-trivial over $Z$. Furthermore, the boundary $H+\Delta_{X_2}$ is big over $Z$. For $\epsilon>0$ small enough, the divisor $\pi_2^*(A)+\epsilon H$ is ample over $Z$. Let $A_{X_2} \sim_{\mathbb{Q},Z} \pi_2^*(A)+\epsilon H$ be a general effective divisor. Since $\Delta_{X_2}=\pi_2^*(A)+\pi_2^*(E)$ and $\epsilon H \sim_{\mathbb{Q},Z} A_{X_2}-\pi_2^*(A)$, we conclude that \[ K_{X_2}+(1-\epsilon)H +\pi_2^*(E) + A_{X_2} \sim_{\mathbb{Q},Z} K_{X_2}+H+\Delta_{X_2} \sim_{\mathbb{Q},Z} 0, \] the pair $(X_2,(1-\epsilon)H +\pi_2^*(E)+A_{X_2})$ is klt, and its boundary is effective. We conclude that $X_2$ is of Fano type over $Z$. By taking $\epsilon$ small enough, we can make sure that the log pull-back of the above pair to $X_1$ is a klt pair. Thus, we conclude that $X_1$ is of Fano type over $Z$ as well. Note that the section ring of the tautological $\mathbb{Q}$-line bundle of $X_1$ coincides with the multi-section ring generated by the $D_i$'s on $X$. However, the grading given by the ring of sections of the tautological line bundle is coarser. Thus, replacing $X$ with $X_1$, we have reduced the statement to the $\mathbb{T}$-equivariant case with $s=1$. Then, the statement follows from Lemma~\ref{lem:one-dim-cone}.
\end{proof} \begin{corollary}\label{cor:loc-pot-klt} Let $(X,\Delta)$ be a log pair and $\phi\colon X\rightarrow Z$ be a contraction. Assume that $(X,\Delta)$ is of Fano type over $Z$. Let $z\in Z$ be a closed point. Then, the spectrum of the Cox ring ${\rm Cox}^{\rm aff}(X/Z,\Delta)$ (resp. ${\rm Cox}^{\rm gr}(X/Z,\Delta;z)$ or ${\rm Cox}^{\rm loc}(X/Z,\Delta;z)$) is klt type. \end{corollary} \begin{proof} The statement for ${\rm Cox}^{\rm aff}(X/Z,\Delta)$ is a direct consequence of Theorem~\ref{thm:klt-Cox-ring}. It suffices to show the statement for ${\rm Cox}^{\rm gr}(X/Z,\Delta;z)$. Indeed, the spectrum of the localization at the maximal graded ideal will have klt type singularities provided that the spectrum of ${\rm Cox}^{\rm gr}(X/Z,\Delta;z)$ satisfies that property. Let $W'_1,\dots,W'_k$ be Weil divisors on $X_z$ whose classes generate ${\rm Cl}(X_z/Z_z,\Delta_z)$. Up to shrinking $Z$ around $z$, we may find Weil divisors $W_1,\dots,W_k$ on $X$ whose pull-backs to $X_z$ coincide with the $W'_i$'s. Up to shrinking $Z$ around $z$ again, we may assume that the group $K$ generated by the $W_i$'s in ${\rm Cl}(X/Z,\Delta)$ is isomorphic to the group $K'$ generated by the $W'_i$'s in ${\rm Cl}(X_z/Z_z,\Delta_z)$. We can consider the Cox ring construction with respect to the subgroup $K\leqslant {\rm Cl}(X/Z,\Delta)$. We denote this ring by \begin{equation} \label{eq:ring} {\rm Cox}^{\rm aff}(X/Z,\Delta)(W_1,\dots,W_k). \end{equation} By Theorem~\ref{thm:klt-Cox-ring}, we know that the spectrum of the ring~\eqref{eq:ring} is klt type. Note that the spectrum of the relative gr-local Cox ring ${\rm Cox}^{\rm gr}(X/Z,\Delta;z)$ is induced by the base change $Z_z\rightarrow Z$ from the spectrum of the ring~\eqref{eq:ring}. Hence, we conclude that the spectrum of ${\rm Cox}^{\rm gr}(X/Z,\Delta;z)$ is klt type.
\end{proof} \subsection{The local Henselian Cox ring}\label{subsec:local-Hensel-cox} In this subsection, we define the local Henselian Cox ring for germs $(X,\Delta;x)$, where $(X,\Delta)$ is a pair and $x \in X$ is a closed point. More generally, when $\phi: X \to Z $ is a contraction, we define the relative local Henselian Cox ring for closed points $z \in Z$. \begin{definition}{\em Let $(X,\Delta)$ be a log pair and $\phi\colon X\rightarrow Z$ be a contraction. Let $z\in Z$ be a closed point. As usual, we denote by $Z^h$ the spectrum of the Henselization of the local ring of $Z$ at $z$. We obtain a morphism $X^h\rightarrow Z^h$ by base change. The {\em relative gr-Henselian Cox ring} at $z\in Z$ is defined to be \[ {\rm Cox}^{\rm gr\text{-}h}(X/Z,\Delta;z):={\rm Cox}(X^h/Z^h,\Delta^h), \] where the right-hand side is the relative Cox ring of the associated morphism as defined in Definition~\ref{def:rel-cox}. Then the {\em relative local Henselian Cox ring} at $z\in Z$ is defined to be \[ {\rm Cox}^h(X/Z,\Delta;z) := \left({\rm Cox}^{\rm gr\text{-}h}(X/Z,\Delta;z)_{\mathfrak{m}}\right)^h. \] Here, we first localize at the maximal graded ideal $\mathfrak{m}$, generated by the homogeneous regular functions on the Cox ring which correspond to Weil divisors on $X^h$ intersecting $\phi^{-1}(z)$ non-trivially, and then we Henselize. The boundary $\Delta^h$ is the boundary induced by $\Delta$ on $X^h$. We can also define the {\em gr-Henselian} and {\em local Henselian Cox ring}, which come as the special cases where $X\to X$ is the identity and $x\in X$ is a closed point. We denote them respectively by \[ {\rm Cox}^{\rm gr\text{-}h}(X,\Delta;x) \quad \mathrm{and} \quad {\rm Cox}^h(X,\Delta;x). \] } \end{definition} The following example shows that the Henselian local Cox ring often differs from the local Cox ring. The former captures the local topology of the singularity, while the latter does not (see subsection~\ref{subsec:covers-grocal-rings}).
\begin{example}{\em Consider the factorial quasi-cone threefold singularity $X$ defined by \[ \{ (x,y,z,w)\in \mathbb{A}^4 \mid x^2+y^3+z^3w=0\}. \] There is a singular stratum $C$ given by $x=y=z=0$. Around a general point $c$ of $C$, \'etale locally, $X$ is isomorphic to $\mathbb{C}$ times the $D_4$-singularity $\{x^2+y^3+z^3=0\}$. Thus the regional fundamental group at $c$ is the binary dihedral group of order eight (the quaternion group), which has abelianization $(\mathbb{Z}/2\mathbb{Z})^2$. However, the class group ${\rm Cl}(X)$ and thus ${\rm Cl}(X_c)$ is trivial. Since the regional fundamental group of the singularity is not perfect, ${\rm Cl}(X_c^h)$ is non-trivial, so the local Henselian Cox ring at $c$ is non-trivial, while the spectrum of the local Cox ring is $X_c$ itself. Similar examples are given in~\cite[Example 5.5]{BF84}.} \end{example} The following theorem shows that the relative local Henselian Cox ring is well-behaved for Fano type morphisms. \begin{theorem}\label{thm:hen-cox} Let $X\rightarrow Z$ be a Fano type morphism, where $(Z,z)$ is the spectrum of a local ring essentially of finite type over $\mathbb{C}$. Let $X^h\rightarrow Z^h$ be the base change to the Henselization of the local ring. Then, the following statements hold: \begin{enumerate} \item the class group ${\rm Cl}(X^h/Z^h)$ is finitely generated, \item the relative gr-Henselian Cox ring ${\rm Cox}^{\rm gr\text{-}h}(X/Z,\Delta;z)$ is finitely generated over $\mathcal{O}_{Z^h}$, and \item the spectra ${\rm Spec}({\rm Cox}^{\rm gr\text{-}h}(X/Z,\Delta;z))$ and ${\rm Spec}({\rm Cox}^h(X/Z,\Delta;z))$ are klt type. \end{enumerate} \end{theorem} \begin{proof} Note that $(Z,z)$ is a klt type singularity. Indeed, since $X\rightarrow Z$ is a Fano type morphism, we can find a boundary $B$ on $X$ so that $(X,B)$ is klt and $K_X+B\sim_{\mathbb{Q},Z}0$. By the canonical bundle formula, we can find a boundary $B_Z$ on $Z$ so that $(Z,B_Z)$ is klt. We prove the first statement.
The subgroup of ${\rm Cl}(X^h/Z^h)$ generated by the classes of the effective Weil divisors contracted by $X^h\rightarrow Z^h$ and by the class group of the generic fiber is finitely generated. Hence, it suffices to show that ${\rm Cl}(Z^h)$ is finitely generated. Let $Y_0\rightarrow Z$ be a purely log terminal blow-up of the klt type singularity $(Z,z)$. By base change, we obtain a plt blow-up $Y_0^h\rightarrow Z^h$ of the local Henselian klt singularity. Let $E$ be the exceptional divisor. Then, we have that \[ {\rm rank}_{\mathbb{Q}}\, {\rm Cl}(Z^h)_{\mathbb{Q}} \leq {\rm rank}_{\mathbb{Q}}\, {\rm Cl}(E)_{\mathbb{Q}} + 1. \] On the other hand, the torsion subgroup of ${\rm Cl}(Z^h)$ is finite. Indeed, its order is bounded by the order of the regional fundamental group of $Z$ at $z$, which is finite by~\cite[Theorem 2]{Bra20}. We conclude that ${\rm Cl}(Z^h)$ is finitely generated, so ${\rm Cl}(X^h/Z^h)$ is finitely generated as claimed. This proves the first statement. We prove the second statement. Since ${\rm Cl}(X^h/Z^h)$ is finitely generated, we can find a finite set of Weil divisors $W_1,\dots,W_r$ on $X^h$ which generate this group. Recall that $Z^h\rightarrow Z$ is a colimit of \'etale morphisms. Hence, there exist a pointed \'etale cover $(Z',z')\rightarrow (Z,z)$, a base change $X'\rightarrow Z'$, and divisors $W'_1,\dots,W'_r$ on $X'$ which pull back to $W_1,\dots,W_r$, respectively. Since $Z'\rightarrow Z$ is of finite type, we conclude that $Z'$ is essentially of finite type over $\mathbb{C}$. Hence, $X'\rightarrow Z'$ is a projective morphism to the spectrum of a local ring essentially of finite type over $\mathbb{C}$. We can find a projective morphism $X''\rightarrow Z''$ of Fano type over a pointed affine algebraic variety $(Z'',z'')$ so that the base change of $X''\rightarrow Z''$ to the localization of $Z''$ at $z''$ is isomorphic to $X'\rightarrow Z'$.
Let $W^{''}_1,\dots,W^{''}_r$ be Weil divisors on $X''$ which restrict to the divisors $W'_1,\dots,W'_r$. By Theorem~\ref{thm:relative-fano}, we conclude that the multigraded ring \begin{equation}\label{eq:cox-ring-partial} {\rm Cox}(X''/Z'',\Delta)(W^{''}_1,\dots,W^{''}_r) \end{equation} is finitely generated over $\mathbb{C}$. By faithfully flat base change, we conclude that the ring \[ {\rm Cox}^{\rm gr\text{-}h}(X/Z,\Delta;z) \] is finitely generated over $\mathcal{O}_{Z^h}$. This proves the second statement. We prove the third statement. By the proof of Corollary~\ref{cor:loc-pot-klt}, we have that the spectrum of the ring~\eqref{eq:cox-ring-partial} is klt type. Hence, the same statement holds when we take the base change with respect to $Z^h\rightarrow Z''$. We conclude that the spectrum of ${\rm Cox}^{\rm gr\text{-}h}(X/Z,\Delta;z)$ is klt type. Since the spectrum of ${\rm Cox}^{h}(X/Z,\Delta;z)$ is obtained from this ring by localization and Henselization, it is klt type as well. \end{proof} \begin{remark}{\em Finite generation of the class group $\operatorname{Cl}(\mathcal{O}_{X,x}^h)$ of the \'etale local ring holds more generally for rational singularities~\cite[Theorem 6.1]{BF84}. Moreover, it is isomorphic to the class group of the completion, i.e., $\operatorname{Cl}(\mathcal{O}_{X,x}^h) \cong \operatorname{Cl}(\widehat{\mathcal{O}_{X,x}})$, by~\cite[Theorem 6.2]{BF84}. This complements the statement of Theorem~\ref{thm:Cl-gr-Hens}, that for gr-Henselian rational rings, all local class groups are isomorphic.} \end{remark} \begin{remark} {\em We remark at this point that, due to the above considerations, it would also be possible to define Cox rings and iterations of Cox rings for complete local rings. We omit the complete local case in order not to overload the notation. One may feel free to pass to a completion, or to base change to a {\em gr-complete Cox ring}, at any time. Graded rings over complete local rings are also considered in~\cite{Cae83}.
} \end{remark} \section{Boundedness of iteration of Cox rings} \label{sec:bounded} In this section, we aim to prove that the iteration of the Cox ring of a relatively log Fano variety is bounded in terms of the dimension. In subsection~\ref{subsec:iteration-cox-ring}, we will define the iteration of Cox rings. We prove that for a Fano type morphism, the iteration stops after finitely many steps. In particular, the iteration stabilizes for klt singularities. In subsection~\ref{subsec:finite-rel-fund}, we prove the Jordan property for the relative regional fundamental group of a relative Fano type variety. Finally, in subsection~\ref{subsec:bounded-iteration}, we use the Jordan property to prove the boundedness of iteration of Cox rings. This means that there exists an upper bound for the number of iterations which only depends on the dimension. \subsection{Iteration of Cox rings for relative Mori dream spaces} \label{subsec:iteration-cox-ring} In this subsection, we define the iteration of Cox rings and generalize some results from~\cite{Bra19} to the case of relative Mori dream spaces. The setting is the following. Let $(X,\Delta)$ be a log pair and $\phi \colon X \to Z$ be a contraction such that $(X,\Delta)$ is a relative Mori dream space over $Z$. Here, $Z$ will either be affine, the spectrum of a local ring essentially of finite type, or the Henselization of such a ring. We denote by $\mathbb{T}_X$ the characteristic quasi-torus of $X$ over $Z$, which is a direct product of a torus $\mathbb{T}_0$ and a finite abelian group $A$. The next crucial statement is a generalization of~\cite[Lemma 1]{Bra19}: \begin{lemma} \label{le:CoxCox} Let $\phi\colon X \to Z$ be a contraction and let $(X,\Delta)$ be a log pair. Assume that $(X,\Delta)$ is a relative Mori dream space over $Z$.
Denote by $\overline{X}:=\operatorname{Spec} {\rm Cox}(X/Z,\Delta)_{N,\chi}$ the characteristic space of the relative log Cox ring with respect to $N \subseteq \operatorname{WDiv}(X,\Delta)$ and $\chi$. Denote by $X_1$ the finite Galois cover of $X$ corresponding to the abelian group $A$, by $\Delta_1$ the log-pullback of $\Delta$ to $X_1$, and by $Y$ the quotient of $\overline{X}$ by $A$. Then, the following statements hold: \begin{enumerate} \item $Y$ is $\mathbb{Q}$-factorial over $Z$ and there exists a boundary $\Delta_Y$ on $Y$, such that the aff-contraction $Y \to Z$ is a relative Mori dream space for $(Y,\Delta_Y)$ with characteristic quasi-torus $A$. In particular, there are $N_Y \subseteq \operatorname{WDiv}(Y,\Delta)$ and $\chi_Y$, such that \[ {\rm Cox}(X/Z,\Delta)_{N,\chi} \cong {\rm Cox}(Y/Z,\Delta_Y)_{N_Y,\chi_Y}. \] \item There exists a boundary $\overline{\Delta}$ on $\overline{X}$, so that $(X_1,\Delta_1)$ is a relative Mori dream space over $Z$ if and only if $(\overline{X},\overline{\Delta})$ is a relative Mori dream space over $Z$. If this is the case, then $\mathbb{T}_{X_1} \cong \mathbb{T}_{\overline{X}} \times \mathbb{T}_0$ and there exist $N_{X_1}$, $\chi_{X_1}$, $N_{\overline{X}}$, and $\chi_{\overline{X}}$, such that \[ {\rm Cox}(X_1/Z,\Delta_1)_{N_{X_1}, \chi_{X_1}} \cong {\rm Cox}(\overline{X}/Z,\overline{\Delta})_{N_{\overline{X}}, \chi_{\overline{X}}}. \] \end{enumerate} In particular, if {\rm (2)} holds, we have a commutative diagram, where dashed arrows denote good quasi-torus quotients of big open subsets \[ \xymatrix@R=30pt@C=30pt{ \overline{X}_1=\overline{\overline{X}} \ar@{-->}[dr]^{/\mathbb{T}_{\overline{X}}} \ar@{-->}[ddr]_{/\mathbb{T}_{X_1}}\\ & \overline{X}=\overline{Y} \ar[r]^{/\mathbb{T}_Y} \ar@{-->}[d]^{/\mathbb{T}_{0}} \ar@{-->}[dr]^{/\mathbb{T}_X}& Y\ar@{-->}[d] \\ &X_1 \ar[dr] \ar[r] & X \ar[d] \\ & & Z. } \] \end{lemma} \begin{corollary} \label{cor:CoxCoxFano} We work under the assumptions of Lemma~\ref{le:CoxCox}.
Assume that $(X,\Delta)$ is of Fano type over $Z$. Then: \begin{enumerate} \item $(X_1,\Delta_1)$ is of Fano type over $Z$. \item $\overline{X}$ has Gorenstein canonical singularities. \item There is a boundary $\overline{\Delta}$ on $\overline{X}$, such that $(\overline{X},\overline{\Delta})$ is a relative Mori dream space over $Z$ and its Cox ring coincides with the Cox ring of $(X_1,\Delta_1)$. \end{enumerate} \end{corollary} \begin{proof} The first item follows from the proof of Theorem~\ref{thm:klt-Cox-ring}. The second item follows by the same considerations as in the proof of~\cite[Theorem 1]{Bra19}, with the following two differences. Firstly, $Y$ is in general only $\mathbb{Q}$-factorial over $Z$, so we only get Gorensteinness locally. Secondly, Cox rings are not unique but involve a choice of subgroup $N \subseteq \operatorname{WDiv}$ and $\chi$. Thus, for some index one cover $\tilde{Y} \to Y$ of $K_Y$ and some choice of $N_Y \subseteq \operatorname{WDiv}(Y,\Delta)$ and $\chi_Y$, the Cox construction $\overline{Y} \to Y$ factors through $\tilde{Y} \to Y$. Indeed, the index one cover is cyclic quasi-\'etale and thus a quotient presentation in the sense of~\cite[Sec 4.2.1]{ADHL15}. Thus, $\overline{Y}$ is Gorenstein. Since it is klt type by Theorem~\ref{thm:klt-Cox-ring}, it is canonical. The third item follows from Lemma~\ref{le:CoxCox}, {\rm (2)} and the fact that $(X_1,\Delta_1)$, being of Fano type over $Z$, is a relative Mori dream space over $Z$. \end{proof} \begin{remark} {\em Note that in order to ensure that $\overline{X}$ is Mori Dream, in general it does not suffice that it is Gorenstein canonical. The essential property is that $(X_1,\Delta_1)$ is of Fano type over the base. } \end{remark} \begin{proof}[Proof of Lemma~\ref{le:CoxCox}] We start by defining the boundaries on $Y$ and $\overline{X}$.
Since $\overline{X} \to X_1$ and $Y \to X$ are locally trivial torus bundles in codimension one, we can uniquely pull back Weil divisors by first restricting to the smooth locus, pulling back via the usual pullback of Cartier divisors, and finally taking the closure (see, e.g.,~\cite[Rem 1.3.4.1]{ADHL15}). Hence, we can define $\Delta_Y$ and $\overline{\Delta}$ to be the pullbacks of $\Delta$ and $\Delta_1$, respectively. Moreover, the divisor $\overline{\Delta}$ is the unique divisor so that $K_{\overline{X}}+\overline{\Delta}$ is the log pull-back of $K_Y+\Delta_Y$. Now, we argue that ${\rm Cox}(X/Z,\Delta)_{N,\chi}$ together with the coarsened $A$-grading is the Cox ring of $(Y,\Delta_Y)$ over $Z$. First, since ${\rm Cox}(X/Z,\Delta)_{N,\chi}$ is the Cox ring of $(X,\Delta)$ over $Z$, it is factorially $\operatorname{Cl}(X/Z,\Delta)$-graded by~\cite[Theorem 1.5.3.7]{ADHL15}. Thus, by~\cite[Theorem 1.5]{Bech12}, it is also factorially $A$-graded. Since $\overline{X} \to X$ is the characteristic space of $(X,\Delta)$ over $Z$, the characteristic quasi-torus $\mathbb{T}_X$ acts log-strongly stably on $(\overline{X},\overline{\Delta})$, thus the subgroup $A$ acts log-strongly stably as well. Altogether, by Theorem~1.6.4.3 and Corollary~1.6.4.4 of~\cite{ADHL15}, we get that $\operatorname{Cl}(Y/Z,\Delta_Y) \cong A$ and ${\rm Cox}(X/Z,\Delta)_{N,\chi}$ is a Cox ring for $(Y,\Delta_Y)$ over $Z$. The choice of $N_Y \subseteq \operatorname{WDiv}(Y,\Delta)$ and $\chi_Y$ is as follows: we can identify $N \subseteq \operatorname{WDiv}(X,\Delta)$ with a subgroup of $\operatorname{WDiv}^{\mathbb{T}_0}(Y,\Delta_Y)$. For $N_Y$, we take the subgroup mapping to the torsion part of $\operatorname{Cl}(X/Z,\Delta)$, while $\chi_Y$ is the restriction of $\chi$ to this subgroup concatenated with the inclusion $\mathbb{C}(X,\Delta)^* \hookrightarrow \mathbb{C}(Y,\Delta_Y)^*$. This proves the first item of the lemma. We prove the second item.
We already defined the boundary $\overline{\Delta}$ on $\overline{X}$. First, assume that $(\overline{X},\overline{\Delta})$ is a relative Mori dream space over $Z$. Then, for a choice $N_{\overline{X}} \subseteq \operatorname{WDiv}(\overline{X},\overline{\Delta})$, we can assume that $N_{\overline{X}}$ is a subgroup of $\operatorname{WDiv}^{\mathbb{T}_0}(\overline{X},\overline{\Delta})$. Moreover, it is the direct product of a subgroup mapping isomorphically to the free part of $\operatorname{Cl}(\overline{X}/Z,\overline{\Delta})$ and a subgroup mapping to its torsion part. Second, for a choice of $\chi_{\overline{X}}$, we can assume that $\chi_{\overline{X}}$ maps to $\mathbb{C}(X_1,\Delta_1)^* \subseteq \mathbb{C}(\overline{X},\overline{\Delta})^*$. Then, the Cox ring ${\rm Cox}(\overline{X}/Z,\overline{\Delta})_{N_{\overline{X}}, \chi_{\overline{X}}}$ is factorially $\operatorname{Cl}(\overline{X}/Z,\overline{\Delta})$-graded. Invoking~\cite[Theorem 1.5]{Bech12} as above, we see that it is also factorially $\operatorname{Cl}(\overline{X}/Z,\overline{\Delta}) \times \mathbb{Z}^{\dim(\mathbb{T}_0)}$-graded. Moreover, the action of $\mathbb{T}_{\overline{X},0} \times \mathbb{T}_0$ on $\overline{\overline{X}}$ is strongly stable, since the actions of $\mathbb{T}_{\overline{X},0}$ and $\mathbb{T}_0$ on $\overline{\overline{X}}$ and $\overline{X}$, respectively, are so. Thus ${\rm Cox}(\overline{X}/Z,\overline{\Delta})_{N_{\overline{X}}, \chi_{\overline{X}}}$ is indeed a Cox ring for $(X_1,\Delta_1)$ over $Z$. In particular, \[ \operatorname{Cl}(X_1/Z,\Delta_1) \cong \operatorname{Cl}(\overline{X}/Z,\overline{\Delta}) \times \mathbb{Z}^{\dim(\mathbb{T}_0)}.
\] The choice of $N_{X_1} \subseteq \operatorname{WDiv}(X_1,\Delta_1)$ and $\chi_{X_1}$ is as follows: for $N_{X_1}$, take the direct product of $N_{\overline{X}}$, viewed as a subgroup of $\operatorname{WDiv}(X_1,\Delta_1)$, and an arbitrary subgroup mapping isomorphically to the $\mathbb{Z}^{\dim(\mathbb{T}_0)}$-part of $\operatorname{Cl}(X_1/Z,\Delta_1)$. Hence, we can identify the kernel of $N_{X_1} \to \operatorname{Cl}(X_1/Z,\Delta_1)$ with the kernel of $N_{\overline{X}} \to \operatorname{Cl}(\overline{X}/Z,\overline{\Delta})$. Thus, $\chi_{\overline{X}}$ from above can be taken to define $\chi_{X_1}$. The argument in the other direction, i.e., when $(X_1,\Delta_1)$ is a Mori dream space over $Z$, is analogous to the proof of the first item. This concludes the proof. \end{proof} \begin{definition} \label{def:cox-iteration} {\em Let $\phi \colon X \to Z$ be a contraction (or an aff-contraction) and $(X,\Delta)$ a relative Mori dream space over $Z$. We denote $\mathbb{T}^1:=\mathbb{T}_X$, with torus part $\mathbb{T}^1_0$ and finite abelian part $A^1$. We define \[ {\rm Cox}^{(1)} (X/Z,\Delta):={\rm Cox} (X/Z,\Delta), \text{ } \overline{X}_1:=\overline{X}=\operatorname{Spec} {\rm Cox} (X/Z,\Delta), \text{ and } \overline{\Delta}_1:=\overline{\Delta}. \] We iteratively define ${\rm Cox}^{(i)} (X/Z,\Delta)$ as follows. Assume $(\overline{X}_{i-1},\overline{\Delta}_{i-1})$ is a relative Mori dream space over $Z$. Then, we set \[ {\rm Cox}^{(i)} (X/Z,\Delta):={\rm Cox} (\overline{X}_{i-1}/Z,\overline{\Delta}_{i-1}), \text{ } \overline{X}_{i}:= \operatorname{Spec} {\rm Cox}^{(i)} (X/Z,\Delta), \text{ and } \mathbb{T}^{i}:=\mathbb{T}_{\overline{X}_{i-1}}=\mathbb{T}^{i}_{0} \times A^{i}. \] We let $\overline{\Delta}_{i}$ be the log-pullback of $\overline{\Delta}_{i-1}$.
Then, we call the (possibly infinite) chain \[ \xymatrix@R=30pt@C=30pt{ \cdots \ar@{-->}[r] & (\overline{X}_3,\overline{\Delta}_3)\ar[rrrd] \ar@{-->}[r]^{/\mathbb{T}^3} & (\overline{X}_2,\overline{\Delta}_2)\ar[rrd] \ar@{-->}[r]^{/\mathbb{T}^2} & (\overline{X}_1,\overline{\Delta}_1) \ar[rd] \ar@{-->}[r]^{/\mathbb{T}^1} & (X,\Delta) \ar[d] \\ &&&& Z } \] the {\em iteration of Cox rings of $(X,\Delta)$ over $Z$}. If $\operatorname{Cl}(\overline{X}_{i}/Z,\overline{\Delta}_{i})$ is trivial for some $i\geq 1$, we say that $(X,\Delta)$ has {\em finite iteration of Cox rings} over $Z$. If the iteration stabilizes for some $k$, i.e., the ring is eventually factorial, then we denote by ${\rm Cox}^{\rm it}(X/Z,\Delta)$ the isomorphism class over $Z$ of this ring. The ring ${\rm Cox}^{\rm it}(X/Z,\Delta)$ is called the {\em iteration of Cox rings} or the {\em master Cox ring}. } \end{definition} \begin{remark} {\em In the case that $Z$ is local, essentially of finite type, or Henselian, in the above definition we iterate the gr-local or gr-Henselian Cox rings. In each step, we can also localize (respectively, Henselize) $\overline{X}_{i-1}$ at the unique graded maximal ideal and take the Cox ring of such a spectrum. However, by Lemma~\ref{lem:pic0} and Theorem~\ref{thm:Cl-gr-Hens}, the class groups of these spaces agree. Hence, the iteration defined in such a way is compatible with the iteration defined above. Indeed, the localization (respectively, Henselization) of $\overline{X}_{i}$ will always yield the same spectrum. } \end{remark} \begin{remark} \label{rem:fin-Cox-covers} {\em By Lemma~\ref{le:CoxCox}, $(\overline{X},\overline{\Delta})$ is Mori Dream over $Z$ if and only if $(X_1,\Delta_1)$ is so. Thus, the iteration of Cox rings induces a chain of finite abelian Galois covers $(X_i,\Delta_i) \xrightarrow{/A^i} (X_{i-1},\Delta_{i-1})$, where $\Delta_i$ is the log-pullback of $\Delta_{i-1}$.
In particular, the characteristic quasi-tori satisfy \[ \mathbb{T}_{X_i} \cong \mathbb{T}^{i} \times \mathbb{T}_{X_{i-1}}^0. \] We get the following commutative diagram: \[ \xymatrix@R=25pt@C=20pt{ \ddots \ar@{-->}[dr] \ar@{-->}[dddr] \ar@{-->}[ddd] \\ &(\overline{X}_{2},\overline{\Delta}_{2}) \ar@{-->}[dd] \ar@{-->}[dr] \ar@{-->}[ddr]\\ && (\overline{X}_{1},\overline{\Delta}_{1}) \ar@{-->}[d] \ar@{-->}[dr]\\ \cdots \ar[r] \ar[drrr] &(X_2,\Delta_2) \ar[drr] \ar[r] &(X_1,\Delta_1) \ar[dr] \ar[r] & (X,\Delta) \ar[d] \\ && & Z. } \] In particular, Corollary~\ref{cor:CoxCoxFano} tells us that if $(X,\Delta)$ is of Fano type over $Z$, so is $(X_i,\Delta_i)$ for any $i\geq 1$. Hence, the $i$-th iterated Cox ring and $(\overline{X}_{i},\overline{\Delta}_{i})$ are defined for any $i\geq 1$. The question remains whether the iteration stabilizes and, if so, whether there is a bound on the number of iteration steps. We answer these questions in subsection~\ref{subsec:bounded-iteration}. } \end{remark} We finish the present subsection by showing that the actions of the characteristic quasi-tori can be lifted to the iterated total coordinate spaces. Moreover, they induce an action of a solvable reductive group. This generalizes observations made in~\cite{ABHW18, Bra19}. We start with the following lemma, slightly generalizing~\cite[Theorem 5.1]{AG10}. This lemma covers the lifting of automorphisms to the Cox ring of relative Mori dream spaces which are affine over the base. \begin{lemma} \label{le:aff-lift-aut} Let $(X,\Delta)$ be a relative Mori dream space, affine over $Z$, and $\operatorname{Aut}_Z(X)$ the automorphism group of $X$ over $Z$. Denote by $\operatorname{Aut}^\mathbb{T}(\overline{X})$ the normalizer of the characteristic quasi-torus $\mathbb{T}$ in the automorphism group $\operatorname{Aut}_Z(\overline{X})$ of $\overline{X}:= \operatorname{Spec} {\rm Cox}(X/Z,\Delta)_{N,\chi}$.
Then there is a short exact sequence: \[ \xymatrix{ 1 \ar[r] & \mathbb{T} \ar[r] & \operatorname{Aut}^\mathbb{T}(\overline{X}) \ar[r] & \operatorname{Aut}_Z(X) \ar[r] & 1. } \] The above short exact sequence is called a lifting of ${\rm Aut}_Z(X)$ to ${\rm Aut}_Z(\overline{X})$. \end{lemma} \begin{proof} The proof is analogous to the one of~\cite[Theorem 5.1]{AG10}. We need to argue the surjectivity of the map $\operatorname{Aut}^\mathbb{T}(\overline{X}) \to \operatorname{Aut}_Z(X)$, since we have non-isomorphic Cox rings depending on the choice of $N \subseteq \operatorname{WDiv}(X,\Delta)$ and $\chi$. An automorphism $\psi \in \operatorname{Aut}_Z(X)$ induces an automorphism of $\operatorname{WDiv}(X,\Delta)$, which we again denote by $\psi$. Observe that $\psi$ maps the kernel of the surjective map $ N \to \operatorname{Cl}(X/Z,\Delta)$ to the kernel of the surjective map $ \varphi \colon \psi(N) \to \operatorname{Cl}(X/Z,\Delta)$ and induces a character $\psi^*(\chi)\colon \ker(\varphi) \to \mathbb{C}(X)^*$. Hence, ${\rm Cox}(X/Z,\Delta)_{N,\chi}$ and ${\rm Cox}(X/Z,\Delta)_{\psi(N),\psi^*(\chi)}$ are isomorphic. Moreover, fixing an isomorphism $\tau$ between them, $\psi^* \circ \tau$ is an element of $\operatorname{Aut}^\mathbb{T}(\overline{X})$ mapping to $\psi$. This finishes the proof of the lemma. \end{proof} The following proposition explains how to lift automorphisms to the Cox ring of a relative Mori dream space that is projective over the base. Here, we have to distinguish between automorphisms of the total coordinate space $\overline{X}$ and of its big open subset $\hat{X}$. Following~\cite[Sec 4.2.4]{ADHL15}, we denote by ${\rm Bir}_{2,Z}(X)$ the \emph{weak automorphisms} of $X$ over $Z$, namely birational maps $X \dashrightarrow X$ which are regular isomorphisms in codimension one over $Z$. \begin{proposition} \label{prop:proj-lift-aut} Let $(X,\Delta)$ be a relative Mori dream space, projective over $Z$. Let $\operatorname{Aut}_Z(X)$ be the automorphism group of $X$ over $Z$.
Then, the following statements hold: \begin{enumerate} \item $\operatorname{Aut}_Z(X)$ is a linear algebraic group. \item There is a divisor $L$ on $X$, ample over $Z$, with relative section ring $R_L$ and spectrum $\tilde{X}:=\operatorname{Spec} R_L$, such that there is a short exact sequence \[ \xymatrix{ 1 \ar[r] & \mathbb{C}^* \ar[r] & \operatorname{Aut}^{\mathbb{C}^*}(\tilde{X}) \ar[r] & \operatorname{Aut}_Z(X) \ar[r] & 1. } \] \item If $(X,\Delta)$ is of Fano type over $Z$, then for $L:=-(K_X+\Delta)$, the spectrum $\tilde{X}$ of the section ring $R_L$ admits an action of the group $\operatorname{Aut}_Z(X,\Delta)$ of $Z$-automorphisms leaving $\Delta$ invariant, realized as a subgroup of $\operatorname{Aut}_Z(\tilde{X})$. In particular, there is a split short exact sequence \[ \xymatrix{ 1 \ar[r] & \mathbb{C}^* \ar[r] & \operatorname{Aut}^{\mathbb{C}^*}(\tilde{X}) \ar[r] & \operatorname{Aut}_Z(X,\Delta) \ar[r] & 1. } \] \item There is a commutative diagram with exact sequences as rows and vertical inclusions of finite index: \[ \xymatrix{ 1 \ar[r] & \mathbb{T} \ar[r] & \operatorname{Aut}^\mathbb{T}(\overline{X}) \ar[r] & {\rm Bir}_{2,Z}(X) \ar[r] & 1 \\ 1 \ar[r] & \mathbb{T} \ar[r] \ar@{=}[u] & \operatorname{Aut}^\mathbb{T}(\hat{X}) \ar[r] \ar@{^{(}->}[u] & \operatorname{Aut}_Z(X) \ar@{^{(}->}[u] \ar[r] & 1. } \] \end{enumerate} \end{proposition} \begin{proof} Since $(X,\Delta)$ is relative Mori Dream over $Z$, the cone of nef divisors relative to $Z$ is rational polyhedral. Its rays are permuted by the group of components of $\operatorname{Aut}_Z(X)$. Taking the sum of the ray generators, we get a relatively ample $\operatorname{Aut}_Z(X)$-invariant class $[L]$. By~\cite[Theorem 2.16]{Bri19}, $\operatorname{Aut}_Z(X)$ is a linear algebraic group. Now, $\operatorname{Aut}_Z(X)$ may not stabilize the divisor $L$, but only its class. We argue as in the proof of Lemma~\ref{le:aff-lift-aut}.
Since $L$ is ample, its relative stable base locus is empty (see, e.g.,~\cite[Sec 2.3]{Bri19}). Thus, we obtain an isomorphism $\operatorname{Aut}^{\mathbb{C}^*}(L) \cong \operatorname{Aut}^{\mathbb{C}^*}(\tilde{X})$. Hence, we get the short exact sequence of the second item. The statement about $L:=-(K_X+\Delta)$ in the Fano type case follows since $K_X$ is invariant under the action of $\operatorname{Aut}_Z(X)$. The proof of the last item is analogous to the one of~\cite[Theorem 4.2.4.1]{ADHL15}. \end{proof} In what follows, we aim to lift the whole characteristic quasi-torus action to the iterated Cox rings. In this way, we will produce an action of a solvable reductive group. The derived normal series of this solvable reductive group reflects the iteration of Cox rings. We denote the $k$-th derived subgroup of a group $G$ by $\mathcal{D}^G_k:=[\mathcal{D}^G_{k-1},\mathcal{D}^G_{k-1}]$, where $\mathcal{D}^G_0:=G$. An important property of Cox rings is that their spectra dominate all \emph{quotient presentations} in the sense of~\cite[Sec 4.2.1]{ADHL15}, that is, all good quasi-torus quotients $Y \to X = Y / H$ such that the action of $H$ is strongly stable. In analogy to the notion of quasi-\'etale covers, we call them \emph{abelian quasi-torsors} in the following. In the classical setting, where $X$ is assumed to have only constant invertible global functions, the quasi-torsors are assumed to have only constant invertible $H$-homogeneous functions. Thus, in our setting, we have two major differences: firstly, a priori, we have invertible non-constant functions on $X$, which means that we have non-isomorphic Cox rings depending on the choice of $N \subseteq \operatorname{WDiv}(X)$ and the character $\chi$. This means that an abelian quasi-torsor $Y \to X$ is dominated by $\overline{X}_{N,\chi} \to X$ for some choice of $N$ and $\chi$.
Secondly, what we have to impose is not that invertible $H$-homogeneous functions are constant, but that they descend to $X$. This property is fulfilled, e.g., if there is at least one maximal homogeneous ideal in $\mathcal{O}(Y)$. This is the case in all situations relevant for us, e.g., finite coverings of a relative Fano type pair $(X,\Delta)$ or (iterated) Cox rings. However, even if $X$ has only constant invertible functions, there may be non-constant non-homogeneous invertible functions in the Cox ring (see, e.g.,~\cite[Ex 1.4.4.2]{ADHL15}). The precise definition of an abelian quasi-torsor in our setting is the following. \begin{definition} \label{def:quot-pres}{\em Let $(X,\Delta)$ be a relative Mori dream space over $Z$. Let $Y \to X= Y/H$ be a good quotient by a quasi-torus $H$. We call $\varphi\colon Y \to X$ an {\em abelian quasi-torsor} if the following conditions are satisfied. \begin{enumerate} \item Let $H_0$ be the identity component and $A$ be the group of components of $H$. Then the finite abelian cover $Y':=Y/H_0 \xrightarrow{/A} X$ is log quasi-\'etale over $(X,\Delta)$ with log-pullback $\Delta_{Y'}$ of $\Delta$. \item There are big open subsets $U_{Y'} \subseteq Y'$ and $U_Y :=\varphi^{-1}(U_{Y'}) \subseteq Y$ such that the restriction \[ \left. \varphi \right|_{U_Y}\colon U_Y \xrightarrow{/H_0} U_{Y'} \] is an \'etale locally trivial $H_0$-bundle. In particular, the action of $H_0$ on $Y$ is strongly stable. \item Global invertible homogeneous functions on $Y$ descend to $X$ via the induced homomorphism $\mathcal{O}(Y)^H \cong \mathcal{O}(X) \hookrightarrow \mathcal{O}(Y)$. \end{enumerate} In the case that $H$ is a torus, we may say that it is a {\em torus quasi-torsor}. Whenever the quasi-torus $H$ is clear from the context, we may just say that $Y\rightarrow X$ is a {\em quasi-torsor}. Let $(X,\Delta;x)$ be a klt singularity.
We say that $(Y,y)$ is a {\em pointed abelian quasi-torsor} of $(X,\Delta;x)$ if there exists an abelian quasi-torsor $Y\rightarrow X$ so that the image of $y$ in $X$ equals $x$. To shorten notation, we may say that $Y\rightarrow X$ is an {\em abelian pointed cover}. Observe that if $Y\rightarrow X$ is an abelian pointed cover, then the corresponding finite morphism $Y'\rightarrow X$ is a finite pointed cover. } \end{definition} \begin{proposition} \label{prop:quot-pres} Let $(X,\Delta)$ be a relative Mori dream space over $Z$. Let $Y \to X= Y/H$ be a quasi-torsor. Then, there exist \begin{itemize} \item a monomorphism $\mathbb{X}(H)\rightarrow {\rm Cl}(X/Z,\Delta)$, \item a subgroup $N_Y \leqslant N\leqslant {\rm WDiv}(X)$, \item surjections $\varphi \colon N \to \operatorname{Cl}(X/Z,\Delta)$ and $\left. \varphi \right|_{N_Y} \colon N_Y \to \mathbb{X}(H)$, and \item a character $\chi\colon \ker(\varphi) \to \mathbb{C}(X)^*$ \end{itemize} such that the following statements are satisfied: \begin{enumerate} \item $Y \cong \operatorname{Spec}_X \mathcal{S}^{(N_Y)} / \mathcal{I}$, where $\mathcal{I}$ is the ideal subsheaf of $\mathcal{S}^{(N_Y)}$ locally generated by the sections $1-\chi(E)$, where $E$ runs through $\ker(\left. \varphi \right|_{N_Y})$. \item There is a commutative diagram \[ \xymatrix{ \hat{X}_{N,\chi} \ar[rr]^{/ H'} \ar[dr]^{/\mathbb{T}_X} & & Y \ar[dl]^{/H} \\ & X } \] where the quasi-torus $H'$ is defined by the exact sequence $1\rightarrow H'\rightarrow \mathbb{T}_X \rightarrow H \rightarrow 1$. \end{enumerate} \end{proposition} \begin{proof} The proof is analogous to the one of~\cite[Theorem 4.2.1.4]{ADHL15}, with the two differences mentioned above. In particular, invoking~\cite[Prop. 1.6.4.5]{ADHL15} and the notation therein, we get the following. Let $M:=\mathbb{X}(H)$ and let $E(Y)$ be the multiplicative group of non-zero $M$-homogeneous rational functions on $Y$, with $E(Y)_{w}$ those of degree $w \in M$.
Then we have the following diagram of group homomorphisms \[ \xymatrix{ E(Y) \ar[rr]^{f \mapsto \operatorname{div}(f)} && \operatorname{WDiv}(Y)^H \ar@/_/[rr]_{q_*} && \operatorname{WDiv}(X) \ar@/_/[ll]_{q^*}. } \] As $Y \to X$ is an \'etale locally trivial $H$-bundle in codimension one, the homomorphisms $q_*$ and $q^*$ are inverse to each other. As in~\cite[Prop. 1.6.4.5]{ADHL15}, but using item (3) of Definition~\ref{def:quot-pres}, the homomorphism $E(Y) \to \operatorname{WDiv}(X)$ induces a monomorphism $M \to \operatorname{Cl}(X/Z,\Delta)$. Thus, we can choose a subgroup $N_Y$ of $ \operatorname{WDiv}(X) \cong \operatorname{WDiv}(Y)^H $ surjecting onto $M$ and enlarge it to a subgroup $N \supseteq N_Y$ such that $\varphi \colon N \to \operatorname{Cl}(X/Z,\Delta)$ is onto. Choosing a character $\chi \colon \ker(\varphi) \to \mathbb{C}(X)^*$ yields the desired statements together with the rest of the proof of~\cite[Theorem 4.2.1.4]{ADHL15}. \end{proof} \begin{corollary} \label{cor:Cox-it-solv-cover} Let $(X,\Delta)$ be a relative Mori dream space over $Z$. Assume the $k$-th iterated Cox ring ${\rm Cox}^{(k)} (X/Z,\Delta)$ exists and is of finite type over $Z$. Then $\overline{X}_k$ admits an action of a solvable reductive group $S$ with maximal torus $\mathbb{T}:=\mathbb{T}_{X_{k}}$ and an $S$-invariant big open subset $\hat{X}^k$, such that: \begin{enumerate} \item $X_k \cong \hat{X}^k / \mathbb{T}$ and $X \cong \hat{X}^k / S $. \item $\hat{X}^j \cong \hat{X}^k / \mathcal{D}_j^S$ and $\mathbb{T}^{j}\cong \mathcal{D}_{j-1}^S/\mathcal{D}_j^S$ for $j \leq k$. \item For the finite solvable group $S_{\rm fin}:=S/\mathbb{T}$ and the finite covers $X_j$, the assertions hold analogously, i.e., \[ X_j\cong X_k / \mathcal{D}_j^{S_{\rm fin}} \quad \text{and} \quad A^{j}\cong \mathcal{D}_{j-1}^{S_{\rm fin}}/\mathcal{D}_j^{S_{\rm fin}}.
\] \end{enumerate} \end{corollary} \begin{proof} The argument is analogous to the proof of~\cite[Theorem 1.6]{ABHW18} in the case that $X$ is affine over $Z$, where we use Proposition~\ref{prop:quot-pres} instead of~\cite[Prop. 3.5]{AG10}. If $X$ is projective over $Z$, then we choose a divisor $L$ on $X$ ample over $Z$. By the same argument as in the proof of Lemma~\ref{le:CoxCox} (1), we have ${\rm Cox}(X/Z,\Delta)\cong {\rm Cox}(\tilde{X}/Z,\tilde{\Delta})$ and thus we can reduce to the relatively affine case. \end{proof} \begin{remark} {\em In Definition~\ref{def:quot-pres} (2), it is essential that not only $U_{Y'} \subseteq Y'$ but also $U_Y \subseteq Y$ is a big open subset. Otherwise, $\varphi \colon Y \to Y/H$ may contract divisors. In particular, the existence of the monomorphism $\mathbb{X}(H) \to \operatorname{Cl}(X)$ from Proposition~\ref{prop:quot-pres} would not hold true in this more general setting. As an example, consider the blow-up $Y^n:={\rm Bl}_0(\mathbb{A}^n) \to \mathbb{A}^n$ of $\mathbb{A}^n$ at the origin, which has relative Cox ring ${\rm Cox}(Y^n/\mathbb{A}^n) \cong \mathbb{C}[x_1,\ldots,x_{n+1}]$. Then the induced $\mathbb{C}^*$-quotient $\mathbb{A}^{n+1} \to \mathbb{A}^n$, given by the weights $(1,\ldots,1,-1)$, is not a quotient presentation. Indeed, the divisor $\{x_{n+1}=0\}$ maps to the origin. Observe that we have an infinite sequence of $\mathbb{C}^*$-quotients $\mathbb{A}^1 \leftarrow \mathbb{A}^2 \leftarrow \mathbb{A}^3 \leftarrow \dots$. This sequence does not contradict Theorem~\ref{introthm-6-univ-scf-cover}, because these covers are not pointed abelian covers in the sense of Definition~\ref{def:quot-pres}. } \end{remark} \subsection{Regional fundamental group of a relative Fano type variety}\label{subsec:finite-rel-fund} In this subsection, we prove that the regional fundamental group of a relative Fano type variety is finite and satisfies the Jordan property.
\begin{definition} {\em Let $\phi \colon X \rightarrow Z$ be a projective contraction. Let $(X,\Delta)$ be a log pair. Let $z\in Z$ be a closed point. We define the fundamental group \[ \pi_1^{\rm reg}(X/Z,\Delta;z) \] to be the inverse limit of the fundamental groups \[ \pi_1^{\rm reg}(\phi^{-1}(U), \Delta_U), \] where the limit runs through all the open sets $U$ of $Z$ which contain $z$. In the above, by abuse of notation, we let $\Delta_U$ be the restriction of $\Delta$ to $\phi^{-1}(U)^{\rm reg}$. } \end{definition} \begin{theorem}\label{thm:rel-finiteness} Let $n$ be a positive integer. There exists a constant $c(n)$, only depending on $n$, satisfying the following. Let $\phi\colon X \rightarrow Z$ be a projective contraction so that $X$ has dimension $n$. Let $(X,\Delta)$ be a log pair of Fano type over $Z$. Let $z\in Z$ be a closed point. Then, the fundamental group $\pi_1^{\rm reg}(X/Z,\Delta;z)$ is finite. Furthermore, there exists a normal abelian subgroup $A\leqslant \pi_1^{\rm reg}(X/Z,\Delta;z)$ of rank at most $n$ and index at most $c(n)$. \end{theorem} \begin{proof} The divisor $-(K_X+\Delta)$ is nef and big over $Z$. Hence, it is semiample and big over $Z$, given that $X$ is a relative Mori dream space over $Z$. Let $X'$ be an ample model of $-(K_X+\Delta)$ over $Z$. Let $\Delta'$ be the push-forward of $\Delta$ to $X'$. Then, we have that $(X',\Delta')$ has klt singularities and $-(K_{X'}+\Delta')$ is ample over $Z$. We will prove the statement for $\pi_1^{\rm reg}(X'/Z,\Delta';z)$. Let $Y$ be the orbifold cone with respect to the $\mathbb{Q}$-polarization $-(K_{X'}+\Delta')$ over $Z$, i.e., \[ Y:={\rm Spec} \left( \bigoplus_{m\geq 0} H^0\left( X'/Z, \mathcal{O}_{X'}(-m(K_{X'}+\Delta')) \right) \right). \] Note that $\dim(Y)=\dim(X)+1=n+1$. We have a rational map $\pi\colon Y\dashrightarrow X'$, which is defined outside a codimension two subset of $Y$. Let $\Delta_Y$ be the effective divisor so that $\pi^*(K_{X'}+\Delta')=K_Y+\Delta_Y$.
Then, the pair $(Y,\Delta_Y)$ is klt. The blow-up of $Y$ at the vertex of the torus action is a variety $\tilde{Y}$ which admits a good quotient to $X'$. The exceptional locus of $\tilde{Y}\rightarrow Y$ is isomorphic to $X'$ and its image in $Y$ is isomorphic to $Z$. Hence, under this isomorphism, we can consider an embedding $Z\hookrightarrow Y$. So, we can consider $z\in Y$. By~\cite[Theorem 1]{Bra19}, we know that the regional fundamental group $\pi_1^{\rm reg}(Y,\Delta_Y;z)$ is finite. By~\cite[Theorem 2]{BFMS20}, we know that there exists an abelian normal subgroup $A_Y$ of $\pi_1^{\rm reg}(Y,\Delta_Y;z)$ of rank at most $n+1$ and index at most $c(n+1)$. Here, $c(n+1)$ is a constant which only depends on $n+1$, hence it only depends on $n$. Let $U_Z$ be an arbitrary open neighborhood of $z$ in $Z$. Let $\phi'\colon X'\rightarrow Z$ be the associated projective morphism and define $U_{X'}:={\phi'}^{-1}(U_Z)$. We define $U_Y:=\pi^{-1}(U_{X'})$. For every such $U_Z$, we have a short exact sequence: \[ 1\rightarrow \mathbb{Z}_m \rightarrow \pi_1^{\rm reg}(U_Y,\Delta_{U_Y}) \rightarrow \pi_1^{\rm reg}(U_{X'},\Delta_{U_{X'}}) \rightarrow 1. \] Here, $m$ may depend on the chosen neighborhood. As usual, $\Delta_{U_Y}$ (resp. $\Delta_{U_{X'}}$) is the restriction of $\Delta_Y$ (resp. $\Delta'$) to the open set $U_Y$ (resp. $U_{X'}$). We claim that for a certain neighborhood $U$ of $z$ in $Z$, there is an isomorphism \begin{equation}\label{eq:iso} \pi_1^{\rm reg}(U_Y,\Delta_{U_Y})\cong \pi_1^{\rm reg}(Y,\Delta_Y;z). \end{equation} Let $U_0$ be an open neighborhood of $z$ in $Y$ which computes the regional fundamental group of the pair $(Y,\Delta_Y)$ at $z$, i.e., there is an isomorphism \[ \pi_1^{\rm reg}(Y,\Delta_Y;z) \cong \pi_1^{\rm reg}(U_0,\Delta_{U_0}). \] Let $U_{Z,0}$ be the inverse image of $U_0$ under the embedding $Z\hookrightarrow Y$. We define $U_{X',0}:={\phi'}^{-1}(U_{Z,0})$ and $U_{Y,0}:=\pi^{-1}(U_{X',0})$.
Note that $U_{Y,0}$ is homotopy equivalent to an analytic open subset which is contained in $U_0$. Thus, we conclude that $U_{Y,0}$ satisfies the isomorphism in equation~\eqref{eq:iso}. Hence, we have an exact sequence \[ 1\rightarrow \mathbb{Z}_m \rightarrow \pi_1^{\rm reg}(Y,\Delta_Y;z) \rightarrow \pi_1^{\rm reg}(U_{X',0}, \Delta_{U_{X',0}}) \rightarrow 1. \] Passing to the inverse limit, we have an exact sequence \[ 1\rightarrow \mathbb{Z}_m \rightarrow \pi_1^{\rm reg}(Y,\Delta_Y;z) \rightarrow \pi_1^{\rm reg}(X'/Z,\Delta';z) \rightarrow 1. \] Hence, we conclude that $\pi_1^{\rm reg}(X'/Z,\Delta';z)$ is finite and satisfies the Jordan property of rank $n+1$, i.e., it contains a normal abelian subgroup of rank at most $n+1$ and index at most $c(n)$. We denote this group by $A_{X'}$. We claim that $\pi_1^{\rm reg}(X'/Z,\Delta';z)$ actually satisfies the Jordan property of rank $n$. Let $H'\geq 0$ be an effective divisor which is general in the $\mathbb{Q}$-linear system of $-(K_{X'}+\Delta')$ relative to $Z$. We may assume that all the coefficients of $H'$ are less than one half. Then, the pair $(X',\Delta'+H')$ is klt and $\mathbb{Q}$-trivial over the base. Let $K_Z+H_Z$ be the pair obtained by the canonical bundle formula on $Z$, i.e., we have that \[ K_{X'}+\Delta'+H' \sim_{\mathbb{Q}} {\phi'}^*(K_Z+H_Z). \] Since all the coefficients of $H'$ are less than one half, there is a natural isomorphism \[ \pi_1^{\rm reg}(X'/Z,\Delta'+H';z)\rightarrow \pi_1^{\rm reg}(X'/Z,\Delta';z). \] Hence, the group on the left hand side contains an abelian subgroup of rank at most $n+1$ and index at most $c(n)$. On the other hand, there is an exact sequence \[ \pi_1^{\rm reg}(F,\Delta_F+H_F) \rightarrow \pi_1^{\rm reg}(X'/Z,\Delta'+H';z)\rightarrow \pi_1^{\rm reg}(Z,H_Z;z)\rightarrow 1. \] Let $A_Z$ be the homomorphic image of $A_{X'}$ in the regional fundamental group of $(Z,H_Z)$ at $z$. Let $A_F$ be the kernel of the surjection $A_{X'}\rightarrow A_Z$.
Hence, we have an exact sequence \[ A_F \rightarrow A_{X'}\rightarrow A_Z\rightarrow 1. \] By~\cite[Theorem 2]{BFMS20}, we know that $A_F$ admits a subgroup $H_F$ of index at most $c(f)$ and rank at most $f$, while $A_Z$ admits a subgroup $H_Z$ of index at most $c(z)$ and rank at most $z$. Here, $f$ and $z$ are the dimensions of $F$ and $Z$, respectively. We conclude that $A_{X'}$ admits a subgroup $H_{X'}$ of rank at most $\dim(X')=z+f$ and index at most $c(z)\cdot c(f)$. Note that $c(z)\cdot c(f)$ is bounded by a constant in terms of $\dim(X')$. Finally, we prove that $\pi_1^{\rm reg}(X/Z,\Delta;z)$ satisfies the Jordan property of rank $n$. It suffices to prove that there is a surjection \begin{equation}\label{claim:surj} \pi_1^{\rm reg}(X'/Z,\Delta';z) \rightarrow \pi_1^{\rm reg}(X/Z,\Delta;z). \end{equation} Indeed, let $U$ be an arbitrary open neighborhood of $z$ in $Z$. Let $U_X$ be its pre-image in $X$ and $U_{X'}$ be its pre-image in $X'$. Then, there is a natural surjection \[ \pi_1^{\rm reg}(U_{X'},\Delta_{U_{X'}}) \rightarrow \pi_1^{\rm reg}(U_X,\Delta_{U_X}). \] Indeed, the image of the exceptional locus of $U_{X}\rightarrow U_{X'}$ is a union of closed subsets which are either contained in the singular locus of $U_X$ or codimension two subsets of the smooth locus. Since $\pi_1^{\rm reg}(X'/Z,\Delta';z)$ surjects onto each of the fundamental groups $\pi_1^{\rm reg}(U_{X'},\Delta_{U_{X'}})$, we conclude that for each $U$ there is a surjective homomorphism \[ \pi_1^{\rm reg}(X'/Z,\Delta';z) \rightarrow \pi_1^{\rm reg}(U_X,\Delta_{U_X}). \] Taking the inverse limit, we conclude that the surjection~\eqref{claim:surj} holds. Hence, $\pi_1^{\rm reg}(X/Z,\Delta;z)$ is finite and contains a normal abelian subgroup of rank at most $n$ and index at most $c(n)$. This completes the proof. \end{proof} \begin{corollary} \label{cor:abelianization} Let $(X,\Delta)$ be of relative Fano type over $Z$.
Let $A:= \mathbb{T}_X / \mathbb{T}_X^0 \cong \operatorname{Cl}(X/Z,\Delta;z)_{\rm tor}$ be the group of components of the characteristic quasi-torus of $(X,\Delta)$ over $Z$ at $z$. Then $A$ is the abelianization of $\pi_1^{\rm reg}(X/Z,\Delta;z)$, i.e., \[ A \cong \pi_1^{\rm reg}(X/Z,\Delta;z) / [\pi_1^{\rm reg}(X/Z,\Delta;z),\pi_1^{\rm reg}(X/Z,\Delta;z)], \] where $[\pi_1^{\rm reg}(X/Z,\Delta;z),\pi_1^{\rm reg}(X/Z,\Delta;z)]$ is the commutator subgroup of $\pi_1^{\rm reg}(X/Z,\Delta;z)$. \end{corollary} \begin{proof} By Theorem~\ref{thm:rel-finiteness}, we know that $\pi_1^{\rm reg}(X/Z,\Delta;z)$ is finite and agrees with the \'etale fundamental group of $X_{\rm reg}^h$. Here, $X^h \to Z^h$ is the base change to the Henselization of $Z$ at $z$. Thus, we can assume $Z$ is local Henselian. Then $G:=\pi_1^{\rm reg}(X/Z,\Delta;z)$ induces a finite log quasi-\'etale Galois cover of $(X,\Delta)$, which, by abuse of notation, we denote by \[ (\tilde{X},\tilde{\Delta}) \xrightarrow{/G} (X,\Delta). \] Then $[G,G]$ is a normal subgroup of $G$ and induces an abelian log quasi-\'etale Galois cover \[ (Y,\Delta_Y) \xrightarrow{/(G/[G,G])} (X,\Delta), \] which is a quasi-torsor in the sense of Definition~\ref{def:quot-pres}. Indeed, $G/[G,G]$ is finite and $(Y,\Delta_Y)$ is of Fano type over $Z$. In particular, invertible functions of $Y$ and $X$ are invertible functions of $Z$. Thus, by Proposition~\ref{prop:quot-pres}, $G/[G,G]$ is a subgroup of $\operatorname{Cl}(X/Z,\Delta)$. In particular, it is a subgroup of $A$. But since $A$ induces a log quasi-\'etale abelian Galois cover of $(X,\Delta)$ as well, we have $A=G/[G,G]$. Otherwise, there would be a normal subgroup of $G$ smaller than $[G,G]$ with abelian quotient, which is a contradiction. This finishes the proof of the corollary.
\end{proof} \subsection{Boundedness of the iteration of Cox rings} \label{subsec:bounded-iteration} In this subsection, we prove the main theorem of this article, the boundedness of the iteration of Cox rings for Fano type varieties. \begin{theorem}\label{thm:bounded-iteration} There exists a constant $k(n)$, only depending on $n$, satisfying the following. Let $\phi\colon X \rightarrow Z$ be a projective contraction so that $X$ has dimension $n$. Let $(X,\Delta)$ be a log pair of Fano type over $Z$. Then, ${\rm Cox}^{(k)}(X/Z, \Delta)$ stabilizes for $k\geq k(n)$. \end{theorem} \begin{proof} First, we prove that the iteration of Cox rings \[ {\rm Cox}^{(k)}(X/Z,\Delta) \] stabilizes for $k$ large enough. It suffices to show that $\operatorname{Cl}(\overline{X}_k/Z,\overline{\Delta}_k)$ is torsion-free for some $k \in \mathbb{N}$, since then $(\overline{X}_{k+1},\overline{\Delta}_{k+1})$ is factorial over $Z$, see, e.g.,~\cite[1.4.1.5]{ADHL15}. By Remark~\ref{rem:fin-Cox-covers}, torsion-freeness of $\operatorname{Cl}(\overline{X}_k/Z,\overline{\Delta}_k)$ is equivalent to torsion-freeness of the divisor class group of the finite abelian covering space $(X_k/Z,\Delta_k)$. Now, we assume that \[ \operatorname{Cl}(X_k/Z,\Delta_k)_{\rm tor} \not\cong 1 \] for every $k \in \mathbb{N}$. Then by Corollary~\ref{cor:Cox-it-solv-cover} (3), there is an infinite chain of log quasi-\'etale finite solvable Galois covers \[ (X_k,\Delta_k) \xrightarrow{/S_{k}} (X,\Delta), \] where $\left|S_{k+1} \right| > \left| S_k \right|$. This contradicts the finiteness of $\pi_1^{\rm \operatorname{reg}}(X/Z,\Delta)$. Thus ${\rm Cox}^{(k)}(X/Z,\Delta)$ stabilizes for $k$ large enough. Now, we show that the index at which the iteration of Cox rings stabilizes admits an upper bound depending only on the dimension of $X$.
As before, we denote $\mathcal{D}_0:=\pi_1^{\rm \operatorname{reg}}(X/Z,\Delta)$ and inductively we define $\mathcal{D}_i:=[\mathcal{D}_{i-1},\mathcal{D}_{i-1}]$. By Theorem~\ref{thm:rel-finiteness}, we know that there is an exact sequence $1 \to A_0 \to \mathcal{D}_0 \to N_0 \to 1$, where $A_0$ is an abelian normal subgroup of rank at most $n$ and $N_0$ has order at most $c(n)$. We denote $A_{i}:=\mathcal{D}_i \cap A_0$. Now, we have a commutative diagram \[ \xymatrix{ & 1 \ar[d] & 1 \ar[d] & 1 \ar[d] \\ 1 \ar[r] & A_{i+1} \ar[r] \ar[d] & A_i \ar[r] \ar[d] & B_i \ar[r] \ar[d] & 1 \\ 1 \ar[r] & \mathcal{D}_{i+1} \ar[r] \ar[d] & \mathcal{D}_{i} \ar[r] \ar[d] & \operatorname{Cl}(X_i/Z,\Delta_i)_{\rm tor} \ar[r] \ar[d] & 1 \\ 1 \ar[r] & N_{i+1} \ar[r] \ar[d] & N_{i} \ar[r] \ar[d] & M_i \ar[r] \ar[d] & 1 \\ & 1 & 1 & 1 } \] with exact rows and columns for each $i \geq 0$. In particular, we get a chain of normal subgroups $\cdots \trianglelefteq N_2 \trianglelefteq N_1 \trianglelefteq N_0$ with $M_i:=N_i/N_{i+1}$. If, in such a chain, no two consecutive $M_i$ are trivial, then the length $k$ of the chain is bounded by $2\log_2(c(n))$. Hence, we know that there is some $j \leq 2\log_2(c(n))+1$ with $M_j\cong M_{j+1}\cong 1$. We have a commutative diagram \[ \xymatrix{ & 1 \ar[d] & 1 \ar[d] & 1 \ar[d] \\ 1 \ar[r] & A_{j+2} \ar[r] \ar[d] & A_j \ar[r] \ar[d] & C \ar[r] \ar[d] & 1 \\ 1 \ar[r] & \mathcal{D}_{j+2} \ar[r] \ar[d] & \mathcal{D}_{j} \ar[r] \ar[d] & S \ar[r] \ar[d] & 1 \\ 1 \ar[r] & N_{j+2} \ar[r] \ar[d] & N_{j} \ar[r] \ar[d] & L \ar[r] \ar[d] & 1 \\ & 1 & 1 & 1 } \] with exact rows and columns similar to the one from above. But here $L$ is trivial, since $N_{j+2}\cong N_{j+1} \cong N_j$. The group $C$ is abelian, since $A_j$ and $A_{j+2}$ are abelian. So $S \cong C$ is abelian. Thus $\mathcal{D}_{j+2}$ equals $\mathcal{D}_{j+1}$, the derived subgroup of $\mathcal{D}_j$.
But then $\operatorname{Cl}(X_{j+1}/Z,\Delta_{j+1})_{\rm tor}\cong \mathcal{D}_{j+1}/\mathcal{D}_{j+2}$ is trivial and the iteration of Cox rings stabilizes for $k \geq j+2$. Since $j \leq 2\log_2(c(n))+1$, we can set $k(n):= 2\log_2(c(n))+3$. Since $c(n)$ only depends on $n$, the proof is finished. \end{proof} \section{Simply connected factorial canonical cover}\label{sec:scfc} In this section, we aim to prove the existence of a simply connected factorial canonical cover for klt singularities. In Subsection~\ref{subsec:existence-scfc}, we prove the existence of the scfc cover. In Subsection~\ref{subsec:universality-scfc}, we prove that the scfc cover dominates any sequence of finite covers and abelian covers. In Subsection~\ref{subsec:upper-bound-dim}, we give an upper bound for the dimension of the iteration of Cox rings of the singularity. \subsection{Existence of the simply connected factorial canonical cover}\label{subsec:existence-scfc} In this subsection, we prove the existence of a simply connected factorial canonical cover for a klt singularity. The following proposition is the cornerstone of the construction. \begin{proposition} \label{prop:fiber-prod-fincov-and-quotpres} Let $(X,\Delta)$ be a log pair. Let $\phi\colon X\rightarrow Z$ be an aff-contraction where $Z$ is local Henselian. Assume that $(X,\Delta)$ is of relative Fano type over $Z$. Let $(X',\Delta') \to (X,\Delta)$ be a torus quasi-torsor in the sense of Definition~\ref{def:quot-pres}. Let $\mathbb{T}$ be the acting torus on $X'$. Let $(\tilde{X},\tilde{\Delta}) \to (X,\Delta)$ be the finite log quasi-\'etale Galois cover associated to $\pi:=\pi_1^{\rm reg}(X/Z,\Delta)$. Define $\tilde{X}':= \tilde{X} \times_{X} X'$.
Then, there is a commutative diagram \[ \xymatrix{ (\tilde{X}',\tilde{\Delta}') \ar[r]^{/\pi} \ar[d]^{/\mathbb{T}} & (X',\Delta') \ar[d]^{/\mathbb{T}} \\ (\tilde{X},\tilde{\Delta}) \ar[r]^{/\pi} & (X,\Delta) } \] where $\pi_1^{\rm reg}(X'/Z,\Delta') \cong \pi_1^{\rm reg}(X/Z,\Delta)$ and $\tilde{X}'$ admits a $\mathbb{T} \times \pi_1^{\rm reg}(X'/Z,\Delta')$-action satisfying the following conditions: \begin{enumerate} \item $\tilde{X}' \to X'$ is the finite log quasi-\'etale Galois cover associated to $\pi_1^{\rm reg}(X'/Z,\Delta')$ and $\tilde{\Delta}'$ is the log-pullback of $\Delta'$. \item $(\tilde{X}',\tilde{\Delta}') \to (\tilde{X},\tilde{\Delta})$ is a $\mathbb{T}$-quasi-torsor. \item The $\mathbb{T}\times \pi_1^{\rm reg}(X'/Z,\Delta')$-action is log-free in codimension one. \item $\mathbb{T}$ is the same torus as in the quasi-torsor $X'\rightarrow X$. \end{enumerate} \end{proposition} \begin{proof} First, we note that $\tilde{X} \to X$ is log quasi-\'etale and $X' \to X$ is \'etale locally trivial over the smooth locus. All homotopy groups are considered relatively over $Z$. Thus, we have an exact sequence of homotopy groups \[ \xymatrix@C=15pt{ \pi_2^{\rm reg}(\mathbb{T}) \ar[r] & \pi_2^{\rm reg}(X' \setminus \Delta') \ar[r] & \pi_2^{\rm reg}(X \setminus \Delta) \ar[r] & \pi_1^{\rm reg}(\mathbb{T}) \ar[r] & \pi_1^{\rm reg}(X' \setminus \Delta') \ar[r] & \pi_1^{\rm reg}(X \setminus \Delta) \ar[r] & \pi_0^{\rm reg}(\mathbb{T}). } \] Note that $\pi_2^{\rm reg}(\mathbb{T})\cong \pi_0^{\rm reg}(\mathbb{T}) \cong 1$ and $\pi_1^{\rm reg}(\mathbb{T})\cong \mathbb{Z}^{\dim(\mathbb{T})}$. Hence, we have the following exact sequence \[ \xymatrix@C=15pt{ 1 \ar[r] & \pi_2^{\rm reg}(X' \setminus \Delta') \ar[r] & \pi_2^{\rm reg}(X \setminus \Delta) \ar[r] & \mathbb{Z}^{\dim(\mathbb{T})} \ar[r] & \pi_1^{\rm reg}(X' \setminus \Delta') \ar[r] & \pi_1^{\rm reg}(X \setminus \Delta) \ar[r] & 1.
} \] Above, by abuse of notation, we denote by $X\setminus \Delta$ the complement of the support of $\Delta$. Now, we recall that the orbifold fundamental group $\pi_1^{\rm \operatorname{reg}}(X,\Delta)$ is by definition $\pi_1^{\rm \operatorname{reg}}(X \setminus \Delta)/ \langle \gamma_i^{m_i} \rangle$, where $\gamma_i$ is a small loop around a general point of the component $\Delta_i$ and $\frac{m_i-1}{m_i}$ is the coefficient of $\Delta_i$ in the standard approximation of $\Delta$, cf. Definition~\ref{def:standard-approx}. But since $X' \to X$ is \'etale locally trivial over the smooth locus, a small loop around a general point of $\Delta_i$ is as well a small loop around a general point of the pullback $\Delta_i'$. Of course, the coefficient of $\Delta_i'$ in the standard approximation of $\Delta'$ is again $\frac{m_i-1}{m_i}$. By the above exact sequence, setting $A :={\rm im}(\mathbb{Z}^{\dim(\mathbb{T})})$, we have $ \pi_1^{\rm reg}(X \setminus \Delta) = \pi_1^{\rm reg}(X' \setminus \Delta')/A $. So, with the above considerations, we get \[ \pi_1^{\rm \operatorname{reg}}(X,\Delta):=\pi_1^{\rm reg}(X \setminus \Delta) / \langle \gamma_i^{m_i} \rangle = \left(\pi_1^{\rm reg}(X' \setminus \Delta')/A \right)/ \langle \gamma_i^{m_i} \rangle = \left(\pi_1^{\rm reg}(X' \setminus \Delta')/\langle \gamma_i^{m_i} \rangle \right)/ A = \pi_1^{\rm \operatorname{reg}}(X',\Delta')/A. \] Since $\pi_1^{\rm \operatorname{reg}}(X'/Z,\Delta')$ is finite by Theorem~\ref{thm:rel-finiteness}, in fact $A$ is a finite abelian group. Recall that arbitrary base change preserves \'etaleness, finiteness and GIT-quotients. Hence, the cover $\tilde{X}' \to X'$ is indeed a log quasi-\'etale finite Galois cover with Galois group $\pi_1^{\rm reg}(X,\Delta)$. So $\tilde{X}' \to X'$ is the Galois cover associated to the normal subgroup $A$. In particular, there is an $A$-quasi-torsor \[ Y \xrightarrow{/A} \tilde{X}'. \] With the same arguments as above, $\tilde{X}' \to \tilde{X}$ is a $\mathbb{T}$-quasi-torsor.
Since $\mathbb{T}$ is connected, by~\cite[Theorem 4.2.3.2]{ADHL15} we can lift the $\mathbb{T}$-action on $\tilde{X}'$ to a $\mathbb{T} \times A$-action on $Y$, such that $Y \to \tilde{X}$ becomes a $\mathbb{T} \times A$-quasi-torsor. Furthermore, $\operatorname{Cl}(\tilde{X}/Z,\tilde{\Delta})$ is torsion-free. Otherwise, by Lemma~\ref{le:CoxCox}, the torsion part would induce a log quasi-\'etale Galois cover of $(\tilde{X},\tilde{\Delta})$, which contradicts the log-simply-connectedness of the smooth locus. By Proposition~\ref{prop:quot-pres}, this means there is a monomorphism from $A$ to a torsion-free group, which implies that $A$ is trivial. \end{proof} Now, we turn to proving the existence of the scfc cover. \begin{theorem} \label{thm:scfc-cover} Let $(X,\Delta)$ be a log pair. Let $\phi\colon X\rightarrow Z$ be an aff-contraction so that $Z$ is local Henselian. Assume that $(X,\Delta)$ is relatively Fano over $Z$. Let $(\tilde{X},\tilde{\Delta}) \to (X,\Delta)$ be the finite log quasi-\'etale Galois cover associated to $\pi:=\pi_1^{\rm reg}(X/Z,\Delta)$. Denote by $(\widetilde{\overline{X}},\widetilde{\overline{\Delta}})$ the total coordinate space of $(\tilde{X},\tilde{\Delta})$ over $Z$. Then we have a commutative diagram \[ \xymatrix{ (\widetilde{\overline{X}},\widetilde{\overline{\Delta}}) \ar[dr]^{/G} \ar[d]^{/\mathbb{T}} \\ (\tilde{X},\tilde{\Delta}) \ar[r]^{/\pi} & (X,\Delta), } \] where the following conditions hold: \begin{enumerate} \item The characteristic quasi-torus $\mathbb{T}$ is connected, i.e., a torus. \item $G$ is a reductive group acting freely in log-codimension one on $(\widetilde{\overline{X}},\widetilde{\overline{\Delta}})$ and fitting in the short exact sequence \[ \xymatrix{ 1 \ar[r] & \mathbb{T} \ar[r] & G \ar[r] & \pi \ar[r] & 1. } \] \item $(\widetilde{\overline{X}},\widetilde{\overline{\Delta}})$ is factorial over $Z$ and has canonical singularities.
\item $(\widetilde{\overline{X}},\widetilde{\overline{\Delta}})$ is log-simply connected in codimension one. \end{enumerate} \end{theorem} \begin{proof} By Proposition~\ref{prop:fiber-prod-fincov-and-quotpres}, we know that $\operatorname{Cl}(\tilde{X}/Z,\tilde{\Delta})$ is torsion-free. Thus $\mathbb{T}$ is a torus, and $(\widetilde{\overline{X}},\widetilde{\overline{\Delta}})$ is factorial over $Z$. In particular, it is locally factorial, and since it is of klt type by Theorem~\ref{thm:hen-cox}, it has canonical singularities, yielding items (1) and (3). In fact, by Corollary~\ref{cor:CoxCoxFano}, we already know that the Cox ring of a relative Fano pair is Gorenstein and has canonical singularities, while in general, it is of course not factorial. Proposition~\ref{prop:fiber-prod-fincov-and-quotpres} shows that \[ \pi_1^{\operatorname{reg}}(\widetilde{\overline{X}}/Z,\widetilde{\overline{\Delta}}) \cong \pi_1^{\operatorname{reg}}(\tilde{X}/Z,\tilde{\Delta}) \cong 1, \] yielding (4). Lastly, (2) follows from Proposition~\ref{prop:proj-lift-aut}. \end{proof} \begin{remark}{\em The construction of the scfc cover, as usual, depends on the choice of a subgroup $N \subseteq \operatorname{Cl}(\tilde{X}/Z,\tilde{\Delta})$ and a character $\chi$. } \end{remark} \subsection{Universality of the simply connected factorial canonical cover}\label{subsec:universality-scfc} In this subsection, we prove a universality property for the scfc cover. This means that the scfc cover dominates any sequence of finite covers and abelian quasi-torsors over the singularity. \begin{theorem}\label{thm-univ-scfc} Let $(X,\Delta)$ be a log pair. Let $\phi\colon X \rightarrow Z$ be an aff-contraction, where $X$ is of dimension $n$ and $Z$ is local Henselian. Assume that $(X,\Delta)$ is of relative Fano type over $Z$.
Let \[ \xymatrix{ \cdots \ar[r] & (X_{(2)},\Delta_{(2)}) \ar[r] & (X_{(1)},\Delta_{(1)}) \ar[r] & (X_{(0)},\Delta_{(0)}) :=(X,\Delta) } \] be a (possibly infinite) sequence of finite log quasi-\'etale covers and abelian quasi-torsors. Then $(X_{(j)},\Delta_{(j)})$ stabilizes after finitely many steps and the scfc covers of $(X_{(j)},\Delta_{(j)})$ coincide for all $j \geq 0$. If in addition all abelian (finite or quasi-torsor) covers in the sequence are given by the respective Cox rings, then there is a constant $j(n)$, only depending on $n$, such that $(X_{(j)},\Delta_{(j)})$ stabilizes for $j \geq j(n)$. \end{theorem} \begin{remark} {\em As mentioned before, the construction of the scfc cover depends on the choice of a subgroup $N \subseteq \operatorname{Cl}(\tilde{X}/Z,\tilde{\Delta})$ and a character $\chi$. So the equality of the scfc covers of $(X_{(j)},\Delta_{(j)})$ means that for any $j_1 \neq j_2 \geq 0$, any scfc cover of $(X_{(j_1)},\Delta_{(j_1)})$ is also an scfc cover of $(X_{(j_2)},\Delta_{(j_2)})$ and vice versa. } \end{remark} \begin{proof}[Proof of Theorem~\ref{thm-univ-scfc}] We show by induction that all the scfc covers coincide. First, let $(X_{(j+1)},\Delta_{(j+1)}) \to (X_{(j)},\Delta_{(j)})$ be finite. Then the covers $(\tilde{X}_{(j+1)},\tilde{\Delta}_{(j+1)})$ and $(\tilde{X}_{(j)},\tilde{\Delta}_{(j)})$ associated to the respective regional fundamental groups obviously coincide, so the scfc covers coincide as well. Now let $(X_{(j+1)},\Delta_{(j+1)}) \to (X_{(j)},\Delta_{(j)})$ be an $H$-quotient presentation. Then taking the quotient by the identity component $H^0$ yields a commutative diagram \[ \xymatrix@C=50pt{ (X_{(j+1)},\Delta_{(j+1)}) \ar[d]^{/H^0} \ar[dr]^{/H} \\ (X_{(j+1)}',\Delta_{(j+1)}') \ar[r]^{/(H/H^0)} & (X_{(j)},\Delta_{(j)}), } \] where $(X_{(j+1)}',\Delta_{(j+1)}') \to (X_{(j)},\Delta_{(j)})$ is finite log quasi-\'etale abelian.
Thus the scfc covers of $(X_{(j+1)}',\Delta_{(j+1)}')$ and $(X_{(j)},\Delta_{(j)})$ coincide. By Proposition~\ref{prop:fiber-prod-fincov-and-quotpres}, we can extend the diagram in the following way: \[ \xymatrix@C=50pt{ (\tilde{X}_{(j+1)},\tilde{\Delta}_{(j+1)}) \ar[r]^{/\pi} \ar[d]^{/H^0} & (X_{(j+1)},\Delta_{(j+1)}) \ar[d]^{/H^0} \ar[dr]^{/H} \\ (\tilde{X}_{(j)},\tilde{\Delta}_{(j)}) \ar[r]^{/\pi} & (X_{(j+1)}',\Delta_{(j+1)}') \ar[r]^{/(H/H^0)} & (X_{(j)},\Delta_{(j)}). } \] Observe that $(\tilde{X}_{(j+1)},\tilde{\Delta}_{(j+1)}) \to (\tilde{X}_{(j)},\tilde{\Delta}_{(j)})$ is a quasi-torsor. We conclude that the scfc cover of $(X_{(j+1)},\Delta_{(j+1)})$ and the scfc cover of $(X_{(j)},\Delta_{(j)})$ coincide. In order to show that the sequence stabilizes, we show the following claim by induction.\\ \textbf{Claim:} The first $k$ covers in the sequence induce a sequence of $k$ finite covers \[ (X'_{(j+1)},\Delta'_{(j+1)})\rightarrow (X'_{(j)},\Delta'_{(j)}), \] where \[ (X_{(j)},\Delta_{(j)}) \rightarrow (X'_{(j)},\Delta'_{(j)}) \] is a torus-quasi-torsor for every $j$.\\ \begin{proof}[Proof of the Claim] For $k=1$, either the cover is finite or it is an $H$-quasi-torsor. In the latter case, we take the finite cover given by the group of components $H/H^0$. Assume the claim is proven for $k$. Then, as we have seen before, $(X_{(k+1)},\Delta_{(k+1)}) \to (X_{(k)},\Delta_{(k)})$ yields a finite Galois cover $(X_{(k+1)}^*,\Delta_{(k+1)}^*) \to (X_{(k)},\Delta_{(k)})$. Due to Proposition~\ref{prop:fiber-prod-fincov-and-quotpres}, we know that the regional fundamental groups of $(X_{(k)},\Delta_{(k)})$ and the $k$-th finite cover $(X_{(k)}',\Delta_{(k)}')$ coincide. So $(X_{(k+1)}^*,\Delta_{(k+1)}^*) \to (X_{(k)},\Delta_{(k)})$ induces a finite Galois cover $(X_{(k+1)}',\Delta_{(k+1)}') \to (X_{(k)}',\Delta_{(k)}')$. Here, \[ (X_{(k+1)}^*,\Delta_{(k+1)}^*) \to (X_{(k+1)}',\Delta'_{(k+1)}) \] is a $\mathbb{T}_1$-quasi-torsor.
We can lift the action of $\mathbb{T}_1$ to the $\mathbb{T}_2$-quasi-torsor $(X_{(k+1)},\Delta_{(k+1)}) \to (X_{(k+1)}^*,\Delta_{(k+1)}^*)$ such that \[ (X_{(k+1)},\Delta_{(k+1)}) \to (X_{(k+1)}',\Delta_{(k+1)}') \] is a $(\mathbb{T}_1 \times \mathbb{T}_2)$-quasi-torsor. Thus, we have shown the existence of the sequence of $k$ induced finite covers. This finishes the proof of the claim. \end{proof} By the claim, the scfc cover $(\widetilde{\overline{X}},\widetilde{\overline{\Delta}})$ of $(X,\Delta)$ dominates the original sequence. It follows that at most \[ \kappa:=\dim(\widetilde{\overline{X}}) - \dim(X) \] of the original covers can induce a trivial finite cover. Thus, by finiteness of $\pi_1^{\operatorname{reg}}(X/Z,\Delta)$, the original sequence stabilizes for $j$ large enough. In the case that all quasi-torsors in the original sequence are Cox covers, the bound $j(n)$ on the number of nontrivial covers follows as in the proof of Theorem~\ref{thm:bounded-iteration}. \end{proof} \begin{remark} {\em If we do not assume the quasi-torsors to be Cox covers, there is no bound depending only on the dimension. This already happens in dimension two. We can construct sequences of arbitrary length of nontrivial abelian quasi-\'etale covers over two-dimensional $A_n$-singularities, if we do not fix $n$. } \end{remark} \subsection{Upper bound for the dimension of the iteration of Cox rings}\label{subsec:upper-bound-dim} In this subsection, we give an upper bound for the dimension of the iteration of Cox rings in terms of homotopy groups. For orbifolds (and, more generally, orbispaces), similarly to the fundamental group, one may define higher homotopy groups $\pi_k$, and for orbispace fibrations, these groups satisfy the same long exact sequence as ordinary homotopy groups, cf.~\cite[Theorem 4.5]{Chen01}. The precise statement is the following. \begin{theorem} \label{thm:dim-bound-2-homotopy} Let $(X,\Delta)$ be a log pair.
Let $\phi\colon X\rightarrow Z$ be an aff-contraction so that $Z$ is local Henselian. Assume that $(X,\Delta)$ is relatively Fano over $Z$. Let $(\widetilde{\overline{X}},\widetilde{\overline{\Delta}})$ be the scfc cover of $(X,\Delta)$. Then \[ \dim(\widetilde{\overline{X}}) \leq \dim (X) + {\rm rk}( \pi_2^{\operatorname{reg}}(X/Z, \Delta)\otimes \mathbb{Q}). \] \end{theorem} \begin{proof} As in the proof of Proposition~\ref{prop:fiber-prod-fincov-and-quotpres}, we use the fact that $(\widetilde{\overline{X}},\widetilde{\overline{\Delta}}) \xrightarrow{/G} (X,\Delta)$ is an \'etale locally trivial $G$-bundle over $(X_{\operatorname{reg}}, \Delta_{\operatorname{reg}})$. Thus, we have an exact sequence of orbifold homotopy groups \[ \xymatrix@C=15pt{ \pi_2^{\rm reg}(G) \ar[r] & \pi_2^{\rm reg}(\widetilde{\overline{X}}/Z, \widetilde{\overline{\Delta}}) \ar[r] & \pi_2^{\rm reg}(X/Z, \Delta) \ar[r] & \pi_1^{\rm reg}(G) \ar[r] & \pi_1^{\rm reg}(\widetilde{\overline{X}}/Z, \widetilde{\overline{\Delta}}) \ar[r] & \pi_1^{\rm reg}(X/Z, \Delta) \ar[r] & \pi_0^{\rm reg}(G). } \] Note that the orbifold structure on the fiber $G$ is trivial and, moreover, $\pi_2^{\rm reg}(G)\cong 1$, $\pi_1^{\rm reg}(G) \cong \mathbb{Z}^{\dim(\widetilde{\overline{X}}) - \dim(X)}$, and $\pi_0^{\rm reg}(G) \cong \pi_1^{\rm reg}(X, \Delta)$. Since $\mathbb{Q}$ is a flat $\mathbb{Z}$-module, tensoring the above exact sequence with $\mathbb{Q}$ yields an exact sequence of $\mathbb{Q}$-vector spaces \[ \xymatrix@C=15pt{ \pi_2^{\rm reg}(X, \Delta) \otimes \mathbb{Q} \ar[r] & \mathbb{Q}^{\dim(\widetilde{\overline{X}}) - \dim(X) } \ar[r] & 0 } \] which finishes the proof. \end{proof} \begin{remark}{\em If we consider the scfc cover with respect only to $X$, without the orbifold structure, then the second homotopy group in Theorem~\ref{thm:dim-bound-2-homotopy} is the ordinary second homotopy group $\pi_2^{\rm reg}(X/Z)$ of the smooth locus.
} \end{remark} \section{Fano type varieties with smooth iteration of Cox rings}\label{sec:smoothit} Throughout this paper, we have introduced some special covers of Fano type varieties and klt singularities. The aim of this section is to explain when such coverings are smooth. In Subsection~\ref{subsec:smooth-iteration}, we give a characterization of Fano type varieties with smooth iteration of Cox rings. In Subsection~\ref{subsec:smoothness-scfc}, we give a characterization of Fano type varieties with smooth simply connected factorial canonical cover. Analogous theorems hold for klt singularities. In that case, we will only state the results without proof, since they carry over verbatim from the projective case. Finally, in Subsection~\ref{subsec:iteration-vs-scfc}, we will give a characterization of Fano type varieties for which the spectrum of the iteration coincides with the scfc cover. \subsection{Smoothness of the iteration of Cox rings}\label{subsec:smooth-iteration} In this subsection, we give a characterization of the smoothness of the iteration of Cox rings. \begin{theorem}\label{thm:smooth-it-proj} Let $(X,\Delta)$ be a Fano type pair. Then, the following statements are equivalent: \begin{enumerate} \item The spectrum of the iteration of Cox rings ${\rm Cox}^{\rm it}(X,\Delta)$ is smooth, and \item $(X,\Delta)$ is a finite quasi-\'etale solvable quotient of a projective toric pair. \end{enumerate} Furthermore, if any of the above conditions holds, then we have that \begin{enumerate} \item[(1')] The simply connected factorial canonical cover coincides with the spectrum of the iteration of Cox rings, and \item[(2')] $(X,\Delta)$ is a finite quotient of a projective toric pair with torsion-free class group. \end{enumerate} \end{theorem} \begin{proof} Assume that $(X,\Delta)$ is a finite quasi-\'etale solvable quotient of a projective toric pair $(T,\Delta_T)$.
Then, we can find: \begin{itemize} \item a sequence of finite abelian groups $A_1,\dots,A_n$, \item projective Fano type pairs $(X_i,\Delta_i)$ with $(X_n,\Delta_n)=(T,\Delta_T)$, and \item an action of $A_i$ on $(X_i, \Delta_i)$ such that $(X_{i-1},\Delta_{i-1})$ is the quotient by this action. \end{itemize} Moreover, we may assume $(X_0,\Delta_0)=(X,\Delta)$. Let ${\rm Cox}^{(k)}(X,\Delta)$ be the $k$-th iteration of the Cox ring of $(X,\Delta)$. We denote by $\mathbb{T}_k$ the connected component of the identity of the reductive solvable group acting on the $k$-th iteration ${\rm Cox}^{(k)}(X,\Delta)$. Since the Cox ring dominates all quasi-\'etale finite abelian covers, we have a diagram as follows: \[ \xymatrix{ (T,\Delta_T)\ar[d]^-{/A_n} & (Y_n,B_n)\ar[d]\ar[l] & {\rm Cox}^{(n)}(X,\Delta)\ar[d]\ar[l]_-{/ \mathbb{T}_n} \\ (X_{n-1},\Delta_{n-1})\ar[d]^-{/A_{n-1}} & (Y_{n-1},B_{n-1})\ar[d]\ar[l] & {\rm Cox}^{(n-1)}(X,\Delta)\ar[d]\ar[l]_-{/ \mathbb{T}_{n-1}} \\ \vdots \ar[d]^-{/A_2} & \vdots\ar[l]\ar[d] & \vdots\ar[d]\ar[l] \\ (X_{1},\Delta_{1})\ar[d]^-{/A_1} & (Y_{1}, B_{1})\ar[l] & {\rm Cox}^{(1)}(X,\Delta)\ar[l]_-{/ \mathbb{T}_{1}} \\ (X,\Delta). & & } \] In the above diagram, each map $(Y_i,B_i)\rightarrow (X_i,\Delta_i)$ is a finite quotient. Since $T$ is toric and $Y_n\rightarrow T$ only branches along the support of $\Delta_T$, we conclude that $Y_n\rightarrow T$ does not branch along the torus. Hence, $(Y_n,B_n)$ is a projective toric pair. Thus, we have that ${\rm Cox}(Y_n,B_n)\cong \mathbb{A}^{\rho(Y_n)+\dim(Y_n)}$. Since the morphism ${\rm Cox}^{(n)}(X,\Delta)\rightarrow Y_n$ is a quasi-\'etale abelian cover, we conclude that there is a commutative diagram: \[ \xymatrix{ \mathbb{A}^{\rho(Y_n)+\dim(Y_n)} \ar[d]^-{\slash Q} \ar[rd] & \\ {\rm Cox}^{(n)}(X,\Delta)\ar[r]^-{\slash \mathbb{T}_n} & (Y_n,B_n). } \] Here, $Q$ is a quasi-torus acting on the affine space. Thus $\mathbb{A}^{\rho(Y_n)+\dim(Y_n)}\rightarrow {\rm Cox}^{(n)}(X,\Delta)$ is a quasi-\'etale abelian cover.
We conclude that \[ {\rm Cox}^{(n+1)}(X,\Delta) \cong \mathbb{A}^{\rho(Y_n)+\dim(Y_n)}. \] This shows that $(2)$ implies $(1)$. Now, we prove that $(1)$ implies $(2)$ and we argue that $(1')$ and $(2')$ hold in this case. We assume that the iteration of Cox rings is smooth. Recall from Section~\ref{sec:scfc} that the simply connected factorial canonical cover is the Cox ring of the universal cover of the spectrum of the iteration of Cox rings. This implies that $(1')$ holds in this case. Without loss of generality, we may assume that the iteration of Cox rings is an affine space $\mathbb{A}^n$ and the reductive solvable group $G$ acting on it satisfies $G\leqslant {\rm GL}_n(\mathbb{C})$. Indeed, this can be achieved by applying the Luna \'etale slice theorem to the smooth $G$-fixed point on the iteration of Cox rings. The connected component at the identity of the reductive solvable group is a torus, which we call $\mathbb{T}$. Hence, we have that $T:=\mathbb{A}^n/ \mathbb{T}$ is a projective toric variety with torsion-free class group. This shows that $(2')$ holds in this case. Finally, $X$ is the quotient of $T$ by the component group of $G$, which is a finite solvable group. This finishes the second implication. \end{proof} We have the following corresponding statement for klt singularities. \begin{theorem}\label{thm:smooth-it-local} Let $(X,\Delta;x)$ be a klt singularity. Then, the following statements are equivalent: \begin{enumerate} \item The spectrum of the iteration of Cox rings ${\rm Cox}^{\rm it}(X,\Delta;x)$ is smooth, and \item $(X,\Delta;x)$ is a finite quasi-\'etale solvable quotient of a toric singularity. \end{enumerate} Furthermore, if any of the above conditions holds, then we have that \begin{enumerate} \item[(1')] The simply connected factorial canonical cover coincides with the spectrum of the iteration of Cox rings, and \item[(2')] $(X,\Delta)$ is a finite quotient of a toric singularity with torsion-free class group.
\end{enumerate} \end{theorem} \subsection{Smoothness of the scfc cover}\label{subsec:smoothness-scfc} In this subsection, we give a characterization of Fano type varieties with smooth scfc cover. \begin{theorem}\label{thm:smooth-scfc-proj} Let $(X,\Delta)$ be a Fano type pair. Then, the following statements are equivalent: \begin{enumerate} \item The simply connected factorial canonical cover of $(X,\Delta)$ is smooth, and \item $(X,\Delta)$ is a finite quasi-\'etale quotient of a projective toric pair. \end{enumerate} \end{theorem} \begin{proof} Assume that $(X,\Delta)$ is a finite quasi-\'etale quotient of a projective toric pair $(T,\Delta_T)$. Let ${\rm Cox}^{\rm it}(X,\Delta)$ be the iteration of Cox rings of $(X,\Delta)$. Let $\mathbb{T}$ be the connected component of the reductive solvable group acting on ${\rm Cox}^{\rm it}(X,\Delta)$. We denote the quotient by $(Y,B):={\rm Cox}^{\rm it}(X,\Delta)\slash \mathbb{T}$. Hence, as in the proof of Theorem~\ref{thm:smooth-it-proj}, we have a commutative diagram as follows \[ \xymatrix{ (T,\Delta_T)\ar[d]^-{/S} & & \\ (T',\Delta_{T'}) & (Y,B)\ar[l]^-{/H} & {\rm Cox}^{\rm it}(X,\Delta)\ar[l]^-{/\mathbb{T}}. } \] Here, $(T',\Delta_{T'})$ is a quotient of the projective toric pair $(T,\Delta_T)$ by a finite perfect group $S$. Furthermore, $H$ is a finite solvable group. Hence, we obtain natural inclusions $S\leqslant {\pi_1}^{\rm reg}(Y,B)$ and $S\leqslant {\pi_1}^{\rm reg}({\rm Cox}^{\rm it}(X,\Delta))$. Thus, we conclude that the universal cover of ${\rm Cox}^{\rm it}(X,\Delta)$ admits a quasi-\'etale abelian quotient to $(T,\Delta_T)$. Therefore, the universal cover of the iteration of Cox rings is a torus quotient of the Cox ring of $(T,\Delta_T)$ which is an affine space $\mathbb{A}^{\rho(T)+\dim(T)}$. Hence, the simply connected factorial canonical cover of $(X,\Delta)$ is the affine space $\mathbb{A}^{\rho(T)+\dim(T)}$ as claimed. We conclude that $(2)$ implies $(1)$. 
Now, we prove that $(1)$ implies $(2)$. We assume that the simply connected factorial canonical cover of $(X,\Delta)$ is smooth. Without loss of generality, we may assume that the simply connected factorial canonical cover is the affine space $\mathbb{A}^n$ and the reductive solvable group $G$ acting on it satisfies $G\leqslant {\rm GL}_n(\mathbb{C})$. Indeed, this can be achieved by applying the Luna \'etale slice theorem to the smooth $G$-fixed point on the simply connected factorial canonical cover. The connected component at the identity of the reductive solvable group is a torus, which we call $\mathbb{T}$. Hence, we have that $T:=\mathbb{A}^n/ \mathbb{T}$ is a projective toric variety with torsion-free class group. Finally, $X$ is the quotient of $T$ by the component group of $G$, which is a finite group. This finishes the second implication. \end{proof} We have the following corresponding statement for klt singularities. \begin{theorem}\label{thm:smooth-scfc-local} Let $(X,\Delta;x)$ be a klt singularity. Then, the following statements are equivalent: \begin{enumerate} \item The simply connected factorial canonical cover of $(X,\Delta;x)$ is smooth, and \item $(X,\Delta;x)$ is a finite quasi-\'etale quotient of a toric singularity. \end{enumerate} \end{theorem} \begin{remark} {\em The singularities that appear in Theorem~\ref{thm:smooth-scfc-local} are considered by the second author in~\cite{Mor20a,Mor20b}, where they are called toric quotient singularities. In~\cite{Mor20a,Mor20b}, it is shown that toric quotient singularities are the prototypes of klt singularities with large fundamental group. Moreover, the minimal log discrepancies of these singularities are described in~\cite{Mor20c}. } \end{remark} \subsection{Iteration of Cox rings and scfc cover} \label{subsec:iteration-vs-scfc} In this subsection, we characterize when the iteration of Cox rings is isomorphic to the scfc cover. \begin{theorem} Let $(X,\Delta)$ be a Fano type variety.
Then, the following are equivalent: \begin{enumerate} \item The spectrum of the iteration of Cox rings has trivial regional fundamental group, \item the spectrum of the iteration of Cox rings coincides with the simply connected factorial canonical cover, and \item the regional fundamental group $\pi_1^{\rm reg}(X,\Delta)$ is solvable. \end{enumerate} \end{theorem} \begin{proof} If the spectrum of the iteration of Cox rings has trivial regional fundamental group, then it is factorial and simply connected. Thus, we have that $(1)$ implies $(2)$. The condition $(2)$ trivially implies $(1)$. Assume that the spectrum of the iteration of Cox rings has trivial regional fundamental group. Let $G$ be the solvable reductive group acting on the iteration of Cox rings ${\rm Cox}^{\rm it}(X,\Delta)$. We know that $X$ is the quotient of ${\rm Cox}^{\rm it}(X,\Delta)$ by $G$. Let $G^0$ be the connected component at the identity of $G$. The quotient $X':={\rm Cox}^{\rm it}(X,\Delta)/G^0$ is a finite solvable cover of $(X,\Delta)$. Furthermore, the pull-back of $K_X+\Delta$ to $X'$ equals $K_{X'}+\Delta'$ where $\Delta'$ is an effective divisor. Assume that $\pi_1^{\rm reg}(X,\Delta)$ is not solvable. Then, $X'\rightarrow X$ is not the regional universal cover of $(X,\Delta)$. Thus, we can take a non-trivial finite log quasi-\'etale Galois cover of $(X',\Delta')$. We call this finite cover $Y\rightarrow X'$. By Proposition~\ref{prop:fiber-prod-fincov-and-quotpres}, this cover induces a non-trivial finite log quasi-\'etale Galois cover of the spectrum of the iteration of Cox rings. This contradicts the fact that the spectrum of the iteration of Cox rings has trivial regional fundamental group. We conclude that $(X',\Delta')$ is the universal cover of $(X,\Delta)$. Thus, $\pi_1^{\rm reg}(X,\Delta)$ is a solvable group. We conclude that $(2)$ implies $(3)$. Now, assume that the regional fundamental group $\pi_1^{\rm reg}(X,\Delta)$ is solvable. We proceed as in the previous paragraph.
Let $G$ be the solvable reductive group acting on the iteration of Cox rings ${\rm Cox}^{\rm it}(X,\Delta)$. Let $G^0$ be the connected component at the identity of $G$. The quotient $X':={\rm Cox}^{\rm it}(X,\Delta)/G^0$ is a finite solvable cover of $(X,\Delta)$. If the regional fundamental group of $(X,\Delta)$ is solvable, then the regional fundamental group of $(X',\Delta')$ is solvable. If $(X',\Delta')$ has simply connected log smooth locus, then by the proof of Theorem~\ref{thm:scfc-cover}, we conclude that ${\rm Cox}^{\rm it}(X,\Delta)$ equals the scfc cover. Indeed, in this case the Cox ring of $(X',\Delta')$ equals ${\rm Cox}^{\rm it}(X,\Delta)$. On the other hand, assume that $(X',\Delta')$ has non-trivial regional fundamental group $S$. By assumption, $S$ is solvable. In particular, its commutator subgroup is a proper subgroup. By Proposition~\ref{prop:fiber-prod-fincov-and-quotpres}, there exists a finite log quasi-\'etale Galois cover of ${\rm Cox}^{\rm it}(X,\Delta)$ with acting group isomorphic to $S$. Since $[S,S]\leqslant S$ is proper, by Corollary~\ref{cor:abelianization}, we conclude that ${\rm Cox}^{\rm it}(X,\Delta)$ is not factorial. This leads to a contradiction. We conclude that $(X',\Delta')$ is simply connected and its Cox ring is isomorphic to ${\rm Cox}^{\rm it}(X,\Delta)$. Thus, the spectrum of the iteration equals the scfc cover of $(X,\Delta)$. We have that $(3)$ implies $(2)$. Hence, all the statements are equivalent. \end{proof} The following local version of the above theorem is proved analogously. \begin{theorem}\label{thm:scfc=it-local} Let $(X,\Delta;x)$ be a klt singularity. Then, the following are equivalent: \begin{enumerate} \item The spectrum of the iteration of Cox rings has trivial regional fundamental group, \item the spectrum of the iteration of Cox rings coincides with the simply connected factorial canonical cover, and \item the regional fundamental group $\pi_1^{\rm reg}(X,\Delta;x)$ is solvable.
\end{enumerate} \end{theorem} \section{Examples and proofs of the theorems}\label{sec:ex} In this section, we collect some examples that enlighten the techniques of the paper. Then, we explain how the theorems of the introduction are implied by the theorems proved throughout the manuscript. In Subsection~\ref{subsec:p1-non-solvable}, we give an example of a Fano type variety with non-solvable regional fundamental group. We describe its scfc cover and iteration of Cox rings explicitly. In Subsection~\ref{subsec:jordan-t-varieties}, we prove a special Jordan property for singularities with torus action. Finally, in Subsection~\ref{subsec:compl-one}, we give a detailed study of the iteration of Cox rings, regional fundamental groups, and scfc covers of klt singularities of complexity one. \subsection{Examples with $\pi_1^{\rm reg}(X^{\rm reg})$ non-solvable} \label{subsec:p1-non-solvable} In this subsection, we give an example of a Fano type variety $X$ such that its regional fundamental group $\pi_1^{\rm reg}(X)$ is non-solvable. We also explain how to obtain the simply connected factorial canonical cover of $X$. \begin{example} {\em Let $X$ be the variety obtained as the quotient of $(\mathbb{P}^1)^n$ by the action of $S_n$ permuting the coordinates. We denote the quotient by $\rho\colon (\mathbb{P}^1)^n\rightarrow X$. Then, $X$ is a projective variety of Fano type, which is not toric. Furthermore, $X$ satisfies $\pi_1^{\rm reg}(X)\cong S_n$. Indeed, we have a natural \'etale Galois morphism $\rho\colon \rho^{-1}(X^{\rm reg})\rightarrow X^{\rm reg}$ with Galois group $S_n$. Furthermore, the complement of $\rho^{-1}(X^{\rm reg})$ has codimension at least two in $(\mathbb{P}^1)^n$, from which we conclude that $\pi_1(\rho^{-1}(X^{\rm reg}))$ is trivial. This implies the claim. We proceed to compute the iteration of Cox rings of $X$. Note that the class group of $X$ has rank one. Hence, its first Cox ring is just the section ring of the Weil $\mathbb{Q}$-Cartier divisor $-K_X$.
This gives us a klt cone singularity \[ ({\rm Cox}(X), x_0) \] where $x_0$ is the vertex for the action. The singularity ${\rm Cox}(X)$ is factorial at $x_0$. Hence, the iteration of Cox rings coincides with the first Cox ring. Furthermore, the regional fundamental group of ${\rm Cox}(X)$ at $x_0$ is isomorphic to $S_n$ and its universal cover is isomorphic to the cone over $(\mathbb{P}^1)^n$ with respect to $-K_{(\mathbb{P}^1)^n}$. We denote this variety by ${\rm Cone}((\mathbb{P}^1)^n)$. Note that the regional fundamental group of ${\rm Cone}((\mathbb{P}^1)^n)$ is trivial and its Cox ring is the affine space $\mathbb{A}^{2n}$. Hence, the simply connected factorial canonical cover of $X$ is the $2n$-dimensional affine space. We obtain the following commutative diagram \[ \xymatrix{ \mathbb{A}^{2n}\ar[r]^-{/\mathbb{T}_0} \ar[rd]^-{/\mathbb{T}_1} & {\rm Cone}((\mathbb{P}^1)^n)\ar[d]^-{/\mathbb{G}_m}\ar[r]^-{/S_n} & {\rm Cox}(X) \ar[d]^-{/\mathbb{G}_m} \\ & (\mathbb{P}^1)^n\ar[r]^-{/S_n} & X } \] Here, the torus $\mathbb{T}_0$ acts on the affine space $\mathbb{A}^{2n}$ by \[ (t_1,\dots,t_n)\cdot (x_1,\dots,x_{2n})= (t_1x_1,t_1x_2, t_2x_3,t_2x_4,\dots, t_nx_{2n-1},t_nx_{2n}). \] On the other hand, the torus $\mathbb{T}_1$ acts on the affine space $\mathbb{A}^{2n}$ by \[ (t_1,\dots,t_{n-1})\cdot (x_1,\dots,x_{2n})= (tx_1,tx_2,t_1x_3,t_1x_4,t_2x_5,t_2x_6,\dots,t_{n-1}x_{2n-1},t_{n-1}x_{2n}), \] where $t=(t_1\dots t_{n-1})^{-1}$. Thus, we obtain a representation $X\cong \mathbb{A}^{2n}/(\mathbb{T}_0\rtimes S_n)$, where $S_n$ acts by permuting the factors of $\mathbb{A}^{2n} \cong (\mathbb{A}^2)^n$ and $\mathbb{T}_0$ acts as above. The above example reflects two different ways in which the simply connected factorial canonical cover can be obtained: as the iteration of Cox rings of the universal cover, or as the Cox ring of the universal cover of the iteration of Cox rings. This example is a particular case of Theorem~\ref{thm:smooth-scfc-proj}.
} \end{example} \subsection{Jordan property for $\mathbb{T}$-varieties} \label{subsec:jordan-t-varieties} In this subsection, we prove a strengthened version of the Jordan property for the regional fundamental group of affine klt $\mathbb{T}$-varieties of complexity $k$. Then, we specialize this statement for $\mathbb{T}$-varieties of complexity one. \begin{theorem}\label{thm:jordan-comp-k} Let $k$ be a positive integer. There exists a constant $c(k)$, only depending on $k$, satisfying the following. Let $(X,\Delta;x)$ be an $n$-dimensional klt $\mathbb{T}$-singularity of complexity $k$. Then, there exists an exact sequence \[ 1\rightarrow A\rightarrow \pi_1^{\rm reg}(X,\Delta;x) \rightarrow N\rightarrow 1, \] where $A$ is an abelian group of rank at most $n$ and index at most $c(k)$. \end{theorem} \begin{proof} Let $(Y,B_Y)$ be the normalized Chow quotient of $(X,\Delta)$. Then $(X,\Delta)$ is defined by a polyhedral divisor $\mathcal{D}$ on $Y$ with tailcone $\sigma^\vee \subset M_\mathbb{Q}$ (see, e.g.,~\cite[Theorem]{AH06}). By~\cite[Theorem 4.9]{LS13}, we know that $(Y,B_Y)$ is a Fano type pair. We may replace $Y$ with a small $\mathbb{Q}$-factorialization to assume that $Y$ is $\mathbb{Q}$-factorial. Let $\tilde{X}$ be the relative spectrum over $Y$ of $\bigoplus_{u\in \sigma^\vee \cap M} \mathcal{O}_Y(\mathcal{D}(u))$. Then, $\tilde{X}\rightarrow Y$ is an orbifold toric bundle. Over the log smooth locus of $(Y,B_Y)$ the toric bundle is trivial. Let $(\tilde{X},\Delta_{\tilde{X}})$ be the log pull-back of $(X,\Delta)$ to $\tilde{X}$. Let $\Gamma$ be the boundary obtained from $\Delta_{\tilde{X}}$ by increasing to one the coefficients of all the torus invariant divisors which are horizontal over $Y$. Hence, we conclude that there is an exact sequence \[ 1\rightarrow \mathbb{Z}^{n-k} \rightarrow \pi_1^{\rm reg}(\tilde{X},\Gamma) \rightarrow \pi_1^{\rm reg}(Y,B_Y) \rightarrow 1.
\] By construction, $\mathbb{Z}^{n-k}$ lies in the center of $\pi_1^{\rm reg}(\tilde{X},\Gamma)$. Furthermore, we have a surjection $\pi_1^{\rm reg}(\tilde{X},\Gamma) \rightarrow \pi_1^{\rm reg}(X,\Delta;x)$. Observe that $Y$ has dimension $k$. By~\cite[Theorem 3]{BFMS20}, we can find an abelian normal subgroup $A_Y \leqslant \pi_1^{\rm reg}(Y,B_Y)$ of rank at most $k$ and index at most $c(k)$, where $c(k)$ is a constant which only depends on $k$. Hence, the pre-image $A_{\tilde{X}}$ of $A_Y$ in $\pi_1^{\rm reg}(\tilde{X},\Gamma)$ is a free finitely generated abelian group of rank at most $n$ and index at most $c(k)$. Let $A$ be the image of $A_{\tilde{X}}$ in $\pi_1^{\rm reg}(X,\Delta;x)$. Since $\pi_1^{\rm reg}(X,\Delta;x)$ is finite, we conclude that $A$ is a finite abelian group of rank at most $n$ and index at most $c(k)$. \end{proof} \begin{remark}{\em Note that the size of the non-abelian part $N$ of the regional fundamental group only depends on the complexity and not on the dimension of the germ as in~\cite{BFMS20}. This, of course, happens because the $(n-k)$-dimensional torus action cannot contribute to the non-abelian part of the regional fundamental group. If the complexity is zero, we can simply take $c(0)=1$ since the regional fundamental group of a toric pair is always abelian. The following corollary gives an explicit bound for $c(1)$. } \end{remark} \begin{corollary}\label{cor:jordan-comp-1} Let $(X,x)$ be an $n$-dimensional klt $\mathbb{T}$-singularity of complexity one. Then, there exists an exact sequence \[ 1\rightarrow A \rightarrow \pi_1^{\rm reg}(X,x)\rightarrow N \rightarrow 1, \] where $A$ is an abelian group of rank at most $n$ and index at most 60. \end{corollary} \begin{proof} This follows from Theorem~\ref{thm:jordan-comp-k} and the classification of the regional fundamental groups of log pair structures on $\mathbb{P}^1$ with standard coefficients (see, e.g.,~\cite[Example 5.1]{LLM19}).
Indeed, the bound $60$ is attained by the binary icosahedral group of order $120$, whose center $\mathbb{Z}_2$ is its only non-trivial normal abelian subgroup. \end{proof} \subsection{Complexity one klt singularities} \label{subsec:compl-one} In the case of klt type singularities with a torus action of complexity one, we are able to explicitly determine all invariants defined so far: Cox rings, iterated Cox rings, regional fundamental groups, associated universal covers, and the simply connected factorial canonical covers. We start by recalling the construction of affine rational complexity one $\mathbb{T}$-varieties. \begin{definition} {\em Let $N$ be a free finitely generated abelian group of rank $r$. Let $M$ be the dual of $N$. We denote by $N_\mathbb{Q}$ and $M_\mathbb{Q}$ the corresponding $\mathbb{Q}$-vector spaces. Given a polyhedron $\Delta\subset N_\mathbb{Q}$, we define its {\em recession cone} to be the set of $v\in N_\mathbb{Q}$ such that $v+\Delta \subset \Delta$. The recession cone of a polyhedron is a strongly convex polyhedral cone. It is denoted by ${\rm rec}(\Delta)$. Let $\sigma$ be a strongly convex polyhedral cone in $N_\mathbb{Q}$. We denote by ${\rm Pol}_\mathbb{Q}(N,\sigma)$ the semigroup of polyhedra $\Delta$ of $N_\mathbb{Q}$ for which ${\rm rec}(\Delta)=\sigma$. The additive structure of this semigroup is the Minkowski sum. The elements of this semigroup are called $\sigma$-polyhedra. We denote by ${\rm CaDiv}_{\geq 0}(\mathbb{P}^1)$ the semigroup of effective Cartier divisors on $\mathbb{P}^1$. A {\em polyhedral divisor} on $(\mathbb{P}^1,N)$ with recession cone $\sigma$ is an element of \[ {\rm Pol}_{\mathbb{Q}}(N,\sigma) \otimes_{\mathbb{Z}_{\geq 0}} {\rm CaDiv}_{\geq 0}(\mathbb{P}^1). \] Note that a polyhedral divisor can be written as a formal finite sum \[ \mathcal{D}=\sum_{i=1}^s \Delta_i \otimes \{p_i\}, \] for a finite set of points $p_1,\dots,p_s$ in $\mathbb{P}^1$ and $\sigma$-polyhedra $\Delta_1,\dots,\Delta_s$.
If we do not fix $N$ or the recession cone, then we just say that $\mathcal{D}$ is a {\em polyhedral divisor} on $\mathbb{P}^1$. } \end{definition} Let $\mathcal{D}$ be a polyhedral divisor on $\mathbb{P}^1$. We have a homomorphism of semigroups, called the evaluation homomorphism, defined as follows \[ \mathcal{D}\colon \sigma^\vee \rightarrow {\rm CaDiv}_\mathbb{Q}(\mathbb{P}^1) \] \[ \mathcal{D}(u)=\sum_{i=1}^s \min \langle \Delta_i, u \rangle p_i. \] By abuse of notation, we are denoting the polyhedral divisor and the evaluation homomorphism by $\mathcal{D}$. \begin{definition} {\em A polyhedral divisor on $\mathbb{P}^1$ is said to be a {\em proper polyhedral divisor} if $\mathcal{D}(u)$ is semiample for $u\in \sigma^\vee$ and $\mathcal{D}(u)$ is big for $u\in {\rm relint}(\sigma^\vee)$. For a proper polyhedral divisor $\mathcal{D}$ on $\mathbb{P}^1$, we can define its {\em degree polyhedron} to be \[ {\rm deg}(\mathcal{D}) = \sum_{i=1}^s \Delta_i \subset \sigma. \] } \end{definition} Given a proper polyhedral divisor $\mathcal{D}$, we can associate to it a normal rational affine variety of dimension $r+1$ with an effective action of an $r$-dimensional torus. We have a sheaf of $\mathcal{O}_{\mathbb{P}^1}$-algebras \[ \mathcal{A}(\mathcal{D}) = \bigoplus_{u\in \sigma^\vee \cap M} \mathcal{O}_{\mathbb{P}^1}(\mathcal{D}(u)) \chi^u. \] We denote by $\widetilde{X}(\mathcal{D})$ the relative spectrum of $\mathcal{A}(\mathcal{D})$ over $\mathbb{P}^1$. We denote by $X(\mathcal{D})$ the spectrum of the ring of global sections of $\mathcal{A}(\mathcal{D})$. The variety $X(\mathcal{D})$ is a normal rational affine variety of dimension $r+1$ with an effective action of an $r$-dimensional torus. Indeed, it admits an effective action of $\mathbb{T}:={\rm Spec}(\mathbb{C}[M])$. This means that $X(\mathcal{D})$ is a rational $\mathbb{T}$-variety of complexity one.
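To illustrate the evaluation homomorphism, properness, and the degree polyhedron, we include a small worked example; the data below are chosen ad hoc for illustration and are not used elsewhere. \begin{example} {\em Let $N=\mathbb{Z}$, so that $r=1$, and let $\sigma=\mathbb{Q}_{\geq 0}\subset N_\mathbb{Q}$, so that $\sigma^\vee=\mathbb{Q}_{\geq 0}\subset M_\mathbb{Q}$. Consider the polyhedral divisor \[ \mathcal{D}=\left[\tfrac{1}{2},\infty\right)\otimes \{0\} + \left[-\tfrac{1}{3},\infty\right)\otimes \{\infty\} \] on $(\mathbb{P}^1,N)$. For $u\in\sigma^\vee$, the minimum of $\langle -,u\rangle$ on each $\sigma$-polyhedron is attained at its vertex, so the evaluation homomorphism is \[ \mathcal{D}(u)=\frac{u}{2}\,\{0\}-\frac{u}{3}\,\{\infty\}, \qquad \deg \mathcal{D}(u)=\frac{u}{6}. \] Hence $\mathcal{D}(u)$ is ample for every $u\in {\rm relint}(\sigma^\vee)$ and trivial for $u=0$, so $\mathcal{D}$ is a proper polyhedral divisor. Its degree polyhedron is the Minkowski sum \[ {\rm deg}(\mathcal{D})=\left[\tfrac{1}{2},\infty\right)+\left[-\tfrac{1}{3},\infty\right)=\left[\tfrac{1}{6},\infty\right)\subset\sigma. \] The associated variety $X(\mathcal{D})$ is a normal rational affine surface with an effective $\mathbb{G}_m$-action, i.e., a rational $\mathbb{T}$-variety of complexity one. } \end{example}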
It is known that every rational $\mathbb{T}$-variety of complexity one is isomorphic to $X(\mathcal{D})$ for some polyhedral divisor $\mathcal{D}$ on $\mathbb{P}^1$ (see, e.g.,~\cite[Theorem on p. 559]{AH06}). \begin{notation} {\em Let $\mathcal{D}$ be a proper polyhedral divisor on $\mathbb{P}^1$, with recession cone $\sigma$, and let $p\in \mathbb{P}^1$. We set $\Delta_p =\Delta_i$ if $p=p_i$, and $\Delta_p=\sigma$ otherwise. For every vertex $v$ of $\Delta_p$, we denote by $\mu(v)$ the smallest positive integer such that $\mu(v)v\in N$. For every $p\in \mathbb{P}^1$, we define \[ \mu_p := {\rm max}\{ \mu(v) \mid v \text{ a vertex of } \Delta_p \}. \] For every $p\in \mathbb{P}^1$, we define $b_p:=(1-\mu_p^{-1})p$. We define $B(\mathcal{D}):=\sum_{p\in \mathbb{P}^1} b_p$. Note that $B(\mathcal{D})$ is a divisor on $\mathbb{P}^1$ with standard coefficients, i.e., $(\mathbb{P}^1, B(\mathcal{D}))$ is a log pair with standard coefficients. } \end{notation} From now on, we focus on complexity one $\mathbb{T}$-singularities. That is, affine $\mathbb{T}$-varieties of complexity one $X$ with a distinguished point $x\in X$ which is a klt singularity. We have the following theorem, which characterizes klt-ness of a complexity one $\mathbb{T}$-singularity. \begin{theorem}[Cf.~\cite{LS13}] Let $\mathcal{D}$ be a polyhedral divisor on $\mathbb{P}^1$. Then, $(X(\mathcal{D}),x)$ is klt if and only if $(\mathbb{P}^1,B(\mathcal{D}))$ is a log Fano pair. \end{theorem} Note that this happens if and only if $\mu_p$ is non-trivial for at most three points in $\mathbb{P}^1$, and, in addition, for these three points, the corresponding $\mu_p$ must form a platonic triple in the sense of~\cite[Example 4.1]{LLM19}. Next, we describe the regional fundamental group of $X(\mathcal{D})$ at $x$. To do so, first we need to understand the $\mathbb{T}$-equivariant birational contraction $r\colon \widetilde{X}(\mathcal{D})\rightarrow X(\mathcal{D})$.
We proceed to explain which divisors are contracted by this birational contraction. There are two types of $\mathbb{T}$-invariant divisors in $\widetilde{X}(\mathcal{D})$. The divisors which are mapped to points in $\mathbb{P}^1$ via the projection $\widetilde{X}(\mathcal{D})\rightarrow \mathbb{P}^1$ are called vertical invariant divisors. Vertical invariant divisors are in bijection with pairs $(p,v)$ where $p\in \mathbb{P}^1$ and $v$ is a vertex of the polyhedron $\Delta_p$. Hence, we will denote the corresponding vertical divisor by $\Delta_{(p,v)}$. The invariant divisors which dominate $\mathbb{P}^1$ are called horizontal divisors. Horizontal divisors are in bijection with rays of the recession cone $\sigma$. The contraction $r$ contracts exactly those horizontal divisors corresponding to rays of $\sigma$ which intersect ${\rm deg}(\mathcal{D})$ non-trivially (see, e.g.,~\cite[\S 10]{AH06}). \begin{notation} {\em Let $\mathcal{D}$ be a proper polyhedral divisor on $\mathbb{P}^1$ with recession cone $\sigma$. Let $N_\mathcal{D} \subset N$ be the sub-lattice generated by elements of $N$ which belong to a regular sub-cone of $\sigma$ which does not intersect $\deg(\mathcal{D})$. We introduce variables $t_1,\dots, t_r$, corresponding to a basis of $N$. For every $n\in N_{\mathcal{D}}$, we let $t^n:=t_1^{n_1}\dots t_r^{n_r}$. Let $p\in \mathbb{P}^1$ and let $\Delta_p$ be the corresponding $\sigma$-polyhedron. Consider the cone $\sigma(\mathcal{D},p)$ in $N_\mathbb{Q} \times \mathbb{Q}$ generated by $\Delta_p\times\{1\}$ and $\sigma\times\{0\}$. Let $N_{\sigma(\mathcal{D},p)} \subset N\times\mathbb{Z}$ be the sub-lattice generated by elements of $N\times \mathbb{Z}$ which belong to a regular sub-cone of $\sigma(\mathcal{D},p)$ which does not intersect $\deg(\mathcal{D})$. We denote by $\mathcal{B}(\mathcal{D},p)$ a basis of $N_{\sigma(\mathcal{D},p)}$. For every $v\in \mathcal{B}(\mathcal{D},p)$, we denote by $\pi_1(v)$ the projection to $N$ and by $\pi_2(v)$ the projection to $\mathbb{Z}_{\geq 1}$.
} \end{notation} \begin{theorem}\label{thm:reg-compl-1} Let $\mathcal{D}$ be a polyhedral divisor on $(\mathbb{P}^1,N)$. Write $\mathcal{D}=\sum_{i=1}^s \Delta_i \otimes \{p_i\}$. Let $x\in X(\mathcal{D})$ be the vertex of the torus action. Then, $\pi_1^{\rm reg}(X(\mathcal{D}),x)$ is isomorphic to the group generated by \[ t_1,\dots, t_r, b_1,\dots, b_s \] with the relations \begin{itemize} \item $b_1\cdots b_s$, \item $[t_i,t_j]$ for every $1\leq i\leq j \leq r$, \item $[t_i,b_j]$ for every $i\in \{1,\dots, r\}$ and $j\in \{1,\dots, s\}$, \item $t^n$ for every $n\in N_{\mathcal{D}}$, and \item $t^{\pi_1(v)}b_j^{\pi_2(v)}$ for every $j\in \{1,\dots,s\}$ and every $v\in \mathcal{B}(\mathcal{D},p_j)$. \end{itemize} \end{theorem} \begin{proof} We have a good quotient $\widetilde{X}(\mathcal{D})\rightarrow \mathbb{P}^1$. This good quotient is trivial with fiber $X(\sigma)={\rm Spec}(\mathbb{C}[\sigma^\vee \cap M])$ over $\mathbb{P}^1\setminus \{p_1,\dots,p_s\}$. Around each $p_i$ the variety $\widetilde{X}(\mathcal{D})$ has an analytic neighborhood diffeomorphic to an analytic neighborhood of the fiber of $X(\sigma(\mathcal{D},p_i))\rightarrow \mathbb{A}^1$ around zero (see, e.g.,~\cite[Example 2.5]{LS13}). The equivariant birational contraction $r\colon \widetilde{X}(\mathcal{D})\rightarrow X(\mathcal{D})$ contracts the closures of the $\mathbb{T}$-invariant cycles of the form $X(\tau)\times (\mathbb{P}^1 \setminus \{p_1,\dots,p_s\})$, where $\tau\leqslant \sigma$ is a cone intersecting $\deg(\mathcal{D})$ (see, e.g.,~\cite[\S 5]{AIPSV12}). In order to compute the regional fundamental group of $X(\mathcal{D})$ at $x$, it suffices to compute the regional fundamental group of $\widetilde{X}(\mathcal{D})\setminus {\rm Ex}(r)$. Indeed, the image of every prime component of ${\rm Ex}(r)$ has codimension at least two in $X(\mathcal{D})$. If the image of such a component is contained in the singular locus of $X(\mathcal{D})$, then it does not contribute to the regional fundamental group.
Thus, it suffices to compute $\pi_1^{\rm reg}(\widetilde{X}(\mathcal{D})\setminus {\rm Ex}(r))$. A general fiber of $\widetilde{X}(\mathcal{D})^{\rm reg}\setminus {\rm Ex}(r) \rightarrow \mathbb{P}^1$ is isomorphic to the open subvariety of $X(\sigma)$ which corresponds to the regular sub-cones of $\sigma$ that do not intersect $\deg(\mathcal{D})$. Around each $p_i$, the variety $\widetilde{X}(\mathcal{D})^{\rm reg} \setminus {\rm Ex}(r)$ is diffeomorphic to the open subvariety of $X(\sigma(\mathcal{D},p_i))$ corresponding to the regular sub-cones of $\sigma(\mathcal{D},p_i)$ not intersecting $\deg(\mathcal{D})$. Thus, we have a formally toric description of $\widetilde{X}(\mathcal{D})^{\rm reg} \setminus {\rm Ex}(r)$ over $\mathbb{P}^1$. Then, the rest of the description follows from applying the van Kampen theorem to glue the fundamental group of $X(\sigma)^{\rm reg} \times (\mathbb{P}^1\setminus \{p_1,\dots,p_s\})$ with those of the analytic neighborhoods of the fibers over the $p_i$'s. The proof proceeds similarly to~\cite[Theorem 3.4]{LLM19}. \end{proof} In the above theorem, the loop $t_i$ corresponds to a loop around the $i$-th factor of the $r$-dimensional torus $\mathbb{T} \cong (\mathbb{C}^*)^r$ of a general fiber of $\widetilde{X}(\mathcal{D})\rightarrow \mathbb{P}^1$. On the other hand, the loops $b_j$ correspond to liftings to $\widetilde{X}(\mathcal{D})$ of the loops around the points $p_j$ in $\mathbb{P}^1$. Note that the above description gives an explicit version of Corollary~\ref{cor:jordan-comp-1}. Indeed, for the group $A$, we can consider the normal abelian subgroup generated by the $t_i$'s. Since we have at most three points for which $\mu_p$ is non-trivial, we conclude that the quotient $\pi_1^{\rm reg}(X(\mathcal{D}),x)/A$ has order at most $60$. Indeed, the quotient $\pi_1^{\rm reg}(X(\mathcal{D}),x)/A$ admits a surjection from $\pi_1^{\rm reg}(\mathbb{P}^1,B(\mathcal{D}))$.
Theorem~\ref{thm:reg-compl-1} gives a simple way to construct the universal cover of a complexity-one $\mathbb{T}$-singularity. Let $\mathcal{D}$ be a proper polyhedral divisor on $(\mathbb{P}^1,N)$. Let $(\mathbb{P}^1,B(\mathcal{D}))$ be the associated log Fano pair. Let $p\colon (\mathbb{P}^1, B')\rightarrow (\mathbb{P}^1,B(\mathcal{D}))$ be the universal cover of $(\mathbb{P}^1, B(\mathcal{D}))$. Then, $p^*\mathcal{D}$ is a proper polyhedral divisor on $\mathbb{P}^1$ and we have a finite quasi-\'etale Galois morphism \[ p\colon (X(p^*\mathcal{D}), x') \rightarrow (X(\mathcal{D}),x). \] We denote it by $p$ by abuse of notation. Here, $x'$ is the unique pre-image of $x$. We are considering the pull-back of proper polyhedral divisors as defined in~\cite[\S 8]{AH06}. By Theorem~\ref{thm:reg-compl-1}, the regional fundamental group of $(X(p^*\mathcal{D}),x')$ is generated by the loops $t_i$. In particular, it is abelian. Hence, its universal cover is given by a lattice extension $N \hookrightarrow N'$, i.e., by an isogeny of tori. Now, we turn to describing the Cox ring of an affine $\mathbb{T}$-variety of complexity one. We restrict ourselves to the klt case, so the singularities will impose some restriction on the structure of the Cox ring. \begin{definition}[Cf.~\cite{ABHW18}] {\em Let $\mathcal{D}$ be a proper polyhedral divisor on $(\mathbb{P}^1,N)$ which defines a klt complexity one affine variety $X(\mathcal{D})$. Fix integers $m \geq 0$, $n,r > 0$, and a partition $n=n_0 + \ldots + n_r$. For every $i=0,\ldots,r$, let $l_i=(l_{i1},\ldots,l_{in_i}) \in \mathbb{Z}^{n_i}$ with $l_{i1} \geq \ldots \geq l_{in_i}>0$ and $l_{01} \geq l_{11} \geq \ldots \geq l_{r1}$. Define monomials $T_i^{l_i}:=T_{i1}^{l_{i1}} \cdots T_{in_i}^{l_{in_i}}$ in the polynomial ring \[ \mathbb{C}[T_{ij},S_k]:=\mathbb{C}[T_{ij},S_k; i=0,\ldots,r, j=1,\ldots,n_i, k=1,\ldots,m].
\] Now, define pairwise different scalars $\theta_0 =1,\theta_1,\ldots,\theta_{r-2} \in \mathbb{C}^*$ and, for $i=0,\ldots, r-2$, a trinomial \[ g_i:= \theta_i T_i^{l_i}+ T_{i+1}^{l_{i+1}}+ T_{i+2}^{l_{i+2}}. \] If the tuple of the $\mathfrak{l}_i:=\max(l_{i1},\ldots,l_{in_i})$ is a {\em platonic tuple}, i.e.\ of the form \[ (5,3,2,1,\ldots,1), (4,3,2,1,\ldots,1), (3,3,2,1,\ldots,1), (k,2,2,1,\ldots,1), (k,l,1,1,\ldots,1), \] then we call the factor ring $R:=\mathbb{C}[T_{ij},S_{k}]/\langle g_0,\ldots,g_{r-2}\rangle$ a {\em platonic ring}. } \end{definition} Now, we have the following slight generalization of~\cite[Theorem 1.3]{ABHW18}, first stated in~\cite[Theorem 5]{BraThesis}. \begin{theorem} Let $\mathcal{D}$ be a proper polyhedral divisor on $(\mathbb{P}^1,N)$ which defines a klt complexity one affine variety $X(\mathcal{D})$. Let $x\in X(\mathcal{D})$ be the vertex of the torus action. Write $\mathcal{D}=\sum_{i=1}^s \Delta_i \otimes \{p_i\}$ and assume $\mu(p_i)=1$ for $i\geq 4$. Then, the Cox ring of $(X(\mathcal{D}),x)$ is a platonic ring with associated tuple $(\mu(p_1),\mu(p_2),\mu(p_3))$. \end{theorem} Now, we turn to describing explicitly the possible Cox ring iterations in terms of the platonic Cox ring of a klt singularity of complexity one. The original reference is~\cite[Remark 6.7]{ABHW18}. \begin{theorem}[Cf.~\cite{HW18}] Let $\mathcal{D}$ be a proper polyhedral divisor on $(\mathbb{P}^1,N)$ which defines a klt complexity one affine variety $X(\mathcal{D})$. Then, the possible sequences of platonic triples arising from Cox ring iterations of $X(\mathcal{D})$ are the following: \begin{itemize} \item $(1,1,1)\rightarrow (2,2,2)\rightarrow (3,3,2)\rightarrow (4,3,2)$, \item $(1,1,1)\rightarrow (x,x,1)\rightarrow (2x,2,2)$, \item $(1,1,1)\rightarrow (x,x,1)\rightarrow (x,2,2)$, and \item $(l^{-1}l_0,l^{-1}l_1,1) \rightarrow (l_0,l_1,1)$, where $l:={\rm gcd}(l_0,l_1)>1$.
\end{itemize} \end{theorem} Finally, the following theorem describes the simply connected factorial canonical cover of a klt singularity of complexity one. \begin{theorem} Let $\mathcal{D}$ be a proper polyhedral divisor on $(\mathbb{P}^1,N)$ which defines a klt complexity one affine variety $X(\mathcal{D})$. Let $p\colon \mathbb{P}^1\rightarrow \mathbb{P}^1$ be the universal cover of $(\mathbb{P}^1,B(\mathcal{D}))$. Then, the scfc cover of $(X(\mathcal{D}),x)$ is ${\rm Cox}(X(p^*\mathcal{D}))$. \end{theorem} \begin{proof} We have a Galois quasi-\'etale finite morphism $X(p^*\mathcal{D})\rightarrow X(\mathcal{D})$. By Theorem~\ref{thm:reg-compl-1}, we know that the regional fundamental group of $X(p^*\mathcal{D})$ is abelian and generated by the loops $t_1,\dots, t_r$. Hence, by Theorem~\ref{thm:scfc=it-local}, we conclude that the scfc cover of $X(p^*\mathcal{D})$, which coincides with the scfc cover of $X(\mathcal{D})$, is isomorphic to ${\rm Cox}(X(p^*\mathcal{D}))$. \end{proof} \subsection{Proof of the theorems} \label{subsec:proofs} In this subsection, we explain how the theorems in the introduction follow from the theorems proved throughout the manuscript. \begin{proof}[Proof of Theorem~\ref{introthm2-existence-iteration-local}] Follows from Theorem~\ref{thm:bounded-iteration}. \end{proof} \begin{proof}[Proof of Theorem~\ref{introthm3-bounded-iteration-local}] Follows from Theorem~\ref{thm:bounded-iteration}. \end{proof} \begin{proof}[Proof of Theorem~\ref{introthm4-bounded-dim-it-local}] Follows from Theorem~\ref{thm:dim-bound-2-homotopy}. \end{proof} \begin{proof}[Proof of Theorem~\ref{introthm-5-existence-scf-cover}] Follows from Theorem~\ref{thm:scfc-cover}. \end{proof} \begin{proof}[Proof of Theorem~\ref{introthm-6-univ-scf-cover}] Follows from Theorem~\ref{thm-univ-scfc}. \end{proof} \begin{proof}[Proof of Theorem~\ref{introthm7-smooth-it}] Follows from Theorem~\ref{thm:smooth-it-local}.
\end{proof} \begin{proof}[Proof of Theorem~\ref{introthm8-smooth-scfc}] Follows from Theorem~\ref{thm:smooth-scfc-local}. \end{proof} \begin{proof}[Proof of Theorem~\ref{introthm9-equal-it-scfc}] Follows from Theorem~\ref{thm:scfc=it-local}. \end{proof} \begin{proof}[Proof of Theorem~\ref{introthm10-jordan-relative}] Follows from Theorem~\ref{thm:rel-finiteness}. \end{proof} \begin{proof}[Proof of Theorem~\ref{introthm11-jordan-t-var}] Follows from Theorem~\ref{thm:jordan-comp-k}. \end{proof} \begin{proof}[Proof of Theorem~\ref{introthm12-it-t-var}] Follows from Theorem~\ref{thm:jordan-comp-k} and the proof of Theorem~\ref{thm:bounded-iteration}. \end{proof} \section{Appendix: Table of covers} \label{appendix} In this appendix, we summarize all the different categories of covers of klt singularities (or Fano type varieties) that we consider throughout this article. We describe the category of covers over $X$, the group that acts on such covers, the inverse limit, and the main property of the inverse limit.\\ \begin{center} \textbf{Table 1.} Covers of klt singularities.\\ \vspace{0.5cm} \begin{tabularx} {0.8\textwidth} { | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X |} \hline \begin{center}\textbf{ Category}\end{center} & \begin{center}\textbf{Acting group}\end{center} & \begin{center}\textbf{ Inverse limit }\end{center} & \begin{center}\textbf{ Main property}\end{center} \\ \hline \begin{center} Finite Galois quasi-\'etale covers \end{center}& \begin{center} Finite group \end{center} & \begin{center} Universal cover \end{center} & \begin{center} Simple connectedness \end{center} \\ \hline \begin{center} Abelian reductive quasi-\'etale covers \end{center} & \begin{center} Quasi-torus \end{center} & \begin{center} Cox ring \end{center} & \begin{center} $\mathbb{T}$-factoriality \end{center} \\ \hline \begin{center} Solvable reductive quasi-\'etale covers \end{center} & \begin{center}
Solvable reductive group \end{center}& \begin{center} Iteration of Cox ring \end{center}& \begin{center} Factoriality \end{center} \\ \hline \begin{center} Finite-solvable quasi-\'etale covers \end{center} & \begin{center} Finite extensions of solvable reductive group \end{center}& \begin{center} Simply connected factorial canonical cover (scfc cover)\end{center} & \begin{center} Simple connectedness and factoriality \end{center} \\ \hline \end{tabularx} \end{center} \vspace{0.5cm} Note that all the classes of groups considered in the above table are closed under extensions. The following diagram shows the natural morphisms between the different covers in the above table. \[ \xymatrix{ & {\rm Cox}^{\rm it}(X,\Delta;x)\ar[ld]^-{\phi_3} & (Y,\Delta_Y;y)\ar[l]^-{\phi_1}\ar[dd]^-{\phi_2} \\ {\rm Cox}(X,\Delta;x)\ar[d]^-{\phi_5} & & \\ (X,\Delta;x) & & (\tilde{X},\tilde{\Delta};\tilde{x}) \ar[ll]^-{\phi_4} } \] Here, $(\tilde{X},\tilde{\Delta};\tilde{x})$ is the universal cover of $(X,\Delta;x)$ and $(Y,\Delta_Y;y)$ is the scfc cover of $(X,\Delta;x)$. We finish the appendix by explaining when the morphisms in the above diagram are isomorphisms: \begin{enumerate} \item By Theorem~\ref{thm:scfc=it-local}, $\phi_1$ is an isomorphism if and only if $\pi_1^{\rm reg}(X,\Delta;x)$ is solvable. \item $\phi_{i}$ is an isomorphism if and only if the target is factorial for $i\in \{2,3,5\}$. \item $\phi_4$ is an isomorphism if and only if $\pi_1^{\rm reg}(X,\Delta;x)$ is trivial. \end{enumerate} \bibliographystyle{habbrv}
\section{Introduction} \label{sec:in} Generating functions are a powerful tool for solving problems in number theory, combinatorics, algebra, and probability theory. One of the advantages of generating functions is that an infinite number sequence can be represented by a single expression. There are various types of generating functions: ordinary, exponential, Dirichlet, Poisson, etc. In this paper we consider exponential generating functions. \begin{definition} A power series of the form \begin{equation} \label{formula1} \sum_{n=1}^\infty\frac{a(n)}{n!}x^{n} \end{equation} is called an \textit{exponential generating function}, where $a(n)$ is an integer sequence. \end{definition} \section{Preliminary} V.~V.~Kruchinin \cite{KruCompositae} introduced the notion of the \emph{composita} of a given generating function \begin{math} F(x)=\sum_{n>0}f(n)x^n \end{math} with no free term, i.e., $f(0)=0$. Raising such a generating function to the $k$-th power, we write \begin{displaymath} [F(x)]^k=\sum_{n>0} F(n,k)x^n. \end{displaymath} The expression $F(n,k)$ is the \emph{composita}, denoted by $F^{\Delta}(n,k)$. The \emph{composita} can also be written in terms of the compositions of $n$: \begin{definition} The composita is a function of two variables defined by \begin{equation} \label{Fnk0}F^{\Delta}(n,k)=\sum_{\pi_k \in C_n}{f(\lambda_1)f(\lambda_2) \cdots f(\lambda_k)}, \end{equation} where $C_n$ is the set of all compositions of an integer $n$ and $\pi_k$ is a composition \begin{math} \sum_{i=1}^k\lambda_i=n \end{math} into exactly $k$ parts. \end{definition} For instance, let us obtain the composita of the exponential generating function \begin{math} F(x)=e^x-1. \end{math} Raising this generating function to the power $k$ and applying the binomial theorem, we obtain \begin{displaymath} \left(e^x-1\right)^k=\sum_{j=0}^k {k \choose j}e^{xj}(-1)^{k-j}.
\end{displaymath} Since \begin{displaymath} \left(e^{x} \right)^k=\sum_{n\geq 0} \frac{k^n}{n!}x^n, \end{displaymath} we get \begin{displaymath} F^{\Delta}(n,k)=\sum_{j=0}^k {k \choose j}\frac{j^n}{n!}(-1)^{k-j}, \end{displaymath} and since the general formula for the Stirling numbers of the second kind is \begin{displaymath} \genfrac{\{}{\}}{0pt}{}{n}{k}=\frac{1}{k!}\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}j^n, \end{displaymath} we have \begin{equation} \label{exp(x)-1} F^{\Delta}(n,k)=\frac{k!}{n!}\genfrac{\{}{\}}{0pt}{}{n}{k}. \end{equation} Here \begin{math} \genfrac{\{}{\}}{0pt}{}{n}{k}=S(n,k) \end{math} stands for the Stirling numbers of the second kind (see \cite{Comtet_1974,ConcreteMath}), which count the number of ways to partition a set of $n$ elements into $k$ nonempty subsets. Calculation of the composita is essential for obtaining the coefficient function of a composition of generating functions. Starting with functions $f(n)$, $r(n)$ and their generating functions \begin{math} F(x)=\sum_{n\geq1}f(n)x^{n} \end{math} and \begin{math} R(x)=\sum_{n\geq0}r(n)x^{n} \end{math}, respectively, we consider a composition of the generating functions \begin{math} H(x)=R\left( F(x)\right) \end{math}. For the generating function \begin{math} H(x)=\sum_{n\geq0}h(n)x^{n} \end{math}, the coefficient function is determined by the expression: \begin{equation} \label{composition} h(n)=\sum^{n}_{k=1}\sum_{\lambda_i>0 \atop \lambda_1+\lambda_2+\ldots+\lambda_k=n}f(\lambda_{1})f(\lambda_{2})\ldots f(\lambda_{k})r(k)=\sum^{n}_{k=1}F^{\Delta}(n,k)r(k),\quad h(0)=r(0). \end{equation} \section{Main results} In this section we consider compositions of exponential generating functions and their integer properties.
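Before stating the results, the identities of the previous section are easy to verify by machine. The following Python sketch (function names are ours) computes $F^{\Delta}(n,k)$ for $F(x)=e^x-1$ directly from definition (\ref{Fnk0}) using exact rational arithmetic and compares it with formula (\ref{exp(x)-1}):

```python
from fractions import Fraction
from itertools import product
from math import comb, factorial

def composita(f, n, k):
    """F^Delta(n, k): sum of f(l_1)*...*f(l_k) over all compositions
    l_1 + ... + l_k = n into exactly k positive parts."""
    total = Fraction(0)
    for parts in product(range(1, n - k + 2), repeat=k):
        if sum(parts) == n:
            term = Fraction(1)
            for l in parts:
                term *= f(l)
            total += term
    return total

def stirling2(n, k):
    """Stirling number of the second kind, by the explicit formula."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

# For F(x) = e^x - 1 we have f(n) = 1/n!, and the composita equals (k!/n!) S(n,k).
f = lambda n: Fraction(1, factorial(n))
for n in range(1, 8):
    for k in range(1, n + 1):
        assert composita(f, n, k) == Fraction(factorial(k), factorial(n)) * stirling2(n, k)
```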
\begin{theorem} Suppose \begin{math} E(x)=\sum_{n>0} e(n)\frac{x^n}{n!} \end{math} is an exponential generating function with integer coefficients $e(n)$, and \begin{math} E^{\Delta}(n,k) \end{math} is the composita of $E(x)$. Then the expression \begin{equation} \label{Enk} \frac{n!}{k!}E^{\Delta}(n,k) \end{equation} is an integer for $k\leq n$. \end{theorem} \begin{proof} Let us consider a composition of exponential generating functions \begin{math} A(E(x)) \end{math}, where \begin{math}A(x)=\sum_{n\geq 0} a(n)\frac{x^n}{n!} \end{math} and the $a(n)$ are integers. According to \cite{Stanley_2}, the composition of exponential generating functions is an exponential generating function \begin{displaymath} G(x)=A(E(x)), \end{displaymath} where \begin{math} G(x)=\sum_{n\geq 0} g(n)\frac{x^n}{n!} \end{math}. Using formula (\ref{composition}), we get \begin{displaymath} \frac{g(n)}{n!}=\sum_{k=1}^n E^{\Delta}(n,k)\frac{a(k)}{k!}. \end{displaymath} Therefore, the expression \begin{displaymath} \sum_{k=1}^n E^{\Delta}(n,k)a(k)\frac{n!}{k!} \end{displaymath} is an integer. Since the integer sequence $a(k)$ can be chosen arbitrarily, the expression (\ref{Enk}) is an integer. The theorem is proved. \end{proof} \begin{corollary} \label{cor1} Suppose $E^{\Delta}(n,k)$ is the composita of an exponential generating function. Then the expression \begin{equation}\label{coEnk} \sum_{k=2}^{n-1} E^{\Delta}(n,k)\frac{(n-1)!}{k!} \end{equation} is an integer for all prime $n$. \end{corollary} \begin{proof} Let us consider the following cases: \begin{itemize} \item For $k=1$: the composita equals \begin{math} E^{\Delta}(n,1)=\frac{e(n)}{n!}. \end{math} Hence, the expression \begin{displaymath} E^{\Delta}(n,k)\frac{(n-1)!}{k!}=\frac{e(n)}{n} \end{displaymath} need not be an integer for $k=1$. \item For $k=n$: the composita equals \begin{math} E^{\Delta}(n,n)={e(1)^n}. \end{math} Hence, the expression \begin{displaymath} E^{\Delta}(n,k)\frac{(n-1)!}{k!}=\frac{e(1)^n}{n} \end{displaymath} need not be an integer for $k=n$. \item For $1<k<n$:
According to (\ref{Fnk0}), the composita equals \begin{displaymath} E^{\Delta}(n,k)=\sum_{\pi_k\in C_n} \frac{e(\lambda_1)e(\lambda_2)\ldots e(\lambda_k)} {\lambda_1! \lambda_2!\ldots \lambda_k!}. \end{displaymath} Since $\pi_k$ is a composition \begin{math} \lambda_1+\lambda_2+\ldots+\lambda_k=n \end{math} with $k>1$ positive parts, no $\lambda_i$ equals $n$; hence $\lambda_i<n$ for all $i$, and the prime $n$ divides none of the factorials $\lambda_1!,\ldots,\lambda_k!,k!$. Hence, the expression \begin{displaymath} \sum_{\pi_k\in C_n} \frac{e(\lambda_1)e(\lambda_2)\ldots e(\lambda_k)} {\lambda_1! \lambda_2!\ldots \lambda_k!}\frac{(n-1)!}{k!} \end{displaymath} is an integer for all prime $n$ and for $1<k<n$. Therefore, the expression \begin{displaymath} \sum_{k=2}^{n-1} E^{\Delta}(n,k)\frac{(n-1)!}{k!} \end{displaymath} is an integer for all prime $n$. \end{itemize} \end{proof} We can rewrite expression (\ref{coEnk}) as \begin{equation}\label{coEnksum} \frac{1}{n}\left(n!\sum_{k=1}^{n} E^{\Delta}(n,k)\frac{1}{k!}-e(n)-e(1)^n\right), \end{equation} where \begin{displaymath} g(n)=n!\sum_{k=1}^{n} E^{\Delta}(n,k)\frac{1}{k!} \end{displaymath} is the coefficient function of the composition of generating functions \begin{math} G(x)=\exp(E(x)). \end{math} The same holds in the general case. For the composition of generating functions \begin{displaymath} G(x)=A(E(x))=\sum_{n\geq 0} g(n)\frac{x^n}{n!}, \end{displaymath} where \begin{displaymath} A(x)=\sum_{n\geq 0} a(n)\frac{x^n}{n!}, \qquad E(x)=\sum_{n\geq 1} e(n)\frac{x^n}{n!} \end{displaymath} are exponential generating functions, the expression \begin{equation} \frac{1}{n}\left(g(n)-e(n)a(1)-e(1)^na(n)\right) \end{equation} is an integer for all prime $n$. As applications of Corollary \ref{cor1}, we consider the following examples. \begin{example} Consider the following composition of generating functions: \begin{displaymath} G(x)=e^{e^x-1}=\sum_{n\geq 0} g(n)\frac{x^n}{n!}.
\end{displaymath} The composita of \begin{math} E(x)=\exp(x)-1 \end{math}, according to (\ref{exp(x)-1}), is equal to \begin{displaymath} E^{\Delta}(n,k)=\frac{k!}{n!}\genfrac{\{}{\}}{0pt}{}{n}{k}. \end{displaymath} Then for the composition \begin{math} e^{e^x-1}, \end{math} the expression \begin{displaymath} \frac{1}{n}\left(n!\sum_{k=1}^n \frac{k!}{n!}\genfrac{\{}{\}}{0pt}{}{n}{k}\frac{1}{k!}-1-1^n\right) \end{displaymath} is an integer for all prime $n$. That is, we obtain the congruence \begin{equation} \label{Touchard} B_n-2\equiv 0 \pmod{n} \end{equation} for all prime $n$. Here $B_n$ are the Bell numbers (counting the ways to partition a set of $n$ elements) \cite{Comtet_1974,ConcreteMath,Bell_1934}. In 1933 J.~Touchard \cite{Tou} proved the following congruence for the Bell numbers: \begin{equation} B_{n+k}\equiv B_{k+1}+B_k\pmod{n} \end{equation} for any prime number $n$. The congruence (\ref{Touchard}) is the special case $k=0$ of Touchard's congruence. \end{example} \begin{example} Let us consider the following composition of exponential generating functions: \begin{displaymath} G(x)=e^{x+\frac{1}{2}x^2+\frac{1}{6}x^3}. \end{displaymath} This generating function generates the sequence of integers (A001333) \cite{oeis}: \begin{displaymath} \left[1, 1, 2, 5, 14, 46, 166, 652, 2780, 12644, 61136, 312676, 1680592, 9467680, 55704104, \ldots \right] \end{displaymath} According to \cite{KruCompositae}, the composita of the generating function \begin{math} E(x)=x+\frac{1}{2}x^2+\frac{1}{6}x^3 \end{math} has the following form \begin{displaymath} E^{\Delta}(n,k)=\sum_{j=0}^{k}{{{j}\choose{n-3\,k+2\,j}}\,3^{j-k}\,{{k}\choose{j}} \,2^{-n+2\,k-j}}. \end{displaymath} Then, according to (\ref{composition}), to find the composition we use the following expression \begin{displaymath} g(n)=n!\sum_{k=1}^{n}\frac{1}{k!}\sum_{j=0}^{k}{{j}\choose{n- 3\,k+2\,j}}\,3^{j-k}\,{{k}\choose{j}}\,2^{-n+2\,k-j}.
\end{displaymath} Hence, the expression \begin{equation} (n-1)!\sum_{k=2}^{n-1}\frac{1}{k!}\sum_{j=0}^{k}{{j}\choose{n- 3\,k+2\,j}}\,3^{j-k}\,{{k}\choose{j}}\,2^{-n+2\,k-j} \end{equation} is an integer for all prime $n$. A few initial terms of this expression are shown below (starting with $n=1$): \begin{displaymath} \left[0, 0, 1, \frac{13}{4}, 9, \frac{55}{2}, 93, \frac{2779}{8}, \frac{12643}{9}, \frac{12227}{2}, 28425, \frac{560197}{4}, 728283,\ldots \right] \end{displaymath} \end{example} \begin{example} Let us consider the following composition of exponential generating functions: \begin{displaymath} G(x)=e^{\operatorname{artanh}(x)}. \end{displaymath} This generating function generates the sequence of integers (A000246) \cite{oeis}: \begin{displaymath} \left[1, 1, 1, 3, 9, 45, 225, 1575, 11025, 99225, 893025, 9823275, 108056025, 1404728325, \ldots \right] \end{displaymath} The composita of the generating function \begin{math} E(x)=\operatorname{artanh}(x) \end{math} has the following form \begin{displaymath} E^{\Delta}(n,k)=k!\,\sum_{m=k}^{n}\frac{2^{m-k}}{m!}\genfrac{[}{]}{0pt}{}{m}{k}\, {{n-1}\choose{m-1}}. \end{displaymath} Here \begin{math} \genfrac{[}{]}{0pt}{}{m}{k} \end{math} are the unsigned Stirling numbers of the first kind with parameters $m$ and $k$ (counting the permutations of $m$ elements with exactly $k$ cycles) \cite{Comtet_1974}. Then, according to (\ref{composition}), to find the composition we use the following expression \begin{displaymath} g(n)=n!\sum_{k=1}^{n}\sum_{m=k}^{n}\frac{2^{m-k}}{m!}\genfrac{[}{]}{0pt}{}{m}{k}\, {{n-1}\choose{m-1}}. \end{displaymath} Hence, the expression \begin{equation} (n-1)!\sum_{k=2}^{n-1}\sum_{m=k}^{n}\frac{2^{m-k}}{m!}\genfrac{[}{]}{0pt}{}{m}{k}\, {{n-1}\choose{m-1}} \end{equation} is an integer for all prime $n$.
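As a numerical sanity check (our own sketch, using exact rational arithmetic), one can recover the EGF coefficients of this example from the identity $e^{\operatorname{artanh}(x)}=\sqrt{(1+x)/(1-x)}$ by exponentiating the series of $\operatorname{artanh}(x)$ via the recurrence $G'=E'G$, and confirm that they match A000246:

```python
from fractions import Fraction
from math import factorial

N = 12
# Taylor coefficients of artanh(x) = x + x^3/3 + x^5/5 + ...
e = [Fraction(0)] * (N + 1)
for m in range(1, N + 1, 2):
    e[m] = Fraction(1, m)

# g = exp(e) as a power series, via the recurrence g' = e' g.
g = [Fraction(0)] * (N + 1)
g[0] = Fraction(1)
for n in range(N):
    g[n + 1] = sum(Fraction(j + 1) * e[j + 1] * g[n - j] for j in range(n + 1)) / (n + 1)

# EGF coefficients n! [x^n] exp(artanh(x))
coeffs = [factorial(n) * g[n] for n in range(N + 1)]
print(coeffs[:10])  # [1, 1, 1, 3, 9, 45, 225, 1575, 11025, 99225]
```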
\end{example} Now let us consider the composition of generating functions \begin{math} G(x)=B(E(x))=\sum_{n\geq 0} g(n)\frac{x^n}{n!}, \end{math} where \begin{math} E(x)=\sum_{n>0} e(n)\frac{x^n}{n!} \end{math} is an exponential generating function and \begin{math} B(x)=\sum_{n\geq0} b(n)x^n \end{math} is an ordinary generating function with integer coefficients. \begin{theorem} \label{Thm2} For the composition \begin{math} G(x)=B(E(x))=\sum_{n\geq 0} g(n)\frac{x^n}{n!}, \end{math} the expression \begin{equation} \label{Enk2} \frac{1}{n}\left(g(n)-e(n)b(1)\right) \end{equation} is an integer for all prime $n$. \end{theorem} \begin{proof} From (\ref{composition}) it follows that \begin{displaymath} g(n)=n!\sum_{k=1}^nb(k)\sum_{\pi_k\in C_n} \frac{e(\lambda_1)e(\lambda_2)\ldots e(\lambda_k)} {\lambda_1! \lambda_2!\ldots \lambda_k!}. \end{displaymath} Then \begin{displaymath} n!\sum_{\pi_k\in C_n} \frac{e(\lambda_1)e(\lambda_2)\ldots e(\lambda_k)} {\lambda_1! \lambda_2!\ldots \lambda_k!}=\sum_{\pi_k\in C_n}{n \choose {\lambda_1, \lambda_2\ldots \lambda_k} }e(\lambda_1)e(\lambda_2)\ldots e(\lambda_k). \end{displaymath} Since $n$ is prime, each multinomial coefficient is divisible by $n$ for $k>1$. Therefore, the expression \begin{displaymath} \frac{1}{n}\left(n!\sum_{k=1}^n E^{\Delta}(n,k)b(k)-e(n)b(1)\right) \end{displaymath} is an integer for all prime $n$. The theorem is proved. \end{proof} As an application of Theorem \ref{Thm2}, we consider the following example. \begin{example} Let us consider the generating function for the Euler numbers A000111 \cite{oeis,Stanley_1}: \begin{equation} \frac{1}{1-\sin(x)}=\sum_{n\geq 0} E(n+1)\frac{x^n}{n!}. \end{equation} Since \begin{displaymath} E(x)=\sin(x)=\sum_{n\geq0}\frac{\left((-1)^{n-1}+1\right)(-1)^{\frac{n+1}{2}+n}}{2}\frac{x^n}{n!}, \end{displaymath} the congruence \begin{equation} E(n+1)-\frac{\left((-1)^{n-1}+1\right)(-1)^{\frac{3n+1}{2}}}{2} \equiv 0 \pmod{n} \end{equation} holds for all prime $n$. \end{example}
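Both congruences above are easy to verify by machine. The sketch below (our own, exact arithmetic throughout) recomputes $E(n+1)$ from $1/(1-\sin(x))$ by series division, computes the Bell numbers from the Bell triangle, and checks the congruences for all primes up to $20$:

```python
from fractions import Fraction
from math import factorial

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

N = 20
# EGF coefficients s_n with sin(x) = sum_n s_n x^n
s = [Fraction(0)] * (N + 1)
for m in range(1, N + 1, 2):
    s[m] = Fraction((-1) ** ((m - 1) // 2), factorial(m))

# r_n with 1/(1 - sin(x)) = sum_n r_n x^n, from (1 - sin(x)) * r = 1
r = [Fraction(1)] + [Fraction(0)] * N
for n in range(1, N + 1):
    r[n] = sum(s[j] * r[n - j] for j in range(1, n + 1))

euler = [int(factorial(n) * r[n]) for n in range(N + 1)]  # E(n+1) in the text's notation
assert euler[:8] == [1, 1, 2, 5, 16, 61, 272, 1385]

# Bell numbers from the Bell triangle
bell, row = [1], [1]
for _ in range(N):
    new = [row[-1]]
    for x in row:
        new.append(new[-1] + x)
    row = new
    bell.append(row[0])

for n in range(2, N + 1):
    if is_prime(n):
        e_n = int(factorial(n) * s[n])     # integer EGF coefficient e(n) of sin(x)
        assert (euler[n] - e_n) % n == 0   # congruence of the last example
        assert (bell[n] - 2) % n == 0      # Touchard's congruence with k = 0
```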
\documentclass[english,prl,twocolumn, aps,amssymb,footinbib,showpacs]{revtex4-1} \usepackage[T1]{fontenc} \usepackage{amsmath} \usepackage{amssymb} \usepackage{graphicx} \usepackage{braket} \usepackage{babel} \usepackage{color} \usepackage[colorlinks]{hyperref} \newcommand{\red}[1]{\textcolor{red}{#1}} \makeatletter \@ifundefined{textcolor}{} { \definecolor{BLACK}{gray}{0} \definecolor{WHITE}{gray}{1} \definecolor{RED}{rgb}{1,0,0} \definecolor{GREEN}{rgb}{0,1,0} \definecolor{BLUE}{rgb}{0,0,1} \definecolor{CYAN}{cmyk}{1,0,0,0} \definecolor{MAGENTA}{cmyk}{0,1,0,0} \definecolor{YELLOW}{cmyk}{0,0,1,0} } \usepackage{bm}\@ifundefined{definecolor}{\usepackage{color}}{} \newcommand{\mf}[1]{{\color{red} #1}} \newcommand{\nr}[1]{{\color{blue} #1}} \newcommand{\bhb}[1]{{\color{Orange} #1}} \newcommand{\bh}[1]{#1} \newcommand{\bhc}[1]{{\color{Orange} #1}} \newcommand{\DD}[0]{\mathcal{D}} \newcommand{\tr}[0]{\text{Tr}} \newcommand{\re}[1]{\text{Re}\!\left(#1\right)} \newcommand{\im}[1]{\text{Im}\!\left(#1\right)} \newcommand{\ee}[0]{\mathcal{E}} \newcommand{\bin}[0]{b_\mathrm{in}} \newcommand{\f@size{} pt}{\f@size{} pt} \newcommand{\kzero}[0]{\ket{0}_\alpha} \newcommand{\kone}[0]{\ket{1}_\alpha} \newcommand{\ba}[0]{\bm{a}} \newcommand{\bb}[0]{\bm{b}} \newcommand{\bq}[0]{\bm{q}} \newcommand{\bvarphi}[0]{\bm{\varphi}} \newcommand{\kappac}[0]{\kappa_\text{conf}} \newcommand{\kappae}[0]{\kappa_\text{err}} \newcommand{\kun}[0]{\kappa_a} \newcommand{\kde}[0]{\kappa_2} \newcommand{\ainf}[0]{\alpha_{\infty}} \newcommand{\ketbra}[2]{\ket{#1}\!\!\bra{#2}} \newcommand{\ef}[1]{{\color[rgb]{0.0,0.5,0.0}#1}} \newcommand{\sd}[1]{{\color[rgb]{0.0,0.0,1.0}#1}} \newcommand{\zl}[1]{{\color[rgb]{1.0,0.0,0.0}#1}} \newcommand{\zlt}[1]{{\color[rgb]{0.0,0.0,1.0}#1}} \newcommand{\rl}[1]{{\color[rgb]{1.0,0.0,1.0}#1}} \newcommand{\tc}[1]{{\color[rgb]{0.2,1,.2}#1}} \usepackage[T1]{fontenc} \usepackage[latin9]{inputenc} \usepackage{textcomp} \usepackage{color} \newcommand{\beginsupplement}{%
\setcounter{table}{0} \renewcommand{\thetable}{S\arabic{table}}% \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}}% \setcounter{equation}{0} \renewcommand{\theequation}{S\arabic{equation}} } \usepackage{babel} \makeatother \begin{document} \title{Exponential suppression of bit-flips in a qubit encoded in an oscillator} \author{Rapha\"el Lescanne$^{1,2}$, Marius Villiers$^{1,2}$, Th\'eau Peronnin$^{3}$, Alain Sarlette$^{2}$, Matthieu Delbecq$^{1}$, Benjamin Huard$^{3}$, Takis Kontos$^{1}$, Mazyar Mirrahimi$^{2}$, Zaki Leghtas$^{4, 1, 2}$} \affiliation{$^1$Laboratoire de Physique de l'Ecole Normale Sup\'erieure, ENS, Universit\'e PSL, CNRS, Sorbonne Universit\'e, Universit\'e Paris-Diderot, Sorbonne Paris Cit\'e, Paris, France} \affiliation{$^2$QUANTIC team, INRIA de Paris, 2 Rue Simone Iff, 75012 Paris, France} \affiliation{$^3$Universit\'e Lyon, ENS de Lyon, Universit\'e Claude Bernard Lyon 1, CNRS, Laboratoire de Physique, F-69342 Lyon, France} \affiliation{$^4$Centre Automatique et Syst\`emes, Mines-ParisTech, PSL Research University, 60, bd Saint-Michel, 75006 Paris, France} \begin{abstract} {A quantum system interacts with its environment, if ever so slightly, no matter how much care is put into isolating it. As a consequence, quantum bits (qubits) undergo errors, putting dauntingly difficult constraints on the hardware suitable for quantum computation. New strategies are emerging to circumvent this problem by encoding a qubit non-locally across the phase space of a physical system. Since most sources of decoherence are due to local fluctuations, the foundational promise is to exponentially suppress errors by increasing a measure of this non-locality. Prominent examples are topological qubits which delocalize quantum information over real space and where spatial extent measures non-locality.
In this work, we encode a qubit in the field quadrature space of a superconducting resonator endowed with a special mechanism that dissipates photons in pairs. This process pins down two computational states to separate locations in phase space. As we increase this separation, we measure an exponential decrease of the bit-flip rate while only linearly increasing the phase-flip rate. Since bit-flips are continuously and autonomously corrected at the single qubit level, only phase-flips are left to be corrected via a one-dimensional quantum error correction code. This exponential scaling demonstrates that resonators with non-linear dissipation are promising building blocks for universal fault-tolerant quantum computation with drastically reduced hardware overhead.} \end{abstract} \date{\today} \maketitle Protecting quantum states against decoherence is a fundamental problem in physics, and is pivotal for the future of quantum computing. The theory of quantum error correction (QEC) and its fault-tolerant implementation \cite{Shor1995, Steane1996} provides a solution. In QEC, groups of noisy physical qubits are arranged together to encode qubits with reduced noise, and fault-tolerance establishes that noisy quantum computers can operate reliably if the noise is below a threshold. A strong focus in quantum architecture design has been to increase this threshold to a value within experimental reach, {but the required hardware overhead remains daunting} \cite{Fowler2012}. Therefore, there is a pressing need for new ideas to encode and protect quantum information. Let us start by understanding why classical information is so stable. Consider a light switch, which has two stable states labeled 0 and 1. Their stability is provided by two properties. First, in order to change states one needs to apply a force to overcome an energy barrier, usually provided by the deformation of a spring. 
Second, friction between mechanical parts is essential for stability: when a perturbation randomly deviates the switch from its stable state, the gained entropy must be dissipated into a reservoir in order to {recover} the initial state. Can these two properties be transposed to protect quantum information? {The $\ket{0}$ and $\ket{1}$ states of a qubit}, such as electronic orbitals of an ion or energy levels of a non-linear resonator, {often} have overlapping supports in phase space. First, one needs to isolate the two states so that they no longer overlap \cite{Fluhmann2019, Campagne2019} and separate them by an energy barrier \cite{Brooks2013, Albrecht2016, Lin2018, Earnest2018, Smith2019, Puri2017}. The second property, friction (or dissipation) leaks information about the system and therefore seems incompatible with the requirement for a qubit to adopt quantum superpositions of states. Remarkably, there exists a dissipative mechanism, known as two-photon dissipation, which stabilizes the $\ket{0}$ and $\ket{1}$ states of a qubit without affecting quantum superpositions of the two \cite{Wolinsky1988}. {Recent superconducting circuit experiments \cite{Leghtas2015, Touzard2018a} have demonstrated that a resonator endowed with two-photon dissipation develops a manifold of steady states spanned by two states $\kzero$ and $\kone$, lying in two distinct locations of the resonator two-dimensional (2D) phase space. The combination of dissipation and non-locality should prevent random swaps between $\kzero$ and $\kone$ (bit-flips). However, the circuit architectures mediating the two-photon dissipation impinged errors on the resonator. 
These experiments fell short of crossing the demanding threshold where the correction is faster than the occurrence of all errors, including those induced by the correcting mechanism itself.} \begin{figure} \includegraphics[width=\columnwidth]{fig1.pdf} \caption{\textbf{The cat-qubit} (\textbf{a}) Quantum information is encoded in a resonator (blue mirrors) coupled to its environment through a special apparatus (hatched mirror) where pairs of photons are exchanged at rate $\kappa_2$ (double arrows). (\textbf{b}) This dynamics is illustrated by a pseudo-potential $V$ (purple) defined over the resonator {quadrature} phase space ($\beta$ plane). The cat-qubit states $\kzero$ and $\kone$ lie in the minima of $V$ and are separated in phase space as shown by their Wigner representations (stacked color plots). Bit-flip errors, which randomly swap $\kzero$ and $\kone$, are exponentially suppressed by increasing this separation. {Crucially, } this pseudo-potential does not alter quantum superpositions {of $\kzero$ and $\kone$} such as the Schr\"odinger cat state $\ket{+}_\alpha$.} \label{fig1} \end{figure} In this work, we measure an exponential {decrease of the bit-flip rate} as we increase the separation between states {$\kzero$ and $\kone$}, while only linearly increasing the phase-flip rate (errors {that} scramble the phase of a superposition of $\kzero$ and $\kone$). The bit-flip time reaches 1~ms, a 300-fold improvement over the energy decay time of the resonator. This was made possible by inventing a circuit which mediates a pristine non-linear coupling between the resonator and its environment, {thus circumventing the problems of previous implementations \cite{Leghtas2015,Touzard2018a}}. 
Our qubit combines two unique features: only phase-flips remain to be actively corrected \cite{Guillaud2019}, and its 2D phase space can be accessed to perform gates \cite{Mirrahimi2014, Grimm2019, Puri2019, Guillaud2019}, making it an ideal building block for scalable fault-tolerant quantum computation with a significant reduction in hardware overhead \cite{Guillaud2019}. We follow the paradigm of cat-qubits \cite{Leghtas2013, Mirrahimi2014} where information is encoded in quantum superpositions of resonator states (see Fig.~\ref{fig1}): \begin{eqnarray*} \ket{0}_\alpha &=& \frac{1}{\sqrt{2}}\left(\ket{+}_\alpha+\ket{-}_\alpha\right) = \ket{+\alpha} + \mathcal{O}(e^{-2|\alpha|^2})\\ \ket{1}_\alpha &=& \frac{1}{\sqrt{2}}\left(\ket{+}_\alpha-\ket{-}_\alpha\right) = \ket{-\alpha} + \mathcal{O}(e^{-2|\alpha|^2}) \label{eq:01} \end{eqnarray*} where $\ket{\pm}_\alpha=\mathcal{N_\pm}\left(\ket{\alpha}\pm\ket{-\alpha}\right)$, $\ket{\alpha}$ is a coherent state with complex amplitude $\alpha$, and $\mathcal{N_\pm}=1/\sqrt{2(1\pm e^{-2|\alpha|^2})}$. All these states contain an average number of photons $\approx|\alpha|^2$ for $|\alpha| > 1$. A significant source of errors in a resonator is energy decay which collapses all states ($\kzero$ and $\kone$ included) towards the vacuum, thus erasing any encoded information. This decay is balanced by a mechanism where the resonator exchanges only pairs of photons with its environment (Fig.~\ref{fig1}a) \cite{Wolinsky1988}, known as two photon dissipation. This dynamics is modeled by the following loss operator \begin{equation} \label{eq:H2} \bm{L}_2 = \sqrt{\kappa_2}\left(\ba^2-\alpha^2\right)\,, \end{equation} where $\ba$ is the annihilation operator of the resonator, $\kappa_2$ {is the rate at which pairs of photons are exchanged with the environment} and the term in $\alpha^2$ results from a drive which inserts pairs of photons \cite{supplement}. 
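Because $\bm{L}_2$ annihilates both coherent states $\ket{\pm\alpha}$, they (and any superposition of them) are dark states of the associated dissipator. This is easy to confirm numerically in a truncated Fock basis; the sketch below is our own illustration (truncation size and parameter values are illustrative choices, not taken from the experiment):

```python
import numpy as np
from math import factorial

# Truncated Fock-space model: L2 = sqrt(kappa2) (a^2 - alpha^2) annihilates
# the coherent states |+alpha> and |-alpha>, which are therefore steady
# states of the Lindblad dissipator D[L2].
N, alpha, kappa2 = 40, 2.0, 1.0
a = np.diag(np.sqrt(np.arange(1, N)), k=1)              # annihilation operator
L = np.sqrt(kappa2) * (a @ a - alpha ** 2 * np.eye(N))

def coherent(beta, N):
    """Normalized coherent state |beta> in the truncated Fock basis."""
    n = np.arange(N)
    v = beta ** n / np.sqrt(np.array([float(factorial(k)) for k in n]))
    return v / np.linalg.norm(v)

def dissipator(L, rho):
    """Lindblad dissipator D[L](rho) = L rho L^dag - (1/2){L^dag L, rho}."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

for sgn in (+1, -1):
    v = coherent(sgn * alpha, N)
    rho = np.outer(v, v.conj())
    assert np.linalg.norm(dissipator(L, rho)) < 1e-8    # numerically steady
```

By contrast, the vacuum is not steady under this dissipator, which is the confinement mechanism discussed next.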
The cat-qubit states $\kzero$, $\kone$ and all their superpositions are steady states of this dynamics. A convenient tool to visualize the semi-classical dynamics of \eqref{eq:H2} is the pseudo-potential $V$ defined over the complex plane as $-\nabla V(\beta)=\frac{d\beta}{dt}$, where $\beta$ is the expectation value of $\ba$ at time $t$ in a semi-classical approximation \cite{supplement}. Stable steady states are local minima of $V$ (see Fig.~\ref{fig1}b) and correspond to $\beta=\pm\alpha$. An error process can disrupt the stability of these states and induce transitions between them. By analogy with a particle in a double well potential, tunneling (or bit-flips) from one well to another is exponentially suppressed in the separation between the two wells (here defined as $|\alpha|^2$), as long as the error process fulfills two criteria: it has to be local and sufficiently weak. An error process is local if it transforms a state into neighboring states in phase space \cite{Gottesman2001}. As an example, dominant errors such as photon loss, gain and dephasing are local. Moreover, the effective error rate $\kappae$ must be weaker than the confining rate $\kappac=2|\alpha|^2\kappa_2$ \cite{supplement} inherited from the confining potential $V$, in order for the cat-qubit states to remain localized near the potential minima. The outstanding challenge to observe an exponential increase in the bit-flip time is therefore to engineer $\kappac>\kappae$ for all dominant local error processes. \begin{figure*} \includegraphics[width=\textwidth]{fig2.pdf} \caption{\textbf{Circuit diagram and implementation} (\textbf{a}) The cat-qubit resonator (blue) is coupled on one end to a transmon qubit and a readout resonator (green) to measure its Wigner function, and on the other end to the buffer (red), a lumped element resonator connected to ground through a non-linear element coined the Asymmetrically Threaded SQUID (ATS).
The ATS consists of a SQUID shunted by an inductance, forming two loops. Pumping the ATS at frequency $\omega_p=2\omega_a-\omega_b$ (purple arrow), where $\omega_{a,b}$ are the cat-qubit and buffer frequencies, mediates the exchange of two photons of the cat-qubit (blue arrows) with one photon of the buffer (red arrows). (\textbf{b}) False-color optical image of the ATS. The shunt inductance is made of an array of 5 Josephson junctions (marked by large red crosses). The left and right flux lines (purple) are connected to the same input through an on-chip hybrid (not represented). They carry the radio-frequency pump and the DC current $I_{_\Sigma}$, which thread both loops with flux $\varphi_{_\Sigma}$. The bottom flux line (yellow) carries current $I_{_\Delta}$ and threads each loop with flux $\pm\varphi_{_\Delta}$. Combining these two controls, we bias the ATS at the $\pi/0$ asymmetric DC working point. (\textbf{c})~Measured buffer frequency (color) as a function of $\varphi_{_\Sigma}$ (x-axis) and $\varphi_{_\Delta}$ (y-axis), around the working point $\varphi_{_\Sigma}, \varphi_{_\Delta}= \pi/2, \pi/2$ (white dot). As expected, for $\varphi_{_\Sigma}=\pi/2$ (open SQUID), the buffer frequency does not depend on $\varphi_{_\Delta}$. We operate the ATS by modulating the flux along the orthogonal direction $\varphi_{_\Sigma}$ (purple arrow). From this measurement, we extract all the ATS parameters \cite{supplement}.} \label{fig2} \end{figure*} Two-photon exchange between a resonator and its environment does not occur spontaneously. Instead, it is synthesized by engineering an interaction that exchanges pairs of photons of the cat-qubit resonator with one photon of an intentionally lossy mode referred to as the buffer \cite{Leghtas2015}.
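To make the pseudo-potential picture of Fig.~\ref{fig1}b concrete, the following sketch integrates an assumed mean-field reduction of the loss operator $\bm{L}_2$, namely $d\beta/dt=-\kappa_2\,\bar{\beta}(\beta^2-\alpha^2)$ (the exact semi-classical equations are derived in the supplement of the paper; this specific form is our assumption). Trajectories from generic initial conditions fall into the two wells at $\beta=\pm\alpha$:

```python
import numpy as np

kappa2, alpha = 1.0, 2.0        # arbitrary units; alpha taken real

def drift(beta):
    # Assumed mean-field drift -grad V(beta) for the loss operator
    # L2 = sqrt(kappa2) * (a^2 - alpha^2), with beta = <a>.
    return -kappa2 * np.conj(beta) * (beta ** 2 - alpha ** 2)

rng = np.random.default_rng(1)
betas = rng.normal(scale=1.5, size=8) + 1j * rng.normal(scale=1.5, size=8)
dt = 1e-3
for _ in range(20000):          # crude explicit-Euler integration
    betas = betas + dt * drift(betas)

# distance of each trajectory endpoint to the nearest well +/- alpha
dist = np.minimum(abs(betas - alpha), abs(betas + alpha))
```

The fixed points of this drift are $\beta=0$ (unstable) and the two stable wells $\beta=\pm\alpha$, which is the double-well structure sketched in Fig.~\ref{fig1}b.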
The interaction Hamiltonian takes the form \begin{equation} \label{eq:g2} \bm{H}_i/\hbar = g_2\ba^{\dag 2}\bm{b}+g_2^*\ba^2\bm{b}^\dag\,, \end{equation} where $\bm{b}$ is the annihilation operator of the buffer and $g_2$ is the interaction strength. Adding a resonant drive on the buffer, we recover \eqref{eq:H2} with $\kappa_2\approx{4|g_2|^2}/{\kappa_b}$ and $\alpha^2 = -{\epsilon_d}/{g_2^*}$, where $\epsilon_d$ is the drive amplitude and $\kappa_b$ is the buffer energy decay rate, {engineered to be larger than $g_2$} \cite{Carmichael2007, Leghtas2015}. Conveniently, the separation $|\alpha|^2$ between the cat-qubit states is readily tunable \textit{in situ} since it is proportional to the buffer drive amplitude. {We implement our cat-qubit in a circuit quantum electrodynamics architecture described in Fig~\ref{fig2}a {operated at 10 mK}. It consists of a sputtered niobium film on a silicon substrate patterned into coplanar waveguide resonators. The cat-qubit mode resonates at $\omega_a/2\pi = 8.0381$~GHz, has a single photon lifetime $T_1 = 3.0~\mu$s and is probed through a transmon qubit coupled to a readout resonator followed by a parametric amplifier. {At the flux operating point,} the buffer mode resonates at $\omega_b/2\pi=4.8336$~GHz and has an energy decay rate $\kappa_b/2\pi = 13$~MHz.} \begin{figure*} \includegraphics[width=\textwidth]{fig4.pdf} \caption{\textbf{Exponential increase of the bit-flip time with the cat size.} (\textbf{a}) The bit-flip time (y-axis) is measured (open circles) as a function of the cat size {defined as} $|\alpha|^2$ (x-axis). Up to $|\alpha|^2\approx 3.5$, $T_\text{bit-flip}$ undergoes an exponential increase to $\approx 0.8$~ms, rising by a factor of 4.2 per added photon (solid line). The bit-flip time then saturates (dashed line is a guide for the eye) for $|\alpha|^2\ge 5$ at $1~$ms, a factor of 300 larger than the cat-qubit resonator lifetime $T_1$ in the absence of the pump and drive. 
{Each circle is obtained from measurements such as in (b) for the circle indicated by the blue arrow}. (\textbf{b}) The cat-qubit is initialized in $\kzero$, for a cat size $|\alpha|^2 = 5.4$. After applying the pump and drive for a variable duration (x-axis), the population $P$ (y-axis) of $\kzero$ (top curve) and $\kone$ (bottom curve) is measured. The data (open circles) are fitted to decaying exponential functions (solid lines) from which we extract the bit-flip time. (\textbf{c}) Each panel displays the measured Wigner function of the cat-qubit after a pump and drive duration indicated on the right of each plot. {Labels 1-5 mark the correspondence with (b)}. The cat-qubit is initialized in $\kzero$ (top panel) and over a millisecond timescale, the population escapes towards $\kone$ (lower panels). The two-photon dissipation ensures that the cat-qubit resonator state remains entirely in the steady state manifold spanned by $\kzero$ and $\kone$.} \label{fig4} \end{figure*} It is a technical challenge to engineer the interaction \eqref{eq:g2} without inducing spurious effects which are detrimental for the protection of quantum information. Examples of such effects are induced relaxation \cite{Sank2016, Gao2018}, escape to unconfined states \cite{LescannePRApp2019} and quasiparticle generation \cite{Wang2014}. To mitigate these effects, the interaction \eqref{eq:g2} is induced by a {novel} non-linear dipole: the Asymmetrically Threaded SQUID (ATS, Fig~\ref{fig2}b). The ATS consists of a symmetric SQUID (Superconducting Quantum Interference Device) shunted in its center by a large inductance, {thus forming two loops. Here the inductance} is built from an array of 5 Josephson junctions. 
The ATS mediates an interaction of the form $U=-2E_{J}\cos(\varphi_{_\Sigma})\cos(\bvarphi+\varphi_{_\Delta})$, where $E_J$ is the Josephson energy of the SQUID junctions, $\bvarphi$ is the phase across the dipole, and $2\varphi_{_\Sigma,_\Delta}$ are the sum and difference of the fluxes threading the two loops \cite{supplement}. We bias the ATS at $\varphi_\Sigma = \varphi_\Delta = \pi/2$, or equivalently, we thread the left and right loops with flux $\pi$ and $0$, respectively. In addition, we drive the sum port with a radio-frequency flux pump $\epsilon(t)$. At this bias point, $U=-2E_{J}\sin(\epsilon(t))\sin(\bvarphi)$. The ATS is coupled to the buffer and cat-qubit, so that $\bvarphi$ is a linear combination of $\ba,\ba^\dag,\bb,\bb^\dag$, and $\sin(\bvarphi)$ contains only odd powers of these operators. The desired interaction \eqref{eq:g2} is present in the expansion of $\sin(\bvarphi)$, and is resonantly selected by a flux pump frequency $\omega_p=2\omega_a-\omega_b$ \cite{Vrajitoarea2018}. In contrast with previous strategies \cite{Leghtas2015, Touzard2018a}, the ATS mediates a pristine two-photon coupling, since \eqref{eq:g2} is the only leading-order non-rotating term, the presence of the inductive shunt prevents instabilities \cite{VerneyPRApp2019}, and the device operates at a first-order flux-insensitive point (Fig.~\ref{fig2}c). These features are key in order not to introduce inherent error processes that cannot be corrected by two-photon dissipation. The root advantage of the cat-qubit is that its computational states $\kzero$ and $\kone$ can be made arbitrarily long-lived simply by increasing the cat size $|\alpha|^2$, provided that $\kappac>\kappae$. In this experiment, the dominant error is due to energy decay so that $\kappae/2\pi=(2\pi T_1)^{-1}=53$~kHz \cite{supplement}, and $\kappac=2|\alpha|^2\kappa_2$ with a measured $\kappa_2/2\pi = 40$~kHz (from which we infer $g_2/2\pi = 360$~kHz).
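The quoted rates are mutually consistent, as a quick check of the relation $\kappa_2\approx 4|g_2|^2/\kappa_b$ shows; rearranging $\kappac>\kappae$ gives the cat size above which the confinement wins (numbers taken from the text, all rates given as frequency$/2\pi$ in Hz):

```python
# Numbers quoted in the text; kappa_2 ~ 4 |g2|^2 / kappa_b comes from the
# adiabatic elimination of the lossy buffer described above.
g2 = 360e3        # Hz  (g2 / 2 pi)
kappa_b = 13e6    # Hz  (buffer decay rate / 2 pi)
kappa_eff = 53e3  # Hz  (effective error rate from single-photon loss / 2 pi)

kappa_2 = 4 * g2 ** 2 / kappa_b              # ~40 kHz, as measured
nbar_threshold = kappa_eff / (2 * kappa_2)   # kappa_c > kappa_e above this |alpha|^2
```

The threshold evaluates to roughly $0.66$, consistent with the cat size of order $0.6$ quoted below as the onset of the protected regime.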
Hence, we enter the regime $\kappac>\kappae$ as soon as $|\alpha|^2 > 0.6$. We have measured that for each added photon in the cat-qubit state, the bit-flip time is multiplied by 4.2. This exponential scaling persists up to $|\alpha|^2\approx 3.5$, and the bit-flip time saturates for $|\alpha|^2\ge 5$ at 1 ms, a 300-fold improvement over the resonator intrinsic lifetime (see Fig.~\ref{fig4}). We expect a saturation {when the corrected bit flip rate reaches the rate of} residual errors which are not correctable, such as non-local errors. In the present experiment, we attribute this saturation to the coupling with the transmon employed for the resonator tomography \cite{supplement}, which has a thermal occupation of $1\%$, a lifetime $T_{1,q}=5~\mu$s and is dispersively coupled to the cat-qubit resonator with a rate $\chi/2\pi = 720~$kHz. Over a timescale in the millisecond range, the transmon acquires a thermal excitation {that} shifts the cat-qubit resonator frequency by $\chi$. This triggers a rotation of the resonator states which overcomes the confining potential since in this experiment $\chi \gg \kappac/2$ \cite{supplement} (note that tomography protocols compatible with smaller values of $\chi$ {have been recently demonstrated} \cite{Touzard2018b, Campagne2019}). During an average time $T_{1,q}$, the resonator states acquire an angle of order $\chi T_{1,q} \gg 2\pi$. When the transmon excitation decays, the rotation stops and the two-photon dissipation brings the resonator state back into the cat-qubit computational basis. By virtue of the dissipative nature of the protection mechanism, this process may result in a bit-flip but does not cause any leakage. \begin{figure*} \includegraphics[width=\textwidth]{fig3.pdf} \caption{\textbf{Linear increase of the phase-flip rate with the cat size}. (\textbf{a}) The phase-flip rate (y-axis) is measured as a function of the cat size $|\alpha|^2$. 
The data (open circles) follow a linear trend (solid line) as expected for the decay rate of a Schr\"odinger cat coherence $\Gamma_\text{phase-flip}=2|\alpha|^2/T_{1,\mathrm{eff}}$. We measure $T_{1,\mathrm{eff}}=2.0~\mu$s, comparable to the intrinsic resonator lifetime of $3.0~\mu$s. {Each circle is obtained from measurements such as in (b) for the circle indicated by the blue arrow}. (\textbf{b}) The cat-qubit is prepared in the initial states $\ket{\pm}_{\alpha}$, for a cat size $|\alpha|^2 = 2.6$. After applying the pump and drive for a variable duration (x-axis), $\braket{\sigma_x^{\alpha}}_\pm$ is measured for each initial state and the difference is represented on the y-axis. The $X$ Pauli operator of the cat-qubit $\sigma_x^{\alpha}$ corresponds to the photon number parity. The data (open circles) are fitted to a decaying exponential (solid line) from which we extract the phase-flip rate. (\textbf{c}) Each panel displays the measured Wigner function of the cat-qubit after a pump and drive duration indicated on the right of each plot. {Labels 1-5 mark the correspondence with (b)}. The cat-qubit is initialized in the $\ket{+}_{\alpha}$ state and the positive and negative fringes demonstrate the quantum nature of this initial state (top panel). The fringe contrast is reduced by single photon loss which mixes $\ket{+}_{\alpha}$ with $\ket{-}_{\alpha}$.} \label{fig3} \end{figure*} Schr\"odinger cat states like $\ket{\pm}_\alpha$ living in a resonator with a lifetime $T_1$, lose their coherence at a rate $2|\alpha|^2/T_1$ \cite{Raimond2006}. In the cat-qubit paradigm, this translates into a phase-flip rate which increases linearly with the cat size $|\alpha|^2$. In addition, our cat-qubit undergoes a flux pump, a drive and non-linear interactions, which could further increase the phase-flip rate. We measure the phase-flip rate for increasing $|\alpha|^2$ and confirm a linear scaling (Fig.~\ref{fig3}a). 
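The two scalings can be put side by side numerically (a sketch; the bit-flip prefactor $t_0$ below is our assumption, since the text fixes only the factor of 4.2 per photon and the measured saturation near 1~ms):

```python
T1_eff = 2.0e-6                      # s, effective lifetime from the fit of Fig. 3a

def phase_flip_rate(nbar):
    # Gamma_phase-flip = 2 nbar / T1_eff: linear in the cat size
    return 2 * nbar / T1_eff

def bit_flip_time(nbar, t0=3.0e-6):
    # Exponential scaling of Fig. 4a; the prefactor t0 is an assumption here
    # (taken as the bare resonator lifetime T1).
    return t0 * 4.2 ** nbar

rate = phase_flip_rate(2.6)          # cat size used in Fig. 3b
```

At $|\alpha|^2=2.6$ the phase-flip rate is $2.6\times10^6$/s, while the assumed bit-flip law already exceeds $10^{-4}$~s at $|\alpha|^2=3.5$, illustrating the trade-off of a linear cost for an exponential gain.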
Moving towards three dimensional cavities and engineering ever-improving non-linear interactions should decrease the phase-flip rate below a threshold where a line repetition code can actively correct remaining errors \cite{Guillaud2019}. In conclusion, we have observed the exponential {decrease of the bit-flip rate} between our cat-qubit states $\kzero$ and $\kone$, as a function of their separation in phase space, while only linearly increasing their phase-flip rate. {Such an exponential scaling is necessary to bridge the gap between the modest performance of quantum hardware and the exquisite performance needed for quantum computation \cite{Fowler2012}}. This was made possible by inventing a Josephson circuit which mediates a pristine non-linear coupling between our cat-qubit mode and its environment. {Further improving the lifetime of the cavity to the state of the art of a millisecond \cite{Reagor2016} and a cat size of $|\alpha|^2\approx 5$ (resp. 10) should lead to a bit-flip time of $\approx 1$ second (resp. 0.5 hour), and a phase-flip time of $\approx 100~\mu$s (resp. 50~$\mu$s).} With such a long bit-flip time, the entire effort of active QEC will be focused on correcting the only significant error: phase-flips. In addition, conditional rotations in the 2D phase space of our cat-qubit form a universal set of gates, thus bypassing the need for magic states. These features suggest a significant reduction in hardware overhead for QEC \cite{Guillaud2019}. \paragraph{Acknowledgements} The authors acknowledge fruitful discussions with Pierre Rouchon and Clarke Smith. ZL acknowledges support from ANR project ENDURANCE, and EMERGENCES grant ENDURANCE of Ville de Paris. AS acknowledges support from ANR project HAMROQS. The devices were fabricated within the consortium Salle Blanche Paris Centre. This work has been supported by the Paris \^Ile-de-France Region in the framework of DIM SIRTEQ. 
{\paragraph{Author contributions} RL designed, fabricated and measured the device, and analyzed the data. RL and ZL conceived the ATS element with help from BH and TP. RL and ZL wrote the paper with input from all authors. MV fabricated the parametric amplifier. TK and MD provided experimental support. AS and MM provided theory support. ZL managed the project. All authors contributed to extensive discussions of the results.}
using System;
using System.Collections;
using System.Reflection;
using NUnit.Core;
using NUnit.Core.Extensibility;

namespace NUnit.Util
{
	/// <summary>
	/// Summary description for AddinRegistry.
	/// </summary>
	public class AddinRegistry : MarshalByRefObject, IAddinRegistry, IService
	{
		#region Instance Fields
		private ArrayList addins = new ArrayList();
		#endregion

		#region IAddinRegistry Members
		public void Register(Addin addin)
		{
			addins.Add( addin );
		}

		public IList Addins
		{
			get { return addins; }
		}

		public bool IsAddinRegistered(string name)
		{
			return FindAddinByName(name) != null;
		}

		public void SetStatus( string name, AddinStatus status, string message )
		{
			Addin addin = FindAddinByName(name);
			if (addin != null)
			{
				addin.Status = status;
				addin.Message = message;
			}
		}

		private Addin FindAddinByName(string name)
		{
			foreach (Addin addin in addins)
				if (addin.Name == name)
					return addin;

			return null;
		}
		#endregion

		#region IService Members
		public void InitializeService()
		{
		}

		public void UnloadService()
		{
		}
		#endregion

		#region InitializeLifetimeService
		public override object InitializeLifetimeService()
		{
			return null;
		}
		#endregion
	}
}
The P400 class (also known as Super PATRA) is a class of patrol vessels of the French Navy. The class belongs to the Vigilante 400 family of patrol vessels. A total of 10 units were built for France and two more for Gabon. Their main tasks include protecting French territorial waters, firefighting and rescue operations. They serve primarily in the French departments. Foreign operators of the class are Gabon, Kenya and Ivory Coast.

Construction

Ten units of this class entered service between 1986 and 1988 – L'Audacieuse, La Boudeuse, La Capricieuse, La Fougeuse, La Glorieuse, La Gracieuse, La Moqueuse, La Railleuse, La Rieuse and Tapageuse. The patrol vessels General d'Armee Ba Oumar (P07) and Colonel Djoue Dabany (P08) were built for Gabon. All were built by the Constructions Mécaniques de Normandie (CMN) shipyard in Cherbourg.

Units of the P400 class:

Design

In addition to a crew of 24, they can carry up to 20 additional persons. Their armament consists of one 40mm gun, one 20mm gun and one 12.7mm machine gun. If needed, they can also carry two Exocet anti-ship missiles. Propulsion is provided by two diesel engines. The top speed is 24 knots. The range is 4,200 nautical miles at an economical speed of 15 knots.

References

Literature

External links

Patrol boat classes
Q: docker apache reverse proxy

I'm facing issues getting apache2 to work within docker and to serve as a reverse proxy towards another docker container that is running a Python based API.

My Dockerfile:

    FROM httpd:latest
    COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf
    EXPOSE 80

httpd.conf:

    Listen 80

    <VirtualHost *:80>
        ProxyPreserveHost On
        ProxyPass / http://127.0.0.1:8000/
        ProxyPassReverse / http://127.0.0.1:8000/
    </VirtualHost>

    LoadModule headers_module modules/mod_headers.so
    LoadModule authn_file_module modules/mod_authn_file.so
    LoadModule authn_core_module modules/mod_authn_core.so
    LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
    LoadModule authz_user_module modules/mod_authz_user.so
    LoadModule authz_core_module modules/mod_authz_core.so
    LoadModule auth_basic_module modules/mod_auth_basic.so
    LoadModule access_compat_module modules/mod_access_compat.so
    LoadModule log_config_module modules/mod_log_config.so
    LoadModule ssl_module modules/mod_ssl.so
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule unixd_module modules/mod_unixd.so

    <IfModule ssl_module>
    SSLRandomSeed startup builtin
    SSLRandomSeed connect builtin
    </IfModule>

    ServerAdmin you@example.com
    ServerRoot "/etc/httpd"

When I start the container and do a curl request to localhost, I get the error message:

    curl: (7) Failed to connect to localhost port 80: Connection refused

If I remove the COPY httpd.conf command from my Dockerfile before building the image, the docker container works without issue: out of the box I can see the default "It works!" page, with no proxy of course.
package com.hubrick.vertx.rest.common;

import com.hubrick.vertx.rest.RestClientResponse;
import io.vertx.core.Handler;
import io.vertx.core.MultiMap;

import java.util.List;

/**
 * @author Emir Dizdarevic
 * @since 1.0.0
 */
public class DefaultRxRestClientResponse<T> implements RestClientResponse<T> {

    private final RestClientResponse<T> decorated;

    public DefaultRxRestClientResponse(RestClientResponse<T> decorated) {
        this.decorated = decorated;
    }

    @Override
    public void exceptionHandler(Handler<Throwable> exceptionHandler) {
        throw new UnsupportedOperationException("This method is not supported. Use RxJava's doOnError method to handle exceptions");
    }

    @Override
    public int statusCode() {
        return decorated.statusCode();
    }

    @Override
    public String statusMessage() {
        return decorated.statusMessage();
    }

    @Override
    public MultiMap headers() {
        return decorated.headers();
    }

    @Override
    public MultiMap trailers() {
        return decorated.trailers();
    }

    @Override
    public List<String> cookies() {
        return decorated.cookies();
    }

    @Override
    public T getBody() {
        return decorated.getBody();
    }
}
\subsection{Coefficient-Based Error Estimators} \label{sec:coeff-based} \label{sec:ohara1969} \label{sec:oliver1972} \label{sec:berntsen1991} In 1969, \citeN{ref:OHara1969} publish a recursive adaptive quadrature routine based on Clenshaw-Curtis quadrature\index{Clenshaw-Curtis quadrature} rules \cite{ref:Clenshaw1960}. Their algorithm uses a cascade of error estimates based on pairs of Newton-Cotes and Clenshaw-Curtis quadrature rules and the final error estimate is computed as \begin{equation} \label{eqn:ohara_err3} \varepsilon_k = \frac{32}{(6^2 - 9)(6^2 - 1)} \left[ \left| \sideset{}{''}\sum_{i=1}^7 (-1)^{i-1}f_{l,i} \right| + \left| \sideset{}{''}\sum_{i=1}^7 (-1)^{i-1}f_{r,i} \right| \right] \end{equation} where $\Sigma''$ denotes a sum in which first and last terms are halved and where the $f_{l,i}$ and $f_{r,i}$ are the values of the integrand evaluated at the nodes of two 7-point Clenshaw-Curtis quadrature rules over the left and right halves of the interval respectively. These sums are the approximated Chebyshev coefficients\index{Chebyshev coefficients} $\tilde{c}_6$ of the integrand over the left and right half of the interval. The error estimate \eqn{ohara_err3} is derived by \citeN{ref:OHara1968} based on the error estimation used by \citeN{ref:Clenshaw1960}. They start by writing the error of a Clenshaw-Curtis quadrature rule over $n+1$ nodes as \begin{multline} \label{eqn:ohara_ccerr2} \lefteqn{\intfx{a}{b} - \mathsf{CC}^{(1)}_{n+1}[a,b] = } \\ (b-a)\left[ \frac{16n}{(n^2-1)(n^2 - 9)}c_{n+2} + \frac{32n}{(n^2 - 9)(n^2 - 25)}c_{n+4} + \dots \right] \end{multline} where the $c_k$ are the exact Chebyshev coefficients of \begin{equation*} f(x) = \sum_{k=0}^\infty c_kT_k(x) \end{equation*} where $T_k(x)$ is the $k$th Chebyshev polynomial of the first kind. They note that for most regular functions, the first term in \eqn{ohara_ccerr2} is often larger than the sum of the following terms. 
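In code, the approximated coefficients $\tilde{c}_k$, the resulting Clenshaw-Curtis rule and the O'Hara-Smith estimate can be sketched as follows (our implementation, on $[-1,1]$, using the halved-endpoint $\Sigma''$ convention and the value $K_6=0.12$ discussed below):

```python
import numpy as np

def cheb_coeffs(f, n):
    # Approximated Chebyshev coefficients c~_0..c~_n of f on [-1,1] from its
    # values at the n+1 Clenshaw-Curtis nodes cos(j pi / n); first and last
    # function values are halved (the double-primed sum).
    j = np.arange(n + 1)
    fx = f(np.cos(j * np.pi / n))
    fx[0] /= 2
    fx[-1] /= 2
    k = j[:, None]
    return (2.0 / n) * (np.cos(k * j * np.pi / n) @ fx)

def cc_integrate(f, n):
    # Clenshaw-Curtis approximation of int_{-1}^{1} f(x) dx on n+1 nodes,
    # using int T_k = 2/(1-k^2) for even k (0 for odd k).
    c = cheb_coeffs(f, n)
    c[0] /= 2
    c[-1] /= 2
    k = np.arange(0, n + 1, 2)
    return 2.0 * np.sum(c[k] / (1.0 - k ** 2))

def ohara_smith_estimate(f, n=6, K=0.12):
    # Error estimate from the last three even coefficients (our sketch of the
    # O'Hara-Smith construction); the factor 2 accounts for (b - a) = 2.
    c = np.abs(cheb_coeffs(f, n))
    scale = 2.0 * 16 * n / ((n ** 2 - 1) * (n ** 2 - 9))
    return scale * max(c[n], 2 * K * c[n - 2], 2 * K ** 2 * c[n - 4])

f = lambda x: np.exp(x)
err_true = abs(cc_integrate(f, 6) - (np.e - 1 / np.e))
eps = ohara_smith_estimate(f)
```

For the smooth test integrand $e^x$ the estimate is conservative: it bounds the true error of the 7-point rule by several orders of magnitude.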
They find that if they define the higher-order $|c_{2i}|$, $i>n+1$ in terms of $|c_{n+2}|$ using the recurrence relation $|c_{i+2}| = K_n|c_i|$, then they can define $K_n$ for different $n$ such that the first term of \eqn{ohara_ccerr2} dominates the series. For the 7-point Clenshaw-Curtis rule, this value is $K_6 = 0.12$. If the relation $|c_{i+2}| \leq K_{n}|c_i|$ holds, then the error is bounded by twice the first term of \eqn{ohara_ccerr2} \begin{equation*} \left| \intfx{a}{b} - \mathsf{CC}^{(1)}_{n+1}[a,b] \right| \leq (b-a)\frac{32n}{(n^2-1)(n^2-9)}|c_{n+2}|. \end{equation*} However, we do not know $c_{n+2}$, yet since we assume that the magnitude of the coefficients decays, we can assume that $|c_{n+2}| < |c_n| \approx \frac{1}{2}|\tilde{c}_n|$ and use $\frac{1}{2}|\tilde{c}_n|$. Since $|c_n|$ might be ``{\em accidentally small}'', they suggest, in \cite{ref:OHara1968}, as an error estimate \begin{equation} \label{eqn:ohara_errfinal} \varepsilon = (b-a)\frac{16n}{(n^2-1)(n^2-9)} \max \left\{ |\tilde{c}_n|, 2K_n|\tilde{c}_{n-2}|, 2K_n^2|\tilde{c}_{n-4}| \right\}. \end{equation} \citeN{ref:Oliver1972} presents a similar doubly-adaptive\index{doubly-adaptive} Clenshaw-Curtis\index{Clenshaw-Curtis quadrature} quadrature routine using an extension of the error estimate of O'Hara and Smith (see \sect{ohara1969}). Instead of assuming a constant $K_n$ such that $\left| c_{i+2} \right| \leq K_n \left|c_i\right|$ where the $c_i$ are the Chebyshev coefficients of the integrand, as do O'Hara and Smith, Oliver approximates the smallest rate of decrease of the coefficients as \begin{equation} \label{eqn:oliver_K} K = \max \left\{ \left| \frac{\tilde{c}_n}{\tilde{c}_{n-2}} \right| , \left| \frac{\tilde{c}_{n-2}}{\tilde{c}_{n-4}} \right| , \left| \frac{\tilde{c}_{n-4}}{\tilde{c}_{n-6}} \right| \right\} \end{equation} where the $\tilde{c}_i$ are the Chebyshev coefficients\index{Chebyshev coefficients} approximated over the nodes of the quadrature rule. 
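The observed rate $K$ amounts to a few lines (a sketch; it assumes the trailing coefficients are nonzero so that the ratios are well defined):

```python
import numpy as np

def oliver_decay_rate(c):
    # Largest of the last three ratios of every-second approximated Chebyshev
    # coefficients c~_0..c~_n; assumes the trailing coefficients are nonzero.
    n = len(c) - 1
    return max(abs(c[k] / c[k - 2]) for k in (n, n - 2, n - 4))

# Geometrically decaying coefficients c~_k = q^k have rate K = q^2.
c = 0.5 ** np.arange(11)
K = oliver_decay_rate(c)
# In Oliver's routine, K > K*_n triggers bisection of the interval, while
# K <= K*_n triggers a doubling of the order of the rule instead.
```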
He also pre-computes a number of convergence rates $K_n(\sigma)$, which are the rates of decay required such that, for $n$ coefficients, $\sigma$ times the first term of the error expansion in \eqn{ohara_ccerr2} dominates the sum of the remaining terms. If $K$ is less than any $K_n(\sigma)$ for $\sigma = 2$, $4$, $8$ or $16$, then the error estimate \begin{equation} \label{eqn:oliver_err} \varepsilon = \sigma (b-a) \frac{16n}{(n^2-1)(n^2-9)} \max \left\{ K|\tilde{c}_n| , K^2|\tilde{c}_{n-2}| , K^3|\tilde{c}_{n-4}| \right\}, \end{equation} which is consistent with \eqn{ohara_errfinal} by O'Hara and Smith, is used. If $\varepsilon$ exceeds the required local tolerance $\tau_k$, the computed rate of decrease $K$ is compared to a pre-computed limit $K^*_n$. This limit is defined by \citeN{ref:Oliver1971} as the rate of decrease of the Chebyshev coefficients as of which it is preferable to subdivide the interval as opposed to doubling the order of the quadrature rule. Therefore, if $K > K^*_n$, the interval is subdivided, otherwise the order of the Clenshaw-Curtis quadrature rule is doubled. Finally, \citeN{ref:Berntsen1991} present an error estimator based on sequences of null rules\index{null rules}. Introduced by \citeN{ref:Lyness1965}, a null rule $\mathsf{N}^{(k)}_n$ of degree $k$ is defined as a set of weights $u^{(k)}_i$ over the $n$ nodes $x_i$, $i=1\dots n$ such that \begin{equation} \label{eqn:null_rule} \sum_{i=1}^n u^{(k)}_i x_i^j = \left\{ \begin{array}{ll} 0, & j \leq k \\ \neq 0 & j = k+1 \end{array} \right. \end{equation} \ie the rule evaluates all polynomials of degree $j \leq k$ to $0$ and the $(k+1)\st$ monomial to some non-zero value. Berntsen and Espelid compute a sequence of {\em orthonormal}\footnote{ The null rules are normalized such that the norm of the coefficients is equal to the norm of the quadrature weights. 
} null rules of decreasing degree $\mathsf{N}^{(n-1)}_n$, $\mathsf{N}^{(n-2)}_n$, \dots , $\mathsf{N}^{(0)}_n$ which form an orthogonal basis $S_n$. Applying the null rules to the integrand $f(x)$ we obtain the {\em interpolation coefficients} $e_k = \mathsf{N}^{(k)}_n[a,b] = \sum_{i=1}^n u^{(k)}_i f(x_i)$ of the integrand $f(x)$ onto $S_n$ such that \begin{equation} \label{eqn:null_interp} f(x_i) = \frac{1}{\sum_{k=1}^n w_k^2}\sum_{k=0}^{n-1} e_k u^{(k)}_i, \quad i=1 \dots n. \end{equation} To avoid ``phase effects'' as described in \cite{ref:Lyness1976}, the coefficients are then paired and the ratio of these pairs is computed \begin{equation} \label{eqn:null_ratios} r_k = \frac{E_k}{E_{k+1}}, \quad E_k = \left( e_{2k}^2 + e_{2k+1}^2 \right)^{1/2}, \quad k=0\dots n/2 - 1. \end{equation} The largest of the last $K$ ratios $r_\mathsf{max} = \max_{k} r_k$ is taken as an estimate of the convergence rate of the coefficients. If this ratio is larger than $1$ then the function is assumed to be ``{\em non-asymptotic}'' in the interval and the largest $E_k$ is used as a local error estimate. If $r_\mathsf{max}$ is below $1$ yet still above some critical value $r_\mathsf{critical}$, the function is assumed to be ``{\em weakly asymptotic}'' and the value of the next-highest coefficient $E_{n/2+1}$ --- and thus the local error --- is estimated using \begin{equation} \label{eqn:null_err1} \varepsilon_k = 10 r_\mathsf{max} E_{n/2-1} \end{equation} Finally, if $r_\mathsf{max}$ is below the critical ratio, then the function is assumed to be ``{\em strongly asymptotic}'' and the error is estimated using \begin{equation} \label{eqn:null_err2} \varepsilon_k = 10 r_\mathsf{critical}^{1-\alpha} r_\mathsf{max}^\alpha E_{n/2-1}. 
\end{equation} where $\alpha \ge 1$ is chosen to reflect, as Berntsen and Espelid state, ``{\em the degree of optimism we want to put into this algorithm}.'' Berntsen and Espelid implement and test this error estimate using 21-point Gauss, Lobatto, Gauss-Kronrod \index{Gauss quadrature}\index{Kronrod extension}\index{Gauss-Lobatto quadrature}\index{Clenshaw-Curtis quadrature} and Clenshaw-Curtis quadrature rules as well as 61-point Gauss and Gauss-Kronrod rules, and later in {\tt DQAINT}\index{DQAINT@{\tt DQAINT}} \cite{ref:Espelid1992}, based on {\small QUADPACK}'s {\tt QAG}\index{QAG@{\tt QAG}} (see \sect{piessens1983}), using the Gauss, Gauss-Lobatto and Gauss-Kronrod rules over 21 nodes. This approach is then extended to Newton-Cotes rules of different degrees and tested against a number of different quadrature routines \cite{ref:Espelid2002,ref:Espelid2003,ref:Espelid2004,ref:Espelid2004b,ref:Espelid2007}. More recently, \citeN{ref:Battles2004} and \citeN{ref:Pachon2009} use a similar approach in the Chebfun system, in which arbitrary functions are represented as single or piecewise interpolants over Chebyshev nodes. These interpolations are considered to be sufficiently accurate in each interval when the absolute values of the highest-degree coefficients drop below a given tolerance. The integral of these interpolants can then be computed using Clenshaw-Curtis quadrature over the interpolation nodes, resulting in an adaptive quadrature scheme of sorts, although this is not the only purpose of the Chebfun system. \section{Comparison} \label{sec:compare} In the following, we will compare the performance of some of the error estimation techniques presented in \S\sect{linear} and \ref{sec:non-linear}, including the new error estimator presented in \sect{new}. 
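Before turning to the comparison, the null-rule machinery above can be made concrete. The following sketch builds orthonormal null rules from a QR factorization of the Vandermonde matrix on 21 Clenshaw-Curtis nodes (we use a unit-norm normalization, which differs from Berntsen and Espelid's convention only by a common factor) and shows how the values $e_k$ separate smooth from discontinuous integrands:

```python
import numpy as np

def null_rules(x):
    # Column k+1 of Q is orthogonal to the monomials 1, x, ..., x^{k} at the
    # nodes x but not to x^{k+1}: exactly the defining property of a null
    # rule of degree k.  Row k of the returned array has degree k.
    n = len(x)
    Q, _ = np.linalg.qr(np.vander(x, n, increasing=True))
    return Q[:, 1:].T                      # rules of degree 0 .. n-2

x = np.cos(np.pi * np.arange(21) / 20)     # nodes of the 21-point CC rule
U = null_rules(x)

# e_k = N^(k)[f]: for a smooth integrand the high-degree values decay very
# fast, for a discontinuous one they do not -- this contrast is what the
# ratios r_k of the error estimator detect.
e_smooth = U @ np.exp(x)
e_rough = U @ np.sign(x - 0.3)
```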
\subsection{Methodology} \label{sec:method} Whereas other authors \cite{ref:Casaletto1969,ref:Hillstrom1970,ref:Kahaner1971,ref:Malcolm1975,ref:Robinson1979,ref:Krommer1998,ref:Favati1991} have focused on comparing different algorithms as a whole, using sets of functions chosen to best represent typical integrands, we will focus here only on specific error estimators and on integrands chosen such that they specifically should or should not cause the error estimator to fail. For these test functions we will not consider the usual metrics of efficiency, \ie number of function evaluations required for a given accuracy, but the number of correct estimates, {\em false negatives} and {\em false positives} for each error estimator when integrating functions which it should or should not integrate correctly, respectively. We define a {\em false positive} as a returned error estimate which is {\em below} the required tolerance when the actual error is {\em above} the latter. Likewise, a {\em false negative} is a returned error estimate which is {\em above} the required tolerance when the actual error is {\em below} the latter. In practical terms, false negatives are a sign that the error estimator is overly cautious and continues to refine an interval even though the required tolerance would already have been achieved. False positives, however, may cause the algorithm to fail completely: if the actual error in a sub-interval is larger than the global tolerance, no amount of excess precision in the other intervals will fix it and the result will be incorrect, save an identical false positive elsewhere of opposite sign.
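These two failure modes can be stated as a small decision rule (a sketch; the names are ours):

```python
def classify(estimate, actual, tol):
    # Outcome of a single error estimate, following the definitions above.
    if estimate <= tol < actual:
        return "false positive"   # dangerous: the interval is wrongly accepted
    if actual <= tol < estimate:
        return "false negative"   # merely wasteful: refinement continues
    return "correct"
```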
The test integrands, along with an explanation of why they were chosen, are: \begin{enumerate} \item $p_n(x)$: The Chebyshev polynomial of degree $n$ in the interval $[-\alpha,\beta]$, where $\alpha$ and $\beta$ are chosen randomly in $(0,1]$ and $n$ is the degree of the quadrature rule for which the error estimate is computed\footnote{ For error estimates computed from the difference of two quadrature rules of different degree, the degree of the quadrature rule of lower degree is used since although the result rule of higher degree is effectively used for the returned integrand, the error estimate is usually understood to be that of the lower-degree rule.}. The polynomial is shifted by $+1$ to avoid an integral of zero. \item $p_{n+1}(x)$: Same as the function above, yet one degree above the degree of the quadrature rule. Although this integrand is, by design, beyond the degree of the quadrature rule, the error term (\ie the $n+1\st$ derivative) is constant and can be extrapolated reliably\footnote{\eg as is done implicitly in {\tt SQUANK} (see \sect{lyness1969}, (\ref{eqn:lyness_err})) or explicitly in Ninomiya's error estimator (see \sect{ninomiya1980}, (\ref{eqn:ninomiya_diff}))}. \item $p_{n+2}(x)$: Same as the function above, yet two degrees above the degree of the quadrature rule. By design, the $n+1$st derivative is linear in $x$ and changes sign inside the interval, meaning that any attempt to extrapolate that derivative from two estimates of equal degree may fail. \item $d_k(x)$: A function with a discontinuity at $x=\alpha$ in the $k$th derivative, where $\alpha$ is chosen randomly in the interval of integration $[-1,1]$ for $k=0,1$ and $2$: % \begin{eqnarray} d_0(x) & = & \left\{\begin{array}{ll} 0 & x < \alpha \\ 1 & \mbox{otherwise} \end{array}\right. 
\\ d_1(x) & = & \max\left\{0,x-\alpha\right\} \\ d_2(x) & = & \left( \max\left\{0,x-\alpha\right\} \right)^2 \end{eqnarray} Since all quadrature rules considered herein are interpolatory in nature and these integrands cannot be reliably interpolated, these functions will only be correctly integrated by chance\footnote{ The only exception is {\tt CADRE} (see \sect{deboor1971}), which attempts to detect jump discontinuities explicitly}. \item $s(x)$: A function with an integrable singularity at $x=\alpha$, where $\alpha$ is chosen randomly in $(-1,1)$: % \begin{equation*} s(x) = |x-\alpha|^{-1/2} \end{equation*} % As with the previous set of functions, this function cannot be reliably interpolated and an interpolatory quadrature rule will produce a correct result only by chance\footnote{ The only exception is again {\tt CADRE}, which treats such singularities explicitly when detected (see \sect{deboor1971}), in cases where $\alpha$ is near the edges of the domain}. \end{enumerate} These functions were tested for $10\,000$ realizations of the random parameters $\alpha$ and $\beta$ for each of the relative tolerances $\tau = 10^{-1}$, $10^{-3}$, $10^{-6}$, $10^{-9}$ and $10^{-12}$. Since most error estimators use absolute tolerances, the tolerance was set to the respective fraction of the integral. The following representative\footnote{ For compactness, the results for similar error estimators were omitted.
The results for most other error estimators can be found in \cite{ref:Gonnet2009}.} error estimators were implemented in Matlab (2007a, The MathWorks, Natick, MA.)\footnote{The Matlab source code of each routine tested is available from this author online at {\tt http://people.inf.ethz.ch/gonnetp/csur/}.} and tested: \begin{enumerate} \item Kuncir's error estimate (\sect{kuncir1962}, (\ref{eqn:kuncir_err})), where $n=3$ is the degree of the composite Simpson's rules used, \item Oliver's error estimate (\sect{oliver1972}, (\ref{eqn:oliver_err})), starting with a Clenshaw-Curtis rule of degree 3, where $n=9$ is the degree of the second-last rule used and the first error estimate below tolerance is returned or $2\tau$ if the interval is to be subdivided, \item {\small QUADPACK}'s {\tt QAG} error estimator (\sect{piessens1983}, (\ref{eqn:quadpack_err})) using the 10-point Gauss quadrature rule with $n=19$ and its 21-point Kronrod extension, \item Berntsen and Espelid's null-rule error estimate (\sect{berntsen1991}, (\ref{eqn:null_err1}) and (\ref{eqn:null_err2})) using, as a basic quadrature rule, the 21-point Clenshaw-Curtis quadrature rule\footnote{the 21-point Gauss quadrature rule was also tried but left out since it produced worse results, \ie more false positives.} with $n=21$ and values $K=3$, $r_\mathsf{critical} = 1/4$ and $\alpha = 1/2$. 
\item Gander and Gautschi's error estimate as implemented in Matlab's {\tt quadl} (\sect{gander2001b}, (\ref{eqn:gander_err2})) using the 4-point Gauss-Lobatto quadrature rule with $n=5$ and its 7-point Kronrod extension, \item Laurie's sharper error estimate (\sect{laurie1983}, (\ref{eqn:laurie_err})) using the 10-point Gauss quadrature rule with $n=19$ and its 21-point Kronrod extension for the two rules $\mathsf{Q}_\beta$ and $\mathsf{Q}_\alpha$ respectively, as suggested by \citeN{ref:Laurie1985} himself, \item The trivial error estimate (\sect{new}, (\ref{eqn:new_err2})) using the nodes of the $n=n_1=11$ and $n_2=21$-point Clenshaw-Curtis quadrature rules to compute the two interpolations $g^{(1)}_{n_1}(x)$ and $g^{(2)}_{n_2}(x)$ respectively. \item The more refined error estimate (\sect{new}, (\ref{eqn:new_eps2})) using the nodes of an 11-point Clenshaw-Curtis quadrature rule with $n=10$ and one level of recursion to obtain $\mathbf c^\mathsf{old}$, as well as 1.1 for the constant $\vartheta_1$ in (\ref{eqn:err_test}). \end{enumerate} \subsection{Results} The results of the tests described in \sect{method} are shown in Tables~\ref{tab:res_kuncir1962} to \ref{tab:res_gonnet2008b}. For each integrand and tolerance, the percentage of correct integrations is given (\ie the error estimate and the actual error are both below the required tolerance), as well as, in brackets, the percentage of false positives and false negatives respectively. 
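For reference, the discontinuous and singular test integrands of \sect{method}, together with the exact integrals over $[-1,1]$ used to classify each outcome, can be written down directly (a sketch of the test setup, not the original Matlab code; the helper names are ours):

```python
import random

# Discontinuity in the k-th derivative at x = a (k = 0, 1, 2).
def d0(x, a): return 0.0 if x < a else 1.0
def d1(x, a): return max(0.0, x - a)
def d2(x, a): return max(0.0, x - a)**2

# Integrable singularity at x = a.
def s(x, a): return abs(x - a)**-0.5

# Exact integrals over [-1, 1], needed to judge each returned estimate.
def exact_d0(a): return 1.0 - a
def exact_d1(a): return 0.5*(1.0 - a)**2
def exact_d2(a): return (1.0 - a)**3/3.0
def exact_s(a):  return 2.0*((1.0 + a)**0.5 + (1.0 - a)**0.5)

random.seed(1)
a = random.uniform(-1.0, 1.0)   # one of the 10 000 random realizations
```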
\begin{table} \begin{tiny} \begin{center}\begin{tabular}{lccccc} Function & $\tau=10^{-1}$ & $\tau=10^{-3}$ & $\tau=10^{-6}$ & $\tau=10^{-9}$ & $\tau=10^{-12}$ \\ \hline $p_{n}(x)$ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $\\ $p_{n+1}(x)$ & $ 65.67\,( 0 / 34.33 ) $ & $ 8.49\,( 0 / 21.14 ) $ & $ 0.38\,( 0 / 0.76 ) $ & $ 0.01\,( 0 / 0.02 ) $ & $ 0\,( 0 / 0.01 ) $\\ $p_{n+2}(x)$ & $ 51.07\,( 0 / 48.93 ) $ & $ 8.44\,( 0 / 15.77 ) $ & $ 0.50\,( 0 / 1.07 ) $ & $ 0.03\,( 0 / 0.11 ) $ & $ 0.02\,( 0 / 0 ) $\\ $d_0(x)$ & $ 16.58\,( 0 / 22.27 ) $ & $ 0\,( 0 / 0.35 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_1(x)$ & $ 44.92\,( 0 / 29.98 ) $ & $ 0.73\,( 0 / 1.59 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_2(x)$ & $ 54.30\,( 0 / 22.66 ) $ & $ 5.74\,( 0 / 7.16 ) $ & $ 0.22\,( 0 / 0.12 ) $ & $ 0.01\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $s(x)$ & $ 0\,( 33.05 / 17.11 ) $ & $ 0\,( 0.42 / 0.20 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ \hline \end{tabular}\end{center} \end{tiny} \caption{Results for Kuncir's 1962 error estimate.} \label{tab:res_kuncir1962} \end{table} Despite the low degree of the quadrature rule and its simplicity, Kuncir's error estimate (\sect{kuncir1962}) performs rather well: almost all functions return no false positives and relatively few false negatives. Only the singularity returns false positives for $\tau=10^{-1}$ in more than a third of the cases. 
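Kuncir's scheme itself is easily sketched: one Simpson step is compared against two half-width steps, and their difference serves as the error estimate of the refined value (the unscaled difference is used in this illustration, standing in for the exact scaling of (\ref{eqn:kuncir_err})):

```python
import math

def simpson(f, a, b):
    """Simpson's rule on a single interval."""
    return (b - a)/6.0*(f(a) + 4.0*f(0.5*(a + b)) + f(b))

def kuncir(f, a, b):
    """One Simpson step versus two half-width steps; the (unscaled)
    difference serves as the error estimate of the refined value."""
    m = 0.5*(a + b)
    coarse = simpson(f, a, b)
    fine = simpson(f, a, m) + simpson(f, m, b)
    return fine, abs(fine - coarse)

# For a smooth integrand the estimate bounds the actual error comfortably.
q, est = kuncir(math.exp, 0.0, 1.0)
actual = abs(q - (math.e - 1.0))
```

For smooth integrands the difference over-estimates the error of the refined value by roughly a factor of 15, which is consistent with the large share of false negatives and the absence of false positives in Table~\ref{tab:res_kuncir1962}.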
\begin{figure} \centerline{\epsfig{file=oliver1972_err.eps,width=0.48\textwidth}\hfill\epsfig{file=berntsen1991_err.eps,width=0.48\textwidth}} \caption{The integrand assumed by the 9-point Clenshaw-Curtis rule (left, dotted) used in Oliver's 1972 error estimate and the 21-point Clenshaw-Curtis rule (right, dotted) used in Berntsen and Espelid's 1991 error estimate for the singular integrand $s(x)$ (solid).} \label{fig:oliver1972_err} \end{figure} \begin{table} \begin{tiny} \begin{center}\begin{tabular}{lccccc} Function & $\tau=10^{-1}$ & $\tau=10^{-3}$ & $\tau=10^{-6}$ & $\tau=10^{-9}$ & $\tau=10^{-12}$ \\ \hline $p_{n}(x)$ & $ 65.69\,( 2.40 / 31.91 ) $ & $ 22.20\,( 0.25 / 77.55 ) $ & $ 8.67\,( 0 / 91.33 ) $ & $ 2.77\,( 0 / 97.23 ) $ & $ 0.72\,( 0 / 99.28 ) $\\ $p_{n+1}(x)$ & $ 55.07\,( 3.87 / 41.06 ) $ & $ 18.34\,( 0.22 / 69.25 ) $ & $ 6.03\,( 0 / 21.70 ) $ & $ 1.18\,( 0 / 5.57 ) $ & $ 0.23\,( 0 / 1.13 ) $\\ $p_{n+2}(x)$ & $ 49.62\,( 5.79 / 44.59 ) $ & $ 14.93\,( 0.30 / 64.31 ) $ & $ 5.72\,( 0 / 18.04 ) $ & $ 1.52\,( 0 / 5.15 ) $ & $ 0.50\,( 0 / 1.58 ) $\\ $d_0(x)$ & $ 20.44\,( 0 / 35.08 ) $ & $ 0\,( 0 / 0.64 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_1(x)$ & $ 71.27\,( 0.86 / 18.23 ) $ & $ 3.60\,( 6.96 / 10.86 ) $ & $ 0\,( 0 / 0.03 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_2(x)$ & $ 78.09\,( 0 / 16.14 ) $ & $ 23.55\,( 5.33 / 18.77 ) $ & $ 0.35\,( 0 / 0.90 ) $ & $ 0.01\,( 0 / 0.03 ) $ & $ 0\,( 0 / 0 ) $\\ $s(x)$ & $ 2.06\,( 66.71 / 15.27 ) $ & $ 0\,( 0.60 / 0.23 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ \hline \end{tabular}\end{center} \end{tiny} \caption{Results for Oliver's 1972 error estimate.} \label{tab:res_oliver1972} \end{table} Oliver's 1972 error estimate (\sect{oliver1972}) mis-predicts the errors for all three polynomials $p_n(x)$, $p_{n+1}(x)$ and $p_{n+2}(x)$, due to the large higher-degree coefficients of the integrands. 
The false positives are cases where the doubly-adaptive algorithm exited after incorrectly predicting the error with a lower-order rule. This is also true for the discontinuities $d_0(x)$, $d_1(x)$ and $d_2(x)$, which are detected well by the higher-order rules since the higher-degree Chebyshev coefficients become relatively large, yet fail when the error is mis-predicted by the lower-degree rules. The algorithm fails when integrating the singularity $s(x)$, since the coefficients of the interpolation often decay smoothly, misleading it to believe the integrand itself is smooth (see Fig.~\ref{fig:oliver1972_err}, left). \begin{table} \begin{tiny} \begin{center}\begin{tabular}{lccccc} Function & $\tau=10^{-1}$ & $\tau=10^{-3}$ & $\tau=10^{-6}$ & $\tau=10^{-9}$ & $\tau=10^{-12}$ \\ \hline $p_{n}(x)$ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $\\ $p_{n+1}(x)$ & $ 84.04\,( 0 / 15.96 ) $ & $ 70.01\,( 0 / 29.99 ) $ & $ 47.75\,( 0 / 52.25 ) $ & $ 30.61\,( 0 / 69.39 ) $ & $ 18.19\,( 0 / 81.81 ) $\\ $p_{n+2}(x)$ & $ 76.68\,( 0 / 23.32 ) $ & $ 60.87\,( 0 / 39.13 ) $ & $ 38.91\,( 0 / 61.09 ) $ & $ 25.60\,( 0 / 74.40 ) $ & $ 16.22\,( 0 / 83.78 ) $\\ $d_0(x)$ & $ 6.04\,( 0.32 / 79.64 ) $ & $ 0.11\,( 0.29 / 2.06 ) $ & $ 0\,( 0.49 / 0 ) $ & $ 0\,( 0.45 / 0 ) $ & $ 0\,( 0.38 / 0 ) $\\ $d_1(x)$ & $ 22.50\,( 0.21 / 76.36 ) $ & $ 1.43\,( 0.35 / 44.96 ) $ & $ 0.12\,( 0.45 / 0.22 ) $ & $ 0.01\,( 0.52 / 0 ) $ & $ 0\,( 0.44 / 0 ) $\\ $d_2(x)$ & $ 57.99\,( 0.18 / 41.19 ) $ & $ 15.36\,( 0.28 / 67.99 ) $ & $ 0.79\,( 0.30 / 5.23 ) $ & $ 0.09\,( 0.34 / 0 ) $ & $ 0.03\,( 0.48 / 0 ) $\\ $s(x)$ & $ 0.26\,( 0.54 / 62.29 ) $ & $ 0\,( 0.03 / 0.35 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ \hline \end{tabular}\end{center} \end{tiny} \caption{Results for Piessens \ea's 1983 error estimate.} \label{tab:res_piessens1983} \end{table} QUADPACK's error estimate (\sect{piessens1983}) does a very good job over all functions 
(Table~\ref{tab:res_piessens1983}). The error estimate generates a high number of false negatives for the polynomials $p_{n+1}(x)$ and $p_{n+2}(x)$ since the quadrature rule used to approximate the integral is several degrees more exact than that for which the returned error estimate is computed. The few false positives are due to the error estimate's scaling of the error, which can cause it to under-predict the actual error, and to cases where the discontinuity at $\alpha$ lay outside of the open nodes of the quadrature rule. The false positives for the discontinuities $d_0(x)$, $d_1(x)$ and $d_2(x)$ and the singularity $s(x)$ at $\tau=10^{-1}$ are due to accidentally small differences between the Gauss and Gauss-Kronrod approximations. \begin{table} \begin{tiny} \begin{center}\begin{tabular}{lccccc} Function & $\tau=10^{-1}$ & $\tau=10^{-3}$ & $\tau=10^{-6}$ & $\tau=10^{-9}$ & $\tau=10^{-12}$ \\ \hline $p_{n}(x)$ & $ 51.98\,( 0 / 48.02 ) $ & $ 23.69\,( 0 / 76.31 ) $ & $ 8.15\,( 0 / 91.85 ) $ & $ 2.56\,( 0 / 97.44 ) $ & $ 0.97\,( 0 / 99.03 ) $\\ $p_{n+1}(x)$ & $ 48.42\,( 0 / 51.58 ) $ & $ 21.97\,( 0 / 78.03 ) $ & $ 7.24\,( 0 / 78.24 ) $ & $ 2.13\,( 0 / 54.11 ) $ & $ 0.84\,( 0 / 29.48 ) $\\ $p_{n+2}(x)$ & $ 43.89\,( 0 / 56.11 ) $ & $ 20.23\,( 0 / 79.77 ) $ & $ 6.77\,( 0 / 71.77 ) $ & $ 2.34\,( 0 / 45.22 ) $ & $ 0.73\,( 0 / 26.05 ) $\\ $d_0(x)$ & $ 53.45\,( 0 / 31.20 ) $ & $ 0\,( 0 / 1.86 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_1(x)$ & $ 85.10\,( 0 / 13.32 ) $ & $ 3.76\,( 0 / 41.23 ) $ & $ 0\,( 0 / 0.26 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_2(x)$ & $ 90.18\,( 0 / 8.94 ) $ & $ 34.92\,( 0 / 47.13 ) $ & $ 0.27\,( 0 / 5.23 ) $ & $ 0\,( 0 / 0.11 ) $ & $ 0\,( 0 / 0 ) $\\ $s(x)$ & $ 13.03\,( 28.88 / 45.80 ) $ & $ 0\,( 0 / 0.34 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ \hline \end{tabular}\end{center} \end{tiny} \caption{Results for Berntsen and Espelid's 1991 error estimate.} \label{tab:res_berntsen1991} \end{table} Berntsen
and Espelid's null-rule error estimate (\sect{berntsen1991}) suffers from the same problems as Oliver's error estimate for the polynomial $p_n(x)$: Although the integration is exact, the coefficients $\tilde{c}_i$ increase towards $i=n$, leading the algorithm to believe that the $n+1\st$ coefficient will be large when it is, in fact, zero. The algorithm mis-predicts the error for the singularity $s(x)$ for the same reason as Oliver's algorithm, namely that the coefficients of the polynomial interpolation decrease smoothly, falsely indicating convergence (see Fig.~\ref{fig:oliver1972_err}, right). \begin{table} \begin{tiny} \begin{center}\begin{tabular}{lccccc} Function & $\tau=10^{-1}$ & $\tau=10^{-3}$ & $\tau=10^{-6}$ & $\tau=10^{-9}$ & $\tau=10^{-12}$ \\ \hline $p_{n}(x)$ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $\\ $p_{n+1}(x)$ & $ 80.08\,( 0 / 19.92 ) $ & $ 17.69\,( 0 / 82.31 ) $ & $ 0.56\,( 0 / 99.44 ) $ & $ 0\,( 0 / 100 ) $ & $ 0\,( 0 / 99.99 ) $\\ $p_{n+2}(x)$ & $ 68.15\,( 0 / 31.85 ) $ & $ 17.88\,( 0 / 82.12 ) $ & $ 2.46\,( 0 / 97.54 ) $ & $ 0.33\,( 0 / 99.67 ) $ & $ 0.08\,( 0 / 99.92 ) $\\ $d_0(x)$ & $ 10.33\,( 0 / 39.32 ) $ & $ 0\,( 0 / 0.59 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_1(x)$ & $ 63.43\,( 2.32 / 23.63 ) $ & $ 0.70\,( 1.33 / 9.97 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_2(x)$ & $ 68.98\,( 0 / 19.77 ) $ & $ 8.69\,( 0.03 / 25.79 ) $ & $ 0.31\,( 0 / 0.13 ) $ & $ 0.02\,( 0 / 0.01 ) $ & $ 0\,( 0 / 0 ) $\\ $s(x)$ & $ 0\,( 44.15 / 22.67 ) $ & $ 0\,( 0.50 / 0.22 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ \hline \end{tabular}\end{center} \end{tiny} \caption{Results for Gander and Gautschi's 2001 error estimate.} \label{tab:res_gander2001} \end{table} Gander and Gautschi's error estimate (\sect{gander2001b}) generates a high number of false negatives for $p_{n+1}(x)$ and $p_{n+2}(x)$, due to the higher degree of the estimate 
effectively returned. The error estimation returns some false positives for the discontinuities $d_0(x)$, $d_1(x)$ and $d_2(x)$, as well as for the singularity $s(x)$, due to the difference between the two quadrature rules used being ``accidentally small'' (\eg Fig.~\ref{fig:gander2001_err}). \begin{figure} \centerline{\epsfig{file=gander2001_err.eps,width=0.6\textwidth}} \caption{The integrands assumed by the Gauss-Lobatto (dashed) and Gauss-Kronrod (dotted) quadrature rules in Gander and Gautschi's 2001 error estimate over the discontinuous integrand $d_1(x)$ (solid).} \label{fig:gander2001_err} \end{figure} \begin{table} \begin{tiny} \begin{center}\begin{tabular}{lccccc} Function & $\tau=10^{-1}$ & $\tau=10^{-3}$ & $\tau=10^{-6}$ & $\tau=10^{-9}$ & $\tau=10^{-12}$ \\ \hline $p_{n}(x)$ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $\\ $p_{n+1}(x)$ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $\\ $p_{n+2}(x)$ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $\\ $d_0(x)$ & $ 30.26\,( 0.09 / 62.46 ) $ & $ 0.12\,( 0.09 / 3.93 ) $ & $ 0\,( 0.18 / 0.01 ) $ & $ 0\,( 0.20 / 0 ) $ & $ 0\,( 0.24 / 0 ) $\\ $d_1(x)$ & $ 36.78\,( 0.07 / 62.75 ) $ & $ 24.67\,( 3.78 / 48.51 ) $ & $ 0.25\,( 1.14 / 0.55 ) $ & $ 0\,( 0.41 / 0.01 ) $ & $ 0\,( 0.46 / 0 ) $\\ $d_2(x)$ & $ 44.81\,( 0.11 / 54.70 ) $ & $ 40.21\,( 0.94 / 51.18 ) $ & $ 3.52\,( 4.74 / 15.25 ) $ & $ 0.14\,( 0.13 / 0.16 ) $ & $ 0.03\,( 0.32 / 0 ) $\\ $s(x)$ & $ 25.01\,( 0.06 / 64.82 ) $ & $ 0\,( 4.52 / 0.52 ) $ & $ 0\,( 0.03 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ \hline \end{tabular}\end{center} \end{tiny} \caption{Results for Laurie's 1983 error estimate.} \label{tab:res_laurie1983} \end{table} Laurie's error estimate (\sect{laurie1983}) is exact even for the polynomials $p_{n+1}(x)$ and $p_{n+2}(x)$: despite being of higher degree than the 
second-highest degree rule, the error of the highest-degree rule is correctly extrapolated. The discontinuities $d_0(x)$, $d_1(x)$ and $d_2(x)$ and the singularity $s(x)$ are not always detected since the condition in (\ref{eqn:laurie_cond3}) holds in some cases where the necessary condition in (\ref{eqn:laurie_conds}) does not, resulting in some false positives over all tolerances. \begin{table} \begin{tiny} \begin{center}\begin{tabular}{lccccc} Function & $\tau=10^{-1}$ & $\tau=10^{-3}$ & $\tau=10^{-6}$ & $\tau=10^{-9}$ & $\tau=10^{-12}$ \\ \hline $p_{n}(x)$ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $\\ $p_{n+1}(x)$ & $ 89.78\,( 0 / 10.22 ) $ & $ 52.10\,( 0 / 47.90 ) $ & $ 14.80\,( 0 / 85.20 ) $ & $ 4.06\,( 0 / 95.94 ) $ & $ 1.12\,( 0 / 98.88 ) $\\ $p_{n+2}(x)$ & $ 81.73\,( 0 / 18.27 ) $ & $ 40.76\,( 0 / 59.24 ) $ & $ 12.22\,( 0 / 87.78 ) $ & $ 4.52\,( 0 / 95.48 ) $ & $ 1.34\,( 0 / 98.66 ) $\\ $d_0(x)$ & $ 0\,( 0 / 84.09 ) $ & $ 0\,( 0 / 2.31 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_1(x)$ & $ 66.03\,( 0 / 32.46 ) $ & $ 0.34\,( 0 / 44.30 ) $ & $ 0\,( 0 / 0.28 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_2(x)$ & $ 76.67\,( 0 / 22.50 ) $ & $ 16.19\,( 0 / 65.95 ) $ & $ 0.16\,( 0 / 5.34 ) $ & $ 0.01\,( 0 / 0.12 ) $ & $ 0\,( 0 / 0 ) $\\ $s(x)$ & $ 0\,( 0 / 59.16 ) $ & $ 0\,( 0 / 0.39 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ \hline \end{tabular}\end{center} \end{tiny} \caption{Results for Gonnet's 2009 trivial error estimate.} \label{tab:res_gonnet2008a} \end{table} \begin{table} \begin{tiny} \begin{center}\begin{tabular}{lccccc} Function & $\tau=10^{-1}$ & $\tau=10^{-3}$ & $\tau=10^{-6}$ & $\tau=10^{-9}$ & $\tau=10^{-12}$ \\ \hline $p_{n}(x)$ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $\\ $p_{n+1}(x)$ & $ 100\,( 0 / 0 ) $ & $ 100\,( 0 / 0 ) $ & $ 58.76\,( 0 / 41.24 ) $ & $ 17.49\,( 0 / 82.51 ) $ & $ 
5.15\,( 0 / 94.85 ) $\\ $p_{n+2}(x)$ & $ 83.30\,( 0 / 16.70 ) $ & $ 58.78\,( 0 / 41.22 ) $ & $ 28.18\,( 0 / 71.08 ) $ & $ 9.05\,( 0 / 46.17 ) $ & $ 3.03\,( 0 / 14.26 ) $\\ $d_0(x)$ & $ 0\,( 0 / 81.48 ) $ & $ 0\,( 0 / 2 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_1(x)$ & $ 68.87\,( 0 / 27.89 ) $ & $ 0.40\,( 0 / 54.34 ) $ & $ 0\,( 0 / 0.10 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ $d_2(x)$ & $ 82.21\,( 0 / 15.81 ) $ & $ 17.88\,( 0 / 58.11 ) $ & $ 0.22\,( 0 / 5.08 ) $ & $ 0\,( 0 / 0.07 ) $ & $ 0\,( 0 / 0 ) $\\ $s(x)$ & $ 0\,( 0 / 59.19 ) $ & $ 0\,( 0 / 0.33 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $ & $ 0\,( 0 / 0 ) $\\ \hline \end{tabular}\end{center} \end{tiny} \caption{Results for Gonnet's 2009 refined error estimate.} \label{tab:res_gonnet2008b} \end{table} In both {\em new} error estimates described in \sect{new}, the errors of the polynomials $p_{n+1}(x)$ and $p_{n+2}(x)$ tend to be over-estimated as the computed $L_2$-norm is a somewhat pessimistic estimate of the integration error. What is notable is that these error estimates never under-estimated the error, resulting in no false positives at all. \subsection{Summary} According to the results using the chosen test integrands, the best two error estimators appear to be that of Piessens \ea (\sect{piessens1983}) which is the error estimator for the adaptive routines in the popular integration library {\small QUADPACK}, and the two new error estimators presented herein (\sect{new}). The relatively few false positives returned by the {\small QUADPACK} error estimator may seem negligible in contrast with its efficiency (evidenced by the much smaller percentage of false negatives) compared to the new error estimate. We can verify this by evaluating the smooth integral \begin{equation*} \int_1^2 \frac{0.1}{0.01 + (x-\lambda)^2}\dx \end{equation*} first suggested by \citeN{ref:Lyness1976}, for which we compute $1\,000$ realizations of the parameter $\lambda \in [1,2]$. 
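Such a recursive scheme can be sketched in a few lines (a stand-in for Algorithm~\ref{alg:general_rec}, using a plain Simpson pair with the classical factor-15 acceptance test in place of the estimators under comparison; the tolerance handling follows the $\tau' = \tau/\sqrt{2}$ rule used in these tests):

```python
import math

def simpson(f, a, b):
    """Simpson's rule on a single interval."""
    return (b - a)/6.0*(f(a) + 4.0*f(0.5*(a + b)) + f(b))

def adaptive(f, a, b, tol):
    """Recursive scheme: accept the refined estimate if the local test
    passes, otherwise bisect and tighten the tolerance by 1/sqrt(2)."""
    m = 0.5*(a + b)
    coarse = simpson(f, a, b)
    fine = simpson(f, a, m) + simpson(f, m, b)
    if abs(fine - coarse) < 15.0*tol:   # classical Simpson acceptance test
        return fine
    return (adaptive(f, a, m, tol/math.sqrt(2.0)) +
            adaptive(f, m, b, tol/math.sqrt(2.0)))

# One realization of the test integrand above, with lambda = 1.5.
lam = 1.5
f = lambda x: 0.1/(0.01 + (x - lam)**2)
exact = math.atan((2.0 - lam)/0.1) - math.atan((1.0 - lam)/0.1)
q = adaptive(f, 1.0, 2.0, 1e-9)
```

The integrand has an antiderivative $\arctan((x-\lambda)/0.1)$, so the exact value is available for checking the result.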
We use both Piessens \ea's error estimate and the two new error estimates as implemented for the previous tests in a recursive scheme as in Algorithm~\ref{alg:general_rec} with $\tau' = \tau/\sqrt{2}$, to a relative precision of $\tau=10^{-9}$. On average, Piessens \ea's error estimate requires 157 function evaluations while the new error estimates require 379 and 330 evaluations respectively -- more than twice as many. Both methods integrate all realizations to the required tolerance. If we consider, however, the Waldvogel\footnote{This function was suggested to the author by Prof.\ J\"org Waldvogel.} function \begin{equation*} W(x) = \int_0^x \left\lfloor e^t \right\rfloor \,\mbox{d}t \end{equation*} which we wish to evaluate to the relative precision $\tau=10^{-9}$ for $1\,000$ realizations of $x \in [2.5,3.5]$ using both the error estimates of Piessens \ea and our new error estimators as described above, we get very different results. While Piessens \ea's error estimator fails in roughly three quarters of all cases (753 failures out of $1\,000$, see Fig.~\ref{fig:int_piessens1984}), usually missing a sub-interval containing one or more discontinuities and using, on average, $29\,930$ function evaluations, our new error estimators succeed on every trial, using on average $31\,439$ and $29\,529$ function evaluations respectively. For this integrand, a single bad error estimate is sufficient for the entire computation to fail and, in this case, the cautious estimate pays off. \begin{figure} \centerline{\epsfig{file=piessens1984_err.eps,width=0.6\textwidth}} \caption{Piessens \ea's error estimate used to evaluate one realization of the Waldvogel function. The circles mark the edges of the sub-intervals.
Note that the integrand is not well resolved near $x \approx 1.4$ and $x \approx 2.6$.} \label{fig:int_piessens1984} \end{figure} \section{Conclusions} \label{sec:conclusions} In this review we have analyzed a large part of error estimates for adaptive quadrature published in the last 45 years or so. We have shown that all these estimates can be reduced to either a linear or non-linear approximation of the integral and one or more error terms of the underlying quadrature rule: \begin{equation} \label{eqn:concl_err} \mathsf{Q}_n^{(m)}[a,b] = \intfx{a}{b} + \underbrace{\kappa_1 h^{\alpha_1} + \kappa_2 h^{\alpha_2} + \dots + \kappa_N h^{\alpha_N}}_{=\varepsilon}, \quad h = \frac{b-a}{m}. \end{equation} For the {\em linear} error estimators discussed in \sect{linear}, the exponents $\alpha_i$, $i=1 \dots N$ are assumed to be known. For the {\em non-linear} error estimators discussed in \sect{non-linear}, the $\alpha_i$, $i=1 \dots N$ are {\em not} assumed to be known and are also approximated. In both cases, $N$ is usually 1 with the exception of de~Boor's {\tt CADRE} (see \sect{deboor1971}) and de~Doncker's adaptive extrapolatory algorithm (see \sect{dedoncker1978}). These error estimators all fail for the {\em same reason}, namely when the difference between two successive quadratures is ``{\em accidentally small}''. This can happen when the actual error contains more significant terms than the ones shown in \eqn{concl_err}. The new error estimators presented in \sect{new} are no different as they approximate the error for $N=1$ and a supposed $\alpha_1=n+1$. The main difference is that instead of using different approximations of the {\em integral} of different quadrature rules, we use the $L_2$-norm of the difference of the {\em interpolating polynomials} of different quadrature rules to approximate the unknown terms in \eqn{concl_err}. 
As we will see, this significantly reduces the probability of accidentally small differences, and thus avoids the major cause of failure of the other algorithms, as is demonstrated by the results in \sect{compare}. The reason for this increased reliability is best explained by considering, for any error estimator, the set of integrands for which it will {\em always} fail. Consider the polynomials orthogonal with respect to the discrete product \begin{equation} \label{eqn:new_discr_measure} \langle p_i(x),p_j(x)\rangle = \sum_{k=1}^n p_i(x_k)p_j(x_k), \quad i,j=0\dots n \end{equation} where the $x_k$ are the nodes of the quadrature rule or the combined nodes of all the quadrature rules used in the computation of the error estimate in the interval. In the following, when we refer to a pair of functions being orthogonal, we understand them to be orthogonal with respect to the above product. For any {\em linear} error estimate relying on the difference between two quadrature rules over the nodes $x_i$, the error estimate can be computed as \begin{equation*} \varepsilon = \sum_{i=1}^n \eta_i f(x_i) \end{equation*} where the $\eta_i$ are the difference of the weights of the two quadrature rules used in the error estimate for each node\footnote{ The $\eta_i$ are, incidentally, the weights of a null rule, such as they are constructed by \citeN{ref:Lyness1965}.}. Let $\eta(x)$ be the polynomial interpolating the $\eta_i$ at the nodes $x_i$, $i=1 \dots n$. The error can then be computed as the product in \eqn{new_discr_measure} applied to the integrand $f(x)$ and the polynomial $\eta(x)$: \begin{equation*} \varepsilon = \langle \eta(x),f(x) \rangle. \end{equation*} Therefore, if the integrand $f(x)$ is of algebraic degree {\em higher} than that of the quadrature rule used --- and will therefore not be correctly integrated --- and the integrand $f(x)$ is {\em orthogonal} to the polynomial $\eta(x)$, then the linear error estimate will be zero and the estimator will {\em fail}.
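This failure mode is easy to exhibit. Taking Simpson's rule and its two-panel refinement as the pair of rules, the construction below (our own illustration, not drawn from any of the routines discussed) builds an integrand beyond the rules' degree that is orthogonal to $\eta(x)$, so the linear estimate vanishes while the actual error does not:

```python
# Shared nodes of Simpson's rule and its two-panel refinement on [-1, 1].
nodes = [-1.0, -0.5, 0.0, 0.5, 1.0]
w_coarse = [1.0/3.0, 0.0, 4.0/3.0, 0.0, 1.0/3.0]
w_fine = [1.0/6.0, 2.0/3.0, 1.0/3.0, 2.0/3.0, 1.0/6.0]
eta = [wf - wc for wf, wc in zip(w_fine, w_coarse)]   # null-rule weights

def estimate(f):
    """The linear error estimate as the discrete product of eta with f."""
    return sum(e*f(x) for e, x in zip(eta, nodes))

# Both x^4 and x^6 exceed the rules' degree; a suitable combination is
# orthogonal to eta, so the estimate is zero although the error is not.
c = estimate(lambda x: x**6)/estimate(lambda x: x**4)
g = lambda x: x**6 - c*x**4
q_fine = sum(w*g(x) for w, x in zip(w_fine, nodes))
exact = 2.0/7.0 - c*2.0/5.0   # exact integral of g over [-1, 1]
```

For this $g(x)$ the estimate is exactly zero while the refined quadrature is off by roughly $0.05$: a guaranteed false positive at any reasonable tolerance.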
For the error estimate of O'Hara and Smith (\sect{ohara1969}) and of Oliver (\sect{oliver1972}), which use more than one derivative, the error estimate fails when the integrand $f(x)$ is of higher algebraic degree than the basic quadrature rule and the coefficients $\tilde{c}_n$, $\tilde{c}_{n-2}$ and $\tilde{c}_{n-4}$ are zero (see \eqn{ohara_errfinal}). This is the case when the integrand $f(x)$ is orthogonal to the Chebyshev polynomials $T_n(x)$, $T_{n-2}(x)$ and $T_{n-4}(x)$. For the error estimate of Berntsen and Espelid (\sect{berntsen1991}), the error estimate fails when the integrand $f(x)$ is of higher algebraic degree than the basic quadrature rule and the integrand $f(x)$ is orthogonal to the last $2(K-1)$ null-rules\footnote{In Berntsen and Espelid's original error estimate 2 null-rules are used to compute each $E_k$ from which the $K$ ratios $r_k$ (see \eqn{null_ratios}) are computed. It is, however, only necessary that the numerators of the ratios be zero, hence only $2(K-1)$ null-rules need to be zero for the estimate to be zero.}. For the non-linear error estimates discussed in \sect{non-linear}, the error estimates will fail under similar circumstances: In de~Boor's {\tt CADRE} (see \sect{deboor1971}), it is sufficient that the difference between two neighboring entries in the T-table is zero for the error estimate to fail. For a T-table of depth $\ell$, this engenders $\mathcal O(\ell^2/2)$ different polynomials to which the integrand {\em may} be orthogonal for the error estimate to fail. In the case of Rowland and Varol's or Venter and Laurie's error estimates (see \sect{rowland1972}), a difference of zero between two consecutive pairs of rules is sufficient for the error estimate to fail and thus, as for the simple error estimators discussed above, for a sequence of $m$ rules, there are $m-1$ polynomials to which an integrand $f(x)$ {\em may} be orthogonal, in which case the error estimator will always fail.
In Laurie's error estimate (see \sect{laurie1983}), either $Q^{(2)}_\alpha-Q^{(2)}_\beta$ or $Q^{(2)}_\alpha-Q^{(1)}_\alpha$ needs to be zero for the estimate to fail, resulting in two polynomials to which the integrand {\em may} be orthogonal for the error estimate to fail. Similarly, for Favati \ea's error estimate (see \sect{laurie1983}), there are three such polynomials. Finally, for de~Doncker's error estimate (see \sect{dedoncker1978}), the case is somewhat more complicated due to the global approach of the algorithm. Since it uses, locally, Piessens \ea's local error estimate (see \sect{piessens1983}), it will fail whenever this estimate fails, making it vulnerable to the same family of integrands. Additionally, it will fail whenever the difference between two {\em global} estimates $\hat{Q}^{(m)}_n[a,b] - \hat{Q}^{(m-1)}_n[a,b]$ accidentally becomes zero, causing the algorithm to fail {\em globally}. For both new error estimates presented here (\eqn{new_err2} and \eqn{new_eps2}), the matter is a bit more complicated. Given two interpolations $g^{(1)}_{n_1-1}(x)$ and $g^{(2)}_{n_2-1}(x)$, with $n_2 \geq n_1$, over the nodes $x^{(1)}_i$, $i=1\dots n_1-1$ and $x^{(2)}_i$, $i=1\dots n_2-1$ respectively, we define the joint set of $n_u$ nodes $x^{(u)} = x^{(1)} \cup x^{(2)}$ which we will use for the product in \eqn{new_discr_measure}.
Given the inverse Vandermonde-like matrices\index{Vandermonde-like!matrix} $\mathbf U^{(1)} = (\mathbf P^{(1)})^{-1}$ and $\mathbf U^{(2)} = (\mathbf P^{(2)})^{-1}$ of size $n_1 \times n_1$ and $n_2 \times n_2$ used to compute the coefficients of $g^{(1)}_{n_1}(x)$ and $g^{(2)}_{n_2}(x)$, we can stretch them to size $n_2 \times n_u$ such that \begin{equation*} \mathbf c^{(1)} = \tilde{\mathbf U}^{(1)} \mathbf f^{(u)}, \quad \mathbf c^{(2)} = \tilde{\mathbf U}^{(2)} \mathbf f^{(u)} \end{equation*} where $\tilde{\mathbf U}^{(1)}$ and $\tilde{\mathbf U}^{(2)}$ are the stretched matrices and $\mathbf f^{(u)}$ contains the integrand evaluated at the joint set of nodes $x^{(u)}$. For the error estimate $\|\mathbf c^{(1)} - \mathbf c^{(2)}\|$ to be zero, $\mathbf f^{(u)}$ must lie in the null-space of the $n_2 \times n_u$ matrix \begin{equation*} \mathbf U^{(u)} = \left[ \tilde{\mathbf U}^{(1)} - \tilde{\mathbf U}^{(2)} \right] \end{equation*} which has rank $r_u$ equal to the smaller of the number of nodes {\em not} shared by both $x^{(1)}$ and $x^{(2)}$, \ie $|x^{(u)} \backslash \{ x^{(1)} \cap x^{(2)} \}|$, and $n_2$. For the error estimate to be zero, the product $\mathbf U^{(u)}\mathbf f^{(u)}$ must be zero. This is the case when the integrand $f(x)$ is orthogonal to the $r_u$ polynomials generated by interpolating the values of the first $r_u$ rows of $\mathbf U^{(u)}$ at the nodes $x^{(u)}$. If, additionally, the integrand is of algebraic degree $> n_2$, then both error estimates will fail. The space of functions that will cause any of the error estimators presented here to fail is, in essence, infinite, yet for each type of error estimator, this infinite space is subject to different restrictions. For the simple {\bf linear} error estimators which compute a {\em single} divided difference, the space is restricted by a {\em single} orthogonality restriction.
In the case of error estimators such as O'Hara and Smith's or Berntsen and Espelid's, the space is restricted by {\em three or four}\footnote{In Berntsen and Espelid's original error estimate, a constant $K=3$ is used.} orthogonality restrictions. Instead of being subject to {\em one or more} restrictions, the space of functions that will cause the {\bf non-linear} error estimators discussed in \sect{non-linear} to fail is {\em larger} than that of the simple error estimators, since the integrand needs only to be orthogonal to {\em any} of a set of polynomials for the algorithm to fail. The set of functions for which they will fail is therefore the {\em union} of a set of functions, each subject to only {\em one} restriction. For our {\bf new} error estimators, the number of restrictions depends on the number of nodes used. For the trivial error estimate (\eqn{new_err2}), if the nodes $x^{(1)} \subset x^{(2)}$ and $n_2 \approx 2n_1$ (\ie if Clenshaw-Curtis or Gauss-Kronrod rule pairs are used), the number of restrictions will be $\approx n_2/2$. For the more refined error estimate (\eqn{new_eps2}), if the basic rule does not re-use more than $\lceil n/2 \rceil$ of its $n$ nodes in each sub-interval, the number of restrictions will be at least $n-1$. The new error estimates presented in \sect{new} are therefore more reliable since the space of functions for which it will fail, albeit infinite, is {\em more restricted} than that of the other error estimators presented here. It is also interesting to note that if we were to {\em increase the degree} of the underlying quadrature rules in all our error estimates, the number of restrictions to the space of functions for which they will fail {\em would not grow}, whereas for our new error estimates, the number of restrictions {\em grows linearly} with the degree of the underlying quadrature rule. 
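The practical consequence of these restrictions can be illustrated with the trivial estimate: interpolating an integrand at the nodes of the nested 11- and 21-point Clenshaw-Curtis rules and measuring the $L_2$-norm of the difference of the two interpolations. In this sketch the norm is approximated by dense sampling rather than through the coefficient vectors $\mathbf c^{(1)}$ and $\mathbf c^{(2)}$; the estimate is tiny for a smooth integrand and $O(1)$ for a discontinuous one:

```python
import math

def cc_nodes(n):
    """Clenshaw-Curtis (Chebyshev extreme) nodes on [-1, 1]."""
    return [math.cos(math.pi*i/(n - 1)) for i in range(n)]

def lagrange(xs, ys, x):
    """Evaluate the polynomial interpolating (xs, ys) at the point x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj)/(xi - xj)
        total += yi*li
    return total

def trivial_estimate(f, n1=11, n2=21, samples=1000):
    """L2 norm of the difference of the two interpolations of f,
    approximated by midpoint sampling on [-1, 1]."""
    x1, x2 = cc_nodes(n1), cc_nodes(n2)
    y1 = [f(x) for x in x1]
    y2 = [f(x) for x in x2]
    acc, h = 0.0, 2.0/samples
    for k in range(samples):
        x = -1.0 + (k + 0.5)*h
        acc += (lagrange(x1, y1, x) - lagrange(x2, y2, x))**2*h
    return math.sqrt(acc)

smooth = trivial_estimate(math.cos)                         # interpolants agree
jump = trivial_estimate(lambda x: 1.0 if x > 0.1 else 0.0)  # they do not
```

For the discontinuous integrand the two interpolations disagree over a whole neighborhood of the jump, so the estimate cannot become accidentally small in the way a single difference of quadrature values can.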
Two adaptive quadrature algorithms implementing the new error estimates have been described and extensively tested in \citeN{ref:Gonnet2010}. One of the algorithms presented therein has been implemented as {\tt cquad} both in the GNU Scientific Library \cite{ref:Galassi2009} and as a part of GNU Octave \cite{ref:Eaton2002}. \subsection{De~Boor's {\tt CADRE} Error Estimator} \label{sec:deboor1971} In 1971, \citeN{ref:deBoor1971} publishes the integration subroutine {\tt CADRE}\index{CADRE@{\tt CADRE}}. The algorithm, which follows the scheme in Algorithm~\ref{alg:general_rec}, generates a Romberg T-table\index{Romberg extrapolation} \cite{ref:Bauer1963} with \begin{equation} \label{eqn:deboor_ttable} T_{\ell,i} = T_{\ell,i-1} + \frac{T_{\ell,i-1} - T_{\ell-1,i-1}}{4^i - 1} \end{equation} in every interval. The entries in the T-table are used to decide whether to extend the table or bisect the interval\footnote{Thus making it the first doubly-adaptive\index{doubly-adaptive} quadrature algorithm known to the author.}. After adding the $\ell$th row to the table, a decision is made using the ratios \begin{equation} \label{eqn:deboor_ratio} R_i = \frac{T_{\ell-1,i} - T_{\ell-2,i}}{T_{\ell,i} - T_{\ell-1,i}} \end{equation} as to whether the integrand is linear, sufficiently smooth, discontinuous, singular or noisy inside the interval. If the integrand is assumed to be smooth ($R_0 = 4 \pm 0.15$), the approximation $T_{\ell,i}$ is returned for the smallest $i \leq \ell$ such that the error \begin{equation} \label{eqn:deboor_err1} \varepsilon_k = (b_k - a_k) \left| \frac{T_{\ell,i-1} - T_{\ell-1,i-1}}{4^i-1} \right| \end{equation} is less than the required local tolerance. Otherwise, if a jump discontinuity is assumed ($R_0 = 2 \pm 0.01$), the error is assumed to be bounded by the absolute difference of the two previous lowest-degree estimates: \begin{equation*} \varepsilon_k = \left|T_{\ell,0} - T_{\ell-1,0}\right|.
\end{equation*} Finally, the integrand may be assumed to be singular ($R_0 \in (1,4)$ and within 10\% of the $R_0$ from the previous level $\ell-1$) and of the form $f(x) = (x - \xi)^\alpha g(x)$, where $\xi$ is near the edges of $[a_k,b_k]$ and $\alpha \in (-1,1)$. In this case, $R_0$ should be $\approx 2^{\alpha+i}$ and the T-table is computed using ``{\em cautious extrapolation}'' by interleaving the normal updates in \eqn{deboor_ttable} with updates of the form \begin{equation} \label{eqn:deboor_cautious} T_{\ell,i} = T_{\ell,i-1} + \frac{T_{\ell,i-1}-T_{\ell-1,i-1}}{2^{\alpha+i} - 1} \end{equation} where necessary. The error estimate is computed as in the smooth case \eqn{deboor_err1} or as \begin{equation} \label{eqn:deboor_err2} \varepsilon_k = (b_k - a_k) \left| \frac{T_{\ell,i-1} - T_{\ell-1,i-1}}{2^{\alpha+i}-1} \right|, \end{equation} depending on which column $i$ is considered. The rationale for using the ratios $R_i$ \eqn{deboor_ratio} is based on the observation that the error of each entry of the T-table is, for sufficiently smooth integrands, \begin{equation} \label{eqn:deboor_quaderr} \frac{1}{b-a}\intfx{a}{b} - T_{\ell,i} \approx \kappa_i \left(2^{-(\ell-i)}\right)^{2i+2}. \end{equation} The ratio $R_i$ can therefore be re-written as \begin{equation} R_i \ = \ \frac{\kappa_i\left(2^{-(\ell-i-1)}\right)^{2i+2} - \kappa_i\left(2^{-(\ell-2-i)}\right)^{2i+2}} {\kappa_i\left(2^{-(\ell-i)}\right)^{2i+2} - \kappa_i\left(2^{-(\ell-1-i)}\right)^{2i+2}} \ \ = \ \frac{2^{2i+2} - 4^{2i+2}}{1 - 2^{2i+2}} \ = \ 4^{i+1}. \label{eqn:deboor_ratio2} \end{equation} If this condition is actually satisfied (more or less), then de~Boor considers it safe to assume that the difference between the two approximations $T_{\ell,i-1}$ and $T_{\ell,i}$ is a good bound for the error of $T_{\ell,i}$, as is computed in \eqn{deboor_err1}. By the definition given at the beginning of this section, this error estimate for the regular case is itself by no means non-linear.
The reason for its inclusion in this category is the special treatment of integrable singularities in \eqn{deboor_err2}. \subsection{De Doncker's Adaptive Extrapolation Algorithm} \label{sec:dedoncker1978} Probably the best-known quadrature algorithm using non-linear extrapolation is published by \citeN{ref:deDoncker1978}. The main idea of the algorithm is similar to that of the Romberg scheme\index{Romberg extrapolation}: Given a basic quadrature rule $\mathsf{Q}_n[a,b]$, the series \begin{equation} \label{eqn:dedoncker_series} \mathsf{Q}_n^{(1)}[a,b], \mathsf{Q}_n^{(2)}[a,b], \mathsf{Q}_n^{(4)}[a,b], \dots , \mathsf{Q}_n^{(2^i)}[a,b] , \dots \end{equation} converges exponentially, for large enough $i$ and sufficiently smooth $f(x)$, towards $\intfx{a}{b}$. In Romberg's scheme, $\mathsf{Q}_n^{(m)}[a,b] = \mathsf{T}^{(m)}[a,b]$ is the trapezoidal rule, and the limit of the series is extrapolated linearly using the Romberg T-table. De~Doncker's algorithm, however, differs in two main points: The 21-point Gauss-Kronrod rule\index{Kronrod extension} is used as the basic rule $\mathsf{Q}_n^{(m)}[a,b]$ instead of the trapezoidal rule, and the non-linear $\epsilon$-Algorithm\index{e-Algorithm@$\epsilon$-Algorithm} \cite{ref:Wynn1956} is used to extrapolate the limit of the series instead of the linear extrapolation in the Romberg T-table. The algorithm, as described thus far, is not yet adaptive. The main (and new) trick is that instead of using $\mathsf{Q}_n^{(m)}[a,b]$, de~Doncker uses {\em approximations} $\tilde{\mathsf{Q}}_n^{(m)}[a,b]$.
Each approximation $\tilde{\mathsf{Q}}_n^{(m)}[a,b]$ is computed by iteratively picking out the sub-interval of width greater than $h=(b-a)/m$ with the largest local error estimate \begin{equation} \label{eqn:dedoncker_errloc} \varepsilon_k = \left| \mathsf{G}_{10}[a_k,b_k] - \mathsf{K}_{21}[a_k,b_k] \right| \end{equation} which is the same local error estimate as first used by Piessens (see \sect{piessens1973}), and subdividing it until either the sum of the local error estimates $\varepsilon_k$ of all intervals of width larger than $h$ is smaller than the required tolerance or there are no more intervals of width larger than $h$ left to subdivide. In her original paper, de~Doncker does not give any details on how the $\epsilon$-Algorithm is applied or how the global error is estimated. In its implementation as {\tt QAGS}\index{QAGS@{\tt QAGS}} in {\small QUADPACK}\index{QUADPACK@{\small QUADPACK}}, the local error estimate \eqn{dedoncker_errloc} is replaced by the local error estimator used in the other {\small QUADPACK}-routines (see \sect{piessens1983}, \eqn{quadpack_err}). A global error estimate is computed for the extrapolated $I_i$ using \begin{equation} \label{eqn:dedoncker_err} \varepsilon_i = \left|I_i - I_{i-1}\right| + \left|I_i - I_{i-2}\right| + \left|I_i - I_{i-3}\right| \end{equation} where $I_{i-1}$, $I_{i-2}$ and $I_{i-3}$ are the previous three estimates of the global integral. \subsection{Finite-Difference Based Error Estimators} \label{sec:fd-based} \label{sec:gallaher1967} \label{sec:ninomiya1980} In a 1967 paper, \citeN{ref:Gallaher1967} presents a recursive adaptive quadrature routine based on the midpoint rule\index{midpoint rule}. In this algorithm, the interval is divided symmetrically into three sub-intervals with the width $h_c$ of the central sub-interval chosen randomly in $ h_c \in \left[ \frac{1}{6}h_k , \frac{1}{2}h_k \right]$, $h_k = \left(b_k - a_k\right)$. 
The integrand $f(x)$ is evaluated at the center of each sub-interval and used to compute the midpoint rule therein. Since the error of the midpoint rule is proportional to the second derivative of $f(x)$, the local integration error can be estimated by computing the second divided difference\index{divided difference} of $f(x)$ over the three values $f_1$, $f_2$ and $f_3$ in the center of the sub-intervals. Instead of the difference formula, Gallaher uses the more compact approximation \begin{equation} \label{eqn:gallaher_err} \varepsilon = 14.6 \left| f_1 - 2f_2 + f_3 \right| \frac{b_k-a_k - h_c}{2}, \end{equation} in which the constant $14.6$ is determined empirically. Similarly, \citeN{ref:Ninomiya1980} presents a recursive adaptive quadrature routine based on closed Newton-Cotes rules\index{Newton-Cotes quadrature}. He uses rules with $2n+1$ nodes (results are given for $5$, $7$ and $9$ points) and notes that these have an error of the form \begin{equation*} \mathsf{NC}_{2n+1}[a,b] - \intfx{a}{b} = K_{2n+1}(b-a)^{2n+3}f^{(2n+2)}(\xi), \quad \xi \in [a,b]. \end{equation*} Instead of using the same quadrature rule on two or more sub-intervals to approximate the error as in Kuncir's and Lyness' error estimates, he adds two nodes in the center of the leftmost and the rightmost intervals. Using $5+2$, $7+2$ and $9+2$ point stencils, he computes the error estimators, \eg \begin{equation} \mathsf{D}_{9+2}[a,b] \approx \frac{37(b-a)^{11}}{3\,066\,102\,400}f^{(10)}(\xi), \quad \xi \in [a,b], \label{eqn:ninomiya_diff} \end{equation} which approximate the scaled $(2n+2)$nd derivative in the analytical error of the Newton-Cotes rules.
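A minimal sketch of a Gallaher-style step, assuming a symmetric three-way split and the empirical constant $14.6$ from \eqn{gallaher_err} (the function and variable names are ours, not Gallaher's):

```python
import random

def gallaher_step(f, a, b, rng=random.Random(42)):
    """One Gallaher-style step: midpoint rules on three sub-intervals and a
    second-difference error estimate (a sketch, not Gallaher's code)."""
    h = b - a
    hc = rng.uniform(h / 6.0, h / 2.0)   # random width of the central piece
    hs = (h - hc) / 2.0                  # width of the two outer pieces
    m1, m2, m3 = a + hs / 2.0, a + hs + hc / 2.0, b - hs / 2.0
    f1, f2, f3 = f(m1), f(m2), f(m3)
    q = hs * f1 + hc * f2 + hs * f3      # compound midpoint rule
    # Second difference of f over the three midpoints, scaled with the
    # empirically determined constant 14.6 as in Gallaher's estimate.
    eps = 14.6 * abs(f1 - 2.0 * f2 + f3) * (h - hc) / 2.0
    return q, eps

q, eps = gallaher_step(lambda x: x * x, 0.0, 1.0)
```

For a convex integrand such as $x^2$, the compound midpoint rule underestimates the integral and the second difference is strictly positive, so the estimate bounds the true error (generously, owing to the large constant).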
\subsection{Gauss-Kronrod Based Error Estimators} \label{sec:gk-based} \label{sec:patterson1973} \label{sec:piessens1973} \label{sec:piessens1983} \label{sec:berntsen1984} \label{sec:gander2001b} In 1973 both \citeN{ref:Patterson1973} and \citeN{ref:Piessens1973} publish adaptive quadrature routines based on Gauss quadrature rules\index{Gauss quadrature} and their Kronrod extensions\index{Kronrod extension} \cite{ref:Kronrod1965}. Piessens' algorithm, which is the first to follow the scheme in Algorithm~\ref{alg:general_nonrec}, uses an error estimate of the form \begin{equation} \label{eqn:piessens_err} \varepsilon_k = \left|\mathsf{G}_n[a_k,b_k] - \mathsf{K}_{2n+1}[a_k,b_k]\right| \end{equation} where $\mathsf{G}_n[a,b]$ is the $n$-point Gauss quadrature rule of degree $2n-1$ and $\mathsf{K}_{2n+1}[a,b]$ is the $2n+1$ point Gauss-Kronrod extension of degree $3n+1$ which is in turn used as the approximation to the integral. This is also the error estimate currently used in Matlab's {\tt quadgk} \cite{ref:Shampine2008}. Patterson's integrator takes a different approach, starting with a $3$-point Gauss quadrature\index{Gauss quadrature} rule and using the Kronrod\index{Kronrod extension} scheme to successively extend it to 7, 15, 31, 63, 127 and 255 nodes, resulting in quadrature rules of degree 5, 11, 23, 47, 95, 191 and 383 respectively, until the globally relative error estimate \begin{equation} \label{eqn:patterson_err} \varepsilon_k = \left| \mathsf{K}_n[a_k,b_k] - \mathsf{K}_{2n+1}[a_k,b_k] \right| / \left| \hat{I} \right| \end{equation} where $\mathsf{K}_n[a,b]$ is the Kronrod extension over $n$ nodes and $\mathsf{K}_{2n+1}[a,b]$ its extension over $2n+1$ nodes, is below the required tolerance. $\hat{I}$ is an initial approximation of the global integral generated by applying successive Kronrod extensions to the whole interval before subdividing. 
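The difference-of-two-rules estimate \eqn{piessens_err} can be sketched as follows. Since Kronrod nodes are not readily available in NumPy, a $(2n+1)$-point Gauss rule stands in for $\mathsf{K}_{2n+1}[a,b]$ (an assumption made purely for illustration; a real implementation would re-use the $n$ Gauss nodes):

```python
import numpy as np

def gauss(f, a, b, n):
    """n-point Gauss-Legendre approximation of the integral of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    y = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(w, f(y))

def gk_like_err(f, a, b, n=7):
    """Difference of a low- and a high-order rule, in the spirit of
    Piessens' |G_n - K_{2n+1}|; here a (2n+1)-point Gauss rule stands in
    for the Kronrod extension."""
    q_lo = gauss(f, a, b, n)
    q_hi = gauss(f, a, b, 2 * n + 1)
    return q_hi, abs(q_hi - q_lo)

# An oscillatory integrand the 7-point rule cannot resolve.
q, err = gk_like_err(lambda x: np.cos(20.0 * x), 0.0, 1.0)
exact = np.sin(20.0) / 20.0
```

The estimate effectively reports the error of the {\em lower}-order rule, which dominates the (much smaller) error of the returned higher-order approximation.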
In 1983, the most widely-used ``commercial strength'' quadrature subroutine library {\small QUADPACK}\index{QUADPACK@{\small QUADPACK}} is published by \citeN{ref:Piessens1983}. The general adaptive quadrature subroutine {\tt QAG}\index{QAG@{\tt QAG}} is an extension of Piessens' integrator, yet with a slight modification to the local error estimate \begin{equation} \label{eqn:quadpack_err} \varepsilon_k = \tilde{I}_k \min \left\{ 1 , \left(200 \frac{\left|\mathsf{G}_n[a_k,b_k] - \mathsf{K}_{2n+1}[a_k,b_k]\right|}{\tilde{I}_k} \right)^{3/2} \right\} \end{equation} where the default value of $n$ is 10 and the value \begin{equation*} \tilde{I}_k = \int_{a_k}^{b_k} \left| f(x) - \frac{\mathsf{K}_{2n+1}[a_k,b_k]}{b_k - a_k} \right|\,\mbox{d}x, \end{equation*} which is also evaluated using the $\mathsf{K}_{2n+1}[a,b]$ rule, is used, as described by \citeN{ref:Krommer1998}, as ``{\em a measure for the smoothness of $f$ on $[a,b]$}''. The error measure is best explained graphically, as is done in Piessens \ea (Fig.~\ref{fig:quadpack_err}). The exponent $\frac{3}{2}$ is determined experimentally and scales the error, with a break-even point at $1.25 \times 10^{-7}$, which is approximately the relative machine precision of IEEE 754 32-bit floating point arithmetic. The scaling makes the estimate increasingly pessimistic for error estimates larger than $1.25\times 10^{-7}$ and increasingly optimistic for error estimates below that threshold. \begin{figure} \begin{center}\input{img.001.tex}\end{center} \caption{The error measure $\left(200\,|\mathsf{G}_n[a,b] - \mathsf{K}_{2n+1}[a,b]|\right)^{3/2}$ (dashed line) plotted as a function of $|\mathsf{G}_n[a,b] - \mathsf{K}_{2n+1}[a,b]|$.} \label{fig:quadpack_err} \end{figure} This measure is further divided by $\sqrt{\tilde{I}_k}$.
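The scaling in \eqn{quadpack_err} can be sketched directly; note that for $\tilde{I}_k = 1$ the break-even point, where $(200\,d)^{3/2} = d$, lies at $d = 200^{-3} = 1.25\times 10^{-7}$:

```python
def quadpack_scale(diff, itilde):
    """Scaled local error measure in the spirit of eqn (quadpack_err): the
    raw difference diff = |G_n - K_{2n+1}| is inflated above the break-even
    point and deflated below it (a sketch of the scaling only)."""
    if itilde == 0.0:
        return abs(diff)
    return itilde * min(1.0, (200.0 * abs(diff) / itilde) ** 1.5)

at_breakeven = quadpack_scale(1.25e-7, 1.0)   # unchanged: 200**-3 = 1.25e-7
pessimistic = quadpack_scale(1.0e-4, 1.0)     # inflated above break-even
optimistic = quadpack_scale(1.0e-9, 1.0)      # deflated below break-even
```

Expanding the expression, $\varepsilon_k = 200^{3/2}\,|\mathsf{G}_n - \mathsf{K}_{2n+1}|^{3/2}/\sqrt{\tilde{I}_k}$, which makes the division of the raw measure by $\sqrt{\tilde{I}_k}$ explicit.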
Krommer\index{Krommer} and \"Uberhuber\index{Uberhuber@\"Uberhuber} explain this as follows: \begin{quote} ``{\em If this ratio is small, the difference between the two quadrature formulas is small compared to the variation of $f$ on $[a,b]$; \ie, the discretization of $f$ in the quadrature formulas $\mathsf{G}_n$ and $\mathsf{K}_{2n+1}$ is fine with respect to its variation. In this case, $\mathsf{K}_{2n+1}$ can indeed be expected to yield a better approximation for $If$ than $\mathsf{G}_n$.}'' \end{quote} Unfortunately, no further analysis is given in either \cite{ref:Piessens1983} or \cite{ref:Krommer1998}. This local error estimate is re-used by \citeN{ref:Favati1991}, yet using pairs of ``recursive monotone stable'' (RMS) nested quadrature rules\index{RMS quadrature rules} introduced by \citeN{ref:Favati1991b}, which allow for function evaluations to be re-used after bisection, within a doubly-adaptive scheme. \citeN{ref:Hasegawa2007} extend this approach by choosing bisection over increasing the degree of the quadrature rule when the ratio of two successive error estimates is larger than an empirically determined constant (as is suggested by \citeN{ref:Venter2002}, see \sect{rowland1972}). In 1984 \citeN{ref:Berntsen1984} suggest that instead of using the difference between a Gauss quadrature\index{Gauss quadrature} rule over $n$ points and its Kronrod extension\index{Kronrod extension} over $2n+1$ points, one could directly use a Gauss quadrature rule over $2n+1$ points for the estimate of the integral. To estimate the error of this rule of degree $4n+1$, they suggest removing one of the points and creating a new interpolatory quadrature rule $\mathsf{Q}_{2n}[a,b]$ of degree $2n-1$ over the remaining $2n$ points: \begin{equation} \label{eqn:berntsen_err} \varepsilon_k = \left| \mathsf{G}_{2n+1}[a_k,b_k] - \mathsf{Q}_{2n}[a_k,b_k] \right|. 
\end{equation} Since the degree of the rule $\mathsf{Q}_{2n}[a,b]$ is the same as that of the Gauss quadrature rule $\mathsf{G}_n[a,b]$ used by Piessens (see \sect{piessens1973}), the error estimate is 0 for functions of up to the same algebraic degree of precision, yet the final estimate is $n$ degrees higher: $4n+1$ for $\mathsf{G}_{2n+1}[a,b]$ vs.\ $3n+1$ for $\mathsf{K}_{2n+1}[a,b]$. A further advantage is the relative ease with which the weights of the rule $\mathsf{Q}_{2n}[a,b]$ can be computed, as opposed to the effort required for the nodes and weights of the Kronrod extension. Finally, the second routine by \citeN{ref:Gander2001} (see \sect{gander2001a}), {\tt adaptlob}, uses a 4-point Gauss-Lobatto rule\index{Gauss-Lobatto quadrature} $\mathsf{GL}_4^{(1)}[a,b]$ and its 7-point Kronrod extension\index{Kronrod extension} $\mathsf{K}_7^{(1)}[a,b]$. The globally relative local error is computed, analogously to \eqn{gander_err}, as \begin{equation} \label{eqn:gander_err2} \varepsilon_k = \left| \mathsf{GL}_4^{(1)}[a_k,b_k] - \mathsf{K}_7^{(1)}[a_k,b_k] \right| / |\hat{I}|. \end{equation} If the tolerance is met, the approximation $\mathsf{K}_7^{(1)}[a,b]$ is used for the integral. \section{Introduction} \label{sec:introduction} Adaptive quadrature, or adaptive numerical integration, refers to the process of approximating the integral of a given function to a specified precision by {\em adaptively} subdividing the integration interval into smaller sub-intervals over which a set of local quadrature rules are applied. Since the publication of the first adaptive quadrature routines almost 50 years ago \cite{ref:Morrin1955,ref:Villars1956,ref:Kuncir1962}, more than 20 distinct algorithms have been published, along with several papers dedicated to their analysis \cite{ref:Casaletto1969,ref:Hillstrom1970,ref:Kahaner1971,ref:Malcolm1975,ref:Robinson1979,ref:Krommer1998} and even to methodologies for their analysis \cite{ref:Lyness1977}.
\begin{algorithm} \caption{integrate $(f,a,b,\tau)$} \label{alg:general_rec} \begin{algorithmic}[1] \STATE $\mathsf{Q}_n[a,b] \approx \intfx{a}{b}$ \label{alg:compQ} \STATE $\varepsilon \approx \left| \mathsf{Q}_n[a,b] - \intfx{a}{b} \right|$ \label{alg:compEps} \IF{$\varepsilon < \tau$} \RETURN{$\mathsf{Q}_n[a,b]$} \ELSE \STATE $m \leftarrow (a+b)/2$ \RETURN{$\mbox{integrate}(f,a,m,\tau') + \mbox{integrate}(f,m,b,\tau')$} \label{alg:tauPrime} \ENDIF \end{algorithmic} \end{algorithm} Many recursive adaptive quadrature routines follow the general scheme detailed in Algorithm~\ref{alg:general_rec}. In Line~\ref{alg:compQ} an approximation $\mathsf{Q}_n[a,b]$ to the integral of $f(x)$ over $n$ points in the interval $[a,b]$ is computed and in Line~\ref{alg:compEps} the error of this approximation is estimated. If this error is less than some user-specified local tolerance $\tau$ the algorithm returns the approximation $\mathsf{Q}_n[a,b]$. If the error is deemed too large, the interval is subdivided (in this example bisection is used) and the integration algorithm is applied recursively on both intervals separately for some new, adjusted tolerance $\tau'$. In the following, we will use $\mathsf{Q}_n[a,b]$ to denote a generic interpolatory quadrature rule over $n$ points in the interval $[a,b]$. For specific or well-known quadrature rules, we will use specific symbols such as $\mathsf{NC}_n[a,b]$ for Newton-Cotes, $\mathsf{CC}_n[a,b]$ for Clenshaw-Curtis and $\mathsf{G}_n[a,b]$ and $\mathsf{GK}_n[a,b]$ for Gauss and Gauss-Kronrod rules over $n$ points respectively. We will use the notation $\mathsf{Q}_n^{(m)}[a,b]$ to denote the quadrature rule $\mathsf{Q}_n$ applied on $m$ panels of equal size in $[a,b]$. In \cite{ref:Davis1967} $\mathsf{Q}^{(m)}_n[a,b]$ is referred to as a {\em compound} or {\em composite} quadrature rule. We will call $m$ the {\em multiplicity} of $\mathsf{Q}^{(m)}_n[a,b]$. 
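The recursive scheme of Algorithm~\ref{alg:general_rec} can be sketched in a few lines of Python, instantiated here with Simpson's rule and the classical two-panel difference as error estimate (the halving of the tolerance is one common choice for $\tau'$, not prescribed by the scheme itself):

```python
def simpson(f, a, b):
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))

def simpson_err(f, a, b, q):
    """Kuncir-style estimate: difference between one and two panels."""
    m = 0.5 * (a + b)
    return abs(simpson(f, a, m) + simpson(f, m, b) - q)

def integrate_rec(f, a, b, tau, quad=simpson, err=simpson_err):
    """Recursive scheme: accept the local approximation if the error
    estimate meets the tolerance, else bisect and recurse with
    tau' = tau/2 (one common choice for the adjusted tolerance)."""
    q = quad(f, a, b)
    if err(f, a, b, q) < tau:
        return q
    m = 0.5 * (a + b)
    return (integrate_rec(f, a, m, 0.5 * tau, quad, err) +
            integrate_rec(f, m, b, 0.5 * tau, quad, err))

q = integrate_rec(lambda x: x ** 4, 0.0, 1.0, 1e-8)
```

With $\tau' = \tau/2$, the sum of the leaf tolerances of the recursion tree is bounded by $\tau$, so the accepted local errors add up to roughly the requested global tolerance.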
A slightly different approach to \alg{general_rec}, motivated by the desire for a sharper global error estimate and better interval selection criteria --- and partially due to the unavailability of recursion in early computer programming languages --- is shown in Algorithm~\ref{alg:general_nonrec}. In this non-recursive approach, a heap of intervals, sorted by their local error estimates, is maintained (Line~\ref{alg:nonrec_heap}). As long as the sum of the individual error estimates is larger than the required global tolerance $\tau$ (Line~\ref{alg:nonrec_while}), the interval at the top of the heap (\ie the interval with the largest error estimate, Line~\ref{alg:nonrec_top}) is subdivided (Line~\ref{alg:nonrec_bisect}). The resulting subintervals are evaluated (Lines~\ref{alg:nonrec_eval1} to \ref{alg:nonrec_eval2}) and returned to the heap (Lines~\ref{alg:split_first} and \ref{alg:split_last}), and the global integral and global error estimate are updated (Lines~\ref{alg:nonrec_intupdate} and \ref{alg:nonrec_errupdate}). 
\begin{algorithm} \caption{integrate $(f,a,b,\tau)$} \label{alg:general_nonrec} \begin{algorithmic}[1] \STATE $I \leftarrow \mathsf{Q}_n[a,b] \approx \intfx{a}{b}$ \STATE $\varepsilon \leftarrow \varepsilon_0 \approx \left| \mathsf{Q}_n[a,b] - \intfx{a}{b} \right|$ \label{alg:compEps1} \STATE initialize heap $H$ with interval $[a,b]$, integral $\mathsf{Q}_n[a,b]$ and error $\varepsilon_0$ \label{alg:nonrec_heap} \WHILE{$\varepsilon > \tau$} \label{alg:nonrec_while} \STATE $k \leftarrow$ index of interval with largest $\varepsilon_k$ in $H$ \label{alg:nonrec_top} \STATE $m \leftarrow (a_k+b_k)/2$ \label{alg:nonrec_bisect} \STATE $I_\mathsf{left} \approx \intfx{a_k}{m}$ \label{alg:nonrec_eval1} \STATE $I_\mathsf{right} \approx \intfx{m}{b_k}$ \STATE $\varepsilon_\mathsf{left} \approx \left| \mathsf{Q}_n[a_k,m] - \intfx{a_k}{m} \right|$ \label{alg:compEps2} \STATE $\varepsilon_\mathsf{right} \approx \left| \mathsf{Q}_n[m,b_k] - \intfx{m}{b_k} \right|$ \label{alg:compEps3} \label{alg:nonrec_eval2} \STATE $I \leftarrow I - I_k + I_\mathsf{left} + I_\mathsf{right}$ \label{alg:nonrec_intupdate} \STATE $\varepsilon \leftarrow \varepsilon - \varepsilon_k + \varepsilon_\mathsf{left} + \varepsilon_\mathsf{right}$ \label{alg:nonrec_errupdate} \STATE push interval $[a_k,m]$ with integral $I_\mathsf{left}$ and error $\varepsilon_\mathsf{left}$ onto $H$ \label{alg:split_first} \STATE push interval $[m,b_k]$ with integral $I_\mathsf{right}$ and error $\varepsilon_\mathsf{right}$ onto $H$ \label{alg:split_last} \ENDWHILE \RETURN{$I$} \end{algorithmic} \end{algorithm} If the integrand is Riemann integrable and the error estimates are exact, both Algorithm~\ref{alg:general_rec} and Algorithm~\ref{alg:general_nonrec} will converge to the exact integral. It is therefore only failures in the estimation of the integration error that will cause the quadrature algorithms to fail. 
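The heap-based scheme of Algorithm~\ref{alg:general_nonrec} can be sketched as follows (Python's {\tt heapq} is a min-heap, so the error estimates are negated; the Simpson-based rules are again only an illustrative instantiation):

```python
import heapq

def simpson(f, a, b):
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))

def simpson_err(f, a, b, q):
    m = 0.5 * (a + b)
    return abs(simpson(f, a, m) + simpson(f, m, b) - q)

def integrate_heap(f, a, b, tau, quad=simpson, err=simpson_err):
    """Non-recursive scheme: a heap of intervals ordered by local error
    estimate; the worst interval is bisected until the global estimate
    meets the tolerance."""
    q0 = quad(f, a, b)
    e0 = err(f, a, b, q0)
    heap = [(-e0, a, b, q0)]
    I, eps = q0, e0
    while eps > tau:
        ek_neg, ak, bk, qk = heapq.heappop(heap)
        m = 0.5 * (ak + bk)
        ql, qr = quad(f, ak, m), quad(f, m, bk)
        el, er = err(f, ak, m, ql), err(f, m, bk, qr)
        I += ql + qr - qk                 # update global integral
        eps += el + er - (-ek_neg)        # update global error estimate
        heapq.heappush(heap, (-el, ak, m, ql))
        heapq.heappush(heap, (-er, m, bk, qr))
    return I, eps

I_val, eps_val = integrate_heap(lambda x: x ** 4, 0.0, 1.0, 1e-8)
```

In both schemes, the quality of the final result hinges entirely on the local error estimates $\varepsilon_k$.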
It is for this reason that in this review, we will concentrate only on the error estimate \begin{equation*} \varepsilon \approx \left| \mathsf{Q}_n[a,b] - \intfx{a}{b} \right|, \end{equation*} as it is computed in Line~\ref{alg:compEps} of Algorithm~\ref{alg:general_rec} and Lines~\ref{alg:compEps1}, \ref{alg:compEps2} and \ref{alg:compEps3} of Algorithm~\ref{alg:general_nonrec}. We will distinguish between the {\em local} and {\em global} error of an adaptive quadrature routine. During adaptive integration, the interval is subdivided into sub-intervals $[a_k,b_k]$ with $a \leq a_k < b_k \leq b$. This subdivision occurs either recursively (as in Line~\ref{alg:tauPrime} of Algorithm~\ref{alg:general_rec}) or explicitly (as in Lines~\ref{alg:split_first}--\ref{alg:split_last} of Algorithm~\ref{alg:general_nonrec}). The {\em local error} $\varepsilon_k$ of the $k^{\mbox{th}}$ interval $[a_k,b_k]$ and the {\em global error} $\varepsilon$ are defined as \begin{equation} \label{eqn:err_abs} \varepsilon_k = \left| \mathsf{Q}_n[a_k,b_k] - \intfx{a_k}{b_k} \right| \quad \mbox{and} \quad \varepsilon = \left| \sum_k \mathsf{Q}_n[a_k,b_k] - \intfx{a}{b} \right|. \end{equation} The sum of the local errors forms an upper bound for the global error ($\varepsilon \leq \sum_k \varepsilon_k$). We further distinguish between the absolute errors (\ref{eqn:err_abs}), the {\em locally relative error} and the {\em globally relative local error} \begin{equation} \label{eqn:err_rel} \varepsilon_k^{(\mathsf{lrel})} = \left| \frac{\mathsf{Q}_n[a_k,b_k] - \intfx{a_k}{b_k}}{\intfx{a_k}{b_k}} \right|, \quad \varepsilon_k^{(\mathsf{grel})} = \left| \frac{\mathsf{Q}_n[a_k,b_k] - \intfx{a_k}{b_k}}{\intfx{a}{b}} \right|.
\end{equation} We also define the {\em global relative error}, which is bounded by the sum of the globally relative local errors: \begin{equation*} \varepsilon = \frac{\left| \sum_k Q_n[a_k,b_k] - \intfx{a}{b} \right|}{\intfx{a}{b}} \leq \sum_k \left| \frac{Q_n[a_k,b_k] - \intfx{a_k}{b_k}}{\intfx{a}{b}} \right|. \end{equation*} The sum of the {\em locally relative errors}, however, forms no such bound. In the following, we will often refer to the {\em degree} of a quadrature rule. A quadrature rule is of degree $n$ when it integrates all polynomials of degree $\leq n$ exactly, but not all polynomials of degree $n+1$. This is synonymous with the {\em precise degree of exactness} as defined by \citeN{ref:Gautschi2004} or the {\em degree of accuracy} as defined by \citeN{ref:Krommer1998}. If a quadrature rule is of degree $n$, then its {\em order of accuracy} as defined by \citeN{ref:Skeel1993}, to which we will simply refer as its {\em order}, is $n+1$. The goal of this review is to analyze and compare different error estimation techniques {\em qualitatively}, similarly to the analysis by \citeN{ref:Laurie1985}. We will start with an overview of the most significant contributions over the last 50 years. Following this analysis, we will present a new error estimator which overcomes most of the problems observed in previous error estimators. In the following two sections we will discuss existing linear (\sect{linear}) and non-linear (\sect{non-linear}) error estimation techniques\footnote{For a more detailed review, see \cite{ref:Gonnet2009}.}. In \sect{new} a new error estimation technique is presented and its relation to previous error estimators is discussed. In \sect{compare} we will apply the discussed error estimators to a number of test functions to assess their performance. In \sect{conclusions} we discuss these results and try to interpret them qualitatively.
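The notion of degree just defined can be checked numerically by testing monomials (a small sketch; the tolerance $10^{-12}$ and the interval $[0,1]$ are arbitrary choices):

```python
def midpoint(f, a, b):
    return (b - a) * f(0.5 * (a + b))

def simpson(f, a, b):
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))

def degree(quad, nmax=10, tol=1e-12):
    """Precise degree of exactness on [0, 1]: the largest n such that all
    monomials x^k, k <= n, are integrated exactly (up to tol)."""
    for k in range(nmax + 1):
        if abs(quad(lambda x: x ** k, 0.0, 1.0) - 1.0 / (k + 1)) > tol:
            return k - 1
    return nmax
```

The midpoint rule has degree 1 (hence order 2), and Simpson's rule has degree 3 (hence order 4), despite using only one and three nodes respectively.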
\subsection{Laurie's Sharper Error Estimate} \label{sec:laurie1983} \label{sec:favati1991} In 1983, \citeN{ref:Laurie1983} publishes a sharper error estimate based on two quadrature rules $\mathsf{Q}_\alpha[a,b]$ and $\mathsf{Q}_\beta[a,b]$ of degree $\alpha$ and $\beta$ respectively, where $\alpha > \beta$, or $\alpha = \beta$ and $\mathsf{Q}_\alpha[a,b]$ is assumed to be more precise than $\mathsf{Q}_\beta[a,b]$: \begin{equation} \label{eqn:laurie_err} \varepsilon_k = \frac{\left(\mathsf{Q}_\alpha^{(2)} - \mathsf{Q}_\beta^{(2)}\right)\left(\mathsf{Q}_\alpha^{(2)} - \mathsf{Q}_\alpha^{(1)}\right)} {\mathsf{Q}_\beta^{(2)} - \mathsf{Q}_\beta^{(1)} - \mathsf{Q}_\alpha^{(2)} + \mathsf{Q}_\alpha^{(1)}} \end{equation} where the ranges $[a_k,b_k]$ are omitted for simplicity. He shows that this error estimate is valid when \begin{equation} \label{eqn:laurie_conds} \left|\mathsf{Q}_\alpha^{(2)} - \mathsf{Q}_\alpha^{(1)}\right| < \left|\mathsf{Q}_\beta^{(2)} - \mathsf{Q}_\beta^{(1)}\right| \quad \mbox{and} \quad 0 \leq \frac{\mathsf{Q}_\alpha^{(2)} - I}{ \mathsf{Q}_\alpha^{(1)} - I} \leq \frac{\mathsf{Q}_\beta^{(2)} - I}{\mathsf{Q}_\beta^{(1)} - I} < 1. \end{equation} The former can be checked for in practice, yet the latter is impossible to verify since the exact integral $I$ must be known. These two conditions imply that the error of $\mathsf{Q}_\alpha[a,b]$ is smaller than and decreases at a faster rate than that of $\mathsf{Q}_\beta[a,b]$. Laurie suggests a weaker condition that can be checked in practice, namely replacing $I$ by $\mathsf{Q}_\alpha^{(2)}[a_k,b_k] + \varepsilon_k$ in \eqn{laurie_conds}, resulting in \begin{equation} \label{eqn:laurie_cond3} 0 \leq \frac{\mathsf{Q}_\alpha^{(2)} - \mathsf{Q}_\beta^{(2)}}{\mathsf{Q}_\alpha^{(1)} - \mathsf{Q}_\beta^{(1)}} < 1. 
\end{equation} \citeN{ref:Espelid1989} show, however, that this weaker condition is often satisfied when \eqn{laurie_conds} is not, which can lead to bad error estimates\footnote{Espelid and S{\o}revik show that this is the case when using the 10-point Gauss rule and its 21-point Kronrod extension for $\mathsf{Q}_\beta^{(1)}$ and $\mathsf{Q}_\alpha^{(1)}$ respectively and integrating $\int_1^2 0.1/(0.01+(x-\lambda)^2)\,\mbox{d}x$ for $1 \leq \lambda \leq 2$.}. \begin{figure} \centerline{\epsfig{file=laurie.001.eps,width=0.8\textwidth}} \caption{The error of the quadrature rules $Q_\alpha^{(m)}$ (solid curve) and $Q_\beta^{(m)}$ (dotted curve) as a function of the number of panels or subdivisions $m$.} \label{fig:laurie} \end{figure} The error estimate itself, based on these assumptions, is best explained graphically (see Fig.~\ref{fig:laurie}). The errors of both rules $Q_\alpha^{(m)}[a,b]$ and $Q_\beta^{(m)}[a,b]$ are assumed to decrease with the increasing number of panels or subdivisions $m$ as \begin{equation*} \mathsf{Q}^{(m)}_\eta - I = \kappa_\eta \left( \frac{b-a}{m} \right)^{\eta+2} f^{(\eta+1)}(\xi), \quad \xi \in [a,b]. \end{equation*} We define the distances $\varepsilon$, $d_{2m}$, $d_\alpha$ and $d_m$ using \begin{equation} \begin{array}{ll} \mathsf{Q}_\alpha^{(2m)} - I = \varepsilon, & \mathsf{Q}_\beta^{(2m)} - I = \varepsilon + d_{2m}, \\ \mathsf{Q}_\alpha^{(m)} - I = \varepsilon + d_\alpha, & \mathsf{Q}_\beta^{(m)} - I = \varepsilon + d_\alpha + d_m. \end{array}\label{eqn:laurie_dists} \end{equation} Inserting these terms into the second inequality in \eqn{laurie_conds}, we obtain \begin{equation} \label{eqn:laurie_ineq} \frac{\varepsilon}{\varepsilon + d_\alpha} \ \le \ \frac{\varepsilon + d_{2m}}{\varepsilon + d_\alpha + d_{m}} \quad \Longrightarrow \quad \varepsilon \ \le \ \frac{d_\alpha d_{2m}}{d_{2m} - d_{m}}.
\end{equation} Resolving the distances using \eqn{laurie_dists}, we see that this bound is identical to the error estimate proposed by Laurie \eqn{laurie_err}. In 1991, \citeN{ref:Favati1991b} publish a similar error estimator, based on four quadratures $\mathsf{Q}_\alpha[a,b]$, $\mathsf{Q}_\beta[a,b]$, $\mathsf{Q}_\gamma[a,b]$ and $\mathsf{Q}_\delta[a,b]$ of degree $\alpha > \beta > \gamma > \delta$ that satisfy the relations \begin{multline} \label{eqn:favati_rel} \left|I-\mathsf{Q}_\alpha\right| \ \leq \ \left|I - \mathsf{Q}_\delta\right| , \quad \left|I-\mathsf{Q}_\alpha\right| \ \leq \ \left|I - \mathsf{Q}_\gamma\right| , \\ \left|I-\mathsf{Q}_\alpha\right| \ \leq \ \left|I - \mathsf{Q}_\beta\right|, \quad \frac{\left|I-\mathsf{Q}_\alpha\right|}{\left|I-\mathsf{Q}_\gamma\right|} \leq \frac{\left|I-\mathsf{Q}_\beta\right|}{\left|I-\mathsf{Q}_\delta\right|}. \end{multline} For any ordering of the four estimates $\mathsf{Q}_\alpha$, $\mathsf{Q}_\beta$, $\mathsf{Q}_\gamma$ and $\mathsf{Q}_\delta$ around the exact integral $I$, we can define the distances $d_\alpha = |\mathsf{Q}_\alpha-I|$, $d_\beta$, $d_\gamma$ and $d_\delta$ depending on the configuration of the estimates around $I$, similarly to \eqn{laurie_dists}. The algorithm therefore first makes a decision as to which configuration is actually correct based on the differences between the actual estimates. Based on this decision, it computes the $d_\alpha$, $d_\beta$, $d_\gamma$ and $d_\delta$ or bounds them using the first three relations in \eqn{favati_rel} and inserts them into the final relation in \eqn{favati_rel} to extract an upper bound for $d_\alpha = |I-\mathsf{Q}_\alpha|$. Favati \ea test this algorithm on a number of integrands and show that the milder conditions in \eqn{favati_rel}, which do not require that successive estimates decrease monotonically, are satisfied more often than those of Laurie in \eqn{laurie_conds}. 
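Laurie's estimate \eqn{laurie_err} can be sketched with a hypothetical pairing of Simpson's rule as $\mathsf{Q}_\alpha$ and the trapezoidal rule as $\mathsf{Q}_\beta$ (chosen only for illustration; for the smooth integrand below, the verifiable condition in \eqn{laurie_conds} happens to hold):

```python
import math

def trap(f, a, b, m):
    """Compound trapezoidal rule on m panels (stands in for Q_beta)."""
    h = (b - a) / m
    xs = [a + i * h for i in range(m + 1)]
    return h * (0.5 * f(xs[0]) + sum(f(x) for x in xs[1:-1]) + 0.5 * f(xs[-1]))

def simp(f, a, b, m):
    """Compound Simpson rule on m panels (stands in for Q_alpha)."""
    h = (b - a) / m
    total = 0.0
    for i in range(m):
        l, r = a + i * h, a + (i + 1) * h
        total += (r - l) / 6.0 * (f(l) + 4.0 * f(0.5 * (l + r)) + f(r))
    return total

def laurie(f, a, b):
    """Laurie-style sharper error estimate for Q_alpha^(2), after eqn
    (laurie_err); the absolute value is taken for a usable magnitude."""
    qa1, qa2 = simp(f, a, b, 1), simp(f, a, b, 2)
    qb1, qb2 = trap(f, a, b, 1), trap(f, a, b, 2)
    eps = abs((qa2 - qb2) * (qa2 - qa1) / (qb2 - qb1 - qa2 + qa1))
    return eps, qa2

est, qa2 = laurie(math.exp, 0.0, 1.0)
true_err = abs(qa2 - (math.e - 1.0))   # exact integral of exp on [0, 1]
```

For this pairing the estimate bounds the true error of $\mathsf{Q}_\alpha^{(2)}$ while remaining far sharper than the raw difference $|\mathsf{Q}_\alpha^{(2)} - \mathsf{Q}_\beta^{(2)}|$.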
\section{Linear Error Estimators} \label{sec:linear} In this section we will look at a number of {\em linear error estimators}. We define a linear error estimator as an estimate computed from a linear combination of evaluations of the integrand. Such estimators can be quadrature-like rules, linear combinations or differences of quadrature rules or quantities computed using linear extrapolation techniques, \eg the Romberg scheme. \input{nc-based.tex} \input{fd-based.tex} \input{coeff-based.tex} \input{gk-based.tex} \subsection{Summary} Summarizing, we can group the different linear error estimators in the following categories: \begin{enumerate} \item $\varepsilon \sim \left| \mathsf{Q}_n^{(m_1)}[a,b] - \mathsf{Q}_n^{(m_2)}[a,b] \right|$: Error estimators based on the difference between two estimates of the same degree yet of different multiplicity \cite{ref:Kuncir1962,ref:McKeeman1962,ref:McKeeman1963,ref:McKeeman1963b,ref:Lyness1969,ref:Lyness1970,ref:Malcolm1975,ref:Forsythe1977}. \item $\varepsilon \sim \left| \mathsf{Q}_{n_1}[a,b] - \mathsf{Q}_{n_2}[a,b] \right|$: Error estimators based on the difference between two estimates of different degree \cite{ref:Patterson1973,ref:Piessens1973,ref:Piessens1983,ref:Hasegawa2007,ref:Berntsen1984,ref:Favati1991,ref:Gander2001,ref:OHara1969}. \item $\varepsilon \sim \left| f^{(n)}(\xi) \right|$: Error estimators based on directly approximating the derivative in the analytic error term \cite{ref:Gallaher1967,ref:Garribba1978,ref:Ninomiya1980}. \item $\varepsilon \sim \left|\tilde{c}_n\right|$: Error estimators based on the estimate of the highest-degree coefficient of the function relative to some orthogonal base \cite{ref:OHara1968,ref:OHara1969,ref:Oliver1972,ref:Berntsen1991,ref:Espelid1992,ref:Espelid2002,ref:Espelid2004,ref:Espelid2004b,ref:Espelid2007}. \end{enumerate} Already in 1985, \citeN{ref:Laurie1985} shows that the first three categories are, in essence, {\em identical}. 
Consider Kuncir's error estimate (see \sect{kuncir1962}, \eqn{kuncir_err}) from the {\bf first} category (without the relative scaling), which can be viewed as a 5-point ``rule'' (or linear functional) over the nodes used by $\mathsf{S}^{(1)}[a,b]$ and $\mathsf{S}^{(2)}[a,b]$. Since both approximations integrate polynomials of up to degree 3 exactly, their difference, when applied to a polynomial of up to degree 3, will be zero. When applied to a polynomial of degree 4 or higher, the estimates will, in all but pathological cases, differ. This is, up to a constant factor, {\em exactly} what the {\em $4$th divided difference} over the same 5 nodes computes\footnote{ Note that this is also, up to a constant factor, the definition of a null-rule, as used by Berntsen and Espelid (see \sect{berntsen1991}). \citeN{ref:Lyness1965}, who originally introduced the concept of null-rules, creates them explicitly from the difference of two quadrature rules, as is done in these error estimates implicitly.}. The same can be said of error estimates from the {\bf second} category, such as the one used by Piessens (see \sect{piessens1973}) where the Gauss quadrature rule $\mathsf{G}_n[a,b]$ integrates all polynomials of degree up to $2n-1$ exactly and its Kronrod extension $\mathsf{K}_{2n+1}[a,b]$ integrates all polynomials of degree up to $3n+1$ exactly. Since the approximations computed by these rules differ only for polynomials of degree $2n$ and higher, the combined ``rule'' over the $2n+1$ points behaves just as the {\em $2n$th divided difference} would. In these cases, the divided differences are {\em unique}\footnote{ Not all error estimators in these categories, though, are identical up to a constant factor to the highest-degree divided differences over the same points. McKeeman's error estimator (see \sect{mckeeman1962}), for instance, approximates a $4$th divided difference over 7 points, which is neither unique nor of the highest-possible degree.
The same can be said of Forsythe, Malcolm and Moler's {\tt QUANC8} (see \sect{lyness1969}) and Patterson's successive Kronrod extensions (see \sect{patterson1973}).} (\ie the $n$th difference over $n+1$ points), as are the quadrature rules. They therefore {\em differ only by a constant factor}. As a consequence, the first and second categories are both equivalent to the {\bf third} category, in which the lowest-degree derivative of the error expansion is approximated explicitly. In the {\bf fourth} and final category we again find finite differences, namely in Berntsen and Espelid's null rules (see \sect{berntsen1991}), in which the coefficients $e_k$ relative to an orthogonal base are computed (see (\ref{eqn:null_interp})). The highest-degree coefficient $e_{n-1}$, computed with the $(n-1)\st$ null rule over $n$ nodes is, as Berntsen and Espelid themselves note in \cite{ref:Berntsen1991}, identical up to a constant factor to the $(n-1)\st$ divided difference over the same nodes. This value is combined with the $(n-2)\nd$ divided difference (see (\ref{eqn:null_ratios})), itself identical only up to a linear factor, and used as an error estimate. The same goes for the coefficients relative to {\em any} base computed over $n$ points, such as the coefficients $\tilde{c}_i$ of the Chebyshev polynomials used by O'Hara and Smith (see \sect{ohara1969}) and Oliver (see \sect{oliver1972}). The ``rule'' used to compute the highest-degree coefficients (\eqn{ohara_err3}) is identical up to a constant factor to the $n$th divided difference over the $n+1$ nodes used. While O'Hara and Smith use the highest-degree coefficient directly, Oliver uses $\mathsf{K}^3|\tilde{c}_{n-4}|$ (see (\ref{eqn:oliver_K}) and (\ref{eqn:oliver_err})), which is related (\ie no longer identical up to a constant factor) to the $(n-4)$th divided difference.
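This equivalence is easy to check numerically. The following Python sketch (ours, not from the original sources; all names are assumptions) compares the un-scaled difference $\mathsf{S}^{(1)} - \mathsf{S}^{(2)}$ with the $4$th divided difference over the same 5 nodes; the ratio of the two functionals is the same constant regardless of the integrand:

```python
def simpson_1(f, a, b):
    """Simpson's rule S^(1) over [a, b]."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def simpson_2(f, a, b):
    """Compound Simpson's rule S^(2): S^(1) applied to each half of [a, b]."""
    m = (a + b) / 2
    return simpson_1(f, a, m) + simpson_1(f, m, b)

def divided_difference(xs, fs):
    """The (len(xs)-1)-th divided difference over the nodes xs."""
    if len(xs) == 1:
        return fs[0]
    return (divided_difference(xs[1:], fs[1:])
            - divided_difference(xs[:-1], fs[:-1])) / (xs[-1] - xs[0])

# The 5 nodes shared by S^(1) and S^(2) on [0, 1].
nodes = [0.0, 0.25, 0.5, 0.75, 1.0]
ratios = []
for f in (lambda x: x ** 4, lambda x: x ** 5 + 3 * x ** 2 - 1):
    diff = simpson_1(f, 0.0, 1.0) - simpson_2(f, 0.0, 1.0)
    dd = divided_difference(nodes, [f(x) for x in nodes])
    ratios.append(diff / dd)
```

Both integrands yield the same ratio (the terms of degree 3 and below drop out of both functionals), confirming that the difference of the two Simpson estimates is the $4$th divided difference up to a constant factor.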
We therefore establish that {\em all} linear error estimators presented in this section are equivalent in that they all use one or more divided difference approximations of the higher derivatives of the integrand. The quality of the error estimate therefore depends on the quality of these approximations. In these estimates, problems may occur when the difference between two estimates or the magnitude of the computed coefficients is {\em accidentally small} \ie the approximations used to compute the error estimate are too imprecise, resulting in a false small error estimate. This is often the case near singularities and discontinuities where the assumptions on which the error estimate is based, \eg continuity and/or smoothness, do not hold. \subsection{Early Error Estimators Based on Rules of Equal Degree} \label{sec:nc-based} \label{sec:first} \label{sec:kuncir1962} \label{sec:mckeeman1962} \label{sec:mckeeman1963} \label{sec:mckeeman1963b} \label{sec:lyness1969} \label{sec:garribba1978} \label{sec:gander2001a} There seems to be some confusion as to who actually published the first adaptive quadrature algorithm. \citeN{ref:Davis1967} cite the works of \citeN{ref:Villars1956}, \citeN{ref:Henriksson1961} and Kuncir\index{Kuncir} (see \sect{kuncir1962}). Although no explicit attribution is given, Henriksson's algorithm seems to be an unmodified {\small ALGOL}-implementation of the algorithm described by Villars which is, as the author himself states, only a slight modification of a routine developed by \citeN[cited in \citeNP{ref:Villars1956}]{ref:Morrin1955} in 1955. These three algorithms are more reminiscent of ODE-solvers\index{ODE-solvers}, integrating the function stepwise from left to right using Simpson's rule and adapting (doubling or halving) the step-size whenever an estimate converges or fails to do so. In doing so they effectively discard function evaluations and so lose information on the structure of the integrand. 
We will therefore not consider them to be ``genuine'' adaptive integrators. In 1962, \citeN{ref:Kuncir1962} publishes the first adaptive quadrature routine\footnote{ Although Kuncir predates McKeeman\index{McKeeman} by about half a year, many publications \cite{ref:Espelid2007,ref:Espelid2002,ref:Espelid2004,ref:Espelid2003,ref:Berntsen1991,ref:Malcolm1975} credit McKeeman with having published the first adaptive integrator. Interestingly enough, the very similar works of Kuncir and McKeeman were both published in the same journal (Communications of the ACM) in the same year (1962) in different issues of the same volume (Volume 5), both edited by the same editor (J.H. Wegstein). This duplication of efforts does not seem to have been noticed at the time. } following the scheme in Algorithm~\ref{alg:general_rec} and using the {\em locally relative} local error estimate \begin{equation} \label{eqn:kuncir_err} \varepsilon_k = \left| \frac{ \mathsf{S}^{(1)}[a_k,b_k] - \mathsf{S}^{(2)}[a_k,b_k] }{ \mathsf{S}^{(2)}[a_k,b_k] }\right| \end{equation} where $\mathsf{S}^{(1)}[a_k,b_k]$ is Simpson's rule applied over the entire interval $[a_k,b_k]$ and $\mathsf{S}^{(2)}[a_k,b_k]$ is Simpson's rule applied on the sub-intervals $[a_k,\frac{a_k+b_k}{2}]$ and $[\frac{a_k+b_k}{2},b_k]$. If the error estimate is below the required tolerance, the estimate $\mathsf{S}^{(2)}[a_k,b_k]$ is used as the local approximation to the integral. The error estimate is based on the assumption that if the estimate $\mathsf{S}^{(2)}[a_k,b_k]$ is a better approximation of the integral than $\mathsf{S}^{(1)}[a_k,b_k]$, the difference between both estimates will be a good estimate of the difference between $\mathsf{S}^{(1)}[a_k,b_k]$ and the actual integral.
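Kuncir's scheme can be sketched in a few lines of Python (an illustration under assumed names, not Kuncir's original {\small ALGOL}; unlike a careful implementation, function values are re-evaluated rather than passed down the recursion, and a depth cap guards against the locally relative test blowing up when $\mathsf{S}^{(2)}$ is near zero):

```python
import math

def kuncir_quad(f, a, b, tau, depth=0):
    """Recursive bisection with Kuncir's locally relative error test (sketch)."""
    m = (a + b) / 2
    # S^(1): Simpson's rule over [a, b].
    s1 = (b - a) / 6 * (f(a) + 4 * f(m) + f(b))
    # S^(2): Simpson's rule over each half of [a, b].
    s2 = (b - a) / 12 * (f(a) + 4 * f((a + m) / 2) + 2 * f(m)
                         + 4 * f((m + b) / 2) + f(b))
    # Accept S^(2) when the locally relative difference is below the tolerance.
    if abs((s1 - s2) / s2) <= tau or depth >= 30:
        return s2
    return (kuncir_quad(f, a, m, tau, depth + 1)
            + kuncir_quad(f, m, b, tau, depth + 1))

approx = kuncir_quad(math.sin, 0.0, math.pi, 1e-8)
```

For $\int_0^\pi \sin x \,\mathrm{d}x = 2$ the sketch converges quickly; the division by $\mathsf{S}^{(2)}$ already hints at the problem with locally relative estimates on sub-intervals where the integrand approaches zero, discussed below.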
Replacing every evaluation of the integrand in the un-scaled error estimate \eqn{kuncir_err} with an appropriate $f(a+h)$ and expanding it in a Taylor expansion\index{Taylor expansion} around $a$, as is done in \cite{ref:Gander2006}, we obtain \begin{equation} \label{eqn:kuncir_err_taylor} \mathsf{S}^{(1)}[a_k,b_k] - \mathsf{S}^{(2)}[a_k,b_k] = \frac{(b_k-a_k)^5}{3072} f^{(4)}(\xi), \quad \xi \in [a_k,b_k]. \end{equation} Inserting the Taylor expansion into the {\em actual} error gives a similar result: \begin{equation} \label{eqn:kuncir_int_taylor} \mathsf{S}^{(2)}[a_k,b_k] - \intfx{a_k}{b_k} = \frac{(b_k-a_k)^5}{46\,080} f^{(4)}(\xi), \quad \xi \in [a_k,b_k]. \end{equation} If we assume that $f^{(4)}(x)$ is more or less constant for $x \in [a_k,b_k]$ and both \eqn{kuncir_err_taylor} and \eqn{kuncir_int_taylor} therefore have similar values for $f^{(4)}(\xi)$, then the error estimate is actually 15 times larger than the actual integration error. This factor of 15 might seem large, but in practice it is a good guard against bad estimates when $f^{(4)}(x)$ is {\em not} constant for $x \in [a_k,b_k]$. In the same year, \citeN{ref:McKeeman1962} publishes a similar recursive algorithm (following Algorithm~\ref{alg:general_rec}, yet using trisection\index{trisection} instead of bisection) using the {\em globally relative} local error estimate \begin{equation} \label{eqn:mckeeman_err} \varepsilon_k = \frac{1}{\hat{I}} \left| \mathsf{S}^{(1)}[a_k,b_k] - \mathsf{S}^{(3)}[a_k,b_k] \right| \end{equation} where $\hat{I}$ is an approximation to the global integral of the absolute value of $f(x)$. 
Using the same analysis as in \eqn{kuncir_err_taylor}, we can compute the ratio of the computed and exact errors and obtain \begin{equation} \label{eqn:mckeeman_ratio} \left| \frac{\mathsf{S}^{(1)}[a,b] - \mathsf{S}^{(3)}[a,b]}{\mathsf{S}^{(3)}[a,b] - \intfx{a}{b}} \right| \approx 80, \end{equation} \ie the error is overestimated by a factor of 80 for sufficiently smooth\footnote{In the following, we will use the rather loose expression ``sufficiently smooth'' when, for a quadrature rule of order $n$, the $n$th derivative of the integrand is sufficiently close to constant in the integration interval, such that the error estimate will not fail.} integrands. The use of a {\em globally relative} local error estimate is an important improvement. Besides forming a correct upper bound for the global error, it does not run into problems in sub-intervals where the integrand approaches 0, causing any {\em locally relative} error estimate to approach infinity. The use of an error relative to the global integral of the {\em absolute} value of the function is a good guard against cancellation or {\em smearing} \cite{ref:Henrici1982} when summing up the integrals over the sub-intervals. A year later, \citeN{ref:McKeeman1963} publish a non-recursive\footnote{Their algorithm is non-recursive in the sense that an explicit stack is maintained, analogous to the one generated in memory during recursion, and not as in the scheme presented in Algorithm~\ref{alg:general_nonrec}.} version of the integrator with a better local tolerance computation and shortly thereafter, McKeeman\index{McKeeman} publishes another recursive adaptive integrator \cite{ref:McKeeman1963b} based on Newton-Cotes rules\index{Newton-Cotes quadrature} over a set of $n$ points, where $n$ is a user-defined parameter.
In the same vein as the previous integrator, the following error estimate is used \begin{equation} \label{eqn:mckeeman_err1963b} \varepsilon_k = \frac{1}{\hat{I}_d} \left| \mathsf{NC}^{(1)}_{n}[a_k,b_k] - \mathsf{NC}^{(n-1)}_{n}[a_k,b_k] \right|. \end{equation} At every recursion level, the interval is subdivided into $n-1$ panels and, if the tolerance is met, the value of $\mathsf{NC}^{(n-1)}_{n}[a,b]$ is used as an approximation to the integral. Replacing the evaluations of the integrand $f(a+h)$ by their Taylor expansions\index{Taylor expansion} around $a$ and inserting them into the ratio of the computed and exact error as in \eqn{mckeeman_ratio}, we can see that for $n=3$ (\ie applying Simpson's rule), we overestimate the actual error by a factor of $15$. For $n=4$, this factor grows to $80$, as observed for McKeeman's first integrator (see \eqn{mckeeman_ratio}). For $n=5$ it is $4\,095$ and for $n=8$, the maximum allowed in the algorithm, it is $5\,764\,800$ (7 decimal digits!), making this a somewhat strict estimate both in theory and in practice. In 1969, \citeN{ref:Lyness1969} publishes the first rigorous analysis of McKeeman's integrator and implements a revised algorithm, {\tt SQUANK}\cite{ref:Lyness1970}. He suggests using the absolute local error instead of the globally relative local error, bisection instead of trisection and includes the resulting factor of $15$ in the error estimate\footnote{Note that McKeeman's original error estimate was off by a factor of 80 (see \eqn{mckeeman_ratio}). 
The factor of 15 comes from using bisection instead of trisection.}: \begin{equation} \label{eqn:lyness_err} \varepsilon_k = \frac{1}{15} \left| \mathsf{S}^{(1)}[a_k,b_k] - \mathsf{S}^{(2)}[a_k,b_k] \right| \end{equation} He further suggests using Romberg extrapolation\index{Romberg extrapolation} to compute the five-node Newton-Cotes formula\index{Newton-Cotes quadrature} from the two Simpson's approximations\footnote{Interestingly enough, this was already suggested by \citeN{ref:Villars1956} and implemented by \citeN{ref:Henriksson1961}, but apparently subsequently forgotten.}: \begin{equation} \label{eqn:lyness_romb} \mathsf{NC}^{(1)}_5[a,b] = \frac{1}{15}\left( 16\mathsf{S}^{(2)}[a,b] - \mathsf{S}^{(1)}[a,b] \right). \end{equation} This is a departure from previous methods, in which the error estimate and the integral approximation were of the same degree, making it impracticable to relate the error estimate to the integral approximation without making additional assumptions on the smoothness of the integrand. In a 1975 paper, \citeN{ref:Malcolm1975} present a {\em global} version of {\tt SQUANK} called {\tt SQUAGE}\index{SQUAGE@{\tt SQUAGE}} (Simpson's Quadrature Used Adaptively Global Error) along the lines of Algorithm~\ref{alg:general_nonrec}, and conclude that global adaptivity allows for better control of the error estimate\footnote{In their paper, Malcolm and Simpson state (erroneously) that Lyness' {\tt SQUANK} uses $S^{(2)}[a,b]$ as its approximation to the integral and, as their results suggest, $S^{(2)}[a,b]$ was also used in their implementation thereof. This omission, however, has no influence on their results or the conclusions they draw in their paper as they only consider the number of intervals generated by the global and local error estimates, and not the accuracy of the final result.}. 
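The extrapolation in \eqn{lyness_romb} is easily verified numerically: the combination $(16\mathsf{S}^{(2)} - \mathsf{S}^{(1)})/15$ reproduces, up to rounding error, the closed 5-point Newton-Cotes rule over the same nodes, and the scaled estimate \eqn{lyness_err} tracks the actual error of $\mathsf{S}^{(2)}$ for smooth integrands. A small sketch (ours, with assumed names):

```python
import math

def simpson(f, a, b):
    """Simpson's rule over [a, b]."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def boole(f, a, b):
    """Closed 5-point Newton-Cotes rule NC_5^(1) over [a, b]."""
    h = (b - a) / 4
    x = [a + i * h for i in range(5)]
    return (b - a) / 90 * (7 * f(x[0]) + 32 * f(x[1]) + 12 * f(x[2])
                           + 32 * f(x[3]) + 7 * f(x[4]))

a, b = 0.0, 1.0
f = math.exp
s1 = simpson(f, a, b)
s2 = simpson(f, a, (a + b) / 2) + simpson(f, (a + b) / 2, b)

# Romberg step: recombine S^(1) and S^(2) into the 5-point Newton-Cotes rule.
extrapolated = (16 * s2 - s1) / 15
# Lyness-style scaled error estimate for S^(2).
err_est = abs(s1 - s2) / 15
```

For $f = e^x$ on $[0,1]$ the extrapolated value agrees with the directly evaluated 5-point rule to machine precision, and the scaled estimate is within a few percent of the true error of $\mathsf{S}^{(2)}$.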
In 1977, \citeN{ref:Forsythe1977} publish the recursive quadrature routine {\tt QUANC8}, which uses essentially the same basic error estimate as Lyness \eqn{lyness_err}, yet using Newton-Cotes rules over 9 points, resulting in a scaling factor of 1023 instead of 15 (see \eqn{lyness_err}). Analogously to \eqn{lyness_romb}, the two quadrature rules are combined using Romberg extrapolation to compute an $11$th degree approximation which is used as the approximation to the integral\footnote{ This routine was integrated into {\small MATLAB} as {\tt quad8}, albeit without the Romberg extrapolation, and has since been replaced by {\tt quadl} as of Version 7, Release 14 \cite{ref:Mathworks2005}.}. The same approach, although effectively evaluated differently, was later re-used by \citeN{ref:Garribba1978} in 1978 in their integrator {\tt SNIFF} for Gauss-Legendre quadrature rules. They do not use Romberg extrapolation to refine the approximation of the integral, but use the error estimate to guess the optimal width of the sub-intervals in each unconverged interval. Finally, in a 2001 paper, \citeN{ref:Gander2001} present two recursive adaptive quadrature routines. The first routine, {\tt adaptsim}, is quite similar to Lyness' {\tt SQUANK} (see \sect{lyness1969}). It computes the approximations $\mathsf{S}^{(1)}[a,b]$ and $\mathsf{S}^{(2)}[a,b]$\index{compound Simpson's rule} and uses them to extrapolate $\mathsf{NC}_5^{(1)}[a,b]$ as in \eqn{lyness_romb}. The {\em globally relative} local error estimate, however, is then computed as \begin{equation} \label{eqn:gander_err} \varepsilon_k = \left| \mathsf{NC}^{(1)}_5[a_k,b_k] - \mathsf{S}^{(2)}[a_k,b_k] \right| / | \hat{I} | \end{equation} where $\hat{I}$ is a rough approximation to the global integral computed over a set of random nodes. \section{A New Error Estimator} \label{sec:new} In the following, we will present a new type of error estimator introduced by the author in \cite{ref:Gonnet2010}.
For the construction of this error estimator, we will begin with an explicit representation of the integrand. In almost all previously presented error estimators, the integrand itself is represented only by its approximated integral or, in the best of cases (see \sect{ohara1969}), only a few higher-order coefficients relative to some base. By definition, every interpolatory quadrature rule implicitly constructs an interpolation polynomial $g_n(x)$ of degree $n-1$ of the integrand $f(x)$ at the nodes $x_i$, $i=1\dots n$ and computes the integral of the interpolation. This equivalence is easily demonstrated, as is done in many textbooks on numerical analysis (\cite{ref:Stiefel1961,ref:Rutishauser1976,ref:Gautschi1997,ref:Schwarz1997,ref:Ralston1978} to name a few)\footnote{ If we consider the Lagrange interpolation $g_n(x)$ of the integrand and integrate it, we obtain \begin{equation*} \int_a^bg_n(x)\,\mbox{d}x = \int_a^b \sum_{i=1}^{n} \ell_i(x) f(x_i) \,\mbox{d}x = \sum_{i=1}^{n} f(x_i) \int_a^b \ell_i(x) \,\mbox{d}x = \sum_{i=1}^{n} f(x_i) w_i \end{equation*} where the $\ell_i(x)$ are the Lagrange polynomials and the $w_i$ are the weights of the resulting quadrature rule. }. For our new error estimate, we will represent the interpolant $g_n(x)$ {\em explicitly} as a weighted sum of orthonormal Legendre polynomials \begin{equation} \label{eqn:new_g2} g_n(x) = \sum_{k=0}^{n-1} c_k p_k(x). \end{equation} The interpolant $g_n(x)$ interpolates the integrand $f(x)$ on the transformed interval from $[a,b]$ to $[-1,1]$ at the nodes $x_i$, $i=1\dots n$: \begin{equation} \label{eqn:new_interval} g_n(x_i) = \hat{f}(x_i) = f\left( \frac{a+b}{2} - \frac{a-b}{2}x_i \right), \quad x_i \in [-1,1].
\end{equation} Given the function values $\mathbf{f} = (\hat{f}(x_1),\hat{f}(x_2),\dots,\hat{f}(x_n))^\mathsf{T}$, we can compute the vector of coefficients $\mathbf{c} = (c_0,c_1,\dots,c_{n-1})^\mathsf{T}$ by solving the linear system of equations \begin{equation} \label{eqn:function_linsys} \mathbf{P} \mathbf{c} = \mathbf{f} \end{equation} where the matrix $\mathbf{P}$ with $P_{ij}=p_j(x_i)$ on the left-hand side is a {\em Vandermonde-like} matrix. The naive solution using Gaussian elimination is somewhat costly and may be unstable \cite{ref:Gautschi1975}. However, several algorithms exist to solve this problem stably in \oh{n^2} operations for orthogonal polynomials satisfying a three-term recurrence relation \cite{ref:Bjorck1970,ref:Higham1988,ref:Higham1990,ref:Gonnet2008b}. Given such a representation as in (\ref{eqn:new_g2}), the integral of $g_n(x)$ can be computed as \begin{equation} \int_{-1}^1 g_n(x)\dx \ = \ \sum_{k=0}^{n-1} c_k \int_{-1}^1p_k(x)\dx \ = \ \sum_{k=0}^{n-1} c_k \omega_k = \boldsymbol \omega^\mathsf{T} \mathbf c. \label{eqn:new_int2} \end{equation} Using orthonormal Legendre polynomials, the weights are simply $\boldsymbol \omega^\mathsf{T} = ( \sqrt{2} , 0 , \dots , 0 )$, since only $p_0(x) = 1/\sqrt{2}$ has a non-zero integral over $[-1,1]$. We can formulate the integral approximation as the scalar product of the vector of coefficients $\mathbf c$ with a vector of weights $\boldsymbol \omega$: \begin{equation} \mathsf{Q}_n[a,b] = \frac{(b-a)}{2}\boldsymbol \omega^\mathsf{T} \mathbf c. \label{eqn:new_int3} \end{equation} Another useful feature of such a representation is that it can be easily transformed to a sub-interval. Let $c_i$, $i=0\dots n-1$ be the coefficients of the interpolation $g_n(x)$ in the interval $[a,b]$.
Given the matrix $\mathbf T^{(\ell)}$ with entries \begin{equation} \label{eqn:new_Tl} T^{(\ell)}_{i,j} = \int_{-1}^1 p_i(x) \, p_j\left(\frac{x-1}{2}\right) \dx \end{equation} we can compute the coefficients $c^{(\ell)}_i$, $i=0 \dots n-1$ of the interpolation $g^{(\ell)}_n(x)$ over the left half of the interval $[a,(a+b)/2]$ using $\mathbf c^{(\ell)} = \mathbf T^{(\ell)} \mathbf c$ where the resulting polynomial $g^{(\ell)}_n(x)$ over $[-1,1]$ is identical to $g_n(x)$ over $[-1,0]$ ($g^{(\ell)}_n(x) = g_n\left(\frac{x-1}{2}\right)$, $x \in [-1,1]$). Analogously, we can create the matrix $\mathbf T^{(r)}$ such that $\mathbf c^{(r)} = \mathbf T^{(r)} \mathbf c$ are the coefficients of the right half of $g_n(x)$ transformed to $[-1,1]$. Such upper-triangular matrices can be constructed to transform $g_n(x)$ to any sub-interval. A final useful feature is that given the coefficients $c_i$, $i=0\dots n-1$ of any interpolation $g_n(x)$, we can compute its $L_2$-norm using Parseval's theorem: \begin{equation} \left[ \int_{-1}^1 g_n(x)^2 \dx\right]^{1/2} \ = \ \left[\sum_{i=0}^{n-1}c_i^2\right]^{1/2} \ = \ \|\mathbf c\|_2 \label{eqn:l2} \end{equation} which is simply the Euclidean norm of the vector of coefficients $\mathbf c$. In the following, we will use $\|\cdot\|$ to denote the 2-norm. Instead of constructing our error estimate by approximating the difference of the {\em integral} of the interpolation $g_n(x)$ to the integral of the integrand $f(x)$ directly, as is done in practically all the methods presented in \sect{linear} and \sect{non-linear}, we will consider the $L_2$-norm of the difference between the integrand and its interpolant: \begin{equation} \label{eqn:new_err} \varepsilon = \frac{b-a}{2} \left[ \int_{-1}^1 \left( \hat{f}(x) - g_n(x) \right)^2 \dx \right]^{1/2}.
\end{equation} The proposed error estimate in \eqn{new_err} is, save for a constant factor of $\sqrt{2}$, an upper bound of the integration error of the interpolant $g_n(x)$\footnote{This can be shown using the Cauchy-Schwarz inequality \begin{equation*} \left| \int_{-1}^1 \phi(x) \psi(x)\dx \right|^2 \leq \int_{-1}^1 \left|\phi(x)\right|^2\dx \int_{-1}^1 \left|\psi(x)\right|^2\dx. \end{equation*} For $\psi(x) = 1$ we obtain \begin{equation*} \left| \int_{-1}^1 \phi(x)\dx \right|^2 \leq 2 \int_{-1}^1 \left|\phi(x)\right|^2\dx, \end{equation*} and finally \begin{equation*} \left| \int_{-1}^1 \phi(x)\dx \right| \leq \sqrt{2} \left(\int_{-1}^1 \left|\phi(x)\right|^2\dx\right)^{1/2}. \end{equation*} } \begin{equation*} \frac{b-a}{2} \left| \mathsf{Q}_n[-1,1] - \int_{-1}^{1}\hat{f}(x)\dx \right| = \frac{b-a}{2} \left| \int_{-1}^1(g_n(x) - \hat{f}(x))\dx \right| \end{equation*} and will only be zero if the interpolant matches the integrand on the entire interval ($g_n(x) = \hat{f}(x)$, $x \in [-1,1]$). In such a case, the integral will also be computed exactly. The error \eqn{new_err} is therefore, assuming we can evaluate it reliably, not susceptible to ``accidentally small'' values. Since we do not have an exact representation of the integrand $f(x)$, we cannot compute (\ref{eqn:new_err}) exactly. We can, however, generate a first trivial error estimate using two interpolations $g^{(1)}_{n_1}(x)$ and $g^{(2)}_{n_2}(x)$ of different degree where $n_2 > n_1$.
If we assume that $g^{(2)}_{n_2}(x)$ interpolates the integrand $f(x)$ much better than does $g^{(1)}_{n_1}(x)$, then we can assume that \begin{equation} \label{eqn:new_err1} \hat{f}(x) - g^{(1)}_{n_1}(x) \approx g^{(2)}_{n_2}(x) - g^{(1)}_{n_1}(x) \end{equation} that is, that $\hat{f}(x)$ on the left-hand side can be replaced with $g^{(2)}_{n_2}(x)$, similarly to Piessens' and Patterson's error estimates (see \sect{piessens1973}), in which the estimate from a higher-degree rule is used to estimate the error of a lower-degree rule. Taking the $L_2$-norm of the right-hand side of (\ref{eqn:new_err1}) as an estimate for that of the left-hand side, we obtain \begin{equation} \label{eqn:new_err2} \varepsilon_1 = \frac{b-a}{2}\| \mathbf c^{(1)} - \mathbf c^{(2)} \| \end{equation} where $\mathbf c^{(1)}$ and $\mathbf c^{(2)}$ are the vectors containing the coefficients of the interpolants $g^{(1)}_{n_1}(x)$ and $g^{(2)}_{n_2}(x)$ respectively and $c^{(1)}_i = 0$ for $i \geq n_1$. This error estimate, however, is only valid for the lower-degree interpolation $g^{(1)}_{n_1}(x)$ and would over-estimate the error of the higher-degree interpolation $g^{(2)}_{n_2}(x)$, which we would use to compute the integral. For a more refined error estimate, we could consider the interpolation error \begin{equation} \label{eqn:new_interperr} \hat{f}(x) - g_n(x) = \frac{ \hat{f}^{(n)}(\xi_x) }{n!} \pi_{n}(x) , \quad \xi_x \in [-1,1] \end{equation} for any $n$ times continuously differentiable $f(x)$ where $\xi_x$ depends on $x$ and where $\pi_{n}(x)= \prod_{i=1}^n (x - x_i)$ is the Newton polynomial over the $n$ nodes of the quadrature rule. Taking the $L_2$-norm on both sides of \eqn{new_interperr}, we obtain \begin{equation*} \varepsilon = \left[ \int_{-1}^1 \left( g_n(x) - \hat{f}(x) \right)^2 \dx \right]^{1/2} = \left[ \int_{-1}^{1} \left(\frac{\hat{f}^{(n)}(\xi_x)}{n!}\right)^2 \pi^2_n(x) \dx \right]^{1/2}.
\end{equation*} Since $\pi_n^2(x)$ is, by definition, non-negative for any $x$, we can apply the mean value theorem of integration and extract the derivative, resulting in \begin{equation} \label{eqn:new_interperr2} \varepsilon = \left[ \int_{-1}^1 \left( g_n(x) - \hat{f}(x) \right)^2 \dx \right]^{1/2} = \left|\frac{\hat{f}^{(n)}(\xi)}{n!}\right|\left[ \int_{-1}^{1} \pi^2_n(x) \dx \right]^{1/2}, \quad \xi \in [-1,1]. \end{equation} If we represent the polynomial $\pi_n(x)$ analogously to $g_n(x)$, as $\pi_n(x) = \sum_{k=0}^n b_k p_k(x)$, then we can compute its $L_2$-norm as $\| \mathbf b \|$, where $\mathbf b$ is the vector of the $n+1$ coefficients\footnote{\citeN{ref:Higham1988} shows how the coefficients of a Newton-like polynomial can be computed relative to any orthogonal base.} $b_k$. Therefore, of the terms on the right-hand side of (\ref{eqn:new_interperr2}), only the $n$th derivative of the integrand is unknown. Given two interpolations of the integrand, $g^{(1)}_n(x)$ and $g^{(2)}_n(x)$, of the same degree yet not over the same set of nodes, if we assume that the derivative $\hat{f}^{(n)}(\xi)$ is constant for $\xi \in [-1,1]$\footnote{ This assumption is a stronger form of the ``sufficiently smooth'' condition, which we will use only to construct the error estimator.}, we can extract the unknown derivative as follows: \begin{equation} g^{(1)}_n(x) - g^{(2)}_n(x) = \frac{\hat{f}^{(n)}(\xi)}{n!} \left( \pi^{(2)}_n(x) - \pi^{(1)}_n(x) \right) \label{eqn:new_fdn} \end{equation} where $\pi^{(1)}_n(x)$ and $\pi^{(2)}_n(x)$ are the $n$th Newton polynomials over the nodes of $g^{(1)}_n(x)$ and $g^{(2)}_n(x)$ respectively.
Taking the $L_2$-norm on both sides of (\ref{eqn:new_fdn}), we obtain \begin{equation} \left| \frac{\hat{f}^{(n)}(\xi)}{n!} \right| = \frac{\left\| \mathbf c^{(1)} - \mathbf c^{(2)} \right\|}{\left\| \mathbf b^{(1)} - \mathbf b^{(2)} \right\|} \label{eqn:new_fdn2} \end{equation} from which we can construct an error estimate for either interpolation \begin{equation} \left[\int_{-1}^{1}\left(g^{(k)}_n(x) - \hat{f}(x)\right)^2\dx\right]^{1/2} = \frac{\left\| \mathbf c^{(1)} - \mathbf c^{(2)} \right\|}{\left\| \mathbf b^{(1)} - \mathbf b^{(2)} \right\|} \| \mathbf b^{(k)} \|, \quad k\in \{1,2\}. \label{eqn:new_refined} \end{equation} Note that for this estimate, we have made the assumption that the $n$th derivative is constant. We cannot verify this directly, but we can verify whether our computed $|\frac{\hat{f}^{(n)}(\xi)}{n!}|$ (\ref{eqn:new_fdn2}) actually satisfies (\ref{eqn:new_interperr}) for the nodes of the first interpolation by testing \begin{equation} \label{eqn:err_test} \left| g^{(2)}_n(x_i) - \hat{f}(x_i) \right| \leq \vartheta_1 \left|\frac{\hat{f}^{(n)}(\xi)}{n!}\right| \left|\pi^{(2)}_n(x_i)\right|, \quad i=1 \dots n \end{equation} where the $x_i$ are the nodes of the interpolation $g^{(1)}_n(x)$ and the value $\vartheta_1 \geq 1$ is an arbitrary relaxation parameter. If this condition is violated for any of the $x_i$, then we use the un-scaled estimate as in \eqn{new_err2}. In practice, we can implement this error estimator in a recursive adaptive quadrature by first computing the $n$ coefficients $c_k$ of $g_n(x)$ in the interval $[a,b]$. The $n+1$ coefficients $b_k$ of the $n$th Newton polynomial over the nodes of the basic quadrature rule can be pre-computed. For the first interval, no error estimate is computed.
The interval is bisected and for the recursion on the left half of $[a,b]$, we compute\footnote{ Note that to compute $\mathbf b^\mathsf{old}$ we would actually need to extend $\mathbf T^{(\ell)}$ and, since $\mathbf{b}^\mathsf{old}$ and $\mathbf{b}$ are not in the same interval, we have to scale the coefficients of $\mathbf{b}^\mathsf{old}$ by $2^n$ so that Equation~\ref{eqn:new_interperr} holds for $g^{(2)}_n(x)$ in the sub-interval.} \begin{equation*} \mathbf c^\mathsf{old} = \mathbf T^{(\ell)} \mathbf c, \quad \mathbf b^\mathsf{old} = 2^n \mathbf T^{(\ell)} \mathbf b. \end{equation*} Inside the left sub-interval $[a,(a+b)/2]$, we then evaluate the new coefficients $\mathbf c$. Given the old and new coefficients, we then compute the error estimate \begin{equation} \label{eqn:new_eps2} \varepsilon_2 = \frac{(b-a)}{2} \frac{\| \mathbf c - \mathbf c^\mathsf{old} \|}{\|\mathbf b - \mathbf b^\mathsf{old}\|} \|\mathbf b\|. \end{equation} \section{Non-Linear Error Estimators} \label{sec:non-linear} In the previous section, we considered error estimators that used only linear combinations of function values inside a single interval. In this section, we will consider methods that use function values or quadratures from one or more intervals or sub-intervals and which combine these values {\em non-linearly} to estimate the integration error. 
\input{deboor1971.tex} \input{rowland1972.tex} \input{laurie1983.tex} \input{dedoncker1978.tex} \subsection{Summary} Although most of the non-linear error estimators presented in this section differ significantly in their approach, they all rely on the same basic principle, namely the assumption that, for any quadrature rule $\mathsf{Q}^{(m)}[a,b]$, for sufficiently smooth $f(x)$ in the interval $x \in [a,b]$, the error can be written as \begin{equation} \label{eqn:extrap_err} \mathsf{Q}^{(m)}[a,b] - \intfx{a}{b} \approx \kappa h^\alpha, \quad h = \frac{b-a}{m} \end{equation} where $\kappa$ depends on the basic quadrature rule $\mathsf{Q}$ and the higher derivatives of the integrand and $\alpha$ is the order of the error. In the most general case, \eqn{extrap_err} has three unknowns, namely the actual integral $I=\intfx{a}{b}$, the scaling $\kappa$ and the order $\alpha$ of the error. The order $\alpha$ is usually assumed to be the order of the quadrature rule, but in the presence of singularities or discontinuities, this is not always the case. The three unknowns may be resolved using three successive approximations of increasing multiplicity: \begin{eqnarray} \mathsf{Q}^{(m)} & = & I + \kappa h^\alpha \label{eqn:extrap_qm} \\ \mathsf{Q}^{(2m)} & = & I + \kappa h^\alpha 2^{-\alpha} \label{eqn:extrap_q2m} \\ \mathsf{Q}^{(4m)} & = & I + \kappa h^\alpha 4^{-\alpha} \label{eqn:extrap_q4m} \end{eqnarray} We can subtract \eqn{extrap_qm} from \eqn{extrap_q2m} to isolate the error term \begin{equation} \kappa h^\alpha = \frac{\mathsf{Q}^{(m)} - \mathsf{Q}^{(2m)}}{1 - 2^{-\alpha}} = \frac{2^\alpha\left(\mathsf{Q}^{(m)} - \mathsf{Q}^{(2m)}\right)}{2^\alpha-1}. 
\label{eqn:extrap_kappa} \end{equation} Re-inserting this expression into \eqn{extrap_q2m}, we obtain \begin{equation*} I = \mathsf{Q}^{(2m)} - \frac{\mathsf{Q}^{(m)} - \mathsf{Q}^{(2m)}}{2^\alpha - 1} \end{equation*} which is the linear extrapolation used in the Romberg T-table (for even integer values of $\alpha$) and also used by de~Boor's {\tt CADRE} (see \sect{deboor1971}, \eqn{deboor_cautious}), where the $\mathsf{Q}^{(m)}$, $\mathsf{Q}^{(2m)}$ and $\mathsf{Q}^{(4m)}$ are the T-table entries $T_{\ell-2,i}$, $T_{\ell-1,i}$ and $T_{\ell,i}$ respectively, for an unknown $\alpha$. Inserting \eqn{extrap_kappa} into \eqn{extrap_q2m} and \eqn{extrap_q4m} and taking the difference of the two, we can extract \begin{equation} 2^{\alpha} = \frac{\mathsf{Q}^{(m)} - \mathsf{Q}^{(2m)}}{\mathsf{Q}^{(2m)} - \mathsf{Q}^{(4m)}} \label{eqn:extrap_alpha} \end{equation} which is the ratio $R_i$ used by de~Boor (\eqn{deboor_ratio}) to approximate the order of the error ($2^{\alpha+1}$ therein). Inserting both \eqn{extrap_kappa} and \eqn{extrap_alpha} into the last estimate, \eqn{extrap_q4m}, we obtain \begin{eqnarray} I = \mathsf{Q}^{(4m)} - \frac{\left(\mathsf{Q}^{(2m)} - \mathsf{Q}^{(4m)}\right)^2}{\mathsf{Q}^{(m)} - 2\mathsf{Q}^{(2m)} + \mathsf{Q}^{(4m)}} \end{eqnarray} which is one step of the well-known Aitken $\Delta^2$-process \cite{ref:Aitken1926}. The approach taken by Rowland and Varol (see \sect{rowland1972}) is almost identical, except that, instead of using the exact integral, they use \begin{equation} \mathsf{Q}^{(m)} \ = \ \mathsf{Q}^{(2m)} + \kappa h^\alpha, \quad \mathsf{Q}^{(2m)} \ = \ \mathsf{Q}^{(4m)} + \kappa h^\alpha 2^{-\alpha}, \quad \mathsf{Q}^{(4m)} \ = \ I + \kappa h^\alpha 4^{-\alpha} \end{equation} to solve for $\kappa h^\alpha$, $2^{-\alpha}$ and the exact integral $I$, resulting in their simpler error estimate (see \eqn{rowland_err}).
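These extrapolation steps can be illustrated on an actual quadrature sequence. In the following sketch (ours, with assumed names; the compound trapezoidal rule plays the role of $\mathsf{Q}$), the order is recovered from the ratio of successive differences and one $\Delta^2$-step removes the dominant error term:

```python
import math

def trapezoid(f, a, b, m):
    """Compound trapezoidal rule Q^(m) with m panels over [a, b]."""
    h = (b - a) / m
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, m)) + f(b) / 2)

f, a, b = math.exp, 0.0, 1.0
exact = math.e - 1.0
qm, q2m, q4m = (trapezoid(f, a, b, m) for m in (4, 8, 16))

# Observed order of the error: the ratio of successive differences
# of the estimates is approximately 2^alpha (here, alpha = 2).
order = math.log2((qm - q2m) / (q2m - q4m))

# One Aitken Delta^2 step, eliminating the dominant error term.
aitken = q4m - (q2m - q4m) ** 2 / (qm - 2 * q2m + q4m)
```

For this smooth integrand the observed order is close to 2, and the accelerated value is more than an order of magnitude closer to the exact integral than $\mathsf{Q}^{(4m)}$ itself.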
In a similar vein, Laurie (see \sect{laurie1983}) uses the four equations \begin{equation} \begin{array}{lcl} \mathsf{Q}_\alpha^{(1)} = I + \kappa_\alpha (b-a)^{\alpha+2}, & & \mathsf{Q}_\alpha^{(2)} = I + \kappa_\alpha (b-a)^{\alpha+2} 2^{-(\alpha+2)}, \\ \mathsf{Q}_\beta^{(1)} = I + \kappa_\beta (b-a)^{\beta+2}, & & \mathsf{Q}_\beta^{(2)} = I + \kappa_\beta (b-a)^{\beta+2} 2^{-(\beta+2)} \end{array}\label{eqn:extrap_laurie} \end{equation} which are, however, under-determined, since there are 5 unknowns ($\kappa_\alpha$, $\kappa_\beta$, $\alpha$, $\beta$ and $I$). To make the system tractable, Laurie therefore adds the conditions in \eqn{laurie_conds}, obtaining the inequality in \eqn{laurie_ineq} from which he constructs his error estimate. Similarly, Favati, Lotti and Romani use the equations \begin{equation*} \begin{array}{lcl} \mathsf{Q}_\alpha = I + \kappa_\alpha (b-a)^{\alpha+2}, & & \mathsf{Q}_\beta = I + \kappa_\beta (b-a)^{\beta+2}, \\ \mathsf{Q}_\gamma = I + \kappa_\gamma (b-a)^{\gamma+2}, & & \mathsf{Q}_\delta = I + \kappa_\delta (b-a)^{\delta+2}, \end{array} \end{equation*} which have 8 unknowns, and which can be solved together with the four conditions in \eqn{favati_rel}. Laurie and Venter's error estimator (see \sect{rowland1972}), although similar in form to that of Rowland and Varol, differs in that the estimates \begin{equation*} \mathsf{Q}_1^{(1)} = I + \kappa_1 (b-a)^3, \ \mathsf{Q}_3^{(1)} = I + \kappa_3 (b-a)^5, \ \dots, \ \mathsf{Q}_{255}^{(1)} = I + \kappa_{255} (b-a)^{257} \end{equation*} form a set of $n$ equations in $n+1$ unknowns ($I$ and the $n$ different $\kappa_i$, assuming, for simplicity, that the actual order of the error is that of the quadrature rule) which can {\em not} be solved as above.
In summary, these methods, \ie Romberg's method, the Aitken $\Delta^2$-process and Rowland and Varol's extrapolation, take a sequence of initial estimates $\mathsf{Q}^{(m)}$, $\mathsf{Q}^{(2m)}$, $\mathsf{Q}^{(4m)}$, $\dots$ and use them to create a sequence of {\em improved} estimates by removing the dominant error term as per \eqn{extrap_err}. These approaches can, of course, be re-applied to the resulting sequence, thus eliminating the next dominant error term, and so on. This is exactly what is done in the columns of the Romberg T-table and in successive re-applications of the Aitken $\Delta^2$-process. Instead of successively and iteratively removing the dominant term in the error, we could also simply model the error directly as the sum of several powers \begin{equation} \label{eqn:extrap_err2} \mathsf{Q}^{(m)} - I \approx \kappa_1 h^{\alpha_1} + \kappa_2 h^{\alpha_2} + \dots + \kappa_N h^{\alpha_N}, \quad h = \frac{b-a}{m} \end{equation} Since this equation has $2N+1$ unknowns (the $N$ constants $\kappa_i$, the $N$ exponents $\alpha_i$ and the exact integral $I$), we need $2N+1$ estimates to solve for them: \begin{eqnarray} \mathsf{Q}^{(m)} & = & I + \kappa_1 h^{\alpha_1} + \kappa_2 h^{\alpha_2} + \dots + \kappa_N h^{\alpha_N} \nonumber \\ \mathsf{Q}^{(2m)} & = & I + \kappa_1 h^{\alpha_1}2^{-\alpha_1} + \kappa_2 h^{\alpha_2}2^{-\alpha_2} + \dots + \kappa_N h^{\alpha_N}2^{-\alpha_N} \nonumber \\ & \vdots & \nonumber \\ \mathsf{Q}^{(2^{2N}m)} & = & I + \kappa_1 h^{\alpha_1}2^{-2N\alpha_1} + \kappa_2 h^{\alpha_2}2^{-2N\alpha_2} + \dots + \kappa_N h^{\alpha_N}2^{-2N\alpha_N} \end{eqnarray} This non-linear system of equations does not appear to be an easy thing to solve, yet in \cite{ref:Kahaner1972} Kahaner shows that, if we are only interested in $I$, this is {\em exactly} what the $\epsilon$-Algorithm \cite{ref:Wynn1956} does.
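A compact implementation of the $\epsilon$-Algorithm takes only a few lines; the sketch below (an illustration under the assumption of a smooth integrand, not Wynn's or Kahaner's original code) applies it to trapezoidal estimates of doubling multiplicity:

```python
import math

def wynn_epsilon(seq):
    """Wynn's epsilon algorithm: return the last entry of the highest
    even column of the epsilon table built from the sequence."""
    prev = [0.0] * (len(seq) + 1)          # column eps_{-1}: all zeros
    cur = list(seq)                        # column eps_0: the sequence itself
    best, k = cur[-1], 0
    while len(cur) > 1:
        nxt = [prev[i + 1] + 1.0 / (cur[i + 1] - cur[i])
               for i in range(len(cur) - 1)]
        prev, cur, k = cur, nxt, k + 1
        if k % 2 == 0:                     # only even columns estimate the limit
            best = cur[-1]
    return best

def trapezoid(f, a, b, m):
    """Compound trapezoidal rule Q^(m)[a,b] with m panels."""
    h = (b - a) / m
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, m)))

# exact for a purely geometric error Q^(n) - I = kappa * r^n ...
assert abs(wynn_epsilon([2.0, 1.5, 1.25]) - 1.0) < 1e-12

# ... and a strong accelerator for the trapezoidal sequence m = 1, 2, 4, 8, 16
seq = [trapezoid(math.exp, 0.0, 1.0, 2**k) for k in range(5)]
I_eps = wynn_epsilon(seq)
```

Note that the extrapolated value $I_{\epsilon}$ is far more accurate than the last plain estimate in the sequence, even though the algorithm never needs the orders $\alpha_i$ explicitly.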
For an even number of approximations $2N$, the algorithm computes the same approximation as in \eqn{extrap_err2}, yet only over the first $N-1$ terms, ignoring the first estimate $\mathsf{Q}^{(m)}$. Keeping \eqn{extrap_err2} in mind, de~Doncker's error estimate (see \sect{dedoncker1978}, \eqn{dedoncker_err}) then reduces to \begin{equation*} \varepsilon_i \approx 2\left| \kappa_N h^{\alpha_N} \right| \end{equation*} for $N = \lfloor i/2 \rfloor$, assuming that, ideally, for all estimates the right-most even column of the epsilon-table was used. Generally speaking, we can say that all the error estimators presented herein assume that the error of a quadrature rule $\mathsf{Q}^{(m)}[a,b]$ behaves as in \eqn{extrap_err2}. The unknowns in this equation ($I=\intfx{a}{b}$, $\kappa_i$ and $\alpha_i$) can be solved for using several approximations $\mathsf{Q}^{(2^nm)}$. In all these methods, the error estimate is taken to be {\em the difference between the last estimate and the extrapolated value $I$ of the integral}. In the case of de~Boor's {\tt CADRE}, this is the difference between the last two entries in the bottom row of the modified T-table, and for Rowland and Varol (\sect{rowland1972}), Laurie (\sect{laurie1983}) and Favati, Lotti and Romani (\sect{laurie1983}), this is $\mathsf{Q}^{(4m)}-I$, $\mathsf{Q}_\alpha^{(2)}-I$ and $\mathsf{Q}_\alpha-I$ respectively. If the exponents $\alpha_i$ are known or assumed to be known, the resulting system is a {\em linear} system of equations. This is what Romberg's method does quite explicitly and what many of the error estimators in \sect{linear} do implicitly. If the exponents $\alpha_i$ are {\em not} known, the resulting system of equations is {\em non-linear} and can therefore only be solved non-linearly. The non-linear methods discussed here are therefore a conceptual extension of the linear error estimators presented earlier.
As such, they are subject to the same problem of the difference between two estimates being {\em accidentally small} in cases where the assumptions in \eqn{extrap_err} or \eqn{extrap_err2} do not actually hold, as is the case for singular or discontinuous integrands. The different error estimation techniques in this section differ only in the depth $N$ of the expansion and the use of additional constraints when the resulting system of equations is under-determined. \section*{Acknowledgments} The author would like to thank E.H.A.~Venter, F.J.~Smith, E.~de~Doncker, P.~Davis, T.O.~Espelid and R.~Jaffe for their help in retrieving and understanding some of the older or less accessible publications included in this review as well as G.V.~Milovanovic, B.~Bojanov, G.~Nikolov, A.~Cvetkovic and G.~Gonnet for the helpful discussions on quadrature, mathematics and everything else. Very special thanks go to W.~Gander and J.~Waldvogel, without whose immeasurable help this review wouldn't have gotten anywhere. \subsection{Rowland and Varol's Modified Exit Procedure} \label{sec:rowland1972} \label{sec:venter2002} In 1972, \citeN{ref:Rowland1972} publish an error estimator based on Simpson's compound rule. In their paper, they show that the ``{\em stopping inequality}''\index{stopping inequality} \begin{equation*} \left|\mathsf{S}^{(m)}[a,b] - \mathsf{S}^{(2m)}[a,b]\right| \ge \left|\mathsf{S}^{(2m)}[a,b] - \intfx{a}{b}\right| \end{equation*} is valid if $f^{(4)}(x)$ is of constant sign for $x \in [a,b]$. They also show that under certain conditions there exists an integer $m_0$ such that the inequality is valid for all $m \ge m_0$. They note that for the compound Simpson's rule \begin{equation} \label{eqn:rowland_ratio} \frac{\mathsf{S}^{(m)}[a,b] - \mathsf{S}^{(2m)}[a,b]}{\mathsf{S}^{(2m)}[a,b] - \mathsf{S}^{(4m)}[a,b]} \approx 2^{2q} \end{equation} holds, where usually $q=2$.
This condition is used to test if $m$ is indeed large enough, much in the same way as de~Boor's {\tt CADRE} does (see \eqn{deboor_ratio}) to test for regularity. If this condition is more or less satisfied\footnote{Since their paper does not include an implementation, no specification is given as to how close to a power of two this ratio has to be.} for any given $m$, then they suggest using \begin{equation} \label{eqn:rowland_err} \varepsilon_k = \frac{\left(\mathsf{S}^{(2m)}[a_k,b_k] - \mathsf{S}^{(4m)}[a_k,b_k]\right)^2} {\left|\mathsf{S}^{(m)}[a_k,b_k] - \mathsf{S}^{(2m)}[a_k,b_k]\right|}. \end{equation} This error estimate can be interpreted as follows: Let us assume that \begin{equation} \label{eqn:rowland_e} e_m = \left| \mathsf{S}^{(m)}[a,b] - \mathsf{S}^{(2m)}[a,b] \right| \end{equation} is an estimate of the error of $\mathsf{S}^{(m)}[a,b]$. If we assume that the error estimates decrease at a constant rate $r$ when the multiplicity $m$ is doubled, then we can {\em extrapolate} the error of $\mathsf{S}^{(4m)}[a,b]$ using \begin{equation*} e_{2m} = r e_m \quad \Longrightarrow \quad r = \frac{e_{2m}}{e_m}, \quad e_{4m} = r e_{2m} \quad \Longrightarrow \quad e_{4m} = \frac{e_{2m}^2}{e_m} \end{equation*} which is exactly what is computed in \eqn{rowland_err}. A similar approach is taken by \citeN{ref:Venter2002}, where instead of using compound Simpson's rules of increasing multiplicity, they use a sequence of stratified quadrature rules\index{stratified quadrature rules}, described by \citeN{ref:Laurie1992}.
In their algorithm, the sequence of quadratures of increasing degree $\mathsf{Q}_1[a,b]$, $\mathsf{Q}_3[a,b]$, $\mathsf{Q}_7[a,b]$, $\dots$, $\mathsf{Q}_{2^i-1}[a,b]$ is computed and the differences of pairs of these rules are used to extrapolate the error of the highest-order ($i$th) quadrature rule: \begin{equation} \label{eqn:venter_err} \varepsilon_k = \frac{E_{i-1}^2}{E_{i-2}}, \quad E_i = \left| \mathsf{Q}_{2^i-1}[a,b] - \mathsf{Q}_{2^{i+1}-1}[a,b] \right|. \end{equation}
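The behaviour assumed by these estimators is easy to check numerically; the following minimal sketch (with $e^x$ as an illustrative integrand whose $f^{(4)}$ has constant sign, an assumption not fixed by the original text) verifies Rowland and Varol's stopping inequality, their ratio test and the extrapolated error $e_{4m} = e_{2m}^2/e_m$:

```python
import math

def simpson(f, a, b, m):
    """Compound Simpson rule S^(m)[a,b] with m subintervals (2m panels)."""
    h = (b - a) / (2 * m)
    odd = sum(f(a + (2 * i + 1) * h) for i in range(m))
    even = sum(f(a + 2 * i * h) for i in range(1, m))
    return h / 3 * (f(a) + f(b) + 4 * odd + 2 * even)

f, I_exact = math.exp, math.e - 1.0       # f'''' = exp > 0 on [0,1]

Sm, S2m, S4m = (simpson(f, 0.0, 1.0, m) for m in (2, 4, 8))

# stopping inequality: |S^(m) - S^(2m)| >= |S^(2m) - I|
assert abs(Sm - S2m) >= abs(S2m - I_exact)

# ratio test: the ratio should be close to 2^(2q) = 16 for q = 2
ratio = (Sm - S2m) / (S2m - S4m)

# extrapolated error e_4m = e_2m^2 / e_m for S^(4m)
eps = (S2m - S4m) ** 2 / abs(Sm - S2m)
true_err = abs(S4m - I_exact)
```

For this smooth integrand the ratio comes out close to $16$ and the extrapolated error tracks the true error of $\mathsf{S}^{(4m)}$ to well within a factor of two.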
{ "redpajama_set_name": "RedPajamaArXiv" }
5,910
.. DVHB Hybrid documentation master file, created by sphinx-quickstart on Tue Jan 31 14:10:02 2017. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. .. include:: ../README.rst Documentation contents ---------------------- .. toctree:: :maxdepth: 3 tutorial history Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search`
{ "redpajama_set_name": "RedPajamaGithub" }
2,447
Campli Usi is een bestuurslaag in het regentschap Pidie van de provincie Atjeh, Indonesië. Campli Usi telt 370 inwoners (volkstelling 2010). Plaats in Atjeh
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,786
{"url":"https:\/\/symbiosisonlinepublishing.com\/biosensors-biomarkers-diagnostics\/biosensors-biomarkers-diagnostics09.php","text":"Research Article Open Access\nPreparation of Biocompatible Palladium-Fe3O4 Nanoparticles\/Multiwalled Carbon Nanotubes Composite and its Electrocatalytic Activity towards Determination of Cholesterol on Screen Printed Electrode\nRevanasiddappa Manjunatha1, Gurukar S. Suresh1, 2*, Jose S. Melo3, 4*, Jakkid Sanetuntikul5 and Sangaraju Shanmugam5\n1Chemistry Research Centre, S. S. M. R. V. Degree College, Jayanagar, Bangalore - 560041, India\n2Department of Chemistry and Research Centre, N.M.K.R.V. College for Women, Jayangar, Bangalore -560011, India\n3Nuclear Agriculture and Biotechnology Division, Bhabha Atomic Research Centre, Trombay, Mumbai - 400085, India\n4Homi Bhabha National Institute, Anushakti Nagar, Mumbai- 400094, India\n5Department of Energy Systems and Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu 711-873, Republic of Korea\n*Corresponding author: Jose Savio Melo, Nuclear Agriculture and Biotechnology Division, Bhabha Atomic Research Centre, Trombay and HomiBhabha National Institute, Anushakti Nagar, Mumbai- 400 085, India, Telephone: 91-22-25592760; Fax no: 91-22-25505151.; E-mail: @\nGurukar S. Suresh, Department of Chemistry and Research Centre, N.M.K.R.V. College for Women, Jayangar, Bangalore -560011, India, Telephone: 91 \u201380 \u2013 26654920;Fax no: 91 \u2013 80 \u2013 22453665 ; E-mail: @\nReceived: 03 October, 2016; Accepted: 04 January, 2017; Published: 10 February, 2017\nCitation: Jose S. Melo (2017) Preparation of Biocompatible Palladium-Fe3O4 Nanoparticles\/Multiwalled Carbon Nanotubes Composite and its Electrocatalytic Activity towards Determination of Cholesterol on Screen Printed Electrode. J of Biosens Biomark Diagn 2(1):1-10. 
DOI: 10.15226\/2575-6303\/2\/1\/00109\nAbstract\nA simple and facile microwave method was adopted to prepare Fe3O4 and Pd- Fe3O4 nanoparticles, which possess the mean particle diameter of 10 nm and 90 nm, respectively. Formation of Fe3O4 and Pd- Fe3O4 nanoparticles were confirmed from Powder X-ray diffraction, Transmission electron microscopy, Energy-dispersive X-ray spectroscopy, and FT-Infra red spectroscopy techniques. Negatively charged mutliwalled carbon nanotubes (COO--MWCNTs) were wrapped with positively charged poly (Diallyldimetheylammonium Chloride) (PDDA) followed by coating with Pd-Fe3O4 nanoparticle to get (Pd-Fe3O4\/PDDA\/COO- -MWCNTs) composite. This composite was used for the determination of cholesterol by using Cholesterol Oxidase (ChOx) enzyme on Screen Printed Electrode (SPE). (Pd- Fe3O4\/PDDA\/COO- -MWCNTs) composite provides biocompatible microenvironment for the ChOx to exhibit Direct Electron Transfer (DET) on electrode surface. A well defined redox peak at -0.365 and -0.443 V was observed, corresponding to the DET of the FAD\/FADH2 of ChOx. Enzyme modified SPE was characterized by cyclic voltammetry and electrochemical impedance spectroscopy by means of Fe(CN)6 3-\/4as an electrochemical probe. The linear range of the enzyme modified SPE was found to be 10-80 \u03bcm (R=0.9972) with detection limit of 1 \u03bcm of cholesterol. 
The sensitivity of the enzyme modified SPE was found to be 10.45 \u03bcA \u03bcM-1 cm-2 for the determination of cholesterol, common interferents such as ascorbic acid, uric acid and glucose did not cause any interference because of low operating potential.\n\nKeywords: Pd-Fe3O4; Nanoparticles; Screen Printed Electrode; Cholesterol Oxidase\nIntroduction\nPreparation of Biocompatible Palladium-Fe3O4 Nanoparticles\/Multiwalled Carbon Nanotubes Composite and its Electrocatalytic Activity towards Determination of Cholesterol on Screen Printed Electrode\nThe magnetic nanoparticles in general, iron oxide (Fe3O4) have been attracted an increasing interest in the development of nanostructured materials and nanotechnology in biotechnology and medicine [1]. The main advantage of magnetic nanoparticles is that, they can easily and rapidly separate from their matrix by an external magnetic field. Some of the common features associated with the Fe3O4 nanoparticles are good biocompatibility, strong superparamagnetic property, high surface area, low toxicity and ease of preparation [2,3]. Thus, Fe3O4 nanoparticles have been used in a wide range of potential applications such as, electrochemical sensors\/biosensors, catalysis, immunoassays, data storage [4-7]. Various methods have been used for the synthesis of magnetic nanoparticles which includes hydrothermal synthesis, co-precipitation, sol-gel method, microwave irradiation method [8-11]. The later method has significant advantage with respect to higher reaction rates and product yields in a shorter period of time.\n\nCarbon nanotubes are one of the new kinds of carbon material, discovered in the last decades of the 20th century [12]. Researchers have explored various potential characteristics of CNTs, which could be applicable to various fields [13]. It was found that CNTs have excellent electronic conductivity, mechanical strength, chemical stability and unique structural properties [14]. 
However, the main drawbacks exist in the processibility of CNTs in solution, in which they precipitate into ropes or bundles due to strong Van der Waals interactions [15]. To overcome this problem, surface modifications have been employed, such as chemical functionalization using strong acids, polymer wrapping of CNTs [16,17]. In later method i.e. chemical functionalization of CNTs using strong acid results in partial oxidation of the carbon atoms to produce oxygen containing groups such as carboxylic groups, especially in the open ends of CNTs [18]. These groups are negatively charged in the aqueous solution and can interact with positively charged poly electrolytes [19].\n\nCholesterol is one of the most important analyte in clinical analysis, because its assay is important for diagnosis and prevention of a numerous clinical disorders such as, hypertension, cerebral thrombosis, arteriosclerosis and coronary heart disease [20]. Recent studies explored that cholesterol plays a vital role in the brain synapses and also in the immune system including protection against cancer. In earlier days, cholesterol was determined by using non-enzymatic spectrophotometric techniques by using colored substances [21]. However, this technique suffers from low specificity, instability of reagents and high cost. These can be effectively addressed by using enzymatic cholesterol biosensor. Some of the advantages of enzymatic cholesterol biosensors are specificity, simplicity, rapidness and cost effectiveness [22]. The most commonly used enzyme for the construction of cholesterol biosensor is cholesterol oxidase. ChOx is a Flavin-Adenine-Dinucleotide (FAD) containing flavoenzyme. 
In the presence of oxygen, ChOx catalyzes two reactions; oxidation of cholesterol to cholest-5-en-3-one and subsequently the isomerization to cholest-4-en-3-one.\n\nDirect electrochemistry of redox enzyme systems has gained increasing interest both for the study of the electron transport proteins as well as development of third generation reagent less electrochemical biosensors [23,24]. However, direct electron transfer between redox enzyme and electrode is generally difficult to observe due to several factors. Such as, enzyme active sites are deeply embedded in protein matrix, resulting in a long distance between active sites and underlying electrode. In addition, conformational changes or denaturation of redox enzyme often occur while immobilization of enzyme onto the electrode surface. Thus, to obtain DET, many techniques have been developed such as layer-by-layer technique, covalent binding using cross linkers, physical adsorption [25-27]. Physical adsorption method involves van der Waals forces, ionic binding or hydrophobic forces. The main advantage of this method is that it is simple and can be used under mild conditions. It requires only a minimum activation steps resulting in little or no conformational changes of the enzyme or destruction of its active centre [28].\n\nIn the present work, a simple and facile microwave method was adopted to prepare Fe3O4 and Pd- Fe3O4 nanoparticles by following the procedure given in the research article with slight modifications [29]. Formation of both Fe3O4 and Pd-Fe3O4 nanoparticles were confirmed by pXRD, TEM, EDX and FTIR analysis. Negatively charged Pd- Fe3O4 nanoparticles were mixed with positively charged PDDA wrapped MWCNTs to get negatively charged novel composite. This negatively charged novel composite was drop casted on the Screen Printed Electrode (SPE). Positively charged ChOx (pH 4. 0) was immobilized on the composite by physical adsorption method. 
Enzyme modified screen printed electrode showed electrocatalytic activity towards detection of cholesterol. Cyclic Voltammery (CV) and Electrochemical Impedance Spectroscopy (EIS) techniques were used to characterize the enzyme modified screen printed electrode. Above mentioned novel composite provides biocompatible microenvironment for the ChOx to exhibit DET on SPE. To the best of our knowledge, for the first time we have used this novel composite for detection of cholesterol.\nMethods\nReagents\nTritonX-100,Cholesterol, Pd(NH3)4Cl2\u2022H2O, FeSO4\u20227H2O, MWCNTs and PDDA (Mw: 200,000-350,000), were purchased from Sigma Aldrich. Cholesterol oxidase was procured from SRL, India. Screen printed electrodes of diameter 3 mm (0.071 cm2) were purchased from CH instruments (product no. TE 100). Phosphate Buffer Saline of pH 7. 0 (PBS) was prepared from stock solutions of 0.1 M KH2PO4, 0.1 M K2HPO4 and 0.1M KCl. All other chemicals used were of analytical reagent grade unless otherwise mentioned and used without further purification. All solutions were prepared with milli-Q water.\nEnzyme solution preparation\n100 U\/ml ChOx solution was prepared in 0.1 M acetate buffer solution of pH 4. A stock solution of 10 mM cholesterol was prepared by dissolving 0.0967 g of cholesterol in a mixture of 1 mL Triton X-100 and 0.5 mL isopropanol at 65\u00b0C and diluting the resulting solution to 25 mL in a standard flask using hot PBS of pH 7.0. The solution was stored at 4\u00b0C in the dark and was stable for two weeks (until a slight turbidity was observed).\nSynthesis of Fe3O4 and Pd- Fe3O4 nanoparticles\nAs stated earlier, Fe3O4 and Pd- Fe3O4 nanoparticles were prepared according to the procedure described elsewhere [29]. In brief, 2.0 mm FeSO4.7H2O was dissolved in 100 ml distilled water with continuous stirring. The pH of the solution was adjusted to 11 by conc. ammonia solution resulting in the formation of black precipitation. 
This suspension was transferred into microwave oven, in which microwave radiation of high energy was applied for one minute. The product Fe3O4 nanoparticles were separated by centrifugation method. Finally Fe3O4 nanoparticles washed thoroughly with water followed by ethanol and dried at 50\u00b0C in vacuum oven.\n\nSimilarly, Pd- Fe3O4 nanoparticles were prepared by coprecipitation method. 0.5 mm Pd (NH3)4Cl2\u2022H2O, and 2.0 mmol FeSO4\u20227H2O were dissolved in 100 ml distilled water with constant stirring. The pH of the solution was adjusted to about 11 by conc. ammonia solution. Then the reaction mixer was taken into microwave oven for microwave irradiation. The product was isolated by using above descried procedure.\nElectrochemical measurements\nCyclic voltammetry, electrochemical impedance spectroscopy, differential pulse voltammetry experiments were carried out with Versa stat 3 (Princeton Applied Research, USA). The microwave oven used in the present study was a domestic microwave oven (LG, intellowave, MS-2342 AE). Powder X-Ray Diffraction (pXRD) patterns of the samples were recorded using a Philips X\u2019pert Pro diffractometer with CuK\u03b1 (\u03bb = 1.5418 \u01fa). FT-IR experiments were carried out with Bruker Alpha-T FTIR spectrometer (ATR mode, diamond crystal, resolution 4 cm-1, 400-4000 cm-1). The morphology of the samples were analyzed by the Field emission scanning electron microscopy (Hitachi, S4800 FE-SEM) and the Field emission transmission electron microscopy (Hitachi, HF 3600 FE-TEM). TEM experiments were performed at an acceleration voltage of 300 kV. For the elemental mapping study, an Energy Dispersive X-Ray Spectroscopy (EDXS) connected to a TEM was used in scanning mode. TEM samples were prepared by dropping ultrasonically dispersed isopropyl alcohol solution of nanoparticles on a copper grid coated with amorphous carbon film. 
All experiments were done in an electrochemical cell consisting of SPE with an unmodified or modified carbon working electrode, a carbon counter electrode and Ag\/AgCl reference electrode.\nPreparation of enzyme modified screen printed electrode\nAs we already discussed in the introduction, CNTs are insoluble in most of the solvents because they precipitate into ropes are bundles due to strong Vander Waals interactions. To overcome this difficulty, we introduced carboxylic groups on MWCNTs surface by refluxing with conc. nitric acid for 5 h, followed by filtration and washed with pure water until the filtrate become neutral. Finally the product was dried in vacuum at 50\u00b0C [30]. PDDA is a water soluble, quaternary ammonium cationic polyelectrolyte. It is positively charged colloid when dissolved in aqueous solutions [31,32]. The positively charged PDDA polymer can be easily wrapped\/coated on negatively charged MWCNTs [33]. 1 mg\/ ml carboxylated MWCNTs dispersed in 0.2% PDDA solution, ultrasonicalted for 20 min. followed by stirred at 50\u00b0C for 12 h. To this composite 0.5 mg\/ml Pd- Fe3O4 nanoparticles were added and stirred for 12 h at room temperature. In this stage negatively charged Pd- Fe3O4 nanoparticles coated on positively charged MWCNTs wrapped with PDDA as shown in figure 1.\nFigure 1:Schematic representation for the fabrication of enzyme electrode based on the (PdFe3O4\/PDDA\/COO- -MWCNTs) composite.\n2.5 \u03bcl of Pd- Fe3O4\/PDDA\/COO- -MWCNTs composite was drop casted on screen printed electrode, dried at ambient temperature. 5 \u03bcl of positively charged ChOx (pH 4.0) drop casted on composite, dried at 4\u00b0C. Hereafter, the enzyme modified electrode was denoted as ChOx-(Pd- Fe3O4\/PDDA\/COO- -MWCNTs)\/SPE.\nResults and discussion\nCharacterization of Fe3O4 and Pd- Fe3O4 nano particles using pXRD, TEM, EDX and FT-IR techniques\nFigure 2a shows XRD patterns of the synthesized Fe3O4 nanoparticles. 
Diffraction peaks of Fe3O4 nanoparticles were obtained at 30.18\u00b0, 35.54\u00b0, 43.29\u00b0, 53.69\u00b0, 57.29\u00b0 and 62.96\u00b0 corresponding to the index planes (220), (311), (400), (422), (511) and (440) respectively. This is quite identical to pure Fe3O4 nanoparticles and well matched with that of JCPDS no. 82-1533. This revealed that the Fe3O4 nanoparticles have a cubic spinel structure [34,35]. Also, no characteristic peaks of impurities were observed. From the XRD data, the mean particle diameter of Fe3O4 nanoparticles was calculated from index planes (220), (311) and (400) by using Debye-Schererrer\u2019s equation, which is given below.)\nFigure 2:A. XRD patterns of Fe3O4 (a) and Pd- Fe3O4 (b) nanoparticles. B. FT-IR spectrum of Fe3O4 nanoparticles.\nWhere, \u03bb- wavelength of X-ray, \u03b2- full width at half maximum,\u03b8- Bragg\u2019s diffraction angle. The mean particle diameter of Fe3O4 nanoparticles was found to be 10 nm. These results depicts that the Fe3O4 nanoparticles can be rapidly synthesized with 5-10 minutes. Usually most of the Fe3O4 nanoparticles synthetic methods need more than an hour [36].\n\nFormation of Fe3O4 nanoparticles by microwave irradiation can be explained as follows\nWhen ammonia is added to the FeSO4 solution Fe(OH)2 is formed according to the equation 2, which is oxidized to Fe3O4 nanoparticles und the influence of microwave radiation as follows Similarly figure 2b shows XRD patterns of the Fe3O4 nanoparticles decorated on Pd. The addition diffraction peaks at 40.07\u00b0, 46.54\u00b0 and 68.09\u00b0 corresponding to the (111), (2000) and (220) lattice planes were attributed to formation of Pd nanoparticles [37,38]. The mean particle diameter of Pd-Fe3O4 nanoparticles was found to 90 nm according to equation 1. The possible mechanism for the formation of Pd-Fe3O4 shown as below, FT-IR data of Fe3O4 nanoparticles is shown in figure 2B. 
It is noteworthy that in figure 2B, peak at 545 cm-1 is attributed to the Fe-O bond vibration of Fe3O4 [39]. The broad peak at 3346 cm-1 is due to stretching vibrations of \u2013OH bond, which is absorbed by Fe3O4 nano particles. Also, the peak at \u223c1610 cm-1 may be assigned to the deformation vibrations of water molecules trapped onto the magnetic nanoparticles [40]. These results confirm the formation of Fe3O4 nanoparticles. There was no major change in the FT-IR spectrum of Pd-Fe3O4 (results not shown).\nFigure 3:(a) TEM (b) HRTEM images of Fe3O4 nanoparticles. B. (a) TEM (b) HRTEM Images of Pd-Fe3O4 nanoparticles. C. (a) Bright field TEM image Pd-Fe3O4 nanoparticles, the corresponding EDX maps depict the distribution of constituting individual elements within the structure as shown in (b-d). The images correspond to the (b) Fe, (c) Pd, and (d) O.\nFigure 3A, 3B shows TEM images of Fe3O4 and Pd- Fe3O4 nanoparticles respectively. Both Fe3O4 and Pd- Fe3O4 nanoparticles appeared to be almost spherical in shape. TEM image of figure 3B (a) clearly depicts the presence of slightly bigger Pd nanoparticles (dark contrast) surrounded by Fe3O4 nanoparticles, which is evidenced in the difference in contrast. The average diameter of Fe3O4 nanoparticles was found to be 12.8 nm, whereas, the average particle size of Pd- Fe3O4 nanoparticles was found to be 92.4 nm. These results are in well agreement with XRD results shown in figure 3A. The high resolution TEM (HRTEM) images of Fe3O4 and Pd-Fe3O4 nanoparticles are shown in figure 3A (b) & 3B (b), respectively. The distance between two lattice planes of the Fe3O4 crystallite was 0.252 nm, which corresponds to the (311) plane of spinal Fe3O4. In the same way, HRTEM image of Pd- Fe3O4 nanoparticle showed well resolved lattice fringes with a distance of 0.225 nm, corresponding to the (111) plane of cubic Pd. 
Furthermore, the presence of Pd nanoparticles in Fe3O4 was also confirmed using scanning transmission electron microscope (STEM) coupled with elemental mapping analysis (STEM-EDS). Figure 3C (a) shows a bright field TEM image of Pd-Fe3O4 nanoparticles. The corresponding elemental maps are given in figure 3C (b-d). The EDS mapping results suggest that metallic Pd core is surrounded by several Fe3O4 nanoparticles.\nFigure 4:SEM images of unmodified SPE (a), (PDDA\/COO- -MWCNTs)\/ SPE (b), (Pd-Fe3O4\/PDDA\/COO- -MWCNTs)\/SPE(c)andChOx-(Pd-Fe3O4\/ PDDA\/COO- -MWCNTs)\/SPE (d).\nFigure 4 displays the SEM images of unmodified SPE (a), (PDDA\/COO- -MWCNTs)\/SPE (b), (Pd-Fe3O4\/PDDA\/COO- -MWCNTs)\/SPE (c) and ChOx-(Pd- Fe3O4\/PDDA\/COO- -MWCNTs)\/ SPE (d). The morphology of the unmodified SPE exhibits rough surface having different size grains of several microns. PDDA coated MWCNTs composite is uniformly deposited on SPE, which can be seen in image (b). Nano sized Pd-Fe3O4 particles appear as small white grains which are incorporated into (PDDA\/COO- -MWCNTs) composite clearly visible in image (c). Image (d) shows uniform immobilization of ChOx enzyme on (Pd-Fe3O4\/ PDDA\/COO- -MWCNTs) composite. Inset shows the different size granules of ChOx enzyme.\nCharacterization of ChOx-(Pd- Fe3O4\/PDDA\/COO- -MWCNTs)\/SPE using CV and EIS\nFe(CN)6 3-\/4- redox couple is widely used as an electrochemical probe to characterize the property of unmodified\/modified electrodes. Figure 5\nFigure 5:Cyclic voltammograms of unmodified SPE (a), Pd- Fe3O4\/ PDDA\/COO- -MWCNTs)\/SPE (b) and ChOx-(Pd- Fe3O4\/PDDA\/COO- -MWCNTs)\/ SPE (c) in 0.1 M PBS containing 5 mM Fe(CN)64\u2212\/3\u2212 (pH 7.0); scan rate: 50 mVs-1.\nIllustrates, the cyclic volt ammogramms of SPE, (Pd- Fe3O4\/PDDA\/COO- -MWCNTs)\/SPE and ChOx-(Pd-Fe3O4\/PDDA\/ COO- -MWCNTs)\/SPE in 5 mM Fe(CN)63-\/4- containing PBS (pH 7.0) at scan rate of 50 mVs-1. 
Irreversible voltammogram was observed at SPE (curve a, dotted line) with minimal cathodic peak current (Ipc) and anodic peak current (Ipa). The cathodic peak potential (Epc) and anodic peak potential (Epa) were found at 521 mV and -256 mV respectively with peak to peak separation (\u0394Ep) 777 mV. However, SPE modified with (Pd- Fe3O4\/PDDA\/COO- -MWCNTs) composite, a well redox peak of Fe(CN)63-\/4- with the \u0394Ep 48 mV (curve b) was obtained. These results depict that the over potential decreased by 729 mV and around 2 fold of increase in current was observed for 5 mM Fe(CN)63-\/4- at (Pd- Fe3O4\/PDDA\/COO- -MWCNTs)\/SPE than that of SPE. This shows that Electrocatalytic activity of (Pd- Fe3O4\/ PDDA\/COO- -MWCNTs) composite. After immobilization of ChOx enzyme on (Pd- Fe3O4\/PDDA\/COO- -MWCNTs)\/SPE, decreased in peak current was observed (curve c). This could be attributed to macromolecular non conducting enzymatic structure, impede electrochemical redox reaction of Fe(CN)63-\/4 -at electrode surface. This demonstrates that ChOx enzyme successfully immobilized on (Pd- Fe3O4\/PDDA\/COO- -MWCNTs) composite by means of electrostatic attraction.\n\nElctrochemical Impedance Spectroscopy (EIS) is a powerful and sensitive characterization tool for studying the charge transfer process at electrode\/electrolyte interface [41]. Hence, Characterization of SPE, (Pd- Fe3O4\/PDDA\/COO- -MWCNTs)\/SPE and ChOx-(Pd- Fe3O4\/PDDA\/COO- -MWCNTs)\/SPE was further investigated using EIS. EIS was carried out in the presence of 5 mM Fe (CN)64\u2212\/3\u2212as a electrochemical redox probe, in the frequency range of 100 kHz to 0.1 Hz with amplitude of 5 mV as shown in figure 6A The equivalent circuit shown in the inset of figure 6B was used to fit experimental data. The simulated curve of experimental data and best fitting equivalent circuit are shown in the figure 6C. 
The obtained impedance data are shown in Table 1.

The circuit includes the solution resistance (Rs), charge-transfer resistance (Rct), double-layer capacitance (Qdl), Warburg impedance (Zw), Faradaic resistance (Rf) and Faradaic capacitance (Cf). At the bare SPE, a large semicircle with an Rct of 5714 Ω was observed for Fe(CN)6^3−/4− (curve a), which suggests that the unmodified SPE exhibits sluggish and unfavorable Fe(CN)6^3−/4− electron transfer.

Table 1: EIS data of unmodified SPE, (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE and ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE in 5 mM Fe(CN)6^3−/4−

Electrode                               Rs/Ω    n      Q/µF    Rct/Ω   W/Ω
SPE                                     189     0.95   0.49    5714    0.0009
(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE         198     0.97   0.23    0.19    0.006
ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE    196     0.78   14.5    557     0.0016

Figure 6: (A) Nyquist impedance plots of unmodified SPE (a), (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE (b) and ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE (c) in 0.1 M PBS containing 5 mM Fe(CN)6^3−/4− (pH 7.0); frequency range 100 kHz to 0.1 Hz, amplitude 5 mV. (B) The equivalent circuit used to fit the experimental data. (C) Nyquist impedance plots of the experimental data of ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE and the best fit using the equivalent circuit.

However, when the SPE was modified with the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs) composite, the large semicircle was replaced by a nearly straight line with an Rct of 0.19 Ω, as shown in the inset of Figure 6A (curve b). This shows that the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs) composite facilitates fast and favorable electron transfer at the electrode surface.
A small semicircle with an Rct of 557 Ω was observed (curve c) when the ChOx enzyme was immobilized on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE, which illustrates that the non-conducting, macromolecular ChOx was successfully immobilized on the modified SPE and opposes electron transfer at the electrode surface.

Direct electrochemistry of ChOx on (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE

Figure 7 shows the cyclic voltammograms of ChOx on the unmodified SPE (curve a) and on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE (curve b). ChOx immobilized on the unmodified SPE does not exhibit direct electron transfer (DET), which illustrates that the unmodified SPE does not provide a suitable microenvironment for the immobilization of ChOx. However, ChOx immobilized on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE exhibits a pair of well-defined redox peaks at −0.365 and −0.443 V. These peaks are assigned to the FAD/FADH2 couple and can be ascribed to electron transfer between ChOx and the underlying electrode [19,22,27].

Figure 7: Cyclic voltammograms of ChOx-unmodified SPE (a) and ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE (b) in 0.1 M PBS (pH 7.0); scan rate: 50 mV s⁻¹.

The potential difference between the two peaks, ΔEp, was 78 mV at a scan rate of 50 mV s⁻¹, which suggests that ChOx undergoes a quasi-reversible redox reaction on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE. The surface coverage (τ) of ChOx on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE was calculated using the following equation:

τ = Q / (nFA)

where Q is the charge, n is the number of electrons transferred, F is the Faraday constant and A is the electrode area.
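As a rough numerical check of the τ = Q/(nFA) relation, the sketch below back-calculates the coverage from a charge value; the charge Q used here is an assumption chosen to be consistent with the paper's own numbers (n = 2, A = 0.071 cm², τ ≈ 5.38 × 10⁻⁹ mol cm⁻²), not a measured quantity.

```python
# Surface coverage from integrated peak charge: tau = Q / (n * F * A)
F = 96485.0    # Faraday constant, C/mol
n = 2          # electrons transferred (FAD/FADH2 couple)
A = 0.071      # geometric electrode area in cm^2 (reported later in the text)
Q = 7.37e-5    # charge in C -- assumed here, chosen to match the reported tau

tau = Q / (n * F * A)
print(f"{tau:.2e} mol cm^-2")
```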
Accordingly, τ was found to be 5.38 × 10⁻⁹ mol cm⁻².

Effect of scan rate and pH at ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE

To probe the kinetics of the electrode reaction, the effect of scan rate on the voltammetric response of the ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE in 0.1 M PBS (pH 7.0) was studied in the range 5-75 mV s⁻¹, as shown in Figure 8A. The linear regression equations are:

Ipa = -6.6628E-7 + 5.4642E-7 ν (V s⁻¹); R = 0.9981 (8)

Ipc = -1.4345E-5 - 6.3799E-7 ν (V s⁻¹); R = 0.9932 (9)

The redox peak currents of ChOx increased linearly with increasing scan rate (Figure 8B), and the peak-to-peak separation also increased, indicating a surface-controlled, quasi-reversible process. In addition, the anodic peak potential shifted to more positive values with increasing scan rate, whereas the cathodic peak potential shifted in the negative direction.

Figure 8: (A) Cyclic voltammograms of ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE in PBS (pH 7.0) at different scan rates (5-75 mV s⁻¹). (B) The plot of peak current vs. scan rate.

The pH of the electrolyte solution has a significant influence on the redox reaction of the FAD/FADH2 couple of ChOx, with respect to both peak current and peak potential. Figure 9A shows cyclic voltammograms of the ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE in electrolytes of pH 4 to 8. The electrochemical response of the enzyme immobilized on the electrode surface is due to the redox reaction of its active site, FAD, which undergoes a redox reaction involving two electrons and two protons to form FADH2.

Figure 9: (A) Cyclic voltammograms of ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE at various solution pH values (pH 4-8); scan rate: 50 mV s⁻¹. (B) Plot of E1/2 vs. pH (4-8).

Because protons are involved in the reaction, the acidity of the solution has a significant effect on the redox potential of ChOx; thus, the anodic and cathodic peak potentials of ChOx immobilized on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE should be pH dependent. Indeed, the redox peak potentials of the enzyme shifted negatively with increasing pH, as shown in Figure 9B, indicating that protons participate in the redox reaction. A good linear relationship was obtained between the half-wave potential (E1/2) and the solution pH, with the linear regression equation

E1/2 = -0.025 - 0.058 pH; R = 0.9908 (10)

The slope of E1/2 vs. pH, 58 mV pH⁻¹, is close to the theoretical value (59 mV pH⁻¹) for a classical Nernstian process involving equal numbers of electrons and protons. Hence, the ChOx redox system is a two-proton, two-electron redox process.

Determination of cholesterol based on the direct electrochemistry of ChOx on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE

In this protocol, the direct electrochemistry of ChOx is based on the redox reaction of its active center, FAD. In the absence of oxygen, the direct electron transfer of immobilized ChOx can be expressed as follows.

In the presence of oxygen, the reduced enzyme is oxidized very quickly at the electrode surface. The electron-transfer turnover rate of molecular oxygen as an electron acceptor is about 700 s⁻¹ [42], which is much faster than that of ChOx on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE.
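Two of the quantitative claims in this section are easy to reproduce numerically. The sketch below (a) recovers the slope and intercept of the reported Ipa-vs-scan-rate regression, Eq. (8), from noise-free points generated by that same equation using an ordinary least-squares fit, and (b) computes the theoretical Nernstian slope 2.303RT/F at 25 °C against which the 58 mV pH⁻¹ value is compared. The scan-rate grid is illustrative, not the authors' raw data.

```python
import numpy as np

# (a) Reported regression, Eq. (8): Ipa = -6.6628e-7 + 5.4642e-7 * v  (v in V/s, I in A)
intercept, slope = -6.6628e-7, 5.4642e-7
v = np.linspace(0.005, 0.075, 15)     # 5-75 mV/s range used in the study
ipa = intercept + slope * v           # idealized (noise-free) peak currents
fit_slope, fit_intercept = np.polyfit(v, ipa, 1)  # least-squares linear fit

# (b) Theoretical Nernstian slope for an equal electron/proton process at 25 C
R, T, F = 8.314, 298.15, 96485.0      # J/(mol K), K, C/mol
nernst_slope_mV = 2.303 * R * T / F * 1000

print(fit_slope, fit_intercept, round(nernst_slope_mV, 1))
```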
As a result, an obvious electrocatalytic process towards the reduction of dissolved oxygen is observed, as given below. The catalytic regeneration of the enzyme in its oxidized form causes a loss of reversibility and, consequently, an increase in the size of the reduction peak, as shown in Figure 10 (curve b) [24].

Figure 10: Cyclic voltammograms obtained at ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE in nitrogen-saturated and oxygen-saturated PBS (a and b), and after addition of 50 μM cholesterol to oxygen-saturated PBS (c). Scan rate: 50 mV s⁻¹.

Upon the addition of cholesterol, a competitive reaction takes place in the vicinity of the enzyme-modified electrode surface, leading to a decrease of the reduction peak current (curve c) and thereby enabling the sensitive determination of cholesterol. In other words, in the presence of oxygen, ChOx on the modified electrode catalyzes the oxidation of cholesterol according to the following enzymatic reaction.

The reduction peak of the ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE in oxygen-saturated PBS (pH 7.0) decreased with the addition of cholesterol, which suggests that the immobilized ChOx retained its enzymatic activity. This can be attributed to the biocompatible microenvironment provided by the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs) composite. The added cholesterol is consumed in the enzymatic reaction with the oxidized form of ChOx (ChOx-FAD), which attenuates the concentration of ChOx-FAD and causes a decrease in the reduction peak current of the enzyme [27].
In addition, dissolved oxygen mediates the enzymatic oxidation of cholesterol by ChOx. The depletion of oxygen proximal to the electrode surface therefore makes the reduction of the oxidized form of ChOx less favorable, also leading to a decrease of the enzyme's reduction peak current [43]. Cholesterol is thus determined by measuring the decrease in the reduction peak current upon the addition of cholesterol in oxygen-saturated PBS (pH 7.0).

Figure 11A shows the differential pulse voltammograms for various cholesterol concentrations at the ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE in oxygen-saturated PBS (pH 7.0). The reduction current decreased gradually with increasing cholesterol concentration.

Figure 11: (A) Differential pulse voltammetric measurements at ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE in oxygen-saturated PBS (pH 7.0), without cholesterol (a) and with 10, 20, 30, 40, 50, 60, 70, 80 and 90 μM cholesterol (b-j). DPV parameters: scan rate 20 mV s⁻¹, pulse height 200 mV, pulse width 0.05 s, step height 10 mV and step width 0.5 s. (B) Relationship between Ipc and cholesterol concentration.

Figure 11B shows the calibration plot relating the decrease in reduction current to the cholesterol concentration. The linear regression equation is

Ipc (μA) = -4.9712E-4 + 7.4202E-7 C(cholesterol) (μM); R = -0.9972 (14)

Using the slope of this equation, the sensitivity of the enzyme-modified SPE was calculated to be 0.742 μA μM⁻¹, or 10.45 μA μM⁻¹ cm⁻² (electrode surface area: 0.071 cm²). A comparison of the enzyme-modified SPE with other SPE-based cholesterol determinations is given in Table 2 [44-46].

The results in Table 2 show that the sensitivity of the ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE is much better than that of other SPE-based cholesterol determinations.
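The sensitivity figures can be reproduced from the calibration slope of Eq. (14); the sketch below performs the unit conversion (A per µM to µA per µM, then normalization by the reported 0.071 cm² electrode area), using only values stated in the text.

```python
# Sensitivity of the enzyme-modified SPE from the calibration slope (Eq. 14).
slope_A_per_uM = 7.4202e-7   # calibration slope, A per uM of cholesterol
area_cm2 = 0.071             # reported geometric electrode area

sensitivity_uA_per_uM = slope_A_per_uM * 1e6             # ~0.742 uA/uM
sensitivity_per_area = sensitivity_uA_per_uM / area_cm2  # ~10.45 uA uM^-1 cm^-2
print(sensitivity_uA_per_uM, sensitivity_per_area)
```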
Furthermore, the applied potential and detection limit of the ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE are quite comparable with those of other SPE-based cholesterol determinations.

Table 2: Comparison of the ChOx enzyme SPE with other ChOx-based SPEs

Cholesterol biosensor                    Sensitivity (μA μM⁻¹)   Applied potential (mV)   Linear range (μM)   Detection limit (μM)   Reference
GNS-nPt/SPE                              -                       400                      0-35                0.2                    [45]
SP-rhodium-graphite-Au-P450scc           0.13                    -400                     10-70               -                      [46]
SP-RP450scc                              0.0138                  -600                     50-300              -                      [47]
ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE     0.742                   -380                     10-80               1                      Present work

Stability and reproducibility of the ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE

The direct electron transfer of ChOx on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE is very stable. Over twenty consecutive CV cycles at the ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPE in 0.1 M PBS (pH 7.0) at a scan rate of 50 mV s⁻¹, there was no change in the peak-to-peak separation, although the peak current gradually decreased; the electrode retained 86.6% of its initial response after twenty consecutive cycles. These results show that ChOx binds strongly to the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs) composite. To ascertain fabrication reproducibility, five ChOx-(Pd-Fe3O4/PDDA/COO⁻-MWCNTs)/SPEs were fabricated for the determination of cholesterol. The results show that the enzyme-modified SPE had satisfactory reproducibility, with a relative standard deviation (RSD) of 9.5%.

Conclusion

Fe3O4 and Pd-Fe3O4 nanoparticles were synthesized by a simple and facile microwave method, and their formation was confirmed by powder X-ray diffraction and FT-IR techniques. The Pd-Fe3O4 nanoparticles were used to prepare a biocompatible composite consisting of negatively charged multiwalled carbon nanotubes (COO⁻-MWCNTs) wrapped with positively charged poly(diallyldimethylammonium chloride).
This composite was successfully used for the determination of cholesterol with the cholesterol oxidase enzyme on a screen-printed electrode. DET of ChOx was observed on the (Pd-Fe3O4/PDDA/COO⁻-MWCNTs) composite, which shows that the composite provides a biocompatible microenvironment for ChOx. The linear range of the enzyme-modified SPE was found to be 10-80 μM (R = 0.9972), with a detection limit of 1 μM. Common interferents such as ascorbic acid, uric acid and glucose did not cause any interference because of the low operating potential.

Acknowledgements

The authors gratefully acknowledge the financial support from the Vision Group on Science and Technology, Government of Karnataka. R. Manjunatha thanks the Council of Scientific and Industrial Research, New Delhi, for the award of a Senior Research Fellowship. We thank Sri A. V. S. Murthy, honorary secretary, Rashtreeya Sikshana Samiti Trust, Bangalore, for his continuous support and encouragement.
Gold is one of the most popular metals people choose to invest in. The shine of gold, its luxurious feel, and the fact that its price has historically tended to rise give gold a market in which people are readily willing to invest. Gold is a treasure, an idea for jewelry, and sometimes a promise of a safe future. That is why people have been searching out the right ideas, making sure that when they buy gold they don't go wrong anywhere. Whether it is an investment, an instant purchase, or something to build a future on, here are three things to consider when you buy gold.

Gold is a metal that is softest when pure. This softness does not make the pure metal an ideal choice for jewelry makers, so it is usually mixed with other metals to make it hard enough for jewelry design. As a result, the purest and priciest gold is not found in jewelry; gold coins, raw gold, and even chunks of gold truly hold more value. Gold is available in purities of 24K, 22K, 18K, 14K and below: the fewer the karats, the lower the purity and value. When you invest in gold, make sure you know the karats and make a sound investment.

To buy gold, the best choice is to go to trusted and reliable dealers. They are registered with the authorities and deal in pure metals, so there is likely to be some safety in buying gold from them. They offer hallmarks, purity meters, and an easy way to understand the metals you are investing in. Reliable dealers offer you the prevailing market rates for gold without charging anything excessive.

The price of gold changes every minute. It fluctuates each moment, but the base prices used to calculate a sale are mostly taken from the opening or closing balances for the day. Hence it is important to choose a time when the price is running low. Study the market and predict when prices may go low; that is the right time to buy gold in quantity and keep it safe. Investing in gold is like securing your future in a metal that can serve you many times over. Make your choices wisely!
Colgate-Palmolive (Colgate) needs to determine how Precision should be positioned in the toothbrush market. Colgate must decide what segment of the market to target and how Precision should be marketed to that target market. Colgate's objective is to position and market Precision so as to fully realize the product's potential. To accomplish this objective, Colgate must determine what segment Precision should target, the super-premium niche or the broader professional brush market, and the marketing mix that would be most effective for that position. Colgate can position Precision as a niche product. Colgate does not currently have a toothbrush positioned in the super-premium niche; Oral-B, Johnson & Johnson, Procter & Gamble and SmithKline Beecham compete in this product segment. Given the number of quality competitors in the toothbrush market, product innovation is an important factor in differentiating the product and succeeding. Precision was developed to compete in the super-premium segment, aimed at consumers who are concerned about gum disease. As a niche product, Precision is expected to command a 15% price premium over Oral-B and to gain a 3% market share in its first year and 5% in year two. Precision's factory list price would be $2.13. The super-premium niche is growing: it accounts for 35% of unit volume and 46% of dollar sales. Baby boomers are more concerned about the health of their gums than about cavity prevention, and they are willing to pay a premium for toothbrushes that address this concern. The target customer segment for Precision in the super-premium niche is therapeutic brushers, who make up 46% of adults (Table B). Therapeutic brushers are high-involvement users who differentiate among products to seek out functionally effective oral health products. Precision can be marketed to these consumers using a niche positioning strategy.
Precision's design meets baby boomers', and in particular therapeutic brushers', expectations. It was designed using infrared motion analysis to track brushing movements and plaque removal. The three different lengths of bristles provide a triple-action brushing effect to clean the tooth surface, between teeth, and around the gum line. Precision achieved a 35% increase in plaque removal and, at the gum line, was twice as good as its competitors in removing plaque. Precision can effectively differentiate itself within the super-premium niche and command a higher price. Another option for Colgate is that, instead of positioning Precision as a niche product, Colgate could position Precision in the mainstream. If Precision were positioned with a mainstream strategy, it would have the opportunity to realize its sales and profit potential. With a mainstream positioning strategy, Precision would be priced at $1.85 in the professional segment and would expect to gain 10% of the market in its first year and 14.7% in year two. A mainstream positioning strategy would focus on the overall benefits of the toothbrush rather than concentrating on the prevention of gum disease. If Precision is positioned as a niche product, there is not an issue with production capacity: when Precision was designed, the production schedule was based on it being a niche product. If Precision is positioned as a mainstream product, this would put a strain on production capacity. First-year production would jump from 13 million units under a niche positioning strategy to 42 million units under a mainstream positioning strategy, and it would take a 10-month lead time to increase capacity. Switching to a mainstream strategy at this time would likely result in shortages of available product. While being perceived as a "hot product" may be desirable, the consumer will purchase a competitor's product if your "hot product" is not available.
When the time comes for the consumer to purchase a new toothbrush again, it may be more difficult to draw them away from the brand they purchased while your "hot product" was unavailable. Thus, the market-share potential may not be realized. In addition to the issue of product shortages, the capital investment necessary for a mainstream positioning strategy is much greater. The capital investment for the first two years of production under a mainstream positioning strategy would be $13 million, compared with $4.55 million under a niche positioning strategy. Thus, there is more capital at risk in positioning Precision as a mainstream product than as a niche product. One issue consumer testing showed that Precision must overcome is that the brush looked unusual, and consumers had mixed first impressions. However, testing showed that once consumers tried Precision, 77% claimed it was much more effective than their current toothbrush and that "You could really feel it working." The more consumers learned about Precision, the more they liked it. Thus, the niche positioning strategy, where the strengths of the product can be stressed, may be more effective in obtaining consumer use and acceptance of Precision. There is a strong risk that Precision's appearance would be a detriment under a mainstream positioning strategy. Precision's advantage over its competitors is its effectiveness against gum disease through plaque removal, and it is difficult to communicate the primary benefit of reduced gum disease from extra plaque removal broadly enough to compete in the mainstream positioning (Table B). Other than Colgate Plus, there is only one competitor, Plax, that currently stresses plaque removal (Ex. 9). Precision could be advertised with a broader message, but that would ignore its primary advantage over its competitors.

"Colgate-Palmolive Company: The Precision Toothbrush." Essays24.com, 12 2010. <https://www.essays24.com/essay/Colgate-Palmolive-Company-The-Precision-Toothbrush/19257.html>.
© jim hogg photography

About the Author

Bronwen Forbes has lived in both big cities and small towns and has experienced the advantages and drawbacks of both. She co-founded Free Spirit Alliance and has taught at various Pagan festivals, Pride Days, and conferences across North America. She is also the author of Make Merry in Step and Song: A Seasonal Treasury of Music, Mummer's Plays & Celebrations in the English Folk Tradition.

Llewellyn Publications
Woodbury, Minnesota

Copyright Information

The Small-Town Pagan's Survival Guide: How to Thrive in Any Community © 2011 by Bronwen Forbes. All rights reserved. No part of this book may be used or reproduced in any manner whatsoever, including Internet usage, without written permission from Llewellyn Publications, except in the form of brief quotations embodied in critical articles and reviews. As the purchaser of this e-book, you are granted the non-exclusive, non-transferable right to access and read the text of this e-book on screen. The text may not be otherwise reproduced, transmitted, downloaded, or recorded on any other storage device in any form or by any means. Any unauthorized usage of the text without express written permission of the publisher is a violation of the author's copyright and is illegal and punishable by law.

First e-book edition © 2011
E-book ISBN: 9780738729787
Cover art © Paul Oglesby/AA Reps, Inc.
Cover design by Lisa Novak
Editing by Brett Fechheimer
Interior photograph © Sylvia Forbes

Llewellyn Publications is an imprint of Llewellyn Worldwide Ltd. Llewellyn Publications does not participate in, endorse, or have any authority or responsibility concerning private business arrangements between our authors and the public. Any Internet references contained in this work are current at publication time, but the publisher cannot guarantee that a specific reference will continue or be maintained. Please refer to the publisher's website for links to current author websites.
Llewellyn Publications
Llewellyn Worldwide Ltd.
2143 Wooddale Drive
Woodbury, MN 55125
www.llewellyn.com

Manufactured in the United States of America

For Ravenna, Spiritrunner, K, Noey, Julia, Donna Hames, Becca, Jenn, Evy, Witch of the Woods, Kathleen from North Dakota, Deanna Eberlin, Moondancer, Keltasia, Cordelia, Lisa McSherry, Ruth Merriam, Rowen Brianna, Fergus, Kim Schaufenbuel, Andrea Covey, Darren, and Iris. You have taught me so much.

Acknowledgments

It is always true that one person does not write a book in a vacuum. In this case, I have fifty survey respondents to thank, especially the twenty-three who have agreed to let themselves be quoted in this book either in their own words as they completed my survey or as interviewees on specific topics. They are as much co-authors as they are contributors; their wisdom and experiences have taught me a great deal. My thanks also to Elysia Gallo of Llewellyn Worldwide for once again believing in me and what I have to say. She and production editor Brett Fechheimer are truly saints for not panicking when a family medical crisis delayed my work on the final manuscript. I have tried to be as accurate as possible when recording the population of the towns the survey respondents live in, unless the respondents did not want their town specifically mentioned. For this I have used the numbers provided by the U.S. Census Bureau for its 2009 Population Estimates Program. I apologize for any inaccuracies. I could not have written this book without the help and support of my husband, A. G., and my daughter Rose. Not only have their own experiences as small-town Pagans been chronicled here, but they've handled my telling of those experiences to perfect strangers with humor and grace. They were also the best traveling companions ever, for the now-famous "Walmart Altar Road Trip" recounted in chapter 4.
Finally, because of life and moving and finishing a bachelor's degree in journalism and a few other inexcusable excuses, there was a four-year gap between the time I originally sent out the survey to interested participants and when I actually started working on the book. During that time, several of my original respondents changed or dropped the e-mail addresses they used to send me their survey responses (the only way I had to reach them). Their comments have not been included in this book, since I had no way to get their signed permission to do so. One of those lost respondents didn't have a lot to say in her survey that was overly positive or negative, but the name she asked me to use if I quoted her told me everything I needed to know. She asked me to refer to her as "Silent." On a very deep and personal level, this entire book is dedicated to all the "Silents" out there who were too frightened to say anything to me at all.

Bronwen Forbes
Baldwin City, Kansas
Summer 2010

Contents

Introduction
Chapter 1: Popularity Contest
Chapter 2: Making Contact
Chapter 3: The Well-Decorated Broom Closet
Chapter 4: The Discount Superstore Altar
Chapter 5: Minimum Daily Requirements
Chapter 6: Internetworking: Finding Others of Like Mind Online
Chapter 7: Community Building
Chapter 8: Problems, Like Charity, Begin at Home
Chapter 9: The Experts Speak
Afterthoughts
Recommended Reading
Resources

Introduction

Some Basic Definitions

For the purposes of this book, if you think you are a Pagan, you are. And if you think you live in a small town, you do. After all, the definition of a twenty-first-century American Pagan is relative. Some use the term to define anything that isn't one of the "Big Three"—Christianity, Judaism, or Islam.
Even within our own community, the term Pagan can encompass vastly different beliefs and practices, as illustrated by the following definition from the Pagan Pride Project:

A Pagan or Neo-Pagan is someone who self-identifies as a Pagan, and whose spiritual or religious practice or belief fits into one or more of the following categories:

• Honoring, revering, or worshipping a Deity or Deities found in pre-Christian, classical, aboriginal, or tribal mythology; and/or
• Practicing religion or spirituality based upon shamanism, shamanic, or magickal practices; and/or
• Creating new religion based on past Pagan religions and/or futuristic views of society, community, and/or ecology;
• Focusing religious or spiritual attention primarily on the Divine Feminine. . . .1

I can't tell you if you're Pagan or not. No one can. But if one of the definitions above closely describes what you do and think, or want to do or think (but may not know exactly how to yet), religiously speaking, then I think you're a Pagan. The definition of "small town" is equally relative. If you grew up in New York City, then you probably think of Lubbock, Texas (population 225,859) as a small town. But if you grew up in Fayette, Ohio (population 1,281), then Lubbock is a big city to you. Likewise, I can't tell you if you live in a small town or not. Let me give you an example: when I lived in the suburbs of Washington, DC, I knew I lived in a big city. Spiritually, my city offered three major Pagan festivals a year, a quarterly Pagan newspaper, four or five good occult shops, dozens of botanicas (supply shops for those of the Santería, Voudon, and Yoruba faiths and a source of awesomely cheap glass-enclosed pillar candles), a monthly concert/lecture series, and more covens, classes, workshops, forums, and discussion groups in more traditions than one person could join in a lifetime.
Washington, DC also offered me the opportunity to wear my pentacle and other Pagan jewelry as openly as I pleased twenty-four hours a day, seven days a week. Who cares about a little silver star-in-a-circle when the Hare Krishna standing behind me at Starbucks is wearing saffron robes, has a drum tucked under his arm, and sports a shaved head—except for one long ponytail? In early 2000, for reasons I will discuss later in this book, I moved to Missouri and spent four years in a town of about 91,000 souls, a figure that included the student population of three colleges. I was not prepared for a place where the nearest Pagan festival was three hours away and the local community only got together once a month for discussion and pizza. There were two covens—my husband and I started one of them—and a small student organization at one of the colleges. Oh, and the nearest Pagan shop was also about a three hours' drive away. I thought I had died and moved to hell. Then, in 2004, my family and I moved again to a tiny town in rural New Mexico. The population, according to the sign at the town limits, was "twelve thousand friendly souls and a few old grouches." The nearest Pagan shop was now four to five hours away. There was no monthly community get-together at all. I had Pagan acquaintances who were afraid to be seen buying Harry Potter books, much less meeting openly in a restaurant or coffee shop to discuss (insert whispered tones here) Paganism. There were no concerts and the nearest coven was ninety miles away in Roswell (population 46,453). There was, however, a very bad taste in the locals' minds about the word Pagan after some college students tried to start an on-campus study group only to be blasted in the town newspaper by every Baptist minister within a thirty-mile radius. And there are a lot of Baptist ministers in those parts. 
The Pagans weren't about to admit their religious affiliations after that, not even to their fellow practitioners, and the non-Pagans in the area eventually found something else to get all worked up about. But the college study group didn't last very long. Neither did my job, after the local university paper did the requisite "Let's interview a Witch for Halloween" article, with me as the interviewed Witch. But I will discuss all of this in detail later. Comparatively speaking, that town in Missouri seemed like a spiritually rich place.

So my personal definition of "small town" has changed dramatically. And yours is probably different from mine. The 2000 Census found that approximately 60 percent of all Americans live in (or in the suburbs of) a city with a minimum population of 200,000, which means that approximately 40 percent live in small towns and other rural areas. From this we can assume that approximately 40 percent of all Pagans live in (or in the suburbs of) a city with a population under 200,000.

Whether you've never met another Pagan in your life or have been part of a thriving community for years and are looking for different perspectives on the Pagan culture, this book is for all of us. Some of the material in this book will already be familiar to those of you with group experience, but I hope the rest of it will speak to all Pagans who live in a small town, came from a small town, or just want to know how 40 percent of us live our day-to-day spiritual lives.

I am only one Pagan living in one small town in America. I don't even come close to thinking I have all the answers. So in 2005 I came up with what I think was a fairly comprehensive survey about life as a small-town Pagan.
Through the wonderful meetinghouse that is the Internet—LiveJournal (livejournal.com) and Witchvox, also known as the Witches' Voice (witchvox.com), specifically—I posted notices inviting interested parties to answer questions about everything from home décor to ritual attendance to child rearing. I am blessed to have received fifty completed surveys, plus one interview via e-mail and four interviews via phone, about my subjects' specific cyber spiritual or first-time festival experiences. Some of the respondents are old friends I managed to guilt-trip into completing my survey or letting me interview them on specific topics; the rest I've gotten to know through working on this project. You will get to know them, too, as their comments and wisdom are quoted throughout the book in their own unique voices.

So if you think this book might be relevant to your living situation and your spiritual practice, I hope it is. I wrote it for you. It also means you fall into one of the following three categories, which I call:

Hometowner

A Hometowner is someone who has grown up in a small town and discovered Paganism at some point or, like my daughter Rose, someone born into a Pagan family that lives in a small town. If you've never spoken face to face with another Pagan or attended a public ritual or other event in your life, I hope the information and encouragement from your fellow Hometowners in these pages will encourage you to do so.

Emigrant

An Emigrant is someone who grew up in a big city or suburb and has moved—or plans to move shortly—to a small town for career, love, health, or some other reason after starting to practice Paganism. While you probably have quite a good grasp of basic ritual and Pagan festival etiquette, you've probably had (or are afraid you will have) a bit of a culture shock in moving to a small town.
I hope reading this will temper that shock somewhat, and reassure you that you are not the only former city-dweller who did something stupid (but thankfully not obviously stupid) while buying toilet paper in a small-town grocery store.

Interested researcher

You're curious about the reality of non-urban Pagan life in the early years of the twenty-first century and want to know more about it. All I can say is, the people who have already read the material in this book—both urban and small-town Pagans—think there's something in here for every one of us!

It is possible to be both a Hometowner and an Emigrant. I am. I grew up in a small town (population roughly 8,000 at the time). Looking back, I realize now that I was headed straight to Paganism, even as a teenager. The first clues were my inability to feel "spiritually fed" on Sunday mornings in the Episcopal church and my overwhelming sense of spiritual connection whenever I watched or performed the seasonal folk dances of England that were very popular in my hometown. However, I spent most of my twenties and thirties living somewhere in the Wilmington, Delaware/Baltimore, Maryland/Washington, DC corridor, and "discovered I was Pagan" while I was there. As of this writing, I share my town with the 4,400 other residents of Baldwin City, on the Kansas prairie—but I'm less than an hour away from all the Pagan amenities that Kansas City has to offer. By my standards, it's a pretty cool place to be Pagan.

What you hold in your hands looks and feels like a book. It even reads like a book. I don't like to think of it as a book, however; I prefer to see this object as a conversation.
Imagine with me for a moment that you and I and the fifty people who contributed their thoughts and opinions are sitting around a very large version of your kitchen table, drinking coffee (or tea) and eating homemade cookies that are still warm from the oven as we swap stories about what it's like to be Pagan in small-town America at the beginning of the twenty-first century.

When you've finished this book, I hope you will gather together some local Pagans and invite them all to a restaurant or meeting space in your town for coffee, tea, warm cookies—and a continuation of our conversation.

1. "Who We Are," Pagan Pride Project, Inc. Online at http://www.paganpride.org/who/who.html.

Chapter 1

Popularity Contest

My parents weren't happy at first, but as time went on and they saw I was still me, not some weird girl wearing all black and sacrificing animals, they mellowed. They were hippies though, and a bit more open-minded than most in this town. My extended family just refuses to talk about it at all. Although when my parents died earlier this year, my relatives were "kind" enough to send me pamphlets explaining why I'd be spending eternity burning in hell. I was a bit pissed off about that. My brothers have always been okay about it. They're like, "Whatever." I have lost friends over it and some actually cross the street when they see me coming, but overall I think because I've been calling myself a Witch for so long that people have just gotten used to it.

—evy, bolivar, new york (population 1,089)

When I first had the idea to write this book, there was a "reality" television program on the Country Music Television network (CMT) called Popularity Contest. The premise was: take ten people from large East or West Coast metropolitan areas and dump them in Vega, Texas (population 896). The contestants lived in the townspeople's homes, worked in their businesses, and America got to laugh at the contestants' attempts to cope with small-town life.
The residents of Vega were able to vote one person off each week, and the winner got $100,000—half of which was to be shared with some or all of the residents or organizations in the town, as the winner saw fit. Even by reality show standards, Popularity Contest was pretty lame; it was a onetime show that was never repeated, probably because as many people tuned in to laugh at the "backward" residents of Vega, Texas, as tuned in to laugh at the contestants. As a cautionary tale for Hometowners and Emigrants, however, the show was priceless. The contestants and the residents learned some profound and relevant lessons about tolerance and debunking stereotypes that you can apply to your own situation.

For example, the contestants on Popularity Contest were asked to go on a town-wide scavenger hunt their first day. One of the items on the "to find" list was Devil's Rope, more commonly known as barbed wire, only they weren't told that. While the joke was on the contestants—how can anyone not know what barbed wire is?—the real test was in how they handled the joke. Some were angry, while others took it with grace and basic good humor—and were voted off much later than their more judgmental fellows.

While there may not be $100,000 on the line in your situation, judging your fellow small-town residents by your newly discovered Pagan sensibilities and/or urban standards is just going to get you into trouble. My survey respondents agree.

So how do you fit in in a town you don't fit into? For Hometowners, this is a particularly difficult question, because up until you realized or decided you were Pagan, you probably fit in pretty well. And, unlike most people who reside in big cities, you are more likely to live near your family and childhood friends. You also see them more often as you go about your daily work, errands, and other activities than someone who lives in a city with a million other people. What do you do?
To Tell or Not to Tell: Coming Out of the Closet

We talk about "coming out of the broom closet" as Pagans much as gay men and lesbians talk about "coming out of the closet" about their sexuality, and if you're reading this and you're gay and Pagan in a small town, you most certainly have some big challenges ahead. I hope some of the advice in here will help make those mountainous challenges a little smaller. Lars, a gay Pagan friend who lives in a large East Coast city, reminded me recently, "You do not come out of any closet once and are done with it. You choose every day and with every person to be out or not."

Some family members suspect, but I haven't come right out and told them, and a few friends know. Because our area is very narrow-minded, I am not blatant about my religion, but if someone asked, I would tell them.

—keltasia, shamokin, pennsylvania (population 7,361)

The first and biggest question, of course, is whether to tell your family and friends about your religious identity. I can't tell you whether you should or not; no one can, because no one but you knows your family that well, understands your work relationships, or wants to keep your kids at the end of the messy divorce you may currently be going through.

I grew up in Berea, Kentucky, in the 1970s; the population at the time was about eight thousand. I moved away in high school and realized I was a Pagan while in college in Baltimore, Maryland. But up until I was in my mid-thirties, I returned to my hometown every Christmas, and always hid my Pagan identity from the people who'd known me the longest. I was convinced they wouldn't understand.
I joined Facebook about a year ago and have connected with 177 people as of this writing, at least half of whom I knew as a child and/or in high school—including my first ex-husband who even admitted on his own Facebook page that he left me because I was a Witch.2 My profile, updates, and many of my posted notes leave no doubt in anyone's mind that I'm Pagan. As near as I can tell, no one cares. Of course, they haven't seen me in person since 1998, which is the last time I visited Berea; my reception may be very different face to face.

I have contact with friends since my parents still live in my hometown. My family is very supportive. In fact my mother "outed" me at a family reunion and I wasn't even there. No one has given me any negativity over being a Witch. My friends think it's pretty cool.

—k, sevierville, tennessee (population 17,297)

No matter where we live, some of us just don't have the freedom to say, "If my friends and family can't accept my beliefs it's their problem, not mine, so I won't hide who I am," as Deanna Eberlin of Addison, New York (population 1,708), wrote in her survey. I have heard story after story about bosses who say (or at least imply), "Come to my church or you're fired," and I'll bet you Hometowners have, too. Maybe it's even happened to you. Your friends might want to be supportive, but who knows how much pressure they're under from their own family, employers, and significant others to drop you like a hot rock now that you're different? If you come out, you run a very big risk of being, at minimum, ostracized from the people you've known since kindergarten.

My family knows what I am, and for the most part have disowned me; they think I'm going to hell and taking my child with me.
—spiritrunner, bakersfield, california (population 324,463), previously in taft, california (population 9,032)

On the other hand, if you're living in the same small town you grew up in, don't underestimate your family's ability—and your friends' ability—to adapt and understand. No, not everyone will understand and accept your religion; hard or painful as that may be, it's part of the reality of being "different." But I think, and most of my survey respondents think, that you could be pleasantly surprised. Moondancer, from a small town in the state of Washington, agrees:

My family was well aware of my Pagan beliefs, although most of them don't understand any of it. The majority of them are Christian of one denomination or another. There are a lot of Baptists in my family, as well as Assembly of God and other evangelical churches. I like to think that it doesn't affect my relationship with any of them; I still get sent all of the "inspirational" e-mails and sob stories from them that they send to all the other relatives, and while I occasionally wish there were some Pagan equivalents, for the most part I just smile, and accept the well-wishes for that and ignore the subcontext. The few friends I have who are not Pagan are from work, and as I don't consider religion/spirituality to be appropriate to be discussed in the workplace, it rarely comes up. Having said that, I don't make any particular attempt to hide or disguise my beliefs, and if asked directly, I'll answer with as much accuracy as I think they can handle.
I particularly like Moondancer's points about religion not being a particularly appropriate topic to discuss at work—unless you happen to work in a church—and about only giving your loved ones as much information as you think they can handle.3 I have found, and many of my survey respondents have found, that a little at a time is best—kind of like how parents teach their children about sex: a little at a time over the years, and never more than they are ready to know. In other words, when you're at the next family reunion, don't dump the whole thing on your nice Methodist Aunt Virginia at once—that the God may or may not have antlers, horns, or cloven feet; the history of human sacrifice; the May First sex holiday known as Beltane. Show some tact, take it slow.

My parents thought it was a phase for a while, but now they accept it. The rest of my family accepts it. Though at first my little brother said I was going to hell, but then he read up on it and is probably a little bit Pagan too. Even my mom thinks she was a Native American medicine woman in a past life.

—kathleen, from a town in north dakota

Your neighbors, too, may have questions about your faith once they notice your car is still in the driveway every Sunday morning or the odd nail-filled glass bottle hanging from a tree in your front yard. While fences do, in fact, make good neighbors, Julia and K both found that a few favors and a casual attitude about Paganism on your part go a long way toward making tolerant neighbors. Julia's advice in particular is something we can all take to heart, Emigrants and Hometowners alike:

Most of my neighbors get to know who I am before they find out about my religious practices. For the most part, my experiences have all been very positive. We tend to help folks with shoveling the walk in the winter, the little old lady with the garbage, and stuff like that, so we are good neighbors. My religion is rarely mentioned at all, and it has not been negative at all here.
—julia, east stroudsburg, pennsylvania (population 10,411)

I have not shared my religious beliefs with too many people locally, especially as I am a relative newcomer. The few that I have told were either interested in knowing what Paganism entailed or they just said, "Oh, okay" and left it at that. It has been a pleasant surprise for me to find this attitude in a highly Baptist area.

—k, sevierville, tennessee

Sometimes your neighbors may figure it out without you having to say or do a thing. And sometimes, as herbalist Witch of the Woods found out, your non-Pagan neighbors can actually actively help you in your spiritual practice.

The elderly couple across the street must know something, since the wife has many wild herbs I've used for medicine on her property. Once she stopped me and said I could dig up and transplant any herb I wanted for my "potions." She and her husband have been nothing but curious and sweet. Honestly, it's boring, but I have never had a negative reaction, especially once people talk to me and ask me questions.

—witch of the woods, merrimac, wisconsin (population 882)

Fitting In

Like the contestants on Popularity Contest, you may have a hard time fitting in if you're an Emigrant—partly because you're Pagan and partly because you're not used to small-town life. Even if there is a small Pagan presence in your town, if you're not careful you could still ruin your chances of social acceptance not only in the community at large, but also within your own "tribe."

Try not to sound like a "know-it-all" by saying, "This is how my group(s) used to do it." Everyone hates that, I think. I let them get to know me before offering authoritative opinions. Avoid asking "why," which is confrontational, and never say, "You should/ought . . ." People hate that, too. It sounds parental.
—rowen brianna, bowling green, kentucky (population 56,598)

I was living in the suburbs of Washington, DC in 1999 when I fell in love with one of my oldest friends who was in graduate school in Columbia, Missouri (population currently 102,324). Even though I'd grown up in a small town, I had a genuine case of culture shock when I moved to the Midwest to be with him. For those of you who always hope for romantic endings, he'd been in love with me for about a decade at that point. We married in early 2001 and are still happily together.

I moved to a small town for love and had the hardest time fitting in—and not only because I was Pagan. Ironically, I'd been calling upon the four directions as part of my spiritual practice since 1985, and quickly realized I had trouble coping in a culture where people told you how to get somewhere by saying, "Turn north at . . . then turn west at . . ." I'd say, "Is that left or right?" and I'd get a funny look followed by a patient sigh and "It's north. Then west." I got lost fairly often that first year. Actually, ten years later I can still get lost if someone gives me directions based solely on, well, the directions.

One day I had only a few minutes before I was due somewhere else (either work or rehearsal for a play I was helping out with), and I had to stop at a grocery store and buy toilet paper. I looked at the posted store directory above the aisles and didn't see it listed, so I asked an employee where the toilet paper was. He said, "It's on the north wall." Like I was supposed to know instinctively which wall was the north wall! Of course it was the fourth wall I checked. Had I grown up with a more practical, non-ritual sense of the four directions, I probably wouldn't have been late for wherever I was headed.

I also talked too fast—like anyone from a large East Coast city would, which tended to alienate me from Pagans and non-Pagans alike. I'm still trying to break myself of this habit.
It was also hard, with my semi-traditional Wiccan background and festival coordinator experience (I was one of the founders and was festival coordinator for several of the first ten Free Spirit Gatherings in Maryland), not to come across like a snob to my fellow small-town Pagans, most of whom had never been to a festival and had never had any formal religious training. I wish I'd had the following advice when I was trying to fit in:

It's not so much that I try to fit in, but I'm not into advertising my Paganism. I might wear something very discreet (like a small pentacle), but that's about it.

—noey, coupeville, washington (population 1,869)

Other respondents who didn't want to be quoted mentioned repeatedly that getting involved in the community was a good way to fit in for both Emigrants and Hometowners alike. Many of them talked about organizations they volunteered with that were compatible with their Pagan values—the town's animal shelter or park clean-up group were cited as favorites. Others suggested getting involved with local activities like elections, theater groups, or clubs that revolve around your non-spiritual interests (ham radio, dog training, scrapbooking, gardening, home-brewing, etc.) as ways to relate to people in your community in a safe, non-threatening way.

My husband and I have a small child, so our family also participates in local, seasonal activities. We attend Memorial Day parades, street fairs, Fourth of July fireworks (a perennial favorite!), farmers' markets, fall festivals, and summer activities at the local library. My daughter is also actively involved in soccer and gymnastics. Freezing on the sidelines with the other parents as your kids vainly attempt to kick a ball, or waiting in the wings together at the year-end gymnastics recital, are bonding rituals that equal anything I've ever experienced in ritual or at a Pagan festival! We hope to eventually make casual friends at these events, whether they're Pagan or not.
In the meantime, these activities keep us from hiding in the house and make us feel like part of the community. Witch of the Woods agrees with me:

Definitely get to know the people you live with. I often exchange desserts and herbal meds with them. These small towns love it when you get involved, even on a small level. They will respect you if you participate with them. Volunteer at Little League or just be kind and respectful of others.

—witch of the woods, merrimac, wisconsin

Sometimes the community provides its own Pagan activity without even realizing it. When we lived in Portales, New Mexico (population 12,184), we were shocked to discover that seniors at the local high school had been winding ribbons around a Maypole during graduation week since 1929. Ironically, the annual Maypole is the only dance event allowed on school property. The homecoming dance and senior prom must be held at another location—usually at the local college's gym—because the school board thinks those other dances are "licentious." If only they knew!

The yearly high school Maypole Dance is a big event in the community—the public is invited and the auditorium is packed. Parents and grandparents attend the final dress rehearsal to film and photograph the event. The boys wear tuxedos and the girls wear formal full-length dresses, complete with hoop skirts. The couples weave the ribbons, and then waltz around the pole. I'm considered to be somewhat of an expert on the history of Maypole dancing and am full of advice on how to make and dance around one, but even I would not try to do the Maypole in a hoop skirt!4 I have no idea how the girls manage. The Portales News-Tribune faithfully covers the event every year; this is front-page news.
An article in the May 22, 2005, edition of the News-Tribune spends several column inches on the dresses and how several generations of the same families have danced around the Maypole, but only briefly touches on the Maypole itself:

Maypole is historically a fertility and pagan [sic] ritual, but at PHS [Portales High School], it has become like a second prom for graduating seniors, only much more formal. The original intention of the dance does not seem to sway participants . . .5

Of course, not every small-town high school conveniently provides a Maypole dance for the community at graduation time, but if you look hard enough, you're bound to find a local activity or custom that feeds your Pagan soul.

Slow, Subtle, and Quiet Is the Way to Go

When my husband A.G. was an undergraduate at a major state university (the culture of which can very much resemble a small- to medium-sized town), the girlfriend of one of his closest friends decided to become a Witch and out herself to the entire university community at the same moment. There's an old theory that if you say, "I'm a Witch" three times while turning in place, you'll instantly be one. However, the young lady chose to conduct this rite of passage at full volume, in ritual robes, in the middle of a very crowded student cafeteria at lunchtime.

You probably already know that this is not the best way to reveal your religious preferences in a small town, but just in case there's any doubt, my survey respondents would probably copy A.G.'s reaction to the spontaneous self-initiation in the cafeteria and laugh uproariously—and he already knew he was Pagan at the time. Maybe the young lady should have heeded the following counsel:

I'd offer a bit of advice from the so-called "long version" of the Rede of the Wicca: "Soft of eye and light of touch—speak little, listen much." Take pride in yourself, but you don't need to beat your friends and neighbors over the head with your religion or spirituality.
At the same time, pick your battles—not everything is worth fighting over, but some things are. It's a tough choice sometimes to know which is which.

—moondancer, washington state

Consider why you want to tell various people—do they need to know? Do you just want to get it off your chest, or do you just want to be in their face about it? This doesn't have to be a public issue, but on the other hand, being in the broom closet carries its own special risk. If you live in a "right to work" state, then your employer can fire you for any reason not against the law—they don't like your clothes, for example. Even though religious discrimination is against the law, another excuse is easy to come up with and lawyers are very expensive. See Dana D. Eilers's book Pagans and the Law.6

—rowen brianna, bowling green, kentucky

Rowen Brianna brings up an interesting point about the motive for letting those around you know about your religious beliefs. Maybe you do want to "just be in their face" and would do just about anything to shock your family, friends, and neighbors out of their presumed small-town conservative outlook—and I say "presumed," because in my experience there is a much stronger "live and let live" mentality in small-town residents than city-dwellers give us credit for.

There are places from one extreme all the way to the other in terms of acceptance of Pagans. A place known for its wide acceptance isn't always better than a tiny town known for its Christian population.

—becca, clovis, new mexico (population 32,863)

Let me give you an example. I was once staffing a vending booth at a Pagan festival about an hour from my home here in Kansas, when one of the merchants in a nearby booth realized that he and his boothmates needed some additional food and drink supplies to get them through the rest of the weekend. He decided he needed to make a Walmart run. The nearest Walmart to the festival was in the big town up the road from mine, population about 80,000.
Now, this gentleman had been wearing very little except a thigh-length leopard-print nylon bathrobe all weekend, and did he change clothes for his Walmart run? He did not. Was he beaten up, laughed at, insulted, or refused service at the local Walmart? He was not. Heck, his picture didn't even end up on www.peopleofwalmart.com (yes, I checked). He probably would have done better to take Julia and Noey's advice, however:

Don't shove your religion in people's faces. Most people are not going to have a problem with your spirituality as long as you don't push it on them. If these people have known you your whole life and know who you are, it's probably not going to change much, unless you have been obnoxious or have a chip on your shoulder.

—julia, east stroudsburg, pennsylvania

Find something discreet to wear; once folks get used to that and get to know you, move slowly into more direct forms of expression but always remember: We are the Hidden Children.

—noey, coupeville, washington

On the other hand, it's hard to live a lie—as, again, many gay men and lesbians can attest—to pretend you're a bona fide member of the "mainstream" (whatever that is) when you know in your heart that you're not. It's tough to censor yourself constantly so you don't inadvertently out yourself to friends or co-workers. I've been calling myself Pagan for over two decades now, and I still have trouble keeping certain phrases out of my vocabulary when I'm not in Pagan company. I frequently slip up and say things like "Oh, Goddess" instead of "Oh, God" (or "My God"), and "Godsdammit" often pops out of my mouth when I'm annoyed or frustrated. Occasionally I will use the expletive "Sweet Buddha's tits" which is at least a little less religiously inflammatory (except maybe to Buddhists). I've even been known to recount time based on Pagan holidays to non-Pagans, as in "Yes, Mr. Mechanic, I know my car was supposed to have the oil changed sometime back around Lammas." Oops.
Since my early Pagan days in 1985, I've had very few friends who weren't Pagan, and none of them were what I'd consider close friends—more like good acquaintances. These few friends may have thought we were close, but I knew better. If they didn't know I was Pagan, it was probably because I'd determined they couldn't accept that I wasn't Christian, which meant our entire friendship was based on a lie. You may already be doing the same thing, and if you are, you know how lonely, painful, and yet occasionally necessary this choice can be.

In addition, you may initially have a positive coming-out experience with your family, and then realize years later—usually during a crisis—that they've never really accepted your religion at all. This happened to us.

A.G. told his mother sometime in the late 1980s that he was Pagan. Did she ask what that meant or offer to pray for his misguided soul? No. She just asked him if he'd still come home for Thanksgiving and Christmas dinners—these two meals being the most important rites on my mother-in-law's ecumenical calendar. The holidays these feasts are attached to mean almost nothing to her; it's all about the family getting together and eating well. Once he reassured her that he'd still come home for the sacred suppers, she completely lost interest in his religious identity. She told the rest of the family; no one ever said anything negative to A.G.—or to me, after we were married—and we assumed all was well with our religion and his blood kin and that was the end of the issue. We were wrong.

In the late summer of 2003, A.G.'s father was dying of blood cancer. He'd been unplugged from some of the life-support machines when A.G. and I went into his intensive care unit room to say goodbye. My father-in-law wanted to let go, and had specifically asked to be taken off the machines. He and A.G. were cracking jokes, and A.G. asked me to sing something for his father.
I started to sing an old English folk song I'd sung in front of my devout Methodist grandmother on more than one occasion—in other words, I was not singing anything particularly "Witchy." A.G.'s older sister was in the room at the time, yelled out "Freaks!" and stormed out. By the time I'd finished the song and we left the room, rumors were flying all over the packed family waiting area that I was "practicing Witchcraft" on my father-in-law in the middle of the ICU. Needless to say, this caused a rift in the family that is still healing seven years later.

The problem is, if you've grown up in a small town and have recently discovered your Pagan leanings, it's too late to avoid non-Pagan family and good friends. Not only do you probably have a handful of really good non-Pagan friends, but these people have also been your friends for years. You also have nearby family to consider. And now your friends and family may hate you for "changing the rules" on them, for changing your perspective as you study more and more about your new path. If there is a situation with more potential for pain in small-town Pagan life, I can't think of it.

Fortunately, my surveyed experts had a lot of advice about balancing coming out of (or staying in) the broom closet with being true to yourself—especially for Hometowners:

Don't go out of your way to announce it, but don't be afraid to be who you are. And you better be strong in what you believe because you will be constantly challenged. However, there are a lot of people who are in the closet here and don't want any spotlight shown on them. They come to me with lots of questions and advice; it's kind of funny to watch people be that way, but I don't mind. I do have some bumper stickers on my Explorer that indicate it belongs to a non-Christian and have had "Witch" scratched into the hood of my truck!

—jenn, mountain home, idaho (population 12,266)

Be the same person you always were.
Unless you were an asshole, then you might want to change, but my point is, if people see that you aren't just trying to freak them out, that this is still just you, they'll be more accepting. Most often people just ask about my necklace (it's a small pentacle) and then move on. I've been pretty lucky: most have been pretty good about it, but occasionally I've been handed pamphlets and told I'm going to burn in hell. A pizza shop owner once refused to serve me because of my necklace, and I refused to remove it. —evy, bolivar, new york People in small towns are just . . . skittish when it comes to differences. Ease them into it. Don't go running through town dressed like a hippie, doused in patchouli, yelling, "Long live the sun god, he's a real fun god! RA! RA! RA!" Don't go out into your backyard buck nekkid waving your athame at the stars. Cops will probably be called, even if you truly are on your own property and can do what you please. Keep it low-key, unless you really, really like stirring up the neighborhood. —ravenna, dowagiac, michigan (population 5,635)

Advice for the Stranger in a Strange Land: The Emigrant

I've been away from the East Coast for over a decade now, and I have the strong Midwesterner identity to prove it—with one exception: after living near New York City-style pizza and/or with former New Yorkers who were raised on New York City-style pizza (thin crust with cheese that is likely to slide off with no warning whatsoever) for so many years, I still fold my pizza slice in half lengthwise before I eat it. This is pretty hard to do with thick, Midwestern, Chicago-style crusts, but I manage! I also fold toast, quesadilla slices, and anything else that's flat and doesn't need a fork to eat. I can't help it; I don't even do it consciously.
The funny thing is, people here in the Midwest (not to mention my husband, who has never lived on the East Coast) tend to stare at me more for my "food-folding" than they do my Celtic knotwork T-shirts or crescent-moon jewelry. Other than "Don't fold your food," here's what my respondents had to say to Emigrants who have recently moved to a small town: Don't pay attention to the criticism, but don't go trying to convert people either. When people are ready to discover a new truth, they will seek you out. —jenn, mountain home, idaho Take a deep breath, slow down, and learn to amuse yourself. It's pretty quiet here, though there are plenty of opportunities to get out and just be with nature. —becca, clovis, new mexico Don't try to flaunt being a Pagan; don't assume the town is out to get you; don't assume we don't like you because you're Pagan—it could be your personality, after all! —noey, coupeville, washington Don't make a big deal out of it. Confrontation helps nobody. If a particular person is supposed to know, an occasion will arise. —donna hames, nashwauk, minnesota (population 960)

Chapter 2
Making Contact

I go to Boise (about 50 miles away) at least once a week to Crone's Cupboard, because my mentor is there. I really love to soak up the energy of the shop and the wonderful people you meet there are great!
I also plan to go to Washington next year for the Spring Mysteries, not to mention the yearly trip to GoddessFest. —jenn, mountain home, idaho Hometowners, you can certainly choose to stay solitary and never speak to another Pagan face to face if you want to. By my definition, religion is what you do with other people, but spirituality is between you and the Gods. Likewise, Emigrants who have had enough Pagan community contact and politics and stupidity and petty bickering and backstabbing to last several lifetimes may appreciate the solitude and personal spiritual focus that life in a small town and away from most, if not all, other Pagans can provide. That being said, there are definitely advantages to being in at least semi-regular contact with other members of the Pagan community. The best reason to sit in a formal or informal circle with others once in a while is the opportunity for some basic reality checks. Are you grounding and centering properly? Are you really in a trance state on a guided meditation, or are you just daydreaming—or asleep? If you have only done ritual by yourself, you are much more vulnerable to imaginary spiritual experiences. Let me give you an example. At one time, my coven offered holiday rituals that were open to the community. A solitary came to one of our Beltane celebrations, speaking to anyone who would listen about her amazing psychic powers and her "astral fiancé." Apparently she'd met a man on the astral plane—but never in real life—and they planned to marry just as soon as she moved to Ireland and figured out which one of several hundred thousand Irishmen he was. To further hinder her search, she didn't even know his name. As an occasional writer of fiction, I wish I could make up stuff this good. Does the idea of an anonymous "astral fiancé" sound a little far-fetched? It sure did to us—especially when she quit her job, moved to Ireland, and (no surprise) never found him. 
Last we heard she was back in the States, flat broke, and living out of her car. If this woman had had some solid training and/or regular contact with a group at the beginning of her Pagan studies, she'd have known better than to delude herself to the point of homelessness. Plus, she hadn't a clue that no one at our Beltane ritual cared one whit about her "amazing psychic powers," or that repeatedly boasting about your Gods-given talents is considered rude in polite Pagan company—but her rudeness in mentioning them (repeatedly) has stuck with us to this day. There was also a woman in our local weekly discussion group when we lived in Wisconsin who had likewise never spent any time in a "mainstream" Pagan community, and as a result had some pretty set (but inaccurate) opinions about some of her fellow practitioners. The two I remember most were "all Gardnerians are into BDSM" (they're not) and "all Asatru [followers of the Norse deities] are white supremacists" (they're not). It took a long time for us to convince her, without alienating her, that she was, well, wrong. But if she'd just gone to one or two of the smaller events at Circle Sanctuary outside of Madison—about three hours away—she'd have been far less likely to develop these interesting notions.

Like Craves Like

It's human nature to want to join with and bond to "those like us," and Pagans are no exception to the rule. Many of the survey respondents said that they'd had some limited contact with other Pagans in or near their small town, although some also reported problems of incompatibility: I have attended an open circle, but while I liked the people in the group, we don't practice the same kind of Paganism. —evy, bolivar, new york There are about ten to twenty Pagans that I know of in my town, though I know there are more. I think the rest of them are solitary. I already belong to a group, and I have been in it since 2001. My group has a ritual once a month on the full moon; it's all very positive.
—kathleen, from a town in north dakota My contact with other Pagans here has been minimal. There was a brief encounter with a couple at the local bookstore—they were having a fun game of "make fun of the books being offered"—during which they pointed me in the direction of a monthly study group in a town down the road. I only attended one meeting, and by the time I had settled in enough to try to attend another meeting the shop had shut down, and I don't know what happened to the members. I assume they went back to their respective lives. Beyond those two encounters, nothing that I can recall. —becca, clovis, new mexico There are just four others in my town that I'm aware of. Last week a young girl at a "Family Fun" night was proudly wearing a pentacle. It was nice to see! —iris, genoa, illinois (population 5,145)

On the Road

In the survey, I asked, "Have you traveled to attend a Pagan event (open ritual, discussion group, gathering) or to visit a Pagan shop? Why or why not? How far did you travel? How often do you travel to do these things?" Unlike the Pagan residents of Washington, DC, or even Kansas City (and those cities' suburbs), most small-town Pagans don't have the luxury of Pagan events in their "backyard," so to speak, and must travel—sometimes hundreds of miles—in order to find people to talk to or circle with. I think, and I believe my survey respondents would agree with me, that it takes an extra level of commitment to the practice of your faith to pack up and drive two, three, four, or more hours (one way) in order to have any connection with your community at all. Many survey respondents reported being willing to travel in order to connect with other Pagans, while others indicated that travel was prohibitive, either for financial or family reasons: Our town is about a forty-five-minute drive from Seattle; however, we've gone across country to attend gatherings more than once.
These days, unless it's for something specific to our Tradition of the Craft, we don't generally travel to attend. An exception this year is a long-running Pagan camping festival that we will be attending near the Columbia River Gorge, which we have not attended for the past several years due to timing. —moondancer, washington state I have attended a few Wiccan/Pagan rituals at a local metaphysical store about twenty miles from my home and have attended Native American ceremonies about an hour and a half from my home. —keltasia, shamokin, pennsylvania Keltasia brings up an interesting point: there may very well be activities in faiths not your own that are close enough to your house, and compatible enough with your beliefs, that they will at least somewhat feed you spiritually in between, or even in place of, attending rituals or other events with your fellow Pagans. For this reason, I have "hung out" on the fringes of the Buddhist community for over a decade; I've gone to retreats (and had to smuggle in my own non-vegetarian fare) and weekly meditations when there was nothing else available. I learned more about grounding and centering and the art of silently listening for Deity to speak to me—not that I'm good at that particular skill—sitting in meditation with Buddhists than I have at any one-time or regular Pagan event in my life. I have traveled several hundred miles to be at events in the past. I don't do it so much these days, as I like staying home. It used to be we traveled to gatherings several times a year; now it's just a couple of times a year. I visit my Pagan shop and the other metaphysical shop in town at least twice a month. I prefer to buy from them because it keeps them in business. —julia, east stroudsburg, pennsylvania So if you do decide to travel outside of your hometown and/or comfort zone in order to attend a festival, workshop, or open ritual, what can you expect? What should you do? How should you behave? 
For Emigrants, attending a festival or community ritual with or without knowing any other participants probably isn't that big a deal. In fact, if you're used to a certain amount of contact with your fellow Pagans, you may need to drive some distance to attend various Pagan events, stay active with the coven you left behind, or visit Pagan shops in order to continue to feel spiritually connected to the Divine and to your "tribe." I went to a Witches' Ball once. That was fun. It was in Batavia, which is about an hour and a half away. I also went to an open circle, but it wasn't for me. I've been told there's a Pagan shop in Hornell, which is about forty minutes away, but no one seems to know exactly where it is. Why don't I go more often? I work constantly, and I don't really have anyone to go with. Starwood Pagan Festival isn't that far away, but again, I have no one to go with. —evy, bolivar, new york On the other hand, as Evy points out, Hometowners may feel an understandable amount of apprehension at the idea of driving some distance—often to the nearest big city—and interacting with a bunch of strangers who just happen to also call themselves "Pagan." It's scary—and it was scary for Emigrants to "step out" into Pagan society the first time, too. But there are benefits in even occasional face-to-face contact with others of a like mind, so try to get to at least one festival or open ritual a year. Your spirituality will be enriched by the experience, I promise. And hopefully you won't end up homeless and living out of your car because you were unable to find your astral soulmate in Ireland.

Basic Pagan Community Etiquette

So what do you need to know in order to fit in and not make too many major social blunders at an open ritual, workshop, or regular discussion group (also called a moot, Pagan Night Out or PNO, or meetup)? See, in many cases, the Pagan community has very different rules. Yes, we still say please and thank you and take hot showers (with soap!)
on a regular basis—or at least we should—but there are some different rules that can be completely incomprehensible to newcomers and outsiders. Here are some of the big ones.

Outing

The primary rule for everyone in the community is: don't ever give out a Pagan's full name, contact information (including e-mail or cell-phone number), or work location without prior permission—even to fellow Pagans. This is called "outing," and it is just about the worst thing you can do to someone. If you meet someone who wants to contact your Pagan friend down the street, say, take the acquaintance's information (with permission) and give it to your friend when you get home. That gives her the option of giving her own contact information to your acquaintance. Several years ago, my husband A.G. had to sternly remind a Pagan friend of this basic etiquette point when the friend gave A.G.'s phone number to the local newspaper without checking with A.G. first. Graduate students don't like to be awakened at dawn on days they don't have classes, which is what this reporter did—he wanted the annual "interview a Witch for Halloween" story for the local paper.

Touching sacred objects

The next most important rule, and one that is so often unthinkingly broken by novices, is: don't touch someone else's "Witchy stuff" without their permission. This includes crystals, rocks, tarot cards, jewelry, robes or other ritual garb, familiars (pets who help you magically), and altar tools—especially athames (ritual knives). If the items are on display in the public part of the house you happen to be sitting in, it usually means the owner won't mind your touch, but only if you ask first. A good rule of thumb is to ask even when the owner is handing something to you.
For example: I recently attended a major Pagan festival where the land is very steep between the camping and merchant areas at the top of the ridge and the workshop sites and deity-specific groves—and food and hot showers—at the bottom of the ridge. I was taking the shuttle van back to the top of the property after lunch when some other riders got in. One of them handed me her newly purchased carved staff to hold while she climbed into the van seat in front of me. Even in that hurried, crowded, non-ritual moment, I asked, "May I?" before I touched it, even though she obviously wanted me to hold it for her. She said, "Of course," and I held it until she was settled in her seat, after which she reclaimed the staff and said, "Thank you," and we all went about our polite Pagan business. If someone bestows upon you the very great honor of letting you near his or her personal altar, don't even think about touching anything on it. This is as true for public events like festivals as it is in someone's living room. Why? Because it's their altar, dedicated to their deity or deities, and you and your energy have no business blundering into their personal means of honoring and connecting to their God(s). It's kind of like the magical equivalent of going into someone else's home, turning on their computer without permission, and using it to check your e-mail. It's just not done. Sure, you could make the argument, "Well, if they didn't want people touching their altar stuff, they shouldn't have it out in public space," and you would have a valid point. Unfortunately, in the Pagan community, you'd still be in the wrong to touch someone else's altar, just as you would their personal computer outside the Pagan community. Please note that this also applies to temporary altars set up for a public or open ritual at a park or, say, the local Unitarian church.
Even though the circle is not taking place in someone's "home space" like a living room or festival campsite, the officiants probably don't have two sets of ritual tools—one for private use and one for public use. Most only have one set, their personal set. Unless otherwise invited, leave it alone. Now, if there is danger or hazard involved—the altar cloth has caught fire or the altar has been bumped and the ceramic Isis statue is headed for a fatal encounter with a cement floor—by all means step in if you are in close enough proximity to prevent loss or disaster. Put out the fire, save the High Priestess' favorite Isis statue now—and apologize profusely later.

Familiars and pets

In regard to familiars or other animals, always follow house rules. Katie is our pathologically shy dog—shy to the point that she will have a panic seizure if she becomes too stressed. Whenever we have open rituals in our home, we simply move Katie's crate (which she loves) to the spare bedroom and post a "Private: Do Not Enter" sign on the door. Once most of the people have gone home after the post-ritual feast, we'll let her out, but ask that people wait for her to approach them, and to please ignore her until or unless she initiates contact. Because of our strictly enforced rules, Katie has never had a seizure on ritual days. Of course it helps that guests who deliberately break Katie's rules aren't invited to return. If you are afraid of or allergic to specific animals, check with the hosts if the ritual or workshop will be held in a private home to see if they have any pets on the property. Depending on the severity of your allergy or discomfort levels, you may want to reconsider attending the event. I've developed an allergy to rabbits in the last five years or so after a childhood filled with several instances of rescuing orphaned baby bunnies and raising them until they were old enough to be released back into the wild.
It's annoying, but I can take over-the-counter antihistamine before I leave for a ritual where I know there will be lagomorphs, and I do just fine. However, if I knew the ritual at a rabbit-inclusive home was going to be particularly long, intense, or would involve a very long (quiet) pathworking or meditation, I'd probably stay home. Even with the antihistamine, I still sniffle and blow my nose a lot when I'm around rabbits for more than a couple of hours. One of our dogs is a one-hundred-pound German shepherd who is a complete Goofbucket (our favorite nickname for him) and devoted to any child he meets, although he is standoffish with adults he doesn't know. Karl is harmless, but can look pretty scary—until you get his red rubber ball and offer to throw it for him; he loves his red rubber ball. We tell people up front that we have a huge dog who will bark until he gets to know them. This doesn't bother most first-time attendees, but some people have elected not to come to our home because they knew in advance they wouldn't be able to set aside their fear of large dogs enough to fully experience the ritual or concentrate on the workshop. We respect that. I, on the other hand, am stark raving teleport-to-another-room-if-one's-loose-in-the-house freaking terrified of bats. I know for a fact that if someone had a pet bat I would never, ever be able to attend a ritual at that person's house. I've had bats loose in my home and found myself outside on the front lawn a second or two after seeing the bat fly at me—with absolutely no memory of actually running down the stairs and out the front door. I. Do. Not. Like. Bats. Fortunately for my psyche, not too many people, not even Pagans, keep bats as pets. 
And by all that is sacred and holy, just because your spirit animal or totem animal happens to be the same species or breed as an animal that belongs to another Pagan, don't assume for one second that you have any rights to talk to, touch, or otherwise interact with that animal without the owner's permission. I see this most often with Pagans whose spirit or totem animals are raptors or other birds of prey: they seem to think that just because a presenter or fellow attendee brings a personal hunting hawk, or a barn owl they're rehabilitating as part of their day job, to an event or ritual, the Pagan with the hawk or crow totem is entitled to touch, hold, pet, or otherwise "take over" that animal. Once a fellow presenter—and licensed raptor rescuer—at a Pagan festival brought a crow she'd recently acquired with her for the weekend because it needed extra care. The crow had been living with humans since shortly after birth; there was no way the bird could ever be released into the wild. She was a friendly bird, but that did not give the one attendee who babbled and hinted all weekend about her personal "crow energy" any right to constantly pester my fellow presenter to let her hold and/or adopt the poor crow. I've also seen this behavior with people who feel spiritually connected to snakes and ferrets, but the bird-of-prey-as-totem-animal people are the worst. Unless you are specifically invited, leave the animal alone. And if you are invited to touch or hold the animal, treat it exactly as you would someone else's ritual athame and be reverent, respectful, and follow the owner's instructions exactly. I did, and got to hold a lovely red-tailed hawk on my wrist one day as a result. My totem animal is the stag, but it was still a big spiritual thrill to hold that hawk.

Tolerance

At some point in our lives, we've all heard the Golden Rule: treat others as you yourself would like to be treated. Now that you're a Pagan, this rule still applies to you.
Don't judge, gossip about, or be rude to someone just because he or she belongs to a different category of people from you or for some personal trait over which he or she has no control. This includes gender or sexual orientation, health conditions, age, weight or lack thereof (thin people hurt when teased, too), race or place of origin, and dietary choices—vegan, vegetarian, or meat-eater. Here are some general etiquette points guaranteed to minimize becoming a hot news item on Pagan Gossip Central if you follow them:

• Politely avoid people you disagree with or feel are emotional or spiritual "bullies."
• Saying "You are more powerful than I am" is a good way to not be taken seriously.
• Saying "I am more powerful a Witch than you" is really rude.
• There is more than one right way to practice Paganism. If it doesn't cause nonconsensual bodily harm or exploit minors, leave it be.
• Don't cut down other people's Gods.
• There is more than one way to do ritual. Some are quiet and meditative, which, if you've been working for a long time as a solitary, are probably what you're most accustomed to. Other rituals involve dancing, drumming, and screaming. Both are acceptable, and meet certain needs at certain times.

All of the above is just to prevent you from making some of the most common newcomers' faux pas. One final note: you are expected to follow these basic tolerance rules on the Internet, too. Just because you're Pagan, it doesn't mean you have to be so open-minded about any ideas that your brain falls out the back of your head. If it hurts someone, especially someone helpless (like children), it's wrong. If it's based on a television show, a novel, or a movie, it's silly (look up Klingon Wicca or Jedi Wicca the next time you're on the Internet). If you're "borrowing" folk customs or magics from another culture, get your facts straight—otherwise you're going to end up looking like a complete idiot. Let me give you an example.
I know a gentleman who believes himself to be a German Witch. He calls himself a Hexer and goes on and on at great length on various online forums about how German Hexerei and Strega (Italian Witchcraft) are the same thing; they're not. To prove his point, this guy regularly posts pictures of his ritual spoon—a wooden spoon that he's carved himself and covered with runes. My German mother-in-law has a carved spoon with a decorative fabric bow tied just above the bowl. It hangs on the wall of her informal dining room, and its function is to bring prosperity to her house. Her ritual spoons are the ones she uses to stir her sacred pot of boiling potatoes, and they are quite plain-looking. I've talked with a few people who practice Strega, and they all tell me that the ritual spoon in their tradition is the one their grandmother uses to stir the sacred marinara sauce—and that these spoons were also quite plain-looking. Hmm, maybe Strega and Hexerei have more in common than we thought! (Just kidding!)

Basic ritual etiquette

Nine times out of ten, if you're the guest at a group's ritual or attending a public circle in a nearby town, someone will make a point of thoroughly explaining the ritual beforehand, usually the High Priest or High Priestess or one of the other officiants. You may even be assigned a "mentor," an experienced ritualist you can sit next to and mimic (if necessary) during the actual rite. There are, however, some basic courtesies that are common enough to be useful in just about any circle. Unless okayed during the pre-circle talk-through (and it's considered good manners to ask if it isn't mentioned), do not leave the defined ritual space—i.e., a cast circle—without permission, and then only in an emergency. A sudden need to go to the bathroom is an emergency.
Answering or texting on your cell phone—which you should have turned off and left in the other room anyway—is not an emergency unless a close family member is on the verge of death or is about to give birth. And if someone that close to you is in either condition, you probably have somewhere better to be than in a room full of relative strangers—like the hospital, perhaps. If you need to leave the ritual space, whisper this information to your "mentor," who is probably sitting next to you. He or she will know what to do, whether to ritually "let you out" personally or to get someone more in charge to do so. Wait until he or she "cuts a gate" in the cast circle before you leave, and then wait until he or she cuts another one to let you back in. Why is this such a big deal? Well, some groups believe that a ritually defined space helps channel and direct the flow of psychic energy raised during the rite. If you just get up and walk out, it's as if you poked a hole in a kiddie pool full of water. The energy drains away. If you are in circle with a group like that and you "poke a hole" in their sacred space, don't expect to be invited back. Ever. Other groups work on the theory that very small children and pets can pass through the barrier of the ritual space without disrupting the energy. If you're reading this, you're probably neither a very small child nor a pet, so make sure you follow group protocol. You are a guest. You are not the High Priest or High Priestess. This means that your main job is to not do or say anything unless it is indicated that your spontaneous contribution would be acceptable. One joke or smart-ass remark at the wrong time is (a) disrespectful, (b) rude, and (c) disruptive. If you cared enough about your own spiritual journey to attend this ritual in the first place, why ruin it for everyone else?
I was a guest at a Beltane ritual a couple of years ago, and unfortunately could not get away from another guest who just had to make a smart or snarky comment at every part of the ritual. I wasn't in charge—it wasn't my place to tell her to shut the hell up or kick her out—but it definitely lessened my enjoyment of the ritual. Part of the rite may very well involve shared food, drink, peace pipe, etc. Of course, if you're recovering from a cold, be polite and don't contaminate everyone else. And whatever you do, don't refuse to partake outright without an explanation (and the explanation had better be a good one!), or your refusal will be seen as an insult. Otherwise, if you can accept the offering as a whole, do so. It is a great honor to have been included in the sharing. If you can accept the energy of the substance, but not the substance itself, do so with respect. This is usually an issue if wine or mead is being passed around and, for whatever reason (you're pregnant, you're a recovering alcoholic, you're underage and don't want to get your host(s) in trouble, you're on antibiotics, etc.), you feel you cannot safely partake. Salute or otherwise go through the motions. Discussing this with the ritual leader beforehand would be an even better idea. He or she may already know how to handle this situation and will instruct you. If for some reason you can't accept either the energy of the offering or the actual food or drink, don't. Bow in respect, politely pass it on, and consider not circling with this particular group again. In general, remember that you are, in fact, in a place of worship. A place of worship that holds deep meaning for someone. It probably looks, feels, and sounds nothing like where you do your personal rituals at home, but it—and the people gathered there—still merit your respect.

What about sex?

Good question!
Throughout history, Pagans and Witches have been accused of engaging in orgies, sexual overindulgences, and general licentious behavior that would shock a sacred prostitute. Public perception today is about what it was five hundred years ago—and there's some truth to it. As of this writing, I have been invited to an orgy, given the opportunity to play Naked Twister, and flat-out asked if I wanted to have sex with someone not my mutually monogamous husband within the past week (granted, I was at a Pagan gathering, but still . . .). I said, "No, thank you" to all of the above; it's okay to say "No." If/when you venture out into Pagan public, chances are good someone will ask you to have sex with them. You may even want to ask someone to have sex with you. Since our standards of good behavior—i.e. sexual mores—are slightly different from those of the general public, how do you handle it? Believe it or not, the Pagan community does have some basic sexual protocols based on two very important virtues: honesty and respect. Knowing these protocols, and sticking to them, will definitely save your good reputation—and could save your life. Pagans, being on the religious "fringe" of society, tend to attract people who are on the "fringe" in other areas as well—including sexually. This means that we can be, and often are, heterosexual, bisexual, homosexual, celibate, trisexual, metasexual, pansexual, intersexual, asexual, and transgendered. We also attract, because we tolerate (very important), people who are sexually into leather, chocolate, barbed wire, rubber, uniforms, general nudity, and polka music. If one or more of the above makes you uncomfortable, start practicing your polite poker face now. Where do Pagans draw the line sexually? Same as the rest of Western society: nonconsensual sex, adults having sex with children, and anyone having sex with animals. Other than that, pretty much anything between consenting adults is fine with us. 
As I mentioned, a "no" answer is always a valid response to an unwanted sexual advance. If you don't want to sleep with someone, say so! If the person persists, tell somebody. Better yet, tell lots of somebodies. Anyone who will not accept a polite "no" and attempts to use coercion deserves to be expelled from the community. If a person you're not interested in propositions you, be polite. For some reason, this happens to my husband a lot. He is straight, but he is a "daddy bear," and therefore very attractive to a certain percentage of gay men who like their partners big, fuzzy, and with distinguishing gray hair at the temples. His response to unwelcome propositions is always this: "Thank you, I'm very flattered, but I'm straight." Because he was polite, some of the men who've propositioned him over the years are now numbered among his best friends. Polyamorous relationships in which one or both parties are allowed "outside" sexual encounters are very common in the Pagan community, and usually follow one of two basic rules. The first is "two for one." Say Leah and Wolf are married, and Willow is interested in having sex with Wolf. Under the two for one rule, Willow had better be bisexual, because the only way she will get Wolf in bed is if Leah is there, too—as a full participant in the activities. The other common open relationship rule is "veto power." Wolf and Willow may want to get it on, but Leah has the power to say, "Yes, you may sleep with my husband" and "No, you may not sleep with my husband." If you are interested in bedding someone who has a veto-power rule with his or her spouse, it is considered not only polite but also mandatory to ask the non-participating partner directly. Don't take your object of lust's word for it. Just. Don't. If for some reason you can't ask the spouse directly, do not take one step further toward sexual consummation. The community doesn't need the ensuing drama—and neither do you. 
Another thing to consider: it's the twenty-first century, and there still isn't a cure for AIDS. Even more scary, the number of new cases of HIV and AIDS is actually rising, mostly because people, being people, tend to forget things that aren't part of their day-to-day reality. Practice saying these words before you have to say them (and you do have to say them) to any sexual partner with whom you are not in a strictly monogamous relationship: "We are using protection. It's not optional." Practice saying it like you mean it, because you do mean it. Remember, magic can help you find a new (or better) job. It can alter how you perceive the world around you. It can even give you the strength you need to get out of a bad relationship. But it can't cure AIDS. In general, just remember that sex is an adult activity. It requires adult-level responsible behavior. Go ahead and have a good time. Enjoy! Celebrate the body the Gods gave you in whatever way you and your partner(s) consent to! Just don't dump your good sense with your pile of hastily discarded clothes, okay? Getting the Most Out of Your First Pagan Festival The idea of packing up your ritual finery, camping gear, myrrh beads, altar cloths, tent banners, and organic bug spray and heading out the door to commune with your fellow Pagans in the bosom of Nature for the better part of a week is daunting, to say the least. But everyone should try it at least once—the experience of being completely immersed in Pagan space twenty-four hours a day, even for just a few days, is empowering, thrilling, and a heck of a lot of fun. You may not know anyone when you arrive, but I guarantee if you attend (and speak up at) workshops, take advantage of any offered meal plan, and let people know that this is your first festival, you'll have made a handful of new friends by the time you leave. I recently spoke with my good friend Andrea Covey about her first Pagan festival experience. 
Andrea grew up in Oostburg, Wisconsin (population 2,832), and moved to Sheboygan, also in Wisconsin (population 47,782), shortly after she graduated from high school. Andrea had considered herself Pagan for a little less than a year when she and a friend went to Pagan Spirit Gathering (PSG), one of the oldest and largest Pagan gatherings in the country. This is what she had to say about her experience: BF: How many people were at PSG that year? How did you meet people? AC: I think there were about 800 or 900 people attending that year. I'm a social person; I don't have problems talking to strangers. I went to workshops where there were only five or six other people, and we really had a chance to talk about the subject during and after the workshop. There was also a vendor who camped near us and we got to talking. The workshops helped me connect the most, though. BF: What was it like, being in all-Pagan space for the first time? AC: It was awesome. There were times when I was like, "Oh my God, I don't know a damn thing" because I was new. I felt like a fraud. For me, the best part was the women's ritual. We divided ourselves into three groups—not Maiden, Mother, and Crone, but Warrior, Lover, and Wise Woman. We paraded down the main road of camp on our way to the ritual area. Well, I'm not a feminist, but to hear from the people we passed, "Women, we honor you, women we love you" was so empowering. I'd never heard that, and it was very powerful. The men were literally in awe of us. It was the least afraid, most spiritual moment of the week for me. That's when I felt I was in the right space spiritually for me. BF: What did you bring back with you from that festival? AC: There was a workshop on making ritual items out of clay. Most people were making Venus of Willendorfs, but I made a little yin/yang incense burner that I still use. So that's something physical. 
Spiritually, though, I was so new that it was really something to find out that there were other people like me; I wasn't making all this up. I haven't gone back since, mostly because of life, money, a small child, etc., but if I were to go back now, eight years later, I'd get so much more out of it. I definitely want to go back. It cemented my path for me—that wouldn't have happened, or at least happened as soon, if I hadn't gone. Andrea didn't mention it, but I learned at my first festival that surviving your first Pagan festival with your psyche, your body, your spiritual self, and your metabolism intact is a rare event. And I was on staff! I'm now going to tell you some things I wish someone had told me and/or some friends of mine before we novices went out and lived in all-Pagan-all-the-time space for five days. Remember to Eat and Drink This is the most important advice I can give you. Many times, the sheer intensity of energy at a Pagan gathering is so sustaining that new attendees forget to eat meals—they simply aren't hungry—and they pay for it later, sometimes with a trip to the nearest emergency room. If the festival offers a meal plan, buy it and use it! Even if you're vegetarian, the festival organizers are sensitive to dietary requirements like that. If you're a vegan, ask. They just might be able to help you. At most gatherings, all workshops, rituals, and other activities stop at mealtimes, so you're not going to miss anything by eating. Plus, it's a great chance to sit down and get to know your fellow attendees. If the festival doesn't offer a meal plan, take some time before you leave to plan and shop for simple, nutritious meals. Attendees who show up on site with a loaf of bread, a jar of peanut butter, a box of raisins, and plans to live off just those items for four or five days are just asking for health trouble. Trust me. I've seen it. Borrow a camp stove if you don't have one and learn how to use it before you go! 
Make sure you have the necessary equipment and supplies to thoroughly wash and sanitize your dishes. A little squirt bottle of bleach is a must. Don't count on there being a picnic table or other eating surface available. Whether the festival offers a meal plan or not, bring lots of non-caffeinated (caffeine will help dehydrate you) drinks and extra snacks. My favorite Pagan festival treat is a case of bottled pink lemonade. If you can't make or find pink lemonade, buy plain lemonade and a bottle of peppermint extract, and make mint lemonade—it's quite refreshing! I've also found that the strawberry-lemonade flavored Powerade sports drink is pretty tasty. It rehydrates me and there's enough extra "stuff" in it to keep my electrolytes happy. Oh, and stay away from alcohol. You'll probably be high enough on the Pagan energy that you won't need it. Pack a basic first-aid and outdoor kit Bug repellent, sunblock, Band-Aids, antibiotic cream, and anti-itch cream are essential. Even if the festival is being held on a completely wooded site, you'd be surprised how much sun gets through the leafy canopy. You may get into the spirit of the gathering and decide to wear a bit less clothing (or more revealing clothing) than you usually do. Pack that SPF 1,000 sunblock and use it. A.G. still remembers the time he was at a Pagan festival where there was a small lake on the property. A female attendee went swimming, then accidentally fell asleep face up on the beach for a couple hours—sans sunblock. There were some parts of her that probably had never been exposed to sun, and those bits had second-degree sunburn by the time she woke up. A.G. became aware of the situation when she staggered into the dining-hall area and begged the volunteers cooking dinner (A.G. included) to put aloe lotion on her blistered bits. She hurt so much that everyone was afraid to touch her for fear of causing her more pain. 
One kind soul eventually smeared aloe on the affected areas, but the consensus among the kitchen volunteers for the rest of the weekend was that the woman had been incredibly stupid not to put on even a little sunblock before dozing off. Don't let this happen to you. Pack a roll or two of toilet paper; you never know. Remember to take the toilet paper with you on your trips to the bathroom or Porta-John. It does you no good sitting in your tent—I say this from experience, by the way. Also, if you are on any sort of regular medication, even stomach acid pills or over-the-counter allergy tabs, don't forget them! Pack ibuprofen or aspirin. Make sure someone checks you thoroughly for ticks at least once a day or, if you're modest, once you get home. And don't forget the sunblock! Don't try to do everything Just because there are six workshop sessions and two major rituals scheduled per day, it doesn't mean you have to attend each one. After all, now that you've actually come to a festival, you can always come back next year and do everything you missed this year. Give yourself a break once or twice a day (aside from mealtimes) to sit and rest and assimilate what you've learned in the workshops you've already attended. Also, Pagan gatherings are notorious for having a lot (and I mean a lot) of quality merchants and craftspeople in attendance. If you're in workshops all the time, when will you be able to go shopping? If you're not used to attending ritual on a regular (monthly or twice monthly) basis, you may want to limit your ritual attendance to one per day or less. Sweat lodges count as rituals—actually, one sweat lodge should count as two rituals if you're not used to them. The intense heat and humidity are very hard on your body. I was on staff at an early Free Spirit Festival when a young man attended two sweat lodges in one day and didn't bother to eat at all or drink much before, between, or afterward. 
He—literally—collapsed, and if the camp nurse hadn't managed to stuff him full of electrolyte-rich fluids as fast as she did, we'd have had to call an ambulance. Sweat lodges can be awesome and deeply spiritually moving, but they're also exhausting and dehydrating—attend with care. Get some sleep It's tempting to stay up until three or four a.m. at the drumming circle and bonfire. If you're determined to do so, save it for the last night of the festival. Try to get at least 75 to 80 percent of the amount of sleep you normally get at home. If you're a light sleeper, pack earplugs. With you or without you, the drumming and fire dancing will go on until almost dawn—at which time the waking birds get really loud. If there are cabins available, especially cabins with electricity, try to reserve sleeping space in one. Cabins keep out the rain and most of the bugs, and if they have electricity, a small fan may make the difference between being too hot to sleep and sleeping comfortably. A fan can also act as white noise and cover most of the drumming and early bird songs. Most cabins offer at least a camp bed, which, if you're over thirty, is a major improvement over sleeping on the ground. Ground, ground, and ground again Before the entire festival experience overwhelms you, find a quiet place, maybe at the foot of a particularly friendly tree, and ground yourself. If you don't know how, or if you're having trouble, find a member of the festival staff to help you. That's what they're there for—to help attendees have a good and safe time. If they can't help you, if the person you asked is on his or her way to resolve an overflowing-toilet emergency, they will find someone who can help you. If you can't find anyone on staff, grab a workshop presenter. If a person is presenting at a major festival, he or she has enough experience with things Pagan to help someone ground themselves. 
Mind your Pagan manners If a tarot deck, necklace, crystal, drum, or athame is sitting on a merchant's table, it's okay to touch. If any of these items is sitting in someone's campsite or cabin, it's not okay to touch. And don't even ask to. Again, if the owner offers to let you touch it, even if he or she is physically handing it to you, ask, "May I?" before you even reach for it. Don't go around saying, "Well, in my coven, we . . ." Let me be the first to tell you: no one there cares. Don't touch another person's necklace if he or she is wearing it. Period. If you stumble across a couple having sex in the woods, remove yourself from the area immediately. Soap, deodorant, and toothpaste are still your friends. Keep your less-than-flattering comments about other people's bodies, tattoos, or ritual or festival garb (or lack thereof) to yourself. Which brings me to . . . Keep an open mind A lot of people wear next to nothing—or nothing at all—at a Pagan festival. And I don't just mean in ritual. It's possible you could turn to ask a fellow shopper in the merchant area a question, only to find that he or she is stark naked. If it's a really hot day, the merchant may be stark naked. Question: What's the polite thing to do? Answer: Treat them as if they were fully clothed unless invited to do otherwise. I will never forget my first festival. Early on the first day I met a gentleman who, at that time, was a rather prominent member of a national Druid organization. His festival attire of choice was a neon-green, calf-length cape and knee-high, black-leather biker boots. And that was it. He was also the first uncircumcised male I had ever seen. Considering his outfit, it was kind of hard to miss. A very loud "Oh, my GOD" escaped my mouth before my brain kicked in and said, "That must be what an uncircumcised penis looks like, dummy!" Needless to say, the gentleman always remembered my name every year when we re-met at the same festival . . . 
Also, be aware that gay and lesbian couples, as well as men and/or women in three-way (or more) relationships, will likely feel more comfortable expressing affection in public at a Pagan gathering than they do on the streets of your hometown. You may very well see two men or two women holding hands or kissing, or three people of any combination of gender being affectionate right in front of you. If you've never seen it before, it can take a wee bit of getting used to. Be nice and don't say anything. If there are showers available at the festival site, it's 99 percent probable they're co-ed. I'm just sayin'. If you're the modest type who just cannot wash your girly bits or dangly bits in front of total strangers, plan to shower very late at night or very early (before 6:00) in the morning. If there's a crowd and you have to shower right now, close your eyes and face the spray (away from the room) when you get to those sensitive parts. It helps. Wear a waterproof watch At my first Pagan gathering, I sweated so much that I shorted out my watch on the second day. Seriously. Ever since then (and that was in 1986), I've only owned waterproof watches. Now I not only don't have to worry about sweating, if I happen to jump into the swimming pool or the pond, I also don't have to worry about my watch fritzing out on me. Speaking of swimming in ponds, you need to remember to leave Nature alone. Yes, you're in the middle of it. Yes, you worship it. This does not mean you should do stupid stuff like staying outside during a major thunderstorm so you can yell at the sky in Old Norse or trying to commune with a snake that's lying across your path. If they tell you water moccasins live on that end of the pond, don't go over there! Don't try to make friends with the wasps' nest in the corner of the dining hall. The skunk may be your totem animal. For the sake of your fellow attendees, leave it alone! Oh, and thoroughly clean up your campsite before you leave. 
Be nice and clean up the one next to you, too. As you become accustomed to attending Pagan festivals, and get used to the demands put on your psyche and your energy, you can relax some of the self-care suggestions: feel free to get a little less sleep and attend a few more workshops. Take extra-good care of yourself afterward Aftercare is very important once you get home, to help you recover from the incredible experience and to reintegrate back into your everyday life. In fact, at clothing-optional festivals, aftercare starts on your way out, as there is usually someone posted at the front gate checking to make sure that all exiting participants are wearing some sort of pants and shirt. In addition to getting used to clothes again, I also strongly recommend that you eat more protein than usual for the next few days (I always make sure I have steak waiting for me in the refrigerator for the first post-festival supper), drink more fluids, and give yourself a chance to catch up on missed sleep. Your body will thank you for the extra attention. If you can, schedule a day off work after the festival to slowly and gently reintroduce yourself to the non-Pagan world. I realized the importance of this once when I attended a gathering that had "Trash Pirates"—the cleanup crew had an interesting sense of humor and turned their battered truck and regular daily pick-ups of garbage into a major entertainment event. Twice a day, the bandanna-wearing Trash Pirates would come around, huge black skull and crossbones flag fluttering from the makeshift flagpole attached to the truck's cab, singing a horrible song about the joys of being Trash Pirates while one of the crew kept time by banging a stick on the outside of the bed of the truck as they emptied the bagged contents of the fifty-five-gallon trash drums into the back of the truck. The song was truly dreadful, but it kept them and the rest of us amused. 
It also got stuck in my head and refused to leave; for several days after I got home from this particular festival, I'd sing, "Yar, we be pirates!" every time I threw something in the trash. Fortunately, the song was more or less out of my system before I went back to work. Don't be surprised if you emotionally fall apart once you get home. For a variety of reasons, I experienced major life-altering events the first three times I attended a Pagan festival. Some were good, some not so good, but all of them were quite a shock to my emotional system. While you're catching up on sleep and stocking up on protein, allow yourself the luxury of a good cry (or two) if you need it. The most important thing, though, is to have fun. Most of us don't spend all day, every day, in "Pagan space." With a little care and pre-planning on your part, you can have a festival experience wonderful enough and fulfilling enough to last until it's time to go again next year! I hope that all of these helpful tips will encourage you to go out and experience community. If nothing else, attending a group or community ritual will help you more clearly define your own beliefs and how you do and do not want to express those beliefs. Attending a festival, aside from the awesome opportunity for some serious shopping, can give you a wealth of ideas to incorporate into your own practice. 7. A full discussion of how to start your own discussion group, coven, or regular meetup is covered in chapter 7. 8. Just how this is supposed to prove his point, I have no clue. Chapter 3 The Well-Decorated Broom Closet While it's not overtly Pagan, there is definitely artwork of a Pagan nature in every room of my home. The fireplace mantle is the main family altar. There are statues of gods and goddesses here and there in the house, and lots of original artwork that signifies things that are important to us—i.e., water paintings in the west, etc. 
—julia, east stroudsburg, pennsylvania Whether or not you choose to tell your family, friends, and neighbors that you're Pagan, your home can tell them for you. Artwork, objects, and tchotchkes (Yiddish for "little stuff lying around your house") can all reflect your Pagan-ness as subtly or as overtly as you please. Subtle Décor My home contains its share of obvious Pagan art—the huge Green Man poster hanging in the upstairs hallway, the picture of Stonehenge in my daughter's room, and the shrine in the master bedroom are dead giveaways. But there are also some not-so-obvious pieces here and there, like the Brigid's cross hanging over the front door. One of my favorite subtly Pagan touches is my collection of cast-iron kitchen trivets that hang on the dining room wall. To an undiscerning eye, they look like folk art, which they are, but the Pagans will notice the one with brooms in the design (inherited from my nice Methodist grandmother), and smile. These trivets can be found at flea markets, junk antique stores, and garage sales. I've never actually walked into a store and bought a new one. In fact, my absolute favorite cast-iron trivet is a pentacle—a five-pointed star in a circle that my husband and I found in a filthy, dusty old barn-turned-antique store (our favorite kind!) in Hermann, Missouri. It's had a proud place on my wall ever since we brought it home in 2002. I have lots of crystals, a year wheel, a green man and green woman in the kitchen; my apron, which has a year wheel, and a small shrine to Bastet and Anubis are by the back door. —donna hames, nashwauk, minnesota Another not-obvious thing to do is hang a horseshoe over your front door—points up, of course, so the "good luck" doesn't run out. Hanging a lucky horseshoe over your front door is a time-honored American folk tradition, and no one is likely to think twice about it. 
You're the only one who needs to know that the crescent shape of the shoe is a religious symbol and that horses have a long, illustrious connection with Paganism—from Epona, the Gaulish horse goddess (who also has a fertility aspect) to the British hobby horse who is part of the mummer's play and morris-dancing tradition and is very much a part of England's May Day celebrations. If you consider yourself more of a kitchen Witch, then the tools of your art—favorite pots, bread pans, good knives, and so forth—can be openly displayed in your sacred space, i.e. kitchen, and no one will even notice. My German-born mother-in-law lives in Salina, Kansas (population 46,180). She has a flotilla of little "kitchen Witch" dolls (riding on wooden spoons, no less!) hanging from her kitchen ceiling in a V formation, and the V is aimed directly at the back door. My husband is pretty sure they were placed that way deliberately, to help "sweep" the negativity from her house. Here's another reason why I think people don't hold issue with my choice of faith. My whole home is dedicated in one way or another to nature or Paganism. Right when you enter my home is one of my altars (not my working one, more of a seasonal one on which I do a little work) with a statue of Goddess and God. All my walls are adorned with the many broomsticks I collect and a few of the less standard Witches. I've collected many antique Halloween decorations over the years, and since some were expensive, I proudly display them. A Witch hangs over my dining room table, and I have three in my kitchen. Anyone who didn't know of my faith before gets the idea from even stepping into my home. Too much? Maybe, but I pay for it, so I get to decide what goes on the walls! —witch of the woods, merrimac, wisconsin If you prefer to hang pictures on your walls rather than objects, why not start with pictures of your own family? It doesn't matter if you don't have photos of your great-great-grandparents, display what you can. 
We have a huge collection of family pictures on our dining room wall, going back as far as my grandmother (we have more that go back further, we just ran out of room). We call this our Hall of the Ancestors, and it's often a topic of conversation over supper as we point out various family members to our daughter Rose and tell her stories about each one. Family and ancestors are very important to my husband's spiritual practice, and this is a way for him to express it. Do non-Pagan visitors to our home need to know that? Of course not. It's hard to find more Pagan art than the work done by the English Pre-Raphaelite artists (ca. 1848–60). The Pre-Raphaelites were inspired by Greek mythology, the Arthurian legends, and many other classic works of literature. Paintings include such figures as Persephone, Medea, the Oracle at Delphi, Circe, Ophelia, and Pandora (there are relatively few males in Pre-Raphaelite art). Some of their paintings even depict scrying, circle casting, and other Pagan activities. My favorite painting by a Pre-Raphaelite artist is John William Waterhouse's The Lady of Shalott (1888), inspired by the poem of the same name by Alfred, Lord Tennyson. The poem is the story of a woman cursed to never leave her tower and join the crowds at Camelot (which she can see in her magic mirror), lest she die. Unfortunately, one day her mirror shows her an image of Lancelot. She immediately falls in love with the handsome knight, leaves her tower, climbs into a waiting boat, and floats to Camelot. Of course she dies before she arrives. The painting is of the critical moment when she chooses to cast off from the dock, sealing her fate. Why do I love it? Well, other than the pure pathos and drama of the moment, and aside from the fact that it is truly a beautiful work of art, I love the message I get from the painting: not even the Gods can save us from our fates. 
Plus, I think the amount of detail in the painting (the original is approximately eight feet by ten feet; yes, I've seen it) is amazing. Were the Pre-Raphaelite painters Pagan? Probably not, but one of their "doctrines" was to "study Nature attentively" in order to reproduce it faithfully in their works. They seem to have been embraced by contemporary Pagans for their subjects and themes, yet whenever my nice Episcopalian mother visits, she sleeps under my copy of The Lady of Shalott and doesn't even blink. Why stop at subtly decorating the inside of your home? Various outdoor plants and house decorations can announce your faith to the world—if only the world knew their significance. Before she became too frail to do yard work, my mother-in-law surrounded the outside of her house with plants reputed to fend off the evil eye, including garlic, datura, and rose bushes. Holly trees and English ivy are both found in pre-Christian symbolism and music. Oak trees feature prominently in ancient and contemporary Pagan lore, and so do rowan, willow, ash, and walnut trees. My mother-in-law, who swears she is a good Lutheran but is also the most powerful Witch I have ever met, also has at least a dozen small lawn gnomes peeking out from beside her shrubs, next to the lilac bushes, and hanging out with the roses. My husband has already started our collection; as of this writing, four gnomes and one moss-covered rabbit hang out in the shrubbery by the front door, two gnomes live in the dining room, and one guards the perpetual pile of to-do paperwork that lives next to the computer. We also have a huge metal sun/moon face hanging next to the front door. I'm sure our neighbors think we bought it because it's mostly painted the same colors as our house (red and white). If you use a lot of herbs in your religious observances, either as incense or in various workings, why not grow your own? 
There is also nothing more spiritually satisfying to a kitchen Witch than growing his or her own vegetables and feeding them to the family. I have the world's blackest possible thumb (which is why I'm not a kitchen Witch), but I've been told by more than one reliable source that this is true—including the source I'm married to. Obvious Décor Of course if you want to openly decorate your house with things Pagan, the sky is the limit. Ceramic green-man faces on the wall, pentacle magnets on the refrigerator, and visible shrines in the public area of your house can turn your entire home into sacred space—although subtle artwork can, too. There is no mistaking when you come into my home that we are Pagan! I have plaques of the Lord and Lady, many Native American things, pentacles, etc. everywhere. I don't hide it. —jenn, mountain home, idaho Right now, I have all of my Paganish books in a bookcase in my dining room, as well as a phone stand where my Tarot cards live when I'm not using them. I have a large and very colorful astrological wheel cross-stitch displayed on the wall. I put out themed centerpieces on the table for each Sabbat. —ravenna, dowagiac, michigan There are dozens of excellent artists whose work depicts obvious Pagan themes, far too many to mention here, but these are some of my—and my friends'—favorites: Nybor Mystical Art (www.nyborart.com): Nybor of Haven is well-known on the Pagan gathering and conference circuit and is my personal favorite Pagan artist. It's hard to believe he's colorblind. His work includes faeries, satyrs, goddesses, gods, and a myriad of woodland creatures. I have a print of one of his Crone series hanging upstairs, and plan to acquire more soon! Susan Seddon Boulet (www.susanseddonboulet.com): Although she passed away in 1997, Susan Seddon Boulet is still a popular artist among American Pagans. Her artwork shows strong Native American influence, but her pictures of goddesses from all over the world are equally distinctive. 
As of this writing, prints of Demeter and Persephone are still available on her website. Anne Marie Forrester: If you're more interested in taking your Pagan artwork with you in the form of permanent tattoos, check out Anne Marie's site. She has also illustrated some book covers and has a series of greeting cards for each of the Sabbats that are, in a word, awesome. Her website is http://web.mac.com/annemarieforrester. Alicia Austin (www.aliciaaustin.com): Alicia's work is also strongly influenced by Native American mythology, and even someone who knows next to nothing about Native American myths (that would be me) can tell that this is powerful, divinely inspired stuff. She also seems to have tapped into Russian and Persian folklore for some of her pieces. Definitely worth checking out. Jen Delyth (www.kelticdesigns.com): Jen is a Welsh artist who is best known for her intricate Celtic artwork. Her Celtic Mandalas calendar is an annual purchase in my household; I use it as my family schedule calendar since I usually hang it right next to the refrigerator. Mickie Mueller (www.mickiemuellerart.com): Mickie is an accomplished artist and illustrator from the Midwest. Goddesses, gods, faerie children, green men, and other magical beings come to life in her paintings. She's even taken some of her favorite works and had them turned into gorgeous, intricate pendants. Another obvious piece of Pagan décor is a shrine placed where family, friends, and guests can see it. You may not want to be this obvious—not because of the trouble it would cause, but because non-Pagan visitors may become curious and handle your ritual tools and sacred statues. If the thought of someone else handling your altar stuff without asking makes you twitchy, you may want to reconsider being this obvious. If you have pets, you may want to take their natures and needs into consideration before setting up a permanent shrine. 
The combination of large, boisterous dogs with strong, wagging tails and a shrine full of breakable objects can only end badly; either the dog will run into the shrine and knock it over or his tail will sweep the surface clean of all your precious statuary. Not that cats are any better. My mother's cat Tye hates it when stuff deigns to clutter a high surface he wants to nap on, and generally makes sure it is bodily removed (by him) before he settles down for an afternoon snooze. I once had a cat who loved to yukk up hairballs on my shrine until I finally got the hint and took it down. One note about my using the term shrine when most people would use altar: by my definition, a shrine is any flat surface covered with objects that hold deep personal meaning for you—from pictures of loved ones to a statue of your favorite goddess. A shrine is where you go to sit, meditate, and commune quietly with your God(s). An altar, on the other hand, contains the tools you need for the ritual you are about to perform. The same items can be on both, but an altar is set up for action—i.e., a ritual or magical working—and a shrine is set up for more reflective work.9 I have not changed my survey respondents' use of the terms to reflect my opinion. What is written in the various responses is what they said. We have a small altar set up in our living room that consists of a Qwan Yin statue, ancestral urn, dried fruits and berries in a dish, and, until it was broken, a Buddha statue. We have the "main" altar in our bedroom, and the kids have a small one in each of their rooms. —keltasia, shamokin, pennsylvania We maintain two shrines/altars in our home—a healing shrine and an ancestor shrine. We also have personal altars in other rooms. We have artwork with Pagan themes: a print of the Oracle of Delphi, and one of Diana from the Pompeii frescoes, statuary, etc. —moondancer, washington state My survey respondents are not in agreement on if or how to decorate one's home in a Pagan-y way. 
Some felt that their home was part of the "broom closet" and should provide the same neutral façade as the rest of their lives. Oh, there were Pagan objects and art pieces in the home, but so subtle that only another Pagan would notice. Others agreed that their home was part of the "broom closet" but in a different way—the one place where they could relax and be themselves and be open about who and what they are; their home décor tended to be much more openly Pagan than the first group's. Either way, I found some excellent advice and ideas for my home, and I hope you have, too. Pentacle dream catchers, statues, books . . . pretty much anything is visible. I'm very open about my spirituality, and anybody who visits my home has to be accepting to my ways. —deanna eberlin, addison, new york I have a banner of flags of the five elements and various Witch curios in my living room. I have crystals all over. I have my bookshelf of Pagan books in my living room also. I keep small Witchy items all over the house. I also have Witches, fairies, and Goddess statues in my yard. —k, sevierville, tennessee I have Pagan paintings, Goddess statuary, rhythm instruments, singing bowls, and Green Man statuary. I also have a lot of found nature gifts: pine cones, dried pomegranates. —kim schaufenbuel, owatonna, minnesota (population 24,958)

Decorating for the Holidays

Even if your home décor is not particularly Pagan most of the time, it doesn't mean you can't "go a little wild" during the holidays you share with your Christian neighbors. Just because you know that Halloween, Christmas, and Easter borrow heavily from pre-Christian cultural practice, it doesn't mean your fellow small-town residents are aware of the connection. After all, if they're plastering their homes with Witches in October, pentacles (or at least stars) made of lights in December, and pastel bunnies in March and April—don't you think they'll expect you to do the same?
Even if they don't decorate for these holidays, they're not likely to care if you choose to. We always have a Yule tree, holly, mistletoe, and a symbolic Yule log. Our big spring thing is Ostara, so Easter gets a pass. Halloween has two faces—the fun side with the dress-up and decorating, and the serious side when my husband and I always go for a Samhain spirit walk when everyone else is in bed. We don't have a problem with the dichotomy—it just works for us. —donna hames, nashwauk, minnesota I decorate and celebrate them all, because I don't think it matters why other people celebrate those days but that it is a time when we become more "one" than the rest of the year. It's really a shame that we don't all come together all year around. Of course, when things like Easter come around I don't mind the "commercialization" because I know what Ostara is really about. It is about the rabbit and eggs, so it is okay. Maybe I am weird, but I love the spirit of St. Nick, too. In neither case do I celebrate the Christian aspect, but others don't get it because they too have adopted the Pagan way and don't know it! I even hang three ears of corn outside of my front door starting at Lughnasad. —jenn, mountain home, idaho To confirm Jenn's comment, ever since I moved away from the Washington, DC metropolitan area in 2000, I have noticed that people in smaller towns are more likely to decorate their yards and front porches with harvest themes long before the end of October. In my neighborhood, corn stalks, scarecrows, gourds, and hay bales—all of which could be real or fake—were pretty common this past fall. In fact, there were entire stands at the local farmers' market dedicated to selling these harvest decorations. They seemed to be very popular. Although I am not the type of Pagan to be offended by a nativity scene set up on the courthouse lawn in December, I have to smile when that exact same spot has scarecrows and pumpkins parked on it in September. 
If my city officials choose to decorate for Mabon on my behalf (even if they have no clue that they're doing so), the least I can do is appreciate their efforts! Because we have a young child, and just because I like to decorate, my family leaps into the standard holiday decorating frenzy. We carve pumpkins and set out the collection of indoor Witch candle-holders for Halloween. We also have some outdoor decorations, including tombstones, a three-foot-tall skeleton, a "potion shoppe" wall plaque, and a few yard signs with Witches on them. It doesn't matter that I'm proclaiming my "Witchyness" in the front yard for the world to see—it's Halloween, and everyone else on the block is doing the same thing! We just make sure the pumpkins are in and the lights are out by the time we start our Samhain ritual—assuming we even schedule it for the same night as trick-or-treat. For Yule we always get a tree and a wreath. In this I am blessed to live in a small town—Christmas-tree farms where we can go out and cut down our own tree are easy to get to and reasonably priced. There's something very medieval about choosing and cutting down the tree—which I consider to be a scaled-down version of the Yule log—and dragging it back to the farm owner so we can pay for it. I always try to sing a few very old Yule songs, such as "Please to See the King" or "The Gower Wassail," as we haul the tree back to the car. I think my husband would have preferred less singing and more helping-him-pull-the-tree-through-a-couple-feet-of-snow this past year! Another way to further Paganize your Yule decorations (as if you need to) is to make them all natural. String popcorn and cranberries to hang on your tree; kids love to do this. When you're done with your tree, you can hang the popcorn/cranberry strands outside in another tree's branches or some bushes, to feed the squirrels and birds that haven't migrated away for the winter. Will your small-town neighbors look at you askance for doing this? 
Probably not. Be careful, though. Cranberries are tough little fruits, and it's hard to get the needle all the way through them. Find some pine cones on your next walk through the neighborhood, cover them with peanut butter, roll them in birdseed, and hang them on your Yule tree as well. The squirrels will love them once the holidays are over. If you think you need an excuse to decorate Ostara eggs and you don't have a kid of your own, borrow someone's for the afternoon. Trust me, the kid will love you for it, even if your kitchen may never fully recover from the extra mess. My daughter is now old enough to remember that at some point in the spring it's time to decorate eggs, and she usually starts begging me to get the food coloring out shortly after Valentine's Day. We usually end up dyeing two batches—one on Ostara and one at Easter, unless they're very close together. Also, whether it's with your own kid or the one you rented for the occasion, no neighbor is going to look at you strange if you host an Easter egg hunt in your own backyard. Just make sure that some of the plastic eggs have bunny and baby-chick Peeps inside. As a child growing up in a town of about eight thousand people, I remember my mother bringing in sprigs of pussywillow as soon as the buds were "fuzzy," and putting them in a vase. It was always a sign that spring was finally coming. My mother-in-law not only brings in pussywillow branches, but she also decorates them by hanging small, hand-carved wooden eggs and rabbits on them. She calls this her Easter tree. Here's what the experts have to say about their decorating habits on the shared holidays of Samhain/Halloween, Yule/Christmas, and Ostara/Easter: We decorate extensively for Halloween and Christmas with a leaning towards more Pagan-like stuff, but we do put up a small manger scene.
In my way of thinking, the manger scene fits in with the stories of the "sun god" being reborn so it doesn't conflict, even though others may think it's Christian only. We do little Easter decorating—mostly with more Pagan features such as colored eggs. —keltasia, shamokin, pennsylvania I don't decorate much, other than a tree for Yule, but I also don't do the modern décor on the tree either. We go out and leave shortbread goods on the trees for the animals and spirits, but the decorations I do put up are for the true Pagan side. —spiritrunner, bakersfield, california (previously in taft, california) I still decorate eggs for Easter and put out bright tablecloths and what have you. Halloween is my time. I make costumes like crazy, and I put out all sorts of decorations. Of course, it's less ghosts and goblins and more elegant and spooky-goth. As for Christmas . . . well, we always had a tree and decorations growing up, and the whole season doesn't feel right without lights and all the ornaments I've collected all my life. —ravenna, dowagiac, michigan

9. Also see my article "Is It an Altar or a Shrine?" on Witchvox.com (April 19, 2009): http://www.witchvox.com/va/dt_va.html?a=uswi&c=words&id=13188.

Chapter 4
The Discount Superstore Altar

Be creative! The vessel I use to hold water is an old cruet, which was formerly used to mix homemade salad dressing and has a fancy glass stopper. And my altar belonged to a local woman who had used it as a coffee table. The legs fold underneath it like a TV tray. She and her daughter used to play board games on it. It's beautiful, with peacocks inlaid in mother-of-pearl, and I picked it up for twenty-five dollars at her yard sale! —keltasia, shamokin, pennsylvania

When I first got the idea for this book I lived in Portales, New Mexico, where the nearest Pagan shop was four-and-a-half hours away in Albuquerque.
Although I'd bought, bartered for, or found all my altar items years ago, I looked around my little town and realized that the only option for acquiring new ritual items, other than a small thrift store operated by one of the local evangelical churches, was the Walmart Supercenter located on the north end of town. It was then that I developed my theory: everything I needed to buy for a basic altar—including the altar—I could get at my local, rural Walmart. Fast-forward five years to a cold, foggy Saturday in February, when my family and I finally put my theory to the test. Chanute, Kansas (population 8,738) is a small town approximately two hours' drive from any major metropolitan area. It has a circa-1930s soda fountain tucked into the corner of the Cardinal Drug Store, where you can still buy shakes, ice cream, and floats, and a railroad-depot-turned-museum that features the life and work of Martin and Osa Johnson, who pioneered the art of filming African wildlife. If there were ever a "Typical Small Town in Kansas" competition, Chanute would probably be in the top five, if not the winner. I decided that if I could buy everything I need for ritual at the Walmart Supercenter in Chanute, in a town near "the middle of nowhere" and not home to a four-year college or university, chances are good that you can buy everything you need for ritual at a discount superstore near you. As it turns out, I was able to prove my theory to be true—with a little input and help from my five-year-old daughter, Rose. Pagan parents take note: little girls who love ritual, are natural diviners, and are into princesses, unicorns, and sparkly things make awesome assistants if you're shopping for altar and/or ritual items. I understand that some readers may have some issues with actually shopping at Walmart in light of some recent controversies. You may even have chosen to boycott this particular chain in protest.
Everything I bought at Walmart could also be purchased at other discount stores, including Target and Kmart, if you'd prefer. You can get a few of the items I mention in this chapter at Costco and Sam's Club (which is owned by Walmart), but not all, since those superstores tend to focus on bulk and big-ticket items like televisions and trampolines. However, the controversy and the boycotting do not change the fact that, for many small towns, Walmart may be the only store in town. When I lived in Portales, New Mexico, the Walmart was one of two places in a city of 12,000 that even sold groceries, and it was the only place to buy electronics, magazines, and new baby clothes for thirty miles, and the only place to buy cheap stick incense for ninety miles. Portales is hardly unique; many small-town Pagans face similar non-choices for their shopping needs every day. That being said, the one item I did not buy at the Walmart Supercenter in Chanute was an altar. Over the years I've bought one bedroom nightstand and one living room end table from Walmart, and either one would do just fine, if possibly a little crowded, for an altar. In fact, my personal altar is the Walmart nightstand, and the drawers are darn useful for storing extra incense, candles, and other altar "stuff." I did price similar nightstands/end tables in Chanute—they cost about sixty dollars, and all had the three most terrifying words in the English language (at least according to my husband) stamped on the boxes: Some Assembly Required. However, I did pick up the following:

Altar Cloth

If one has an altar, one generally covers it with an altar cloth. With Rose's help, I chose a yard of navy-blue cloth with a small silver star pattern embossed on it. This cloth would be perfectly appropriate for any moon-phase ritual.
The Walmart fabric/craft section also always has a nice selection of seasonal fabric that would work for Sabbat altar cloths; if you'd rather have a moon motif on your Esbat altar covering, wait for Halloween and pick some up then. I usually lay an old bath towel down on my altar to go under the altar cloth on the theory that if something spills or a candle drips, the towel will absorb it, and the towel will fit in the washing machine while the altar won't. I didn't buy a bath towel at Walmart, figuring that everyone (not just me) has plenty of not-quite-good-enough-for-company towels in their house already that could be used for this purpose. At our home, we call them "dog towels," since their primary function is to dry the dogs on bath day.

Candlesticks and Candles

I found candlesticks in the housewares section that would work on my altar. They're clear glass, somewhat heavy (which translates into "hard to knock over"), and in a nice classic/Colonial style. I bought three—one for the God candle, one for the Goddess candle, and one for the maiden/self/ancestor (depending on your tradition) candle in between the two. Rose has a habit of sniffing various scented candles every chance she gets—she learned it from me—but together we picked out two Williamsburg gray/blue taper candles for the God and Goddess and a cream one for the center candle. We decided they'd look nice with the blue altar cloth.

Athame

There were two choices at the Chanute Walmart Supercenter for athames: the housewares section and the outdoor/camping section, and neither one had the traditional double-edged dagger blade. Kitchen Witches or those following deities with a caretaking or domestic aspect would probably be very happy with some of the nicer black-handled chopping knives from the housewares section. Since I honestly don't enjoy cooking and, more importantly, because my patron deity is Herne, Lord of the Hunt, I headed for the outdoors and camping section of the store.
The very nice sales clerk didn't even blink at the sight of a chunky middle-aged woman (i.e., obviously not a hunter) with a small child being very decisive about hunting knives, and pretty soon a wooden-handled Buck knife with a four-inch stainless steel blade and a handy black sheath was resting in the bottom of my shopping cart.

Chalice and Wine

Since the ritual wine (or juice) is to be shared among all present, I wanted a good-sized chalice, and the housewares section didn't disappoint me. I found two-to-a-box white-wine glasses. Yes, they're clear glass, but they match the candlesticks, and I have a spare in case the first chalice breaks. Some Walmarts have liquor sections, but since Kansas has separate liquor stores, there wasn't one at the Walmart Supercenter in Chanute. I did, however, find some POM Wonderful® brand 100 percent pomegranate juice, perfect for Ostara or, with the dark red color, a full-moon ritual. Please note, though: my husband A.G. said the pomegranate juice was awfully sour.

Pentacle

This is the tool that is going to take a little creative effort if you're determined to get everything you need at a discount superstore. I found a plain wooden disk six inches in diameter in the craft section. A little digging in the desk drawer at home unearthed an old plastic protractor to help me place the five points. I borrowed one of Rose's black markers, and in about five minutes I had a perfectly drawn pentacle (or as perfectly drawn as I will ever get). You could also use watercolor or craft paint to decorate your pentacle, and those are both available at Walmart. Pagan parents: buy extras and let your kids draw and decorate their own. The wooden disks are only about a dollar apiece.

Incense

Although I prefer loose incense burning on a charcoal, Walmart doesn't stock frankincense, myrrh, and copal, much less the packs of special round incense charcoals with the little indentation at the top.
So I had to settle for a decent-looking soapstone stick incense burner and some stick incense I found near the housewares section. I'm awfully picky about my stick incense, but I thought the "Warm Spices" flavor wouldn't be too bad, especially since you get three scents in the same package—vanilla, apple pie, and cinnamon. After some experimentation, I've discovered that, once lit, the vanilla smells like burning soap, the apple pie smells like cheap aftershave, but the cinnamon isn't too bad. It doesn't smell much like what it's supposed to, but on the other hand, the smell doesn't make me gag. I may keep trying the Walmart scents to see if there's one I like.

Salt and Water

In the same section with the incense, incense burner, and potpourri, I found a faux ceramic clam shell that is the perfect water vessel for my altar. In the candles section, I found a shallow, clear-glass votive holder that looks like it was made to hold salt. I sent my husband on a quest to the food aisles, and he came back with a box of sea salt. Perfect!

Cakes

I have an abundance of appropriate plates in my kitchen to hold cookies for Cakes and Wine, and bowls that make functional libation bowls, so I didn't buy any. I did, however, pick up a package of Pepperidge Farm Verona cookies (the round ones with the strawberry or apricot jam in the center; I chose strawberry), just to prove I could, because their shape makes them perfect for full moon rituals and, ultimately, because they made a great snack on the long car ride home. For the photo in this chapter, I picked up a package of molasses cookies. Of course, you can always buy the ingredients to make your own cookies at any Walmart that sells groceries. But if you do, go with a popular recipe like chocolate chip or oatmeal raisin. Some friends of ours once offered to make cakes for a ritual. They used a "traditional Witchcraft" recipe for ritual cakes that were dry as dust and had no sweetening whatsoever.
Sometimes traditional does not mean tasty; I can't imagine offering the Gods a libation I'm not willing to eat or drink myself, can you? So what's left? What wasn't I able to purchase at the Chanute Walmart Supercenter that I needed to complete my altar? A wand, for one thing, and an image of the God and Goddess. On a walk around my block I found some tiny acorns and sweet gum-tree seed pods—each of which would make a perfect God image for my altar. A pine cone would work, too. For the Goddess image, I snipped a long piece of English ivy that grows all over my front porch and twisted it into a small wreath, which I placed around the base of the Goddess candle. As I write this, it's almost March in Kansas; naturally growing flowers (my first choice) are in short supply right now. Besides, in English folklore, ivy is a symbol of divine femininity. If you want a more permanent Goddess image, Walmart—and Hobby Lobby—sells sea shells, and I was able to pick up a nice cowrie shell at my local Hobby Lobby for not very much money. I found a working wand in my front yard after an ice storm downed some red maple tree branches. I chose one that was the right length, peeled the bark off, and sanded the ends. I may eventually decorate it with other items I find in my neighborhood as I walk my dog this spring, like bird feathers. You could also pick up a wooden dowel in the craft or hardware section at Walmart, cut it to the length you want, and decorate it any way you like. No, the Walmart pieces don't exactly fit my altar décor taste, which runs toward rustic, homemade, and folk art; nor are they reflective of my chosen deities in style or color—with the exception of the Buck hunting knife. That being said, I would not be ashamed to perform ritual with them, and at some point I probably will use at least some of what I bought that day in circle (with the possible exception of the incense).

The Walmart altar, ready for ritual. Doesn't it look nice?
Oh, and my final bill for the day? Without the altar and with a thirty-dollar Buck hunting knife, I spent just about one hundred dollars—cookies, salt, and POM juice included.

Other Shopping Opportunities

If you just can't bring yourself to shop for ritual and altar items at Walmart, don't panic! Check out garage sales, flea markets, junky little antique stores, and estate sales. Go on an altar-piece expedition at the nearest Goodwill or other thrift store. With a little time, effort, and patience, you're very likely to find exactly what you want for next to nothing—like my prized pentacle cast-iron-pot trivet that set me back a whole three bucks at a junky antique store in Hermann, Missouri. If your town has a paint-your-own ceramic shop, you are in luck. For just a few dollars and an evening or two of sanding, painting, and gossip with the other patrons, you can have a custom chalice, cake plate, salt dish, candlesticks, and water bowl. A ritual tool is not made more powerful by a high price tag or fancy decoration, but by use, by respect, and by intent. Let me give you an example: when I found myself unexpectedly living alone a few years ago, I went to the local flea market to pick up some kitchen items. I was broke, but I needed pots to cook in. One of my finds was an old white enamel pasta pot, for next to nothing. It came with a few dings in the enamel, but I have proudly served my coven many a soup, stew, or lasagna whose noodles were cooked in that pot. That pot is practically part of my tradition now. I wouldn't trade it for anything. Also, don't be afraid to scour the curio shelves, kitchen cabinets, attic, and china cabinet in your own house; you may already have every altar piece you need for free!

Shopping Online

There are also more places to buy Pagan supplies online—so many, in fact, that I could probably fill a whole chapter, if not a whole book, describing every one.
Here are the ones my survey respondents recommended:

AzureGreen (www.azuregreen.com): If Witchvox (witchvox.com) is the flagship website for Pagan networking and information, AzureGreen is the Witchvox of Pagan shopping sites. The prices seem reasonably decent, and the selection is excellent—there's something here for all pantheons and paths—Celtic, Norse, Egyptian, Buddhist, Roman, Greek, Hindu, and more.

CafePress (www.cafepress.com): Pagan artwork, sayings, and symbolism on T-shirts, mugs, bumper stickers, hoodies, mousepads, baby onesies, and pretty much anything you can think of. Slightly pricey, and the shirt sizes never seem to go above 3X, but you can't beat the selection.

Mountain Rose Herbs (www.mountainroseherbs.com): If I did not live near a natural food co-op, I would be a regular customer. In addition to more herbs, teas, and spices than I've even heard of, this site features all-natural soaps, shampoos, aromatherapy oils—and bottles, jars, cloths, bags, wax, etc. for the kitchen Witch who wants to assemble his or her own. This site was mentioned frequently by the survey respondents.

Abaxion (www.abaxion.com): This site features mostly silver jewelry with something for pretty much any Pagan's tastes. The prices seem reasonable.

eBay (www.ebay.com): If you love virtual flea markets and the thrill of bidding on a much-coveted item, you probably already know that eBay was designed just for you. There are some bargains, but typing Pagan into the search box yielded some really odd stuff—like the "Pagan Wiccan! Healing incense sticks!" or the haunted skull ring I saw recently. Avoid the hype, know what you can afford, and have fun.

Etsy (www.etsy.com): If handmade items are more to your liking, check out this site. There are lots of listings for soaps, lotions, and blended oils when you do a search on Pagan. Not everything on the site is made by hand, but a lot is—and if you buy it, you're supporting a Pagan artisan.
13moons.com (www.13moons.com): This is a classic Pagan supply site, easy to navigate and not overly pricey. A very nice selection of items.

The Blessed Bee (www.theblessedbee.com): Pagan supplies, a humor page, a recommended-book list—I could spend all afternoon on this site. Again, the prices seem to be comparable to the other sites. My favorite from the humor page: "I love nature as much as anyone. I just don't want to become bear poop."

Don't Overload on Stuff

Just because there is a lot of cool Pagan stuff out there doesn't mean you need all of it. And there's nothing like packing up to move a thousand miles to make a person realize just how much "Pagan stuff" you have. I began to ponder this as I was packing for our most recent move last year,10 and recently revisited the subject for this chapter. What are the basics of Pagan practice? What do we really need in order to connect with our God(s) in a mutually satisfactory manner? I'll get to what I think we need in a minute. In the meantime, here's a whole list of what we don't need—but we think we need—in order to be happy, active Pagans.

Jewelry

I used to be so very guilty of this one. At one point (about thirteen years ago), I wore at least one ring on every finger—including thumbs—and four separate pendants twenty-four hours a day/seven days a week, including while sleeping and in the shower. My ears are double-pierced, and only my normal aversion to pain prevents me from acquiring even more holes than that. It's a wonder I didn't drown in the bathtub from the extra weight of all that silver! And whenever I was in ritual, it was even worse: I'd add at least three (sometimes four!) more necklaces, two wrist bracelets, and two ankle bracelets. Did all this bling make me a better priestess? Of course not. There's an old joke in the Pagan community, and like most jokes it has a seed of truth in it: Have you heard of the High Priestess hundred-yard dash? Any priestess who even makes it to the finish line wins!
Even Magrat Garlick, the young Witch in Terry Pratchett's Discworld novels, eventually figures out that jewelry does not improve her spiritual practice.

Ritual Objects

We may be spiritual beings, but we are not immune to the fallacy of "keeping up with the Joneses." Pagan tchotchkes and other items go in and out of fashion. A few years ago, everyone in my local community who thought they were cool (and had seventy bucks they didn't need) bought wands that were made from tree branches that had grown in a spiral pattern because of wild grapevines or other vines that had twined around them. They were pretty, and I admit I seriously wanted one, but I wonder how many of those wands are still in use today. After jewelry, my money pit was, and is, tarot decks. Back in the mid-to-late 1980s, when the publication of a new themed deck was a much bigger, rarer event than it is today, I bought the Mythic Tarot—not because I was (nor am I now) even remotely drawn to working with the Greek pantheon, but because everyone I respected and admired, most of whom did have Greek patron deities, bought it. I never used it, and eventually gave it away along with ropes of myrrh-bead necklaces, a Daughters of the Moon tarot deck (basically unused), a baby dragon oil lamp, a couple of old video cabinets (i.e., altars), and a veritable forest of candlesticks—all purchased because they were "in" or "trendy" at the time—and, for the most part, never used.

Speaking of Altars

They're nice, they're convenient, but they're not always necessary. My husband, for instance, feels mostly spiritually connected when he's working in his vegetable garden and then cooking the results of that garden for his family. His altars, then, are the dirt in the garden and the stove and countertops in the kitchen, as I suspect they are for many kitchen Witches.
I'm not saying altars aren't good and useful for ritual; if nothing else, they keep burning candles and sharp implements off the floor. But how fancy does your altar—the actual piece of furniture—need to be? And if you have more than one, ask yourself: how many of the deities honored by their own altar in your home could be better served by you acting as their hands and doing their work in the world? Now I'm not saying we need to get rid of everything—jewelry, altars, candlesticks, statues, etc., and rely solely on energy, visualization, and our own good deeds to express our spirituality. Pagan practice is undoubtedly enriched beyond measure by these supplemental symbols. But it is easy for too much "stuff" to overwhelm and clutter up the fundamental simplicity of our call to serve our Gods as best we can. If I have more ritual robes hanging in my closet than I do everyday clothes, something is definitely wrong with my priorities. After all, unlike jewelry, I can only wear one robe at a time, and—according to some people—we're supposed to be skyclad (naked) in ritual anyway.11 I have a weakness for books, but I do my research, evaluate the authors whose words resonate with me the most, and limit the number of books I buy in order to avoid redundancy. (I do this with both Pagan and dog-training books, by the way.) Serving the Gods, doing their work here on Earth, and giving each other a helping hand as best we can—that's back to the basics of Pagan practice, no matter how many props we buy. I still wear too much jewelry, though!

10. This was either while packing up my third box of ritual supplies or my eleventh box of Pagan books. I can't remember.

11. Not that I necessarily agree, but some Pagans think so.

Chapter 5
Minimum Daily Requirements

I do feasts for each of the seasonal celebrations; I give offerings on the moons of each month. I give honor to my Gods daily. It's really hard to say what all I do because it's so ingrained into my daily life.
—deanna eberlin, addison, new york

Minimum Daily Requirement—the phrase the United States government uses to determine the least amount of any given vitamin, mineral, or other nutrient you need every twenty-four hours in order to achieve and maintain optimum health. But what is the minimum daily requirement for Pagan spiritual practice? What do you need to do every day in order to achieve and maintain a relationship with your God(s)?

For some of us—Emigrants, mostly—our spiritual minimum daily (or at least weekly) requirements include contact with other Pagans. I am a perfect example of this. In 1985, when I first realized I was Pagan, I lived about twenty-five miles north of Baltimore, in Bel Air, Maryland. My first husband, a nice guy but unable to see much beyond his born-again Christian background, was understandably not happy about my studies. While originally he didn't discourage me from attending Pagan events in Baltimore and Washington, DC, he didn't exactly encourage me either. As time went on, though, he became increasingly hostile to my religious studies and activities, and eventually filed for divorce. Consequently, the only time I felt like a "real Pagan" was when I was away from my own home and in the presence of my fellow coveners and community members.

As a result, I filled my life with as many away-from-home spiritually oriented events as I could: I helped start (and run) Free Spirit Gathering, one of the biggest Pagan gatherings on the East Coast; I held office in the local community organization that oversaw the gathering; I was a Saturday afternoon regular at the local Pagan bookstore; I volunteered to write for and help do layout on the quarterly community newsletter; and I attended as many open rituals and classes as I could. Between classes, gatherings, planning meetings, rituals, quarterly community business meetings, newsletter work, and hanging out at the bookstore, I wasn't home much. But I got my minimum daily requirement of Pagan-ness.
I still struggle with this; twenty-five years later (as of this writing), I'm still not very good at being Pagan by myself. Much as it pains me to admit this publicly, I have yet to be able to start even a daily observance beyond my pre-sleep prayers and stick with it longer than a few days. I'm a former Girl Scout—I love Nature as much as the next person if not more. Yet a concert of Pagan musicians and/or singers is far more likely to feed my soul than a solo hike in the woods with my dog. I'm also far more likely to celebrate the Sabbats or observe moon cycles when there are other people in my life to celebrate with. If it's "just us"—i.e., my family—I may not observe the holiday or full moon at all. It's very similar to the fun of preparing, say, Thanksgiving dinner when family and friends are coming over versus cooking that much food on an ordinary day for just one person. For me, the incentive just isn't there. I suspect that many Emigrants have the same problem when they move to a small town. Contact with others is easy in a large city with its open community events and Pagan shops that serve as information, gossip, and social hubs, but nearly impossible in a small town where the nearest organized Pagans (at least as organized as Pagans ever get!) could be several hours' drive away. A recent glance at the Kansas and Missouri events pages on Witchvox.com confirms this. Pagans who live in or near Kansas City have a plethora of weekly, monthly, and annual community activities to choose from, including drumming circles, public rituals, workshops, meetups, Pagan choir practice, study groups, at least two large shops, a major festival every Memorial Day weekend, and several minor gatherings spread throughout the year. There is a working public transit system for those who have no car, so getting to these events—even the festival, if you hire a taxi—is possible. Not cheap, but possible. 
If you are spiritually "charged" by being in the presence of your fellow Pagans, Kansas City, Kansas/Missouri, is a good place to be. If, on the other hand, you live a couple hundred miles due west, in Salina, Kansas, you have far fewer options. There is a shop about sixty-five miles away in Hutchinson that also hosts open circles, but that's about it. If you don't have access to a car, you can't get to the nearest Pagan activities. It's not physically possible. For an Emigrant who is used to a fair amount of regular contact with fellow Pagans, actually doing what is necessary to maintain one's spiritual minimum daily requirement is tough.

On the other hand, even in a small town it's possible to find like-minded individuals and arrange to meet with them on a regular basis.12 Several Hometowners reported that they also find a way to celebrate daily, monthly, or even just on the Sabbats with their non-Pagan family and friends.

I spend time with the Goddess on a regular basis. I walk in my yard and admire Her handiwork all around me. If my family and I are together on a Sabbat, I will prepare a special dinner to fit the holiday, give an explanation and a blessing, and share the meal with my family. They are quite receptive as long as I don't go overboard.
—k, sevierville, tennessee

I try to be active on a spiritual level daily, even if it's just chatting with like-minded individuals on the Internet. Basically I practice alone but do attend cybercircles.13
—keltasia, shamokin, pennsylvania

I attend Sabbats, read, meditate, and write music and poetry.
—fergus, monona, wisconsin (population 8,532)

I do as much as possible to protect and appreciate the earth.
—darren, owatonna, minnesota

I talk with my deities every day, celebrate full and dark moons, and honor the changing seasons and the great festivals. Sometimes alone, sometimes with my husband, friends, daughter-in-law, or granddaughter. Sometimes to honor Hecate I just take my dog for a late walk.
—donna hames, nashwauk, minnesota

Donna makes a good point: there are ways to honor the Gods and practice our spirituality that have nothing to do with formal ritual or other people. My family shares our lives with three dogs—two of whom are definitely representatives of family/ancestral deities we feel a deep connection with. The third has her own relationship with Hecate. If I choose to, I can (and do) consider caring for the dogs (feeding, walking, training, grooming, cuddling, taking them to the dog park or the pet supply store, playing fetch, etc.) as service to their Gods and, therefore, active daily spiritual practice. For someone as extroverted and community-oriented as myself, I see this as a step in the right direction toward a more private, personal practice.

My husband, on the other hand, is definitely an introvert. He would probably be deliriously happy if he never attended another public Pagan event again. He is gracious and polite whenever I manage to drag him to various Pagan Pride celebrations, weekend gatherings, and open rituals, but they're not really his thing. My spouse finds his deepest spiritual connection in the garden; he loves to organically grow heirloom fruits and vegetables and is never happier than when he's spent a warm weekend afternoon with his hands in the dirt. He's already teaching our daughter the spiritual aspects of gardening; she's an apt pupil and will work in the garden with her father for hours at a time. It also feeds his soul to take those homegrown fruits and vegetables and make fantastic meals for his family. Spiritrunner seems to feel the same way:

I'm very much a kitchen Witch, so I do my craft every day with meals and drink. I thank the Goddess for the day I've had (good or bad), because I know it could always be worse—or non-existent at all.
It's hard to have a garden in the middle of the city, but—as anyone who regularly listens to the National Public Radio program A Prairie Home Companion (specifically, "The News From Lake Wobegon" section) can attest—growing tomatoes is not only a common hobby in a small town, it's practically a requirement!

A.G. is not the only one who prefers a less "crowded" spiritual practice. Other survey respondents also had great ideas about how to express their spirituality daily, weekly—even monthly—without help from anyone else:

Daily devotionals on my balcony, and I'll be doing a fire in my pit today in the backyard. I'm usually pretty quiet. I'm solitary, so most of my work is done in my home or on my property.
—witch of the woods, merrimac, wisconsin

I try to remain mindful of how I interact with nature and take steps to minimize my negative impact. I also try to live more according to the seasons, changing my diet and activities to coincide with the passing of time.
—becca, clovis, new mexico

Book Basics

I can't remember the source, but I saw an article once that said American Pagans tend to read much more than the average citizen. After seeing the answers to the survey I sent out to collect input for this book, I have to agree. Over and over again, the survey respondents said that a big part of their daily spiritual practice is reading. In a big city or a small town, alone on the couch after supper or as part of a Pagan book discussion group, this is one activity all Pagans everywhere can share.

Even though pretty much every Pagan tome ever written—fiction and non-fiction—is available via the Internet with a credit card and just a couple clicks of the mouse, many of us like to save money—and trees—and try to get books from the library, myself included. But some towns are so small there is no library. One survey respondent even said that her town's library shared space with the local tanning salon!
Sadly, the majority of the survey respondents expressed concern that any good Pagan book in their library wasn't there for very long. Whether the books disappeared because the thief was afraid to expose his or her interest in Paganism by publicly checking the book out of the library, or because the thief was trying to prevent anyone from reading such "evil" material, I cannot say. I suspect, though, that the Pagan-positive books disappear pretty much equally for both reasons. And, as Spiritrunner points out, the books that are left aren't all that good:

In Taft, the only books in the "Pagan" section insinuated that we're all devil-worshipers.

On the other hand, some small-town libraries have some pretty darn good books on the shelves. Manitowoc, Wisconsin (population 32,764), may have a public transportation system that is limited to within the city limits, but it has a decent selection of Pagan books for beginners—and they're even on the shelves, ready to be checked out.

The public library in our town has a number of books on comparative religions, Goddess worship, crystals, astrology, and palm reading—nothing like Margot Adler or Starhawk, but they do have Ronald Hutton's books, so there is enough there to give someone a start and help them find more detailed reading.
—donna hames, nashwauk, minnesota

If you can't find books that are specifically about Paganism in America (or anywhere else) at the beginning of the twenty-first century on the shelves, don't give up on your local library just yet. It very likely has plenty of information on the following subjects that might prove useful to the Emigrant's and the Hometowner's connection to Deity:

Sustainability

Growing and preserving more of your own food isn't just a way to save money; it's also a great way to express your spirituality.
Julia from East Stroudsburg teaches it to her group as part of her regular practice:

Daily devotions and work with my gods, monthly full moons, and a monthly drumming circle—where we also teach sustainability arts like canning, woodworking, gardening, and the like.

If you live in a small town, it's likely you'll have room for at least a small vegetable or herb garden. If you've recently moved to a small town and have always wanted to grow at least some of your own food, now is a good time to learn how. Organic gardening is safer for the earth and everyone on it—do you know how to compost and how to organically deter bugs and other pests (deer, rabbits, your dog, raccoons, etc.) from eating your produce? Once you've organically grown some fruits or vegetables, do you know how to can them or otherwise safely preserve them for the winter months? A.G. and I thought we did, but we never checked to see if basic canning instructions and boiling times changed if you lived above a certain altitude. As a result, we had a scary, beer-smelling epidemic of Foaming Exploding Jars of Tomatoes in our pantry for a week when we lived in Portales, New Mexico, which is roughly four thousand feet above sea level.

Most libraries have an extensive gardening and food preservation section. Also, since "going green" is so popular, libraries are stocking books on how to live more gently on the earth—and use fewer natural resources—for every age group. Find one in the children's section and teach your kids how to help save the planet. Get one for yourself and read up on how to be "green" and save on your utility bills at the same time.

Cooking

Spiritrunner, from Bakersfield, California, talked earlier about cooking for others at Sabbats and Esbats, and my husband A.G. considers the act of preparing his contribution to the post-holiday feast to be part of the ritual itself.
Whether you're truly a kitchen Witch or just aspire to be not so inept around large heat-producing appliances and cauldrons (i.e., saucepans) full of bubbling, er, stuff, this is a great way to expand your knowledge and repertoire. The library probably also has just the cookbook you need to help you re-create your great-grandmother's candied walnut recipe when you want to honor and remember her at Samhain. If your deities and primary spiritual inspiration come from a specific culture—Greek, Roman (Italian), Irish, Middle Eastern—and you weren't raised by or around immigrants from that culture, chances are your library will have a cookbook on the shelves that covers the basics of the cuisine. The book(s) may even include holiday décor and customs as well. For example, my mother-in-law grew up in rural Germany and has passed many of the family recipes down to her son. Fortunately for me (but not for my waistline), my husband loves to cook traditional German food for his family, and he's always looking for more hearty German recipes to add to his repertoire. He's also heard a lot of stories from his mother about how holidays, Christmas and Easter mostly, were celebrated when she was a child. Unfortunately, she's forgotten a lot of the decorating details (being a preteen in Germany during World War II can really mess with your childhood memories) and, in some cases, how to make a specific holiday cookie or bread, but a few "homemade" spiral-bound and stapled German cookbooks we found at various Oktoberfests and street fairs have helped fill in some of the gaps. Cookbooks can also help you with spellwork. As you cook more and more, you become more adept at following a recipe and then adjusting the recipe to fit your personal tastes. You can transfer your recipe-following and recipe-modification skills to creating workings that accomplish pretty much what you need them to. 
You may even find that cooking becomes a whole new way for you to eat well and perform spells at the same time. A good basic cookbook from the library can also come in handy if you're not particularly adept in the kitchen (that would be me) and have been invited to a private group or public ritual. In twenty-five-plus years in the Pagan community, I have attended very few rituals that didn't include a lavish potluck meal beforehand, during, or afterwards. Unfortunately, most people who either didn't grow up with the church-potluck tradition or haven't attended even a few public or private group rituals don't know the unspoken rule about the food aspect of these events, so I'm going to step up and tell you: it's tacky to bring store-bought or store-made food as your contribution to a pre- or post-ritual potluck. I'll even tell you the rest of the unspoken rule—the exceptions to the first part of the rule: a famous red-and-white-striped bucket of fried chicken, brownies, or cupcakes from a box mix (with store-bought frosting on the cupcakes, of course), or a gallon or two of ice cream in a cooler at a summer ritual. Why are these the exceptions? There are never enough meat dishes at community potlucks; brownies are chocolate, and chocolate in any form is always welcome; cupcakes are fun no matter how old you are; and ice cream—well, pretty much everyone likes ice cream. And if you bring some cones, paper bowls, and/or an assortment of toppings—sprinkles, chocolate (remember, chocolate is always welcome), or butterscotch sauce—your potluck contribution will be even more enthusiastically received. The point is, with help from a cookbook from the library, you can make a nice potluck dish for not much more than you'd spend on a large container of that nasty store-bought potato salad. 
With a little inspiration from a basic cookbook, you can develop your own specialty dishes for each holiday, whether it's bread for Lammas, apple pie for Mabon, or plain old-fashioned chocolate fudge for Yule. My specialty? Divinity fudge (egg whites, Karo syrup, sugar, and vanilla)—it's easier to make than you think. It has to be, if I'm cooking it!

Buddhism

Buddhism is often welcome where Paganism is not. I'm not advocating that you give up your Pagan practice and join a Buddhist monastery (or convent), but a library book that covers the basics of Buddhist meditation could greatly enhance your daily practice. A book on simple yoga stretches and poses could be equally valuable, especially if sitting still to meditate really isn't your thing. I tend to incorporate some Buddhist breathing practices into my nightly prayers because I enjoy them; they speak to me.

Also, if I'm actually being pressed about my religion and it's a situation where I don't feel safe coming out of the broom closet (like at work or my husband's work), I can say—and have said—with perfect honesty that I am a really lousy Buddhist. Most people have a basic mental image of Buddhists as peaceful, relatively harmless people, and it saves me from having to face the possible negative backlash of telling someone I'm a Witch. Plus, since most Buddhists are vegetarians and anyone who has seen me eat—a group that could include my or my husband's co-workers—knows that I am pretty close to carnivorehood (I had to sneak fried chicken into the one Buddhist retreat I attended so I wouldn't starve to death!), my announcement that I am a Buddhist usually sparks some interesting discussion about my dietary practices, and any thoughts the querent has about anything else I may be into that's "a little weird" usually disappear as I rhapsodize about my husband's grilled lamb recipe.
Does the querent need to know that I'm a Witch who incorporates Buddhist discipline to quiet and train my mind and breathe rhythmically during guided meditation? Of course not!

History

In the Don't Let This Happen to You category, a Pagan professor friend told me this story: While teaching an American history class, he once gave a pop quiz on the previous lecture about the Salem witch trials. In answer to the question, "What side of town did most of the people accused of being witches come from, and why?" one clever student answered, "The east, because that's where all the witch stores were." (Correct answer: The east, because the rich people tended to live on the east side of town and were also the majority of the accused, according to the book Salem Possessed by Paul Boyer and Stephen Nissenbaum.) Other students were quite adamant in their quiz answers that Tituba (the slave who taught the little girls some basic divination and love charms) was, in fact, a solar goddess worshipped by the local Native American tribe.

We had a good laugh at his students' expense, which could happen to you if you were to accidentally espouse similar misinformation at a public Pagan event (or in an online forum). A basic understanding of American history, not only the Salem witch trials but also the process our Founding Fathers went through to ensure our right to worship as we choose, can only make us better spiritual practitioners. Many established covens that train and initiate their students also require that their students have a basic working grasp of the history of Western civilization. This is especially important if your God or Gods come from the Greek, Roman, or Celtic pantheons. You will have a much deeper appreciation of your God and/or Goddess if you know something about the culture He or She came from and the history of that culture.
If the idea of a thick history text seems a bit daunting, start in the children's book section of the library; you will often find factually sound yet easy-to-digest history books there.

Science

Second only to history as non-Pagan required reading in some covens are books on basic science: physics, chemistry, and biology. The premise is that you will have a better appreciation of the mysteries of life and the cosmos if you understand a little bit about how they work. Imagine how much more you would appreciate the unique nature and properties of the various stones and crystals you use if you knew a little bit about basic geology, or how much better astrology would work for you if you read up on astronomy.

I will be the first to admit that I am probably the most scientifically challenged person you will ever meet, partly because of a particularly useless independent-study junior high science program and partly because I'm just not all that interested. I have reduced grown men to literal tears of frustration because I simply cannot grasp how a simple cassette tape records and replays sound, even after they tried to explain it to me for the better part of an hour (and please don't try if you ever meet me at a festival or workshop weekend—I guarantee I still won't understand). My husband, who started his college career as a microbiology major, has given up trying to explain anything even remotely scientific to me. Now when I ask him how or why something is, he'll answer, "It's magic." Frankly, this answer is usually good enough for me!

That being said, I took a Physics of Music class when I needed another science elective in college. Being a hobby musician, I thought it'd at least be relevant. Turns out it was also relevant to my Pagan practice. The idea that every single atom in the Universe vibrates (and consequently produces a note of music) expanded my worship of the Divine in some pretty profound ways. Look for books on string theory in your local library.
I'll bet you can find at least one.

Another entry in the Don't Let This Happen to You category: my Pagan professor friend shared another story with me about the time he was lecturing on ancient Chinese history in a world civilization course. One student actually raised her hand and asked, "Why didn't the Chinese use their dragons to scare off the invaders?" Yes, she was serious. Apparently she had recently seen a television show that was a "mockumentary" about the evolution of dragons (I actually saw the show—I was impressed as heck by the computer graphics) and didn't realize it was a fake. My five-year-old may not be a science whiz, but she is in a hardcore dinosaur phase. Rose has several books on her bookshelves that say quite clearly how early dinosaur-bone discoveries in China most likely created the whole dragon myth.

Mythology

You may have to start in the kids' section, but nearly every library has a decent mythology collection. Look for books like Bulfinch's Mythology or Edith Hamilton's book on the Greek myths (entitled Mythology), just to name a couple. If you're interested in comparative mythology—the similarities between myths and legends of different cultures—check out Joseph Campbell's books, particularly The Hero with a Thousand Faces. Campbell is not a good primary academic source, but his material is readable and will give you a basic understanding of how various archetypes are adapted to specific cultures. The librarian probably won't even give you a second glance if you're suddenly reading a lot of mythology books.

Studying world mythology also has another benefit if you haven't yet chosen—or, more accurately, been chosen by—a God or Goddess. The more mythology you know, the more likely you'll know your God and/or Goddess by name when they show up. It's nice to know someone's name when you first meet, isn't it?

Crafts

Your local drugstore or dollar store doesn't have the right color candles for your next Sabbat?
There's probably a book in your library that can tell you how to make them. Want to make your own scented soap for a ritual bath? Look it up in the library. If your Goddess would like you to learn how to spin yarn and then weave or knit it into something—there's probably a book for that, too. I have to admit that it is quite spiritually satisfying to sit and knit something warm for yourself or a loved one (or, better yet, for a charity), and it's a great way to connect with non-Pagan neighbors. You may run into trouble if you try to start an open Pagan discussion group in your town, but no one will blink if you and a group of non-Pagans meet regularly for a "Stitch and Bitch" afternoon. The other knitters, crocheters, cross-stitchers, etc. don't need to know you do what you do to honor a specific deity.

If your mother never taught you how to sew, the library is a good place to find a book to teach you how—unless, of course, you prefer to celebrate your rituals skyclad, i.e., nude. If you want ritual robes and really don't want to hire your Sunday school teacher, who also happens to be the local seamstress, to make them for you, a quick trip to the library for a book on sewing techniques and/or general needlework is probably in your future.

So don't automatically dismiss your library because it doesn't have that latest Pagan book your online friend highly recommended. There is enough information on the shelves already to satisfy and increase your spiritual minimum daily requirement for years to come, if you know what to look for.

If Wishes Were Fishes

I believe I'm the only Pagan in my immediate town, but I do have access to a shop in Baraboo, about fifteen minutes away. It would be nice to have more people to talk to, but it is not a necessity for me.
—witch of the woods, merrimac, wisconsin

In the survey, I asked what the respondents thought would enhance their overall Pagan practice that they didn't have access to in a small town.
Not surprisingly, books, supplies, and other Pagans to talk to were the most popular answers.

I would love to have a little New Agey shop just up the street. Instead of paying out the bum for shipping from out of state or having to blow half a tank of gas just for lavender, I could just take my bike up the street, and voilà. I'm not a social creature, but it would be wonderful to meet with other Pagans in the area just to kick around ideas when the mood strikes. A bookstore, ANY kind of bookstore, would be a Goddess-send! The closest bookstore with a decent selection is across the border in Mishawaka, Indiana.
—ravenna, dowagiac, michigan

I would love if there were more Pagan shops in this area so I didn't have to purchase online. It would definitely be much more "festive" to see more public celebrations of the seasonal festivals.
—deanna eberlin, addison, new york

More people to discuss ideas with would definitely be nice. A more comprehensive recycling program would help, too, since faith and enviro-consciousness intersect so solidly for me.
—becca, clovis, new mexico

Even if the Emigrant doesn't need a great deal of contact with fellow Pagans to feel soul-fed, if he or she is accustomed to certain non-Pagan or Pagan-friendly activities that just aren't available in a small town, this could cause a problem. And sometimes the problem can escalate, at minimum, into a feeling of spiritual disconnectedness or, at worst, into a major personal religious crisis. If, for example, attending or teaching dog-training classes is an important part of your religious expression and you move to a small town with no dog club for miles and don't feel you have the time or expertise to start your own, what do you do?
Or suppose participation in a performance group of some kind (community theater, band, or chorus) was a big part of your connection to your deity back "home" in the big city and there's nothing in town that works as well for you—and you don't have the money, local contacts, or support to get your own group going. What do you do? What you do is wait, be patient, and know that if these non-Pagan activities are truly what you need deep down in your soul, the opportunities to express yourself in something at least close to the ways you've been accustomed to will come to you—usually when you least expect them. One example, and this one isn't pretty, nor is it easy for me to relate. Music, specifically the traditional songs of England and Appalachia, has always been a big part of my personal practice. Nothing feeds my soul like sitting around singing these songs with a group of people. It doesn't matter if they're Pagan or Christian—only the music matters to me. But now I live in a small town where there just aren't monthly folk sings, performance groups dedicated to this kind of music, or weekend folk dance events that include song workshops—all activities I was accustomed to growing up in Kentucky and then living in or near Washington, DC for so many years. In the past decade since I left the East Coast for love and starry skies in the Midwest, I haven't had as many opportunities to sing with others; in fact, I don't sing nearly as much as I'd like to, or as I should. But the Gods will provide opportunities when it's important. Recently a beloved family member had major life-saving emergency surgery and, due to the fact that the surgery took place on Christmas Eve in the middle of an honest-to-Goddess blizzard (which followed an ice storm), I was the only loved one at the hospital. To say that this experience was a major test of my spiritual and emotional strength is an understatement. 
If you've been through something similar, you know what I mean; if you haven't, nothing I say will adequately describe it to you. By the time the seven-hour operation was over and my family member was transferred to the intensive care unit, I was a bona fide physical, mental, and emotional wreck. And the only thing I could think of to do for my own mental health and to calm the patient (who was struggling to get rid of a breathing tube) was pull up a chair, take this person by the hand, and start singing. I sat there and sang the English and Appalachian songs of my childhood—the only ones that were able to make their way to the forefront of my freaked-out brain—for about two hours, beeping machinery and bustling nurses notwithstanding. It wasn't the ideal situation in which to sit and create the music that means the most to me, but it worked—my soul was fed enough that I stayed more or less in one piece, my loved one was calmed by the music, and the energy of the songs got us through. I suspect that now that this "door" has been reopened, it will not close again. I've already found a way to keep it at least a little bit open, and found other people to sing with on a semi-regular basis.

Even so, if I had to say what I wish I had better access to for my own practice, it would be more family-friendly community events close by—concerts, coffee shops, discussion groups, and so forth. On the other hand, my family was recently invited to a "bonfire" at a Pagan friend's backyard in the next small town down the road. Most yards have them here—a small fire pit dug into the center of the yard, edged with broken bricks or large stones. We had a great time roasting hot dogs (and later marshmallows) over the flames, and just talking about a variety of Pagan and non-Pagan topics.
It was cold for late May (we were all in sweatshirts and jackets), but the fire was warm, the company was congenial, and as the sky darkened we got to lean back and watch the stars come out. There's a lot less light pollution in small towns, and I saw more stars that night than I even knew existed. I had more than my minimum daily requirement of Pagan-ness that night. Would we have been able to have even a bonfire in the middle of a large city? Probably not. Spirituality is not found in books, nor do more people make it any more accessible. Anything that I need, I have or can find easily enough. —moondancer, washington state Honestly? Perhaps because I am already well satisfied with my spirituality, I feel that living in a small town with greater access to nature has enhanced my practices more than anything else. Books can be ordered at my Pagan bookstore or online, and people either find their way to my doorstep or I go and find them. I love living in a small town. I like the neighbors that I have and the friends that I have made. They don't have to be Pagan to be friends. —julia, east stroudsburg, pennsylvania 12. For further discussion on creating Pagan community in your small town, see chapter 7. 13. Chapter 6 will cover the Internet aspects of Paganism in more depth. Chapter 6 Internetworking: Finding Others of Like Mind Online I'm always swinging by Witchvox to look for shops within driving distance, festivals and meetups, local Pagans. Google isn't Pagan, but it's great for finding me what I need when I'm too lazy to link-hop. I am STRONG in the "Google-fu"! I don't have any other Pagan-ish sites that I visit regularly. I try to bookmark good ones when I come across them, but then I rarely return. I just generally Google what I need, link-hop a while, and voilà. 
—ravenna, dowagiac, michigan To say that the Internet has greatly helped Pagans in small towns—and big cities—connect with each other is like saying that the sun makes it easier to see during the day. Social networking sites such as Facebook, Myspace, and Twitter, and blogging sites like LiveJournal, make it easy to meet new friends and interact with others of a like mind without having to leave home. For example, as of this writing I have over fifty friends on LiveJournal and about 120 friends on Myspace, plus the aforementioned 177 friends on Facebook. A handful of them are old college friends (and their spouses) and former co-workers; the rest are Pagans I've met online or in person at some time or other. When my family moved to Portales, New Mexico, and struggled unsuccessfully for four years to start a local networking community, discussion group, or coven, these online friends were our social and spiritual lifeline. The day we got DSL at home was my happiest day there. I finally felt like I could "talk" to my friends without waiting all afternoon for a webpage to load like it did on our old dial-up system. Witchvox and Other Recommended Sites Every single survey respondent, even the ones not quoted in these pages, mentioned Witchvox (www.witchvox.com), also known as the Witches' Voice, as the flagship site for Pagan contact and information on the Internet. There's a page for every state and many foreign countries, and each page includes listings for local events, study groups, covens, shops, Pagans offering services (tarot reading, house painting, web design, etc.), local Pagans in the news, and a list of Pagans in the state alphabetically by town or city. Witchvox also offers hundreds if not thousands of articles on pretty much every Pagan subject, a comprehensive list of Pagan books and magazines, and national and international news items of interest. The survey respondents also recommended some smaller, more specific sites. 
Here are their—and my—favorites: Yahoo Groups (groups.yahoo.com): If you're looking for local e-mail groups, Yahoo Groups seems to have the most. While living in Manitowoc, Wisconsin, I was able to connect with people in Milwaukee and Sheboygan through various Yahoo groups—often finding out about local events (Pagan Pride Days, drumming circles, Pagan picnics) that for whatever reason weren't listed on Witchvox. There are also national and international Pagan Yahoo groups. For example, anyone interested in finding out more about British Traditional Wicca (Gardnerian, Alexandrian, Mohsian, Central Valley, Kingstone, among others) should check out the Amber and Jet Yahoo group. Gay male Pagans may be interested in the Yahoo group Brotherhood of the Phoenix. The Yahoo group naturalwitch is very active and is a must-read for any Pagans interested in organic gardening, animal rights, natural remedies, recycling, and similar topics. There are also Yahoo groups for Pagan parents, solitaries, and even Pagans who are interested in scientifically based ghost hunting. There's also, incidentally, a Yahoo group called Small Town Pagans, a space for people who fit that description and want to hang out together. Needless to say, I see the moderator of that group quite often—every time I look in the mirror, in fact. LiveJournal (www.livejournal.com): Although primarily a free blogging site, LiveJournal also offers quite a few Pagan communities where the members engage in intelligent, interesting discussion. The Wiccan and Non-Fluffy Pagans LiveJournal communities are both informative and lively, and cr_r for Celtic Reconstructionists is quite scholarly—just to name a few. There are some regionally based communities, usually by state, but they don't seem to have a lot of active members. The Wild Hunt (www.wildhunt.org/blog): The Wild Hunt blog is the brainchild of Jason Pitzl-Waters, and covers national and international items of interest to Pagans. 
Jason is as good a news reporter as he is a news analyst (i.e., he's outstanding at both). I check it every day as part of my daily news-reading ritual because, hey, I'm a journalist, too! Internet Sacred Text Archive (www.sacred-texts.com): This site hosts a huge collection of really interesting articles on every world religion, including Paganism. I spent an entire afternoon looking up some nifty, obscure Pagan stuff. Myth*ing Links (www.mythinglinks.org): "An Annotated & Illustrated Collection of Worldwide Links to Mythologies, Fairy Tales & Folklore, Sacred Arts & Sacred Traditions." The site is a little confusing to navigate at first, but it's worth it. Beliefnet (www.beliefnet.com): This site gives the basics of every world religion, including Paganism. Check out Beliefnet's own Pagan blogger, Gus diZerega, at http://blog.beliefnet.com/apagansblog. Gus's blog is also part of my daily news-reading ritual. Cauldron Living (www.cauldronliving.com): There are some good articles and links to online covens here. The Cauldron (www.ecauldron.com): The forums are run by pretty intelligent people, and the site comes highly recommended. If you want to "talk" to other Pagans, this is a good place to go. Cybercovens If chatting in online forums or tweeting your fellow Pagans on Twitter isn't giving you as much contact as you crave and there just aren't any other Pagans in your area, you might want to consider starting or joining a cybercoven. I've never been in a cybercoven and don't know anyone who has, but I do know the subject is somewhat controversial. The question at the heart of the debate is, "Can a ritual conducted electronically, with the participants hundreds, if not thousands, of miles apart, be 'real'?" Because of my utter lack of experience in this matter, I consulted and interviewed an expert, Lisa McSherry. Lisa is a published author and is also the High Priestess of JaguarMoon Coven, a cybercoven that has been in existence since 2001. 
Whether you're a Hometowner or an Emigrant, I think at least parts of her story will sound familiar to you. No matter where you grew up, you may think cybercovens are the Internet's biggest joke and plan to stay far away from them, or you might believe cybercovens provide valid spiritual experiences and opportunities for growth for their members, and you're considering joining or starting one. Either way, I think you'll be interested in what Lisa ("LM" in our interview below) has to say. BF: How do you feel the experiences of an online versus in-person coven differ, and how are they the same regarding learning, member's spiritual growth, and incidents of personal gnosis? LM: The simplistic answer is that they are very similar, or can be, and each has its strengths and weaknesses. When I went about creating an online coven, I was operating off of the presumption that if it worked in the physical it will work in the virtual; you just need to think about it differently. For example, in physical ritual the ritual leader can indicate participants' next movements silently—gesturing with a chalice, for example. Obviously that won't work online, and giving directions can be disconcerting to the poetry of the ritual. What we do is indicate in the text itself what the movements are and make it clear to participants that they are mirroring our actions on their own. This makes them active rather than passive, which can be a huge difference from physical ritual. So the ritual might go as follows: High Priestess: "Lord Herne, I stand before you and offer up the bounty of the season!" Herne: "Your offering is welcome, my child. And in return I offer you my blessing." *Herne places His hands on Maat's head. The * is a marker from IRC [Internet Relay Chat], the program we use for real-time interactions, such as ritual or classes. And, yes, when we do ritual, someone aspects [draws down or becomes possessed by] the deities present. 
There are some very good examples in my book The Virtual Pagan.14 BF: What is absolutely critical for a successful cybercoven? LM: Online groups need: • either a strong leader or a strong group of people with rotating leadership • people who are comfortable communicating—silence is death online • people who are comfortable expressing their needs and wants (and know the difference) Online groups don't survive unless everyone is an adult, or willing to do the work to be an adult—that is why there are so few of them (relatively). A bright, communicative person says, "Let's form a coven, and we'll do it online." They accept every person who applies, have no structure, and don't have enough people to keep conversations going. The leader starts to feel overwhelmed because she's the only one posting topics, and hardly anyone is talking about them. No one wants to pitch in and plan rituals, but they'll gladly show up (not on time) at anything she puts together. There are 250 people in the group, but only five talk and they are all her friends. Is it any surprise they are gone in two years? Leadership is not something most people are born to, and leading online is not easy. It's one reason I wrote a book about magickal group dynamics15—they are not like those found in typical group situations, like school or work. Fundamentally, I think there is nothing that can't be done online that is done in physical groups. Our rituals are strong and successful. I initiate online, and it is as real as any physical initiation. The challenges are different, but one is no better than the other, as an absolute. It's a matter of which is better for the individual. BF: Without compromising their identity, how many of your members live in rural areas versus larger towns or cities? LM: Hmmm . . . I'm going to go with the entire history of JaguarMoon Coven and tell you that about 75 percent live outside of metropolitan areas. 
Overall, about 15 percent have lived in really rural settings (although that is a pretty flexible term). BF: What do you think draws a person to join an online coven? LM: It's a huge benefit for those who don't have a group anywhere near them. Many of my students and covenmates over the years have said that they just don't have another group to work with in their area. For those who are "disabled"—in a wheelchair, deaf, even blind—working online might be the only way they can get into a group. Each year we've had at least one student from out of the U.S.—Germany, France, Martinique, Australia, the U.K., just to name a few. It's also anonymous. Dan the kindergarten teacher can more easily be Ravenwing the Witch online without fear of losing his job because someone saw him hanging out with other Witches. I've had a number of covenmates over the years who were in sensitive professions, where being a Witch was tantamount to being out of a job. Online, no one knows unless you tell them. BF: That brings up an interesting point. One of the ongoing issues for critics of the Internet is the anonymity it can provide. Anonymity can be a good thing for Dan the kindergarten teacher, but how do you know the people who ask to join JaguarMoon are who they really say they are? Have you had any trouble with "imposters"? LM: Anonymity is absolutely a two-sided issue. On the one hand, it's a protective mechanism, one that allows us to explore facets of our personality we might not otherwise allow to be public. On the other, it can be a way to promulgate negativity in a variety of ways. BF: I assume you mean "trolls"? LM: Yes. In JaguarMoon we just haven't had a lot of problems with it. For one thing, we encourage people to share as much, or as little, about their non-class thoughts/responses with others as they wish. So anonymity is a flexible, and protected, device. 
Moreover, by the time you join the coven, you've spent a year with us in a variety of situations—that is a long time to maintain falsehoods. It's not impossible, but where is the benefit? Magickal workings are energetic exchanges; we'd notice if something was consistently being held back, and we'd most likely let that person know that they aren't a good fit and wish them well on their journey elsewhere. BF: What about the other big Internet issue—minors having access to adult material? Most, if not all, legitimate face-to-face covens won't let anyone under the age of eighteen join unless their parents are members, too. How do you handle the fact that a minor can be perceived as an adult online? LM: The class and (by extension) the coven doesn't accept anyone under the age of twenty-one. It's too tough to manage if you aren't at a reasonably stable place in your life, with a fairly significant amount of personal control over your time, schedule, and (most of all) privacy. Our year-long class is a lot more like a graduate-level seminar than an undergraduate degree in terms of reading, writing, and discussion. Moreover, I have yet to meet a student who didn't experience profound change in their life while taking the class. From my perspective, it is as if they say to the Universe, "I want to be a Witch," and the Universe says, "Okay. You've got to get rid of the stuff that holds you back from self-evolution. Here, let me take care of it for you." I'm not saying it's guaranteed, but it is absolutely typical. BF: And you're saying teenagers aren't likely to handle getting rid of their "stuff"? LM: Right! I suppose that if someone lied and said they were old enough to join the class, I wouldn't have any way to verify it one way or another. But I think they'd just drop out because it's too tough. If they are a minor living at home, it's unlikely they could get enough private time to meditate, do classwork, attend rituals and our "live" classes. 
If they are on their own, ages eighteen to twenty-one is a very busy time for most people. I'd guess they just drop out, and hopefully return in a few years. We screen by asking for mundane name, address, and birthdate as a part of the application process. Like I said: they can lie about all of that, but what would the point be? If they are good enough to join the coven after the class, they are stuck maintaining a lie that would become more and more difficult to sustain as time goes on and personal evolution occurs. If not—then, they have proved themselves to be an exception. And then what? Lying is ultimately a waste of resources and energy, and is incredibly self-defeating. We presume people are telling the truth and let it all work out. BF: Do you see online covens as a spiritually viable choice for people who live in tiny towns that are too far away from urban centers to realistically meet folks of a like mind? Why or why not? LM: Absolutely! If there is a local group, you may not like the people in it—this is true even in urban areas. If you're in the "Bible Belt" (which is a pretty big area of the United States), then it might be impossible to find anyone of even a vaguely like mind in your community. Other Online Groups Not all online spiritual groups follow a coven structure—i.e., training new students, celebrating the eight Sabbats and full moons together, or initiating worthy candidates. Ruth Merriam has a face-to-face coven that she is very involved with, but she also works with an online group dedicated to the goddess Brigid. I asked her about her unique perspective in comparing the two experiences. If you don't want an online coven but would like to be in cybercontact with other devotees of your chosen deity, this may inspire you to start a similar group. BF: Tell me a little about your Brigid group. RM: It started as a brainchild of someone else that I knew on LiveJournal. 
We started as a LiveJournal community, but it didn't go anywhere so we switched over to a Yahoo group. Right now there are twenty-four women in the group. As per the nuns who follow Brigid, or St. Bridget, at Kildare, Ireland, we have set up a twenty-day rotation of tending a flame in honor and worship of Brigid. We tend a flame nineteen of the days, and on the twentieth day Brigid tends it Herself. The vigil takes place from sundown to sundown. When a woman first joins the group, she receives a special candle that she can use to light the flame and instructions on how to re-create the special candle when that one is gone. BF: How active is the Yahoo group? RM: There's not a lot of talk in the group. There's a real sense of silent sisterhood—I know that my sisters are doing their job; it's a palpable feeling when the sister who has the vigil before me finishes and I am to begin. Depending on membership, some women do more than one vigil per twenty-day period. Occasionally some of us will talk about personal stuff through the Yahoo group, but it's very much a personal practice with a group source. It's not a coven—anyone can do this. Anyone can set up a group to worship a particular deity. In addition to flame-tending, some of us make and exchange biddy dolls—made of corn husks or rags—at Imbolc. The dolls bless the recipient's home in the coming year, and at the end of the year the doll is burnt. I started doing this in 1987 by making Brigid crosses, then slowly started making biddy dolls. I brought the idea to this group, and it's very effective. BF: What made you decide to join this group? RM: I'd already had a prior attachment with the biddy dolls when my friend came up with the flame sisters idea. I suggested we formalize it—we started with just a few women. The number ebbs and flows, which is fine. BF: If there had been a face-to-face group doing the same thing close to you, would you have joined that instead? Why or why not? RM: Probably not. 
I've got so many other parts of my Craft life that are face to face. The cyberspace nature of this forum is ideal for me. I'd love to meet some of the women in person someday, though. BF: What do you feel are the advantages to being in this group? RM: The personal, quiet aspect. It allows a connection and a commitment that doesn't require scheduling. This is something you don't necessarily have to schedule—if you can't physically light your flame at the beginning of your shift (sundown), you can meditate on it and light it in your heart. For some women, it's the only time they give themselves permission to sit quietly and light a candle and nothing else. BF: That's rather sad. RM: Yes, it is. But it gives them that joyful obligation to take time for devotion. BF: The group is all women, then? RM: Oh, yes. BF: How do you enforce that? RM: With very few exceptions, all of the women in the group were recommended by someone already in the group—either they knew the candidate personally or had corresponded with her for many years. BF: And this referral system also takes care of the potential for an underage person joining? RM: Absolutely. BF: Are there any disadvantages to how the group is set up—personal commitment coupled with an online component? RM: I can speak as the moderator of the group. It's occasionally difficult to have the utmost faith to know that the women are doing what they say they are doing. Some only check in every six months. Without regular face-to-face or phone contact, I have to take it on faith that the women are keeping their scheduled vigils. Other than that, there are no more problems than you'd find in any other online group. We had one woman who was a bit of a drama queen, but we only had to talk to her about it once. It's very self-policing and by far the easiest Pagan group I've been involved with. BF: How experienced are your members? 
I'd imagine that you'd have to have a basic working knowledge of Pagan practice and be pretty comfortable with it in order to participate meaningfully. RM: Actually, they range. There are some that are what I call Paganesque—they've not had any formal training or group practice. Others have a variety of training and experience. I think one woman is Christian and does this as worship to St. Bridget. BF: What do you get out of it, spiritually speaking? RM: I moved relatively recently, and now I don't have the long-standing ties and connections with the people in my "place." As you get older, it's harder to make those connections; I don't have a school-age child, I don't work outside the home, and of course I don't go to church. I'm used to working with a small group of people, but that doesn't mean I like to be isolated. This group is more than just electrons. The flame I tend—there is a sense of community, a sense in this small cadre of women that we are doing something that matters. It's a quiet, reflective thing. For me, it provides the motivation to do what Brigid wants me to do. There are a lot of projects that I'm always working on at home—sewing, paperwork, etc. For a long time, I'd move my little red oil lamp that I light for my vigil every time it was my turn. There was always a project going on all around it. I don't do that anymore. Now I see the lamp as Brigid giving me a kick in the ass to take care of business. So as I work on these things, I speak to Her and She to me. Discussion Groups There are also even more informal Pagan groups on the Internet for people who don't feel ready for coven work or can't find—or don't want to create—a group dedicated to their particular deity. I asked my friend and former "in-person" student Cordelia about her experiences with online study groups. BF: Why did you join an online study group? C: Theoretically, I was in a couple of them. The first one was a Yahoo group, and it was more of a community—not very intensive. 
It was more varied, and we used it more as a discussion group than anything else. Then one of the women on the list privately e-mailed her phone number to some of us, asking if we wanted to do something more intensive. We tried to have a cybercoven. It lasted a couple of years before it fizzled out. BF: Can you describe some of your experiences in that group, both positive and negative? C: We did rituals via e-mail. They didn't do anything, but they were pretty harmless. We all tried to out-pretty each other writing invocations to the four quarters. It was a good exercise in creative writing! Sometimes we'd pick a night and all meditate at the same time, and then the next morning we'd post our experiences to the group. One member was sixteen years old. I don't remember anyone making an issue of her age. If we'd had more sense, we might have worried about it. BF: Were there minimum posting requirements? Did you have to post daily or weekly or whatever to be a member in good standing or anything like that? C: It might have been better if there had been posting requirements. Some—half—never posted much. If there'd been more structure, it would have been better. BF: Why did you leave, or did you leave? C: It kind of died. The woman in charge tried to make a smaller group later—I was one of the people she picked. There were four people in that new group; two of them never posted anything. We tried to continue, to share the results of our simultaneous meditations. I'm still friends with two of the people from that group. BF: What made you decide to join an online Pagan group? C: I liked the people. I wanted to belong to a group, and there was nothing really local to me. I thought, "These are people I like; we're like-minded and can discuss things." Calling it a "coven" made it glamorous. BF: How did your online group experience compare to being a member of a face-to-face group? C: You definitely raise more energy face to face. 
Also, the people you're dealing with are really the people you're dealing with; there's no chance for deception. Real-life covens end, not just fizzle out. BF: Who do you think is best served by a cybercoven? C: People who are really shy or really out in the boondocks. Especially people who are shy, since they're not comfortable going out and meeting others. I appreciated the lessons and learning in the face-to-face coven I was a member of. We were going to have lessons in the online group, but we never did. About half the group had a couple years' experience, and the other half were complete newbies. I guess we were supposed to expound about our knowledge or something [laughs], but we weren't comfortable doing that because we hadn't been doing it for very long ourselves. Things to Consider about Online Groups As I mentioned, there is a great deal of debate within the Pagan community as to whether cybercovens and online study or discussion groups "work" at all, with opponents saying that you must interact with people face to face or you're simply engaging in mental "fluff" and deluding yourself. It's also a basic fact that cybercovens and online discussion groups will not work for everyone. During my recent undergraduate career, I took online courses and in-the- classroom courses—often in the same semester and with the same professors teaching in both forums. Maybe it's just my own personal learning style, but I retained more from the in-person classes than I did the online ones. I learn better in a "hands-on" environment. If this is true for you as well, an online group or coven would probably not be a good fit for you. There are also some things that are hard—some would say impossible—to learn "virtually." Energy work is often cited as one of these, as is learning how to draw down—i.e., serve as a channel for divine possession. If anything falls under the category "don't try this at home alone," it's drawing down. 
I don't want to scare you or discourage you from experiencing the awesome opportunity to literally become one with a god or goddess, but (and this is a big "but") for your own safety and continuing good mental health, do not try to learn how to draw down from a book or the Internet. For all you know, the faceless online person teaching you how to draw down may not have the first clue how to really do it, or may not even be who—and what, credentially speaking—they say they are. In addition to anonymity issues, detractors of online study groups and cybercovens cite the lack of group cohesion as another reason why virtual groups don't work. Their theory is that forming the interpersonal bonds necessary to create a group mind or group gestalt is impossible when the group members are not only scattered all over the country but have also never even seen each other face to face. I am cautiously neutral on this issue. As a veteran of the halcyon days of the popularity of Internet Relay Chat (the "IRC" Lisa McSherry mentioned) in the mid-to-late 1990s, I have to admit that I did form a bond with my fellow regulars in various chat rooms. These were not Pagan venues per se (one was a writer's critique group), but our time together in the chat rooms and/or on the phone with each other felt real. I followed their life problems and trials on IRC and they helped me with mine, just as in-person friends and acquaintances would. It may have all been in my head, but to me—and I think to the other regulars—it was all very, very real. So I'm willing to agree that a sense of group cohesion and some sort of group mind is possible over the Internet. However, I have to agree with the critics who say that it is not possible for a cybergroup to raise energy. Yes, Ruth Merriam said that she could feel "when the sister who has the vigil before me finishes and I am to begin," but that is not the same thing. 
In order to learn and feel how to raise and use group energy—also sometimes referred to as a Cone of Power—you need, well, a group. In general, though, I advocate a good online study group or cybercoven over no group experience at all. And by "good," I mean: focused, mostly free of drama, with a knowledgeable leader or leaders, and one that has not been formed last week or by high school students. It might also be wise to consider Moondancer's answer to the survey question "What Pagan-oriented sites on the Internet do you recommend for shopping, networking, information, etc.?": For the most part, I don't. I check www.witchvox.com every week or so and subscribe to a number of Pagan lists, but, frankly, most of them are not worth the time it takes to hit the delete key. Get involved in your local gardening club or book discussion group. Be with people, not the computer. 14. The Virtual Pagan: Exploring Wicca and Paganism Through the Internet (Weiser Books, 2002). 15. Magickal Connections: Creating a Lasting and Healthy Spiritual Group (New Page Books, 2007). Chapter 7 Community Building We began a PNO, advertising on Meetup (useless), Witchvox (limited success), and with posters at the Chaplain Center on base (complicated). It's been a cycle of ups and downs for years. —noey, coupeville, washington As Noey indicates, it's tough to organize a regular in-person (as opposed to online) Pagan group. More than one experienced Pagan leader has likened the process to "herding cats" or "wrangling butterflies." Organizing any sort of Pagan group can be even harder in a small town, where meeting places are few and far between and potential members are relatively scarce. When asked if they had ever tried to start, or had successfully started, a discussion group/meetup or ritual-oriented group like a coven, most survey respondents replied, "No." Some took it a step further: "Hell no" was a common comment. 
But other survey respondents said they would think about it, and a few even commented that they had no clue how to even begin to gather together those of like mind and compatible zip codes. Starting Your Own Discussion Group, Study Group, or PNO (Pagan Night Out) Discussion groups, study groups, and Pagan Nights Out (PNOs) are a good way for newcomers to, or those interested in, Paganism to learn a little bit more in a friendly environment that, unlike a coven or study group, requires no commitment whatsoever. These get-togethers are also a great way for potential leaders—i.e., you—to start researching and presenting informal lectures on topics near and dear to your heart. A PNO does not necessarily have to take place in the evening; it's just a term for Pagans coming together at a scheduled time in a public place to chat and share ideas—and because Pagans are involved, food usually is, too. I've been to one ritual here in Bakersfield, and it left much to be desired. As sad as that is, there was no energy in the ritual whatsoever. I see people in my town wearing pentacles and they always just come up to me with big smiles and say, "Blessed Be" or something similar. At the moment there are no meetups, because the person who was running them here left town. —spiritrunner, bakersfield, california ( previously in taft, california) I believe we could use a lot more of these informal groups, and I mean a lot. I recently took a quick glance at a few states' Witchvox pages (including that of my own state, Kansas). There are so many (too many) small-to-medium-sized towns that are home to a dozen or so Pagans, but unless someone in the area feels qualified to organize anything, there is no venue for these folks to get together. So they don't. And the opportunity to connect, to share information and ideas and feel like part of a community, is lost. Feeling like part of an ongoing community is very important. 
I recently facilitated a discussion at a major Pagan festival about life as a small-town Pagan. The discussion/workshop was very well attended, and while I was more or less prepared for the "us vs. them" feelings the participants expressed as they recounted stories about the pressure they feel at home to join a specific church, I was not at all prepared for the pain I heard in the attendees' voices as they talked about the isolation they feel from their "tribe" during the fifty-one weeks of the year when they weren't at this particular Pagan gathering. But, like my survey respondents, most of the workshop participants were either too frightened or didn't feel qualified to start their own discussion group, book club, or even regular, open non-Pagan drumming/bonfire event. This is what I told the discussion attendees that day. I hope it helps you, too: Starting your own group or event takes a lot of patience while membership slowly builds. The woman who started the weekly Pagan discussion group in Sheboygan, Wisconsin, sat in the meeting space all by herself every Tuesday for several weeks waiting for others to join her. But she had to; odds are good that the one time she didn't go was the one time a potential member would show up. Starting your own book discussion group or meetup also takes perseverance. Depending on the size of, and attitudes in, your town, there are probably very few venues willing to host a Pagan discussion group or ritual (more on this in a moment). Plus, how willing are you to be at least semi if not fully out of the broom closet by starting your own group or event? For Hometowners who are trying to keep their new religion out of the local spotlight, this may be the biggest deterrent. 
Emigrants, who are just trying to fit in on a number of social, political, and/or cultural levels, may not want to "rock the boat" further by starting a group—even though they may be the ones most comfortable and happy attending at least a PNO on a fairly regular basis. Both Hometowners and Emigrants will have to look very carefully at how their job and/or family (especially school-age children) could be negatively affected by such public exposure.16 And by public exposure I mean Pagan and non-Pagan public—how "out" are you willing to be, not only to your fellow Pagans whom you may or may not know, but also to your neighbors; co-workers; and (for Hometowners) old, non-Pagan friends? This is not a small issue. A dear friend of mine in Columbia, Missouri, took over organizing the monthly discussion group and annual Pagan Pride Day event, which, of course, made her visible enough to the local media that she became the one they called for the required Halloween article and, of course, was the one to talk to on Pagan Pride Day. Despite her requests that the reporters only print her Pagan name, eventually one did not honor that request and she was instantly outed. Unfortunately, my friend was an elementary school teacher and lost her job. She took a chance, decided the organizing needed to happen so the Pagans in her town could connect—and lost her livelihood as a result. But most of all, starting your own Pagan Night Out or study group takes a fair amount of confidence in your own knowledge and your ability to share that knowledge with others. You have to believe that, while you may not be an expert on every possible Pagan subject, you know just enough about a variety of topics to at least facilitate a discussion on that topic and/or be able to know where to go to research the subject enough to organize a couple hours' talk on it. 
A lot of Pagans, even those who live in big cities, don't feel like they know enough to run such a group, so don't feel bad if you think you don't know anything—you're not the only one! I am certainly no exception to this. Just off the top of my head, I can come up with several basic Pagan topics I know next to nothing about, certainly not enough to discuss for two hours. This list includes astrology, herbalism, drumming (for all my folk-dancing background, I have no sense of percussive rhythm), runes, palmistry, planting a garden according to moon phases, massage, kitchen Witchery (cooking is what I do to food to make it edible and no more), Irish mythology, healing, and faeries. Trust me when I say that this is not a complete list! Yet to date I have co-founded an entire umbrella community organization that includes one of the largest gatherings on the East Coast and organized/led the following: one smaller Pagan gathering, four covens, two study groups, two online networking groups, and one monthly PNO—all with varying degrees of success—oh, and written a ream of articles and (including this one) three books about Paganism. You don't have to know everything about everything; as long as you know enough about a couple of favorite topics, you'll be fine. If you've decided you're determined enough, brave enough, out of the broom closet enough, and have good Internet or library research skills for topics you don't know that much about, here are the practical nuts and bolts for starting your own informal, non-coven group: The first thing you need to do is decide how often you want to meet. Once a month is probably best for a book discussion group because your members will need the time in between meetings to, well, read the book. For general discussion, networking, or study groups, I recommend once a week, especially if there is nothing else Pagan-y going on in your town. Why? 
Because people who have heard about the meeting or have seen it posted are more likely to check it out if it's every week. They may miss this week, and even next week, but the week after that they remember in time to actually attend. If your PNO meets the second Tuesday of the month, for example, your potential attendees are more likely to forget which Tuesday this is, and will be less likely to show up. The next thing you'll need, obviously, is a place to meet. If you have Unitarians, a Unity congregation, or Quakers in your town who have their own meeting space, you are in luck. You may have to attend their (Unitarian or Unity) services or (Quaker) meetings for a while so they get to know you. In my experience, a typical Sunday morning Unitarian service ranges from practically Pagan to spiritually neutral; you're not likely to feel uncomfortable. A Sunday morning Quaker meeting mostly consists of people sitting together in silence, much as they would at a Buddhist temple, although occasionally someone will feel "moved by the Spirit" to get up and say something. I am not Quaker, but I have attended several meetings. I've always found that when someone is moved to speak, their words are relevant to what's going on in my life. Once you are comfortable with the Unitarians and/or Quakers, casually ask if you could use their space for a weekly discussion group. Be honest and tell them what the discussion is to be about—i.e., things Pagan. Offer to accept small monetary donations from the discussion attendees to cover the cost of utilities your PNO will use (lights, heat, water) during the meeting. Promise you will not do Pagan ritual in their sacred space. If the Quakers or Unitarians say no, respect their answer and move on. If you don't have Quakers or Unitarians in your town or you're not comfortable working with them, you will need to find a more secular year-round place to meet. 
Unless you live in a part of the country that has perfect outdoor weather 365 days of the year, that place needs to be inside—moving the meeting back and forth between a park in the summer and an indoor location in the winter is going to confuse potential members, and likely cause them to miss meetings. Many Pagans host regular get-togethers and classes at the local public library, where meeting rooms are often available for free (or for a nominal fee) for non-profit groups. The only problem is, you will likely want to have your meeting in the evening (after people have gotten off work) and/or on the weekend, and many small-town libraries close for the day at 5:00 p.m. and are not open on the weekend at all. Check with your library, though. Sometimes they will have evening hours once or twice during the week. As long as you are not charging any sort of entry or attendance fee, the library should be okay with your group meeting there. Some small towns have colleges; my town, Baldwin City, Kansas (population 4,401), does. See if you can find someone sympathetic who is a student or employee of the college, because the school will likely make a classroom available for free for your group to meet in with this person's help and sponsorship. Check the history, anthropology, or women's studies departments. If the college doesn't have a women's studies department, check the curriculum catalogue—usually available for viewing on the college's website—and see if any women's studies or women's literature classes are offered as part of the sociology or English departments, respectively, and talk to that professor. Do not, I repeat, do not think about hosting the regular meetings of an open group in your home. Newcomers to Paganism won't feel comfortable going to a stranger's house and, really, how comfortable are you with people you don't know or don't know well coming to your home in a Pagan context? 
Your best bet if there are no welcoming churches or colleges in your town is a restaurant or coffee shop, one that is either very busy so no one can hear well enough to eavesdrop on your group's discussion or one that has little to no business so there's no one else there to eavesdrop on your group's discussion. I recommend against buffet-type restaurants where attendees must pay for a meal to get in the door whether they eat anything or not. Some folks can't afford it (especially if there are children involved) or have dietary restrictions—vegetarian, vegan, or food allergies—and can't eat the buffet fare. My favorite PNO site was a pizza place, where even though there were tables and booths in the restaurant, about 90 percent of its business was delivery. Hardly anyone ever ate there, so we pretty much had the place to ourselves. You didn't have to order anything, and a glass of ice water was free. The manager was tickled to bits to have ten to twenty people regularly show up at his restaurant and, at minimum, order a soft drink. If he'd had his way, we'd have been there every night, and as long as at least half of us kept ordering food and drink, he couldn't have cared less about our obvious jewelry and topics of discussion. Some chain restaurants like Perkins and Denny's have private or semi-private back rooms available for group use, but you need to reserve them in advance and make it clear in your publicity what "name" the reservation is under. If you find out that the management is reluctant to send people who ask for your party to the back meeting room, or if somehow the reservation you made three weeks in advance is almost always "lost" (or if the manager tells you, "Oh, another group is using the room"), realize that the restaurant is trying to tell you something: your group is not welcome. Move on. Speaking of publicity, this is your next hurdle.
Unless you are ready to personally assume a minimum of twelve dollars per month in fees or strongly request (demand) a small financial commitment from your attendees, I recommend against using the services of Meetup.com. Yes, Meetup will send out the lovely reminder e-mails for you, but you can do that yourself for free. Fortunately, there are cheap (i.e., free) ways to get the word out about your group. Witchvox.com, of course, is a must-post place for your PNO information. If people in your town and the next town over have a personal listing on Witchvox that says they are open to invites, send out an e-mail. You don't have to say much, just a quick "Hi, I'm starting a discussion group. We meet every Wednesday at 7:00 at such-and-such location. You're welcome to come, and if you have any topic ideas for future meetings, let me know!" You can also send a similar message to people who are on Myspace and live close to you. You don't even need to friend them first. If you know of semi-local Pagans through Facebook or Twitter, send them a notice, too. It's free! Check and see if there are any Yahoo or Google e-mail groups that cover your town. Even if you are an hour or more away from a big city that does have a Yahoo group, join it—you never know who else in your town is a member. Post polite, occasional reminders and updates about your PNO and remember that this is not your personal publicity e-mail list. If there are no Yahoo e-mail groups in your area, consider starting one. It's free and easy; if I can start one, so can you! Informal bulletin boards or information kiosks at your local college are also good places to post a flyer about your group. College students are often curious about religions other than the one they grew up in. Since these boards and kiosks are often outdoors, you may want to place your flyer in a clear plastic sheet protector first. 
Also, make plans to repost your flyer every month or so—some college students are curious; others are pretty strong in the faith of their childhood and may feel the need to tear down your flyer. See if you can have a notice about your PNO or discussion group listed in the religion section or community calendar page of your local newspaper. Some editors won't allow it, but others may surprise you—you won't know until you ask. If you have a Unitarian church or fellowship in town, even if you're not meeting in their space, ask if you can post a flyer on their bulletin board or have a notice listed in their newsletter or on their website. Do you have a natural food store or food co-op in town? Ask if you can post a flyer there—many Pagans are "into" the teas, herbs, "green" cleaning supplies, and cruelty-free hygiene products that natural food stores sell. They'll see your flyer the next time they need to stock up on sage or organic produce. The flyers are posted; your Witchvox notice is getting hits; what do you do now? You put on as much Pagan-identifying jewelry as you're comfortable with, go to the meeting place at the regularly scheduled time, take a book or magazine (or in my case, knitting) to keep you entertained, and you sit. And sit. And sit. Make a list of topics while you sit and include a brief discussion outline for each one—it's good to be prepared for the day that someone actually comes. As I mentioned earlier, the local PNO coordinator in Sheboygan, Wisconsin, sat alone every Tuesday for at least a couple of months before anyone else showed up. But show up they did, and now they've not only got a lively weekly discussion going, but the group also collectively decided to start a small, local Pagan festival that had about twenty-five attendees the first year. Not bad for a town where, just a few months earlier, the Pagans didn't even know each other.

You've Got Them, Now Keep Them

The key to having a good open group is to be flexible.
One of the best tools we have found for our discussion group is question-and-answer night. Everyone writes down a question, and we put the questions in a box; then, as each question is drawn out of the box, the entire group discusses it. It's actually a great learning tool. The biggest pitfall of running a discussion group? If you allow one or two people's personal drama to come into the group, it will destroy a group faster than anything.
—julia, east stroudsburg, pennsylvania

Once people start coming to your discussion group, what do you do? How do you start each meeting? How do you keep their interest? How do you handle the inevitable personality clashes that come up? At the beginning of each meeting, devote a few minutes to introductions—name, Pagan path, some spiritual history—and if your group has a short list of rules (no interrupting, speak respectfully, try to stay on topic, and so on), go over them. Then start talking about this week's topic. Make a note of other subjects or tangents that come up in the discussion—they may make great topics for future meetings. As Julia hints at in the quote above, the best way to keep people coming back to your weekly event is to offer a variety of information in as many different formats as you can. Just because your personal practice is primarily Celtic or British in influence doesn't mean others in your group wouldn't be interested in information about the Greek and Roman pantheons. Invite regulars to your group to present or facilitate on a topic they know well. You don't always have to be the person leading the discussion—but have a backup topic ready in case at the last minute they don't show up.
See if you can find knowledgeable guests to come to your meeting and talk about their area of expertise—a local college is full of professors who know a great deal about Egyptian history, mythological motifs in Victorian art and literature, how to make your own ink, how to identify birds indigenous to your area, and a variety of other topics of interest to the members of your group. If you have a paranormal investigation group in your area, invite them to come and present some of the evidence they've collected (the closer to Samhain you can schedule this, the better!). See if the local wildlife or raptor rescue organization can bring some of their animals to at least part of your meeting and talk about the animals. We did this one just for kicks at the gathering I helped start in Maryland, and it quickly became one of the most popular parts of the festival for kids and adults alike—and the wildlife rescue group's donation box was always full when they left, which means they loved coming as much as we loved having them there. Alternate hands-on "workshops" with discussion sessions. If you or another group member is good at reading tarot or working with pendulums or energy-sensing, don't just talk about it—do it! If you can offer basic refreshments, even if it's hot water for tea or hot chocolate and a plate of store-bought cookies, do so. People in general and Pagans in particular like to eat, and hot tea is particularly "homey" and welcoming in cold weather. Use various media to help get your point across. The leader of the Sheboygan weekly PNO devoted an entire week's session to showing the movie The Craft to inspire discussion about Pagan ethics. I'm assuming she used the film as a cautionary example of how not to conduct your magical affairs.
If your local movie-rental place doesn't carry The Craft, consider showing Gladiator (horribly historically inaccurate, but handles the Roman tradition of family gods quite well) or The Lion King (excellent way to illustrate the winter/summer, Lugh/Balor, Oak King/Holly King conflict) instead. And then there's the inevitable conflict. One group member doesn't like one of the others; one dominates the conversation; another one gives off squicky vibes and is driving some of the other attendees away. What do you do? Encourage the members who don't like each other to work it out themselves, or at least keep hostilities to a minimum during the weekly meeting. If one party can and the other one can't, ask the one that can't to leave. If the conflict is causing that much disruption, you're doing the group a favor, believe me. Attendees who dominate the conversation, always seem to take the subject off track, or interrupt others can almost always be curbed by some basic group rules about limited talk or comment time, staying focused, and only speaking when others are not. A "talking stick" is particularly useful for stopping interrupters in their tracks. If they don't have the stick—or Buddha statue, or magical mystery tea mug, or whatever you choose the object to be—they have to stay quiet; it's as simple as that. Unfortunately, Squicky Vibe Person is going to be the toughest one to deal with. You have to be very, very careful when dealing with this sort of problem. The trick is to make absolutely, positively sure that the person truly is projecting a presence that most everyone else, and not just one person, finds objectionable. We had this problem in the Sheboygan weekly group when, after months of just the leader and one other member attending every week, a young man joined about the same time I did. He made that other member quite nervous, and once she did some checking into his criminal background, the rest of us were nervous as well.
Not that he'd done anything truly horrendous, but enough to garner a record. The longtime member wanted the young man out. The rest of us (a couple more people had joined by the time this issue came to a head) were in "let's wait and see what he does" mode. The leader handled it beautifully. She spoke with the young man and with the longtime member separately. It turned out that the longtime member resented having to "share" the group with the rest of us—up until I and the young man showed up, she'd had this private Paganism 101 class going and she did not want to give that up. I knew too much about the topic near and dear to this woman's heart—Pagan practice—for her to want to get rid of me; she'd apparently decided I was valuable. That left the young man, who eventually decided we were all too old and boring for his tastes. They both ended up wandering away from the group. Problem solved. As your discussion group, book club, or meetup evolves, a core group of members will develop; don't be afraid to ask for their advice and input when your own unique interpersonal problems arise. Of course, it's not always this easy, as some survey respondents reminded me.

I did try to start a discussion group to work each quarter with a different element. People came and went. Then just lost interest or got busy with other things.
—k, sevierville, tennessee

In high school we had a group that got together once a week to discuss stuff and have a small circle. It lasted about a year or so, then people either became overly busy or uninterested.
—spiritrunner, bakersfield, california (formerly in taft, california)

In high school, after seeing all the hype about the "prayer around the flagpole," I wanted to start up something for all the rest of the kids who weren't Christian, who wanted to explore outside the Bible box. It ended up being just myself and two close friends who got together to study, read cards, chant, and hold circles. This only lasted a couple of years, though.
—ravenna, dowagiac, michigan

Going More Formal: Starting a Coven

If your discussion group is going well, yet there seems to be an "inner" group of people who get along well and want to try to celebrate the Sabbats together, then the role of coven leader may be in your future. First and foremost, before you actually decide to form a group together, try doing a couple of rituals as a group first. Who knows, you might prefer quiet, meditative circles, while the other members like to drum and dance. You may prefer to plan and organize every Sabbat to a T—up to and including ten- to fifteen-page scripts that include every word of the ritual to pass out to every attendee, while your fellow discussion group members might prefer spontaneous, off-the-cuff rituals that follow a basic sequential outline that looks something like this, if this much: (1) Cast the circle; (2) Call the quarters; (3) Invoke the God and Goddess (which ones? We'll know when we get to that part!); (4) Do the actual ritual—celebrate Lammas, do a full moon meditation, etc., however it feels right; (5) Cakes and wine; (6) Dismiss the God and Goddess, quarters, and circle; (7) Eat potluck food until we drop. As I mentioned in the ritual etiquette section in chapter 2, there's nothing at all wrong with different styles of ritual; you just want to make sure as early as possible that everyone's preferred ritual forms are compatible before you formally create a coven. You will save yourself much heartache and many headaches if you do. The publicity and organizational problems that accompany starting a discussion group still apply when you're starting a coven, and the solutions I've suggested for meetups should work when you're creating a more formal, committed unit. However, you need to know that running a ritual group has its own unique set of headaches that you must consider.
Here are some questions you need to ask yourself—and honestly answer—before you commit to taking on this huge responsibility: How good am I at forming group cohesion? The phrase "It's like herding cats," which I referred to at the beginning of this chapter, was probably first coined by a coven leader. If it wasn't, it should have been! Pagans tend to be independent in thought, strong in their opinions, and just plain stubborn when it comes to matters of religious expression. Trying to get a group of these sorts of folk to agree on anything, much less trust each other in order to create group gestalt, would try the patience of a Buddhist monk. If you've been mediating conflict pretty well in your discussion group, you should be prepared to do what it takes (whatever that is for your specific group) to help your members work together in a highly focused setting. Does this mean enough to me that I will commit to this for the long haul, or do I just want to be important? A coven must meet, minimally, eight times a year for the holidays. Once you include full moon rituals and twice-monthly classes, that number swells to forty-five times a year, which is pretty darn close to once a week. Running a coven means you have to schedule the rest of your life around the group's schedule. You can't be a coven leader and say, "Oh, I won't be at Beltane, even though it's supposed to be at my house, because I'm taking my family camping." No. You have to be available for every single coven get-together. Vacations, family reunions, even an afternoon movie and dinner afterward with your spouse all have to take second place. And with a good coven lasting five to fifteen years, that's a long time to put off visiting your mom in the next state over on a "free weekend." I once attended a ritual at which the host for that particular holiday was not going to be home until about five minutes before the ritual was supposed to start—his daughter had a dance recital that day.
We all showed up anyway, and there was someone to let us in, but of course the recital ran late and of course that meant the ritual started late, too—by at least an hour and a half. What should the host have done? Trying to find an alternative site for the ritual, one where the host(s) would be home all day, would have been a good start. If the calendar on your wall is already full of family obligations, you may want to rethink starting a coven. How many hot meals am I ready to miss? It sounds trivial, but there were some periods with my students when there would be an emotional emergency or crisis of conscience every night around suppertime for four or five days in a row. I ate a lot of formerly hot food at those times, since it's rude to audibly chew your dinner in the ear of someone who is sobbing hysterically—at least it is where I come from. If mealtime is your special time with your significant other or family and you want to lead a coven, turn the phone off until the dishes are done and the leftovers are safely packed away. Is my home usually clean enough to have rituals in? If you're the leader, chances are your home will double as the covenstead. Fairly or unfairly, the filthier the home, the less likely your fellow coveners will take you seriously, much less want to come over to your house. I don't mean moderately cluttered, and I am not suggesting your living room/family room has to look like something from the pages of House Beautiful magazine. But if it's impossible to tell what color the front of your microwave oven is supposed to be and there's an inch of dust on top of your entertainment center (and you're too lazy to clean your house thoroughly before every ritual, class, workshop, or a coven member dropping by in a state of emergency), you're not ready to lead the group. 
With a small child and multiple pets, including a hundred-pound long-haired dog that blows his coat (sheds all the undercoat at once) twice a year, my husband and I really had our work cut out for us when we decided to start a coven a few years ago. Left to our own devices, we're pretty messy people, so we were actually grateful for the full roster of classes, Sabbats, and moon rituals, because it made us keep the house reasonably clean at all times. I'm not saying it was easy, though. Do I have my own act together? Ideally, you should be in a reasonably stable stage in your life—done with school (including graduate school); comfortably pursuing a career or in a steady job that offers enough financial compensation to at least pay the bills; and either in a supportive, long-term emotional relationship or comfortable with your lack of one. Running a coven takes a lot of time: you have to interview every potential member, mediate and resolve conflicts between members, plan and execute at least eight holiday rituals and up to twenty-six moon rituals (assuming your group observes each new and full moon), plan and teach classes, and, as previously mentioned, have at least one shoulder available at all times for your fellow coven members to cry upon as needed. If a large percentage of your spare time is spent doing homework, worrying about money, trying to find a job, or cruising dating websites, you won't have time to lead your coven. Are there other obligations in my life that need me more? Aside from the time obligations, I'm talking about small children, non-Pagan significant others, and pets. Little kids take a lot of time, and (I say this from experience) once they reach the age of two until they're in high school, you're not likely to include them in ritual without excluding the needs of your other coven members.
A little kid in ritual with all the sharp shiny things and open flame is going to command a lot of High Priestess Mom's and/or High Priest Dad's attention just so Toddler won't hurt himself. With that going on, how are you going to also monitor your first-time-in-ritual student who is about to make herself sick because she can't ground and center properly and you're too busy keeping Toddler's hand out of the candle to help her? Can you really find a 100 percent reliable babysitter for each and every ritual? And, as I mentioned in chapter 2, in the section on basic Pagan etiquette, what if you have a potential student who is deathly afraid of, or violently allergic to, your long-haired dog? Do you have the strength to steer the student elsewhere because of your pre-existing commitment to your pet, knowing full well that in your town there may be no "elsewhere" to steer her to? What if you start the group and then your non-Pagan significant other tells you, "It's the group or me. Pick one." Consider your answer to these questions very carefully before you hang out your "I want to run a coven" shingle. When my husband and I were first starting out in the coven-running business, we were also a foster home for a local no-kill animal shelter, so in addition to our own multiple pets, we usually had one or two extra dogs in the home that we were training to behave in a family environment to make them more adoptable. There were several times when, much as it pained me, I had to say, "Can I take Zoe/Lucky/Tina next week? I'm having company this weekend." In other words, I didn't want to deal with a new good-natured-but-house-clueless dog and a living room full of coveners and guests for a Sabbat at the same time. It was hard, because dog rescue is as spiritual an activity to me as hosting ritual, but I had to find balance . . . until the time a litter of nine beagle puppies were born in my laundry room twenty-four hours before our Ostara ritual. 
All I can say is, it's a good thing baby animals are an appropriate spring motif! How good are you at saying no? I've done it—all good coven leaders have done it—taken on a member we shouldn't have, and suffered the consequences as teachers because of it. My least favorite ex-covener stole my husband's wedding ring that he'd taken off because it'd become too loose due to recent weight loss. Our "guts" told us not to accept him as a member, but we did anyway—mostly because he was friends with another of our members who begged us to take him in. Can you look a potential member in the eye and say, "No, you can't join"? What if she cries? Can you look a current member in the eye and say, "You need to leave"? What if he becomes angry? Again, consider these questions—and your answers—very carefully. We once had to ask a student to leave the group because she could not—would not—forgive another student (and the first student's former best friend) for something. But we got to the point where our attempts at mediation broke down, the glowers across sacred space got to be too disruptive, and we said, "Enough." The sobs when we told the student to leave haunted us for a long time. It would have been easier to keep the one we asked to leave, but the second student showed more remorse and grew more as a person from the whole incident, so she's the one we asked to stay. It was one of the hardest things we ever had to do as coven leaders. I can say, "I hope I never have to do anything like that again," but realistically, if I'm running a coven, I know eventually I will have to. Are you willing to lead a more than exemplary life? This is probably the hardest part of being a coven leader—the constant spot on the Top Ten Gossip Topics list in your community. No matter how small or spread out your local Pagan community is, word will get around. And it doesn't matter what you do: if you are dating (serially or all at once) more than one person, you're a slut. 
If you're in a closed, monogamous relationship, you're a stuck-up prude. Every conflict you have with a member that results in the member leaving your coven will be blown all out of proportion by the local (and, thanks to the Internet, the not-so-local) Pagan community. You will be vilified, stabbed in the back, put on a pedestal, and worshipped—sometimes all by the same person at different times in your relationship, and always with the eyes of your fellow local Pagans upon you every step of the way. Can you live in a fishbowl? Because the minute you start to lead a training coven, your new address is First Glass Container on the Right. Back when A.G. and I were just friends, I once went out on a Friday-night date with a very nice man that unexpectedly turned into an overnighter. The coven I was running at the time had scheduled our Litha celebration for the next day, so I was very careful to get home in plenty of time to clean the house, prepare the ritual space, and make sure my contribution to the post-circle potluck was ready. In other words, I completely fulfilled my obligations to my group, even though I had been out all night. Unfortunately, my working partner had tried to reach me the evening before to discuss some ritual detail. This was in the time before cell phones, so of course I missed her call—and she'd called pretty late. When she arrived the next day for the ritual and found out where I'd been—and inferred correctly what I'd been doing—she said, "Gee, Bronwen, you really are a slut." Ouch. We didn't stay working partners for too long after that. And of course she thoroughly enjoyed spreading the story as far and wide as she could. I could say that her reaction to my personal life and the fact that she chose to tell the world in as nasty a way as possible didn't hurt—but I'd be lying. This is not to say that leading a coven—even an informal, non-training coven—is not worth the trouble. Far from it. 
There is always that moment when you see the "Aha!" in a member's eyes as he gets, really gets, what the Gods are trying to tell him in ritual. That moment is the best reward there is. If you crave those moments, welcome to the few, the proud, the coven leaders! The Survey Respondents' Perspective I am currently considering joining a group. As in any other social situation, you get out of it what you put in. I have taken time to build friendships. The people in this group are slow to accept you until you hang around and let them get to know you. I think it's worth the time, but others complain and say that they are snobs. —k, sevierville, tennessee Again, starting or joining a group is not that easy, as the survey respondents can attest. Some had good advice, and some had tales of caution and woe about their experiences of starting a coven. K makes a very good point—very often people in a formal group are perceived as "snobs" who think they're "better than" the solitaries in the area. The group members may not feel that way about themselves at all, but sometimes not only does the coven leader have to lead an exemplary, perfect, overly scrutinized life, but the rest of the coven does, too. I have never tried to start a coven or other Pagan group, but a friend did and it turned out to be a disaster. It was hard for us to get together because of conflicting work schedules. It also wasn't well-organized, so nothing really "got done" when we did get together, and one of the members brought in several ex-girlfriends and it turned into a lot of catfights and bickering, so we disbanded. In another group I was involved with years ago, the leader told us to always think twice about who we invited because you never know how much dislike and complications you can bring into a group by inviting the wrong person. That was really good advice, and I wish my friend had considered her choices more thoroughly instead of opening up to anyone who wanted to join us. 
—keltasia, shamokin, pennsylvania I am not really interested, at this time of my life, in organizing large groups, dealing with acting-out adolescents who want to shock their parents, or anyone who is more into being "out there" or in people's faces about their religious beliefs. I am also not willing to have people with serious mental illness or personality disorders in my personal circle. I certainly support everyone's right to deal with their spirituality in their own way, but I don't want to necessarily be a part of it. As a friend of mine once said: not everyone needs (or wants) to be a part of everything Pagan. A common pitfall for me is getting too excited about the new Pagan I meet and wanting to include her or him before getting the "big picture" about this person. —rowen brianna, bowling green, kentucky Rowen Brianna's comments about people with mental or personality issues may sound harsh, but chances are excellent that you are simply not trained to effectively work with such folks, no matter how much they may need what you and your group have to offer. One of my favorite sayings as a coven leader is "Religion is not a substitute for therapy." What I mean by this is that your coven, your rituals, and your classes are not a good or effective alternative for someone who genuinely needs professional help, no matter how much they (and you) want your events and the training you're giving to be an effective alternative. It's amazing how quickly a person who needs professional help can monopolize your rituals, your classes, and the rest of your coven life. A.G. and I once had a student who was bright, funny, articulate, dedicated—and probably the most insecure person we had ever met. She absolutely, positively had to be the center of attention at all coven rituals, classes, and social get-togethers. 
If she wasn't the center of attention, she always had a crisis that restored her to the role of primary group focus—she had to leave her (allegedly) abusive boyfriend right now; she was being "forced" to draw down her chosen deity; the deity was then asking her to do awful things. The list went on and on. Finally, exhausted, A.G. and I realized we'd made a huge mistake and asked her to leave until she was in a better space to do the work the coven was formed to do. Funny, she never came back. A wise coven leader will know when someone needs more help than the leader can effectively offer, and will set firm boundaries by not allowing the person to join the group—or asking him or her to leave if already a member—until the person has received the needed help. Pagan author Amber K has written an excellent, comprehensive book on starting and running a coven.17 I highly recommend it. For the Truly Brave: Public Events Whether your little band of intrepid small-town Pagans chooses to stay a small discussion group or morphs into a coven, eventually the topic of "going public" is bound to come up. The more comfortable your group feels about its existence, the greater the temptation to try a Pagan Pride picnic or even a public ritual. Here are some tips for a successful public event: Mask it as something else The Pagans in my husband's hometown of Salina, Kansas, decided to have a Pagan Pride picnic a few years ago. They very cleverly scheduled it near the end of the Pagan Pride season in mid-October and said very specifically in all of the publicity that everyone attending should dress up in costume, especially children. That way if any non-Pagans came by and asked what they were doing, they could answer (more or less honestly), "We're having an early Halloween party." Just a few years before this picnic, a non-Pagan health food store in town was actively shut down (as opposed to "went out of business"), so the Salina Pagans were pretty brave to meet in public at all. 
While it's hard to pretend a ritual is anything other than a ritual, a Pagan Pride picnic could be a "medieval re-enactor's event," a "Going Green festival," or, as in Salina, Kansas, a "Halloween party." Take it out of town Your picnic or ritual does not need to be in the park right next to the elementary school. Find a county park or state park nearby with picnic tables, a shelter, Porta-Johns, and a couple of permanent barbecue grills—and you are all set. The rangers probably won't ask why you want to rent the space, and you don't have to volunteer the information. However, you will need to adhere to park rules regarding weapons (athames and swords may qualify; check your state statutes), alcohol, pets, noise ordinances (which may prohibit drumming if there are residences nearby), and fires in places other than the grills provided. Make absolutely sure you have your space permit paperwork with you at the event. Be good stewards of the earth; even if there are trash cans onsite, plan to take your full garbage bags away with you. Pick up previous users' trash while you're there. Appoint a media spokesperson in advance If you are in town and doing something interesting, there is a chance someone from the local paper could show up. As a journalist, I can tell you from experience that small-town newspaper reporters have a sixth sense about these things. As part of your planning, decide who will be the person to talk to the press—if they come—and help your new media spokesperson come up with a list of possible questions and calm, rational, well-thought-out answers. Have fully charged cell phones available Don't assume your fellow organizers will have their cell phones on them "because they always do." You never know when someone will get hurt and need an ambulance, a child might get lost, or, Gods forbid, you need to call the police because a passerby has major issues with you being in public or your pesky local reporter just won't go away. 
Discuss in advance who should call the authorities—and under what circumstances. Keep it short If there's a ritual, half an hour to forty-five minutes in circle maximum is good. The entire event does not need to last all day. Three to four hours for a quick ritual and potluck meal is good, and will give attendees a long window of arrival and departure times. Adopt a charity Plan to make a charity part of your event. Have participants bring cans of food that you can later donate to the nearest food bank—you can use the group's name or not. Ask participants to bring cans of pet food or boxes of kitty litter for the local animal shelter, or paper, notebooks, and pens for the local school-supply drive. I will freely admit that large public ritual is not my area of expertise. Isaac Bonewits, however, wrote an excellent book on the subject.18 When Alone Is Better This is not to say that you must interact with other Pagans; there are certainly advantages to staying solitary and never meeting a fellow spiritual traveler. One good reason to stay solitary is the prior family and pets commitment I've mentioned. Also, if you have a special-needs child and the financial and personal-care logistics of actually attending even a local Pagan event is too much, you should not feel guilty for keeping your practice private. If you have too much to do at home, don't go. There is a certain amount of positive self-reliance that one can feel by staying solitary. No one else is responsible for your spiritual development but you; if you want to grow and improve, it's completely up to you and the Gods how you will do so. And if you've met the other Pagans in the area and you just don't feel comfortable interacting with them, then by all means don't—your "gut" may be telling you something about your mental health or physical safety. Listen to it. But for the rest of you who crave and want to work toward a connection to local Pagans, I can only say: If you build it, they will come. 
I truly believe this. 16. Family and work issues will be discussed in depth in chapter 8. 17. Amber K, Coven Craft: Witchcraft for Three or More (Llewellyn, 2002). 18. Isaac Bonewits, Neopagan Rites: A Guide to Creating Public Rituals That Work (Llewellyn, 2007). Chapter 8 Problems, Like Charity, Begin at Home I've never had any real issues regarding my faith in the workplace, probably because I kept it low-key. I was working in a nursing home and could speak to any of our patients about death/dying/religious issues openly without specifically stating what my religion was, because they wanted help with reconciling what they were going through with their own religion. —keltasia, shamokin, pennsylvania Overall, the survey respondents said, and I agree, that the two hardest things about being a small-town Pagan are how your path will be perceived by co-workers and supervisors, and whether or not to raise your children openly Pagan. Emigrants may be used to a certain degree of acceptance, tolerance, or just plain apathy—no one at work gives a damn what you call yourself just so you get to the office on time and do your job while you're there. Emigrant children may be accustomed to more culturally diverse classrooms/classmates and music teachers who stay away from Christmas carols completely. Hometowners, on the other hand, may have the added burden of bosses and co-workers who have known them their whole lives, and who might notice and comment when they don't show up at church anymore. Their children may be the second or third (or more) generation to attend the local school and may have some of the same teachers as their now-Pagan parents, which can lead to some pretty awkward parent-teacher conferences. 
Not surprisingly, single Emigrants who have moved to a small town because of their career and single Hometowners reported some difficulty finding a partner, but each survey respondent was very firm on one romantic issue: it's far better to stay single than to be in a relationship with someone who doesn't share, or at least tolerate, your Pagan path. Problems on the Job When my husband and I lived in Columbia, Missouri, we were pretty well-known within the Pagan community. We ran a training coven, we participated in (and often led) the monthly Pagan Night Out, and we occasionally acted as High Priest and High Priestess for public Sabbats. Because of this, and because some of our students were also in college, we could count on giving at least two interviews apiece per semester, either for the University of Missouri student newspaper or as the subject of a paper in a women's studies or comparative religion class. We also occasionally participated in panel discussions on topics like the origin of Halloween, or contemporary Paganism versus anthropological and historical perspectives (it was a university town, after all), that were covered by the local newspaper, the Columbia Daily Tribune. In other words, my husband's professors, PhD committee, and fellow graduate students, and the upper management at my insurance-company job, had ample opportunities to read in some media format or other that we were Pagan. I don't know if they did or not; neither of us experienced any repercussions at our respective jobs. In the fall of 2005 I was working in the admissions office of Eastern New Mexico University in Portales, New Mexico, where my husband was a professor. As I mentioned earlier, there had been a Pagan student group on campus, and some of the members wanted to get it started again. I volunteered to let the weekly student newspaper interview me for the Halloween issue. 
Maybe some Pagan or at least Pagan-curious students would read the piece and express interest in jump-starting the Pagan student group. It was a good article, very fair, and the student reporter worked hard to quote me properly (as a former journalism major myself, I appreciate this sort of thing even more). No one in my "chain of command" at work said anything to me, including the vice-president of admissions. However (and maybe it was just a coincidence), from the day my "interview a Witch for Halloween" article came out, my immediate supervisor began to make my life a living hell. Oh, nothing that could be proven, and nothing big—it was little things like pointing out my errors in as loud a voice as possible, checking to see how I was doing with my stack of work twice as often (at least) as she checked with anyone else, allowing other employees to arrive a few minutes late and leave a few minutes early without recrimination (but I had to stay until five o'clock sharp to cover the phones all the time), and treating me in a condescending way that just set my teeth on edge. I complained and was told to "suck it up," that it was "just your imagination." The situation eventually got to the point that I would burst into tears every morning at the thought of having to go to work. That's when my husband jeopardized his own career as a professor and confronted my supervisor's boss about it. She called me into her office after he left and laughed, saying how "cute he was" to stand up for me. Since he slammed her office door so hard I heard it from half a building away when he arrived, I still sometimes wonder about her definition of "cute." Eventually, it was made clear to me that I was in a position where I could either quit my job or get fired, so I quit. Was the newspaper article responsible for my slow descent into the job from hell? I will never know for sure. 
But I will also think twice and evaluate my work environment very carefully before I allow myself to be interviewed by non-Pagan media again. Many of the survey respondents echo this sentiment, and choose not to be "out" as Pagans, or are only "out" to a few select people, at work: It has made the workplace a little "sticky" at times, but all in all I tell them, "Don't be friends with me because of my beliefs, but instead be my friend because of who I am." That works pretty well, actually. It also helped that one of my boss's daughters went through a year and a day of study with me. At first I had conflicts with our mechanic, as I was a school bus driver. He is very Christian, but over time he learned that I wasn't going to sacrifice any kid or anything, and he became one of my closest friends. He and I learned to respect each other's beliefs even if we didn't agree with them. —jenn, mountain home, idaho I will probably come out gradually as people know me as a person and as a professional . . . or not. It depends on the climate and who really needs to know. I don't enjoy other people's religious diatribes, and not participating in that discussion at work is a way to do that. I generally say, "My spiritual path doesn't lend itself to organized religion and is personal. Why is it important for you to know?" The last sentence is reserved for the truly nosy and annoying. —rowen brianna, bowling green, kentucky I lost a job once because the boss found out I took a workshop at the local New Age shop. Stupidly, the shop confirmed I had been there and explained what was discussed to my boss. Worse yet, it was a horrible workshop. —noey, coupeville, washington Surprisingly, other respondents reported no work problems at all. I can honestly say I've had jobs in addition to the insurance-company job where it wasn't an issue. In fact, this past St. Patrick's Day my workplace had some information about the origins of the holiday and other information about Ireland. 
I had to smile when I saw the poster on St. Brigid that started "St. Brigid was originally a Celtic goddess . . ." I openly wear pentacle earrings every day at this job, and not one person has noticed—or at least noticed enough to say anything. On the other hand, I do work in a medium-sized town where crazy Pagans can wear satin, thigh-length leopard-print bathrobes to Walmart and no one says anything. Whether or not to be "out" on the job, and the possible ramifications, truly need to be evaluated on a case-by-case basis. My bosses have been understanding, and have let me have time off when I needed it. —evy, bolivar, new york I've had no negative experiences. At my husband's workplace (in Madison), everyone knows what I believe, and I have not had anything but questions and positive reinforcement. —witch of the woods, merrimac, wisconsin Raising Your Children Pagan We've all heard the horror stories of raising Pagan children in small-town America—everything from the kids being suspended from school for wearing a pentacle to child and family service agencies removing children from the home simply because the parents are Pagan. Frankly, I expected that at least some of my survey respondents would have heartbreaking tales to tell about losing (or almost losing) their kids, major harassment at school, or something equally horrible. But, as you will see by the comments in this section, they didn't. As a trained journalist, I can only report the stories I get or find and can confirm; I didn't receive any hair-raising stories about raising Pagan children in small-town America, so I can't report any here. That doesn't mean the stories we do hear haven't happened, but bear in mind that we don't know all the details of those cases. 
In general, my survey respondents said—and I completely agree—that if you want to stay off the radar of child protective services because you're raising your kids Pagan, make sure the interior and exterior of your home is reasonably clean; your children eat breakfast before they go to school every morning (or make arrangements for them to eat at school); they don't wear the same dirty clothes to class every day (unless they're teenagers and that's the new "thing"); and make sure they're healthy and do their homework every night. Once the authorities see one of the "red flags" (unhealthily filthy home, hungry or dirty child, etc.), you've basically given them an excuse to investigate you thoroughly, whether you live in a small town or not. That being said, many parents in the survey were happy to share their Pagan parenting experiences. My daughter knows about the Goddess and the God and about stones and the Great Spirit and a lot of other things. She also knows the elements and other things, too. When she was born, her father asked me to make sure to teach the Pagan way, as he knew he wouldn't be around to teach her. He is still fine with it today. She is still my pride and joy, and she is a walker of the light. She is now ten and I couldn't be prouder. —jenn, mountain home, idaho There are many thorough and comprehensive books on raising your children Pagan, including my favorite—Pagan parent, author, and teacher Kristin Madden's excellent Pagan Parenting,19 so I will not go into too much detail here. I met Kristin and her family at a small Pagan gathering a couple of years ago, and her son is a great model of the Pagan child I want to raise—grounded, gentle, knowledgeable, polite, and still interested in normal teenage-appropriate activities. In short, he's everything a parent (Pagan or not) could ask for. My husband and I are raising our daughter Pagan. 
When she was a toddler, she quickly learned to blow kisses at the full moon; now that she's a little older, she blows kisses to the moon no matter what phase it's in! Like any preschooler, she is enthusiastic about holiday activities such as trick-or-treating, decorating a Christmas/Yule tree, and coloring (and finding) Easter eggs. My daughter even made homemade butter with us last Imbolc. Rose is also the one who blesses the food at suppertime, and we say the following prayer at bedtime every night: Lord and Lady, keep me safe all night And guard my family 'til morning light. She then gets to list all the family members, including the dogs, and ends with "So be it." It's going to break my heart when she decides she's too old to say prayers with Mommy anymore. In the meantime, Rose loves to sit in ritual and will probably be better at reading tarot cards than I am by the time she's out of elementary school. She comes by that talent naturally, though. Her father is an accomplished reader, and my mother-in-law is the first person I ever saw who read with an ordinary deck of playing cards. She was scarily accurate, too! Many of the survey respondents are also choosing to raise their children Pagan or Pagan-friendly. Even those in relationships in which their significant other isn't Pagan, or who have stepchildren whose other parent is Christian, somehow convey a little of our path to the kids in the home. My boys have attended church services with their dad and circles with me. I am trying to show them both types of religion and practices, so they will be better informed of their choices as they get older. —keltasia, shamokin, pennsylvania I am [raising my daughter Pagan], but I'm also teaching [her] other religions and letting her choose her own path. I will not force what I believe on anyone. —spiritrunner, bakersfield, california (formerly in taft, california) My stepchildren understand our ways are different from their mother's. 
We're not raising them totally Pagan, but they are learning our ways. —deanna eberlin, addison, new york Other parents have decided not to raise their children Pagan. The interesting thing is, the parents who choose this course of action did not necessarily base their decisions on the fact that they live in a small town, but on more personal considerations: I'm not raising my son specifically Pagan, no. My husband and I have come to the conclusion that it is possible to teach morals and values outside of any religious framework. —becca, clovis, new mexico I did not raise my children Pagan as such, but did teach them to respect Mother Earth and honor the seasonal cycle. —donna hames, nashwauk, minnesota One thing to keep in mind, especially in a small town, whether you consciously decide to raise your child Pagan or let him discover Paganism on his own: kids are the cruelest people on the planet. Whether deliberately or by accident, the wounds (even verbal wounds) they inflict on the playground or in the classroom can remain unhealed well into adulthood. Think about this very carefully before your child starts school. As of this writing, my daughter will start kindergarten in about two months. I know I worry about how my choice to raise her Pagan will affect her. There isn't enough space in this book to describe all the negativity I was on the receiving end of when I was still in school. Gods alive, that was hideous. I used to wear this chunky pentacle that I picked up at Spencer's, silver with blue stones at the points. They were a dime a dozen, and I think every kid who considered themselves Pagan that went through my school had one at one point. Anyway, people would see it and treat me like a leper, call me names, cross themselves as I went by. School life was hell. 
I was constantly harassed; I was bullied; I was pulled into the office and basically told I was on the school's "watchlist" of potentially dangerous students because I was the target of hostility. The Columbine massacre had happened my junior year, just before prom, so of course every school went into hyper-paranoia about who was going to be next, and because I was the little goth weirdo who was allegedly into Satanism, I was pegged to be a potential threat. The administration never bothered to actually get to know me or anything. They heard what people said about me; they saw me being bullied; and I guess it didn't take much effort to imagine me coming to school with guns concealed upon my person. —ravenna, dowagiac, michigan In school I was ridiculed, picked on, and persecuted. I even had people go as far as throwing holy water on me and throwing frozen water bottles at me in the outside eating area. Of course the school district, the principal, and superintendent would do nothing to stop it because it was their "star" football players who were harassing me. —spiritrunner, bakersfield, california (formerly in taft, california) Some parents choose to take a proactive approach and discuss the basics of their Pagan faith with teachers or other school administrators. A friend of ours in Missouri did this from the time his daughter entered kindergarten. She is now about to graduate from high school and, as far as I know, has never had any trouble with her Paganism in school. On the other hand, my mother, who taught music in the elementary school in our town for a couple of decades, counsels against telling the teachers. Her opinion is that unless you believe your child will not or should not participate in a particular school activity because of his religion, you don't need to "rock the boat" and say anything to the teachers. Apparently, many of her students over the years were Jehovah's Witnesses. 
The parents asked that their children not sing any holiday or patriotic songs. My mother complied, and that was the end of the issue. So unless your school's music teacher is asking the kids to sing Christmas hymns in December, maybe it's just better to keep your child's religion quiet. On the other hand, we may teach our daughter to say "one nation under Gods" during the Pledge of Allegiance—A. G. and I are still talking about this issue, and may just table it until she's old enough for "under God" to bother her. Sometimes it's the teacher's religion and not yours that can cause a problem. A few years ago, my non-religious nephew in Salina, Kansas, and his classmates were all being pressured by his second-grade teacher to either join her church or go to Hell. Nice situation for a seven-year-old, no? My mother-in-law took offense and stormed down to the school. According to my husband, she marched into the classroom, grabbed the offending teacher by the ear (ow!), and dragged her to the principal's office. Much yelling on my mother-in-law's part ensued, the gist of which was "This is not appropriate in the classroom. Do something." What she actually said is not allowed in a book like this that minors might read. The principal agreed and the teacher, amazingly, stopped proselytizing, at least to her students. I've had to argue with the school board about their enforced prayer and "God issues." I do occasionally wear my pentacle to school functions. Since quite a few of the parents already knew me before my conversion, it hasn't made a difference to them. But it does seem to bother the administration—even though my kids get As and Bs regularly and have not had any real disciplinary problems. —keltasia, shamokin, pennsylvania When our eldest began kindergarten, we had a discussion with her teacher about seasonal holidays and were pleased to learn that she decorated for all of them. 
A few years later, we had similar discussions with the younger two children's teachers and school administrators. We explained to them that we are Pagan, and there would be no tolerance of questioning our children's beliefs on the subject. There weren't any problems. —moondancer, washington state Fitting in on Sunday Morning One of the biggest social discrepancies between children who are being raised Pagan and their friends who live in actively Christian homes is the question of what the kids do on Sunday morning. The Christian kids, obviously, go to Sunday school and church while the Pagan kids don't. Emigrant children may come to a small town thinking this difference is no big deal. Hometowner kids probably know better. When we lived in Portales, New Mexico, my husband and I briefly attended the local Unitarian Fellowship for the first six months or so of our daughter's life. In fact, she is still listed on the registry somewhere; our daughter is officially a Unitarian, but we are not! There's an old Unitarian joke: "What do you call an atheist with kids? A Unitarian." This was certainly true in our Fellowship, but after a few months we got tired of sitting around and talking about religion on Sunday morning rather than actually practicing it, so we stopped going.20 One survey question was "Are you and your family attending church regularly or semi-regularly as a 'cover'? If so, which one?" Most survey respondents, again, said either no, or hell no. I do not attend church to put on an appearance of something I'm not. If people don't like it, that's their problem, not mine. When asked about it, I just explain that I don't have to be in a specific building to communicate with my Creator. —keltasia, shamokin, pennsylvania If I want a "church," I'll walk outside to my garden or meditation pond. —spiritrunner, bakersfield, california (formerly in taft, california) I love Spiritrunner's answer, don't you? 
Children at Pagan Gatherings Pretty much every kid I've ever seen at a Pagan festival looked very happy to be there. I've seen children who started coming to gatherings at a young age, and they are the happiest, most self-confident kids I know. If it's a big deal for adults who live in small towns to discover that they're not alone and there's this all-Pagan space that they can revel in for the next four or five days, imagine how much that same environment would mean to a child. Not only will Mom and Dad recharge their spiritual batteries by connecting with their "tribe," but so will the kids. At almost every Pagan gathering I've attended or been in charge of, the ones not old enough to be interested in the workshops are almost never found on their own—they tend to travel in big, multi-aged "packs." Now here's the problem: as great as a gathering is for kids, not only will they likely see adults who aren't their parents in various stages of undress, but the children themselves may also want to wear much less than they normally would at home. This is a big issue, not just for Pagan parents but for the entire community. In general, people who do not want to circle with kids (little ones in particular) can create or find an adults-only group. In a small town, however, this could become a problem if there's only one group and it's not the kind you need, either child-friendly or determinedly child-free, but with a little determination and a few extra miles on your car, you can keep your ritual time exactly as you like it—anything from kid-inclusive to skyclad-adults-only. But people mingle at a Pagan gathering, whether it's in the vendor area, at the nightly concert, in main ritual, or even on the way to the communal shower. Sure, you can do your best to keep your children in the "clothing-mandatory family camping area" that many festivals provide, but what if they want to go shopping on Merchant's Row? Or go swimming in the camp pond or swimming pool? 
Or eat lunch in the dining hall? I wish I could give definitive advice on this issue, but as a parent and as a former festival coordinator, I can't. Nor can my survey respondents. Every gathering is different, every family is different, and every family's stance on children seeing nude adults and strangers seeing their children nude or semi-nude is different. Even within the same family, opinions can differ. My husband is adamant that Rose not attend a Pagan gathering that is clothing optional until she's in her mid-thirties (which is roughly the age he'd prefer she be when she starts seriously dating). I agree she's not quite ready to accompany me to various gatherings—she's going through an "afraid of the dark" phase that's not conducive to tent camping, and at five she's not old enough to hang out at the kid's activity area without me, and I'm usually pretty busy vending and/or teaching workshops. However, I am completely convinced that if we treat the naked human body as no big deal, Rose will too. Needless to say, discussion will probably continue on this issue for some time. Whichever stance you take, you must check the state statutes for not only your home state but the state in which the gathering takes place. Missouri, for instance, has very strict laws about adults "exposing themselves" to minors—which could include the scenario of a skyclad adult standing next to a sixteen-year-old (or a five-year-old) at a merchant's booth or even rinsing off under the next spray in a communal shower house. If you live in one of the stricter states, legally speaking, and your kid goes home after a gathering and starts talking about all the naked grown-ups he or she saw, you could be in big trouble. And if the state in question has even stricter laws about adults and minors being minus a few critical items of clothing in each other's presence, you could find yourself in need of a Pagan-friendly lawyer very quickly.
Relationship Issues As I mentioned, one of the toughest issues facing a small-town Pagan is his or her non-relationship status. In a large city, it's much easier to find a mate of like mind and compatible beliefs, but in a small town, where it's hard enough to collect a group of people to get together and even talk about Paganism, meeting your life partner is often far less likely—and a lot more work. Again, I expected the survey respondents to send me at least a handful of horror stories about relationships that had either never gotten off the ground or broken up over the issue of one party's Pagan practice—acrimonious divorce, loud public breakups, something—and I didn't get any. I know there are marriages and long-term relationships that have crumbled the minute one of the couple said the "P" word, including my first marriage, but no one who volunteered to talk to me had anything to say on this issue. Again, as a journalist, I can only report on the stories I get and can then verify. I can certainly say with some authority that finding a Pagan or at least Pagan-friendly life partner is of utmost importance whether you're a Hometowner or an Emigrant. Julia, of East Stroudsburg, Pennsylvania, hit the nail on the head when she said, "When I was dating after my divorce, I chose to only date people who were at least open to Paganism. I was looking for someone to share my entire life with, including my religion." As I mentioned, my first husband was/is a born-again Christian who is now a happy Episcopalian. We were both (nominally on my part) Christian when we married, and my exploring Paganism is what broke up the marriage. My second husband, although Pagan, was an alcoholic, a drug abuser, and a wife abuser. At one point I found myself facing the business end of a loaded gun, and he was threatening to pull the trigger. Why? He was initiated, I was not,21 and I'd just had the bad manners to say I knew more about something Pagan-related than he did. 
The lesson here is obvious: just because someone is also Pagan does not mean that you should automatically think he or she is a nice person who will make good lifemate material. My third husband, A.G., who is also the father of my daughter and the one I left the big city of Washington, DC for, is not only Pagan, but he was so when I met him. Over time, our paths have merged and we are magical working partners in addition to being spouses, co-parents, and best friends. But just because our paths have merged doesn't mean we share every single spiritual aspect of our lives. He grows organic vegetables as a form of spiritual expression; I have a black thumb. I sing as a form of spiritual expression; he can't carry a tune in a bucket. His favorite altars are the stove and the kitchen counters. I worship my God best by walking my dog. But we have the same strong opinions about how rituals and ritual groups are to be run; we work with deities who are compatible with each other, and that is merged enough. Remember, every person's path is a little different. In between these three husbands I dated a lot (and I mean a lot) of Pagan or Pagan-friendly men. Some I went out with only once, others I dated for years. And I lived in the Baltimore-Washington metropolitan area, where there was a sizeable community to pick from. I was living in Baldwin City, Kansas, when I first met A.G., and I live there now, so I'll turn to the experts on how to search for a life mate in a small town: Being Pagan made it really tough because they never understood why I was doing things; it would piss them off when I was doing ritual or something and not with them. —jenn, mountain home, idaho That's almost the last thing I worry about, and I worry about a lot of things. Does he think I'm homely? Does he think I'm fat? Oh Gods, is my voice really that annoying? What if I say something stupid and he tells all his friends about the dimwit he went out with last night? 
The Pagan thing rarely crosses my mind, but when I do think about it, I get really freaked out. I know so many people that see my faith as a quirk or just something else that's weird about me, something to just smile and nod about when I open my mouth. I get a bit worried that I'll meet my dream guy and have him end up treating my beliefs as "just another weird phase my kooky girlfriend is in." I would love to find a mate who's either Pagan or agnostic. —ravenna, dowagiac, michigan Well, I'm not looking for a mate. Right now I'm enjoying my own self. However, if at any point I do want to look for a mate, he will have to be a practicing Pagan. I can't be in a relationship with someone whose religion wants to burn me at the stake. —k, sevierville, tennessee I'm bisexual. So it's hard to find anyone, period. In this conservative town, you have to go way outside the city limits before people are even remotely comfortable with telling you they're interested. And the funny thing is I know lots of gay men, but no gay women. —evy, bolivar, new york You're a WHAT? Telling a Non-Pagan Love Interest That You're Pagan It can happen, despite your best efforts; you've fallen in love with someone in your small town and—lo and behold—he or she has not gotten the memo that you're Pagan. What do you do? Here's one way: I was on a date one time with a non- Pagan, but he was a serious science fiction fan. I can't remember if it was the first date or the second one, when the guy said, "Oh, by the way, I'm a filker." A filker, in case you don't know, is someone who attends a lot (and I mean a lot) of science fiction conventions for the sole purpose of staying up until two or three in the morning singing songs with old, familiar tunes and new lyrics about Star Trek. Or whatever the latest science fiction or fantasy fandom craze may be. To prove it, my date burst into song. Loudly. In the middle of the restaurant. At least he had a nice voice. 
Needless to say, the relationship didn't last too long after that. But did he do the right thing? Actually, yes, he did. He came out and told me, which was brave of him. Not many people like filkers. So my first rule of thumb for dropping a "big surprise" like "I'm Pagan" on someone you're seeing is: you have to tell them. My second rule of thumb, however, may surprise you: Don't tell them on the first date. Maybe not even on the second date. Why? Give him or her a chance to get to know you first, without the big neon sign flashing WEIRD! or SCARY! over your head. Let your potential sweetie start to like you for your positive, "ordinary" qualities—great sense of humor, nice smile, levelheaded in a crisis, even temperament—before letting him or her know that in a couple weeks you're going on vacation to a Pagan gathering two states away with about a thousand of your closest, clothing-optional friends. So my third rule of thumb is: Don't wait too long. Because then you can add resentment to the possible reaction list, since "you kept it from me!" Rule number four: When you do decide to tell him or her, don't drop hints or waffle and definitely don't act ashamed of who you are and what you believe. Explain your situation calmly and intelligently, but don't dump too much on your sweetie all at once. And don't expect instant acceptance or an instant conversion to your Pagan faith. Offer to give your new love some time to assimilate what you've said. My rule number five: Leave him or her with a graceful way to exit from the relationship if he or she truly cannot handle what you've said. Yes, it will hurt like hell, but better some hurt now than a whole mess of hurt later, after you've had enough time to really invest your heart and soul into the relationship. Finally, be hopeful! Your new sweetie might just surprise you and say, "Wow! I'm Pagan, too! I just never knew what it was called!" 
Two Religions, One Relationship Believe it or not, even if your significant other is not Pagan, you can make your relationship work as long as the other person is open-minded about your path. (I would not recommend dating a Baptist minister, for example.) Even if you are both Pagan, there may be some fundamental differences in your practice and teachings that can cause a rift. My friend David in Canada, for instance, is Wiccan and also dedicated to the African Orishas. Just because a lover or a roommate is Pagan doesn't mean that person knows the specifics of David's two religious practices, which can include strict dietary, hygiene, and wardrobe requirements. How, then, can you share a home, share a bathroom, and share your free time, intimate bodily fluids, and a tub of popcorn at the movies, and not share a religion? The answers are much easier than you might think. Read, Read, and Read Some More The more you know about your sweetie's religious practices and ideals, the more you're likely to find they have a lot in common with your own, even if, for example, he's a Buddhist and you follow the old Norse warrior gods. Every religion strives to explain the Unexplainable, the Godhead if you wish. And many faiths have very similar paths that lead to the One. Something to keep in mind, though: be very picky about what you read. If, for instance, your beloved is Catholic, don't confine your reading to works that make Catholics look, well, not so good—like stories about the Inquisition. Your partner's spiritual mentor (priest, rabbi, etc.) probably has an office filled with books by contemporary authors who discuss the religion in question in a spiritually honest yet academic way. Ask to borrow some of these books. Try the Hands-On Approach Go to your partner's place of worship at least once in a while (on Christmas Eve or during the Jewish high holidays, for example), especially if you want him or her to occasionally circle with you. 
Knowing what really happens during your mate's religious services has three huge advantages. One, you get to see your partner's religious ideals in action. Two, you'll make your sweetie very happy by your attendance. Three, if you ever need (for social or emotional support reasons) to accompany your partner to a rite of passage—i.e., a wedding or a funeral—you already have some idea of what to expect and how to behave. I offer the following secular example: my husband was raised in a military family; his father was career Army. I was raised by two semi-hippie, war-protesting music teachers. We did not attend his father's funeral in 2003 due to issues with his family dynamics (see the story in chapter 1 about his sister calling us "freaks" in the intensive care unit), but, if we had, I would have been completely lost and confused by the military funeral. I would not have known how to behave or what was expected of me, and consequently would have been less able to provide A.G. with optimum support in a time of need. So go to church with your partner. I'm not saying you have to stay afterward for the food and chit-chat, nor am I advocating participating in parts of the service you may feel uncomfortable about (like communion for us non-Christians), but at least try to attend the service with an open mind. You might find something of spiritual value to you. I'm honestly convinced I would have found the military aspects of my father-in-law's funeral to be quite lovely. Incorporate Both Sets of Holidays into Your Home This is particularly important if one or both of you have either brought children into the relationship, or you've had some of your own since you got together. If your sweetie is Jewish, for instance, be open to celebrating Hanukkah, Passover, or Yom Kippur along with Yule, Ostara, and Mabon. You may just find some similarities between the different religions' seasonal holidays that you didn't know existed before. 
Even within differing Pagan traditions, there can be a lot of lovely crossover. When my husband and I got together, he told me that his family, German Lutherans all, celebrate St. Nicholas Day on December 6 by putting their shoes out the night before. St. Nicholas then comes and fills them. It's kind of like a barometer for Santa Claus at Christmas: if you're good, you get candy, fruit, or a small toy item. If you're not, then you get coals or sticks in your shoe. Celebrating St. Nicholas Day helps our daughter learn about her German heritage; it makes my husband happy; and yeah, okay, it's really fun! So put up a Yule tree, light the menorah, and relax! Keep Your Negative Opinions to Yourself No, you're not going to love everything about your partner's non-Pagan faith, and he or she probably doesn't love everything about yours. If you did, one of you would have converted by now, and you probably wouldn't be reading this section of the book. However, I cannot stress strongly enough that you should not express these opinions out loud to your sweetie, even if he or she asks! It would also be a good idea not to express them too strongly to your mutual friends—they may accidentally tell your partner. Look, as long as no one's pressuring you to convert, all is good. Leave it alone and chalk it up to just another difference between two people who love each other very much. My husband thinks good bratwurst is another face of the God. I can't stand the stuff. We manage. Bronwen's Big Theory of Deity Keep in mind that all religions are designed to bring the follower closer to the Divine as they see it, and, as such, they all have value. To illustrate this, I'm going to share with you Bronwen's Big Theory of Deity. As I do, please bear in mind that I am a huge fan of Big Band music and that I came of age when disco was king. The Supreme Power of the Universe, by whatever name you call Him or Her, is like a giant mirror ball, reflecting Light down upon us all. 
Now, the human brain is just not equipped to comprehend the incredible vastness and power of the whole Mirror Ball. So I'm standing over here on this side of the Grand Mirror Ball, and that little square there is shining Light onto my face. I really like that little square. I can relate to it. However, you're over on the other side of the Grand Mirror Ball (it's not spinning at the moment, okay?), and a completely different little square is shining Light on you. And you like your little square as much as I like mine. Yes, the squares are different, meaning the name of (the) God(s), styles and methods of worship, and philosophies about the True Nature of Life are different. But, and here's the important part: They're both part of the same mirror ball! Cool, huh? [contents]
19. Pagan Parenting: Spiritual, Magical & Emotional Development of the Child (Spilled Candy Publications, 2004).
20. I know that many, if not most, Unitarian Fellowships are more celebratory and less cerebral; this was just an odd group.
21. And, boy, do I have a few choice things to say to the person(s) who thought he was appropriate initiate material.
Chapter 9
The Experts Speak
The last question on my survey was "What would you like to tell me about life as a small-town Pagan that has not been covered?" You've gotten my answer to this question in bits and pieces throughout the book. This is what the respondents, my experts, wanted to leave you with. There's an awful lot of wisdom here: There are a lot of people out there who want to know [about Paganism] and don't know where to go, so just be yourself and they will come to you. There are also a lot of Pagan wannabes out there who do things to give us a bad name, so be careful who you get involved with. Look at their personal relationships and you will know if they are empowered individuals or not. —jenn, mountain home, idaho Being a Pagan in a small town is no more difficult (or any less so) than it is in a city.
Well, maybe slightly easier—I can walk outside and be "lost in nature" in less than one minute or however long it takes me to walk to the back of our property. There's less noise and light pollution here, and you tend to be a bit more aware of nature and its rhythm and cycles than you do in a city. That may seem somewhat idealized, but it's true. —moondancer, washington state I grew up Catholic, which didn't do much for me. I found Wicca first when I was seventeen; I found that Paganism was better. I don't like organized religions. I'm a patchwork Witch, meaning that I pick bits and pieces from all religions and throw them all together. —kathleen, from a town in north dakota Living in a small town is much different from being in a city. In the city there tends to be lots of kinds of people all mixed together, and you're bound to find somebody else like you. In a small town it's not like that. Everybody knows everybody else, and people tend to all be very much the same. You might feel like an outsider, especially when you're still new to the Pagan lifestyle, but don't let it get to you. There are others around; you just have to find them. Take it all in stride and be honest about who you are. You're likely to be the only representative some people have of the Pagan faith, so always be honorable in your interactions with your community. —deanna eberlin, addison, new york You [as an Emigrant] are already an outsider. Don't make it worse. Most of those who live here have always lived here. And their families have always lived here. It's already hard to fit in, so don't put flaming pentagrams in your front yard. People around here don't really advertise what religion they are, but if you start out trying to stand out, you're more likely to be ostracized. Never underestimate who your allies will be. Some see a Baptist minister down the street and immediately think the minister will be against them. 
I, however, have a really good relationship with all the clergy in this town. Then one day a construction worker was in the place where I work, someone who you would think had seen it all, and saw my necklace and started yelling at me about how I was evil. So you never know. —evy, bolivar, new york I have found that, overall, people are very welcoming. It's all about how you present yourself. If you are kind, generous, and respectful, even people in a small town with three churches for 416 people will be good in return. Answer any questions you get as intelligently and calmly as possible. A lot of people are unfamiliar with what we believe, and I have found people are more curious than judgmental. Don't shy away, don't proclaim, just be. My husband and I were very close with our neighbors across the street. He gave us food from his gardens; any chicken in his yard we wanted for dinner, we could ask for. They were especially interested when I started taking correspondence classes to become an herbalist. They had a gorgeous comfrey specimen that they let me split to grow my own plant. After some time they would ask questions and nod kindly and say things like "You don't say" and "You wouldn't ever believe that would work." The husband worked construction and was often covered with bumps and bandages. One day he dropped off some potatoes for us, and he had a gaping wound on his thumb. I immediately asked him what he was going to do about it. He replied, "Nothing." After he left, I whipped up a batch of my wound-healer salve and walked it over. He was certainly skeptical as I opened this jar of jet black, smells-like-death salve. I told him to go ahead and just apply it as often as he wanted, and his body would just absorb it and his wound would heal. I had doubts he would try it, but the very next day he came over and stuck his thumb in my face. "Look at this, it's almost closed already," he grinned. 
Not only did he continue to use my salve until it was gone, but he also asked me to make a big batch for future use, and their mere interest in my herbal studies turned into consultations when needed. —witch of the woods, merrimac, wisconsin It's a farm community. There are four seasons: too hot, too cold, too wet, and too dry. I don't go outside and practice. I've noticed young people, maybe age sixteen to eighteen, go into the public parks here late at night and celebrate. I wouldn't be that brave. The park closes at dark, so they were taking a big risk. I'm more cautious. This is definitely a Christian-based town. There's more gossip at the local diner in the morning than at the hair salon—and I'm a hairdresser! —iris, genoa, illinois It's difficult to find other people on a non-Abrahamic path if you don't have local Pagan clergy. It's hard to meet or have a discussion group without organizing it yourself. Most people we encounter are more curious than discouraging about why we don't go to church or participate in mainstream religion. —kim schaufenbuel, owatonna, minnesota We have a significant interfaith group with representatives from different faiths and talks about Thanksgiving, Yule, and other traditions of giving thanks. We often share ideas of faith and understanding. In the last few years I've been doing a little music at the meetings, songs I've written, etc. There's always a potluck dinner afterward. The local Roman Catholics don't get involved—their local bishop told them not to attend any events with Pagans. The Greek Orthodox Catholics participate, though. This group is a great example of how all the mainline religions espouse tolerance of the other religions. There's an annual interfaith Yule celebration at the Unitarian church that's very well attended every year. The community is invited to attend. There's often a pageant where the Holly King is ritually "killed" by the Oak King. 
People read poems, there's singing, the kids get involved—just so the community knows we're not sacrificing sheep. Sometimes it just takes time and exposure. —fergus, monona, wisconsin If you are just moving into a small town, take it slow. Don't start preaching about what a big old Witch you are. Let folks get to know you. Be a good neighbor, make sure your grass is mowed and your trash is cleaned up. Be there to help people if they need something. Volunteer for local organizations that you have something in common with, such as cleaning up local parks and wildlife areas, neighborhood watches, neighborhood gardens, etc. Give people time to get to know you for you, not because you're a Pagan. There is great joy in living in a small town. Life has a much slower pace; it's less frantic. Small-town activities, like parades and Fourth of July picnics, are a great way to get out and know your neighbors. There is a sense of being at peace with the world and with life that I never found in large cities. Knowing who your neighbors are—having block parties and knowing your neighbors' names and their faces—and knowing that we all watch out for each other and each other's children, it's a very satisfying thing. —julia, east stroudsburg, pennsylvania We have many Pagans here, but we tend to keep things very low-key. We have learned the hard way that this town isn't ready for an Old-Time Religion Revival anytime soon. If you have children, for the love of all that is holy, ward them and spell them and hug them every day, and remind them that graduation is sooner than they think and pretty soon none of those kids who point fingers and accuse them of following Satan will remember who they are. When you are outed as a Pagan in a small town, you will quickly find out how many of your neighbors watch way too many movies and far too much television. "Seriously, one more Charmed quote and I swear I will hex your tires flat." 
—ravenna, dowagiac, michigan I really believe that many folks in small towns have descended from families that lived close to the land. Many still practice a form of folk magic. They just don't call it that. They can read the land, the weather, and understand the animals' behavior. It's kinda like they know and you know they know, and as long as no one says it out loud, we're all okay here. —k, sevierville, tennessee [contents]
Afterthoughts
So what happens now? Is this conversation-that-looks-like-a-book over? That, at least in part, is up to you. I hope you feel inspired to reach out to other Pagans in your area and try to at least meet at the local coffee shop even if you're not quite up to starting your own weekly or monthly meetup yet. I was recently able to lead a discussion on "Life as a Small-Town Pagan" at a rather large Pagan gathering. The attendees had plenty to say and did not want to part from each other when the scheduled hour was over. Listening to them—and I did much more listening than talking, a rare event for me when I'm responsible for a workshop!—I was grateful that I had not experienced some of the discrimination they had; humbled by their courage to come and share their experiences; and, ultimately, happy that I have chosen to live my life and practice my faith in a small town. Like me, like the survey respondents in this book, the workshop attendees had much to say about the positive aspects of small-town Pagan life, particularly the closeness to, and the deeper appreciation of, the natural cycles of the year. At the end of the workshop, all the participants agreed that those of us who choose to live in small towns and call ourselves Pagans need to reach out to each other more, to communicate, to start an ongoing conversation about ourselves, our families, our lives, and our practice. Now it's your turn to chime in.
I'm trying to keep the conversation going via the Small Town Pagans Yahoo e-mail group, at http://groups.yahoo.com/group/smalltownpagans. I hope you'll join in and contribute your thoughts and experiences. You never know when what you have to say will help someone else. I also encourage the very bravest readers to consider attending a nearby Pagan Pride Day or gathering, and offering to facilitate your own discussion/workshop on life as a small-town Pagan. After all, if you're Pagan and you live in a small town, you have as much expertise as I do on the subject! Pagan Pride celebrations and festivals attract Pagans from all over the region, not just from the city they're nearest to. It's been my experience that there are plenty of folks in attendance who come from small towns just like yours and who really want the chance to talk about their lives. If nothing else, I hope you now know that you're not alone. With luck, something in these pages struck a chord, and you said, "Hey, that happened to me, too!" [contents]
Recommended Reading
Bonewits, Isaac. Neopagan Rites: A Guide to Creating Public Rituals That Work. Woodbury, MN: Llewellyn, 2007.
Bulfinch, Thomas. Bulfinch's Mythology. New York: Barnes & Noble Classics, 2006. Originally published in 1881; many editions available.
Campbell, Joseph. The Hero with a Thousand Faces, third edition. Novato, CA: New World Library, 2008.
Eilers, Dana D. Pagans and the Law: Understand Your Rights. Franklin Lakes, NJ: Career Press, 2009.
Forbes, Bronwen. Make Merry in Step and Song: A Seasonal Treasury of Music, Mummer's Plays & Celebrations in the English Folk Tradition. Woodbury, MN: Llewellyn, 2009.
Hamilton, Edith. Mythology. Boston: Back Bay Books, 1998. First published in 1942.
K, Amber. Coven Craft: Witchcraft for Three or More. St. Paul, MN: Llewellyn, 2002.
Madden, Kristin. Pagan Parenting: Spiritual, Magical & Emotional Development of the Child, revised edition. Niceville, FL: Spilled Candy Publications, 2004.
(Originally published by Llewellyn Publications in 2000.)
McSherry, Lisa. Magickal Connections: Creating a Lasting and Healthy Spiritual Group. Franklin Lakes, NJ: New Page Books, 2007.
———. The Virtual Pagan: Exploring Wicca and Paganism through the Internet. Boston: Weiser Books, 2002.

Resources

General Websites
The Pagan Pride Project: http://www.paganpride.org
The Wild Hunt: http://www.wildhunt.org/blog
Unitarian Universalist Association of Congregations: http://www.uua.org
Internet Sacred Texts Archive: http://www.sacred-texts.com
Myth*ing Links: http://www.mythinglinks.org
Beliefnet: http://www.beliefnet.com
Cauldron Living: http://www.cauldronliving.com
The Cauldron: http://www.ecauldron.com

Shopping Websites
AzureGreen: http://www.azuregreen.com
CafePress: http://www.cafepress.com
Mountain Rose Herbs: http://www.mountainroseherbs.com
Abaxion: http://www.abaxion.com
eBay: http://www.ebay.com
Etsy: http://www.etsy.com
13moons.com: http://www.13moons.com
The Blessed Be: http://www.theblessedbee.com

Artists' Websites
Nybor Mystical Art: http://www.nyborart.com
Susan Seddon Boulet: http://www.susanseddonboulet.com
Anne Marie Forrester: http://web.mac.com/annemarieforrester
Alicia Austin: http://www.aliciaaustin.com
Jen Delyth: http://www.kelticdesigns.com
Mickie Mueller: http://www.mickiemuellerart.com

Networking Sites
Witchvox: http://www.witchvox.com
LiveJournal: http://www.livejournal.com
Facebook: http://www.facebook.com
Yahoo Groups: http://www.groups.yahoo.com
Q: Print two arbitrary excel worksheets as one page, double-sided

I have an Excel file. It contains several worksheets (ex: A, B, C, D ...). Each worksheet prints on one page in the same orientation. How do I quickly print any two of these worksheets (ex: A & C, B & C, etc.) as a single page, double-sided? If I ctrl select the two worksheet tabs and print, Excel will print just those 2 selected worksheets, but they print as separate pages even when the double-sided print option from the Excel print screen is set.

A: There is a solution, following the @MatheJuhasz comment:

1. Select the sheets you would like to print by holding CTRL and clicking on individual worksheets.
2. Save those as a .pdf (File -> Save As -> Save as type: PDF).
3. Print the .pdf on both sides, as usual.

Tadaaaa!!! :)

A: I had problems with the PDF not keeping my Excel gridlines after creating a PDF, so I copied the Excel spreadsheet to Word, which gave me gridlines and text. I then saved it as a Word document and printed it. This gave me what I wanted in order to then fill out the spreadsheet manually, which is what I want for recording data.

A: I had a similar problem. I resolved it by:

1. Ensuring duplex print is set.
2. Selecting the required tabs.
3. Selecting "Print Active Sheets".
4. Then, and this is the odd bit, selecting "Page Setup", then "Options", clicking "OK", then "OK" again (which seems like you haven't really done anything), and printing.

This works for me when printing duplex to a Canon Pixma.

A: (i) Select the sheets you would like to print by holding CTRL and clicking on individual worksheets. (ii) Go to Print as normal, specify the print range, i.e. Pages 1 to 2, select 'Print Both Sides', press Print :)
Toyota New Car 2017: 2017 Toyota C-HR unveiled in Geneva, Australian launch due; 2017 Toyota Corolla price, photos, reviews, features; 2017 Toyota Kluger new car sales price; Toyota Yaris 2017 new car sales price; Toyota's new C-HR hybrid crossover; 2017 Toyota 86 updated and uprated sports car confirmed; 2017 Toyota C-HR concept car photos; 2017 Toyota Highlander Hybrid price, photos, reviews. [Apriliasxv.com]

Cars remain one of the most popular forms of transportation today. Some people are passionate about automobiles, appreciate every feature they offer, and can talk about them at length. With advances in technology, many new features have emerged, and modern cars fully equipped with them are now arriving on the market. With the higher prices come remarkable features, each one of a kind.

Like a home, a car is one of the larger investments you will make in your lifetime. It is therefore important to negotiate the car financing carefully so that the purchase cost stays as low as possible, within your comfort level. Sit down with the salespeople, lay out your terms and conditions up front, tell them exactly what you can afford and how much you can pay, and take the next steps accordingly. Be completely clear about money matters from the start.

One of the benefits of shopping online is that you may end up with the car you have been looking at for a much lower price than you would get in the showrooms. The internet also lets you avoid dealing with pushy salespeople. From this point of view it is a win-win situation, so why not use the internet to meet a need that deserves this kind of attention, such as the Toyota New Car 2017?
\section{Credits} This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the {\em International Joint Conference on Artificial Intelligence} and the \emph{Conference on Computer Vision and Pattern Recognition}. \section{Introduction} The following instructions are directed to authors of papers submitted to ACL 2018 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. \textbf{The proceedings are designed for printing on A4 paper.} \section{General Instructions} Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection~\ref{ssec:first}). \textbf{Type single-spaced.} Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section~\ref{sec:length}. Pages are numbered for initial submission. However, \textbf{do not number the pages in the camera-ready version}. 
By uncommenting {\verb|\aclfinalcopy|} at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where {\verb|***|} appears in the {\verb|\def\aclpaperid{***}|} definition at the top. The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The ACL 2018 \LaTeX\ style will create a titlebox space of 6.35 cm for you when {\verb|\aclfinalcopy|} is commented out. \subsection{The Ruler} The ACL 2018 style defines a printed ruler which should be present in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment the {\verb|\aclfinalcopy|} command in the document preamble.) \textbf{Reviewers}: note that the ruler measurements do not align well with lines in the paper -- this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. In most cases one would expect that the approximate location will be adequate, although you can also use fractional references ({\em e.g.}, the first paragraph on this page ends at mark $117.5$). 
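Taken together, the switches above amount to a preamble along the following lines (a sketch of how the pieces fit; \verb|***| stays as the placeholder until softconf assigns your paper ID):

```latex
\documentclass[11pt,a4paper]{article}
\usepackage{acl2018}
\usepackage{times}
\usepackage{latexsym}
%\aclfinalcopy  % uncomment only for the camera-ready version
\def\aclpaperid{***} % filled in during submission
```

Leaving \verb|\aclfinalcopy| commented out keeps the submission anonymized and prints the review ruler.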
\subsection{Electronically-available resources} ACL provides this description in \LaTeX2e{} ({\small\tt acl2018.tex}) and PDF format ({\small\tt acl2018.pdf}), along with the \LaTeX2e{} style file used to format it ({\small\tt acl2018.sty}) and an ACL bibliography style ({\small\tt acl\_natbib.bst}) and example bibliography ({\small\tt acl2018.bib}). These files are all available at \url{http://acl2018.org/downloads/acl18-latex.zip}. A Microsoft Word template file ({\small\tt acl18-word.docx}) and an example submission PDF ({\small\tt acl18-word.pdf}) are available at \url{http://acl2018.org/downloads/acl18-word.zip}. We strongly recommend the use of these style files, which have been appropriately tailored for the ACL 2018 proceedings. \subsection{Format of Electronic Manuscript} \label{sect:pdf} For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from \LaTeX\ using the \textit{pdflatex} command. If your version of \LaTeX\ produces Postscript files, you can convert these into PDF using \textit{ps2pdf} or \textit{dvipdf}. On Windows, you can also use Adobe Distiller to generate PDF. Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. \textbf{Before sending it, test your PDF by printing it from a computer different from the one where it was created.} Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF.
One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying ``Output to a file'', then convert the file to PDF. It is of utmost importance to specify the \textbf{A4 format} (21 cm x 29.7 cm) when formatting the paper. When working with {\tt dvips}, for instance, one should specify {\tt -t a4}. Alternatively, use the command \verb|\special{papersize=210mm,297mm}| in the latex preamble (directly below the \verb|\usepackage| commands) and then run {\tt dvipdf} and/or {\tt pdflatex}. Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible. \subsection{Layout} \label{ssec:layout} Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are: \begin{itemize} \item Left and right margins: 2.5 cm \item Top margin: 2.5 cm \item Bottom margin: 2.5 cm \item Column width: 7.7 cm \item Column height: 24.7 cm \item Gap between columns: 0.6 cm \end{itemize} \noindent Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible. \subsection{Fonts} For reasons of uniformity, Adobe's \textbf{Times Roman} font should be used. In \LaTeX2e{} this is accomplished by putting \begin{quote} \begin{verbatim} \usepackage{times} \usepackage{latexsym} \end{verbatim} \end{quote} in the preamble. If Times Roman is unavailable, use \textbf{Computer Modern Roman} (\LaTeX2e{}'s default). Note that the latter is about 10\% less dense than Adobe's Times Roman font. \begin{table}[t!] 
\begin{center} \begin{tabular}{|l|rl|} \hline \bf Type of Text & \bf Font Size & \bf Style \\ \hline paper title & 15 pt & bold \\ author names & 12 pt & bold \\ author affiliation & 12 pt & \\ the word ``Abstract'' & 12 pt & bold \\ section titles & 12 pt & bold \\ subsection titles & 11 pt & bold \\ document text & 11 pt &\\ captions & 11 pt & \\ abstract text & 11 pt & \\ bibliography & 10 pt & \\ footnotes & 9 pt & \\ \hline \end{tabular} \end{center} \caption{\label{font-table} Font guide. } \end{table} \subsection{The First Page} \label{ssec:first} Center the title, author name(s), and affiliation(s) across both columns (or, for the initial submission, \textbf{Anonymous ACL submission} for names and affiliations). Do not use footnotes for affiliations. Include the paper ID number assigned during the submission process in the header. Use the two-column format only when you begin the abstract. \textbf{Title}: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table~\ref{font-table}.) Long titles should be typed on two lines without a blank line intervening. Put the title approximately 2.5 cm from the top of the page, followed by a blank line, then the author name(s), and the affiliation(s) on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals ({\em e.g.}, use ``Mitchell'' not ``MITCHELL''). Likewise, do not format titles and section headings in all capitals, except for proper names (such as ``BLEU'') that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page. 
The title, author names and addresses should be completely identical to those entered to the electronic paper submission website in order to maintain the consistency of author information among all publications of the conference. If they are different, the publication chairs may resolve the difference without consulting with you; so it is in your own interest to double-check that the information is consistent. \textbf{Abstract}: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word \textbf{Abstract} above the body of the abstract using the font size and style shown in Table~\ref{font-table}. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The font size of the abstract text should be as shown in Table~\ref{font-table}. \textbf{Text}: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers in the final version. \textbf{Indent}: Indent when starting a new paragraph, about 0.4 cm. 
\begin{table} \centering \small \begin{tabular}{cc} \begin{tabular}{|l|l|} \hline \textbf{Command} & \textbf{Output}\\\hline \verb|{\"a}| & {\"a} \\ \verb|{\^e}| & {\^e} \\ \verb|{\`i}| & {\`i} \\ \verb|{\.I}| & {\.I} \\ \verb|{\o}| & {\o} \\ \verb|{\'u}| & {\'u} \\ \verb|{\aa}| & {\aa} \\\hline \end{tabular} & \begin{tabular}{|l|l|} \hline \textbf{Command} & \textbf{ Output}\\\hline \verb|{\c c}| & {\c c} \\ \verb|{\u g}| & {\u g} \\ \verb|{\l}| & {\l} \\ \verb|{\~n}| & {\~n} \\ \verb|{\H o}| & {\H o} \\ \verb|{\v r}| & {\v r} \\ \verb|{\ss}| & {\ss} \\\hline \end{tabular} \end{tabular} \caption{Example commands for accented characters, to be used in, {\em e.g.}, \BibTeX\ names.}\label{tab:accents} \end{table} \subsection{Sections} \textbf{Headings}: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections ({\em i.e.}, use \verb|\subsubsection*| instead of \verb|\subsubsection|). \begin{table*} \centering \begin{tabular}{lll} output & natbib & previous ACL style files\\ \hline \citep{Gusfield:97} & \verb|\citep| & \verb|\cite| \\ \citet{Gusfield:97} & \verb|\citet| & \verb|\newcite| \\ \citeyearpar{Gusfield:97} & \verb|\citeyearpar| & \verb|\shortcite| \\ \end{tabular} \caption{Citation commands supported by the style file. The citation style is based on the natbib package and supports all natbib citation commands. It also supports commands defined in previous ACL style files for compatibility. } \end{table*} \textbf{Citations}: Citations within the text appear in parentheses as~\cite{Gusfield:97} or, if the author's name appears in the text itself, as Gusfield~\shortcite{Gusfield:97}. 
Using the provided \LaTeX\ style, the former is accomplished using {\verb|\cite|} and the latter with {\verb|\shortcite|} or {\verb|\newcite|}. Collapse multiple citations as in~\cite{Gusfield:97,Aho:72}; this is accomplished with the provided style using commas within the {\verb|\cite|} command, {\em e.g.}, {\verb|\cite{Gusfield:97,Aho:72}|}. Append lowercase letters to the year in cases of ambiguities. Treat double authors as in~\cite{Aho:72}, but write as in~\cite{Chandra:81} when more than two authors are involved. Also refrain from using full citations as sentence constituents. We suggest that instead of \begin{quote} ``\cite{Gusfield:97} showed that ...'' \end{quote} you use \begin{quote} ``Gusfield \shortcite{Gusfield:97} showed that ...'' \end{quote} If you are using the provided \LaTeX{} and Bib\TeX{} style files, you can use the command \verb|\citet| (cite in text) to get ``author (year)'' citations. You can use the command \verb|\citealp| (alternative cite without parentheses) to get ``author year'' citations (which is useful for using citations within parentheses, as in \citealp{Gusfield:97}). If the Bib\TeX{} file contains DOI fields, the paper title in the references section will appear as a hyperlink to the DOI, using the hyperref \LaTeX{} package. To disable the hyperref package, load the style file with the \verb|nohyperref| option: \verb|\usepackage[nohyperref]{acl2018}|. \textbf{Compilation Issues}: Some of you might encounter the following error during compilation: ``{\em \verb|\pdfendlink| ended up in different nesting level than \verb|\pdfstartlink|.}'' This happens when \verb|pdflatex| is used and a citation splits across a page boundary. To fix this, disable the \verb|hyperref| package (see above), recompile and see the problematic citation. Next rewrite that sentence containing the citation. 
(See, {\em e.g.}, {\small\tt http://tug.org/errors.html}) \textbf{Digital Object Identifiers}: As part of our work to make ACL materials more widely used and cited outside of our discipline, ACL has registered as a CrossRef member, as a registrant of Digital Object Identifiers (DOIs), the standard for registering permanent URNs for referencing scholarly materials. We are requiring all camera-ready references to contain the appropriate DOIs (or as a second resort, the hyperlinked ACL Anthology Identifier) to all cited works. Thus, please ensure that you use Bib\TeX\ records that contain DOI or URLs for any of the ACL materials that you reference. Appropriate records should be found for most materials in the current ACL Anthology at \url{http://aclanthology.info/}. As examples, we cite \cite{P16-1001} to show you how papers with a DOI will appear in the bibliography. We cite \cite{C14-1001} to show how papers without a DOI but with an ACL Anthology Identifier will appear in the bibliography. As reviewing will be double-blind, the submitted version of the papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, {\em e.g.}, \begin{quote} ``We previously showed \cite{Gusfield:97} ...'' \end{quote} should be avoided. Instead, use citations such as \begin{quote} ``\citeauthor{Gusfield:97} \shortcite{Gusfield:97} previously showed ... '' \end{quote} \textbf{Please do not use anonymous citations} and do not include acknowledgments when submitting your papers. Papers that do not conform to these requirements may be rejected without review. \textbf{References}: Gather the full set of references together under the heading \textbf{References}; place the section before any Appendices, unless they contain references. Arrange the references alphabetically by first author, rather than by order of occurrence in the text. 
Provide as complete a citation as possible, using a consistent format, such as the one for {\em Computational Linguistics\/} or the one in the {\em Publication Manual of the American Psychological Association\/}~\cite{APA:83}. Use of full names for authors rather than initials is preferred. A list of abbreviations for common computer science journals can be found in the ACM {\em Computing Reviews\/}~\cite{ACM:83}. The \LaTeX{} and Bib\TeX{} style files provided roughly fit the American Psychological Association format, allowing regular citations, short citations and multiple citations as described above. \textbf{Appendices}: Appendices, if any, directly follow the text and the references (but see above). Letter them in sequence and provide an informative title: \textbf{Appendix A. Title of Appendix}. \subsection{Footnotes} \textbf{Footnotes}: Put footnotes at the bottom of the page and use the footnote font size shown in Table~\ref{font-table}. They may be numbered or referred to by asterisks or other symbols.\footnote{This is how a footnote should appear.} Footnotes should be separated from the text by a line.\footnote{Note the line separating the footnotes from the text.} \subsection{Figures and Tables} \textbf{Placement}: Place figures and tables in the paper near where they are first discussed, as close as possible to the top of their respective column. \textbf{Captions}: Provide a caption for every illustration; number each one sequentially in the form: ``Figure 1: Caption of the Figure.'' ``Table 1: Caption of the Table.'' Type the captions of the figures and tables below the body, using the caption font size shown in Table~\ref{font-table}. \subsection{Equation} \label{ssec:eqn} An example equation is shown below: \begin{equation} A=\pi r^2 \end{equation} The numbering (if any) and alignment of the equations will be done automatically (using \verb|align| or \verb|equation|). 
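For a multi-line derivation, the \verb|align| environment mentioned above can be used; a minimal sketch:

```latex
\begin{align}
A &= \pi r^2 \\
  &= \frac{\pi d^2}{4}
\end{align}
```

The ampersands align the equality signs across lines, and each line is numbered automatically.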
\subsection{Accessibility} \label{ssec:accessibility} In an effort to accommodate the color-blind (as well as those printing to paper), grayscale readability for all accepted papers will be encouraged. Color is not forbidden, but authors should ensure that tables and figures do not rely solely on color to convey critical distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color. \section{Translation of non-English Terms} It is also advised to supplement non-English characters and terms with appropriate transliterations and/or translations since not all readers understand all such characters and terms. Inline transliteration or translation can be represented in the order of: original-form transliteration ``translation''. \section{Length of Submission} \label{sec:length} The ACL 2018 main conference accepts submissions of long papers and short papers. Long papers may consist of up to eight (8) pages of content plus unlimited pages for references. Upon acceptance, final versions of long papers will be given one additional page -- up to nine (9) pages of content plus unlimited pages for references -- so that reviewers' comments can be taken into account. Short papers may consist of up to four (4) pages of content, plus unlimited pages for references. Upon acceptance, short papers will be given five (5) pages in the proceedings and unlimited pages for references. For both long and short papers, all illustrations and tables that are part of the main text must be accommodated within these page limits, observing the formatting instructions given in the present document. Supplementary material in the form of appendices does not count towards the page limit. However, note that supplementary material should be supplementary (rather than central) to the paper, and that reviewers may ignore supplementary material when reviewing the paper (see Appendix \ref{sec:supplemental}). 
Papers that do not conform to the specified length and formatting requirements are subject to be rejected without review. Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source. \section*{Acknowledgments} The acknowledgments should go immediately before the references. Do not number the acknowledgments section ({\em i.e.}, use \verb|\section*| instead of \verb|\section|). Do not include this section when submitting your paper for review. \section{Introduction} Distributed representations of knowledge base entities and concepts have become key elements of many recent NLP systems, for applications from document ranking \cite{Jimeno-Yepes2015} and knowledge base completion \cite{Toutanova2015} to clinical diagnosis code prediction \cite{Choi2016, Choi2016b}. These works have taken two broad tacks for the challenge of learning to represent entities, each of which may have multiple unique surface forms in text. Knowledge-based approaches learn entity representations based on the structure of a large knowledge base, often augmented by annotated text resources \cite{Yamada2016,Cao2017}. Other methods utilize explicitly annotated data, and have been more popular in the biomedical domain \cite{Choi2016, Mencia2016}. Both approaches, however, are often limited by ignoring some or most of the available textual information. Furthermore, such rich structures and annotations are lacking for many specialized domains, and can be prohibitively expensive to obtain. We propose a fully text-based method for jointly learning representations of words, the surface forms of entities, and the entities themselves, from an unannotated text corpus. We use distant supervision from a \textit{terminology}, which maps entities to known surface forms. 
We augment the well-known log-linear skip-gram model \cite{Mikolov2013a} with additional term- and entity-based objectives, and evaluate our learned embeddings in both intrinsic and extrinsic settings. Our joint embeddings clearly outperform prior entity embedding methods on similarity and relatedness evaluations. Entity and word embeddings capture complementary information, yielding improved performance when they are combined. Analogy completion results further illustrate these differences, demonstrating that entities capture domain knowledge, while word embeddings capture morphological and lexical information. Finally, we see that an oracle combination of entity and text embeddings nearly matches a state of the art unsupervised method for biomedical word sense disambiguation that uses complex knowledge-based approaches. However, our embeddings show a significant drop in performance compared to prior work in a newswire disambiguation dataset, indicating that knowledge graph structure contains entity information that a purely text-based approach does not capture. \section{Related Work} Knowledge-based approaches to entity representation are well-studied in recent literature. Several approaches have learned representations from knowledge graph structure alone \cite{Grover2016, Yang2016b,Wang2017}. \newcite{Wang2014b}, \newcite{Yamada2016}, and \newcite{Cao2017} all use a joint embedding method, learning representations of text from a large corpus and entities from a knowledge graph; however, they rely on the disambiguated entity annotations in Wikipedia to align their models. \newcite{Fang2016} investigate heuristic methods for joint embedding without annotated entity mentions, but still rely on graph structure for entity training. The robust terminologies available in the biomedical domain have been instrumental to several recent annotation--based approaches. 
\newcite{DeVine2014} use string matching heuristics to find possible occurrences of known biomedical concepts in literature abstracts, and use the sequence of these noisy concepts (without the document text) as input for skip-gram training. \newcite{Choi2016CRI} and \newcite{Choi2016} use sequences of structured medical observations from patients' hospital stays for context-based learning. Finally, \newcite{Mencia2016} take documents tagged with Medical Subject Heading (MeSH) topics, and use their texts to learn representations of the MeSH headers. These methods are able to draw on rich structured and semi-structured data from medical databases, but discard important textual information, and empirically are limited in the scope of the vocabularies they can embed. \section{Methods} In order to jointly learn entity and text representations from an unannotated corpus, we use distant supervision \cite{Mintz2009} based on known {\it terms}, strings which can represent one or more entities. The mapping between terms and entities is many-to-many; for example, the same infection can be expressed as ``cold'' or ``acute rhinitis'', but ``cold'' can also describe the temperature or refer to chronic obstructive lung disease. Mappings between terms and entities are defined by a terminology.\footnote{ {\it Terminology} is overloaded with both biomedical and lexical senses; we use it here strictly to mean a mapping between terms and entities. } We extracted terminologies from two well-known knowledge bases: \textbf{The Unified Medical Language System} (UMLS; \citealp{Bodenreider2004}); we use the mappings between concepts and strings in the MRCONSO table as our terminology. This yields 3.5 million entities, represented by 7.6 million strings in total. \textbf{Wikipedia}; we use page titles and redirects as our terminology. This yields 9.7 million potential entities (pages), represented by 17.1 million total strings. 
Table~\ref{tbl:terminologies} gives further statistics about the mapping between entities and surface forms in each of these terminologies. \input{tbls/tbl-terminologies} While iterating through the training corpus, we identify any exact matches of the terms in our terminologies.\footnote{ We lowercase and strip special characters and punctuation from both terms and corpus text, and then find all exact matches for the terms. } We allow for overlapping terms: thus, ``in New York City'' will include an occurrence of both the terms ``New York'' and ``New York City.'' Each matched term may refer to one or more entities; we do not use a disambiguation model in preprocessing, but rather assign a probability distribution over the possible entities. \subsection{Model} We extend the skip-gram model of \newcite{Mikolov2013a}, to jointly learn vector representations of words, terms, and entities from shared textual contexts. For a given target word, term, or entity $v$, let $C_v = c_{-k}\dots c_{k}$ be the observed contexts in a window of $k$ words to the left and right of $v$, and let $N_v = n_{-k,1}\dots n_{k,d}$ be the $d$ random negative samples for each context word. Then, the context-based objective for training $v$ is \vspace{-0.33cm} {\small \begin{equation} \label{eq:word-objective} O(v,C_v,N_v) = \sum_{c \in C_v}\textrm{log}\sigma(\vec{c}\cdot\vec{v}) + \sum_{n \in N_v}\textrm{log}\sigma(-\vec{n}\cdot\vec{v}) \end{equation} } \vspace{-0.33cm} \noindent where $\sigma$ is the logistic function. We use a sliding context window to iterate through our corpus. At each step, the word $w$ at the center of the window $C_w$ is updated using $O(w,C_w,N_w)$, where $N_w$ are the randomly-selected negative samples. As terms are of variable token length, we treat each term $t$ as an atomic unit for training, and set $C_t$ to be the context words prior to the first token of the term and following the final token. Negative samples $N_t$ are sampled independently of $N_w$. 
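The context-based objective $O(v, C_v, N_v)$ above can be sketched directly in NumPy. This is an illustrative sketch, not the authors' implementation; vectors are assumed to be dense 1-D arrays.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_objective(v, contexts, negatives):
    # O(v, C_v, N_v): log-sigmoid of dot products with the observed
    # context vectors, plus log-sigmoid of the negated dot products
    # with the randomly drawn negative samples.
    pos = sum(np.log(sigmoid(c @ v)) for c in contexts)
    neg = sum(np.log(sigmoid(-n @ v)) for n in negatives)
    return pos + neg
```

Maximizing this quantity pushes $\vec{v}$ toward its observed contexts and away from the noise samples; the same objective is reused for words, terms, and entities.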
Finally, each term $t$ can represent a set of entities $E_t$. Vectors for these entities are updated using the same $C_t$ and $N_t$ from $t$. Since the entities are latent, we weight updates with uniform probability $|E_t|^{-1}$; attempts to learn this probability did not produce qualitatively different results from the uniform distribution. Thus, letting $T$ be the set of terms completed at $w$, the full objective function to maximize is: \vspace{-0.33cm} {\small \begin{equation} \label{eq:objective} \begin{split} \hat{O} =\ &O(w,C_w,N_w) + \\ &\sum_{t \in T}\Big[O(t,C_t,N_t) + \sum_{e \in E_t}\frac{1}{|E_t|}O(e,C_t,N_t)\Big] \end{split} \end{equation} } \vspace{-0.33cm} Term and entity updates are only calculated when the final token of one or more terms is reached; word updates are applied at each step. To assign more weight to near contexts, we subsample the window size at each step from $[1,k]$. \subsection{Training corpora} \input{tbls/tbl-corpus-polysemys} We train embeddings on three corpora. For our biomedical embeddings, we use 2.6 billion tokens of biomedical abstract texts from the 2016 PubMed baseline (1.5 billion noisy annotations). For comparison to previous open-domain work, we use English Wikipedia (5.5 million articles from the 2018-01-20 dump); we also use the Gigaword 5 newswire corpus \cite{Gigaword5}, which does not have gold entity annotations. As our model does not include a disambiguation module for handling ambiguous term mentions, we also calculate the expected effect of polysemous terms on each entity that we embed using a given corpus. We call this the entity's \textit{corpus polysemy}, and denote it with $CP(e)$. For entity $e$ with corresponding terms $T_e$, $CP(e)$ is given as \begin{equation} CP(e) = \sum_{t\in T_e}\frac{f(t)}{Z}\textrm{polysemy}(t) \end{equation} \noindent where $f(t)$ is the corpus frequency of term $t$, $Z$ is the frequency of all terms in $T_e$, and polysemy$(t)$ is the number of entities that $t$ can refer to. 
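The corpus polysemy statistic is straightforward to compute from corpus term frequencies; a sketch (argument names are ours, not from the released code):

```python
def corpus_polysemy(entity_terms, term_freq, term_polysemy):
    """CP(e): frequency-weighted average number of entities that a mention
    of one of e's terms could refer to. `entity_terms` is T_e, `term_freq`
    maps t -> f(t), and `term_polysemy` maps t -> number of entities t
    can denote."""
    z = sum(term_freq[t] for t in entity_terms)
    return sum(term_freq[t] * term_polysemy[t] for t in entity_terms) / z
```

An entity whose terms are all unambiguous has $CP(e) = 1$, the minimum possible value.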
Table~\ref{tbl:corpus-polysemys} breaks down expected polysemy impact for each corpus. The vast majority of entities experience some polysemy effect in training, but very few have an average ambiguity per mention of 50\% or greater. Most entities with high corpus polysemy are due to a few highly ambiguous generic strings, such as \textit{combinations} and \textit{unknown}. However, some specific terms are also highly ambiguous: for example, \textit{Washington County} refers to 30 different US counties. \subsection{Hyperparameters} For all of our embeddings, we used the following hyperparameter settings: a context window size of 2, with 5 negative samples per word; initial learning rate of 0.05 with a linear decay over 10 iterations through the corpus; minimum frequency for both words and terms of 10, and a subsampling coefficient for frequent words of $10^{-5}$. \subsection{Baselines} We compare the words, terms,\footnote{ Unknown terms were handled by backing off to words. } and entities learned in our model against two prior biomedical embedding methods, using pretrained embeddings from each. \newcite{DeVine2014} use sequences of automatically identified ambiguous entities for skip-gram training, and \newcite{Mencia2016} use texts of documents tagged with MeSH headers to represent the header codes. The most recent comparison method for Wikipedia entities is MPME \cite{Cao2017}, which uses link anchors and graph structure to augment textual contexts. We also include skip-gram vectors as a final baseline; for PubMed, we use pretrained embeddings with optimized hyperparameters from \newcite{Chiu2016b}, and we train our own embeddings with word2vec for both Wikipedia and Gigaword. \section{Evaluations} \input{tbls/tbl-simrel-umnsrs} Following \newcite{Chiu2016a}, \newcite{Cao2017}, and others, we evaluate our embeddings on both intrinsic and extrinsic tasks.
To evaluate the semantic organization of the space, we use the standard intrinsic evaluations of similarity and relatedness, and analogy completion. To explore the applicability of our embeddings to downstream applications, we apply them to named entity disambiguation. Results and analyses for each experiment are discussed in the following subsections. \subsection{Similarity and relatedness} We evaluate our biomedical embeddings on the UMNSRS datasets \cite{Pakhomov2010}, consisting of pairs of UMLS concepts with judgments of similarity (566 pairs) and relatedness (587 pairs), as assigned by medical experts. For evaluating our Wikipedia entity embeddings, we created WikiSRS, a novel dataset of similarity and relatedness judgments of paired Wikipedia entities (people, places, and organizations), as assigned by Amazon Mechanical Turk workers. We followed the design procedure of \citet{Pakhomov2010} and produced 688 pairs each of similarity and relatedness judgments; for further details on our released dataset, please see the Appendix. For each labeled entity pair, we calculated the cosine similarity of their embeddings, and ranked the pairs in order of descending similarity. We report Spearman's $\rho$ on these rankings as compared to the ranked human judgments: Table~\ref{tbl:simrel-umnsrs} shows results for UMNSRS, and Table~\ref{tbl:simrel-wikisrs} for WikiSRS. As the dataset includes both string and disambiguated entity forms for each pair, we evaluate each type of embedding learned in our model. Additionally, as words and entities are embedded in the same space (and thus directly comparable), we experiment with two methods of combining their information. Entity+Word sums the cosine similarities calculated between the entity embeddings and word embeddings for each pair; the Cross setting further adds comparisons of each entity in the pair to the string form of the other.
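The ranking evaluation itself is simple to reproduce; a self-contained sketch (this implementation of Spearman's $\rho$ ignores tie correction, which a production evaluation would handle):

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def spearman(xs, ys):
    # Spearman's rho as Pearson correlation of the rank vectors (no ties).
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((a - my) ** 2 for a in ry))
    return cov / (sx * sy)

def rank_correlation(pairs, human_scores, embeddings):
    """Cosine similarity per entity pair, correlated against human scores."""
    sims = [cosine(embeddings[a], embeddings[b]) for a, b in pairs]
    return spearman(sims, human_scores)
```

The same procedure applies to string-level, entity-level, and combined scores; only the embedding lookup changes.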
\subsubsection{Results} \input{tbls/tbl-simrel-wikisrs} Our proposed method clearly outperforms prior work and text-based baselines on both datasets. Further, we see that the words and entities learned by our model include complementary information, as combining them further increases our ranking performance by a large margin. As the results on UMNSRS could have been due to our model's ability to embed many more entities than prior methods, we also filtered the dataset to the 255 similarity pairs and 260 relatedness pairs that all evaluated entity-level methods could represent;\footnote{ For WikiSRS, all methods covered all pairs. } Table~\ref{tbl:simrel-umnsrs} shows similar gains on this even footing. We follow \newcite{Rastogi2015} in calculating significance, and use their statistics to estimate the minimum required difference for significant improvements on our datasets. \input{tbls/tbl-simrel-methods} In UMNSRS, we found that cosine similarity of entities consistently reflected human judgments of similarity better than of relatedness; this reflects previous observations by \newcite{Agirre2009} and \newcite{Muneeb2015}. Interestingly, we see the opposite behavior in WikiSRS, where relatedness is captured better than similarity in all settings. In fact, we see a number of errors of relatedness in WikiSRS predictions, e.g., ``Hammurabi I'' and ``Syria'' are marked highly similar, while the composers ``A.R. Rahman'' and ``John Philip Sousa'' are marked dissimilar. MPME embeddings tend towards over-relatedness as well (e.g., ranking ``Richard Feynman'' and ``Paris-Sorbonne University'' much more highly than gold labels). Despite better similarity performance, this trend of over-relatedness also holds in biomedical embeddings: for example, \ent{C0027358} (Narcan) and \ent{C0026549} (morphine) are consistently marked highly similar across embedding methods, even though Narcan blocks the effects of opioids like morphine.
\subsubsection{Comparing entities and words} We observe clear differences in the rankings made by entity vs word embeddings. As shown in Table~\ref{tbl:simrel-methods}, highly related entities tend to have high cosine similarity, while word embeddings are more sensitive to lexical overlap and direct co-occurrence. Combining both sources often gives the most intuitive results, balancing lexical effects with relatedness. For example, while the top three pairs by combination in WikiSRS are likely to co-occur, the top three in UMNSRS are pairs of drug choices (antibiotics, ACE inhibitors, and chemotherapy drugs, respectively), only one of which is likely to be prescribed to any given patient at once. These differences also play out in erroneous predictions. Entity embeddings often fix the worst misrankings by words: for example, ``Tony Blair'' and ``United Kingdom'' (gold rank: 28) are ranked highly unrelated (position 633) by words, but entities move this pair back up the list (position 86). However, errors made by entity embeddings are often also made by words: e.g., \ent{C0011175} (dehydration) and \ent{C0017160} (gastroenteritis) are erroneously ranked as highly unrelated by both methods. Interestingly, we find no correlation between the corpus polysemy of entity pairs and ranking performance, indicating that ambiguity of term mentions is not a significant confound for this task. \subsection{Analogy completion} We use analogy completion to further explore the properties of our joint embeddings. Given analogy $a:b::c:d$, the task is to guess $d$ given $(a,b,c)$, typically by choosing the word or entity with highest cosine similarity to $b - a + c$ \cite{Levy2014}. We report accuracy using the top guess (ignoring $a$, $b$, and $c$ as candidates, per \citealp{Linzen2016}).
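The vector-offset completion method, with the three query items excluded as candidates, can be sketched as follows (the toy vectors in the test are ours, chosen only to make the example verifiable):

```python
import math

def complete_analogy(a, b, c, embeddings):
    """Guess d for a:b::c:d: the vocabulary item whose vector is most
    cosine-similar to b - a + c, with a, b, and c excluded as candidates."""
    va, vb, vc = embeddings[a], embeddings[b], embeddings[c]
    target = [y - x + z for x, y, z in zip(va, vb, vc)]
    def cos(u, v):
        num = sum(p * q for p, q in zip(u, v))
        den = (math.sqrt(sum(p * p for p in u))
               * math.sqrt(sum(q * q for q in v))) or 1e-12
        return num / den
    candidates = (w for w in embeddings if w not in (a, b, c))
    return max(candidates, key=lambda w: cos(target, embeddings[w]))
```

In the joint space, the candidate pool can be restricted to words only or to entities only, which is how the word-level and entity-level scores below are obtained.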
\subsubsection{Biomedical analogies} To compare word and entity representations, we use the entity-level biomedical dataset BMASS \cite{Newman-Griffis2017}, which includes both entity and string forms for each analogy. In order to test if words and entities are capturing complementary information, we also include an oracle evaluation, in which an analogy is counted as correct if either words or entities produce a correct response.\footnote{ We use the Multi-Answer setting for our evaluation (a single $(a,b,c)$ triple, but a set of correct values for $d$). } We do not compare against prior biomedical entity embedding methods on this dataset, due to their limited vocabulary. \input{tbls/tbl-bmass-results} Table~\ref{tbl:bmass-results} contrasts the performance of different jointly-trained representations for five relations with the largest performance differences from this dataset. For \textit{gene-encodes-product} and \textit{refers-to}, both of which require structured domain knowledge, entity embeddings significantly outperform word-level representations. Many of the errors made by word embeddings in these relations are due to lexical over-sensitivity: for example, in the renaming analogy \textit{spinal epidural hematoma:epidural hemorrhage::canis familiaris:\underline{\phantom{dog}}}, words suggest latinate completions such as \textit{latrans} and \textit{caballus}, while entities capture the correct \ent{C1280551} (dog). However, on more morphological relations such as \textit{has-free-acid-or-base-form}, words are by far the better option. The success of the oracle combination method for entity and word predictions clearly indicates not only that words and entities capture different knowledge, but that this knowledge is complementary. In the majority of the 25 relations in BMASS, oracle results improved on words and entities alone by at least 10\% relative. In some cases, as with \textit{has-free-acid-or-base-form}, one method does most of the heavy lifting.
In several others, including the challenging (and open-ended) \textit{associated-with}, entities and words capture nearly orthogonal cases, leading to large jumps in oracle performance. \subsubsection{General-domain analogies} No entity-level encyclopedic analogy dataset is available, so we follow \newcite{Cao2017} in evaluating the effect of joint training on words using the Google analogy set \cite{Mikolov2013a}. As shown in Table~\ref{tbl:google-results}, our Wikipedia embeddings roughly match MPME embeddings (which use annotated entity links) on the semantic portion of the dataset, but our ability to train on unannotated Gigaword boosts our results on all relations except \textit{city-in-state}.\footnote{ We failed to precisely replicate the analogy numbers reported by \newcite{Cao2017}; we attribute this primarily to the different training corpus and slightly different preprocessing. } Overall, we find that jointly-trained word embeddings split performance with word-only skip-gram training, but that word-only training tends to get consistently closer to the correct answer. This suggests that terms and entities may conflict with word-level semantic signals. \subsection{Entity disambiguation} Finally, to get a picture of the impact of our embedding method on downstream applications, we investigated entity disambiguation.\footnote{ This task is also referred to as entity linking and entity sense disambiguation. } Given a named entity occurrence in context, the task is to assign a canonical identifier to the entity being referred to: e.g., to mark that ``New York'' refers to the city in the sentence, ``The mayor of New York held a press conference.'' It bears noting that in unambiguous cases, a terminology alone is sufficient to link the correct entity: for example, ``Barack Obama'' can only refer to a single entity, regardless of context.
However, many entity strings (e.g., ``cold'', ``New York'') are ambiguous, necessitating the use of alternate sources of information such as our embeddings to assign the correct entity. \subsubsection{Biomedical abstracts} \label{sssec:msh-wsd} \input{tbls/tbl-analogy-results} We evaluate on the MSH WSD dataset \cite{Jimeno-Yepes2011}, a benchmark for biomedical word sense disambiguation. MSH WSD consists of mentions of 203 ambiguous terms in biomedical literature, with over 30,000 total instances. Each sample is annotated with the set of UMLS entities the term could refer to. We adopt the unsupervised method of \newcite{Sabbir2017}, which combines cosine similarity and projection magnitude of an entity representation $e$ to the averaged word embeddings of its contexts $C_{avg}$ as follows: \begin{equation} \label{eq:msh-wsd} f(e,C_{avg}) = \mathrm{cos}(C_{avg},e)\cdot \frac{||P(C_{avg},e)||}{||e||} \end{equation} The entity maximizing this score is predicted. We compare against concept embeddings learned by \newcite{Sabbir2017}. They used MetaMap \cite{Aronson2010} with the disambiguation module enabled on a curated corpus of 5 million Pubmed abstracts to create a UMLS concept cooccurrence corpus for word2vec training. As shown in Table~\ref{tbl:msh-wsd-results}, our method lags behind theirs, though it clearly beats both random (49.7\% accuracy) and majority class (52\%) baselines. In addition, we leverage our jointly-embedded entities and words by adding in the definition-based model used by \newcite{Pakhomov2016}, which calculates an entity's embedding as the average of definitions of its neighbors in the UMLS hierarchy \cite{McInnes2011}. We use this alternate entity embedding in Equation~\ref{eq:msh-wsd} to calculate a second score that we add to the direct entity embedding score. 
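The scoring rule above can be sketched as follows, assuming $P(C_{avg},e)$ denotes the standard vector projection of the averaged context vector onto the entity vector (our reading of the formulation; the released implementation may differ in detail):

```python
import math

def wsd_score(context_avg, entity_vec):
    """cos(C_avg, e) * ||P(C_avg, e)|| / ||e||, where P is the orthogonal
    projection of C_avg onto the line spanned by e."""
    d = sum(c * e for c, e in zip(context_avg, entity_vec))
    nc = math.sqrt(sum(c * c for c in context_avg))
    ne = math.sqrt(sum(e * e for e in entity_vec))
    cos_sim = d / (nc * ne)
    proj_norm = abs(d) / ne  # ||(d / ||e||^2) * e|| = |d| / ||e||
    return cos_sim * proj_norm / ne

def disambiguate(context_avg, candidate_vecs):
    # Predict the candidate entity that maximizes the score.
    return max(candidate_vecs,
               key=lambda e: wsd_score(context_avg, candidate_vecs[e]))
```

The definition-based variant simply substitutes an alternate entity vector (the averaged definition embedding) into the same scoring function.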
This yields a large performance boost of over 6\% absolute, indicating that using entities and words together makes up much of the gap between our distantly supervised embeddings and the external resources used by \newcite{Sabbir2017}. Using the definition-based method alone with our jointly-embedded words, we see a significant increase over \newcite{Pakhomov2016}, indicating the benefits of joint training. However, the combined entity and definition model still yields a statistically significant 2\% boost in accuracy over definitions alone. Finally, we evaluate an oracle combination that reports correct if either entity or definition embeddings achieve the correct result; as shown in the last row of Table~\ref{tbl:msh-wsd-results}, this combination outperforms the entity-only method of \newcite{Sabbir2017}, and approaches their state-of-the-art result that combines entity embeddings with a knowledge-based approach built on the structure of the UMLS. \input{tbls/tbl-msh-wsd-results} Specific errors shed more light on these differences. The definition-based method performs better in many cases where the surface form is a common word, such as {\it coffee} (68\% definition accuracy vs 28\% entity accuracy) and {\it iris} (93\% definition accuracy vs 35\% entity accuracy). Entities outperform on some more technical cases, such as {\it potassium} (74\% entity accuracy vs 49\% definition accuracy). Combining both approaches in the joint model recovers performance on several cases of low entity accuracy; for example, joint accuracy on {\it coffee} is 68\%, and on {\it lupus} (53\% entity accuracy), joint performance is 60\%. \subsubsection{Newswire entities} AIDA \cite{Hoffart2011} is a standard dataset for entity linking in newswire, consisting of approximately 30,000 entities linked to Wikipedia page IDs. To reduce the search space, \newcite{Pershina2015} provided a set of candidate entities for each mention, which we use for our experiments.
The MPME model of \newcite{Cao2017} achieves near state-of-the-art accuracy on AIDA with this candidate set, using the mention sense distributions and full document context included in the model. As our embeddings are trained without explicit entity annotations, we instead use the same cosine similarity and projection model discussed in Section~\ref{sssec:msh-wsd} for this task. In contrast to our results on the biomedical data, we see performance far below the baseline on these data, as shown in Table~\ref{tbl:aida-results}. \input{tbls/tbl-aida-results} \input{tbls/tbl-poly-neighbors} However, we improve this performance slightly by multiplying by the similarity between the entity embedding and the average word embedding of the mention itself; this gives us roughly a further 4\% in accuracy for both Wikipedia and Gigaword embeddings. Using the surface form recovers several cases where entities alone yield unlikely options, e.g., Roman-era Britain instead of the United Kingdom for {\it Britain}. However, it also introduces lexical errors: for example, {\it British} in several cases refers to the United Kingdom, but the British people are often selected instead. \input{figures/fig-neighbor-sty} We note that this extra score actually hurts performance on MSH WSD, where the terms are curated to be highly ambiguous, in contrast to the shorter contexts and clearer terms used in AIDA. Two other issues bear consideration in this evaluation. Prior approaches to the AIDA dataset, including MPME, make use of the global context of entity mentions within a document to improve predictions; by using local context only, we observe some inconsistent predictions, such as selecting the cricket world cup instead of the FIFA competition for {\it world cup}, in a document discussing football. Additionally, in contrast to the MSH WSD dataset, many instances in AIDA have several highly-related candidates that introduce some confusion in our results.
For example, {\it Ireland} could refer to the United Kingdom of Great Britain and Ireland, the island of Ireland, or the Republic of Ireland. As our embedding training does not include gold entity links, cases like this are often errors in our predictions. \section{Analysis of joint embeddings} To get a more detailed picture of our joint embedding space, we investigate nearest neighbors for each point by cosine similarity. As entities in the UMLS are assigned one or more of over 120 semantic types, we first examine how intermixed these types are in our biomedical embeddings. Figure~\ref{fig:neighbor-sty} shows how often an entity's nearest neighbor shares at least one semantic type with it, across the three biomedical embedding methods we evaluated. As each set of embeddings has a different vocabulary, we also restrict to the entities that all three can embed (approximately 11,000). We see that our method puts entities of the same type together nearly 40\% of the time, despite embedding over 270 thousand entities. On an even footing, our method puts types together significantly more often than \newcite{Mencia2016} (McNemar's test; $p<0.05$), and on a par with \newcite{DeVine2014}, despite using less entity-level information in training. Within our embeddings, major biological types such as bacteria, eukaryotes, mammals, and viruses all have more than 60\% of neighbors with the same type, while less structured clinical types such as Clinical Attribute and Daily or Recreational Activity are in the 10--20\% range. Corpus polysemy does not appear to have any effect on this type matching (mean polysemy of 1.5 for both matched and non-matched entities). Expanding to include the words and terms in the joint embedding space, however, we see definite qualitative effects of corpus polysemy on entity nearest neighbors.
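The type-coherence analysis above (how often an entity's nearest neighbor shares at least one semantic type with it) can be sketched as follows; this brute-force $O(n^2)$ version is for illustration only, and a real run over hundreds of thousands of entities would use an approximate nearest-neighbor index:

```python
import math

def nearest_neighbor(query, embeddings):
    # Nearest neighbor by cosine similarity, excluding the query itself.
    def cos(u, v):
        num = sum(a * b for a, b in zip(u, v))
        return num / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    others = (k for k in embeddings if k != query)
    return max(others, key=lambda k: cos(embeddings[query], embeddings[k]))

def type_match_rate(embeddings, semantic_types):
    """Fraction of entities whose nearest neighbor shares at least one
    semantic type with them; `semantic_types` maps entity -> set of types."""
    hits = 0
    for e in embeddings:
        nn = nearest_neighbor(e, embeddings)
        if semantic_types[e] & semantic_types[nn]:
            hits += 1
    return hits / len(embeddings)
```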
Table~\ref{tbl:poly-neighbors} gives nearest word, term, entity, and joint neighbors to two biomedical entities: \ent{C0009443} (the common cold; $CP=6.71$) and \ent{C0242797} (home health aides; $CP=1$). For the more polysemous \ent{C0009443}, where 95\% of its mentions are of the word ``cold'' (polysemy=7), word-level neighbors are mostly nonsensical, while term neighbors are more logical, and entity neighbors reflect different senses of ``cold''. By contrast, for the non-polysemous \ent{C0242797}, which is represented by 14 different unambiguous strings, the neighboring words, terms, and entities are all very clearly in line with the theme of home health aides. Notably, the common and unambiguous terms for \ent{C0242797} are its nearest neighbors out of all points, while only two of the top 10 neighbors to \ent{C0009443} are terms. \section{Discussion} \newcite{Faruqui2016} observe that similarity and relatedness are not clearly distinguished in semantic embedding evaluations, and that it is unclear exactly how vector-space models should capture them. We see more evidence of this, as cosine similarity seems to be capturing a mix of the two properties in our data. This mix is clearly informative, but it empirically favors relatedness judgments, and cosine similarity is insufficient to separate the two properties. Corpus polysemy plays a qualitative role in our embedding model, but less of a quantitative one. It does not correlate with similarity and relatedness judgments or entity disambiguation decisions, but it clearly affects the organization of the embedding space, by embedding entities with high corpus polysemy in less coherent areas than those with low polysemy. \newcite{Linzen2016} points out that for analogy completion, local neighborhood structure can interfere with standard methods; how this neighborhood structure affects predictions in more complex tasks is an open question. Overall, we find two main advantages to our model over prior work.
First, by only using a terminology and an unannotated corpus, we are able to learn entity embeddings from larger and more diverse data; for example, embeddings learned from Gigaword (which has no entity annotations) outperform embeddings learned on Wikipedia in most of our experiments. Second, by embedding entities and text into a joint space, we are able to leverage complementary information to get higher performance in both intrinsic and extrinsic tasks; an oracle model nearly matches a state-of-the-art ensemble vector and knowledge-based model for biomedical word sense disambiguation. However, our other entity disambiguation results demonstrate that there is additional entity-level information that we are not yet capturing. In particular, it is unclear whether our low performance on disambiguating newswire entities is due to a disambiguation model mismatch, a lack of information in our embeddings, or a combination of both. \section{Conclusions} We present a method for jointly learning embeddings of entities and text from an arbitrary unannotated corpus, using only a terminology for distant supervision. Our learned embeddings better capture both biomedical and encyclopedic similarity and relatedness than prior methods, and approach state-of-the-art performance for unsupervised biomedical word sense disambiguation. Furthermore, entities and words learned jointly with our model capture complementary information, and combining them improves performance in all of our evaluations. We make an implementation of our method available at {\tt github.com/OSU-slatelab/JET}, along with the source code used for our evaluations and our pretrained entity embeddings. Our novel Wikipedia similarity and relatedness datasets are available at the same source. \section{WikiSRS construction details} \label{app:wikisrs} We followed a similar process to \newcite{Pakhomov2010} in selecting the entity pairs to be used in our dataset. 
We first filtered the full list of Wikipedia pages to the subset that we learned embeddings for, and then used the entity types assigned to these pages in YAGO \cite{YAGO3} to restrict to only entities labeled with WordNet types organization or person, or with the YAGO type geoEntity. For each pairing of these categories (Organization-Organization, Organization-Place, Organization-Person, Place-Place, Place-Person, and Person-Person), we manually selected 30 pairs of entities for each of the following relatedness categories: Completely Unrelated, Somewhat Unrelated, Somewhat Related, and Highly Related. This selection produced the list of 720 entity pairs we used for our Mechanical Turk surveys. We augmented each survey of 30 questions with 4 manually-created validation pairs using common entities (e.g., London, New York), each of which was categorized as Highly Related or Completely Unrelated. We included these validation questions at random indices in our surveys. To evaluate if participants were reading the questions, we binned their ratings on these validation questions into 0--25 (Completely Unrelated), 26--50 (Somewhat Unrelated), 51--75 (Somewhat Related), and 76--100 (Highly Related). If a participant's ratings disagreed with ours on multiple validation questions, we discarded their data (we allowed disagreement on a single question, as some validation questions had high variance in responses among reliable annotators). We recruited 6 participants for each survey, for a total of 34 unique participants across the 48 HITs. Participants were presented with a message describing the survey and stating that by clicking the button at the bottom of the message to begin the survey, they were providing informed consent to participate. Identifying participant data was not collected, and we used only the anonymous worker IDs provided by the Mechanical Turk interface to collate our data and remunerate workers.
Participants were asked optional demographic questions about their age bracket and native language at the end of the survey; we did not end up using age information, but filtered our participants for those that self-reported English reading proficiency. The majority responded to a single HIT, while 3 completed more than 20. We discarded all submissions from 3 participants, as they did not report English reading proficiency (1) or did not satisfy the validation questions (2). All participants were paid state minimum wage at the time of the study for their time, regardless of whether they answered demographic questions or if we used their data in the final sample. Collection of this data was approved under Ohio State University IRB protocol 2017E0050. To generate the final dataset, we assessed each participant's responses to the validation questions in each survey. We kept surveys for which we had at least 4 participants with satisfactory answers to the validation questions; this resulted in discarding 1 of the 24 HITs for each task. Due to 2 repeated pairs, this gave us final dataset sizes of 688 pairs for each of similarity and relatedness, 658 of which were shared between the tasks. \input{tbls/tbl-wikisrs-icc} Following \newcite{Pakhomov2010}, we assessed inter-annotator agreement using the intraclass correlation coefficient (ICC). Table~\ref{tbl:wikisrs-icc} gives the values for our datasets. The numbers reported are within the moderate range, and they correspond to the ICC numbers reported by Pakhomov et al.\ on the UMNSRS datasets. The source code of our Mechanical Turk interface and data files used to generate the tasks are available at {\tt github.com/OSU-slatelab/WikiSRS}. \section*{Acknowledgments} We would like to thank Chaitanya Shivade for helpful discussions, and all of our anonymous reviewers for their invaluable advice. 
This research was supported in part by the Intramural Research Program of the National Institutes of Health, Clinical Research Center and through an Inter-Agency Agreement with the US Social Security Administration.
\section{Additional Descriptions of the Classes $\mathcal{K}_{\Exp}$ and $\mathcal{K}_{{\Exp}{+}}$} \label{sect:additional} In this section we present additional descriptions of the class $\mathcal{K}_{{\Exp}{+}}$ that follow from our main results. \subsection{Generating $\mathcal{K}_{{\Exp}{+}}$ from $\mathcal{S}$} In this subsection we show that $\mathcal{K}_{{\Exp}{+}}$ is the smallest class that contains $\mathcal{S}$ and is closed under taking first-order reducts, finite covering structures, and adding constants. \begin{lemma}\label{gen_from_set} The following inclusions hold. \begin{enumerate} \item $\mathcal{U}^* \subseteq (R\circ F\circ C)(\mathcal{S})$. \item $\mathcal{U}_{\nf} \subseteq (R\circ F)(\mathbb{N})$. \end{enumerate} \end{lemma} \begin{proof} Let $\mathfrak{A}\in \mathcal{U}^*$ and let $O_1,\dots,O_m$ be the orbits of $\operatorname{Aut}(\mathfrak{A})$ so that $O_1=\{y_1\},\dots,O_l=\{y_l\}$ are the finite orbits. Let $F=\{y_1,\dots,y_l\}$. Pick a bijection $b_i$ between $O_i$ and $\mathbb{N}$ for each $i \in \{l+1,\dots,m\}$. Let $b=\bigcup_{i=l+1}^{m}b_i$ and let $E:=\{(x,y) \mid x,y\in \bigcup_{i=l+1}^{m}O_i, b(x)=b(y)\}$. Let $\mathfrak{C}$ be the structure $\mathfrak{A}$ expanded by the relation $E$. Then $\Delta(\mathfrak{C})=E\cup \{(x,x) \mid x\in F\}$ and $\operatorname{Aut}(\mathfrak{C}/\Delta(\mathfrak{C}))=\operatorname{Sym}(C/\Delta(\mathfrak{C}))_F$. Therefore, $\mathfrak{C}/\Delta(\mathfrak{C})\in C(\mathcal{S})$, which shows (1). If $\mathfrak{A}$ has no finite orbits, then $l=0$ and $\mathfrak{C}/\Delta(\mathfrak{C})$ is bi-definable with $\mathbb{N}$, which shows (2). \end{proof} \begin{lemma}\label{stab_finite} The classes $\mathcal{K}_{\Exp}$ and $\mathcal{K}_{{\Exp}{+}}$ are closed under $C$. \end{lemma} \begin{proof} We need to show that if a permutation group $G$ on $X$ is in $\mathcal{G}_{\Exp}$ or in $\mathcal{G}_{{\Exp}{+}}$, then so is $G_F$ for any finite $F \subset X$. 
In the case of $\mathcal{G}_{\Exp}$ this is clear. For the class $\mathcal{G}_{{\Exp}{+}}$ this is stated in Lemma \ref{stabil_finite}. \end{proof} \begin{lemma}\label{fu_uf} For any class $\mathcal C$ of structures \begin{align} (F\circ R) ({\mathcal C}) & \subseteq (R\circ F) ({\mathcal C}). \label{eq:fu_uf} \\ (F\circ R^{<\infty}) ({\mathcal C}) & \subseteq (R^{<\infty}\circ F) ({\mathcal C}). \label{eq:fu_uf2} \end{align} \end{lemma} \begin{proof} Let $\mathfrak{C}$ be a structure, let $\mathfrak{B}$ be a first-order reduct of $\mathfrak{C}$, and let $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ be a finite cover. Let $G:=\operatorname{Aut}(\mathfrak{A}) \cap \mu_\pi^{-1}(\operatorname{Aut}(\mathfrak{C}))$. Then $G$ is closed. So $G$ is the automorphism group of some expansion $\mathfrak{D}$ of $\mathfrak{A}$; in particular, $\mathfrak{A}$ is a first-order reduct of $\mathfrak{D}$, and $\pi \colon \mathfrak{D} \rightarrow \mathfrak{C}$ is a finite cover. Hence, $\mathfrak{A} \in (R\circ F)(\mathfrak{C})$. Moreover, if $\mathfrak{C}$ is $\omega$-categorical and $[\operatorname{Aut}(\mathfrak{B}):\operatorname{Aut}(\mathfrak{C})]$ is finite, that is, $\operatorname{Aut}(\mathfrak{B})=g_1\operatorname{Aut}(\mathfrak{C})\cup \dots \cup g_n\operatorname{Aut}(\mathfrak{C})$ for some $g_1,\dots,g_n\in \operatorname{Aut}(\mathfrak{B})$, then $\operatorname{Aut}(\mathfrak{A})=h_1\operatorname{Aut}(\mathfrak{D})\cup \dots \cup h_n\operatorname{Aut}(\mathfrak{D})$ for some $h_i$ with $\mu_\pi(h_i)=g_i$. In particular, $[\operatorname{Aut}(\mathfrak{A}):\operatorname{Aut}(\mathfrak{D})]$ is finite. \end{proof} If $\mathfrak{A}$ has no finite orbits, then no first-order reduct of $\mathfrak{A}$ has finite orbits either, so $R({\mathcal C}_{\nf}) \subseteq (R(\mathcal C))_{\nf}$ for any class $\mathcal C$. For $\mathcal C = \mathcal{U}$ we even get that \begin{align} R(\mathcal{U}_{\nf}) = R(\mathcal{U})_{\nf} \label{eq:r-unf_ru-nf} \end{align} (see Corollary~\ref{wreath_infinite}). 
Since a finite covering structure $\mathfrak{A}$ of $\mathfrak{B}$ has finite orbits if and only if $\mathfrak{B}$ has finite orbits, we have for any class ${\mathcal C}$ of structures that \begin{align} F({\mathcal C}_{\nf}) = (F(\mathcal C))_{\nf}. \label{eq:fnf_nff} \end{align} \begin{theorem}\label{gen_from_set_final} The following equalities hold. \begin{enumerate} \item $\mathcal{K}_{{\Exp}{+}}=(R\circ F\circ C)(\mathcal{S})$, \item $(\mathcal{K}_{{\Exp}{+}})_{\nf}=(R \circ F)(\mathbb{N})$. \end{enumerate} \end{theorem} \begin{proof} Clearly, $\mathbb{N} \in \mathcal{S} \subset \mathcal{K}_{\Exp} \subset \mathcal{K}_{{\Exp}{+}}$ and $\mathcal{K}_{\Exp}$ and $\mathcal{K}_{{\Exp}{+}}$ are closed under $R$. The closure of $\mathcal{K}_{{\Exp}{+}}$ under $F$ follows from $\mathcal{K}_{{\Exp}{+}} = (F \circ R)(\mathcal{U}^*)$ (Theorem~\ref{main_kexpp}) and the closure under $C$ is stated in Lemma~\ref{stab_finite}. Finally, $(\mathcal{K}_{{\Exp}{+}})_{\nf}$ is closed under $R$ since $R((\mathcal{K}_{{\Exp}{+}})_{\nf}) \subseteq (R(\mathcal{K}_{{\Exp}{+}}))_{\nf} = (\mathcal{K}_{{\Exp}{+}})_{\nf}$. This shows the inclusions $\supseteq$ in $(1)$ and in $(2)$. For the converse containments observe that \begin{align*} \mathcal{K}_{{\Exp}{+}} & =(F \circ R)(\mathcal{U}^*) && \text{(by Theorem~\ref{main_kexpp})} \\ & \subseteq (F\circ R \circ R \circ F \circ C)(\mathcal{S}) && \text{(by Lemma~\ref{gen_from_set} (1))} \\ & = (F \circ R \circ F \circ C)(\mathcal{S}) \\ & \subseteq (R\circ F \circ F \circ C)(\mathcal{S}) && \text{(by~(\ref{eq:fu_uf}) in Lemma~\ref{fu_uf})} \\ & = (R\circ F \circ C)(\mathcal{S}) \end{align*} which shows $(1)$. 
Moreover, \begin{align*} (\mathcal{K}_{{\Exp}{+}})_{\nf} & = ((F \circ R)(\mathcal{U}^*))_{\nf} && \text{(by Theorem~\ref{main_kexpp})} \\ & = F((R(\mathcal{U}))_{\nf}) && \text{(by~(\ref{eq:fnf_nff}))} \\ & = (F \circ R) (\mathcal{U}_{\nf}) && \text{(by~(\ref{eq:r-unf_ru-nf}))} \\ & \subseteq (F \circ R \circ R \circ F)(\mathbb{N}) && \text{(by Lemma~\ref{gen_from_set} (2))} \\ & \subseteq (R \circ F)(\mathbb{N}) && \text{(as above)} \end{align*} which shows $(2)$. \end{proof} \subsection{Model-complete cores} The model-complete core of an $\omega$-categorical structure has already been defined in the introduction. In this subsection we show that $\mathcal{K}_{{\Exp}{+}}$ is the smallest class of structures that contains $\mathbb{N}$ and is closed under taking first-order reducts, finite covers, and model-complete cores. \begin{lemma}\label{mc_core_orbit} Let $\mathfrak{A}$ be an $\omega$-categorical structure and $\mathfrak{B}$ its model-complete core. Then $o_n(\mathfrak{B})\leq o_n(\mathfrak{A})$ and $o^i_n(\mathfrak{B})\leq o^i_n(\mathfrak{A})$ for all $n \in {\mathbb N}$. \end{lemma} \begin{proof} For $o_n$ this is Proposition~3.6.24 in~\cite{Bodirsky-HDR}. The statement for $o^i_n$ can be shown analogously. \end{proof} \begin{corollary}\label{kexpp_closed_under_m} The classes $\mathcal{K}_{\Exp}$ and $\mathcal{K}_{{\Exp}{+}}$ are closed under $M$. \end{corollary} \begin{remark} Analogous statements hold for the \emph{model companion} instead of the model-complete core. \end{remark} \begin{definition} Let $\mathfrak{A}$ be a structure with signature $\tau$ and let $F \subseteq A$. Then let $\mathfrak{A}(F)$ denote the following $\tau$-structure. \begin{itemize} \item The domain of $\mathfrak{A}(F)$ is $A(F) := (F \times \mathbb{N}) \sqcup ((A \setminus F) \times \{0\})$. \item For each $R \in \tau$ of arity $k$ the relation $R^{\mathfrak{A}(F)}$ is defined as $\{((x_1,n_1),\dots,(x_k,n_k)) \mid (x_1,\dots,x_k) \in R^{\mathfrak{A}}\}$. 
\end{itemize} \end{definition} \begin{remark} The map $f \colon A(F) \to A$ defined by $(x,n) \mapsto x$ is a homomorphism from $\mathfrak{A}(F)$ to $\mathfrak{A}$. Conversely, the mapping $g \colon A \to A(F)$ defined by $x \mapsto (x,0)$ is a homomorphism from $\mathfrak{A}$ to $\mathfrak{A}(F)$ (in fact it is an embedding). Therefore, $\mathfrak{A}$ and $\mathfrak{A}(F)$ are homomorphically equivalent. \end{remark} \begin{remark}\label{reduct_split} It follows directly from the definition that if $\mathfrak{A}$ is a first-order reduct of $\mathfrak{B}$, and $F \subseteq A$, then $\mathfrak{A}(F)$ is a first-order reduct of $\mathfrak{B}(F)$ (since we can use the same definitions). \end{remark} \begin{lemma}\label{get_rid_of_finite} Let $\mathfrak{A}\in \mathcal{U}^*$ and let $F$ be the union of the finite orbits of $\mathfrak{A}$. Then $\mathfrak{A}(F) \in \mathcal{U}_{\nf}$. \end{lemma} \begin{proof} Let $O_1,\dots,O_k$ be the orbits of $\operatorname{Aut}(\mathfrak{A})$. Then $\operatorname{Aut}(\mathfrak{A})=\prod_{i=1}^k {\operatorname{Sym}(O_i)}$ by Lemma \ref{unary}. Let $O_1=\{y_1\},\dots,O_l=\{y_l\}$ be the finite orbits of $\operatorname{Aut}(\mathfrak{A})$. Then $\operatorname{Aut}(\mathfrak{A}(F))=\prod_{i=1}^l{\operatorname{Sym}(\{y_i\}\times {\mathbb N})} \times \prod_{i=l+1}^k{\operatorname{Sym}(O_i \times \{0\})}$ has no finite orbits and therefore $\mathfrak{A}(F)\in \mathcal{U}_{\nf}$. \end{proof} \begin{lemma}\label{get_rid_of_finite2} Let $\mathfrak{A}\in R(\mathcal{U})$ and let $F$ be the union of the finite orbits of $\mathfrak{A}$. Then $\mathfrak{A}(F)\in R(\mathcal{U})_{\nf}$. \end{lemma} \begin{proof} Let $C_1,\dots,C_n$ be the classes of $\nabla(\mathfrak{A})$. Then $\prod_{i=1}^n{\operatorname{Sym}(C_i)} \subseteq \operatorname{Aut}(\mathfrak{A})$ by Lemma \ref{unary_reduct}, and hence, $\mathfrak{A}$ is a first-order reduct of $\mathfrak{B} \in \mathcal{U}^*$. 
Lemma~\ref{get_rid_of_finite} implies that $\mathfrak{B}(F) \in \mathcal{U}_{\nf}$. Remark \ref{reduct_split} implies that $\mathfrak{A}(F)$ is a first-order reduct of $\mathfrak{B}(F)$. Hence, $\mathfrak{A}(F) \in R(\mathcal{U}_{\nf}) = R({\mathcal{U}})_{\nf}$. \end{proof} \begin{corollary}\label{ru_model} Every structure $\mathfrak{A}\in R(\mathcal{U})$ is interdefinable with a model-complete core of a structure in $R(\mathcal{U})_{\nf}$, i.e., $$R(\mathcal{U}) \subseteq M(R(\mathcal{U})_{\nf}).$$ \end{corollary} \begin{proof} Let $\mathfrak{A}^*$ be the expansion of $\mathfrak{A}$ by all first-order definable relations. Then $\mathfrak{A}^*$ is a model-complete core and interdefinable with $\mathfrak{A}$. Let $F$ be the union of the finite orbits of $\operatorname{Aut}(\mathfrak{A}^*) = \operatorname{Aut}(\mathfrak{A})$. By Lemma \ref{get_rid_of_finite2} we know that $\mathfrak{A}^*(F)\in R(\mathcal{U})_{\nf}$. Since $\mathfrak{A}^*$ and $\mathfrak{A}^*(F)$ are homomorphically equivalent it follows that $\mathfrak{A}^*$ is the model-complete core of $\mathfrak{A}^*(F)$. Hence, $\mathfrak{A} \in M(R(\mathcal{U})_{\nf})$. \end{proof} \begin{corollary}\label{fru_model} Let $\mathfrak{B}\in R(\mathcal{U}^*)$ and let $\pi \colon\mathfrak{A}\rightarrow \mathfrak{B}$ be a finite cover. Then $\mathfrak{A}$ is interdefinable with a model-complete core of a structure in $F(R(\mathcal{U}^*))_{\nf}$, i.e., $$F(R(\mathcal{U}^*)) \subseteq M(F(R(\mathcal{U}^*))_{\nf}).$$ \end{corollary} \begin{proof} As in the previous proof let $\mathfrak{A}^*$ be the expansion of $\mathfrak{A}$ by all relations that are first-order definable in $\mathfrak{A}$, and let $F$ be the union of the finite orbits of $\operatorname{Aut}(\mathfrak{A}) = \operatorname{Aut}(\mathfrak{A}^*)$. Then $\pi(F)$ is the union of the finite orbits of $\operatorname{Aut}(\mathfrak{B})$. By Lemma \ref{get_rid_of_finite2} we know that $\mathfrak{B}(\pi(F))\in R(\mathcal{U})_{\nf}$. 
Let $\pi' \colon A(F) \rightarrow B(\pi(F))$ be defined as \begin{align*} \pi'(x,n) := \begin{cases} (\pi(x),n) & \text{if } x\in F \\ (\pi(x),0) & \text{otherwise.} \end{cases} \end{align*} Then it is easy to see that $\pi' \colon \mathfrak{A}(F) \rightarrow \mathfrak{B}(\pi(F))$ is a finite covering map. Hence, $\mathfrak{A}(F)\in F(R(\mathcal{U}^*))$. By Lemma~\ref{get_rid_of_finite} the structure $\mathfrak{A}(F)$ has no finite orbits, and as before we can conclude that $\mathfrak{A}$ is the model-complete core of $\mathfrak{A}(F)$. \end{proof} \begin{lemma}\label{gen_mccore} The following identities hold. \begin{enumerate} \item $\mathcal{K}_{\Exp}=M((\mathcal{K}_{\Exp})_{\nf})$, \item $\mathcal{K}_{{\Exp}{+}}=M((\mathcal{K}_{{\Exp}{+}})_{\nf})$, \item $\mathcal{K}_{{\Exp}{+}}=(M\circ R\circ F)(\mathbb{N})$. \end{enumerate} \end{lemma} \begin{proof} The containments ``$\supseteq$'' in items (1) and (2) follow from Corollary \ref{kexpp_closed_under_m}. By Theorems \ref{main_kexp} and \ref{main_kexpp} we know that $\mathcal{K}_{\Exp}=R(\mathcal{U})$ and $\mathcal{K}_{{\Exp}{+}}=F(R(\mathcal{U}))$. Then the containments ``$\subseteq$'' in items (1) and (2) follow from Corollaries \ref{ru_model} and \ref{fru_model}. To show item (3), observe that \begin{align*} \mathcal{K}_{{\Exp}{+}} & = M((\mathcal{K}_{{\Exp}{+}})_{\nf}) && \text{(by item (2) of the lemma)} \\ & = M(R(F(\mathbb{N}))) && \text{(by item (2) of Theorem~\ref{gen_from_set_final})} \end{align*} \end{proof} \subsection{Summary} \label{sect:summary} The following theorem summarizes some of the equivalent characterizations of the classes $\mathcal{K}_{\Exp},\mathcal{K}_{{\Exp}{+}},(\mathcal{K}_{\Exp})_{\nf},(\mathcal{K}_{{\Exp}{+}})_{\nf}$. 
\begin{theorem}\label{summary} \begin{align}\mathcal{K}_{\Exp} & =R(\mathcal{U})=R^{<\infty}(\mathcal{U}^*) \label{eq:kexp} \\ (\mathcal{K}_{\Exp})_{\nf}& =R(\mathcal{U}_{\nf})=R^{<\infty}(\mathcal{U}_{\nf}) \label{eq:kexpnf} \\ \mathcal{K}_{{\Exp}{+}} & =(F\circ R) (\mathcal{U})=(R\circ F)(\mathcal{U}) \label{eq:kexpp} \\ & =(F\circ R^{<\infty})(\mathcal{U}^*)=(R^{<\infty}\circ F)(\mathcal{U}^*) \nonumber \\ & =(R\circ F\circ C)(\mathcal{S})=(M\circ R\circ F)(\mathbb{N}) \nonumber \\ (\mathcal{K}_{{\Exp}{+}})_{\nf} & =(R\circ F)(\mathbb{N}) =(F\circ R)(\mathcal{U}_{\nf}) \label{eq:kexppnf} \\ & =(F\circ R^{<\infty})(\mathcal{U}_{\nf}) =(R^{<\infty}\circ F)(\mathcal{U}_{\nf}) \nonumber \end{align} \end{theorem} \begin{proof} (\ref{eq:kexp}): Corollary~\ref{ru_rfiniteu} states that $R(\mathcal{U}) = R^{<\infty}(\mathcal{U}^*)$ and Theorem \ref{main_kexp} that $\mathcal{K}_{\Exp}=R(\mathcal{U})$. (\ref{eq:kexpnf}): we have $R^{<\infty}(\mathcal{U}_{\nf}) \subseteq R(\mathcal{U}_{\nf}) \subseteq R(\mathcal{U})_{\nf} = (\mathcal{K}_{\Exp})_{\nf}$ by (\ref{eq:kexp}), and $R(\mathcal{U})_{\nf} \subseteq R^{<\infty}(\mathcal{U}_{\nf})$ can be shown as in the proof of Corollary~\ref{ru_rfiniteu}. (\ref{eq:kexpp}): By Theorem~\ref{main_kexpp} we know that $\mathcal{K}_{{\Exp}{+}}=(F\circ R)(\mathcal{U})=(R^{<\infty}\circ F)(\mathcal{U}^*)$. This also implies that the class $\mathcal{K}_{{\Exp}{+}}$ is closed under $F$, and it is obviously closed under $R$, so $$\mathcal{K}_{{\Exp}{+}} = (R^{<\infty}\circ F)(\mathcal{U}^*) \subseteq (R \circ F)(\mathcal{U}) \subseteq \mathcal{K}_{{\Exp}{+}}.$$ The equality $(F\circ R)(\mathcal{U})=(F\circ R^{<\infty})(\mathcal{U}^*)$ follows from the fact that $R(\mathcal{U})=R^{<\infty}(\mathcal{U}^*)$ (Corollary~\ref{ru_rfiniteu}). 
The equality $\mathcal{K}_{{\Exp}{+}} = (R \circ F \circ C)(\mathcal{S})$ is item (1) of Theorem~\ref{gen_from_set_final} and the equality $\mathcal{K}_{{\Exp}{+}} = (M \circ R \circ F)(\mathbb{N})$ is item (3) of Lemma~\ref{gen_mccore}. (\ref{eq:kexppnf}): the proof of Theorem~\ref{gen_from_set_final} (2) shows the following equalities: $(\mathcal{K}_{{\Exp}{+}})_{\nf}=(F\circ R)(\mathcal{U}_{\nf}) = (R\circ F)(\mathbb{N})$. Finally, \begin{align*} (\mathcal{K}_{{\Exp}{+}})_{\nf} = (F\circ R)(\mathcal{U}_{\nf}) & = (F\circ R^{<\infty})(\mathcal{U}_{\nf}) && \text{(as in Corollary~\ref{ru_rfiniteu})} \\ & \subseteq (R^{<\infty}\circ F)(\mathcal{U}_{\nf}) && \text{(by (\ref{eq:fu_uf}))} \\ & \subseteq (\mathcal{K}_{{\Exp}{+}})_{\nf} \end{align*} and thus $(\mathcal{K}_{{\Exp}{+}})_{\nf}=(F\circ R^{<\infty})(\mathcal{U}_{\nf}) =(R^{<\infty}\circ F)(\mathcal{U}_{\nf})$. \end{proof} \section{Consequences for Constraint Satisfaction} In the introduction we have already mentioned that for finite structures $\mathfrak{A}$ there is a complexity dichotomy for $\operatorname{CSP}(\mathfrak{A})$: these problems are in P or NP-complete. Such a complexity dichotomy has also been conjectured for the much larger class of first-order reducts of \emph{finitely bounded} homogeneous structures. A structure $\mathfrak{B}$ with finite relational signature $\tau$ is called \emph{finitely bounded} if there exists a finite set of finite $\tau$-structures $\mathcal F$ such that a finite $\tau$-structure $\mathfrak{A}$ embeds into $\mathfrak{B}$ if and only if no structure from $\mathcal F$ embeds into $\mathfrak{A}$. 
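Finite boundedness is an effective condition: a finite $\tau$-structure belongs to the age of $\mathfrak{B}$ if and only if none of the finitely many bounds in $\mathcal F$ embeds into it, which can be tested by brute force. The following Python sketch is our own illustration (all names are ours, not from the paper); it represents a finite structure as a pair of a domain list and a dictionary of relations.

```python
from itertools import permutations, product

def embeds(small, big, arities):
    """Brute force: is there an embedding of `small` into `big`, i.e. an
    injection preserving each relation and its complement?"""
    (dom_s, rel_s), (dom_b, rel_b) = small, big
    for image in permutations(dom_b, len(dom_s)):
        f = dict(zip(dom_s, image))
        if all((t in rel_s[name]) == (tuple(f[x] for x in t) in rel_b[name])
               for name, k in arities.items()
               for t in product(dom_s, repeat=k)):
            return True
    return False

def in_age(A, bounds, arities):
    """A finite structure lies in the age of a finitely bounded structure
    with bounds `bounds` iff no bound embeds into it."""
    return not any(embeds(Fb, A, arities) for Fb in bounds)

# Toy example: the generic triangle-free graph is finitely bounded
# with the single bound K3.
arities = {"E": 2}
K3 = ([0, 1, 2], {"E": {(a, b) for a in range(3) for b in range(3) if a != b}})
P3 = ([0, 1, 2], {"E": {(0, 1), (1, 0), (1, 2), (2, 1)}})
assert in_age(P3, [K3], arities)       # the path on 3 vertices is triangle-free
assert not in_age(K3, [K3], arities)   # the triangle itself is forbidden
```

For structures in $\mathcal{K}_{{\Exp}{+}}$, Lemma~\ref{lem:finitely-bounded} below produces such a finite set of bounds.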
For first-order reducts of finitely bounded homogeneous structures there is also a more specific \emph{infinite-domain tractability conjecture}~\cite{BPP-projective-homomorphisms}: assuming that $\mathfrak{A}$ is a model-complete core the conjecture says that $\operatorname{CSP}(\mathfrak{A})$ is in P if and only if $\mathfrak{A}$ has a pseudo-Siggers polymorphism (for a definition of pseudo-Siggers polymorphisms and a proof that the conjecture can be phrased like this, see~\cite{BartoPinskerDichotomy}). Let $\mathfrak{A}$ be a structure from $\mathcal{K}_{{\Exp}{+}}$ with finite relational signature. The next lemma shows that the question whether $\operatorname{CSP}(\mathfrak{A})$ is in P or NP-complete falls into the scope of this conjecture. \begin{lemma}\label{lem:finitely-bounded} Every structure in $\mathcal{K}_{{\Exp}{+}}$ is a first-order reduct of a finitely bounded homogeneous structure. \end{lemma} \begin{proof} Let $\mathfrak{A} \in \mathcal{K}_{{\Exp}{+}}$. By Theorem~\ref{main_kexpp} we have $\mathcal{K}_{{\Exp}{+}}=R(F(\mathcal{U}^*))$, so $\mathfrak{A}$ is a first-order reduct of a structure $\mathfrak{A}' \in F(\mathcal{U}^*)$. By Proposition~\ref{reduct_trivial2}, every finite cover of a structure in $\mathcal{U}^*$ is strongly split, so we can assume that $\mathfrak{A}'$ is a (strongly) trivial covering structure of a structure $\mathfrak{B} \in \mathcal{U}^*$. Let $\mathfrak{C}$ be the structure from the proof of Lemma~\ref{ramsey}, and let $\tau := (\{U_{i,s} \mid i \leq k, s \in F_i\} \cup \{\sim_\pi\})$ be the signature of $\mathfrak{C}$. 
Then it is easy to specify a finite set of forbidden finite $\tau$-structures such that in any finite $\tau$-structure that avoids these structures \begin{itemize} \item the relation $\sim_\pi$ is an equivalence relation, \item the sets denoted by the unary relations $U_{i,s}$ are pairwise disjoint and cover all of $C$, \item for all $i,s$ if $x \sim_\pi y$ and $x,y \in U_{i,s}$ then $x=y$, \item for all $i,s$ the cardinality of $U_{i,s}$ is at most the cardinality of $U_{i,s}$ in $\mathfrak{C}$. \end{itemize} These are precisely the finite structures that embed into $\mathfrak{C}$. \end{proof} Let $\mathfrak{A}$ be a structure from $R(F(\mathcal{U}))$. In this section we discuss the consequences of our results for classifying the computational complexity of $\operatorname{CSP}(\mathfrak{A})$. First, since $R(F(\mathcal{U})) = \mathcal{K}_{{\Exp}{+}}$ is closed under $M$ as discussed above, we can assume that $\mathfrak{A}$ is a model-complete core. The following lemma shows that we can even assume that $\mathfrak{A} \in F(\mathcal{U})$. \begin{lemma} Let $\mathfrak{A} \in R(F(\mathcal{U}))$. Then there exists a model-complete core $\mathfrak{C}$ in $F(\mathcal{U}^*)$ such that \begin{itemize} \item $\operatorname{CSP}(\mathfrak{A})$ and $\operatorname{CSP}(\mathfrak{C})$ are polynomial-time equivalent; \item the $\nabla(\mathfrak{C})$-classes are the orbits of $\operatorname{Aut}(\mathfrak{C})$ and they are primitively positively definable in $\mathfrak{C}$. \end{itemize} \end{lemma} \begin{proof} Let $\mathfrak{C}'$ be the model-complete core of $\mathfrak{A}$. Then $\mathfrak{C}'$ is in $M(R(F(\mathcal{U}))) = \mathcal{K}_{{\Exp}{+}} = F(R(\mathcal{U}))$. So suppose that $\pi \colon \mathfrak{C}' \to \mathfrak{B}'$ is a finite covering for $\mathfrak{B}' \in R(\mathcal{U})$. Add a constant $c$ from each $\nabla(\mathfrak{C}')$-equivalence class to $\mathfrak{C}'$ and let $\mathfrak{C}$ be the resulting structure. 
Then $\mathfrak{C}$ is still $\omega$-categorical and a model-complete core. Moreover, $\operatorname{CSP}(\mathfrak{C})$ and $\operatorname{CSP}(\mathfrak{A})$ are polynomial-time equivalent~\cite{Cores-Journal}. Add a constant $\pi(c)$ to $\mathfrak{B}'$ for each of the new constants $c$ and let $\mathfrak{B}$ be the structure obtained in this way. The proof of Corollary~\ref{add_constants_ru} shows that $\mathfrak{B} \in \mathcal{U}^*$. Then $\pi \colon \mathfrak{C} \to \mathfrak{B}$ is a finite cover. Therefore, $\mathfrak{C} \in F(\mathcal{U}^*)$. Moreover, the $\nabla(\mathfrak{C})$-classes are the orbits of $\operatorname{Aut}(\mathfrak{C})$, and orbits in model-complete cores are primitively positively definable~\cite{Cores-Journal}. \end{proof} It can be shown using the universal-algebraic approach to constraint satisfaction that if $\mathfrak{C} \in F(\mathbb{N})$ and $\Delta(\mathfrak{C})$ is primitively positively definable then $\operatorname{CSP}(\mathfrak{C})$ is either in P or NP-complete. This lies beyond the scope of this article, but will appear elsewhere. \section{Introduction} A \emph{first-order reduct} of a structure $\mathfrak{A}$ is a relational structure with the same domain as $\mathfrak{A}$ whose relations are first-order definable over $\mathfrak{A}$. Simon Thomas conjectured that every homogeneous structure $\mathfrak{A}$ with finite relational signature has only finitely many first-order reducts up to first-order interdefinability~\cite{RandomReducts}. The conjecture has been verified for many famous homogeneous structures $\mathfrak{A}$: e.g.~for the ordered rationals~\cite{Cameron5}, the countably infinite random graph~\cite{RandomReducts}, the homogeneous universal $K_n$-free graphs~\cite{Thomas96}, the expansion of $({\mathbb Q};<)$ by a constant~\cite{JunkerZiegler}, the universal homogeneous partial order~\cite{Poset-Reducts}, the random ordered graph~\cite{42}, and many more~\cite{agarwal,AgarwalKompatscher,BodJonsPham,BBPP18}. 
If we drop the assumption that the signature of the homogeneous structure $\mathfrak{A}$ is relational, then the conjecture of Thomas is false even if we keep the assumption that $\mathfrak{A}$ is $\omega$-categorical: already the countable atomless Boolean algebra has infinitely many first-order reducts~\cite{BodorCameronSzabo}. Thomas' conjecture highlights our limited understanding of the class of homogeneous structures $\mathfrak{A}$ with finite relational signature. One approach to widen our understanding is to study homogeneous structures for some fixed signature; for example, classifications exist for the class of all homogeneous tournaments~\cite{Lachlan}, homogeneous undirected graphs~\cite{Henson}, homogeneous partial orders~\cite{Schmerl}, general homogeneous digraphs~\cite{Cherlin}, homogeneous permutations~\cite{CameronPermutations}, and homogeneous coloured multipartite graphs \cite{MultipartiteJenkinsonTrussSeidel,MultipartiteLockettTruss}. However, already the class of homogeneous 3-uniform hypergraphs appears to be very difficult~\cite{AkhtarLachlan95}. If we impose additional assumptions, e.g., that the age of $\mathfrak{A}$ can be described by finitely many forbidden substructures, we might hope for systematic understanding and effectiveness results for various questions. However, it is not clear how to use this assumption for proving that $\mathfrak{A}$ has finitely many first-order reducts. Another approach to understanding the class of homogeneous structures, followed in this paper, is to start with the most symmetric structures in this class. Symmetry can be measured by the number of orbits $o^i_n(G)$ of the diagonal action of the automorphism group $G = \operatorname{Aut}(\mathfrak{A})$ on tuples from $A^n$ that have pairwise distinct entries. By the theorem of Engeler, Ryll-Nardzewski, and Svenonius, these orbits are in one-to-one correspondence with the model-theoretic types of $n$ pairwise distinct elements in $\mathfrak{A}$. 
Alternatively, we might count the number of orbits $o^s_n(G)$ of the action of $G$ on $n$-element subsets of $A$. The investigation of both of these measures has been pioneered by Cameron; see~\cite{Oligo} for an introduction to the subject. The sequence $o^i_n(G)$ is linked to \emph{labeled} enumeration problems, which are the most intensively studied counting problems in enumerative combinatorics, while $o^s_n(G)$ is linked to \emph{unlabeled} enumeration problems. Many structural results about $G$ are available when we impose restrictions on $o^s_n(G)$; see, e.g.,~\cite{Macpherson-Orbits,Macpherson-GrowthRates,Macpherson-RapidGrowth}. The present article, in contrast, focuses on restricting $o^i_n(G)$. A structure $\mathfrak{A}$ is finite if and only if $o^i_n(\operatorname{Aut}(\mathfrak{A}))$ is eventually 0. It is a by-product of our results that the class $\mathcal{K}_{\Exp}$ of all structures $\mathfrak{A}$ where $o^i_n(\operatorname{Aut}(\mathfrak{A}))$ grows at most exponentially equals the class of first-order reducts of unary structures; by a \emph{unary structure} we mean any at most countable structure with finitely many unary relations. Our main result pushes this further: we study the class $\mathcal{K}_{{\Exp}{+}}$ of structures $\mathfrak{A}$ such that $o^i_n(\operatorname{Aut}(\mathfrak{A}))$ is bounded by $cn^{dn}$ for some constants $c,d$ with $d< 1$. Note that for example the structure $({\mathbb Q};<)$ does not belong to $\mathcal{K}_{{\Exp}{+}}$ because $o^i_n({\mathbb Q};<) = n!$. Also, $\mathcal{K}_{{\Exp}{+}}$ contains no structure $\mathfrak{A}$ with a definable equivalence relation with infinitely many infinite classes because $o^i_n(\mathfrak{A})$ would in this case be at least as large as the $n$-th Bell number, which grows asymptotically faster than $cn^{dn}$ (see Lemma~\ref{counting}). We show that $\mathcal{K}_{{\Exp}{+}}$ contains precisely those structures that are \emph{finite covers} of first-order reducts of unary structures. 
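The growth rates that separate these classes can be made concrete. The following Python sketch (our illustration, not from the paper) computes $o^i_n$ for three examples discussed above: a unary structure with $k$ infinite classes, where an injective $n$-tuple is determined up to automorphisms by the class of each entry, so $o^i_n = k^n$; the order $({\mathbb Q};<)$, where the orbit of an injective $n$-tuple is determined by the ordering of its entries, so $o^i_n = n!$; and a definable equivalence relation with infinitely many infinite classes, where the orbit is determined by the induced partition of the $n$ indices, so $o^i_n$ is the $n$-th Bell number $B_n$.

```python
from math import comb, factorial, lgamma, log

def bell(n):
    """Bell numbers via the recurrence B_{m+1} = sum_i C(m, i) * B_i."""
    B = [1]
    for m in range(n):
        B.append(sum(comb(m, i) * B[i] for i in range(m + 1)))
    return B[n]

k = 3  # number of infinite classes of the unary structure
for n in range(1, 8):
    print(n, k ** n, factorial(n), bell(n))

# k**n is exponential, hence within the bound c * n**(d*n) for d < 1,
# while n! (and a fortiori B_n) eventually exceeds every such bound:
n = 20
assert k ** n < n ** (0.9 * n)
n = 30000  # compare via logarithms: ln(n!) > 0.9 * n * ln(n)
assert lgamma(n + 1) > 0.9 * n * log(n)
```

The crossover happens late (here only around $n \approx 2\cdot 10^4$ for $n!$ with $d = 0.9$), which is why $\mathcal{K}_{{\Exp}{+}}$ is defined by the asymptotic bound $cn^{dn}$ rather than by small values of the sequence.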
Finite covers in model theory and infinite permutation groups have been studied in the context of classifying totally categorical structures~\cite{AhlbrandtZiegler-Cover,HodgesPillay,HrushovskiTotallyCategorical} and, more generally, for studying $\omega$-categorical $\omega$-stable structures~\cite{CherlinHarringtonLachlan,CherlinHrushovski}. Finite covers became an important topic in their own right~\cite{Evans,EvansPastori,Pastori}; we refer to the survey article~\cite{EvansIvanovMacpherson} for an introduction. It follows from our result that the class of finite covers of unary structures equals the class of first-order reducts of finite covers of unary structures. Using the terminology of~\cite{EvansIvanovMacpherson}, we show that all finite covers of unary structures \emph{split}, but not necessarily \emph{strongly}. All structures in $\mathcal{K}_{{\Exp}{+}}$ can be expanded to structures that are homogeneous in a finite relational language, and we show that they all satisfy Thomas' conjecture. The proof uses a result of Macpherson which implies that structures in $\mathcal{K}_{{\Exp}{+}}$ which have a \emph{primitive} automorphism group must be highly transitive~\cite{Macpherson-Orbits}. The class $\mathcal{K}_{{\Exp}{+}}$ can be seen as the \emph{`smallest reasonably robust class that contains all finite structures as well as some infinite ones'} (for formalisations of this statement, see Section~\ref{sect:summary}). So whenever a statement that holds for all finite structures needs to be generalised to a class of \emph{`slightly infinite structures'}, it might be a good idea to try to first prove the statement for $\mathcal{K}_{{\Exp}{+}}$. This is precisely the situation for the constraint satisfaction problem. \subsection{Complexity of constraint satisfaction} Let $\mathfrak{B}$ be a structure with finite relational signature. 
The \emph{constraint satisfaction problem} for $\mathfrak{B}$, denoted by $\operatorname{CSP}(\mathfrak{B})$, is the computational problem of deciding whether a given finite structure $\mathfrak{A}$ with the same signature as $\mathfrak{B}$ has a homomorphism to $\mathfrak{B}$. For finite structures $\mathfrak{B}$, Feder and Vardi~\cite{FederVardi} conjectured that the computational complexity of $\operatorname{CSP}(\mathfrak{B})$ satisfies a \emph{dichotomy}: it is either in P or NP-complete. Using concepts and techniques from universal algebra, Bulatov and Zhuk recently presented independent proofs of this conjecture~\cite{BulatovFVConjecture,ZhukFVConjecture}. The universal-algebraic approach can also be applied when $\mathfrak{B}$ is countably infinite and $\omega$-categorical. In this case, the computational complexity of $\operatorname{CSP}(\mathfrak{B})$ is captured by the \emph{polymorphism clone of $\mathfrak{B}$} (see~\cite{BodirskyNesetrilJLC}), which can be seen as a generalisation of the automorphism group of $\mathfrak{B}$: it consists of all homomorphisms from $\mathfrak{B}^n$ to $\mathfrak{B}$, for $n \in \mathbb{N}$. Moreover, every $\omega$-categorical structure $\mathfrak{B}$ is homomorphically equivalent to an (up to isomorphism unique) structure $\mathfrak{C}$ with the property that the automorphisms of $\mathfrak{C}$ lie dense in the endomorphisms of $\mathfrak{C}$, called the \emph{model-complete core} of $\mathfrak{B}$. The model-complete core $\mathfrak{C}$ of $\mathfrak{B}$ is again $\omega$-categorical and has the same CSP as $\mathfrak{B}$, so we prefer to analyse $\mathfrak{C}$ rather than $\mathfrak{B}$. This simplification of the classification problem is a key step for many results (see, e.g.,~\cite{tcsps-journal,BodPin-Schaefer-both,BartoPinskerDichotomy}), including the finite-domain classification~\cite{BulatovFVConjecture,ZhukFVConjecture}. 
Therefore, if we want to classify the computational complexity of $\operatorname{CSP}(\mathfrak{B})$ for all structures $\mathfrak{B}$ from a class $\mathcal C$, it is important whether the class $\mathcal C$ is closed under the formation of model-complete cores. When $\mathfrak{C}$ is the model-complete core of $\mathfrak{B}$, then it is easy to see that $o^i_n(\mathfrak{C}) \leq o^i_n(\mathfrak{B})$; hence, in particular the classes $\mathcal{K}_{\Exp}$ and $\mathcal{K}_{{\Exp}{+}}$ are closed under taking model-complete cores. This makes these classes attractive goals for extending the mentioned dichotomy result from finite domains. As mentioned before, our results imply that every structure in $\mathcal{K}_{\Exp}$ is a first-order reduct of a unary structure. For those structures, it has already been shown that they are in P or NP-complete~\cite{BodMot-Unary} (using the mentioned dichotomy for finite-domain CSPs). Our main result states that $\mathcal{K}_{{\Exp}{+}}$ is precisely the class of first-order reducts of finite covers of unary structures. For classifying the complexity of the CSP for all structures in this class, our result implies that we can assume without loss of generality that these structures are model-complete cores. We thus see our result as a first step towards classifying the CSP for first-order reducts of finite covers of unary structures. \subsection{Definable sets with atoms} \label{sect:atoms} In theoretical computer science one is interested in finite representations of infinite structures; one approach to this is the framework of \emph{definable sets} and \emph{computation with atoms}~\cite{DBLP:journals/corr/BojanczykKL14,DBLP:conf/lics/BojanczykKLT13}. This leads to new models of computation over infinite structures with interesting links to long-standing open problems in finite model theory, namely the question whether there is a logic for P and computation in choiceless polynomial time~\cite{BojanczykTorunczyk18}. 
If the `atom structure' is $({\mathbb N};=)$ (which is besides $({\mathbb Q};<)$ the most frequently used base structure in this area) then definable sets (in this case also studied under the name \emph{nominal sets}~\cite{GabbayPitts}) correspond precisely to the class ${\mathcal K}_=$ of structures that are first-order interpretable over $({\mathbb N};=)$ in the sense of model theory (for an explicit discussion of the connection, see~\cite{definable-homomorphisms}, Lemma 7 and the remarks thereafter). The class ${\mathcal K}_=$ might appear to be trivial to many model theorists (all structures in it are $\omega$-categorical, $\omega$-stable, and they are first-order reducts of homogeneous finitely bounded structures), but in fact many questions about this class remain open; see Section~\ref{sect:open} for a small sample of open problems. It follows from our results (see Remark~\ref{rem:triv-cover-interpretation}) that $\mathcal{K}_{{\Exp}{+}} \subseteq {\mathcal K}_=$ and we can answer for $\mathcal{K}_{{\Exp}{+}}$ many questions that we cannot answer for the class ${\mathcal K}_=$ in general. So our results can also be seen as a first step towards a better understanding of ${\mathcal K}_=$. \input prelims.tex \input RU.tex \input FU.tex \input FRU.tex \input Kexp.tex \input RU2.tex \input additional.tex \section{Conclusion and Open Problems} \label{sect:open} Our results imply that all structures in $\mathcal{K}_{{\Exp}{+}}$ are $\omega$-stable (Remark~\ref{rem:triv-cover-interpretation}), that they are first-order reducts of finitely bounded homogeneous structures (Lemma~\ref{lem:finitely-bounded}), and that they satisfy Thomas' conjecture (Corollary~\ref{thomas_strong}). Do $\omega$-stable homogeneous structures with finite relational signature in general satisfy Thomas' conjecture, i.e., do they have finitely many reducts up to interdefinability?
Note that if we drop the assumption about having a relational signature, then the answer is no, even if we insist on $\mathfrak{A}$ being still $\omega$-categorical and $\omega$-stable (this follows from the example given in~\cite{BodorCameronSzabo}, which is the expansion of the countably infinite dimensional vector space over the two-element field with one non-zero constant). Answering the question of the previous paragraph might be very ambitious, so we propose to first study a more concrete and fundamental class of structures. Let ${\mathcal K}_=$ be the class of all structures with a first-order interpretation over $({\mathbb N};=)$ (which we have already discussed in Section~\ref{sect:atoms}). Do the structures in ${\mathcal K}_=$ satisfy Thomas' conjecture? Is the model companion of a structure in ${\mathcal K}_=$ also in ${\mathcal K}_=$? We ask the same question for the model-complete cores of structures in ${\mathcal K}_=$. By our results, structures from $\mathcal{K}_{{\Exp}{+}}$ can be represented on a computer as follows. First, every trivial covering $\mathfrak{A}$ of a structure $\mathfrak{B} \in \mathcal{U}$ is interdefinable with a homogeneous structure $\mathfrak{C}$ with finite relational signature (Lemma~\ref{ramsey}) and is finitely bounded (Lemma~\ref{lem:finitely-bounded}). So we can represent $\mathfrak{A}$ up to isomorphism by specifying these bounds. Second, finite-signature first-order reducts of $\mathfrak{C}$ can be represented by listing formulas for the relations of the reduct (we can assume that these formulas are quantifier-free since $\mathfrak{C}$ is homogeneous in a finite relational signature and hence has quantifier elimination), and storing these together with the representation for $\mathfrak{C}$. We now ask which of the following problems are algorithmically decidable: \begin{enumerate} \item given two structures in $(R \circ F)(\mathcal{U})$, decide whether they are isomorphic.
\item given two structures in $(R \circ F)(\mathcal{U})$, decide whether they are interdefinable. \item given two structures in $(R \circ F)(\mathcal{U})$, decide whether they are bi-interpretable. \end{enumerate} Szymon Torunczyk (personal communication) observed that the first of these questions (about deciding isomorphism of two given structures) is, in the larger setting of reducts of finitely bounded homogeneous structures, equivalent to an open problem about the decidability of first-order definability from~\cite{BPT-decidability-of-definability} (the final open problem mentioned there). \bibliographystyle{alpha} \section{Finite Coverings of Reducts of Unary Structures} In this section we show that every structure in $F(R(\mathcal{U}))$ is a quasi-covering reduct (introduced in Definition~\ref{quasi_cover}) of a strongly trivial covering of some structure in $\mathcal{U}^*$ (Proposition~\ref{quasi_trivial}), and that there are only finitely many such reducts for each structure in $R(\mathcal{U})$ (Theorem~\ref{quasi_cover2}). Moreover, we observe that $F(R(\mathcal{U})) \subseteq R(F(\mathcal{U}))$ (Proposition~\ref{fru_rffu}). \subsection{The Ramsey property and canonical functions} Let $\mathfrak{A},\mathfrak{B}$ be structures. A function $f \colon A \to B$ is called \emph{canonical from $\mathfrak{A}$ to $\mathfrak{B}$} if for every $n \in {\mathbb N}$, every $t \in A^n$, and every $\alpha \in \operatorname{Aut}(\mathfrak{A})$ there exists $\beta \in \operatorname{Aut}(\mathfrak{B})$ such that $f(\alpha(t)) = \beta(f(t))$. Hence, a canonical function $f$ induces for every $n$ a function from the orbits of $n$-tuples of $\operatorname{Aut}(\mathfrak{A})$ to the orbits of $n$-tuples of $\operatorname{Aut}(\mathfrak{B})$; these functions will be called the \emph{behaviour} of $f$.
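To illustrate the definition with a standard example (folklore, and not specific to this paper), consider $\mathfrak{A}=\mathfrak{B}=({\mathbb Q};<)$ and the function
\[
f\colon {\mathbb Q}\to{\mathbb Q}, \qquad f(x)=-x.
\]
For every $n$, every $t\in {\mathbb Q}^n$, and every $\alpha\in\operatorname{Aut}({\mathbb Q};<)$, the tuples $f(\alpha(t))$ and $f(t)$ have the same order type, so by the homogeneity of $({\mathbb Q};<)$ there exists $\beta\in\operatorname{Aut}({\mathbb Q};<)$ with $f(\alpha(t))=\beta(f(t))$. Hence $f$ is canonical from $({\mathbb Q};<)$ to $({\mathbb Q};<)$, and its behaviour maps the orbit of an $n$-tuple to the orbit of the order-reversed $n$-tuple.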
Canonical functions as a tool to classify reducts of homogeneous structures with finite relational signature have been introduced in~\cite{BP-reductsRamsey} and used in~\cite{Poset-Reducts,42,agarwal,AgarwalKompatscher,BodJonsPham,BBPP18}. The existence of certain canonical functions in the automorphism group of a structure $\mathfrak{A}$ is typically shown using Ramsey properties of $\mathfrak{A}$. We will not introduce Ramsey structures here; all that is needed is the well-known fact that $({\mathbb Q};<)$ is Ramsey, and the following result from~\cite{BP-reductsRamsey}. A structure is called \emph{ordered} if its signature contains a binary relation symbol that denotes a (total) linear ordering of the domain. \begin{lemma}[see~\cite{BodPin-CanonicalFunctions}]\label{gen_can} Let $\mathfrak{D}$ be an ordered homogeneous Ramsey structure with finite relational signature and let $f \colon D \to D$ be a function. Then there exists a function $$g \in \overline{\{\alpha \circ f \circ \beta \mid \alpha,\beta \in \operatorname{Aut}(\mathfrak{D})\}}$$ which is canonical as a function from $\mathfrak{D}$ to $\mathfrak{D}$. \end{lemma} The following is an easy consequence of the definitions. \begin{lemma}\label{gen_behav} Let $\mathfrak{A}$ be a homogeneous structure with finite relational signature and let $\mathfrak{B}$ be a first-order reduct of $\mathfrak{A}$. If $f$ and $g$ are canonical functions from $\mathfrak{A}$ to $\mathfrak{B}$ with the same behaviour then $\overline{\operatorname{Aut}(\mathfrak{B}) \cup \{f\}} = \overline{\operatorname{Aut}(\mathfrak{B}) \cup \{g\}}$.
\end{lemma} The next lemma follows from the observation that if $\mathfrak{A}$ is homogeneous with a relational signature of maximal arity $k$ then the behaviour of a canonical function $f$ from $\mathfrak{A}$ to $\mathfrak{B}$ is fully determined by the function induced by $f$ on the orbits of $k$-tuples (see~\cite{BPT-decidability-of-definability}, in particular the comments at the end of Section 4.1). \begin{lemma}\label{fin_many_behav} Let $\mathfrak{A}$ be a homogeneous structure with finite relational signature and let $\mathfrak{B}$ be $\omega$-categorical. Then there are finitely many behaviours of canonical functions from $\mathfrak{A}$ to $\mathfrak{B}$. \end{lemma} \blue{We now discuss homogeneous expansions, with finite relational signature, of structures in $F(\mathcal{U}^*)$.} \begin{lemma}\label{ramsey} Let $\mathfrak{B}\in \mathcal{U}^*$ and let $\pi \colon\mathfrak{A}\rightarrow \mathfrak{B}$ be a (strongly) trivial finite cover. Then \begin{enumerate} \item $\mathfrak{A}$ is interdefinable with a homogeneous structure $\mathfrak{C}$ with finite relational signature, and \item $\mathfrak{A}$ is a first-order reduct of an ordered homogeneous Ramsey structure $\mathfrak{D}$ with finite relational signature. \end{enumerate} \end{lemma} \begin{proof} Let $O_1,\dots,O_k$ be the orbits of $\mathfrak{B}$. Following Remark~\ref{triv_cov_unary}, we can assume that $\mathfrak{A}=\bigsqcup_{i=1}^k{(F_i \times O_i)}$ for some finite sets $F_i$, and that $\operatorname{Aut}(\mathfrak{A})$ consists of all permutations which preserve the first coordinate and stabilise the sets $O_i$ in the second coordinate. For each $i \leq k$ and $s\in F_{i}$ we define the unary relation $U_{i,s} :=\{(s,u) \mid u\in O_i\}$. Let $\mathfrak{C}$ be the relational structure with domain $A$ and the relations $U_{i,s}$ and $\sim_{\pi}$. Then $\operatorname{Aut}(\mathfrak{C})=\operatorname{Aut}(\mathfrak{A})$. Hence, $\mathfrak{A}$ and $\mathfrak{C}$ are interdefinable.
It is easy to see that $\mathfrak{C}$ is homogeneous. This proves (1). To prove item (2) we define an ordering $<$ on $A$ as follows. For each infinite orbit $O_i$ let us fix an ordering $<_i$ on $O_i$ which is isomorphic to $(\mathbb{Q};<)$. Let us also fix an ordering $\prec_i$ on $F_{i}$ for all $i$. Then $<$ is defined as follows: \begin{itemize} \item if $\pi(x)\in O_i$, $\pi(y)\in O_j$, and $i<j$, then $x<y$; \item if $\pi(x),\pi(y)\in O_i$ and $\pi(x)<_i\pi(y)$, then $x<y$; \item if $\pi(x)=\pi(y) \in O_i$ \blue{and $x'$ and $y'$ are the projections of $x$ and $y$ to the first component, then $x<y$ if and only if $x' \prec_i y'$}. \end{itemize} To show that the expansion $\mathfrak{D}$ of $\mathfrak{C}$ by the ordering $<$ has the Ramsey property, we use the fact that if a structure is the disjoint union of substructures induced by definable subsets, and the substructures are Ramsey, then the structure itself is Ramsey (see~\cite{BodirskyRamsey}). For each $i \leq k$, let $\mathfrak{D}_i$ be the substructure of $\mathfrak{D}$ induced by $\pi^{-1}(O_i)$. Note that $\pi^{-1}(O_i) = \bigcup_{s \in F_i} U_{i,s}$ and hence is definable in $\mathfrak{D}$. If $O_i$ is finite then $\mathfrak{D}_i$ is finite and linearly ordered, hence rigid, and therefore Ramsey. If $O_i$ is infinite then $\operatorname{Aut}(\mathfrak{D}_i)$ is topologically isomorphic to $\operatorname{Aut}({\mathbb Q};<)$. The property of a structure of being Ramsey is a property of the automorphism group of the structure, viewed as a topological group (again, see~\cite{BodirskyRamsey}); since $({\mathbb Q};<)$ is Ramsey, so is $\mathfrak{D}_i$. It follows that $\mathfrak{D}$ is Ramsey. \end{proof} \subsection{Reducts of Finite Covers of Reducts of Unary Structures} Let $\mathfrak{B} \in R(\mathcal{U})$ and let $\pi \colon \mathfrak{A} \to \mathfrak{B}$ be a finite covering map. In this subsection we study the closed supergroups $G$ of $\operatorname{Aut}(\mathfrak{A})$ that preserve $\sim_\pi$ and for which $\mu_\pi(G)$ also preserves the congruence $\nabla(\mathfrak{B})$.
\begin{lemma}\label{minimal_fiber_pres} Let $\mathfrak{B}\in \mathcal{U}^*$ and let $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ be a (strongly) trivial finite covering map. Let $O_1,\dots,O_k$ be the orbits of $\mathfrak{B}$ and let $\mathfrak{D}$ be the ordered homogeneous finite-signature Ramsey expansion of $\mathfrak{A}$ from Lemma~\ref{ramsey}. Suppose that $f \in \operatorname{Sym}(A)$ preserves $\sim_{\pi}$ and that $\mu_\pi(f)$ preserves the partition $P := \{O_1,\dots,O_k\}$. Then the monoid $M:=\overline{\langle \operatorname{Aut}(\mathfrak{A}),f\rangle}$ contains a \emph{surjective} map $h$ which is canonical from $\mathfrak{D}$ to $\mathfrak{A}$ and such that $\mu_\pi(h)$ has the same action on $P$ \blue{as} $\mu_\pi(f)$. \end{lemma} \begin{proof} Let $\mathfrak{C}$ be as in the proof of Lemma \ref{ramsey}. By Lemma \ref{gen_can} we obtain that there exists a function $$g\in \overline{\{\alpha f \beta \mid \alpha,\beta \in \operatorname{Aut}(\mathfrak{D})\}} \subseteq M$$ which is canonical from $\mathfrak{D}$ to $\mathfrak{D}$. \blue{Since $g\in M$ it follows that the map $g$ preserves the congruence $\sim_{\pi}$. But note that the map $g$ is not necessarily surjective. For $m \in M$ define $\mu_\pi(m)$ by $x \mapsto \pi(m(\pi^{-1}(x)))$ as in the case of bijective functions.} Then $\mu_\pi(g)$ preserves the partition $\{O_1,\dots,O_k\}$. Since every automorphism of $\mathfrak{D}$ preserves the orbits $O_1,\dots,O_k$, it follows that $\mu_\pi(g)$ and $\mu_\pi(f)$ have the same action on the set $\{O_1,\dots,O_k\}$. If $\mu_\pi(f)(O_i)=O_j$, then $|F_{O_i}|=|F_{O_j}|$ since $f$ preserves $\sim_{\pi}$ and is surjective. Therefore, for every $g' \in M$ and $u \in B$ the restriction of $g'$ to $U := \pi^{-1}(u)$ is a bijection between $U$ and $\pi^{-1}(\mu_\pi(g')(u))$. In particular, this holds for $g \in M$. If $O_i=\{u_i\}$ then $g$ defines a bijection between $\pi^{-1}(u_i)$ and $\pi^{-1}(\mu_\pi(f)(u_i))$.
If $O_i$ is infinite then \blue{$\mu_\pi(g)(O_i)$} is a union of infinitely many classes of $\sim_{\pi}$. Moreover, $O_i$ is infinite if and only if \blue{$\mu_\pi(g)(O_i)$} is infinite. \blue{Let $e \colon A \to A$ be defined as $(s,u) \mapsto (s,\mu_{\pi}(g)(u))$. Let $\mathfrak{C}$ be the homogeneous structure from Lemma~\ref{ramsey} which has the property that $\operatorname{Aut}(\mathfrak{C}) = \operatorname{Aut}(\mathfrak{A})$. Then $e$ is an isomorphism between $\mathfrak{C}$ and the substructure of $\mathfrak{C}$ induced by $g(A)$.} Since $\mathfrak{C}$ is homogeneous it follows that $e\in \overline{\operatorname{Aut}(\mathfrak{C})} = \overline{\operatorname{Aut}(\mathfrak{A})}$ and so there is a sequence $e_1,e_2,\ldots\in \operatorname{Aut}(\mathfrak{A})$ which converges to $e$. Then $h_i:= e_i^{-1} g\in M$ converges to $h:=e^{-1}\circ g$ and thus $h \in M$. We claim that the mapping $h$ satisfies the conditions of the lemma. By definition $h(A)=e^{-1}(g(A))=A$, that is, $h$ is surjective. Since $e^{-1}$ preserves the relations of $\mathfrak{C}$ it follows that the mapping $h$ is canonical from $\mathfrak{D}$ to $\mathfrak{C}$ (and therefore also from $\mathfrak{D}$ to $\mathfrak{A}$). \blue{For the same reason $\mu_\pi(e^{-1})$ preserves all orbits $O_i$.} This implies that $\mu_\pi(h)$ and $\mu_\pi(g)$, and therefore also $\mu_\pi(f)$, have the same action on $\{O_1,\dots,O_k\}$. \end{proof} \begin{lemma}\label{minimal_fiber_pres2} Let $\mathfrak{B}\in \mathcal{U}^*$ and let $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ be a finite covering map. Let $O_1,\dots,O_k$ be the orbits of $\mathfrak{B}$. Then $\operatorname{Sym}(A)$ has finitely many closed subgroups $G$ such that \begin{itemize} \item $\operatorname{Aut}(\mathfrak{A}) \subseteq G$, \item $G$ preserves $\sim_{\pi}$, and \item $\mu_{\pi}(G)$ preserves the partition $\{O_1,\dots,O_k\}$ of $B$.
\end{itemize} \end{lemma} \begin{proof} By Proposition \ref{reduct_trivial2} we know that $\mathfrak{A}$ is a covering reduct of some (strongly) trivial covering $\mathfrak{C}$ of $\mathfrak{B}$ (with respect to $\pi$). Then let $\mathfrak{D}$ be the ordered homogeneous finite-signature Ramsey expansion of $\mathfrak{C}$ from Lemma~\ref{ramsey}. Let $G$ be a closed subgroup of $\operatorname{Sym}(A)$ as in the formulation of the lemma. Then $G$ acts on the set $\{O_1,\dots,O_k\}$. Let $K$ be the kernel of this action. Then $K$ is closed and the index of $K$ in $G$ is finite. Also, $K\subseteq S$ where $S$ is the group as in Definition~\ref{group_ns}. Therefore, $K$ is the automorphism group of a covering reduct of $\mathfrak{A}$ (Proposition~\ref{semidir}). Then by Theorem~\ref{reduct_trivial_finite1} there are finitely many possible choices for the group $K$. By Lemma \ref{minimal_fiber_pres}, for each $f \in G$ there exists a surjective map $h \in \overline{\langle K, f\rangle}$ which is canonical from $\mathfrak{D}$ to $\mathfrak{A}$ such that $f$ and $h$ induce the same permutation $\sigma$ of $\{O_1,\dots,O_k\}$. We claim that $K \cup \{f\}$ and $K \cup \{h\}$ generate the same group. The image of the action of $\langle K\cup \{f\}\rangle$ and of $\langle K \cup \{h\}\rangle$ on $\{O_1,\dots,O_k\}$ is $\langle \sigma \rangle$, and the kernel of these actions is again $K$. Therefore, \begin{align} [\langle K\cup \{f\}\rangle:K]=[\langle K\cup \{h\}\rangle:K]=l \label{eq:index} \end{align} where $l$ is the order of the permutation $\sigma$. In particular, the groups $\langle K\cup \{f\}\rangle$ and $\langle K\cup \{h\}\rangle$ are closed. Hence, $h\in \langle K\cup \{f\}\rangle$ and thus $\langle K\cup \{h\}\rangle\leq \langle K\cup \{f\}\rangle$. Then by using Equality~$(\ref{eq:index})$ again it follows that $\langle K\cup \{f\}\rangle=\langle K\cup \{h\}\rangle$. Since $[G:K]$ is finite each group $G$ is generated by finitely many (at most $k!$) elements over $K$.
By the previous paragraph we can assume that each of these generators is canonical from $\mathfrak{D}$ to $\mathfrak{A}$. There are finitely many possible behaviours of canonical functions from $\mathfrak{D}$ to $\mathfrak{A}$ (Lemma \ref{fin_many_behav}). If two functions have the same behaviour they generate the same group over $\operatorname{Aut}(\mathfrak{A})$ (Lemma \ref{gen_behav}). This implies that there are finitely many choices for the group $G$. \end{proof} \begin{theorem}\label{fru} Let $\mathfrak{B}\in R(\mathcal{U})$ and let $\pi \colon \mathfrak{A}\rightarrow \mathfrak{B}$ be a finite covering map. Then $\operatorname{Aut}(\mathfrak{A})$ has finitely many closed supergroups $G$ such that $G$ preserves $\sim_{\pi}$ and $\mu_\pi(G)$ preserves $\nabla(\mathfrak{B})$. \end{theorem} \begin{proof} Let $O_1,\dots,O_k$ be the classes of $\nabla(\mathfrak{B})$. Then by Lemma \ref{unary_reduct} it follows that $\prod_{i=1}^k{\operatorname{Sym}(O_i)} \subseteq \operatorname{Aut}(\mathfrak{B})$. Let $\mathfrak{B}'$ be a structure with $\operatorname{Aut}(\mathfrak{B}')=\prod_{i=1}^k{\operatorname{Sym}(O_i)}$. Then $\mathfrak{B}'\in \mathcal{U}^*$ by Lemma~\ref{sing_or_inf} and $\nabla(\mathfrak{B})=\nabla(\mathfrak{B}')$. The group $\operatorname{Aut}(\mathfrak{A})$ acts naturally on the set $\{O_1,\dots,O_k\}$. Let $K$ be the kernel of this action, and let $\mathfrak{A}'$ be a structure such that $\operatorname{Aut}(\mathfrak{A}')=K$. The action of $\operatorname{Aut}(\mathfrak{A}')$ on $B$ (via $\mu_\pi$) equals $\operatorname{Aut}(\mathfrak{B}')$. Therefore, $\pi \colon \mathfrak{A}'\rightarrow \mathfrak{B}'$ is a finite cover. Then the statement of the theorem follows from Lemma \ref{minimal_fiber_pres2} and from the fact that the orbits of $\mathfrak{B}'$ are exactly the classes of the congruence $\nabla(\mathfrak{B})$.
\end{proof} \begin{proposition}\label{fru_rffu} $F(R(\mathcal{U}))\subseteq R^{<\infty}(F(\mathcal{U}^*))$ \end{proposition} \begin{proof} Following the notation of the proof of Theorem~\ref{fru} we have that $[\operatorname{Aut}(\mathfrak{A}):\operatorname{Aut}(\mathfrak{A}')]=[\operatorname{Aut}(\mathfrak{A}):K]$ is finite since $K$ is defined as the kernel of the action of $\operatorname{Aut}(\mathfrak{A})$ on the set $\{O_1,\dots,O_k\}$. As we saw in the proof of Theorem~\ref{fru} we have $\mathfrak{A}'\in F(\mathcal{U}^*)$. Hence, $\mathfrak{A}\in R^{<\infty}(F(\mathcal{U}^*))$. \end{proof} Later we will see (in Theorem~\ref{main_kexpp}) that in fact $F(R(\mathcal{U}))=R^{<\infty}(F(\mathcal{U}^*))$. The following definition of \emph{quasi-covering reducts} is needed for a model-theoretic reformulation of Theorem~\ref{fru}, which is given in Theorem~\ref{quasi_cover2} below. \begin{definition}\label{quasi_cover} Let $\mathfrak{B}$ be $\omega$-categorical and let $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ be a finite cover. A first-order reduct $\mathfrak{C}$ of $\mathfrak{A}$ is called a \emph{quasi-covering reduct of $\mathfrak{A}$ with respect to $\pi$} if $\operatorname{Aut}(\mathfrak{C})$ preserves $\sim_{\pi}$ and $\mu_\pi(\operatorname{Aut}(\mathfrak{C})) \subseteq \operatorname{Sym}(B)$ preserves $\nabla(\mathfrak{B})$. \end{definition} \begin{theorem}\label{quasi_cover2} Let $\mathfrak{B}\in R(\mathcal{U})$ and let $\pi \colon \mathfrak{A}\rightarrow \mathfrak{B}$ be a finite cover. Then $\mathfrak{A}$ has finitely many quasi-covering reducts with respect to $\pi$. \end{theorem} \begin{proof} The statement follows immediately from Theorem \ref{fru} and Definition \ref{quasi_cover}. \end{proof} \begin{proposition}\label{quasi_trivial} Let $\mathfrak{B}\in R(\mathcal{U})$ and let $\pi \colon \mathfrak{A}\rightarrow \mathfrak{B}$ be a finite cover.
Then $\mathfrak{A}$ is a quasi-covering reduct of a (strongly) trivial covering of some structure in $\mathcal{U}^*$. \end{proposition} \begin{proof} Let us define the structures $\mathfrak{A}'$ and $\mathfrak{B}'$ as in the proof of Theorem \ref{fru}. Then $\mathfrak{A}$ is a quasi-covering reduct of $\mathfrak{A}'$ with respect to the covering map $\pi$. The map $\pi \colon \mathfrak{A}' \rightarrow \mathfrak{B}'$ is a finite covering, and $\mathfrak{B}'\in \mathcal{U}^*$. By Proposition \ref{reduct_trivial2}, $\mathfrak{A}'$ is a covering reduct of some strongly trivial covering of $\mathfrak{B}'$. Therefore, $\mathfrak{A}$ is a quasi-covering reduct of a strongly trivial covering of $\mathfrak{B}'\in \mathcal{U}^*$. \end{proof} \section{Finite Coverings of Unary Structures} In this section we classify the finite coverings of unary structures. First we make the following observation. \begin{lemma}\label{fu_fustar} $F(\mathcal{U})=F(\mathcal{U}^*)$. \end{lemma} \begin{proof} The containment ``$\supseteq$'' is trivial. In order to show the other direction it is enough to show that $\mathcal{U}\subseteq F(\mathcal{U}^*)$ since $F\circ F=F$. So let $\mathfrak{A} \in \mathcal{U}$ and let $F$ be the union of its finite orbits. Then $F$ is finite, and we may assume that $F\neq\emptyset$, since otherwise $\mathfrak{A}\in \mathcal{U}^*$ and there is nothing to show. Let us consider the unary structure $\mathfrak{B}$ whose domain is $B:=(A\setminus F)\cup \{x\}$ for some $x \notin A$, and whose relations are the infinite orbits of $\mathfrak{A}$ and $\{x\}$. Then $\mathfrak{B}\in \mathcal{U}^*$. Let $\pi \colon \mathfrak{A}\rightarrow \mathfrak{B}$ be defined as $\pi(y)=x$ if $y\in F$, and $\pi(y)=y$ otherwise. Then it is easy to see that $\pi$ is a finite covering map, and hence $\mathfrak{A}\in F(\mathcal{U}^*)$. \end{proof} The following theorem summarises the results from Section~\ref{sect:split} and Section~\ref{sect:cov-reducts-triv-covers}.
\begin{theorem}\label{reduct_trivial_finite1} Let $\mathfrak{B}\in \mathcal{U}^*$ and let $\pi \colon\mathfrak{A}\rightarrow \mathfrak{B}$ be a finite covering map. Then $\mathfrak{A}$ has finitely many covering reducts with respect to $\pi$. \end{theorem} \begin{proof} Proposition~\ref{reduct_trivial2} shows that $\pi$ is strongly split. The statement then follows from Corollary~\ref{reduct_trivial_finite}. \end{proof} \subsection{Finite covers of unary structures split} \label{sect:split} The following series of lemmas is needed to show that every finite covering map of a structure $\mathfrak{B} \in \mathcal{U}^*$ is strongly split (Proposition~\ref{reduct_trivial2}). Throughout this subsection, let $\mathfrak{B}\in \mathcal{U}^*$ and let $\pi \colon\mathfrak{A}\rightarrow \mathfrak{B}$ be a finite covering map. \begin{remark}\label{weak_strong} Observe that $\mathfrak{B}$ satisfies the condition of Lemma \ref{no_finite_index}, that is, $\operatorname{Aut}(\mathfrak{B})_x$ has no proper subgroup of finite index for any $x\in B$. By Lemma \ref{no_finite_index} this implies that every trivial finite cover of $\mathfrak{B}$ is strongly trivial, and hence every split cover of $\mathfrak{B}$ is strongly split. \end{remark} \begin{remark} When $\mathfrak{B}$ is taken from $\mathcal{U}$ instead of $\mathcal{U}^*$, then there are split covers of $\mathfrak{B}$ that are not strongly split, as illustrated by Example~\ref{expl:twisted} if $|S|=|T|=1$. \end{remark} \begin{lemma}\label{generating_cycle} Let $F$ be a finite subset of an infinite orbit $O$ of $\mathfrak{B}$. If $|F|$ is large enough then there exists an automorphism $\alpha$ of $\mathfrak{A}$ such that \begin{enumerate} \item $\alpha(x)=x$ for all $x \in E := A \setminus \pi^{-1}(F)$, \item $\mu_\pi(\alpha)|_F$ is nontrivial. \end{enumerate} \end{lemma} \begin{proof} Let $k$ be the maximum of the sizes of the fibers of $\pi$ and let $p>k$ be a prime number.
We claim that if $|F|\geq p$ then there is an automorphism $\alpha$ of $\mathfrak{A}$ satisfying Conditions (1) and (2). Let $u_1,\dots,u_p\in F$ be distinct elements. Then the $p$-cycle $(u_1u_2\dots u_p)$ is contained in $\operatorname{Aut}(\mathfrak{B})$ by Lemma \ref{unary}. By the definition of finite covering maps there exists $\beta\in \operatorname{Aut}(\mathfrak{A})$ such that $\mu_\pi(\beta)=(u_1\dots u_p)$. Now let $\alpha := \beta^{k!}\in \operatorname{Aut}(\mathfrak{A})$. Then $\mu_\pi(\alpha)|_F=(u_1\dots u_p)^{k!}$ is again a $p$-cycle, since $p>k$ implies $p\nmid k!$, and hence nontrivial. On the other hand, if $u\in B \setminus F$ and $U := \pi^{-1}(u)$ then $\beta(U)=U$, and $\alpha|_U=\beta^{k!}|_U=\operatorname{id}_U$ since $|U|\leq k$. This means that $\alpha|_E=\operatorname{id}_E$. Therefore, $\alpha\in \operatorname{Aut}(\mathfrak{A})$ satisfies Conditions (1) and (2), which proves the lemma. \end{proof} Recall that for any finite set $F$ of cardinality at least 5, the alternating group $\operatorname{Alt}(F)$ is the only non-trivial proper normal subgroup of $\operatorname{Sym}(F)$ (see e.g.~Chapter 8.1 in~\cite{DixonMortimer}). \begin{lemma}\label{generating_trans} Let $F$ be a finite subset of an infinite orbit $O$ of $\mathfrak{B}$. If $|F|$ is large enough then for any pairwise distinct $u_1,u_2,u_3,u_4 \in F$ there exists an automorphism $\alpha$ of $\mathfrak{A}$ such that \begin{enumerate} \item $\alpha(x)=x$ for all $x\in E := A \setminus \pi^{-1}(F)$, \item $\mu_\pi(\alpha)|_F=(u_1u_2)(u_3u_4)$. \end{enumerate} \end{lemma} \begin{proof} Let $K:=\{\mu_\pi(\gamma)|_F \in \operatorname{Sym}(F) \mid \gamma\in \operatorname{Aut}(\mathfrak{A})_E\}$. We claim that $K$ is a normal subgroup of $\operatorname{Sym}(F)$. It is clear that $K$ is a subgroup of $\operatorname{Sym}(F)$. Let $\alpha\in K$ and $\beta\in \operatorname{Sym}(F)$. We need to show that $\beta\alpha\beta^{-1}\in K$.
By the definition of $K$ there exists $\gamma\in \operatorname{Aut}(\mathfrak{A})_E$ so that $\mu_\pi(\gamma)|_F=\alpha$. By Lemma~\ref{unary} there exists $\delta \in \operatorname{Aut}(\mathfrak{B})$ so that $\delta|_F = \beta$. By the definition of finite covers, there exists $\eta \in \operatorname{Aut}(\mathfrak{A})$ such that $\mu_\pi(\eta)=\delta$. Let $\gamma'=\eta \gamma\eta ^{-1} \in \operatorname{Aut}(\mathfrak{A})$. Then one can check that $\gamma'(x)=x$ for all $x\in E$ and \begin{align*} \mu_\pi(\gamma')|_F=(\mu_\pi(\eta)\mu_\pi(\gamma)\mu_\pi(\eta)^{-1})|_F=\delta |_F\mu_\pi(\gamma)|_F \delta^{-1}|_F=\beta\alpha\beta^{-1}. \end{align*} We have shown that $K\triangleleft \operatorname{Sym}(F)$. By Lemma \ref{generating_cycle} we know that if $F$ is large enough, then $K$ is nontrivial. Therefore, if $F$ is large enough (in particular $|F|\geq 5$), then $K\geq \operatorname{Alt}(F)$, and the statement of the lemma follows since $(u_1u_2)(u_3u_4)\in \operatorname{Alt}(F)$. \end{proof} \begin{lemma}\label{generating_trans2} Let $O$ be an infinite orbit of $\mathfrak{B}$. Then for all distinct $v_1,v_2 \in O$ there exists an $\alpha \in \operatorname{Aut}(\mathfrak{A})$ such that \begin{enumerate} \item $\alpha(x)=x$ for all $x\in A \setminus \pi^{-1}(\{v_1,v_2\})$, \item $\mu_\pi(\alpha)=(v_1v_2)$. \end{enumerate} \end{lemma} \begin{proof} Let $v_1,v_2 \in O$ be distinct, and let $F$ be a finite subset of $O$ which contains $v_1$ and $v_2$ and which is large enough so that we can apply Lemma \ref{generating_trans}. Choose $u_3,u_4 \in F$ such that $u_1 := v_1,u_2 := v_2,u_3,u_4$ are pairwise distinct. Let $\alpha \in \operatorname{Aut}(\mathfrak{A})$ be as in Lemma \ref{generating_trans}. For each $i \in {\mathbb N}$, choose $\gamma_i\in \operatorname{Aut}(\mathfrak{A})$ so that \begin{align*} \mu_\pi(\gamma_i)(v_1) & =v_2, \\ \mu_\pi(\gamma_i)(v_2) & =v_1, \text{ and } \\ \mu_\pi(\gamma_i)(F)\cap \mu_\pi(\gamma_j)(F) & =\{v_1,v_2\} \text{ for all } i\neq j.
\end{align*} By Lemma \ref{unary} it follows that such $\gamma_i$'s exist. Let $\beta_i:=\gamma_i\alpha\gamma_i^{-1}\in \operatorname{Aut}(\mathfrak{A})$. Then $\mu_\pi(\beta_i)=(v_1v_2)\bigl(\mu_\pi(\gamma_i)(u_3)\;\mu_\pi(\gamma_i)(u_4)\bigr)$, and for every $x\in A$ with $\pi(x)\notin \{v_1,v_2\}$ we have $\beta_i(x)=x$ for all but at most one $i$. Since there are finitely many possible actions of $\beta_i$ on the finite set $S := \pi^{-1}(\{v_1,v_2\})$ there is a subsequence $(\beta_{l(i)})_i$ of $(\beta_{i})_i$ so that $\beta_{l(i)}|_S=\beta_{l(j)}|_S$ for all $i,j\in \mathbb{N}$. Then the sequence $\beta_{l(i)}$ converges to a permutation $\beta$ for which $\beta(x)=x$ for all $x\in A \setminus S$ and $\mu_\pi(\beta)=(v_1v_2)$. Since $\operatorname{Aut}(\mathfrak{A})$ is closed it follows that $\beta\in \operatorname{Aut}(\mathfrak{A})$, which finishes the proof of the lemma. \end{proof} \begin{lemma}\label{reduct_trivial11} Let $\mathcal{O}$ be the set of orbits of $\mathfrak{B}$. Then for each $O\in \mathcal{O}$ there exist a finite set $F_O$ and a mapping $\psi_O \colon \pi^{-1}(O) \rightarrow F_O$ such that $\psi_O|_{\pi^{-1}(y)}$ is bijective for every $y\in O$, and $\operatorname{Aut}(\mathfrak{A})$ contains every $\alpha \in \operatorname{Sym}(A)$ such that \begin{enumerate} \item $\alpha$ preserves $\sim_{\pi}$, \item $\mu_\pi(\alpha)\in \operatorname{Aut}(\mathfrak{B})$, \item $\psi_O(x)=\psi_O(\alpha(x))$ for every $O\in \mathcal{O}$ and $x\in \pi^{-1}(O)$. \end{enumerate} \end{lemma} \begin{proof} If $O$ is finite, then $O=\{u\}$ for some $u\in B$ since $\mathfrak{B}\in \mathcal{U}^*$. In this case let $\psi_O=\operatorname{id}_{\pi^{-1}(u)}$. If $O$ is infinite, then we define $\psi_O$ as follows. Let $u\in O$ be arbitrary. \begin{itemize} \item If $x\in \pi^{-1}(u)$ then set $\psi_O(x):=x$.
\item If $x\in \pi^{-1}(O) \setminus \pi^{-1}(u)$ then by Lemma \ref{generating_trans2} there exists $\alpha\in \operatorname{Aut}(\mathfrak{A})$ which fixes $A \setminus \pi^{-1}(\{\pi(x),u\})$ pointwise and satisfies $\mu_\pi(\alpha)=(\pi(x)\;u)$. In particular, $\alpha$ defines a bijection between $\pi^{-1}(\pi(x))$ and $\pi^{-1}(u)$. Set $\psi_O(x):=\alpha(x)$, where we use the same $\alpha$ for all elements of the fiber $\pi^{-1}(\pi(x))$. \end{itemize} We claim that these mappings satisfy the conditions of the lemma. Let $G$ be the group of those $\gamma \in \operatorname{Sym}(B)$ for which there exists an automorphism $\alpha$ of $\mathfrak{A}$ with $\mu_\pi(\alpha)=\gamma$ satisfying Conditions (1)--(3) of the lemma. Since any two permutations of $A$ which satisfy Conditions (1)--(3) and induce the same permutation of $B$ are equal ($\psi_O$ is injective on every fiber), it is enough to show that $G=\operatorname{Aut}(\mathfrak{B})$. Then since $\operatorname{Aut}(\mathfrak{B})=\prod_{O\in \mathcal{O}}{\operatorname{Sym}(O)}$ it is enough to show that $(G_{B\setminus O})|_O=\operatorname{Sym}(O)$ for all $O\in \mathcal{O}$. If $O$ is a singleton, then the claim is trivial, so we can assume that $O$ is infinite. It is easy to see that $G$ is closed. Thus, $(G_{B\setminus O})|_O$ is also closed. Hence, by Lemma \ref{unary} it is enough to show that $G$ contains the transposition $(u_1u_2)$ for all $u_1,u_2\in O$. For this it is enough to show that $(uv) \in G$ for all $v\in O\setminus \{u\}$, where $u$ is the element of $O$ which is used in the definition of $\psi_O$. But this follows directly from the definition of the mapping $\psi_O$. \end{proof} \begin{proposition}\label{reduct_trivial2} Any finite covering map $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ for $\mathfrak{B} \in \mathcal{U}^*$ is strongly split. \end{proposition} \begin{proof} Let $F_O$ and $\psi_O$ be defined as in Lemma \ref{reduct_trivial11} for each orbit $O$ of $\mathfrak{B}$. Let $F:=\bigsqcup_O{F_O}$ and $\psi:=\bigcup_O{\psi_O}$. Let $\mathfrak{A}'$ be the expansion of $\mathfrak{A}$ obtained by adding to $\mathfrak{A}$ for each $x\in F$ the unary relation $\psi^{-1}(x)$.
Then by Lemma \ref{reduct_trivial11} it follows that $\mu_\pi(\operatorname{Aut}(\mathfrak{A}'))=\operatorname{Aut}(\mathfrak{B})=\mu_\pi(\operatorname{Aut}(\mathfrak{A}))$. Thus $\mathfrak{A}$ is a covering reduct of $\mathfrak{A}'$. We claim that $\pi \colon \mathfrak{A}'\rightarrow \mathfrak{B}$ is a strongly trivial cover. This implies the statement of the proposition. By Remark \ref{weak_strong} it is enough to show that the finite cover $\pi \colon \mathfrak{A}'\rightarrow \mathfrak{B}$ is trivial, i.e., that the kernel of the map $\mu_\pi$ is trivial. Let $\alpha\in \operatorname{Aut}(\mathfrak{A}')$ be so that $\mu_\pi(\alpha)=\operatorname{id}_B$ and let $x\in A$. Then $x\sim_{\pi} \alpha(x)$ and $\psi(x)=\psi(\alpha(x))$. It follows from the definition that $\psi$ is injective on $[x]_\pi$. This implies that $x=\alpha(x)$ and hence that $\alpha=\operatorname{id}_A$. Therefore, the kernel of $\mu_\pi$ is trivial. \end{proof} \begin{remark} Proposition~\ref{reduct_trivial2} generalises Theorem 2.4 in~\cite{FiniteCovers}, which states that every finite covering of $\mathbb{N}$ (strongly) splits. \end{remark} \subsection{Covering reducts of trivial coverings} \label{sect:cov-reducts-triv-covers} In this subsection we describe the automorphism groups of covering reducts of a trivial finite covering of a structure in $\mathcal{U}^*$. In particular, we show that there are always finitely many of them. Throughout this subsection let us fix a structure $\mathfrak{B}\in \mathcal{U}^*$ and a trivial finite covering map $\pi \colon\mathfrak{A}\rightarrow \mathfrak{B}$. Let $O_1,\dots,O_k$ be the orbits of $\mathfrak{B}$. \begin{remark}\label{triv_cov_unary} Let $\mathfrak{A}$ be a strongly trivial covering of a unary structure $\mathfrak{B}$ with orbits $O_1,\dots,O_k$. 
Then as in Remark \ref{rem:triv-covers} the elements of the structure $\mathfrak{A}$ can be identified with the elements of $\bigsqcup_{i=1}^k{(F_i\times O_i)}$ for some finite sets $F_i$ so that $\operatorname{Aut}(\mathfrak{A})$ contains exactly those permutations which preserve the first coordinate and stabilise the sets $O_i$ in the second coordinate. In this case $\operatorname{Aut}(\mathfrak{A})$ can be written as $\prod_{i \in \{1,\dots,k\}} \{\operatorname{id}_{F_i}\} \wr \operatorname{Sym}(O_i)$. \end{remark} \begin{remark}\label{rem:triv-cover-interpret} It follows from the description of strongly trivial coverings of unary structures in Remark~\ref{triv_cov_unary} that every (reduct of a) trivial covering structure of a structure from $\mathcal{U}$ has a first-order interpretation over $({\mathbb N};=)$. \end{remark} We identify the elements of the structure $\mathfrak{A}$ with the elements of $\bigsqcup_{i=1}^k{(F_i\times O_i)}$ for some finite sets $F_i$ for $i \in \{1,\dots,k\}$ as explained in Remark \ref{triv_cov_unary}. \begin{definition}\label{group_ns} Let $\pi \colon\mathfrak{A}\rightarrow \mathfrak{B}$ be a trivial finite covering for $\mathfrak{B} \in \mathcal{U}^*$. \begin{itemize} \item Let $N$ be the group of all permutations of $A$ which fix all fibers setwise (i.e., $N$ is the kernel of $\mu_\pi$). \item Let $S$ be the group of all permutations of $A$ which fix the sets $F_i\times O_i$ for $i \in \{1,\dots,k\}$ and which preserve the congruence $\sim_{\pi}$. \end{itemize} \end{definition} The following statements are direct consequences of the definitions above. \begin{proposition}\label{semidir} Let $\pi \colon\mathfrak{A}\rightarrow \mathfrak{B}$ be a trivial finite covering of $\mathfrak{B} \in \mathcal{U}^*$. \begin{enumerate} \item A first-order reduct $\mathfrak{C}$ of $\mathfrak{A}$ is a covering reduct of $\mathfrak{B}$ with respect to the covering $\pi$ if and only if $\operatorname{Aut}(\mathfrak{C}) \subseteq S$. 
\item The group $S$ can be written as a semidirect product $N \rtimes \operatorname{Aut}(\mathfrak{A})$. \end{enumerate} \end{proposition} \begin{proof} (1) follows easily from the definition using that $\operatorname{Aut}(\mathfrak{B})=\prod_{i=1}^k{\operatorname{Sym}(O_i)}$. Since $N$ is the kernel of the homomorphism $\mu_{\pi} \colon S\rightarrow \operatorname{Aut}(\mathfrak{B})$ we have $N\triangleleft S$. It is obvious that $\operatorname{Aut}(\mathfrak{A})\leq S$. Since $\pi$ is a trivial covering map it follows that the kernel of $\mu_{\pi}|_{\operatorname{Aut}(\mathfrak{A})}$ is trivial, that is, $N\cap \operatorname{Aut}(\mathfrak{A})=\{\operatorname{id}_A\}$. Finally, for every $s\in S$ we have $\mu_{\pi}(s)\in \operatorname{Aut}(\mathfrak{B})$, and since $\mu_{\pi}$ maps $\operatorname{Aut}(\mathfrak{A})$ onto $\operatorname{Aut}(\mathfrak{B})$ there is an $\alpha\in \operatorname{Aut}(\mathfrak{A})$ with $\mu_{\pi}(\alpha)=\mu_{\pi}(s)$. Then $s\alpha^{-1}\in N$, and hence $S=N\operatorname{Aut}(\mathfrak{A})$. \end{proof} The following lemma relies on item (2) of Proposition \ref{semidir}. Let $H$ and $K$ be subgroups of the same group. Then we say that $H$ \emph{normalises} $K$ if $H$ is a subgroup of the normaliser of $K$, i.e., for every $h \in H$ we have that $$\{h^{-1} k h \mid k \in K\} = K.$$ \begin{lemma}\label{semidir2} The mapping $G \mapsto G \cap N$ defines a bijection between the closed subgroups of $S$ that contain $\operatorname{Aut}(\mathfrak{A})$ and the closed subgroups of $N$ which are normalized by $\operatorname{Aut}(\mathfrak{A})$. The inverse map is $H \mapsto H \rtimes \operatorname{Aut}(\mathfrak{A})$. \end{lemma} \begin{proof} If $H$ is a subgroup of $N$ which is normalized by $\operatorname{Aut}(\mathfrak{A})$ then the group generated by $H$ and $\operatorname{Aut}(\mathfrak{A})$ can be written as a product $H\operatorname{Aut}(\mathfrak{A})$. Since $H\cap \operatorname{Aut}(\mathfrak{A}) \subseteq N\cap \operatorname{Aut}(\mathfrak{A})=\{\operatorname{id}_A\}$, it follows that this group can be written as a semidirect product $H \rtimes \operatorname{Aut}(\mathfrak{A})$. Then $(H \rtimes \operatorname{Aut}(\mathfrak{A}))\cap N=H$. We claim that if $H$ is closed then so is $H \rtimes \operatorname{Aut}(\mathfrak{A})$.
Let $\alpha_1,\alpha_2, \ldots \in H \rtimes \operatorname{Aut}(\mathfrak{A})$ be a sequence converging to some $\alpha\in \operatorname{Sym}(A)$. Let $\beta_i$ (and $\beta$) be the unique element in $\operatorname{Aut}(\mathfrak{A})$ for which $\mu_{\pi}(\beta_i)=\mu_{\pi}(\alpha_i)$ (and $\mu_{\pi}(\beta)=\mu_{\pi}(\alpha)$), that is, $\alpha_i\beta_i^{-1}\in H$ (and $\alpha\beta^{-1}\in H$). Since $\mu_{\pi}$ is continuous it follows that $(\beta_i)_i$ converges to $\beta$. Hence the sequence $(\alpha_i\beta_i^{-1})_i$ converges to $\alpha\beta^{-1}$. Since $\alpha_i\beta_i^{-1}\in H$ and $H$ is closed it follows that $\alpha\beta^{-1}\in H$ and hence $\alpha\in H\operatorname{Aut}(\mathfrak{A})=H \rtimes \operatorname{Aut}(\mathfrak{A})$. Therefore, $H \rtimes \operatorname{Aut}(\mathfrak{A})$ is closed. Let $G$ be a subgroup of $S$ containing $\operatorname{Aut}(\mathfrak{A})$. Then we claim that the group $G \cap N$ is normalised by $\operatorname{Aut}(\mathfrak{A})$. Indeed, let $g \in \operatorname{Aut}(\mathfrak{A})$. Then $\{g^{-1} h g \mid h \in G \cap N\} = G \cap N$ since $N$ is a normal subgroup of $S$ and $G$ contains $\operatorname{Aut}(\mathfrak{A})$. Since $(G\cap N)\cap \operatorname{Aut}(\mathfrak{A})\subseteq N\cap \operatorname{Aut}(\mathfrak{A})=\{\operatorname{id}_A\}$, it follows that $G$ can be written as $G=(G\cap N) \rtimes \operatorname{Aut}(\mathfrak{A})$. Moreover, it is clear that if $G$ is closed, then so is $G\cap N$. We have obtained that the mappings defined in the lemma are inverses of each other, which also implies that they are both bijections. \end{proof} Hence, in order to classify the covering reducts of $\mathfrak{A}$ it is enough to classify those closed subgroups of $N$ which are normalized by $\operatorname{Aut}(\mathfrak{A})$.
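For instance, suppose that $\mathfrak{B}=({\mathbb N};=)$ and that every fiber of $\pi$ has exactly two elements (a special case which we fix here purely for illustration). Then $N$ can be identified with ${\mathbb Z}_2^{\mathbb N}$, where the $x$-th coordinate records whether the two elements of the fiber of $x$ are swapped, and $\operatorname{Aut}(\mathfrak{A})\cong \operatorname{Sym}({\mathbb N})$ acts by permuting the coordinates. One can check that the closed subgroups of $N$ which are normalised by $\operatorname{Aut}(\mathfrak{A})$ are exactly $$\{\operatorname{id}_A\}, \qquad \{\operatorname{id}_A,\sigma\}\cong {\mathbb Z}_2, \qquad N\cong {\mathbb Z}_2^{\mathbb N},$$ where $\sigma$ denotes the involution that swaps the two elements of every fiber simultaneously: any invariant closed subgroup containing an element other than $\operatorname{id}_A$ and $\sigma$ contains an element that swaps some fiber but not all fibers, and then, after conjugating by $\operatorname{Aut}(\mathfrak{A})$, multiplying, and taking limits, it also contains the swap of a single fiber, and hence equals $N$.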
If $N$ is a normal subgroup of a group $G$, then we write that two elements $x_1,x_2 \in G$ \emph{are the same modulo $N$} if they represent the same element in the factor group $G/N$, i.e., if $x_1x_2^{-1}\in N$. \begin{definition}\label{cond_hn} Let $H$ be a subgroup of $\prod_{i=1}^k{\operatorname{Sym}(F_i)}$, let $H_i \subseteq \operatorname{Sym}(F_i)$ be the projection of $H$ to the $i$-th coordinate, and let $N_i \triangleleft H_i$ for all $i \in \{1,\dots,k\}$. We write $N(H,N_1,\dots,N_k)$ for the group of all permutations $\alpha \in N$ such that \begin{itemize} \item for all $i \in \{1,\dots,k\}$ and for all elements $x_i\in O_i$ there is a permutation $\gamma \in H$ such that the action of $\alpha$ on the first coordinate of the fiber $F_i\times \{x_i\}$ is exactly the $i$-th coordinate of $\gamma$, and \item for all $i \in \{1,\dots,k\}$ and $x,y\in O_i$ the actions of $\alpha$ on the first coordinate of the fibers $F_i\times \{x\}$ and $F_i\times \{y\}$ are the same modulo $N_i$. \end{itemize} \end{definition} It follows directly from the definition that $N(H,N_1,\dots,N_k)$ is a closed subgroup of $N$ which is normalised by $\operatorname{Aut}(\mathfrak{A})$. We will show that, conversely, every closed subgroup of $N$ that is normalised by $\operatorname{Aut}(\mathfrak{A})$ is of the form $N(H,N_1,\dots,N_k)$. \begin{definition}\label{def:h-of-g} Let $G$ be a subgroup of $N$ which is normalised by $\operatorname{Aut}(\mathfrak{A})$. \begin{itemize} \item Let $H(G)$ be the subgroup of $\prod_{i=1}^k{\operatorname{Sym}(F_i)}$ containing all permutations $\gamma$ such that there exists a permutation $\alpha \in G$ and elements $x_i\in O_i$ such that for all $i \in \{1,\dots,k\}$ the action of $\alpha$ on the first coordinate of the fiber $F_i\times \{x_i\}$ is exactly the $i$-th coordinate of $\gamma$.
\item Let $N_i(G)$ be the group of all permutations $\gamma$ of $F_i$ such that there exists a permutation $\alpha\in G$ and an $x_i\in O_i$ such that the action of $\alpha$ on the first coordinate of the fiber $F_i\times \{x_i\}$ equals $\gamma$ and $\alpha$ fixes every element of $A \setminus (F_i\times \{x_i\})$. \end{itemize} \end{definition} \begin{remark} Since $G$ is normalised by $\operatorname{Aut}(\mathfrak{A})=\prod_{i=1}^k{\operatorname{Sym}(O_i)}$ it does not matter which elements $x_i \in O_i$ we take in the definition of $H(G)$ and $N_i(G)$. It follows that $H(G)$ and $N_i(G)$ are indeed groups. \end{remark} \begin{remark}\label{nh_going_back} It is clear from the definition that $H(N(H,N_1,\dots,N_k))=H$ and $N_i(N(H,N_1,\dots,N_k))=N_i$. \end{remark} \begin{definition} For each $i\in \{1,\dots,k\}$ let $N_i$ be a subgroup of $\operatorname{Sym}(F_i)$. Then $N^*(N_1,\dots,N_k)$ is defined to be the closure of the group generated by all permutations $\alpha$ for which there exist $i\in \{1,\dots,k\}$ and $x\in O_i$ such that the action of $\alpha$ on the first coordinate of the fiber $F_i\times \{x\}$ is in $N_i$ and $\alpha$ fixes every element of $A \setminus (F_i\times \{x\})$. \end{definition} It follows easily from the definition that $N^*(N_1,\dots,N_k)$ is contained in every closed group $G\leq N$ normalized by $\operatorname{Aut}(\mathfrak{A})$ with $N_i(G)=N_i$. It is also easy to see that in fact $N^*(N_1,\dots,N_k)=N(\prod_{i=1}^k{N_i},N_1,\dots,N_k)$ (using the notation from Definition \ref{cond_hn}). \begin{example} Let $\mathfrak{B} := \mathbb{N}$ and $\mathfrak{A}$ a strongly trivial finite covering structure of $\mathfrak{B}$ with fibers of size four. Then the covering structure from Example~\ref{expl:neither-free-nor-trivial} is a covering reduct $\mathfrak{C}$ of $\mathfrak{A}$. Let $G := \operatorname{Aut}(\mathfrak{C})$. As $G$ is transitive, we have $S = N$ and $k=1$ in Definition~\ref{def:h-of-g}.
Then $H(G)={\mathbb Z}_4$ and $N_1(G) = {\mathbb Z}_2$. \end{example} \begin{lemma}\label{normal_hn} Let $G$ be a subgroup of $N$ normalised by $\operatorname{Aut}(\mathfrak{A})$. Then $N_i(G) \triangleleft H_i(G)$ where $H_i(G)$ denotes the projection of the group $H(G)$ to the $i$-th coordinate. \end{lemma} \begin{proof} Let $\alpha\in H_i(G)$ and $\beta\in N_i(G)$. Let $\gamma\in G$ be an element witnessing $\alpha\in H_i(G)$. Let $x\in O_i$ and let $\delta\in G$ be an element witnessing $\beta\in N_i(G)$ on the fiber $F_i\times \{x\}$. Then the element $\gamma^{-1}\delta\gamma\in G$ witnesses the fact that $\alpha^{-1}\beta\alpha\in N_i(G)$. \end{proof} \begin{lemma}\label{generating_n} Let $G$ be a closed subgroup of $N$ normalised by $\operatorname{Aut}(\mathfrak{A})$. Let $\alpha\in G$ and $u,v\in O_i$. Then the actions of $\alpha$ on the first coordinate of the fibers $F_i\times \{u\}$ and $F_i\times \{v\}$ are the same modulo $N_i(G)$. \end{lemma} \begin{proof} Let $\alpha_u$ and $\alpha_v$ denote the action of $\alpha$ on the first coordinate of the fibers of $u$ and $v$, respectively, so $\alpha_u,\alpha_v \in \operatorname{Sym}(F_i)$. For $\beta \in \operatorname{Aut}(\mathfrak{B})$, we write $\pi^{-1}(\beta)$ for the unique $\gamma \in \operatorname{Aut}(\mathfrak{A})$ such that $\mu_{\pi}(\gamma) = \beta$. Let $\beta=(uv)\in \operatorname{Aut}(\mathfrak{B})$. Let $\gamma := \alpha^{-1}(\pi^{-1}(\beta))^{-1}\alpha\pi^{-1}(\beta)$. Then the action of $\gamma$ on the first coordinate of the fiber $F_i\times \{u\}$ is $\alpha_u^{-1}\alpha_v$. On the other hand, $\gamma$ fixes every element of $A \setminus (F_i\times \{u,v\})$. Now let $v_1,v_2,\dots$ be pairwise distinct elements of $O_i\setminus \{u,v\}$. Let $\beta_i:=(vv_i)$, and let $\gamma_i:=(\pi^{-1}(\beta_i))^{-1}\gamma\pi^{-1}(\beta_i)$. Then $\gamma_i$ acts on the first coordinate of the fiber $F_i\times \{u\}$ as $\alpha_u^{-1}\alpha_v$, and it fixes every element outside $F_i\times \{u,v_i\}$.
Therefore, the permutations $\gamma_i$ converge to a permutation $\gamma'$ which acts on the first coordinate of the fiber $F_i\times \{u\}$ as $\alpha_u^{-1}\alpha_v$, and fixes every element outside $F_i\times \{u\}$. By our assumption $G$ is closed, so $\gamma'\in G$. By definition this implies that $\alpha_u^{-1}\alpha_v\in N_i(G)$. \end{proof} \begin{proposition}\label{desc_nh} Let $\mathfrak{C}$ be a covering reduct of $\mathfrak{A}$ and $G := \operatorname{Aut}(\mathfrak{C}) \cap N$. Then $$G=N(H(G),N_1(G),\dots,N_k(G)).$$ \end{proposition} \begin{proof} We first show that $G\leq N(H(G),N_1(G),\dots,N_k(G))$. Let $\alpha \in G$. Then the definition of the group $H(G)$ implies that $\alpha$ satisfies the first item in the definition of $N(H(G),N_1(G),\dots,N_k(G))$. By Lemma \ref{generating_n}, $\alpha$ also satisfies the second item of this definition, and hence $\alpha \in N(H(G),N_1(G),\dots,N_k(G))$. Now let $\alpha\in N(H(G),N_1(G),\dots,N_k(G))$ be arbitrary. Let $u_i\in O_i$ be arbitrary elements. By Remark~\ref{nh_going_back} we have $H(N(H(G),N_1(G),\dots,N_k(G)))=H(G)$. This implies that there exists an $\alpha' \in G$ such that for every $i \in \{1,\dots,k\}$ the actions of $\alpha$ and $\alpha'$ agree on $F_i\times \{u_i\}$. For $v\in O_i$ let $\alpha_v$ and $\alpha_v'$ denote the actions of $\alpha$ and $\alpha'$, respectively, on the first coordinate of the fiber $F_i\times \{v\}$. We claim that for all $v\in O_i$ it holds that $\alpha_v(\alpha_v')^{-1}\in N_i(G)$.
By Remark~\ref{nh_going_back} we have $N_i(N(H(G),N_1(G),\dots,N_k(G)))=N_i(G)$, and hence by Lemma~\ref{generating_n} it follows that $\alpha_v\alpha_{u_i}^{-1}\in N_i(G)$, and $\alpha_v'(\alpha_{u_i}')^{-1}=\alpha_v'\alpha_{u_i}^{-1}\in N_i(G)$, and hence $$\alpha_v(\alpha_v')^{-1}=\alpha_v\alpha_{u_i}^{-1}\alpha_{u_i}(\alpha_v')^{-1}=\alpha_v\alpha_{u_i}^{-1}(\alpha_v'\alpha_{u_i}^{-1})^{-1}\in N_i(G).$$ This implies that $$\alpha(\alpha')^{-1}\in N\Bigl(\prod_{i=1}^k N_i(G),N_1(G),\dots,N_k(G)\Bigr)=N^*(N_1(G),\dots,N_k(G))\subseteq G.$$ Therefore $\alpha=(\alpha(\alpha')^{-1})\alpha'\in G$. \end{proof} \begin{remark} Proposition~\ref{desc_nh} generalizes Theorem 3.1 in~\cite{FiniteCovers} from $\operatorname{Sym}(\mathbb{N})$ to arbitrary automorphism groups of structures in $\mathcal{U}^*$. \end{remark} \begin{corollary}\label{reduct_trivial_finite} $\mathfrak{A}$ has finitely many covering reducts with respect to $\pi$. \end{corollary} \begin{proof} By Lemma \ref{semidir2} and item (1) of Proposition \ref{semidir} it is enough to show that $N$ has finitely many closed subgroups which are normalized by $\operatorname{Aut}(\mathfrak{A})$. By Proposition~\ref{desc_nh} every such group is determined by a subgroup $H$ of $\prod_{i=1}^k{\operatorname{Sym}(F_i)}$ and a system of normal subgroups $N_i\triangleleft H_i$, where $H_i$ is the projection of the group $H$ to the $i$-th coordinate. Then the statement of the corollary follows from the fact that there are finitely many choices for these groups. \end{proof} \subsection{The number of $\nabla$-classes in point stabilizers} \label{sect:stab} In this subsection we examine the possible growth of the number of $\nabla$-classes in stabilizers of finite sets. \begin{lemma}\label{stabil_finite} Let $G \in \mathcal{G}_{{\Exp}{+}}$ be a permutation group on a countably infinite set $X$, that is, $o^i_n(G)\leq c_1n^{dn}$ for some $c_1,d$ with $d<1$. Let $F\subset X$ be finite.
Then for every $\varepsilon>0$ \begin{itemize} \item there exists a constant $c_2$ such that $$o^i_n(G_{F}|_{X \setminus F}) < c_2n^{(d+\varepsilon) n}$$ \item there exists a constant $c_3$ such that $$o^i_n(G_{F}) < c_3n^{(d+\varepsilon) n}.$$ \end{itemize} In particular, $G_F\in \mathcal{G}_{{\Exp}{+}}$ and $G_F|_{X \setminus F} \in \mathcal{G}_{{\Exp}{+}}$. \end{lemma} \begin{proof} Let $\varepsilon>0$. The orbits of injective $n$-tuples of $G_F$ can be embedded into the orbits of injective $(n+|F|)$-tuples of $G$ by mapping the orbit of a tuple $t$ into the orbit of $(t,t')$ where $t'$ is any $|F|$-tuple such that $(t,t')$ has pairwise distinct entries and all elements of $F$ appear in $(t,t')$. Hence, $$o^i_n(G_F) \leq o^i_{n+|F|}(G) \leq c_1(n+|F|)^{d(n+|F|)} \leq c_2 n^{(d+\varepsilon) n}$$ for an appropriate constant $c_2$. Choosing $\varepsilon>0$ such that $d+\varepsilon<1$ shows that $G_F \in \mathcal{G}_{{\Exp}{+}}$. The statements for $G_{F}|_{X \setminus F}$ can be shown analogously. \end{proof} \begin{definition} Let $G$ be an oligomorphic permutation group on a countably infinite set $X$. For every finite set $F\subset X$ let $m_G(F)$ be the number of $\nabla(G_F)$-classes. For $n\in \mathbb{N}$ let $m_G(n):=\max(\{m_G(F) \mid F\subset X, |F|=n\})$. \end{definition} \begin{remark} If $F_1,F_2 \subset X$ are contained in the same orbit of $n$-subsets of $G$, then $m_G(F_1)=m_G(F_2)$. Hence, the set $\{m_G(F) \mid F\subset X, |F|=n\}$ is finite, and so the maximum of this set always exists. \end{remark} \begin{lemma}\label{not_too_many_classes} Let $G$ be a permutation group on a countably infinite set $X$ and suppose that $o^i_n(G)\leq c_1n^{dn}$ for some $c_1$ and $d<1$. Then for every $\varepsilon>0$ we have $m_G(n)\leq c_2n^{d+\varepsilon}$ for some constant $c_2$. \end{lemma} \begin{proof} Let $F\subset X$ be of size $n$.
Suppose that $X_1,X_2,\dots,X_l$ are the infinite classes of the congruence $\nabla(G_F)$, and arbitrarily choose $x_i \in X_i$ for $i \in \{1,\dots,l\}$. Then for each function $f \colon \{1,\dots,n\}\rightarrow \{1,\dots,l\}$ there are pairwise distinct elements $y_1,\dots,y_n$ so that $y_j\in X_{f(j)}\setminus \{x_{f(j)}\}$. Let $t^f:=(x_1,\dots,x_l,y_1,\dots,y_n)$. Then the tuples $t^f$ are injective and lie in pairwise different orbits of $G_{F}|_{X \setminus F}$. Thus $l^n\leq o^i_{l+n}(G_{F}|_{X \setminus F})\leq c_2n^{(d+\varepsilon) n}$ for some $c_2$ by Lemma \ref{stabil_finite}. Thus $l\leq c_2n^{d+\varepsilon}$, and therefore $m_G(F)\leq c_2n^{d+\varepsilon}$. \end{proof} \begin{lemma}\label{small_classes_2} Let $G$ be a permutation group on a countably infinite set $X$ and suppose that $o^i_n(G)\leq cn^{dn}$ for some $c$ and $d<1$. Let $F\subset X$ be finite, let $R$ be a congruence of $G_F$, and let $k\in \mathbb{N}$ be such that $\frac{k-1}{k}>d$. Then $R$ has finitely many classes of size at least $k$. \end{lemma} \begin{proof} By Lemma~\ref{stabil_finite} we can assume that $F=\emptyset$. Suppose to the contrary that $R$ has infinitely many classes of size at least $k$. Let $n$ be arbitrary and let $\mathcal{P}_n^k$ be the set of partitions $P=\{S_1,\dots,S_l\}$ of $\{1,2,\dots,n\}$ such that $|S_i|\leq k$ for all $i=1,\dots,l$. For each $P \in \mathcal{P}_n^k$ we can choose pairwise distinct elements $x_1^P,\dots,x_n^P \in X$ such that $x_i^P R x_j^P$ if and only if $i$ and $j$ lie in the same block of $P$. Then the $n$-tuples $(x_1^P,\dots,x_n^P)$ for $P\in \mathcal{P}_n^k$ are injective and lie in pairwise different orbits of $G$. Therefore $o^i_n(G)\geq |\mathcal{P}_n^k|$. Let us choose $\varepsilon>0$ such that $\frac{k-1}{k}-\varepsilon>d$. Then by Lemma \ref{counting} it follows that $$o^i_n(G)\geq|\mathcal{P}_n^k| = p_k(n) \geq n^{(\frac{k-1}{k}-\varepsilon)n}>cn^{dn}$$ for $n$ large enough. This contradicts our assumption.
\end{proof} \begin{corollary}\label{small_classes_3} Let $G$ be a permutation group on a countably infinite set $X$ and suppose that $o^i_n(G)\leq cn^{dn}$ for some $c$ and $d<1$. Let $F\subset X$ be finite and let $R$ be a congruence of $G_F$. Then $R$ has finitely many infinite classes. \end{corollary} \begin{proof} Follows directly from Lemma \ref{small_classes_2}. \end{proof} \begin{definition} Let $(\mathcal{G}_{{\Exp}{+}})^k$, for $k\in \mathbb{N}$, denote the class of those groups $G\in \mathcal{G}_{{\Exp}{+}}$ for which the following holds. \begin{itemize} \item[$(*^k)$] For every finite $F \subset X$, every congruence of $G_F$ has at most finitely many equivalence classes of size at least $k$. \end{itemize} Let $(\mathcal{K}_{{\Exp}{+}})^k$ denote the class of those structures in $\mathcal{K}_{{\Exp}{+}}$ whose automorphism group is in $(\mathcal{G}_{{\Exp}{+}})^k$. \end{definition} Using the definition above, Lemma \ref{small_classes_2} immediately implies the following. \begin{corollary}\label{union_delta_size} $\mathcal{G}_{{\Exp}{+}}=\bigcup_{k=1}^\infty(\mathcal{G}_{{\Exp}{+}})^k$, and $\mathcal{K}_{{\Exp}{+}}=\bigcup_{k=1}^\infty(\mathcal{K}_{{\Exp}{+}})^k$. \end{corollary} \subsection{The primitive case} \label{sect:primitive} We use the following theorem of Dugald Macpherson~\cite{Macpherson-Orbits}. \begin{theorem}[Theorem 1.2 in~\cite{Macpherson-Orbits}]\label{dugald_primitive} Let $G$ be a permutation group on a countably infinite set $X$ which is primitive but not highly transitive. Then there is a polynomial $p$ such that $o^i_n(G)\geq \frac{n!}{p(n)}$. \end{theorem} Theorem \ref{dugald_primitive} immediately implies the following. \begin{lemma}\label{primitive} Let $G \in \mathcal{G}_{{\Exp}{+}}$ be primitive. Then $G$ is highly transitive. \end{lemma} \begin{proof} Observe that for all $c$, all $d<1$, and every polynomial $p$ we have $\frac{n!}{p(n)} > cn^{dn}$ for all large enough $n$. This follows from Stirling's formula.
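In more detail, Stirling's formula gives $\log n! = n\log n - n + O(\log n)$, while $$\log\bigl(c\,n^{dn}\,p(n)\bigr) = dn\log n + O(\log n),$$ so that $$\log \frac{n!}{p(n)\,c\,n^{dn}} = (1-d)\,n\log n - n + O(\log n)\longrightarrow \infty$$ as $n\to\infty$, since $d<1$.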
Hence, the lemma follows from Theorem \ref{dugald_primitive}. \end{proof} \subsection{The case when $\Delta(G)$ is trivial} \label{sect:delta-trivial} The result of Macpherson (Theorem~\ref{dugald_primitive}) is used via the following lemma. \begin{lemma}\label{nice_case} Let $G \in \mathcal{G}_{{\Exp}{+}}$ be such that $\Delta(G)$ is trivial and such that $G$ stabilizes each class of $\nabla(G)$. Then $G$ acts highly transitively on each of its orbits. \end{lemma} \begin{proof} Let $O_1,\dots,O_m$ be the orbits of $G$. Then $O_1,\dots,O_m$ are also the classes of $\nabla(G)$. We claim that the action of $G$ on $O_i$ is primitive for each $i \in \{1,\dots,m\}$; this suffices: since $o^i_n(G|_{O_i})\leq o^i_n(G)$ we have $G|_{O_i}\in \mathcal{G}_{{\Exp}{+}}$, so the statement of the lemma then follows from Lemma~\ref{primitive}. Let $R_i$ be a congruence of $G|_{O_i}$. Since $G$ acts transitively on $O_i$ it follows that every class of $R_i$ has the same size. If this size is finite, then let us consider the congruence $R_i^*:=R_i\cup \{(x,x) \mid x\in X\setminus O_i\}$. Then every class of $R_i^*$ is finite and thus $R_i^*$ must be finer than $\Delta(G)$. Since $\Delta(G)$ is trivial, $R_i^*$ and $R_i$ are trivial, too. Now assume that every class of $R_i$ is infinite. Then by Corollary \ref{small_classes_3} $R_i$ has finitely many classes. Let $C_1,C_2,\dots,C_l$ be these classes. Then $\{O_1,\dots,O_{i-1},O_{i+1},\dots,O_m,C_{1},C_{2},\dots,C_{l}\}$ is an invariant partition. Since $\nabla(G)$ is the finest congruence with finitely many classes, it follows that $l=1$ and thus $R_i$ is again trivial. Therefore $G|_{O_i}$ is primitive for all $i$. \end{proof} Under the conditions of Lemma~\ref{nice_case} we will show that, in fact, if $G$ is closed, then $G=\prod_{i=1}^m{\operatorname{Sym}(O_i)}$ where $O_1,\dots,O_m$ are the orbits of $G$, that is, $G$ is the automorphism group of a unary structure (Lemma \ref{extension_2}).
The following lemma is well-known (see e.g.\ Proposition~1.4(2) in~\cite{MoonStalder}); we give a proof for the convenience of the reader. \begin{lemma}\label{highly_normal} Every normal subgroup of a highly transitive permutation group acting on an infinite set is either highly transitive or trivial. \end{lemma} \begin{proof} Let $G$ be a highly transitive subgroup of $\operatorname{Sym}(X)$ for some infinite set $X$ and let $H$ be a normal subgroup of $G$. The closure $K$ of $H$ in $\operatorname{Sym}(X)$ is a normal subgroup of $\operatorname{Sym}(X)$. To see this, let $\alpha \in K$ and $\beta \in \operatorname{Sym}(X)$. Since $K$ is the closure of $H$ there exists a sequence $(\alpha_i)_{i \in {\mathbb N}}$ of elements of $H$ that converges to $\alpha$. Since $G$ is highly transitive, there exists a sequence $(\beta_i)_{i \in {\mathbb N}}$ of elements of $G$ that converges to $\beta$. Then $$\beta \alpha \beta^{-1} = (\lim_i \beta_i) (\lim_i \alpha_i) (\lim_i \beta_i)^{-1} = \lim_i (\beta_i \alpha_i \beta_i^{-1}) \in K$$ since $K$ is closed and $\beta_i \alpha_i \beta_i^{-1} \in H$ because $H \triangleleft G$. By the known fact that $\operatorname{Sym}(X)$ has no proper non-trivial closed normal subgroups (the normal subgroups of $\operatorname{Sym}(X)$ have been classified, see Chapter 8.1 in~\cite{DixonMortimer}), either $K$ is trivial or $K=\operatorname{Sym}(X)$. In the first case $H$ is trivial; in the second case $H$ is dense in $\operatorname{Sym}(X)$ and therefore highly transitive. \end{proof} \begin{lemma}\label{two_parts} Let $G$ be a closed permutation group on a countably infinite set $X$. Let $T$ be an infinite orbit of $G$ such that $G|_T$ is highly transitive and let $S:=X \setminus T$. Then one of the following holds. \begin{enumerate} \item $\{\operatorname{id}_S\}\times\operatorname{Sym}(T) \subseteq G$, \item There exists a surjective homomorphism $e \colon G|_S \to G|_T$ such that a permutation $\gamma$ of $X$ is in $G$ if and only if there exists a permutation $\alpha \in G|_S$ so that $\gamma|_S=\alpha$ and $\gamma|_T=e(\alpha)$.
\end{enumerate} \end{lemma} \begin{proof} If $\alpha\in \operatorname{Sym}(S)$ and $\beta\in \operatorname{Sym}(T)$ then we use the notation $(\alpha,\beta)$ for the unique permutation $\gamma \in \operatorname{Sym}(X)$ whose restriction to $S$ equals $\alpha$ and whose restriction to $T$ equals $\beta$. \medskip \emph{Case 1. For every $\alpha \in G|_S$ there is a unique $e(\alpha) \in G|_T$ such that $(\alpha,e(\alpha)) \in G$.} \medskip It is easy to see that in this case $e$ is a surjective homomorphism from $G|_S$ to $G|_T$, therefore Condition (2) holds. \medskip \emph{Case 2. For some $\alpha\in G|_S$ there exist at least two distinct permutations $\beta_1,\beta_2 \in G|_T$ such that $\gamma_1:=(\alpha,\beta_1) \in G$ and $\gamma_2:=(\alpha,\beta_2) \in G$.} \medskip Let $K:=\{\beta\in \operatorname{Sym}(T) \mid (\operatorname{id}_S,\beta)\in G\}$. Then \begin{itemize} \item \emph{$K$ is nontrivial}, since $\gamma_1\gamma_2^{-1}=(\operatorname{id}_S,\beta_1\beta_2^{-1})\in G$ and hence $\beta_1\beta_2^{-1}\in K$, \item \emph{$K$ is closed in $\operatorname{Sym}(T)$}, since $G$ is closed, and \item \emph{$K$ is a normal subgroup of $\operatorname{Sym}(T)$}. \end{itemize} To prove normality, let $(\operatorname{id}_S, \beta)\in G$, and let $\delta \in \operatorname{Sym}(T)$ be arbitrary. Since $G|_T$ is highly transitive it is dense in $\operatorname{Sym}(T)$, so there is a sequence $\delta_1,\delta_2,\ldots$ of elements of $G|_T$ which converges to $\delta$. By the definition of $G|_T$ we know that there exist elements $\alpha_i\in \operatorname{Sym}(S)$ such that $\eta_i:=(\alpha_i,\delta_i)\in G$ for every $i \in {\mathbb N}$. Then $G\ni \eta_i(\operatorname{id}_S,\beta)\eta_i^{-1}=(\operatorname{id}_S, \delta_i \beta \delta_i^{-1})$. Therefore $\lim_i(\operatorname{id}_S, \delta_i \beta \delta_i^{-1})=(\operatorname{id}_S, \delta \beta \delta^{-1})\in G$ since $G$ is closed. By definition, this implies that $\delta \beta\delta^{-1}\in K$ which shows that $K$ is indeed a normal subgroup. Then by Lemma~\ref{highly_normal}, $K=\operatorname{Sym}(T)$.
Thus, $\{\operatorname{id}_S\}\times \operatorname{Sym}(T)\subseteq G$, i.e., Condition (1) holds. \end{proof} \begin{lemma}\label{closedness} Let $G$ be a closed oligomorphic permutation group on a countably infinite set $X$. Let $O_1,\dots,O_m$ be the infinite orbits of $G$ and suppose that $G$ acts highly transitively on each $O_i$. Let $l \in \{1,\dots,m\}$ and let $S := \bigcup_{i=1}^l O_i$ be such that $\operatorname{acl}(S)=X$. Then $H:=G|_S$ is closed in $\operatorname{Sym}(S)$. \end{lemma} \begin{proof} We first show the statement for $l=m-1$. Let $T:=O_m$ (so that we have the same notation as in Lemma \ref{two_parts}). First, let us assume that Condition (1) of Lemma \ref{two_parts} holds. Let $(\alpha_j)_{j \in {\mathbb N}}$ be a sequence of elements of $G|_S$ that converges to some $\alpha \in \operatorname{Sym}(S)$. For each $j$ choose $\gamma_j\in G$ with $\gamma_j|_S=\alpha_j$, and let $\beta_j\in \{\operatorname{id}_S\}\times\operatorname{Sym}(T) \subseteq G$ be such that $\beta_j|_T=\gamma_j|_T$. Let $\alpha_j':=\gamma_j\beta_j^{-1}\in G$. Then $\alpha_j'=(\alpha_j,\operatorname{id}_T)\rightarrow (\alpha,\operatorname{id}_T)\in G$ since $G$ is closed. In particular $\alpha\in G|_S$ and $H$ is closed. Otherwise, item (2) of Lemma \ref{two_parts} holds. Let $e \colon G|_S \to G|_T$ be as in item (2) of Lemma \ref{two_parts}. If $F\subset S$ is finite and $\alpha\in G|_S$ then \begin{align*} \operatorname{acl}_G(\alpha(F))\cap T & =\operatorname{acl}_G((\alpha,e(\alpha))(F))\cap T\\ & = (\alpha,e(\alpha))(\operatorname{acl}_G(F))\cap T \\ & =(\alpha,e(\alpha))(\operatorname{acl}_G(F)\cap T) =e(\alpha)(\operatorname{acl}_G(F)\cap T). \end{align*} By assumption $\operatorname{acl}_G(F)\cap T$ is nonempty for some $F$ (and it is always finite). Let $k:=|\operatorname{acl}_G(F)\cap T|$.
Then by our previous observation and the fact that $G|_T=e(G|_S)$ is highly transitive it follows that for any subset $F'$ of $T$ of size $k$ there exists a finite subset $F''$ of $S$ such that $\operatorname{acl}_G(F'')\cap T=F'$. We claim that for all $x\in T$ there is a finite subset $F$ of $S$ such that for all $\alpha\in (G|_S)_F$ it holds that $e(\alpha)(x)=x$. Let $F_1'$ and $F_2'$ be subsets of $T$ of size $k$ such that $F_1'\cap F_2'=\{x\}$. Then as we have seen there exist finite subsets $F_1''$ and $F_2''$ of $S$ such that $\operatorname{acl}_G(F_i'')\cap T=F_i'$ for $i=1$ and $i=2$. Now let $F:=F_1''\cup F_2''$. Then if $\alpha\in (G|_S)_F$, then $\alpha\in (G|_S)_{F_i''}$, so $e(\alpha)(F_i')=F_i'$. Therefore $e(\alpha)(x)\in F_1'\cap F_2'=\{x\}$, that is, $e(\alpha)(x)=x$. Now let $(\alpha_j)_j$ be a sequence in $G|_S$ converging to some $\alpha\in \operatorname{Sym}(S)$. We want to show that the sequence $(e(\alpha_j))_j$ is also convergent, i.e., for all $x\in T$ we have $e(\alpha_j)(x)=e(\alpha_{j+1})(x)$ if $j$ is large enough. By our claim it follows that there is a finite set $F\subset S$ such that for all $\alpha'\in (G|_S)_F$ we have $e(\alpha')(x)=x$. Since $(\alpha_j)_j$ is convergent there is an index $j_0$ such that $\alpha_j(y)=\alpha_{j+1}(y)$ for all $y\in F$ and $j\geq j_0$. Then for $j\geq j_0$ it follows that $\alpha_{j+1}^{-1}\alpha_j\in (G|_S)_F$, hence $e(\alpha_{j+1})^{-1}e(\alpha_j)(x)=e(\alpha_{j+1}^{-1}\alpha_j)(x)=x$, and thus $e(\alpha_j)(x)=e(\alpha_{j+1})(x)$. Therefore $(e(\alpha_j))_j$ converges to some $\beta\in \operatorname{Sym}(T)$, and since $G$ is closed we obtain $(\alpha,\beta)=\lim_j(\alpha_j,e(\alpha_j))\in G$. Hence $\alpha\in G|_S$, which shows that $G|_S$ is closed. For $l < m-1$, note that $S \subseteq P := O_1 \cup \cdots \cup O_{m-1}$. Hence, $\operatorname{acl}(P) = X$ and we can apply the above argument for $P$ instead of $S$. We obtain that $G|_P$ is closed. Hence, the group $G|_P$ satisfies all the assumptions for $G$ but has fewer infinite orbits, so by induction we finally obtain that $G|_S$ is closed.
\end{proof} In the proof of the next lemma it will be convenient to use a recent general result of Paolini and Shelah. A closed subgroup $G$ of $\operatorname{Sym}(X)$ has the \begin{itemize} \item \emph{small index property} if every subgroup of $G$ of index less than $2^{\aleph_0}$ is open, i.e., contains the pointwise stabilizer of a finite set $F \subset X$. \item \emph{strong small index property} if every subgroup of $G$ of index less than $2^{\aleph_0}$ lies between the pointwise and the setwise stabilizer of a finite set $F \subset X$. \end{itemize} The strong small index property of $\operatorname{Sym}(X)$ itself has been shown in~\cite{DixonNeumannThomas}. (In fact, the automorphism groups of all $\omega$-categorical $\omega$-stable structures, and thus, by the results that we are about to prove, all groups in $\mathcal{G}_{{\Exp}{+}}$, have the small index property~\cite{HodgesHodkinsonLascarShelah}.) On the other hand, already $R(\mathcal{U})$ contains structures whose automorphism groups do not have the strong small index property (take e.g.~an equivalence relation with two infinite classes). A permutation group $G$ on a set $X$ is said to have \emph{no algebraicity} if $\operatorname{acl}_G(Y) = Y$ for every $Y \subseteq X$. The following has been proved in~\cite{PaoliniShelahReconstructing} (Corollary 2). \begin{theorem}[\cite{PaoliniShelahReconstructing}]\label{paolini_shelah} Let $X_1$ and $X_2$ be countable, let $G \leq \operatorname{Sym}(X_1)$ and $H \leq \operatorname{Sym}(X_2)$ be closed oligomorphic subgroups that have the strong small index property and no algebraicity, and let $\xi \colon G \to H$ be a topological isomorphism. Then there exists a bijection $b$ between $X_1$ and $X_2$ that \emph{induces $\xi$}, i.e., for all $\alpha \in G$ and $x\in X_2$ $$(\xi \alpha)(x) = b(\alpha(b^{-1}(x))).$$ \end{theorem} It is well-known and easy to see that the small index property for $G$ implies that every homomorphism $h \colon G \to \operatorname{Sym}(Y)$, for a countable set $Y$, is continuous.
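To make the parenthetical example above concrete, the following sketch (our own elaboration, not taken from the cited works) verifies that an equivalence relation with two infinite classes fails the strong small index property:

```latex
Let $X = X_1 \sqcup X_2$ with $X_1,X_2$ countably infinite, let $E$ be the
equivalence relation with classes $X_1$ and $X_2$, and let
$G := \operatorname{Aut}(X;E)$. The subgroup
\[ H := \operatorname{Sym}(X_1)\times\operatorname{Sym}(X_2) \]
of class-preserving permutations has index $2$ in $G$. For every finite
nonempty $F \subset X$ the pointwise stabilizer $G_{(F)}$ is contained in
$H$, since a permutation swapping $X_1$ and $X_2$ fixes no point. But $H$
is contained in the setwise stabilizer $G_{\{F\}}$ only for $F=\emptyset$,
because the orbits of $H$ are the infinite sets $X_1$ and $X_2$; and
$G_{(\emptyset)} = G \not\subseteq H$. Hence $H$ does not lie between the
pointwise and the setwise stabilizer of any finite set, so $G$ fails the
strong small index property (while $H$, being open, is compatible with
the small index property).
```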
It follows from~\cite{Gaughan} that the image of a continuous homomorphism from $\operatorname{Sym}(X)$ to $\operatorname{Sym}(Y)$ is closed in $\operatorname{Sym}(Y)$ (see Theorem 1.3 in~\cite{YaacovTsankov} for a much more general recent result which also implies this). \begin{lemma}\label{extension} Let $G$ be a closed oligomorphic permutation group on a countably infinite set $X$. Let $O_1,\dots,O_m$ be the infinite orbits of $G$. Suppose that for some $l \leq m$ and $S := \bigcup_{i=1}^lO_i$ we have $\operatorname{acl}_G(S)=X$ and $G|_S=\operatorname{Sym}(O_1)\times \dots \times \operatorname{Sym}(O_l)$. Then $\Delta(G)$ is not trivial. \end{lemma} \begin{proof} Let $j>l$ and $T:=O_j$. Let $G_j:=G|_{S\cup T}$. Then $G_j$ is closed by Lemma~\ref{closedness}, so we can apply Lemma \ref{two_parts} to the group $G_j$ with respect to the partition of $S \cup T$ into $S$ and $T$. Since $T \subseteq \operatorname{acl}_G(S)$ it follows that Condition (1) of Lemma \ref{two_parts} cannot hold. Thus by Lemma \ref{two_parts} there exists a homomorphism $e_j \colon G|_S\rightarrow \operatorname{Sym}(O_j)$ so that $G_j=\{(\alpha,e_j(\alpha)) \mid \alpha\in G|_S\}$. For $i \in \{1,\dots,l\}$ and $\alpha \in G|_{O_i} = \operatorname{Sym}(O_i)$ let $\hat{\alpha}_i$ denote the unique permutation of $S$ for which $\hat{\alpha}_i|_{O_i}=\alpha$ and $\hat{\alpha}_i|_{O_{k}}=\operatorname{id}_{O_{k}}$ if $k \neq i$. Then define the homomorphisms $$e_{ij} \colon \operatorname{Sym}(O_i)\rightarrow \operatorname{Sym}(O_j), \alpha \mapsto e_j(\hat{\alpha}_i).$$ As mentioned before the lemma, the map $e_{ij}$ is continuous. Let $H_i:=\{\hat{\alpha}_i \mid \alpha\in \operatorname{Sym}(O_i)\}$. Then $H_i \triangleleft G|_S$, and so $e_j(H_i)\triangleleft e_j(G|_S)$. By definition it follows that $e_j(G|_S)=(G_j)|_{O_j}=G|_{O_j}$. In particular $e_j(G|_S) \leq \operatorname{Sym}(O_j)$ is highly transitive.
Thus by Lemma \ref{highly_normal} it follows that either $e_j(H_i)$ is also highly transitive or it is trivial. If $e_{ij}$ is trivial for every $i \in \{1,\dots,l\}$, then $e_j$ is trivial, hence $G$ fixes every element of $O_j$, contradicting the fact that $G$ acts transitively on $T=O_j$. Thus, there is an $i$ such that the image $I\leq \operatorname{Sym}(O_j)$ of $e_{ij}$ is highly transitive. As we have mentioned before the statement of the lemma, $I$ is a closed subset of $\operatorname{Sym}(O_j)$; being closed and highly transitive, $I$ is in fact all of $\operatorname{Sym}(O_j)$. Hence we can apply Theorem \ref{paolini_shelah} and obtain a bijection $b_{j}$ between $O_i$ and $O_j$ which induces $e_{ij}$ (we could as well have derived this from the argument in Example 2 on page 224 of~\cite{HodgesLong}). Now let $i'\neq i$, and let $\alpha$ be a nontrivial permutation of $O_{i'}$. Then $\hat{\alpha}_{i'}$ commutes with every element of $H_i$, and so $e_{i'j}(\alpha)=e_j(\hat{\alpha}_{i'})$ commutes with every element of $e_j(H_i)=\operatorname{Sym}(O_j)$. Since the centralizer of $\operatorname{Sym}(O_j)$ in $\operatorname{Sym}(O_j)$ is trivial, $e_{i'j}(\alpha)=\operatorname{id}_{O_j}$. We have obtained that for all $j$ there is a unique $i(j) \leq l$ and a bijection $b_j \colon O_{i(j)} \rightarrow O_j$ such that for all $g\in G$ and $x\in O_j$ we have $g(x)=b_j(g(b_j^{-1}(x)))$ (if $j\leq l$, then the statement is trivial). Let $b$ be the union of the functions $b_1^{-1},\dots,b_m^{-1}$ and define the relation $\sim$ on $X$ by $x\sim y\Leftrightarrow x=y$ or $b(x)=b(y)$. Then $\sim$ is a congruence of $G$ all of whose classes are finite. Moreover, $\sim$ is nontrivial since $m>l$. This implies that $\Delta(G)$ is also nontrivial. \end{proof} \begin{lemma}\label{two_cuts} Let $H$ be an oligomorphic permutation group on a countably infinite set $X$ with two infinite orbits $Y$ and $Z$. Let us assume that $H$ acts 2-transitively on $Z$ and that there exists $y\in Y$ so that $|\nabla(H_y|_{Z})|\geq 2$. Then for every $n \in {\mathbb N}$ there exist $y_1,\dots,y_n\in Y$ such that $|\nabla((H_{y_1,\dots,y_n})|_{Z})|\geq n+1$.
\end{lemma} \begin{proof} By transitivity we know that for all $y'\in Y$ it holds that $|\nabla(H_{y'}|_{Z})|\geq 2$. We show the statement of the lemma by induction on $n$. For $n=1$ the statement is trivial. Now suppose that we know there exist $y_1,\dots,y_{n-1}\in Y$ such that $\nabla((H_{y_1,\dots,y_{n-1}})|_{Z})$ has at least $n$ classes. Let $C_1,\dots,C_m$ be the classes of $\nabla((H_{y_1,\dots,y_{n-1}})|_{Z})$. Since $Z$ is infinite and $m$ is finite, we may assume that $C_1$ is infinite; let $z_1,z_2\in C_1$ be distinct. Since $H$ acts 2-transitively on $Z$, it follows that there exists a $y_n\in Y$ such that $(z_1,z_2)\not\in \nabla(H_{y_n}|_{Z})$. Let $$R:=\nabla((H_{y_1,\dots,y_{n-1}})|_{Z})\cap \nabla(H_{y_n}|_{Z}).$$ Then $R$ is a congruence of $(H_{y_1,\dots,y_n})|_{Z}$ with finitely many classes which is strictly finer than $\nabla((H_{y_1,\dots,y_{n-1}})|_{Z})$. Therefore, $\nabla((H_{y_1,\dots,y_{n}})|_{Z})$ is finer than $R$, and hence strictly finer than $\nabla((H_{y_1,\dots,y_{n-1}})|_{Z})$. In particular, $|\nabla((H_{y_1,\dots,y_{n}})|_{Z})|\geq m+1\geq n+1$. \end{proof} \begin{lemma}\label{gen_sym} Let $G \in \mathcal{G}_{{\Exp}{+}}$ be closed. Suppose that $\Delta(G)$ is trivial and that $G$ stabilizes each class of $\nabla(G)$. Let $O_1,\dots,O_l$ be the orbits of $G$. Suppose that each $O_i$ is infinite, and that $S := X\setminus O_l$ is algebraically closed. Then $\{\operatorname{id}_S\}\times \operatorname{Sym}(O_l) \subseteq G$. \end{lemma} \begin{proof} By our assumptions, the orbits $O_1,\dots,O_l$ are the classes of $\nabla(G)$. By Lemma \ref{nice_case} it follows that $G$ acts highly transitively on each orbit $O_i$. We apply Lemma~\ref{two_parts} to $S$ and $T := O_l$. If Condition (1) of Lemma~\ref{two_parts} applies then we are done. We claim that Condition (2) of Lemma~\ref{two_parts} cannot hold. For this, it suffices to show that $G$ contains a nontrivial permutation $\gamma$ such that $\gamma|_S=\operatorname{id}_S$.
Indeed, if there exists a homomorphism $e \colon G|_S \to G|_T$ as in Condition (2) of Lemma \ref{two_parts}, then (using the notation from the proof of Lemma~\ref{two_parts}) every $\gamma\in G$ with $\gamma|_S=\operatorname{id}_S$ equals $(\operatorname{id}_S,e(\operatorname{id}_S)) = (\operatorname{id}_S,\operatorname{id}_T)$ and is therefore trivial. \medskip {\bf Claim 1.} Let $F\subseteq S$ be finite and let $L := (G_F)|_T$. Then $\nabla(L/\Delta(L))$ is trivial. \begin{proof}[Proof of Claim 1] Let us suppose the contrary, and let us choose the set $F$ to be minimal with this property. Let $F':=F\setminus \{y\}$ for some $y\in F$. Put $E:=\Delta((G_{F'})|_T)$, $Z:=T/E$, and $K:=((G_{F'})|_T)/E \leq \operatorname{Sym}(Z)$. Let us consider the mapping $\pi \colon T\rightarrow Z$ which maps each element to its $E$-class. If $u,v\in Z^n$ are in different orbits of $K$, then the tuples in $\pi^{-1}(u)$ and $\pi^{-1}(v)$ are in different orbits of $G_{F'}$. Moreover, if $u\in Z^n$ is injective, then so is every tuple in $\pi^{-1}(u)$. This means that the number of injective $n$-orbits of $K$ is at most $o^i_n(G_{F'})$. By Corollary~\ref{stabil_finite} it follows that $o^i_n(G_{F'})\leq cn^{dn}$ for some constants $c,d$ with $d<1$. Therefore, $o^i_n(K)\leq cn^{dn}$ and thus $K\in \mathcal{G}_{{\Exp}{+}}$. By definition $\Delta(K)$ is trivial. It follows from the minimality of $F$ that the congruence $\nabla(K)$ is also trivial. We have obtained that both $\Delta(K)$ and $\nabla(K)$ are trivial, and $K\in \mathcal{G}_{{\Exp}{+}}$. Then Lemma \ref{nice_case} implies that $K$ is highly transitive. Now let $Y:=O_j\setminus F'$ where $O_j$ is the orbit of $y$. Then $G_{F'}$ acts naturally on $Y\sqcup Z$. Let $H$ be the image of this action (as a subgroup of $\operatorname{Sym}(Y \sqcup Z)$). We claim that the group $H$, the orbits $Y,Z$ and the element $y\in Y$ satisfy the conditions of Lemma \ref{two_cuts}. The only nontrivial fact that we have to check is that the congruence $\nabla((H_y)|_Z)$ is nontrivial. We know that $\nabla((G_F)|_T/\Delta((G_F)|_T))$ is nontrivial.
By Lemma \ref{delta_union_nabla} this is equivalent to the fact that the congruence generated by $\nabla((G_F)|_T)$ and $\Delta((G_F)|_T)$ is nontrivial. The congruence $\Delta((G_{F'})|_T)$ is also a congruence of $(G_F)|_T$ with finite classes, hence $\Delta((G_{F'})|_T)$ is finer than $\Delta((G_{F})|_T)$. Hence, the congruence generated by $\nabla((G_F)|_T)$ and $\Delta((G_{F'})|_T)$ is also nontrivial. Using Lemma \ref{delta_union_nabla} again it follows that $\nabla((G_F)|_T/\Delta((G_{F'})|_T))$ is nontrivial. Since $(G_F)|_T=((G_{F'})_{\{y\}})|_T$ this means that the congruence $\nabla((H_y)|_Z)$ is nontrivial. Applying Lemma \ref{two_cuts} we obtain that for every $n$ there exist $y_1,\dots,y_n\in Y$ so that $|\nabla((H_{y_1,\dots,y_{n}})|_{Z})| \geq n+1$. This also implies that $|\nabla(G_{F'\cup \{y_1,\dots,y_{n}\}})| \geq n+1$, that is, $m_G(F'\cup \{y_1,\dots,y_n\}) \geq n+1$. In particular $m_G(|F'|+n)\geq n+1$. This contradicts Lemma~\ref{not_too_many_classes} for $0<\varepsilon<1-d$ if $n$ is large enough, finishing the proof of Claim 1. \end{proof} Claim 1 implies that if $F\subset S$ is finite, then $G_F$ acts transitively on $T/\Delta((G_F)|_T)$. This also implies that all $\Delta(G_F)$-classes contained in $T$ have the same size. For a finite set $F\subset S$, let $k(F)$ denote this size. By Corollary \ref{union_delta_size} we have $G\in (\mathcal{G}_{{\Exp}{+}})^k$ for some $k$. This implies that $k(F)\leq k$ for every finite subset $F$ of $S$. In particular, there exists a finite set $F$ for which $k(F)$ is maximal. So let us choose $F\subset S$ so that $k(F)$ is maximal, and, as above, let $E:=\Delta((G_{F})|_T)$, $Z:=T/E$, and let $\pi \colon T\rightarrow Z$ be the factor map as in the proof of Claim 1. \medskip {\bf Claim 2.} For any finite $F' \subset S$ that contains $F$ the group $G_{F'}$ acts highly transitively on $Z$. \begin{proof}[Proof of Claim 2] Let $K':=((G_{F'})|_T)/E \leq \operatorname{Sym}(Z)$.
We would like to use Lemma \ref{nice_case} again. As in the proof of Claim 1 it follows that $K'\in \mathcal{G}_{{\Exp}{+}}$. Claim 1 implies that $\nabla(K')$ is trivial. So it is enough to show that $\Delta(K')$ is trivial. In order to show this, let us consider the relation $$R:=\{(x,y) \in T^2 \mid (\pi(x),\pi(y)) \in \Delta(K')\} \cup \{(x,x) \mid x\in S\}$$ on $X$. Then $R$ is a congruence of $G_{F'}$ with finite classes. Therefore $R$ is finer than $\Delta(G_{F'})$. By the maximality of $k(F)$ it follows that $\Delta(G_F)$ and $\Delta(G_{F'})$ agree on $T$. This is only possible if $\Delta(K')$ is trivial. Therefore the conditions of Lemma \ref{nice_case} hold for the group $K'$, and thus $K'$ is highly transitive. \end{proof} Now let us choose a prime $p>k(F)$ and distinct elements $z_1,\dots,z_p\in Z$. Let $F=S_0\subset S_1\subset \cdots$ and $\pi^{-1}(z_1)\cup \cdots \cup \pi^{-1}(z_p) =T_0\subset T_1 \subset \cdots$ be sequences of finite subsets of $S$ and $T$, respectively, such that $\bigcup{S_i}=S$, $\bigcup{T_i}=T$, and each $T_i$ is a union of $E$-classes. By Claim 2, the stabilizer $G_{S_i}$ acts highly transitively on $Z=T/E$. In particular, there is a permutation $\gamma_i' \in G$ which fixes every element in $S_i$ and which acts on $T_i/E$ as the cycle $(z_1z_2\dots z_p)$, fixing all other classes in $T_i/E$ setwise. Now let $\gamma_i := (\gamma_i')^{k(F)!}$. Since $p$ is a prime greater than $k(F)$, it does not divide $k(F)!$, so $\gamma_i$ still permutes the classes $z_1,\dots,z_p$ cyclically; on the other hand, every $E$-class in $T_i\setminus T_0$ is preserved setwise by $\gamma_i'$ and has at most $k(F)$ elements, so it is fixed pointwise by $\gamma_i$. Hence $\gamma_i|_{T_0}$ is nontrivial, but $\gamma_i|_{T_i\setminus T_0}=\operatorname{id}_{T_i\setminus T_0}$. Since the permutations $\gamma_i$ have finitely many possible actions on the set $T_0$, we can assume, by choosing a subsequence if necessary, that $\gamma_i|_{T_0}$ is the same for all $i$. Then the permutations $\gamma_i$ converge to a permutation $\gamma$ for which $\gamma|_{S\cup T\setminus T_0}$ is trivial, but $\gamma|_{T_0}$ is not trivial. Since $G$ is closed it follows that $\gamma\in G$. This finishes the proof of the lemma. \end{proof} \begin{corollary}\label{gen_sym_2} Let $G \in \mathcal{G}_{{\Exp}{+}}$ be closed.
Suppose that $\Delta(G)$ is trivial and that $G$ stabilizes every class of $\nabla(G)$. Let $O_1,\dots,O_l$ be the orbits of $G$ (which are also the classes of $\nabla(G)$ by our assumption). Suppose that each $O_i$ is infinite, and that $X\setminus O_i$ is algebraically closed for all $i=1,\dots,l$. Then $G=\operatorname{Sym}(O_1)\times \dots \times \operatorname{Sym}(O_l)$. \end{corollary} \begin{proof} Direct consequence of Lemma \ref{gen_sym}. \end{proof} \begin{lemma}\label{extension_2} Let $G \in \mathcal{G}_{{\Exp}{+}}$ be closed and such that $\Delta(G)$ is trivial. Suppose that $G$ fixes every class of $\nabla(G)$ setwise. Let $O_1,\dots,O_m$ be the orbits of $G$ and suppose that each $O_i$ is infinite. Then $G=\operatorname{Sym}(O_1)\times \dots \times \operatorname{Sym}(O_m)$. \end{lemma} \begin{proof} Let $I \subseteq \{1,\dots,m\}$ be minimal so that $\operatorname{acl}_G(S)=X$ for $S :=\bigcup_{i\in I}{O_i}$. Without loss of generality we can assume that $I=\{1,\dots,l\}$ for some $l \leq m$. By Lemma \ref{closedness} the group $G|_S \in \mathcal{G}_{{\Exp}{+}}$ is closed. By the minimality of $I$ it follows that the sets $X\setminus O_i$, for $i \in \{1,\dots,l\}$, are algebraically closed with respect to $G$. By Corollary \ref{gen_sym_2} it follows that $G|_S=\operatorname{Sym}(O_1)\times \dots \times \operatorname{Sym}(O_l)$. If $l<m$ then Lemma \ref{extension} implies that $\Delta(G)$ is nontrivial. This means that $l=m$, and thus $G=G|_S=\operatorname{Sym}(O_1)\times \dots \times \operatorname{Sym}(O_m)$. \end{proof} In order to drop the condition that $G$ fixes every $\nabla(G)$-class we need the following observation about finite index subgroups of oligomorphic groups. \begin{proposition}\label{easy} Let $G$ be an oligomorphic permutation group on a countably infinite set $X$, and let $H$ be a finite-index subgroup of $G$. Then $$o^i_n(H) \leq [G:H] \cdot o^i_n(G).$$ In particular, $H$ is oligomorphic. 
\end{proposition} \begin{proof} Choose elements $\gamma_1,\dots,\gamma_{[G:H]}\in G$ such that $G=\bigcup_{i=1}^{[G:H]}{H\gamma_i}$. If the tuples $t_1,t_2,\dots,t_l$ represent all injective $n$-orbits of $G$, then the tuples $\gamma_it_j$ for $1\leq i\leq {[G:H]}$ and $1\leq j\leq l$ represent all injective $n$-orbits of $H$: every injective $n$-tuple can be written as $\beta\gamma_i(t_j)$ for some $\beta\in H$ and suitable $i,j$, and hence lies in the $H$-orbit of $\gamma_i t_j$. Therefore, $o^i_n(H)\leq [G:H] \cdot o^i_n(G)$. \end{proof} \begin{lemma}\label{finite_index_acl} Let $G$ be an oligomorphic permutation group on a countably infinite set $X$, and let $H$ be a finite-index subgroup of $G$. Then for all $x\in X$ it holds that $\operatorname{acl}_H(x)\subseteq \operatorname{acl}_G(x)$. \end{lemma} \begin{proof} By Proposition \ref{easy} the permutation group $H$ is oligomorphic. Let $x\in X$. First we prove that the index $[G_x:H_x]$ is finite. Choose $\gamma_1,\dots,\gamma_{[G:H]}\in G$ such that $G=\bigcup_{i=1}^{[G:H]}{H\gamma_i}$. Let $O$ be the orbit of $x$ with respect to $H$ and let $$I := \{ i \in \{1,\dots,[G:H]\} \mid \gamma_i(x)\in O\}.$$ For every $i\in I$ choose $\delta_i\in H$ such that $\delta_i\gamma_i(x)=x$. We claim that $G_x=\bigcup_{i\in I}{H_x\delta_i\gamma_i}$. The containment ``$\supseteq$'' is obvious. Now let $\alpha\in G_x$. Then $\alpha=\beta\gamma_i$ for some $\beta\in H$ and $i\in \{1,\dots, [G:H]\}$. If $\gamma_i(x)\not\in O$, then $\beta\gamma_i(x)\in H(X\setminus O)=X\setminus O$; in particular $\alpha(x)=\beta\gamma_i(x)\neq x$, a contradiction. Thus, $\gamma_i(x)\in O$ and $i\in I$. As $\alpha(x) = x$ we have \begin{align*} \beta\delta_i^{-1}(x) & = \alpha\gamma_i^{-1}\delta_i^{-1}(x) && \text{(since $\alpha = \beta \gamma_i$)} \\ & = \alpha(x) && \text{(since $\delta_i \gamma_i(x) = x$)} \\ & = x. \end{align*} This implies that $\beta\delta_i^{-1}\in H_x$ and therefore $\alpha = \beta\delta_i^{-1} \delta_i\gamma_i \in H_x\delta_i\gamma_i$. We have shown that $k := [G_x:H_x]$ is finite.
Choose elements $\gamma_1',\dots,\gamma_k'\in G_x$ such that $G_x=\bigcup_{i=1}^{k}{\gamma_i'H_x}$. Let $y\in \operatorname{acl}_H(x)$. By definition $H_x(y)$ is finite. Therefore, $$G_x(y)=\left(\bigcup_{i=1}^{k}{\gamma_i'H_x}\right)(y)=\bigcup_{i=1}^{k}{\gamma_i'H_x(y)}$$ is finite, that is, $y\in \operatorname{acl}_G(x)$. This proves that $\operatorname{acl}_H(x) \subseteq \operatorname{acl}_G(x)$. \end{proof} \begin{lemma}\label{finite_index_delta} Let $G$ be an oligomorphic permutation group on a countably infinite set $X$ and let $H$ be a finite-index subgroup of $G$. Then $\Delta(G)=\Delta(H)$. \end{lemma} \begin{proof} By Proposition~\ref{easy} the permutation group $H$ is oligomorphic. Clearly, $\Delta(G)$ is a congruence of $H$ with finite classes. Thus, $\Delta(G) \subseteq \Delta(H)$. Now let $(x,y)\in \Delta(H)$. Then $y\in \operatorname{acl}_H(x)$ and $x\in \operatorname{acl}_H(y)$ by Lemma \ref{delta_alt}. By Lemma \ref{finite_index_acl} this implies that $y\in \operatorname{acl}_G(x)$ and $x\in \operatorname{acl}_G(y)$. Again by Lemma \ref{delta_alt} we obtain $(x,y)\in \Delta(G)$, showing that $\Delta(G)=\Delta(H)$. \end{proof} \begin{theorem}\label{extension_3} Let $G \in \mathcal{G}_{{\Exp}{+}}$ be closed such that $\Delta(G)$ is trivial. Let $O_1,\dots,O_m$ be the classes of $\nabla(G)$. Then $\operatorname{Sym}(O_1)\times \dots \times \operatorname{Sym}(O_m) \subseteq G$. \end{theorem} \begin{proof}[Proof of Theorem \ref{extension_3}] Let $K$ be the kernel of the action of $G$ on $\{O_1,\dots,O_m\}$. Then $[G:K]$ is finite, and thus by Proposition \ref{easy} it follows that $K\in \mathcal{G}_{{\Exp}{+}}$. Without loss of generality we can assume that $O_1,\dots,O_l$ are the infinite orbits of $K$. Let $Y=O_1 \cup \dots \cup O_l$. Then $X\setminus Y$ is finite, and $K$ fixes each element in $X\setminus Y$.
By Lemma \ref{stabil_finite} it follows that the group $K|_{Y}$ is in $\mathcal{G}_{{\Exp}{+}}$. By Lemma \ref{finite_index_delta} it follows that $\Delta(K)$ is trivial, and thus $\Delta(K|_{Y})$ is also trivial. Moreover, $K|_{Y}$ fixes every class of $\nabla(K|_{Y})$ setwise and all orbits of $K|_{Y}$ are infinite. Hence, we can apply Lemma~\ref{extension_2} and obtain that $K|_{Y}=\operatorname{Sym}(O_{1})\times \dots \times \operatorname{Sym}(O_l)$. Since $\Delta(G)$ is trivial, every finite class of $\nabla(G)$ is a singleton (identifying the elements within the finite classes would otherwise yield a nontrivial congruence of $G$ with finite classes), so $\operatorname{Sym}(O_j)$ is trivial for $j>l$. Therefore, $\operatorname{Sym}(O_1)\times \dots \times \operatorname{Sym}(O_m) = K \subseteq G$. \end{proof} \begin{corollary}\label{extension_4} Let $\mathfrak{A}\in \mathcal{K}_{{\Exp}{+}}$ be such that $\Delta(\mathfrak{A})$ is trivial. Then $\mathfrak{A}\in R(\mathcal{U})$. \end{corollary} \begin{proof} Apply Theorem \ref{extension_3} to $\operatorname{Aut}(\mathfrak{A})$ and combine with Corollary~\ref{cor:unary_reduct-iff}. \end{proof} \subsection{The general case} \label{sect:general} \begin{lemma}\label{desc_kexpp} $\mathcal{K}_{{\Exp}{+}} \subseteq F(R(\mathcal{U}))$. \end{lemma} \begin{proof} Let $\mathfrak{A}\in \mathcal{K}_{{\Exp}{+}}$ and let us consider the factor mapping $\pi \colon \mathfrak{A}\rightarrow \mathfrak{B}$ where $\mathfrak{B} := \mathfrak{A}/\Delta(\mathfrak{A})$. If $u,v \in B^n$ are in different orbits of $\operatorname{Aut}(\mathfrak{B})$ then the tuples in $\pi^{-1}(u)$ and $\pi^{-1}(v)$ are in different orbits of $\operatorname{Aut}(\mathfrak{A})$. Moreover, if $u\in B^n$ is injective, then so is every tuple in $\pi^{-1}(u)$. This means that the number of injective $n$-orbits of $\mathfrak{B}$ is at most $o^i_n(\mathfrak{A})$ and thus $\mathfrak{B} \in \mathcal{K}_{{\Exp}{+}}$.
Then $\Delta(\mathfrak{B})$ must be trivial: otherwise, the preimage under $\pi$ of a nontrivial congruence of $\operatorname{Aut}(\mathfrak{B})$ with finite classes would be a congruence of $\operatorname{Aut}(\mathfrak{A})$ with finite classes that is strictly coarser than $\Delta(\mathfrak{A})$, contradicting the definition of $\Delta(\mathfrak{A})$. By Corollary \ref{extension_4} it then follows that $\mathfrak{B} \in R(\mathcal{U})$, and thus $\mathfrak{A} \in F(R(\mathcal{U}))$. \end{proof} The reverse containment holds as well. \begin{theorem}\label{main_kexpp} $\mathcal{K}_{{\Exp}{+}}=F(R(\mathcal{U}))=R^{<\infty}(F(\mathcal{U}^*))$. \end{theorem} \begin{proof} We already know that $ \mathcal{K}_{{\Exp}{+}} \subseteq F(R(\mathcal{U}))$ (Lemma~\ref{desc_kexpp}) and that $F(R(\mathcal{U})) \subseteq R^{<\infty}(F(\mathcal{U}^*))$ (see Remark \ref{fru_rffu}) and have to show that $R^{<\infty}(F(\mathcal{U}^*))\subseteq \mathcal{K}_{{\Exp}{+}}$. Proposition \ref{prop:easy} implies that $R^{<\infty}(\mathcal{K}_{{\Exp}{+}})=\mathcal{K}_{{\Exp}{+}}$. Therefore it is enough to show that $F(\mathcal{U}^*)\subseteq \mathcal{K}_{{\Exp}{+}}$. So let $\mathfrak{B}\in \mathcal{U}^*$ and let $\pi \colon \mathfrak{A}\rightarrow \mathfrak{B}$ be a finite covering. Lemma \ref{reduct_trivial2} shows that $\pi$ is strongly split. Therefore we can assume that $\pi$ is a strongly trivial covering map. It follows from the description of trivial coverings given in Remark~\ref{triv_cov_unary} that the orbit of an injective $n$-tuple $t=(t_1,\dots,t_n)$ of a trivial covering of a unary structure is uniquely determined by the orbits of $t_1,\dots,t_n$ and by the partition of the set $\{t_1,\dots,t_n\}$ defined by the congruence $\sim_{\pi}$. This means that the number of injective $n$-orbits of $\mathfrak{A}$ is at most $m^n \cdot p_k(n)$ where \begin{itemize} \item $m$ is the number of orbits of $\operatorname{Aut}(\mathfrak{A})$, \item $k$ is the maximal size of the classes of $\sim_{\pi}$, and \item $p_k(n)$ is the number of partitions of $\{1,\dots,n\}$ with parts of size at most $k$ (see Section~\ref{sect:growth}).
\end{itemize} Let us choose $d'$ and $d$ with $\frac{k-1}{k} < d' < d < 1$. Then by Lemma \ref{counting_2} we have $p_k(n) < c_1 n^{d'n}$ for some $c_1$. Thus $o^i_n(\mathfrak{A})\leq m^n c_1 n^{d'n} \leq c_2 n^{dn}$ for some $c_2$, since $m^n \leq n^{(d-d')n}$ for all large enough $n$. Therefore $\mathfrak{A}\in \mathcal{K}_{{\Exp}{+}}$. \end{proof} \begin{remark}\label{rem:triv-cover-interpretation} Recall from Proposition~\ref{reduct_trivial2} that every finite cover $\pi \colon \mathfrak{A} \to \mathfrak{B}$ for $\mathfrak{B} \in \mathcal{U}^*$ is strongly split, and hence all structures in $\mathcal{K}_{{\Exp}{+}} = R^{<\infty}(F(\mathcal{U}^*))$ have a first-order interpretation in $({\mathbb N};=)$ (Remark~\ref{rem:triv-cover-interpret}). Since $({\mathbb N};=)$ is $\omega$-stable and first-order interpretations preserve $\omega$-stability, it follows that all structures in $\mathcal{K}_{{\Exp}{+}}$ are $\omega$-stable. \end{remark} \subsection{Thomas' conjecture for the class $\mathcal{K}_{{\Exp}{+}}$} \label{sect:thomas} \begin{definition} Let $k,m\in \mathbb{N}$. Then $\mathcal{G}(k,m)$ denotes the class of those oligomorphic permutation groups $G$ for which the classes of $\Delta(G)$ have size at most $k$ and $\nabla(G/\Delta(G))$ has at most $m$ classes. Let $\mathcal{S}(k,m)$ be the class of all structures whose automorphism group is in $\mathcal{G}(k,m)$. \end{definition} \begin{lemma}\label{skm_classes} Let $k,m\in \mathbb{N}$. Let $\mathfrak{B}\in \mathcal{U}^*$, let $\pi \colon\mathfrak{A}\rightarrow \mathfrak{B}$ be a finite covering, and let $\mathfrak{C}$ be a quasi-covering reduct of $\mathfrak{A}$. Then $\mathfrak{C}\in \mathcal{S}(k,m)$ iff $\mathfrak{A}\in \mathcal{S}(k,m)$.
\end{lemma} \begin{proof} By definition $\Delta(\mathfrak{A})=\Delta(\mathfrak{C})={\sim_{\pi}}$, and $$\nabla(\operatorname{Aut}(\mathfrak{C})/\Delta(\mathfrak{C}))=\nabla(\operatorname{Aut}(\mathfrak{C})/\Delta(\mathfrak{A}))=\nabla(\operatorname{Aut}(\mathfrak{A})/\Delta(\mathfrak{A}))=\nabla(\mathfrak{B}).$$ \end{proof} \begin{lemma}\label{kexpp_limited} Let $k,m\in \mathbb{N}$. There are finitely many structures in $\mathcal{K}_{{\Exp}{+}}\cap \mathcal{S}(k,m)$ up to bi-definability. \end{lemma} \begin{proof} By Theorem \ref{main_kexpp} we know that $\mathcal{K}_{{\Exp}{+}}=(F\circ R)(\mathcal{U})$. Proposition~\ref{quasi_trivial} implies that every structure in $\mathcal{K}_{{\Exp}{+}}$ is a quasi-covering reduct of a finite covering structure of some structure in $\mathcal{U}^*$. By Lemma \ref{skm_classes}, this covering structure is then also in $\mathcal{S}(k,m)$. By Theorem \ref{quasi_cover2} we know that if $\mathfrak{A}$ is a trivial covering of some structure in $\mathcal{U}^*$, then it has finitely many quasi-covering reducts. Therefore it is enough to show that there are finitely many structures in $\mathcal{S}(k,m)$ up to bi-definability which are trivial covering structures of some structure in $\mathcal{U}^*$. Let $\mathfrak{B}\in \mathcal{U}^*$ and let $\pi \colon\mathfrak{A} \rightarrow \mathfrak{B}$ be a trivial finite covering map. Let $O_1,\dots,O_l$ be the orbits of $\mathfrak{B}$. Then $l\leq m$. Following Remark \ref{triv_cov_unary} we can assume without loss of generality that $A=\bigsqcup_{i=1}^l{F_i\times O_i}$ for some finite sets $F_i$, and $$\operatorname{Aut}(\mathfrak{A})=\prod_{i=1}^l{\operatorname{id}_{F_i}\wr \operatorname{Sym}(O_i)}.$$ Since $\sim_{\pi}$ is a congruence with finite classes it follows that $|F_i|\leq k$.
Then there are finitely many options for $l$, for the sizes of the orbits $O_i$ (they are all either one or infinite), and for the sizes of the sets $F_i$; and if we fix these parameters, then the group $\operatorname{Aut}(\mathfrak{A})$ is uniquely determined up to isomorphism. This implies that there are finitely many structures in $\mathcal{S}(k,m)$ up to bi-definability which are a trivial covering structure of a structure in $\mathcal{U}^*$. \end{proof} \begin{lemma}\label{kexpp_km} Let $\mathfrak{A}\in \mathcal{K}_{{\Exp}{+}}$ and let $\mathfrak{B} \in R(\mathfrak{A})$. Let $k$ be the size of the largest $\Delta(\mathfrak{A})$-class and let $m$ be the number of $\nabla(\mathfrak{A})$-classes. Then $\mathfrak{B}\in \mathcal{S}(k,m)$. \end{lemma} \begin{proof} If $R$ is a congruence of $\operatorname{Aut}(\mathfrak{B})$, then it is also a congruence of $\operatorname{Aut}(\mathfrak{A})$, since $\operatorname{Aut}(\mathfrak{A})\subseteq \operatorname{Aut}(\mathfrak{B})$. Therefore, the size of every class of $\Delta(\mathfrak{B})$ is at most $k$. Similarly, the number of $\nabla(\mathfrak{B})$-classes is at most the number of $\nabla(\mathfrak{A})$-classes. The number of $\nabla(\mathfrak{B})$-classes is an upper bound for the number of $\nabla(\mathfrak{B}/\Delta(\mathfrak{B}))$-classes. This proves the lemma. \end{proof} Lemmas \ref{kexpp_limited} and \ref{kexpp_km} immediately imply the following weak version of Thomas' conjecture for the class $\mathcal{K}_{{\Exp}{+}}$. \begin{theorem}\label{thomas_weak} Let $\mathfrak{A}\in \mathcal{K}_{{\Exp}{+}}$. Then $\mathfrak{A}$ has finitely many first-order reducts up to bi-definability. \end{theorem} The standard version of Thomas' conjecture can then be derived as follows. First we state an important well-known link between infinite descending chains of first-order reducts and infinite signatures.
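To make the finiteness argument of Lemma \ref{kexpp_limited} concrete, the following toy enumeration (our own illustration; the function name and the crude over-counting by ordered parameter vectors are not from the text) counts the possible parameter vectors $(l,\text{orbit sizes},|F_i|)$ for given $k$ and $m$, which bounds the number of relevant automorphism groups:

```python
from itertools import product

def count_parameter_vectors(k: int, m: int) -> int:
    """Upper bound on the number of (ordered) parameter vectors that
    determine Aut(A) for a trivial covering structure: a number of
    orbits l <= m, and for each orbit a size in {1, 'inf'} together
    with a fibre size |F_i| in {1, ..., k}."""
    total = 0
    for l in range(1, m + 1):
        # each of the l orbits independently gets a (size, fibre) option
        options_per_orbit = [(size, f) for size in (1, "inf")
                             for f in range(1, k + 1)]
        total += len(list(product(options_per_orbit, repeat=l)))
    return total
```

For example, with $k=2$ and $m=3$ there are $4 + 4^2 + 4^3 = 84$ ordered parameter vectors; the actual number of groups up to isomorphism is smaller still, since reordering the orbits does not change the group.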
We say that a structure $\mathfrak{B}$ has \emph{essentially infinite signature} if there does not exist a structure $\mathfrak{B}'$ with finite signature such that $\operatorname{Aut}(\mathfrak{B}) = \operatorname{Aut}(\mathfrak{B}')$. \begin{lemma}\label{lem:inf-sign} Let $\mathfrak{A}$ be an $\omega$-categorical structure. Then there exists an infinite sequence $\mathfrak{B}_1,\mathfrak{B}_2,\dots$ of first-order reducts of $\mathfrak{A}$ such that $\operatorname{Aut}(\mathfrak{B}_1) \supsetneq \operatorname{Aut}(\mathfrak{B}_2) \supsetneq \cdots$ if and only if $\mathfrak{A}$ has a reduct with essentially infinite signature. \end{lemma} \begin{proof} Assume that the reduct $\mathfrak{B} = (B;R_1,R_2,\dots)$ of $\mathfrak{A}$ has essentially infinite signature. Then for every $n \in {\mathbb N}$ the structures $\mathfrak{B}$ and $\mathfrak{B}_n := (B;R_1,\dots,R_n)$ are not first-order interdefinable. Moreover, for every $n \in {\mathbb N}$ there exists an $f(n) \in {\mathbb N}$ with $f(n)>n$ such that $\mathfrak{B}_n$ and $\mathfrak{B}_{f(n)}$ are not first-order interdefinable (otherwise, every relation in $\mathfrak{B}$ would be first-order definable in $\mathfrak{B}_n$, contradicting our assumptions). So $\mathfrak{B}_1,\mathfrak{B}_{f(1)},\mathfrak{B}_{f(f(1))},\dots$ provides an infinite strictly descending chain of first-order reducts of $\mathfrak{A}$. Suppose conversely that $\mathfrak{B}_1,\mathfrak{B}_2,\dots$ is an infinite strictly descending chain of first-order reducts of $\mathfrak{A}$. Define $\mathfrak{C}$ as the first-order reduct of $\mathfrak{A}$ whose relations are precisely the relations of all the $\mathfrak{B}_i$. Assume for contradiction that there exists a finite-signature structure $\mathfrak{C}'$ with $\operatorname{Aut}(\mathfrak{C}') = \operatorname{Aut}(\mathfrak{C})$. Let $i \in {\mathbb N}$ be such that all relations used in the definitions of the relations of $\mathfrak{C}'$ in $\mathfrak{C}$ already appear in the signature of $\mathfrak{B}_i$.
Then $\operatorname{Aut}(\mathfrak{B}_i) = \operatorname{Aut}(\mathfrak{C}') = \operatorname{Aut}(\mathfrak{C}) = \operatorname{Aut}(\mathfrak{B}_j)$ for all $j \geq i$, contradicting the assumption that $(\mathfrak{B}_i)_{i \in {\mathbb N}}$ is strictly decreasing. \end{proof} \begin{proposition}\label{prop:interdef-bidef} Let $\mathfrak{A}$ be an $\omega$-categorical structure. Then $\mathfrak{A}$ has finitely many first-order reducts up to interdefinability if and only if $\mathfrak{A}$ has finitely many first-order reducts up to bidefinability. \end{proposition} \begin{proof} If $\mathfrak{B}$ is a first-order reduct of $\mathfrak{A}$ with essentially infinite signature, then $\mathfrak{B}$ has an infinite strictly descending chain of first-order reducts by Lemma~\ref{lem:inf-sign}. Note that if $\operatorname{Aut}(\mathfrak{B}_1) \subsetneq \operatorname{Aut}(\mathfrak{B}_2)$ then for some $n$ there are strictly more orbits of $n$-tuples in $\operatorname{Aut}(\mathfrak{B}_1)$ than in $\operatorname{Aut}(\mathfrak{B}_2)$, so $\mathfrak{B}_1$ and $\mathfrak{B}_2$ are not bidefinable (if two reducts are bidefinable then they have the same number of orbits of $n$-tuples for all $n$). Hence $\mathfrak{B}$, and therefore also $\mathfrak{A}$, has infinitely many first-order reducts up to bi-definability as well as up to interdefinability, so the statement is trivially true in this case. Therefore it suffices to show that every first-order reduct $\mathfrak{B}$ of $\mathfrak{A}$ with finite signature is bidefinable with at most finitely many first-order reducts of $\mathfrak{A}$ up to interdefinability. The equivalence class of $\mathfrak{B}$ with respect to interdefinability is determined by its orbits of $n$-tuples, for some finite $n$ (since $\mathfrak{B}$ has finite signature), and thus the same holds for any structure which is bidefinable with $\mathfrak{B}$.
Since $\mathfrak{A}$ is $\omega$-categorical, there are finitely many orbits of $n$-tuples in $\mathfrak{A}$, which implies that there are finitely many first-order reducts of $\mathfrak{A}$ up to interdefinability that are bidefinable with $\mathfrak{B}$. \end{proof} \begin{theorem}\label{thomas_strong} Let $\mathfrak{A} \in \mathcal{K}_{{\Exp}{+}}$. Then $\mathfrak{A}$ has finitely many first-order reducts up to interdefinability. \end{theorem} \begin{proof} This follows from Theorem~\ref{thomas_weak} and Proposition~\ref{prop:interdef-bidef}. \end{proof} \begin{corollary} $\mathcal{K}_{{\Exp}{+}}$ contains countably many structures up to interdefinability. It contains no structure with essentially infinite signature. \end{corollary} \begin{proof} The first statement is implied by Lemma~\ref{kexpp_limited} in combination with Proposition~\ref{prop:interdef-bidef}. The second statement follows from Theorem~\ref{thomas_strong} and Lemma~\ref{lem:inf-sign}. \end{proof} \end{section} \subsection{Growth rates for partitions} \label{sect:growth} For $n,k \in {\mathbb N}$, let $p_k(n)$ be the number of partitions of the set $\{1,\dots,n\}$ with parts of size at most $k$; this is the Sloane integer sequence A229223. Asymptotic formulas for $p_k(n)$ are known for $k \in \{1,\dots,4\}$ (called \emph{allied Bell numbers} in a letter of John Riordan). We need an upper and a lower bound for all $k \in {\mathbb N}$. \begin{lemma}\label{counting} Let $k \in {\mathbb N}$ and $\varepsilon>0$. Then $p_k(n) \geq n^{(\frac{k-1}{k}-\varepsilon)n}$ if $n$ is large enough. \end{lemma} \begin{proof} Let $s_k(n)$ be the number of partitions of $\{1,\dots,kn\}$ where all the parts contain exactly $k$ elements. Clearly, $s_k(1)=1$ for all $k \in {\mathbb N}$. To form a partition of $\{1,\dots,kn\}$ for $n>1$ we first choose the class containing the number $kn$, and then we choose a partition of the remaining elements.
Hence, $s_k$ satisfies the recursion $$s_k(n)={{kn-1}\choose k-1} {s_{k}(n-1)}.$$ Since ${{kn-1}\choose k-1} \geq n^{k-1}$ we obtain by induction that $$s_k(n) \geq n^{k-1}(n-1)^{k-1} \cdots 2^{k-1}=(n!)^{k-1}.$$ Stirling's formula ($n! \sim \sqrt{2\pi n} (\frac{n}{e})^n$ for $n$ tending to infinity) implies that $$(n!)^{k-1} \geq n^{(k-1)(1-\varepsilon')n}$$ for any $\varepsilon'>0$ if $n$ is large enough. Hence, \begin{align*} p_k(n) & \geq s_k(\lfloor \frac{n}{k} \rfloor) \geq \lfloor \frac{n}{k} \rfloor^{(k-1)(1-\varepsilon')\lfloor \frac{n}{k} \rfloor} \\ & \geq \Bigl(\frac{n}{k}-1\Bigr)^{(k-1)(1-\varepsilon')(\frac{n}{k}-1)}\geq n^{(k-1)(1-\varepsilon')(1-\varepsilon'')\frac{1}{k}n}\geq n^{(\frac{k-1}{k}-\varepsilon)n} \end{align*} for appropriate choices of $\varepsilon',\varepsilon''>0$ if $n$ is large enough. \end{proof} \begin{lemma}\label{counting_2} Let $k \in {\mathbb N}$ and let $d>\frac{k-1}{k}$. Then there exists a constant $c$ such that $p_k(n) < cn^{dn}$ for all $n \in {\mathbb N}$. \end{lemma} \begin{proof} To form a partition of $\{1,\dots,n\}$ for $n>1$, we first choose the class containing the number $n$, and then we choose a partition of the remaining elements. We thus have the following recursion formula: \begin{align} p_k(n) =& \sum_{i=0}^{k-1}{{n-1\choose i}p_k(n-1-i)}. \label{recursion} \end{align} We claim that the following inequality holds if $n$ is large enough. \begin{align} \sum_{i=0}^{k-1}{{n-1\choose i}(n-1-i)^{d(n-1-i)}}< n^{dn}.\label{main_ineq} \end{align} In order to prove this it is enough to show that if $i\leq k-1$ and $n$ is large enough, then \begin{align}{n-1\choose i}(n-1-i)^{d(n-1-i)}<\frac{1}{k}n^{dn},\end{align} that is, \begin{align}{n-1\choose i}<\frac{1}{k}\bigg(\frac {n^{n}}{(n-1-i)^{n-1-i}}\bigg)^d.
\label{what_we_need} \end{align} We have \begin{align*}\frac{n^n}{(n-1-i)^{n-1-i}} & =\prod_{j=0}^{i}\frac{(n-j)^{n-j}}{(n-j-1)^{n-j-1}}\\ & =\prod_{j=0}^{i}\bigg((n-j)\Big(\frac{n-j}{n-j-1}\Big)^{n-j-1}\bigg) \\ & \geq \prod_{j=0}^{i}(n-j)=n(n-1) \cdots (n-i).\end{align*} This implies that in order to show Inequality (\ref{what_we_need}) it is enough to show that $${n-1\choose i}<\frac{1}{k}(n(n-1)\cdots (n-i))^d$$ if $n$ is large enough. Rearranging this inequality we obtain \begin{equation}\frac{1}{i!}((n-1) \cdots (n-i))^{1-d}<\frac{1}{k}n^d. \label{final_ineq} \end{equation} The LHS of Inequality~(\ref{final_ineq}) is asymptotically $\frac{1}{i!}n^{i(1-d)}$. By our assumption $d>\frac{k-1}{k}$, thus $i(1-d)<\frac{i}{k}\leq \frac{k-1}{k}<d$. This implies Inequality~(\ref{final_ineq}), and hence Inequality~(\ref{main_ineq}) if $n$ is large enough. Now let us choose an $N$ so that Inequality~(\ref{main_ineq}) holds for all $n> N$, and then let us choose a $c$ so that $p_k(n)< cn^{dn}$ holds for $n\leq N$. Then we show that $p_k(n) < cn^{dn}$ also holds for $n>N$ by induction on $n$. Suppose that we already know that $p_k(m) < cm^{dm}$ holds for all $m<n$. Then by using the recursion formula (\ref{recursion}) and Inequality~(\ref{main_ineq}) we obtain $$p_k(n)=\sum_{i=0}^{k-1}{{n-1\choose i}p_k(n-1-i)}<c\sum_{i=0}^{k-1}{{n-1\choose i}(n-1-i)^{d(n-1-i)}}<cn^{dn}.$$ \end{proof} \section{Preliminaries} If $\sim$ is an equivalence relation on $X$ and $x \in X$, then $[x]_\sim$ denotes the equivalence class of $x$ with respect to $\sim$, and $X/{\sim} := \{[x]_{\sim} \mid x \in X\}$ denotes the set of all $\sim$-classes. We write $|{\sim}|$ for $|X/{\sim}|$. If $\sim_1$ and $\sim_2$ are equivalence relations on $X$ then we say that $\sim_1$ is \emph{finer} than $\sim_2$ (or $\sim_2$ is \emph{coarser} than $\sim_1$) if $\sim_1$ is contained in $\sim_2$ (as binary relations).
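As a quick sanity check, not part of the development above, the recursion~(\ref{recursion}) for $p_k(n)$ can be verified numerically against direct enumeration of set partitions. The following Python sketch uses our own helper names (`p`, `p_bruteforce`), which do not appear in the text.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def p(k, n):
    # p_k(n) via the recursion p_k(n) = sum_{i=0}^{k-1} C(n-1, i) * p_k(n-1-i):
    # i elements are chosen to share the part of the element n.
    if n == 0:
        return 1
    return sum(comb(n - 1, i) * p(k, n - 1 - i) for i in range(min(k, n)))

def p_bruteforce(k, n):
    # Direct enumeration of all set partitions of {1,...,n}, keeping those
    # whose parts have size at most k.
    partitions = [[]]
    for x in range(1, n + 1):
        new = []
        for partition in partitions:
            for i in range(len(partition)):  # put x into an existing part
                new.append([part + ([x] if j == i else [])
                            for j, part in enumerate(partition)])
            new.append(partition + [[x]])    # or open a new part
        partitions = new
    return sum(1 for partition in partitions
               if all(len(part) <= k for part in partition))

# e.g. p(2, 4) == 10 (involutions of a 4-set), p(4, 4) == 15 (Bell number B_4)
assert all(p(k, n) == p_bruteforce(k, n) for k in range(1, 5) for n in range(8))
```

For $k \geq n$ the value $p_k(n)$ is the $n$-th Bell number, which provides a further independent check.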
\subsection{Permutation group notation} When $G$ is a group we write $H \leq G$ if $H$ is a subgroup of $G$, and $H \triangleleft G$ if $H$ is a normal subgroup of $G$. We write $[G:H]$ for the index of $H$ in $G$. For any set $X$ we write $\operatorname{Sym}(X)$ for the group of all permutations of $X$. If $G \leq \operatorname{Sym}(X)$ and $x\in X$ then $G_x$ denotes the \emph{stabiliser} of the element $x$. Let $Y\subseteq X$. Then \begin{itemize} \item $G_Y$ denotes the \emph{pointwise stabiliser} of the set $Y$, \item $G_{\{Y\}}$ denotes the \emph{setwise stabiliser} of the set $Y$, and \item $G|_{Y}$ denotes the restriction of $G$ to $Y$, provided that $Y$ is preserved by $G$. \end{itemize} If $Y$ is finite, say $Y=\{x_1,\dots,x_n\}$, then we also use the notation $G_{x_1,\dots,x_n}$ for the pointwise stabiliser of the set $Y$. Let $G$ be a permutation group on $X$. An \emph{orbit} of $G$ is a set of the form $\{g(x) \mid g \in G\}$ for some $x \in X$. The \emph{algebraic closure} of $Y \subseteq X$ with respect to $G$ is the union of the finite orbits of $G_Y$, and it is denoted by $\operatorname{acl}_G(Y)$. If $x\in X$, then we use the notation $\operatorname{acl}_G(x)$ instead of $\operatorname{acl}_G(\{x\})$. \blue{It is well-known that $\operatorname{acl}_G$ is a closure operator on the subsets of $X$, and in particular we have $\operatorname{acl}_G(\operatorname{acl}_G(Y)) = \operatorname{acl}_G(Y)$ for all $Y \subseteq X$.} If the group $G$ is clear from the context, then we will omit the subscript from this notation. An equivalence relation $\sim$ on $X$ is called a \emph{congruence} of a permutation group $G \subseteq \operatorname{Sym}(X)$ if $x\sim y$ implies $g(x)\sim g(y)$ for all $x,y\in X$ and $g\in G$. In other words, an equivalence relation $\sim$ is a congruence of $G$ if the corresponding partition is $G$-invariant. If $\sim$ is a congruence of some permutation group $G \subseteq \operatorname{Sym}(X)$ then $G$ acts naturally on $X/{\sim}$.
The image of this action, as a subgroup of $\operatorname{Sym}(X/{\sim})$, is denoted by $G/{\sim}$. \begin{definition} Let $\pi \colon A \to B$ be a map. We write $\sim_\pi$ for the equivalence relation $\{(a_1,a_2) \mid \pi(a_1) = \pi(a_2)\}$ on $A$. If $G$ is a permutation group on $A$ such that $\sim_\pi$ is a congruence of $G$, then $\pi$ gives rise to a homomorphism $\mu_\pi \colon G \to \operatorname{Sym}(B)$ defined by $\mu_\pi(g)(a) := \pi(g(\pi^{-1}(a)))$ (this is well-defined since $G$ preserves $\sim_\pi$). \end{definition} \subsection{Orbit growth and some classes of structures} Let $X$ be a countably infinite set. There are three natural counting sequences attached to a permutation group on $X$, introduced and discussed in general in~\cite{CameronCounting,Oligo}. \begin{definition} Let $G \subseteq \operatorname{Sym}(X)$ be a permutation group and let $n\in \mathbb{N}$. Then \begin{itemize} \item $o_n(G)$ denotes the \emph{number of $n$-orbits of $G$}, i.e., the number of orbits of the natural action $G\curvearrowright X^n$, \item $o^i_n(G)$ denotes the \emph{number of injective $n$-orbits of $G$}, i.e., the number of orbits of the natural action $G\curvearrowright X^{(n)}$, where $X^{(n)} := \{(x_1,\dots,x_n)\in X^n \mid x_i\neq x_j \text{ for all } i \neq j\}$, \item $o^s_n(G)$ denotes the \emph{number of orbits of $n$-subsets of $G$}, i.e., the number of orbits of the natural action $G\curvearrowright {X\choose n}$, where ${X\choose n} := \{Y\subseteq X \mid |Y|=n\}$. \end{itemize} If $\mathfrak{A}$ is a structure then let $$o_n(\mathfrak{A}):=o_n(\operatorname{Aut}(\mathfrak{A})),\,o^i_n(\mathfrak{A}):=o^i_n(\operatorname{Aut}(\mathfrak{A})),\,o^s_n(\mathfrak{A}):=o^s_n(\operatorname{Aut}(\mathfrak{A})).$$ In the notation above we omit the reference to the group $G$ or the structure $\mathfrak{A}$ if it is clear from the context. \end{definition} A permutation group $G$ is called \emph{transitive} if $o_1(G) = 1$ and \emph{highly transitive} if $o^i_n(G) = 1$ for all $n \in {\mathbb N}$.
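For finite permutation groups the three counting sequences can be computed by brute force. The following Python sketch (our own illustration; the group and all helper names are ours, not from the text) computes $o_2$, $o^i_2$, and $o^s_2$ for the cyclic group $C_4$ of rotations acting on $\{0,1,2,3\}$.

```python
from itertools import product, permutations, combinations

def orbits(group, points):
    # Count orbits of the componentwise action of `group` (a list of
    # lookup tables) on the given collection of tuples or frozensets.
    seen, count = set(), 0
    for x in points:
        if x in seen:
            continue
        count += 1
        for g in group:
            image = (frozenset(g[e] for e in x) if isinstance(x, frozenset)
                     else tuple(g[e] for e in x))
            seen.add(image)
    return count

X = range(4)
# C4: the cyclic group of rotations of Z/4, each element as a lookup table.
C4 = [tuple((i + s) % 4 for i in X) for s in range(4)]

o2 = orbits(C4, product(X, repeat=2))                         # orbits on pairs
oi2 = orbits(C4, permutations(X, 2))                          # injective pairs
os2 = orbits(C4, (frozenset(c) for c in combinations(X, 2)))  # 2-subsets
# o2 == 4 (the invariant is y - x mod 4), oi2 == 3, os2 == 2
```

The three values differ, illustrating that the three sequences count genuinely different objects.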
\begin{definition}\label{def:oligo} A permutation group $G\subseteq \operatorname{Sym}(X)$ is called \emph{oligomorphic} if $o_n(G)$ is finite for all $n$. \end{definition} Clearly, in Definition~\ref{def:oligo} we could have equivalently required that $o^i_n$ or $o^s_n$ are finite for all $n$. By the theorem of Engeler, Ryll-Nardzewski, and Svenonius, a countably infinite relational structure $\mathfrak{A}$ is $\omega$-categorical if and only if $\operatorname{Aut}(\mathfrak{A})$ is oligomorphic (see for instance~\cite{Hodges}). In this paper we are particularly interested in the following classes of structures and permutation groups. \begin{definition} \begin{itemize} \item Let $\mathcal{G}_{\Exp}$ denote the class of those permutation groups $G$ acting on a countable set $X$ for which there is a constant $c$ such that $o^i_n(G)\leq c^n$ for all $n \in {\mathbb N}$. \item Let $\mathcal{K}_{\Exp}$ denote the class of all countable structures $\mathfrak{A}$ with an automorphism group in $\mathcal{G}_{\Exp}$. \item Let $\mathcal{G}_{{\Exp}{+}}$ denote the class of those permutation groups $G$ acting on a countable set $X$ for which there are constants $c$ and $d<1$ such that $o^i_n(G)\leq cn^{dn}$ for all $n \in {\mathbb N}$. \item Let $\mathcal{K}_{{\Exp}{+}}$ denote the class of all countable structures $\mathfrak{A}$ with an automorphism group in $\mathcal{G}_{{\Exp}{+}}$. \end{itemize} \end{definition} \begin{remark} Note that the conditions $G\in \mathcal{G}_{\Exp}$ and $G\in \mathcal{G}_{{\Exp}{+}}$ imply that $G$ is oligomorphic, and therefore $\mathfrak{A}\in \mathcal{K}_{\Exp}$ and $\mathfrak{A}\in \mathcal{K}_{{\Exp}{+}}$ imply that $\mathfrak{A}$ is $\omega$-categorical. \end{remark} We write $\mathbb{N}$ not only for the set of natural numbers, but also for the structure with the empty signature whose domain is $\mathbb{N}$.
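As an illustration of membership in $\mathcal{G}_{\Exp}$ (our own example, not taken from the text): if $G$ is the automorphism group of a structure with a single unary predicate whose two classes are infinite, an injective $n$-orbit is determined by the induced colour word, so $o^i_n(G) = 2^n$. On a finite approximation with two colour classes of size $3$, this can be checked by brute force for $n \leq 3$:

```python
from itertools import permutations

# Points 0,1,2 have colour 'a'; points 3,4,5 have colour 'b'.
# G = Sym({0,1,2}) x Sym({3,4,5}), each element as a lookup table on {0,...,5}.
G = [dict(enumerate(list(p) + [q + 3 for q in r]))
     for p in permutations(range(3)) for r in permutations(range(3))]

def injective_orbits(n):
    # Brute-force count of orbits of G on injective n-tuples.
    seen, count = set(), 0
    for x in permutations(range(6), n):
        if x in seen:
            continue
        count += 1
        for g in G:
            seen.add(tuple(g[e] for e in x))
    return count

# For n small compared to both colour classes, the count equals 2^n.
```

Since $2^n \leq c^n$ with $c = 2$, such a group lies in $\mathcal{G}_{\Exp}$ (but the growth is too fast for $\mathcal{G}_{{\Exp}{+}}$).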
\begin{definition} We write \begin{itemize} \item $\mathcal{S}$ for the class of all at most countable structures that are first-order interdefinable with a structure having the empty signature; \item $\mathcal{U}$ for the class of at most countable structures that are first-order interdefinable with a structure having a finite signature of unary relation symbols; \item $\mathcal{U}^*$ for the class of the structures $\mathfrak{A}\in \mathcal{U}$ such that every orbit of $\operatorname{Aut}(\mathfrak{A})$ is either a singleton or infinite. \end{itemize} \end{definition} When $\mathcal C$ is a class of structures, we write $\mathcal C_{\nf}$ for the class that contains the structures in $\mathcal C$ that have no finite orbits. Note that $\mathbb{N} \in \mathcal{S} \subset \mathcal{U}_{\nf} \subset \mathcal{U}^* \subset \mathcal{U}$ and that $(\mathcal{U}^*)_{\nf} = \mathcal{U}_{\nf}$. \subsection{Congruences of oligomorphic groups} We need the following easy observation about oligomorphic groups. \begin{proposition}\label{fin_many_cong} Every oligomorphic permutation group has finitely many congruences. \end{proposition} \begin{proof} Every congruence of a permutation group is a union of its $2$-orbits. Since an oligomorphic permutation group has only finitely many $2$-orbits, there are only finitely many such unions, and the claim follows. \end{proof} \begin{lemma}\label{acl1} Let $G$ be an oligomorphic permutation group, and let $\sim$ be a congruence of $G$ which has finite equivalence classes. Then $a\sim b$ implies $b\in \operatorname{acl}_G(a)$. \end{lemma} \begin{proof} Suppose that $a\sim b$, but $b\not\in \operatorname{acl}_G(a)$. Then the orbit of $b$ in $G_{a}$ is infinite. Let $b'$ be any element in this orbit, say $b'=g(b)$ for some $g\in G_a$. Since $\sim$ is a congruence and $g(a)=a$, we get $a\sim b'$. Hence the equivalence class of $a$ is infinite, a contradiction. \end{proof} If $\sim_1$ and $\sim_2$ are congruences, then the inclusion-wise smallest congruence relation that contains both $\sim_1$ and $\sim_2$ is called the equivalence relation \emph{generated} by $\sim_1$ and $\sim_2$.
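On a finite set, the equivalence relation generated by $\sim_1$ and $\sim_2$ is the transitive closure of $\sim_1 \cup \sim_2$, and can be computed with a standard union-find pass. The following sketch (with our own function name; not part of the text) illustrates this.

```python
def generated_equivalence(n, rel1, rel2):
    # Smallest equivalence relation on {0,...,n-1} containing the pairs
    # in rel1 and rel2, computed with union-find; returns its classes.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in list(rel1) + list(rel2):
        parent[find(a)] = find(b)
    classes = {}
    for x in range(n):
        classes.setdefault(find(x), []).append(x)
    return sorted(classes.values())

# ~1 glues 0~1 and 2~3, ~2 glues 1~2; the generated relation has the
# classes [[0, 1, 2, 3], [4]]:
# generated_equivalence(5, [(0, 1), (2, 3)], [(1, 2)])
```

This also makes the chain description used in the proof of Lemma~\ref{acl3} concrete: each merge corresponds to one step $a_i \sim_1 b_i$ or $b_i \sim_2 a_{i+1}$.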
\begin{lemma}\label{acl3} Let $G$ be an oligomorphic permutation group, and let $\sim_1$ and $\sim_2$ be congruences of $G$ with finite classes. Then the congruence generated by $\sim_1$ and $\sim_2$ also has finite classes. \end{lemma} \begin{proof} Let $\sim$ be the congruence generated by $\sim_1$ and $\sim_2$, and suppose that $a\sim b$. Then there exists a sequence $a_0=a,\,b_0,a_1,b_1,\dots, a_k,b_k=b$ such that $a_i\sim_1 b_i$ and $b_i\sim_2 a_{i+1}$ for all $i$. Then by Lemma~\ref{acl1} this implies $b_i\in \operatorname{acl}_G(a_i)$ and $a_{i+1}\in \operatorname{acl}_G(b_i)$ for all $i$. Since $\operatorname{acl}_G$ is a closure operator it follows that $b\in \operatorname{acl}_G(a)$. In particular, the equivalence class of $a$ is contained in $\operatorname{acl}_G(a)$, and hence finite. \end{proof} \begin{definition} Let $G$ be an oligomorphic permutation group. Then \begin{itemize} \item $\nabla(G)$ denotes the intersection of all congruences of $G$ with finitely many classes, \item $\Delta(G)$ denotes the smallest congruence that contains all congruences of $G$ with finite classes. \end{itemize} If $\mathfrak{A}$ is an $\omega$-categorical structure, then we use the notation $\nabla(\mathfrak{A}):=\nabla(\operatorname{Aut}(\mathfrak{A}))$, and $\Delta(\mathfrak{A}):=\Delta(\operatorname{Aut}(\mathfrak{A}))$. \end{definition} \begin{remark} Since $G$ has finitely many congruences it follows that $\nabla(G)$ also has finitely many classes, i.e., it is the \emph{finest} congruence of $G$ with finitely many classes. By Lemma~\ref{acl3} it follows that every class of $\Delta(G)$ is finite, i.e., $\Delta(G)$ is the \emph{coarsest} congruence of $G$ with finite classes. \end{remark} \begin{remark} If $x$ and $y$ are in the same orbit, then their $\Delta$-classes have the same size. If $G$ has finitely many orbits, it follows that there exists some $n \in {\mathbb N}$ such that all elements lie in a $\Delta$-class of size at most $n$.
\end{remark} \blue{The congruence $\Delta$ has the following equivalent description.} \begin{lemma}\label{delta_alt} Let $G$ be an oligomorphic permutation group on a countably infinite set $X$. Then $(x,y)\in \Delta(G)$ iff $y\in \operatorname{acl}_G(x)$ and $x\in \operatorname{acl}_G(y)$. \end{lemma} \begin{proof} Let $\Delta'(G)=\{(x,y) \mid y\in \operatorname{acl}_G(x) \wedge x\in \operatorname{acl}_G(y)\}$. We claim that $\Delta'(G)$ is an equivalence relation. It is clear that $\Delta'(G)$ is reflexive and symmetric. The transitivity follows from the fact that $\operatorname{acl}_G$ is a closure operator. It is also clear from the definition that $\Delta'(G)$ is preserved by $G$. Hence, $\Delta'(G)$ is a congruence. For any $x\in X$ we have $[x]_{\Delta'(G)}\subseteq \operatorname{acl}_G(x)$, so every class of $\Delta'(G)$ is finite. Therefore $\Delta'(G)$ is finer than $\Delta(G)$. On the other hand, if $(x,y)\in \Delta(G)$, then $y\in \operatorname{acl}_G(x)$ and $x\in \operatorname{acl}_G(y)$ by Lemma~\ref{acl1}, and thus $(x,y)\in \Delta'(G)$. \end{proof} We use the following observation throughout this text. \begin{lemma}\label{sing_or_inf} Let $G$ be an oligomorphic permutation group. Then every class of $\nabla(G)$ is either infinite or a singleton. \end{lemma} \begin{proof} If the class of $x \in X$ is finite, then its orbit $O$ is also finite. Indeed, since $\nabla(G)$ is a congruence, every class of $\nabla(G)$ that intersects $O$ is the image of $[x]_{\nabla(G)}$ under some element of $G$ and hence has the same finite size; since $\nabla(G)$ has finitely many classes, $O$ is contained in a finite union of finite classes, and is therefore finite. Let $X_{\fin}$ be the union of the finite orbits of $G$. By oligomorphicity it follows that $X_{\fin}$ is finite. Then $\nabla':=\{(x,y)\in \nabla(G) \mid x,y\notin X_{\fin}\}\cup \{(x,x) \mid x\in X\}$ is also a congruence of $G$. Since $X_{\fin}$ is finite the congruence $\nabla'$ has finitely many classes. This implies that $\nabla'=\nabla(G)$, and thus every class of $\nabla(G)$ within $X_{\fin}$ is a singleton.
\end{proof} \begin{lemma}\label{delta_union_nabla} Let $G$ be an oligomorphic permutation group on $X$ and let $\sim$ be a congruence of $G$ with finite classes. Then the congruence generated by $\sim$ and $\nabla(G)$ equals $\big\{(x,y) \in X^2 \mid ([x]_\sim, [y]_\sim) \in \nabla(G/{\sim})\big\}$. \end{lemma} \begin{proof} If $\pi \colon X\rightarrow X/{\sim}$ is the factor map $x \mapsto [x]_\sim$, and $\approx$ is a congruence of $G/{\sim}$, then $$\pi^{-1}(\approx) := \{(x,y)\in X^2 \mid (\pi(x),\pi(y))\in {\approx}\}$$ is a congruence of $G$ which is coarser than $\sim$. In fact, $\pi^{-1}$ defines a bijection between the congruences of $G/{\sim}$ and those congruences of $G$ which are coarser than $\sim$. The congruence $\pi^{-1}(\nabla(G/{\sim}))$ has finitely many classes since $\nabla(G/{\sim})$ has finitely many classes. Hence $\pi^{-1}(\nabla(G/{\sim}))$ is the finest congruence of $G$ that is coarser than $\sim$ and has finitely many classes. So, by definition, it equals the congruence generated by $\sim$ and $\nabla(G)$. \end{proof} \subsection{Direct products} \label{sect:prod} Let $I$ be a set. For each $i \in I$, let $A_i$ be a group. Then $\prod_{i \in I} A_i$ denotes the direct product of the $A_i$; i.e., the elements have the form $(a_i)_{i \in I}$ for $a_i \in A_i$, and group composition is defined point-wise. When the $A_i$ are permutation groups on disjoint sets $X_i$ for every $i \in I$, then $A := \prod_{i \in I} A_i$ acts naturally (intransitively) on $X := \bigcup_{i \in I} X_i$ as follows: for $\alpha \in A$ and $x \in X$, define $\alpha(x) := \alpha_i(x)$ if $x \in X_i$. It is easy to see that if each of the $A_i$ is closed in $\operatorname{Sym}(X_i)$, then the permutation group defined by the action of $A$ on $X$ is closed in $\operatorname{Sym}(X)$, and hence equals the automorphism group of some relational structure with domain $X$. \subsection{Semidirect products} Let $H$ and $N$ be groups and $\theta \colon H \to \operatorname{Aut}(N)$ a homomorphism.
As usual, the \emph{semidirect product of $N$ by $H$ (with respect to $\theta$)}, denoted by $N \rtimes H$ (or $H \ltimes N$), is the group $G$ whose underlying set is $N \times H$ and whose multiplication is defined by $(u,x) (v,y) := (u \theta(x)(v),xy)$ for all $(u,x),(v,y) \in G$. Recall that $H^* := \{(1,x) \mid x \in H\}$ and $N^* := \{(u,1) \mid u \in N\}$ are subgroups of $G$ that are isomorphic to $H$ and to $N$, respectively, that $N^*$ is a normal subgroup, and that $G = N^*H^*$ and $N^* \cap H^* = \{1\}$. Conversely, if $N$ is a normal subgroup of $G$, $G = NH$, and $N \cap H = \{1\}$, then $G$ is isomorphic to the semidirect product $N \rtimes H$ with respect to the action of $H$ on $N$ by conjugation in $G$; in this case $G$ is called a \emph{split extension} of $N$ by $H$. \subsection{Wreath products} \label{sect:wr} Let $A$ be a group acting on the set $F$, and let $Y$ be a set. Let $H$ be a group acting on $Y$ and let $X:=F\times Y$. Then there are natural actions of the groups $N := \prod_{y\in Y}A$ and $H$ on the set $X$, defined as follows. \begin{enumerate} \item If $\alpha\in N$ and $(f,y)\in X$, then $\alpha(f,y) := (\alpha_y(f),y)$, \item If $\beta\in H$ and $(f,y)\in X$, then $\beta(f,y) := (f,\beta(y))$. \end{enumerate} Let $G$ be the smallest permutation group that contains the permutation groups on $X$ induced by the actions of $N$ and of $H$ on $X$; we view $N$ and $H$ as subsets of $G$. If $\alpha\in N$ and $\beta\in H$, then $$\beta^{-1}\alpha\beta(f,y)=\beta^{-1}\alpha(f,\beta(y))=\beta^{-1}(\alpha_{\beta(y)}(f),\beta(y))=(\alpha_{\beta(y)}(f),y)$$ so $\beta^{-1} \alpha \beta \in N$ and $N \triangleleft G$. Then $G = NH$ and $N \cap H = \{\operatorname{id}_X\}$. Hence, the group $G$ can be written as the semidirect product $\bigl(\prod_{y\in Y}A\bigr) \rtimes H$. The group $G$ is called the \emph{wreath product} of the groups $A$ and $H$ (with its canonical imprimitive action on $X$) and will be denoted by $A\wr H$.
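The conjugation identity $\beta^{-1}\alpha\beta(f,y)=(\alpha_{\beta(y)}(f),y)$ above can be checked on a small finite instance. The following sketch (with our own encoding of elements of ${\mathbb Z}_2 \wr \operatorname{Sym}(\{0,1,2\})$ as tuples and dictionaries) verifies it for one choice of $\alpha$ and $\beta$.

```python
# A = Z_2 acting on F = {0, 1}; H = Sym(Y) for Y = {0, 1, 2}.
# An element of N = prod_{y in Y} A is a tuple alpha with alpha[y] in {0, 1}.

def act_N(alpha, point):   # alpha(f, y) = (alpha_y(f), y)
    f, y = point
    return ((f + alpha[y]) % 2, y)

def act_H(beta, point):    # beta(f, y) = (f, beta(y))
    f, y = point
    return (f, beta[y])

alpha = (1, 0, 1)          # flip the fibres over y = 0 and y = 2
beta = {0: 1, 1: 2, 2: 0}  # a 3-cycle on Y
beta_inv = {v: k for k, v in beta.items()}

def conj(pt):              # beta^{-1} alpha beta, applying beta first
    return act_H(beta_inv, act_N(alpha, act_H(beta, pt)))

# The conjugate acts as the element alpha' of N with alpha'_y = alpha_{beta(y)}:
alpha_prime = tuple(alpha[beta[y]] for y in range(3))
points = [(f, y) for f in (0, 1) for y in range(3)]
assert all(conj(pt) == act_N(alpha_prime, pt) for pt in points)
```

So conjugating an element of $N$ by $\beta$ simply permutes its coordinates along $\beta$, which is exactly why $N$ is normal in $A \wr H$.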
\subsection{Interdefinability, bi-definability, bi-interpretability} \blue{We write $A$, $B$, $C$ for the domains of the structures $\mathfrak{A}$, $\mathfrak{B}$, $\mathfrak{C}$, respectively.} If $G$ is a set of permutations on a set $A$ then $\operatorname{Inv}(G)$ denotes the relational structure $\mathfrak{A}$ with domain $A$ which carries all relations that are preserved by all permutations of $G$. The operations $\operatorname{Aut}$ and $\operatorname{Inv}$ form a Galois connection between the set of all relational structures $\mathfrak{A}$ with domain $A$ and the set of sets of permutations $G$ on $A$ (see, e.g.,~\cite{Bodirsky-HDR}). The permutation group $\operatorname{Aut}(\operatorname{Inv}(G))$ is the smallest permutation group containing $G$ that is \emph{closed} in $\operatorname{Sym}(A)$ with respect to the topology of pointwise convergence. This topology is the restriction of the product topology on $A^A$ where $A$ is taken to be discrete. A permutation group $G$ on $A$ is closed in $\operatorname{Sym}(A)$ if and only if $G$ is the automorphism group of a relational structure. If $\mathfrak{A}$ is $\omega$-categorical, then the structure $\operatorname{Inv}(\operatorname{Aut}(\mathfrak{A}))$ is the expansion of $\mathfrak{A}$ by all relations that can be defined by a first-order formula in $\mathfrak{A}$ (this is a consequence of the proof of the theorem of Ryll-Nardzewski; see~\cite{Hodges}). It follows that $\operatorname{Aut}(\mathfrak{A}) \subseteq \operatorname{Aut}(\mathfrak{A}')$ if and only if all relations of $\mathfrak{A}'$ are first-order definable in $\mathfrak{A}$; in this case we say that $\mathfrak{A}'$ is a \emph{first-order reduct} of $\mathfrak{A}$. Two structures on the same domain are called \emph{interdefinable} if they are first-order reducts of one another.
By the above, if $\mathfrak{A}$ or $\mathfrak{A}'$ is $\omega$-categorical, then $\mathfrak{A}$ and $\mathfrak{A}'$ are interdefinable if and only if $\operatorname{Aut}(\mathfrak{A}) = \operatorname{Aut}(\mathfrak{A}')$. Two structures $\mathfrak{A}$ and $\mathfrak{B}$, not necessarily with the same domain, are called \emph{bi-definable} if there exists a bijection $f \colon A \to B$ between the domains of $\mathfrak{A}$ and $\mathfrak{B}$ such that $\mathfrak{A}$ and $\mathfrak{B}$ are interdefinable after identifying $A$ and $B$ along $f$. It follows that two $\omega$-categorical structures $\mathfrak{A}$ and $\mathfrak{B}$ are bi-definable if and only if $\operatorname{Aut}(\mathfrak{A})$ and $\operatorname{Aut}(\mathfrak{B})$ are isomorphic as permutation groups. We give an example of two structures with the same domain that are bi-definable but not interdefinable. \begin{example} \blue{ The structures $({\mathbb Z}; \{0\})$ and $({\mathbb Z}; \{1\})$ are bi-definable, but not interdefinable. } \end{example} A \emph{($d$-dimensional) interpretation} of $\mathfrak{A}$ in $\mathfrak{B}$ is a partial surjective map $I$ from $B^d$ to $A$ such that the pre-image of $A$, of the equality relation on $A$, and of each relation of $\mathfrak{A}$ under $I$ is first-order definable in $\mathfrak{B}$. If $\mathfrak{A}$ has a $d$-dimensional first-order interpretation $I$ in $\mathfrak{B}$ and $\mathfrak{B}$ has an $e$-dimensional first-order interpretation $J$ in $\mathfrak{A}$ such that the relation $\{(x,y_{1,1},\dots,y_{d,e}) \mid x = J(I(y_{1,1},\dots,y_{d,1}),\dots,I(y_{1,e},\dots,y_{d,e}))\}$ is first-order definable in $\mathfrak{B}$ and $\{(x,y_{1,1},\dots,y_{d,e}) \mid x = I(J(y_{1,1},\dots,y_{1,e}),\dots,J(y_{d,1},\dots,y_{d,e}))\}$ is first-order definable in $\mathfrak{A}$, then $\mathfrak{A}$ and $\mathfrak{B}$ are called \emph{bi-interpretable}.
By a result of Coquand, Ahlbrandt, and Ziegler~\cite{AhlbrandtZiegler}, two $\omega$-categorical structures $\mathfrak{A}$ and $\mathfrak{B}$ are bi-interpretable if and only if $\operatorname{Aut}(\mathfrak{A})$ and $\operatorname{Aut}(\mathfrak{B})$ are \emph{topologically isomorphic}, i.e., isomorphic via a mapping which is a homeomorphism with respect to the pointwise convergence topology. \subsection{Finite covers} We now introduce the concept of finite covers that plays a central role in this article. Forming finite covers may be viewed as a way to construct new $\omega$-categorical structures from known ones; more appropriately, it may be viewed as a way to decompose $\omega$-categorical structures into (hopefully) simpler parts. \begin{definition}\label{finite_def} Let $\mathfrak{A}$ and $\mathfrak{B}$ be structures. A mapping $\pi \colon \mathfrak{A}\rightarrow \mathfrak{B}$ is called a \emph{finite covering map} (or \emph{finite cover}) if \begin{enumerate} \item $\pi$ is surjective, \item for each $w\in B$ the set $\pi^{-1}(w)$ is finite, \item $\sim_\pi$ is preserved by $\operatorname{Aut}(\mathfrak{A})$, \item the image of $\operatorname{Aut}(\mathfrak{A})$ under $\mu_\pi$ equals $\operatorname{Aut}(\mathfrak{B})$. \end{enumerate} The sets $\pi^{-1}(w)$, for $w\in B$, are called the \emph{fibers} of the finite covering map $\pi$. A structure $\mathfrak{A}$ is called a \emph{finite covering structure of $\mathfrak{B}$} if there is a finite covering map $\pi \colon \mathfrak{A}\rightarrow \mathfrak{B}$. \end{definition} \begin{remark} A finite covering structure of an $\omega$-categorical structure has an oligomorphic automorphism group, and hence is $\omega$-categorical. \end{remark} \begin{remark}\label{rem:congr} Let $\mathfrak{A}$ be an arbitrary structure and let $\sim$ be a congruence of $\operatorname{Aut}(\mathfrak{A})$.
If all $\sim$-equivalence classes are finite, then $\mathfrak{A}$ is a finite covering structure of $\mathfrak{A}/{\sim}$, where $\mathfrak{A}/{\sim}$ can be any structure such that $\operatorname{Aut}(\mathfrak{A}/{\sim}) = \operatorname{Aut}(\mathfrak{A})/{\sim}$. In fact, every finite covering structure is of this form. Indeed, let $\mathfrak{A}$ be a structure. If $\pi \colon \mathfrak{A}\rightarrow \mathfrak{B}$ is a finite covering map, then $\sim_{\pi}$ is a congruence of $\operatorname{Aut}(\mathfrak{A})$ and there is a natural bijection between $B$ and $A/{\sim}_{\pi}$ defined by $w \mapsto \pi^{-1}(w)\in A/{\sim}_{\pi}$. Let us identify $B$ and $A/{\sim}_{\pi}$ along this bijection, and let $\mathfrak{A}/{\sim}_{\pi}$ be any structure such that $\operatorname{Aut}(\mathfrak{A}/{\sim}_{\pi}) = \operatorname{Aut}(\mathfrak{B})$. The image of $\operatorname{Aut}(\mathfrak{A})$ under the homomorphism $\mu_\pi$ equals $\operatorname{Aut}(\mathfrak{B})$, hence $\operatorname{Aut}(\mathfrak{A})/{\sim}_{\pi} = \operatorname{Aut}(\mathfrak{B}) = \operatorname{Aut}(\mathfrak{A}/{\sim}_\pi)$. \end{remark} We present a series of simple examples of finite covers; they illustrate different phenomena of finite covers on which we will comment later, referring back to these examples. \begin{example}\label{expl:trivial} Let $\vec P_1\cdot \omega$ be the directed graph which is an infinite union of directed edges. Then $\vec P_1\cdot \omega$ is a finite covering structure of $\mathbb{N}$, with $\pi$ being the projection to the second argument. Also note that $\operatorname{Aut}(\vec P_1\cdot \omega)$ is topologically isomorphic to $\operatorname{Sym}({\mathbb N})$, and that $\vec P_1\cdot \omega$ and $({\mathbb N};\neq)$ are bi-interpretable but not bi-definable. \end{example} \begin{example}\label{expl:principal} Let $K_2\cdot \omega$ be the graph which is an infinite union of undirected edges.
Then $K_2\cdot \omega$ is a finite covering structure of $\mathbb{N}$, with $\pi$ being the projection to the second argument. Identifying the domain of $K_2 \cdot \omega$ with $\{0,1\} \times {\mathbb N}$ so that $(u,n)$ is adjacent to $(v,m)$ iff $n=m$ and $u \neq v$, the automorphism group of $K_2\cdot \omega$ is the wreath product ${\mathbb Z}_2\wr \operatorname{Sym}(\omega)$ (see Section~\ref{sect:wr}). \end{example} \begin{example}\label{expl:non-free} Let $K_2\cdot \omega$ be the structure with domain $\{0,1\} \times \mathbb{N}$ from Example~\ref{expl:principal}, and let $\mathfrak{A}$ be the expansion of $K_2\cdot \omega$ by the equivalence relation $\Eq$ defined by $\Eq((u,n),(v,m))$ iff $u=v$. Then $\mathfrak{A}$ is a finite covering structure of $\mathbb{N}$ with respect to the covering map $\pi$ that maps $(u,n)$ to $n$. Note that $\operatorname{Aut}(\mathfrak{A})$ is isomorphic (as an abstract group) to the direct product ${\mathbb Z}_2 \times \operatorname{Sym}(\mathbb{N})$ (see Section~\ref{sect:prod} for direct products and other actions of direct products). \end{example} \begin{example}\label{expl:neither-free-nor-trivial} Let $A := \{0,1,2,3\} \times \mathbb{N}$ and let $\pi \colon A \to \mathbb{N}$ be the projection to the second argument. Let $\mathfrak{A}$ be the graph with vertex set $A$ such that $(u,b)$ is adjacent to $(v,c)$ if and only if \begin{itemize} \item $b=c$ and $u=v+1 \mod 4$, or \item $b \neq c$ and $u=v \mod 2$. \end{itemize} \blue{See Figure~\ref{fig:2fibers} for an illustration.} \begin{figure} \begin{center} \includegraphics[scale=0.6]{2fibres.pdf} \end{center} \caption{An illustration of the subgraph of the structure $\mathfrak{A}$ from Example~\ref{expl:neither-free-nor-trivial} that is induced by 2 fibers.} \label{fig:2fibers} \end{figure} \blue{Then $\mathfrak{A}$ is a finite covering structure of $\mathbb{N}$ with respect to $\pi$.
The automorphism group of $\mathfrak{A}$ equals $KH$ where \begin{itemize} \item $H = \{\alpha \in \operatorname{Sym}(A) \mid \text{there is } \beta \in \operatorname{Sym}(\mathbb{N}) \text{ such that } \alpha(u,v) = (u,\beta(v)) \text{ for all } (u,v) \in A\}$ (so $H$ is topologically isomorphic to $\operatorname{Sym}(\mathbb{N})$), and \item $K = \{ \alpha \in \prod_{i \in \mathbb{N}} {\mathbb Z}_4 \mid \text{for all } k,l \in \mathbb{N}: \alpha_k {\mathbb Z}_2 = \alpha_l {\mathbb Z}_2\}$ where ${\mathbb Z}_k$ is the cyclic group acting on $\{0,1,\dots,k-1\}$ and $\prod_{i \in \mathbb{N}} {\mathbb Z}_4$ is the direct product in its intransitive action on $A$ (see Section~\ref{sect:prod}). \end{itemize} } \end{example} \begin{example}\label{expl:twisted} Let $\mathfrak{B}$ be the countable structure which carries an equivalence relation $\Eq$ with three classes $R,S,T$ such that $|S|=|T|$, and a unary relation symbol denoting the class $R$. Let $A := (\{0,1\} \times R) \cup (\{0\} \times (S \cup T))$. \blue{We define the structure $\mathfrak{A}$ with domain $A$ and the signature $\{E,F\}$ where $E$ and $F$ have arity two, and \begin{itemize} \item $E((u_1,b_1),(u_2,b_2))$ holds if and only if ($u_1 = 0$, $b_1 \in R$, and $b_2 \in S$) or ($u_1 = 1$, $b_1 \in R$, and $b_2 \in T$); \item $F((u_1,b_1),(u_2,b_2))$ holds if and only if $b_1 = b_2$. \end{itemize} Let $\pi \colon A \to B$ be the projection to the second argument. Then $\sim_{\pi} \, = F$ and $\pi$ is a finite covering. If $R,S,T$ are countably infinite then $\sim_{\pi} \, = F = \Delta(\mathfrak{A})$. The automorphism group of $\mathfrak{A}$ is isomorphic to a semidirect product $(\operatorname{Sym}(R) \times \operatorname{Sym}(S)^2) \rtimes {\mathbb Z}_2$.} \end{example} \begin{definition} Let $\pi \colon \mathfrak{A} \to \mathfrak{B}$ be a finite covering map, let $b \in B$, and let $S := \pi^{-1}(b)$. \begin{itemize} \item The \emph{fiber group of $\pi$ at $b$} is the group $\operatorname{Aut}(\mathfrak{A})_{\{S\}}|_S$.
\item The \emph{binding group of $\pi$ at $b$} is the group $K_S|_S$ where $K$ is the kernel of $\mu_\pi$.
\end{itemize}
\end{definition}
So the binding group at $b$ is a normal subgroup of the fiber group at $b$. If for some $b\in B$ the fiber group and the binding group at $b$ are unequal, then $\pi$ is called \emph{twisted}. Example~\ref{expl:twisted} gives an example of a twisted finite cover; Examples~\ref{expl:trivial},~\ref{expl:principal},~\ref{expl:non-free}, and~\ref{expl:neither-free-nor-trivial} are not twisted.

\begin{remark}
\blue{The following terminology is not needed for stating or proving our results, but we mention it for a better understanding of the examples of finite covers that we have already presented. Let $\pi \colon \mathfrak{A} \to \mathfrak{B}$ be a finite covering map, and let $B_b$ be the binding group at $b \in B$. Then $\pi$ is called \emph{free} if the kernel of $\mu_\pi \colon \operatorname{Aut}(\mathfrak{A}) \to \operatorname{Aut}(\mathfrak{B})$ equals $\prod_{b \in B} B_b$. Examples~\ref{expl:trivial},~\ref{expl:principal},~\ref{expl:neither-free-nor-trivial}, and~\ref{expl:twisted} are free. Example~\ref{expl:non-free} is an example of a finite cover which is not free: the binding group at each point is ${\mathbb Z}_2$, while the kernel of $\mu_\pi \colon \operatorname{Aut}(\mathfrak{A}) \to \operatorname{Aut}(\mathfrak{B})$ is the diagonal copy of ${\mathbb Z}_2$, which is not equal to $\prod_{b \in B} B_b = {\mathbb Z}^\omega_2$.}
\end{remark}

\subsection{Trivial finite covers}
There are two important notions of triviality for finite covers, intended to describe those finite covers that have an automorphism group which is as small as possible. This is important for our purposes since we will describe general finite covering structures in our class as certain first-order reducts of trivial finite covers; and, as we will see, trivial covers are much easier to describe.
\begin{definition}\label{trivial_trans}
Let $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ be a finite covering map. We say that $\pi$ is
\begin{itemize}
\item a \emph{trivial cover} if the kernel of $\mu_\pi \colon \operatorname{Aut}(\mathfrak{A}) \to \operatorname{Aut}(\mathfrak{B})$ is trivial (only contains the identity permutation $\operatorname{id}_A$);
\item a \emph{strongly trivial cover} if all of its fiber groups are trivial.
\end{itemize}
A structure $\mathfrak{A}$ is called a \emph{(strongly) trivial covering structure of $\mathfrak{B}$} if there is a finite covering map $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ which is (strongly) trivial.
\end{definition}
It is clear from the definition that $\pi$ is a trivial cover if and only if all of its binding groups are trivial. Hence, if $\pi$ is strongly trivial, then it is also trivial. Example~\ref{expl:trivial} is an example of a strongly trivial finite covering. Example~\ref{expl:twisted} is an example of a trivial finite covering which is not a strongly trivial finite covering. Examples~\ref{expl:principal},~\ref{expl:non-free}, and~\ref{expl:neither-free-nor-trivial} are examples of non-trivial finite coverings.

Next we give a sufficient condition on a structure $\mathfrak{B}$ under which every trivial cover of $\mathfrak{B}$ is strongly trivial.

\begin{lemma}\label{no_finite_index}
Let $\mathfrak{B}$ be a structure such that for every $b\in B$ the stabilizer $\operatorname{Aut}(\mathfrak{B})_b$ has no proper finite-index subgroups. Then every trivial cover of $\mathfrak{B}$ is strongly trivial.
\end{lemma}
\begin{proof}
Let $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ be a trivial finite cover. Then $\mu_\pi$ is an isomorphism between $\operatorname{Aut}(\mathfrak{A})$ and $\operatorname{Aut}(\mathfrak{B})$. Let $b \in B$. We need to show that the fiber group of $\pi$ at $b$ is trivial. Put $S:=\pi^{-1}(b)$.
Let us consider the mapping $\varphi \colon \operatorname{Aut}(\mathfrak{B})_b\rightarrow \operatorname{Sym}(S)$ given by $h\mapsto \mu_\pi^{-1}(h)|_S$. Then $\varphi$ is clearly a group homomorphism. Let $K$ be the kernel of this homomorphism. Then $K$ is a finite-index subgroup of $\operatorname{Aut}(\mathfrak{B})_b$ (its index is at most $|S|!$, since $S$ is finite), and thus by our assumption $K=\operatorname{Aut}(\mathfrak{B})_b$. That is, $\varphi$ is the trivial homomorphism, which means that the fiber group of $\pi$ at $b$ is trivial.
\end{proof}

We now give an explicit description of strongly trivial covers.

\begin{lemma}\label{lem:triv-cov}
Let $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ be a strongly trivial covering map. Then for each orbit $O$ of $\operatorname{Aut}(\mathfrak{B})$ there exist a finite set $F_O$ and a mapping $\psi_O \colon \pi^{-1}(O)\rightarrow F_O$ such that
\begin{itemize}
\item for every $w \in O$ the restriction of $\psi_O$ to $\pi^{-1}(w)$ is a bijection;
\item $\psi_O(x) = \psi_O(\mu_\pi^{-1}(\beta)(x))$ for all $x\in \pi^{-1}(O)$ and $\beta\in \operatorname{Aut}(\mathfrak{B})$.
\end{itemize}
\end{lemma}
\begin{proof}
Let us fix an element $b\in O$ and let $F_O:=\pi^{-1}(b)$. If $x\in \pi^{-1}(O)$ then there exists an automorphism $g$ of $\mathfrak{B}$ such that $g(\pi(x))=b$. Let us define $\psi_O(x)$ to be $\mu_\pi^{-1}(g)(x)$. We claim that $\psi_O(x)\in F_O$ and that its value is well-defined (i.e., it does not depend on our particular choice of $g$). The first claim is clear since by definition
$$\pi(\mu_\pi^{-1}(g)(x))=\mu_\pi(\mu_\pi^{-1}(g))(\pi(x))=g(\pi(x))=b,$$
and thus $\mu_\pi^{-1}(g)(x)\in \pi^{-1}(b)=F_O$. In order to prove the second claim we need to show that if $h \in \operatorname{Aut}(\mathfrak{B})$ is such that $h(\pi(x))=g(\pi(x))=b$ then $\mu_\pi^{-1}(g)(x)=\mu_\pi^{-1}(h)(x)$. Since $(h^{-1}g)(\pi(x))=\pi(x)$ it follows that $(\mu_\pi^{-1}(h^{-1}g))|_{\pi^{-1}(\pi(x))}$ is in the fiber group at $\pi(x)$.
Since $\pi$ is strongly trivial this group is trivial, and hence $\mu_\pi^{-1}(h^{-1}g)(x)=x$. This implies that
$$\mu_\pi^{-1}(h)(x)=(\mu_\pi^{-1}(h)\mu_\pi^{-1}(h^{-1}g))(x)=\mu_\pi^{-1}(g)(x).$$
Now the first item follows from the fact that if $w \in O$ is such that $g(w)=b$, then $\mu_\pi^{-1}(g)$ defines a bijection between $\pi^{-1}(w)$ and $\pi^{-1}(b)=F_O$. As for the second item, let $x\in \pi^{-1}(O)$ and let $g\in \operatorname{Aut}(\mathfrak{B})$ be such that $g(\pi(x))=b$. If $\beta\in \operatorname{Aut}(\mathfrak{B})$, then
$$(g\beta^{-1})(\pi(\mu_\pi^{-1}(\beta)(x)))=(g\beta^{-1})(\beta(\pi(x)))=g(\pi(x))=b,$$
and thus
$$\psi_O(\mu_\pi^{-1}(\beta)(x))=(\mu_\pi^{-1}(g\beta^{-1}))(\mu_\pi^{-1}(\beta)(x))=\mu_\pi^{-1}(g)(x)=\psi_O(x).$$
\end{proof}

\begin{remark}\label{rem:triv-covers}
Let the sets $F_O$ and the maps $\psi_O$ be defined as in Lemma~\ref{lem:triv-cov} for each orbit $O$ of $\operatorname{Aut}(\mathfrak{B})$. Then there is a natural bijection between $A$ and $\bigcup_O{(F_O\times O)}$ defined as $x \mapsto (\psi_O(x),\pi(x))$ where $O$ is the orbit of $\operatorname{Aut}(\mathfrak{B})$ containing $\pi(x)$. If we identify each element of $A$ with its image under this bijection, then $\operatorname{Aut}(\mathfrak{A})$ consists of those permutations that fix the first coordinate of each element and that act as an automorphism of $\mathfrak{B}$ on the second coordinate.
\end{remark}

\subsection{Covering reducts}
Let $\mathfrak{A}$ and $\mathfrak{B}$ be structures and let $\pi \colon \mathfrak{A} \rightarrow \mathfrak{B}$ be a finite covering map. A first-order reduct $\mathfrak{C}$ of $\mathfrak{A}$ is a \emph{covering reduct of $\mathfrak{A}$ with respect to $\pi$} (and $\mathfrak{A}$ is called a \emph{covering expansion of $\mathfrak{C}$ with respect to $\pi$}; see~\cite{EvansIvanovMacpherson}) if every $\alpha\in \operatorname{Aut}(\mathfrak{C})$ preserves $\sim_{\pi}$ and $\mu_\pi(\alpha) \in \operatorname{Aut}(\mathfrak{B})$.
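The following illustration (ours, not part of the cited literature) connects this definition to the examples above, assuming the identifications made in Examples~\ref{expl:principal} and~\ref{expl:non-free}.

```latex
\begin{example}
Let $\mathfrak{A}$ be the structure from Example~\ref{expl:non-free}, which is
by construction an expansion of $\mathfrak{C} := K_2\cdot \omega$ from
Example~\ref{expl:principal}; in particular, $\mathfrak{C}$ is a first-order
reduct of $\mathfrak{A}$. Every $\alpha \in \operatorname{Aut}(\mathfrak{C}) =
{\mathbb Z}_2 \wr \operatorname{Sym}(\omega)$ maps fibers of $\pi$ to fibers of
$\pi$, so $\alpha$ preserves $\sim_{\pi}$, and the induced permutation
$\mu_\pi(\alpha)$ of $\mathbb{N}$ is an automorphism of $\mathbb{N}$. Hence
$\mathfrak{C}$ is a covering reduct of $\mathfrak{A}$ with respect to $\pi$,
and $\mathfrak{A}$ is a covering expansion of $\mathfrak{C}$.
\end{example}
```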
\begin{remark}
\blue{We do not need but mention that every finite cover $\pi \colon \mathfrak{A} \to \mathfrak{B}$ is a covering expansion of a free finite covering structure of $\mathfrak{B}$ with respect to $\pi$ (Lemma~2.1.3 in~\cite{EvansIvanovMacpherson}).}
\end{remark}

\begin{definition}
Let $\pi \colon \mathfrak{A} \to \mathfrak{B}$ be a finite covering map.
\begin{itemize}
\item If $\mathfrak{A}$ is a covering reduct of a trivial covering structure of $\mathfrak{B}$ with respect to $\pi$, then $\pi$ is called a \emph{split cover of $\mathfrak{B}$}~\cite{EvansIvanovMacpherson} (in this case, we also say that \emph{$\pi$ is split}).
\item If $\mathfrak{A}$ is a covering reduct of a strongly trivial covering of $\mathfrak{B}$ with respect to $\pi$, then $\pi$ is called a \emph{strongly split cover of $\mathfrak{B}$}~\cite{EvansIvanovMacpherson}.
\end{itemize}
\end{definition}
Equivalently (and this motivates the terminology; see~\cite{EvansIvanovMacpherson}), a finite cover $\pi \colon \mathfrak{A} \to \mathfrak{B}$ is split if the kernel $K$ of $\mu_{\pi} \colon \operatorname{Aut}(\mathfrak{A}) \to \operatorname{Aut}(\mathfrak{B})$ has a closed complement in $\operatorname{Aut}(\mathfrak{A})$, i.e., there is a closed subgroup $H$ of $\operatorname{Aut}(\mathfrak{A})$ such that $KH = \operatorname{Aut}(\mathfrak{A})$ and $K \cap H = \{1\}$ (so that $\operatorname{Aut}(\mathfrak{A})$ is isomorphic to the semidirect product $K \rtimes H$). Examples~\ref{expl:trivial},~\ref{expl:principal},~\ref{expl:non-free}, and~\ref{expl:neither-free-nor-trivial} are examples of split covers of $\mathbb{N}$. For a non-example, see, e.g.,~\cite{EvansPastori}. Example~\ref{expl:twisted}, \blue{in the case that $|S|=|T|=1$,} is an example of a finite split cover of a structure in $\mathcal{U}^*$ which is not strongly split.

\subsection{Operations on classes of structures}
Let $\mathfrak{A}$ be a structure, and let $\mathfrak{B}$ be a first-order reduct of $\mathfrak{A}$.
Then we say that $\mathfrak{B}$ is a \emph{finite index (first-order) reduct} of $\mathfrak{A}$ iff the index $[\operatorname{Aut}(\mathfrak{B}):\operatorname{Aut}(\mathfrak{A})]$ is finite. We define the following operations on classes of structures. \begin{definition} Let $\mathfrak{A}$ be a countable $\omega$-categorical structure. Then \begin{itemize} \item $C(\mathfrak{A})$ is the class of structures which are interdefinable with an expansion of $\mathfrak{A}$ with finitely many constants, \item $M(\mathfrak{A})$ is the class of structures that are interdefinable with the (up to isomorphism unique~\cite{Cores-Journal,BodHilsMartin-Journal}) model-complete core of $\mathfrak{A}$, \item $R(\mathfrak{A})$ is the class of first-order reducts of $\mathfrak{A}$, \item $R^{<\infty}(\mathfrak{A})$ is the class of finite index first-order reducts of $\mathfrak{A}$, \item $F(\mathfrak{A})$ is the class of finite covering structures of $\mathfrak{A}$. \end{itemize} If $\mathcal{C}$ is a class of structures and $\Phi$ is one of the operators above, then we use the notation $\Phi(\mathcal{C})$ for the union of the classes $\Phi(\mathfrak{A})$ such that $\mathfrak{A}\in \mathcal{C}$. \end{definition} \begin{proposition}\label{prop:easy} The following identities hold. \begin{enumerate} \item $C\circ C=C$, \item $M\circ M=M$, \item $R\circ R=R$, \item $R^{<\infty}\circ R^{<\infty}=R^{<\infty}$, \item $F\circ F=F$, \item $C(\mathcal{U}_{\nf}) = \mathcal{U}^*$, \item $R(\mathcal{U}) = R(\mathcal{U}^*)$, \item $\mathcal{K}_{\Exp}=R(\mathcal{K}_{\Exp}) $, \item $\mathcal{K}_{{\Exp}{+}}=R(\mathcal{K}_{{\Exp}{+}}) $. \end{enumerate} \end{proposition} \begin{proof} Straightforward from the definitions. \end{proof} We will show that $\mathcal{K}_{\Exp}=R(\mathcal{U})$ and $\mathcal{K}_{{\Exp}{+}}=(F\circ R)(\mathcal{U}) = (R \circ F)(\mathcal{U})$, and we will give several equivalent descriptions of these classes in Section~\ref{sect:additional}. 
We also prove Thomas' conjecture for each structure in $\mathcal{K}_{{\Exp}{+}}$ (Theorem~\ref{thomas_strong}).
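For a concrete instance of a finite index first-order reduct, we include the following classical example (our illustration; it is not among the structures discussed above).

```latex
\begin{example}
Let $\mathfrak{A} = (\mathbb{Q},<)$ and let $\mathfrak{B} =
(\mathbb{Q},\operatorname{Betw})$ be its first-order reduct where
$\operatorname{Betw}(x,y,z)$ holds if and only if $x<y<z$ or $z<y<x$. Every
automorphism of $\mathfrak{B}$ either preserves or reverses $<$, and the
order-preserving ones are exactly the elements of
$\operatorname{Aut}(\mathfrak{A})$, so
$[\operatorname{Aut}(\mathfrak{B}):\operatorname{Aut}(\mathfrak{A})] = 2$.
Hence $\mathfrak{B} \in R^{<\infty}(\mathfrak{A})$.
\end{example}
```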