\section{Introduction} The $\Pi$-operator is one of the tools used to study the smoothness of functions over Sobolev spaces and to solve Beltrami equations. In one-dimensional complex analysis, the Beltrami equation is given by $\displaystyle \frac{\partial w}{\partial \overline{z}}=\mu \displaystyle \frac{\partial w}{\partial z}$, where $\mu=\mu(z)$ is a given complex function and $z\in\mathbb{C}$. It can be transformed into a fixed-point equation $h=q(z)(I+\Pi_\Omega h),$ where $$\Pi_\Omega h(z)=-\displaystyle\frac{1}{\pi i}\displaystyle\int_\Omega \frac{h(\xi)}{(\xi-z)^2}d\xi_1 d\xi_2$$ is the complex $\Pi$-operator. This singular integral operator acts as an isometry from $L^2(\mathbb{C})$ to $L^2(\mathbb{C})$; the determination of its $L^p$-norm is a long-standing conjecture of Iwaniec. \par With the help of Clifford algebras, the classical Beltrami equation and $\Pi$-operator, together with some well-known results, can be generalized to higher dimensions. Abundant results in Euclidean space have been obtained. For instance, in \cite{GKS}, G\"{u}rlebeck, K\"{a}hler and Shapiro considered a class of generalizations of the complex one-dimensional $\Pi$-operator in spaces of quaternion-valued functions depending on four real variables. In \cite{GK}, G\"{u}rlebeck and K\"{a}hler provided a hypercomplex generalization of the complex $\Pi$-operator which turns out to have most of the properties of its origin in one-dimensional complex analysis. K\"{a}hler studied Beltrami equations in the case of quaternions in \cite{Kahler}, giving an overview of possible generalizations of the complex Beltrami equation and their properties in the quaternionic case. In \cite{Blaya}, the authors studied the $\Pi$-operator in Clifford analysis using two orthogonal bases of a Euclidean space, which allows one to find the expression of the jump of the generalized $\Pi$-operator across the boundary of the domain. 
The case of the $\Pi$-operator and the Beltrami equation on the unit sphere has also been discussed in \cite{CRK}, where most of the useful properties of the complex $\Pi$-operator are inherited. The classical Ahlfors-Beurling inequality has also been generalized to higher dimensions by Martin in \cite{Martin}. \par Conformally flat manifolds are manifolds with atlases whose transition maps are M\"{o}bius transformations. They can be parametrized by $U/\Gamma$, where $U$ is a simply connected subdomain of either $\mathbb{S}^{n}$ or $\mathbb{R}^{n}$ and $\Gamma$ is a Kleinian group acting discontinuously on $U$. Examples of such manifolds treated here include the real projective space $\mathbb{R}P^n$, cylinders and Hopf manifolds $\mathbb{S}^1\times \mathbb{S}^{n}$. More details on these conformally flat manifolds can be found in \cite{KR,KR1}. In the present paper, we generalize the results in Euclidean space \cite{GK} and on the unit sphere \cite{CRK} to the aforementioned conformally flat manifolds through suitable projection maps. \par This paper is organized as follows. In Section 2, we briefly introduce the Clifford algebra setting and some integral formulas. Section 3 is devoted to an introduction of the $\Pi$-operator in a general Hilbert space. It turns out that this technique can be applied to recover the results on the $\Pi$-operator in the classical case of the complex plane, and the $\Pi$-operators on some other conformally flat manifolds can also be constructed with this strategy. This is explained in detail in the rest of the paper. More specifically, in Section 4, we define the real projective space $\mathbb{R}P^{n}$ as a quotient space of the $n$-dimensional unit sphere with a certain projection map. With the help of this projection map we can induce the Dirac operator, Cauchy transform, some integral formulas and the $\Pi$-operator from $\mathbb{S}^{n}$ to $\mathbb{R}P^{n}$. The Beltrami equation on the real projective space is also studied here as an application. 
In Section 5, we generalize the results in Euclidean space to cylinders and Hopf manifolds. Applications to the Beltrami equations on cylinders and Hopf manifolds are also provided. Section $6$ is devoted to an investigation of the $\Pi$-operator theory on the upper-half space with the hyperbolic metric. The $\Pi$-operator defined there also possesses most of the properties of its counterpart in one-dimensional complex analysis. \subsection*{Acknowledgements} This paper is dedicated to Klaus G\"urlebeck on his 65th birthday. \section{Preliminaries} \subsection{Clifford analysis in Euclidean space} Let $\{e_1,\cdots,e_n\}$ be the canonical orthonormal basis of the Euclidean space $\mathbb{R}^{n}$. The real Clifford algebra $\mathcal{C}l_n$ is generated from $\mathbb{R}^{n}$ by imposing the relations $e_ie_j+e_je_i=-2\delta_{ij}e_0$, where $e_0$ is the identity of $\mathcal{C}l_n$ and $\delta_{ij}$ is the usual Kronecker symbol. An arbitrary element of the basis of the Clifford algebra can be written as ${e}_A={e}_{j_1}\cdots {e}_{j_r},$ where $A=\{j_1, \cdots, j_r\}\subset \{1, 2, \cdots, n\}$ and $1\leq j_1< j_2 < \cdots < j_r \leq n$. Hence, for any $a\in \mathcal{C}l_n$, we have $a=\sum_Aa_Ae_A,$ where $a_A\in \mathbb{R}$. The norm of a Clifford number $x$ is defined as $\|x\|^2=\sum_{A\subset\{1,\cdots,n\}}x_A^2.$ If the set $A$ contains $k$ elements, then we call $e_A$ a \emph{$k$-vector}; likewise, each linear combination of such basis elements is called a $k$-vector. The vector space of all $k$-vectors is denoted by $\Lambda^k\mathbb{R}^{n}$. Obviously, $\mathcal{C}l_n$ is the direct sum of all $\Lambda^k\mathbb{R}^{n}$ for $0\leq k\leq n$. In particular, each non-zero vector $x\in\mathbb{R}^{n}$ has a multiplicative inverse $x^{-1}=\frac{-x}{\|x\|^2}$; see \cite{Br} for more details on Clifford algebras. We also need the following three anti-involutions in Clifford analysis. 
\begin{itemize} \item \textbf{Reversion:} $ \tilde{a}=\sum_{A} (-1)^{|A|(|A|-1)/2}a_Ae_A, $ where $|A|$ is the cardinality of $A$. In particular, $\widetilde{e_{j_1}\cdots e_{j_r}}=e_{j_r}\cdots e_{j_1}$. \item \textbf{Clifford conjugation:} $ a^{\dagger}=\sum_{A} (-1)^{|A|(|A|+1)/2}a_Ae_A, $ satisfying ${e_{j_1}\cdots e_{j_r}}^{\dagger}=(-1)^re_{j_r}\cdots e_{j_1}$. \item \textbf{Clifford involution:} $ \bar{a}=\tilde{a}^{\dagger}=\widetilde{a^{\dagger}}. $ \end{itemize} In the rest of this paper, we identify the Euclidean space $\mathbb{R}^{n+1}$ with the direct sum $\Lambda^0\mathbb{R}^{n}\oplus\Lambda^1\mathbb{R}^{n}$, and $\Omega\subset \mathbb{R}^{n+1}$ is a domain with a sufficiently smooth boundary $\Gamma=\partial \Omega$. Further, we only deal with functions defined in $\Omega$ taking values in $\mathcal{C}l_n$. These functions can be written as \begin{eqnarray*} f(x)=\sum_{A\subseteq\{ e_1,e_2,...e_n \}}f_A(x)e_A,\quad x\in \Omega. \end{eqnarray*} Properties such as continuity, differentiability, integrability, and so on, which are ascribed to $f$, have to be possessed by all components $f_A(x),\ A\subseteq\{ e_1,e_2,...e_n \}$. The spaces $C^k(\Omega,{\mathcal{C}l_n})$ and $L^p(\Omega, {\mathcal{C}l_n}) $ are defined as right Banach modules with the corresponding traditional norms. In particular, the space $L^2(\Omega,{\mathcal{C}l_n})$ is a right Hilbert module equipped with a ${\mathcal{C}l_n}$-valued sesquilinear form $$ \langle u,v\rangle=\int_\Omega \overline{u(\eta)} v(\eta)\, d\eta. $$ Furthermore, $W_p^k(\Omega,{\mathcal{C}l_n}), k\in \mathbb{N}\cup\{0\},1\leq p<\infty$, denotes the Sobolev space, i.e., the right module of all functions whose weak derivatives up to order $k$ belong to $L^p(\Omega,{\mathcal{C}l_n})$, with norm $$ \|f\|_{W_p^k(\Omega,{\mathcal{C}l_n})}:=\big(\sum_{A}\sum_{|\alpha|\leq k}\|D^\alpha_w f_A\|_{L^p(\Omega,{\mathcal{C}l_n})}^p\big)^{1/p}. 
$$ The closure of the space of test functions $C^\infty_0(\Omega, {\mathcal{C}l_n})$ in the $W_p^k$-norm will be denoted by $\wzwop(\Omega, {\mathcal{C}l_n})$. \par The Euclidean Dirac operators $D_x$ and $D_0$ arise as generalizations of the Cauchy-Riemann operator in one-dimensional complex analysis, and they are given by $ D_x:=\sum_{i=1}^{n}e_i\partial_{x_i},\ D_0:=e_0\partial_{x_0}+\sum_{i=1}^{n}e_i\partial_{x_i}=e_0\partial_{x_0}+D_x. $ Note $D_x^2=-\Delta_n$, where $\Delta_n$ is the Laplacian in $\mathbb{R}^{n}$, and $\Delta_{n+1}=D_0\overline{D_0}$. A $\mathcal{C}l_n$-valued function $f(x)$ defined on a domain $\Omega$ in $\mathbb{R}^{n+1}$ is called left monogenic if $D_0f(x)=\sum_{i=0}^{n}e_i\partial_{x_i}f(x)=0.$ Since Clifford multiplication is not commutative in general, there is a similar definition for right monogenic functions. \par Let $f \in C^1(\Omega, {\mathcal{C}l_n})$, and let $G(x,y)=\displaystyle\frac{\overline{x-y}}{\|x-y\|^{n+1}}$ be the fundamental solution of $D_0$ (see \cite{LR}). When considering functions with compact support, $D_0$ has a left and right inverse (called the Cauchy transform) $T_{\Omega}$ given as follows. \begin{eqnarray*} T_\Omega f(x)=\frac{1}{\omega_n}\int_\Omega G(x,y)f(y)dy, \end{eqnarray*} where $\omega_{n}$ is the area of the $n$-dimensional unit sphere $\mathbb{S}^{n}$. For more details, see \cite{NW}. Also, there is a non-singular boundary integral operator given by \begin{eqnarray*} F_{\partial \Omega}f(x)=\frac{1}{\omega_n}\int_{\partial \Omega}G(x,y)n(y)f(y)d\sigma(y). \end{eqnarray*} With the above two integral operators, we have the classical Borel-Pompeiu formula in Clifford analysis as follows. 
\begin{theorem} \cite{GK} For $f\in C^1(\Omega,\mathcal{C}l_n)\cap C(\overline\Omega)$, we have \begin{eqnarray*} f(x)=\frac{1}{\omega_n}\int_{\partial \Omega}G(x,y)n(y)f(y)d\sigma(y)+\frac{1}{\omega_n}\int_\Omega G(x,y)D_0f(y)dy. \end{eqnarray*} In particular, if $f\in \wzwo(\Omega,{\mathcal{C}l_n})$, then \begin{eqnarray*} f(x)=\frac{1}{\omega_n}\int_\Omega G(x,y)D_0f(y)dy. \end{eqnarray*} \end{theorem} \subsection{Clifford analysis on the unit sphere} Recall that the generalized spherical Dirac operator $D_s$ and its conjugate on the $n$-dimensional unit sphere $\mathbb{S}^{n}$ are defined as follows: $D_s=x(\Gamma_0-\frac{n}{2}),\ \overline{D_s}=\overline{x}(\overline{\Gamma_0}-\frac{n}{2}),$ where $\Gamma_0=\sum_{j=1}^{n} e_0e_jL_{0,j}-\sum_{i=1,j>i}^{n} e_ie_jL_{i,j}$, and here the operators $L_{i,j}=x_i\partial_{x_j}-x_j\partial_{x_i}$ are called the angular momentum operators. It is well known that the fundamental solution of $D_s$ is $G_s(x,y)=\frac{\overline{x-y}}{\|x-y\|^n}$, and the fundamental solution of $\overline{D_s}$ is $\overline{G_s(x,y)}=\frac{x-y}{\|x-y\|^n}$, $x,y\in \mathbb{S}^{n}$; see \cite{CRK} for details. \par Assume $\Omega$ is a bounded smooth domain on ${\mathbb{S}^{n}}$ and $f \in C^1(\Omega, \mathcal{C}l_n)$. One can define Cauchy transforms with respect to $D_s$ and $\overline{D_s}$ as below \cite{CRK}. \begin{eqnarray*} T_\Omega f(x)=\frac{1}{\omega_n}\int_\Omega G_s(x,y)f(y)dy=\frac{1}{\omega_n}\int_\Omega \frac{\overline{x-y}}{\|x-y\|^n}f(y)dy,\quad \overline{T}_\Omega f(x)=\frac{1}{\omega_n}\int_\Omega \overline{G_s(x,y)}f(y)dy=\frac{1}{\omega_n}\int_\Omega \frac{x-y}{\|x-y\|^n}f(y)dy. \end{eqnarray*} Here, $T_{\Omega}$ ($\overline{T}_{\Omega}$) is also a left and right inverse for $D_s$ ($\overline{D_s}$) when considering functions with compact support, see Theorem \ref{BPF} below. 
Also, we have two non-singular boundary integral operators \begin{eqnarray*} F_{\partial \Omega}f(x)=\frac{1}{\omega_n}\int_{\partial \Omega}G_s(x,y)n(y)f(y)d\sigma(y),\quad \overline{F}_{\partial \Omega}f(x)=\frac{1}{\omega_n}\int_{\partial \Omega}\overline{G_s(x,y)}n(y)f(y)d\sigma(y). \end{eqnarray*} Then the Borel-Pompeiu formula for $D_s$ and $\overline{D_s}$ is stated as follows. \begin{theorem}[Borel-Pompeiu formula \cite{LR}]\label{BPF} \hfill\\ For $f \in C^1(\Omega)\cap C(\overline\Omega)$, we have \begin{eqnarray*} f(x)=\frac{1}{\omega_n}\int_{\partial \Omega}G_s(x,y)n(y)f(y)d\sigma(y)+\frac{1}{\omega_n}\int_\Omega G_s(x,y)D_sf(y)dy, \end{eqnarray*} in other words, $f=F_{\partial \Omega}f+T_\Omega D_sf$. Similarly, \begin{eqnarray*} f(x)=\frac{1}{\omega_n}\int_{\partial \Omega}\overline{G_s(x,y)}n(y)f(y)d\sigma(y)+\frac{1}{\omega_n}\int_\Omega \overline{G_s(x,y)}\overline{D_s}f(y)dy, \end{eqnarray*} that is, $f=\overline{F}_{\partial \Omega}f+\overline{T}_\Omega \overline{D_s}f$. In particular, if $f$ has compact support, then $T_{\Omega}D_s=\overline{T_{\Omega}}\overline{D_s}=I$. \end{theorem} \section{The $\Pi$-operator in Hilbert space} In this section, we will provide a $\Pi$-operator defined on a general Hilbert space. This $\Pi$-operator has the isometry property, which motivates the definitions of $\Pi$-operators on the various conformally flat manifolds considered in the following sections. \par Let $H$ be a real Hilbert space and let $\mathcal{S}$ be a dense subspace of $H$. Let $f,g\in \mathcal{S}\otimes \mathcal{C}l_n$, and let $D$ be a linear map from $\mathcal{S}\otimes \mathcal{C}l_n$ to itself. Further, $D$ is assumed to be normal, that is, $D^*D=DD^*$, where $D^*$ is the adjoint of $D$ in the sense that $ \langle Df,g\rangle=\langle f,D^*g\rangle, $ where $\langle\ ,\ \rangle$ is the inner product on $H$. An operator $G$ acting on $\mathcal{S}\otimes \mathcal{C}l_n$ is called the inverse of $D$ if it satisfies $DG=GD=I$. 
\begin{definition} The generalized $\Pi$-operator in the Hilbert space $H$ is defined as $$\Pi=D^*G.$$ \end{definition} Next, we show that the generalized $\Pi$-operator defined above also has the isometry property. \begin{theorem} The generalized operator $\Pi=D^*G$ is an isometric operator on $H\otimes \mathcal{C}l_n$. \end{theorem} \begin{proof} \begin{eqnarray*} \langle \Pi f,\Pi g\rangle=\langle D^*Gf,D^*Gg\rangle=\langle Gf,DD^*Gg\rangle =\langle Gf,D^*DGg\rangle=\langle DGf,DGg\rangle=\langle f,g\rangle. \end{eqnarray*} \end{proof} Further, our generalized $\Pi$-operator can also be used to solve certain Beltrami equations. More specifically, if we let $H$ be $L^2(X)$, where $X$ is a measure space with a measure $\eta$, then we can define a Beltrami equation over $H\otimes\mathcal{C}l_n$, i.e., $L^2(X,\mathcal{C}l_n)$, as $Df=qD^*f,$ where $q\in L^{\infty}(X,\mathcal{C}l_n) $ carries the essential supremum norm with respect to $\eta$, as in the Euclidean case. By the substitution $f=\phi+Gh$, where $\phi$ is a solution of $D\phi=0$, we transform the Beltrami equation in the following way. \begin{eqnarray*} D(\phi+Gh)=h=qD^*(\phi+Gh)=q(D^*\phi+\Pi h). \end{eqnarray*} Hence, if $h$ is the unique solution of the equation $h=q(D^*\phi+\Pi h),$ then $f=\phi+Gh$ is the unique solution of the Beltrami equation. The Banach fixed point theorem tells us this equation has a unique solution if $\|q\|\leq q_0< \frac{1}{\|\Pi\|}$ for some constant $q_0$. Hence, as in the classical case, the problem of the existence of a solution to the Beltrami equation reduces to a norm estimate for the generalized $\Pi$-operator. \par As special cases of this general Hilbert space approach, one has the $L^2$ isometry of the usual $\Pi$-operator in one complex variable and the $\Pi$-operator in $\mathbb{R}^n$ described in \cite{Blaya,GK,GKS} and elsewhere. The next sections describe the $\Pi$-operator acting on $L^2$ spaces over other manifolds. 
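\par To illustrate the scheme in the classical complex case (a sketch only; signs depend on the chosen normalization of the kernels), one may take $$D=\frac{\partial}{\partial\overline{z}},\qquad D^{*}=\frac{\partial}{\partial z},\qquad G=T_\Omega,\qquad \Pi=D^{*}G=\frac{\partial}{\partial z}\,T_\Omega=\Pi_\Omega,$$ so that, with $\phi(z)=z$, the substitution $f=\phi+Gh$ recovers the fixed-point equation $h=q(1+\Pi_\Omega h)$ from the introduction. Moreover, under the condition $\|q\|\leq q_0<\frac{1}{\|\Pi\|}$ (absorbing any Clifford norm constants into $q_0$), the fixed point can be produced by the Picard iteration $$h_0=qD^{*}\phi,\qquad h_{k+1}=q(D^{*}\phi+\Pi h_k),\qquad \|h_{k+1}-h_k\|\leq \|q\|\,\|\Pi\|\,\|h_k-h_{k-1}\|\leq (q_0\|\Pi\|)^{k}\|h_1-h_0\|,$$ so that $(h_k)$ is a Cauchy sequence and converges to the unique solution $h$. 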
\section{$\Pi$-operators on real projective space} Recall the construction of our $\Pi$-operator in the previous section, and let $X$ be the real projective space $\mathbb{R}P^{n}$ with the measure $\eta$ obtained by pushing forward the Lebesgue measure on $\mathbb{S}^{n}$. Then $H=L^2(\mathbb{R}P^{n},\mathbb{R})$ becomes a real Hilbert space, and $H\otimes \mathcal{C}l_n$ is a Clifford-Hilbert module with the inner product \begin{eqnarray*} \langle f,g\rangle=\int_{V'}\overline{f}(x)g(x)d\eta(x), \end{eqnarray*} where $V'$ is a subset of real projective space with $\overline{V'}$ enclosed and $f, g: V'\longrightarrow \mathcal{C}l_n$. Therefore, we can obtain the $\Pi$-operator theory on real projective space as a special case of Section $3$. More details are given below. \subsection{Dirac operators on real projective space} We know that the real projective space $\mathbb{R}P^{n}$ is defined as $\mathbb{S}^{n}/\Gamma$, where $\Gamma=\{\pm1\}$. This implies that the $\Pi$-operator theory on the real projective space can be generalized from the $\Pi$-operator theory on the unit sphere. Notice that there is a projection map $p: \mathbb{S}^{n}\longrightarrow\mathbb{R}P^{n}$, such that for each $x\in \mathbb{S}^{n}$, $p(\pm x)=x'$. If $Q$ is a subset of $\mathbb{S}^{n}$, we denote $p(\pm Q)=Q'$. First, we consider the bundle $E_1$ obtained by identifying $(x,X)$ with $(-x, X)$, where $x\in \mathbb{S}^{n}$ and $X\in \mathcal{C}l_n$. \par Now we modify the spherical Cauchy kernel $G_s(x,y)=\frac{\overline{x-y}}{\|x-y\|^n}$, $x,y\in \mathbb{S}^{n}$, into a kernel which is invariant with respect to $\Gamma=\{\pm1\}$, and this gives us a kernel $G_{\mathbb{R}P^{n}_1}(x,y)=G_s(x,y)+G_s(-x,y)$ for $\mathbb{R}P^{n}$ \cite{KR}. \par Suppose $S$ is a suitably smooth hypersurface lying in the northern hemisphere of $\mathbb{S}^{n}$ and $V$ is a domain lying in the northern hemisphere such that $S$ bounds a subdomain $W$ of $V$. 
If $f: V \longrightarrow \mathcal{C}l_n$ is a left spherical monogenic function and $x\in W$, then we have \begin{eqnarray*} f(x)=\frac{1}{\omega_n}\int_S \big(G_s(x,y)+G_s(-x,y)\big)n(y)f(y)d\sigma(y), \end{eqnarray*} where $n(y)$ is the unit outer normal vector to $S$ at $y$ lying in the tangent space of $\mathbb{S}^{n}$ at $y$. Now we use the projection map $p:\mathbb{S}^{n}\longrightarrow \mathbb{R}P^{n}$ and note that this projection map induces a function $f': V'\longrightarrow E_1$, which satisfies \cite{KR} \begin{eqnarray*} f'(x')=\frac{1}{\omega_n}\int_{S'} G_{\mathbb{R}P^{n}_1}(x',y')dp(n(y))f'(y')d\sigma'(y'), \end{eqnarray*} where $x'=p(x)$, $y'=p(y)$, $S'=p(S)$ and $\sigma'$ on $S'$ is induced from $\sigma$ on $S$ by the map $p$. Now we will assume that the domain $V$ satisfies $-x\in V$ for each $x\in V$, the function $f$ is two-fold periodic, so that $f(x)=f(-x)$, and $S=-S$. Now the projection map $p$ gives rise to a well-defined domain $V'$ on $\mathbb{R}P^{n}$ and a well-defined function $f'(x'): V'\longrightarrow E_1$ such that $f'(x')=f(\pm x)$. As the function is spherical monogenic, i.e., $D_sf(x)=0$, we can induce a Dirac operator $D_{\mathbb{R}P^{n}_1}$ on $\mathbb{R}P^{n}$ with $D_{\mathbb{R}P^{n}_1}f'(x')=0$. In this case \cite{KR}, $$2f'(x')=\frac{1}{\omega_n}\int_{S'} G_{\mathbb{R}P^{n}_1}(x',y')dp(n(y))f'(y')d\sigma'(y').$$ Similarly, we have the conjugate of the Dirac operator $\overline{D_{\mathbb{R}P^{n}_1}}$, and the kernel of $\overline{D_{\mathbb{R}P^{n}_1}}$ is $\overline{G_{\mathbb{R}P^{n}_1}(x,y)}=\overline{G_s(x,y)}+\overline{G_s(-x,y)}$. \par Now we induce the Cauchy transform and its conjugate from $\mathbb{S}^{n}$ to $\mathbb{R}P^{n}$ as follows. \begin{eqnarray*} T_{V'_1} f'(x')=\frac{1}{\omega_n}\int_{V'} G_{\mathbb{R}P^{n}_1}(x',y')f'(y')dy',\quad \overline{T_{V'_1}} f'(x')=\frac{1}{\omega_n}\int_{V'} \overline{G_{\mathbb{R}P^{n}_1}(x',y')}f'(y')dy'. 
\end{eqnarray*} Also, a non-singular boundary integral operator and its conjugate are given by \begin{eqnarray*} F_{S'}f'(x')=\frac{1}{\omega_n}\int_{S'}G_{\mathbb{R}P^{n}_1}(x',y')dp(n(y'))f'(y')d\sigma'(y'),\quad \overline{F_{S'}}f'(x')=\frac{1}{\omega_n}\int_{S'}\overline{G_{\mathbb{R}P^{n}_1}(x',y')}dp(n(y'))f'(y')d\sigma'(y'). \end{eqnarray*} Hence, one obtains a Borel-Pompeiu formula as follows. \begin{theorem} For $f'\in C^1(V',\mathcal{C}l_n)\cap C(\overline{V'})$, we have \begin{align*} 2f'(x')=\frac{1}{\omega_n}\int_{S'}G_{\mathbb{R}P^{n}_1}(x',y')dp(n(y'))f'(y')d\sigma'(y') +\frac{1}{\omega_n}\int_{V'} G_{\mathbb{R}P^{n}_1}(x',y')D_{\mathbb{R}P^{n}_1}f'(y')dy'. \end{align*} In particular, if $f'$ has compact support, then \begin{eqnarray*} 2f'(x')=\frac{1}{\omega_n}\int_{V'} G_{\mathbb{R}P^{n}_1}(x',y')D_{\mathbb{R}P^{n}_1}f'(y')dy', \end{eqnarray*} from which we can obtain $TD_{\mathbb{R}P^{n}_1}=2I$. \end{theorem} Since the domain $V=-V$, if we restrict to the northern hemisphere, the Dirac operator $D_{\mathbb{R}P^{n}_1}$ is locally identified with $D_s$. Hence, we project it onto the domain $V'\subset\mathbb{R}P^{n}$ to obtain $$D_{\mathbb{R}P^{n}_1}\frac{1}{\omega_n}\int_V G_s(x,y)f(y)dy=f(x).$$ Now, for the whole domain $V$, after applying the projection on the domain $V'\subset\mathbb{R}P^{n}$, we have $$D_{\mathbb{R}P^{n}_1}\frac{1}{\omega_n}\int_{V'} \big(G_s(x,y)+G_s(-x,y)\big) f'(y')dy'=2f'(x'),$$ that is, $D_{\mathbb{R}P^{n}_1}T=2I$. Similarly, we have $\overline{D_{\mathbb{R}P^{n}_1}}\,\overline{T}=\overline{T}\,\overline{D_{\mathbb{R}P^{n}_1}}=2I$. \par In the rest of this section, we will study the spectra of the operators $\overline{D_{\mathbb{R}P^{n}_1}}$ and $T$; this helps us to show that the $\Pi$-operator defined in the next subsection also has an $L^2$ isometry property. A similar argument can be found in \cite{CRK}. \par Let $H_m$ denote the space of $\mathcal{C}l_n$-valued harmonic polynomials with homogeneity of degree $m$ on $\mathbb{S}^{n}$. 
It is well known that $L^2(\mathbb{S}^{n})=\sum_{m=0}^\infty H_{m}$, see \cite{Ax}. Now we consider a function $f(x)$ defined on an open domain $V\subseteq\mathbb{S}^{n}$ which satisfies $-x\in V$ for each $x\in V$ and $f(x)=f(-x)$. Such a domain $V$ can be projected onto the real projective space $\mathbb{R}P^{n}$ by $p(\pm x)=x'$. Since $f(x)=\sum_{m=0}^\infty h_m(x)$ and $h_m(-x)=(-1)^mh_m(x)$, the condition $f(x)=f(-x)$ leaves only the even-degree terms, and by the projection map we have $f'(x')=\sum_{m=0}^\infty h'_{2m}(x')$. Hence, $L^2(\mathbb{R}P^{n})=\sum_{m=0}^\infty H'_{2m}$, where $H'_{2m}$ is the projection of $H_{2m}$ on the real projective space. \par Assume that $P_m$ is the space of spherical $\mathcal{C}l_n$-valued left monogenic polynomials with homogeneity of degree $-m$ and $Q_m$ is the space of spherical $\mathcal{C}l_n$-valued left monogenic polynomials with homogeneity of degree $n+m$, $m=0,1,2,...$. It is known that $H_m=P_m\oplus Q_m$ on $\mathbb{S}^{n}$ (see \cite{Balinsky}), that is, for each $h_m(x)\in H_m(\mathbb{S}^{n})$ there exist $p_m(x)\in P_m(\mathbb{S}^{n})$ and $q_m(x)\in Q_m(\mathbb{S}^{n})$ such that $h_m(x)=p_m(x)+q_m(x)$. Hence $h_m(-x)=p_m(-x)+q_m(-x)$, and by the projection map we have a similar decomposition on the real projective space, $h'_{2m}(x')=p'_{2m}(x')+q'_{2m}(x')$. In other words, $L^2(\mathbb{R}P^{n})=\sum_{m=0}^\infty P'_{2m}\oplus Q'_{2m}$. Since $D_s(P_m)=Q_m$ and $D_s(Q_m)=P_m$, we also have $D_{\mathbb{R}P^{n}_1}(P'_{2m})=Q'_{2m}$ and $D_{\mathbb{R}P^{n}_1}(Q'_{2m})=P'_{2m}$. Hence $D_{\mathbb{R}P^{n}_1}$ maps $L^2(\mathbb{R}P^{n})$ to itself; similarly for $\overline{D_{\mathbb{R}P^{n}_1}}$. From the result in the unit sphere case, we have the spectrum of the real projective Dirac operator as follows. \begin{align*} \sigma(D_{\mathbb{R}P^{n}_1})=\sigma(\overline{D_{\mathbb{R}P^{n}_1}})=\{-2m-n, m=0,1,2,...\}\cup \{2m+n, m=0,1,2,...\}. 
\end{align*} Since we previously mentioned that $\overline{D_{\mathbb{R}P^{n}_1}}\,\overline{T}=\overline{T}\,\overline{D_{\mathbb{R}P^{n}_1}}=2I$, and $T:Q'_{2m}\longrightarrow P'_{2m}$ and $T: P'_{2m}\longrightarrow Q'_{2m}$, the spectra of $T$ and its conjugate $\overline{T}$ on the real projective space are $$\sigma(\overline{T})= \sigma(T)=\{\frac{2}{2m+n}, m=0,1,2,...\}\cup\{\frac{2}{-2m-n}, m=0,1,2,...\}.$$ \subsection{Construction of $\Pi$-operator on the real projective space} We first give the definition of the $\Pi$-operator on the real projective space as follows. \begin{definition} The $\Pi$-operator on the real projective space is defined by $\Pi_{\mathbb{R}P^{n}_1}=\frac{1}{2}\overline{D_{\mathbb{R}P^{n}_1}}T.$ \end{definition} The constant $\frac{1}{2}$ allows $\Pi_{\mathbb{R}P^{n}_1}$ to be an $L^2$ isometry; we will see more details below. One can also see that $\Pi_{\mathbb{R}P^{n}_1}$ maps $L^2(\mathbb{R}P^{n})$ to $L^2(\mathbb{R}P^{n})$. \begin{theorem} $\Pi_{\mathbb{R}P^{n}_1}$ is an $L^2(\mathbb{R}P^{n})$ isometry. \end{theorem} \begin{proof} It suffices to prove the claim for $u\in C^1(\mathbb{R}P^{n})\subset L^2(\mathbb{R}P^{n})$, since $C^1(\mathbb{R}P^{n})$ is dense in $L^2(\mathbb{R}P^{n})$. For such a function $u$, we have the decomposition \begin{eqnarray*} u=\displaystyle\sum_{m=0}^\infty\sum_{p'_{2m}\in P'_{2m}}p'_{2m}+\sum_{m=0}^{\infty}\sum_{q'_{2m}\in Q'_{2m}}q'_{2m}. 
\end{eqnarray*} Hence, with similar arguments as in \cite{CRK}, we have \begin{align*} &\|\frac{1}{2}\overline{D_{\mathbb{R}P^{n}_1}}Tu\|^2_{L^2}= \displaystyle\sum_{m=0}^\infty(\frac{1}{2m+n})^2\sum_{q'_{2m}\in Q'_{2m}}\|\overline{D_{\mathbb{R}P^{n}_1}}q'_{2m}\|^2_{L^2} +\sum_{m=0}^{\infty}(\frac{1}{-2m-n})^2\sum_{p'_{2m}\in P'_{2m}}\|\overline{D_{\mathbb{R}P^{n}_1}}p'_{2m}\|^2_{L^2}\\ =&\displaystyle\sum_{m=0}^\infty(\frac{1}{2m+n})^2(2m+n)^2\sum_{p'_{2m}\in P'_{2m}}\|p'_{2m}\|^2_{L^2} +\sum_{m=0}^{\infty}(\frac{1}{-2m-n})^2(-2m-n)^2\sum_{q'_{2m}\in Q'_{2m}}\|q'_{2m}\|^2_{L^2}\\ =&\displaystyle\sum_{m=0}^\infty\sum_{p'_{2m}\in P'_{2m}}\|p'_{2m}\|^2_{L^2}+\sum_{m=0}^{\infty}\sum_{q'_{2m}\in Q'_{2m}}\|q'_{2m}\|^2_{L^2} =\|u\|^2_{L^2}. \end{align*} \end{proof} It is worth pointing out that we can assign another bundle $E_2$ to $\mathbb{R}P^{n}$ by identifying the pair $(x, X)$ with $(-x, -X)$, where $x\in \mathbb{S}^{n}$ and $X\in \mathcal{C}l_n$. In this case, the projection map $p$ induces a Cauchy kernel $G_{\mathbb{R}P^{n}_2}$, which is antiperiodic with respect to $\Gamma=\{\pm 1\}$. Hence $G_{\mathbb{R}P^{n}_2}(x'-y')=G_s(x,y)-G_s(-x,y)$. Further, a Clifford holomorphic function $f: V\longrightarrow \mathcal{C}l_n$ satisfying $f(x)=-f(-x)$ will give a Clifford holomorphic function $f': V'\longrightarrow E_2$. Similarly, we can induce another Cauchy transform and its conjugate from $\mathbb{S}^{n}$ to $\mathbb{R}P^{n}$ as follows. \begin{eqnarray*} T_{V'_2} f'(x')=\frac{1}{\omega_n}\int_{V'} G_{\mathbb{R}P^{n}_2}(x'-y')f'(y')dy',\quad \overline{T_{V'_2}} f'(x')=\frac{1}{\omega_n}\int_{V'} \overline{G_{\mathbb{R}P^{n}_2}(x'-y')}f'(y')dy'. \end{eqnarray*} With similar arguments as for $D_{\mathbb{R}P^{n}_1}$, we can define $D_{\mathbb{R}P^{n}_2}$ on $\mathbb{R}P^{n}$ with the bundle $E_2$, and the corresponding $\Pi$-operator is defined as $\Pi_{\mathbb{R}P^{n}_2}=\frac{1}{2}\overline{D_{\mathbb{R}P^{n}_2}}T_{V'_2}$. 
Similar arguments as for $\Pi_{\mathbb{R}P^{n}_1}$ show that $\Pi_{\mathbb{R}P^{n}_2}$ also possesses the $L^2$ isometry property. \section{$\Pi$-Operators on Cylinders and Hopf Manifolds} Let $X$ be the cylinder $C_k$ with the measure $\eta$ obtained by pushing forward the Lebesgue measure on $\mathbb{R}^{n+1}$ via the quotient map $\mathbb{R}^{n+1}\longrightarrow\mathbb{R}^{n+1}/ \mathbb{Z}^k$. Then $H=L^2(C_k,\mathbb{R})$ is a real Hilbert space, and $H\otimes \mathcal{C}l_n$ is a Clifford-Hilbert module with the inner product \begin{eqnarray*} \langle f,g\rangle=\int_{V'}\overline{f}gd\eta(x), \end{eqnarray*} where $V'$ is a domain on the cylinder $C_k$ with $\overline{V'}$ enclosed and $f, g: V'\longrightarrow \mathcal{C}l_n$. Therefore we can construct the $\Pi$-operator theory on cylinders as demonstrated in the previous section. \par Similarly, if we let $X$ be the Hopf manifold $\mathbb{S}^1\times \mathbb{S}^{n}$ with the pushforward measure obtained via the quotient map defined below, then $H=L^2(\mathbb{S}^1\times \mathbb{S}^{n},\mathbb{R})$ is a real Hilbert space and we can build the $\Pi$-operator theory on the Clifford-Hilbert module $H\otimes \mathcal{C}l_n$. More details are given below. \subsection{$\Pi$-Operators on Cylinders} For an integer $k$, $1\leq k\leq n$, we define the $k$-cylinder $C_k$ to be the manifold $\mathbb{R}^{n+1}/\mathbb{Z}^k$, where $\mathbb{Z}^k=\mathbb{Z}e_0+\mathbb{Z}e_1+...+\mathbb{Z}e_{k-1}$. Each element of $\mathbb{Z}^k$ has the form $m_0e_0+\cdots+m_{k-1}e_{k-1}$ for $m_0,\cdots,m_{k-1}\in\mathbb{Z}$ and is denoted by $\underline{t}$. For each $k$, the space $\mathbb{R}^{n+1}$ is the universal covering space of the cylinder $C_k$. Hence, there is a projection map $p_k: \mathbb{R}^{n+1}\longrightarrow C_k$. \par Let $U$ be an open subset of $\mathbb{R}^{n+1}$. It is called \emph{$k$-fold periodic} if for each $x\in U$ and $\underline{t}\in\mathbb{Z}^k$ we also have $x+\underline{t}\in U$. Hence, $U'=p_k(U)$ is an open subset of $C_k$. 
Suppose that $U\subseteq\mathbb{R}^{n+1}$ is a $k$-fold periodic open set and $f(x)$ is a $\mathcal{C}l_n$-valued function defined on $U$. We say that $f(x)$ is a \emph{$k$-fold periodic function} if $f(x)=f(x+\underline{t})$ for each $x\in U$ and $\underline{t}\in\mathbb{Z}^k$. Hence, the projection $p_k$ induces a well-defined function $f':\ U'\longrightarrow \mathcal{C}l_n$, where $f'(x')=f(x)$ for each $x'\in U'$ and $x$ is an arbitrary representative in $p_k^{-1}(x')$. Moreover, any function $f':\ U'\longrightarrow \mathcal{C}l_n$ can be lifted to a $k$-fold periodic function $f:\ U\longrightarrow \mathcal{C}l_n$, where $U=p_k^{-1}(U')$. \par In \cite{KR1}, the trivial spinor bundle $C_k\times\mathcal{C}l_n$ over $C_k$ is considered. Further, $k$ other spinor bundles $E^{(l)}$ over $C_k$ arise by identifying $(x,X)$ with $(x+\underline{m}+\underline{n}, (-1)^{m_0+m_1+...+m_{l-1}}X)$, where $l$ is an integer and $0\leq l\leq k$, $\underline{m}$ is in the lattice $\mathbb{Z}^l=\mathbb{Z}e_0+\mathbb{Z}e_1+...+\mathbb{Z}e_{l-1}$, and $\underline{n}$ is in the lattice $\mathbb{Z}^{k-l}=\mathbb{Z}e_l+\mathbb{Z}e_{l+1}+...+\mathbb{Z}e_{k-1}$. \par Let $G(x,y)=\frac{\overline{x-y}}{\|x-y\|^{n+1}}$ be the fundamental solution of the Euclidean Dirac operator. Consider the series $ \cot_{k,0}(x,y)=\sum_{\underline{m}\in \mathbb{Z}^k}G(x-y+\underline{m}), $ which converges on $\mathbb{R}^{n+1}\setminus \mathbb{Z}^k$ for $k< n-1$, see \cite{KR}. Then, the kernel of the Dirac operator on the cylinder $C_k$ with the trivial bundle has the form $\cot_{k,0}(x',y')$, which is defined on $(C_k\times C_k)\setminus \mathrm{diagonal}(C_k)$, where $\mathrm{diagonal}(C_k)=\{(x',x'): x'\in C_k\}$. More generally, for $k<n-1$ and $l\leq k$, the kernel $\cot_{k,l}(x',y')$ of the Dirac operator on $C_k$ with the bundle $E^{(l)}$ is given by applying $p_k$ to \begin{eqnarray*} \cot_{k,l}(x,y)=\sum_{\underline{m}\in \mathbb{Z}^{l},\underline{n}\in \mathbb{Z}^{k-l}}(-1)^{m_0+m_1+...+m_{l-1}}G(x-y+\underline{m}+\underline{n}). 
\end{eqnarray*} Further, with the projection map $p_k$, we can obtain the Dirac operator on $C_k$ with the bundle $E^{(l)}$, which is denoted by $D_l$. A similar argument applies to the conjugate $\overline{D_l}$ and its fundamental solution $\overline{\cot_{k,l}(x',y')}$. One also has $D_l\overline{D_l}=\overline{D_l}D_l=\Delta_l$, where $\Delta_l$ is a spinorial Laplacian, see \cite{KR}. \par Suppose $f:V\longrightarrow\mathcal{C}l_n$ satisfies $f(x+\underline{m}+\underline{n})=(-1)^{m_0+m_1+...+m_{l-1}}f(x)$, where $\underline{m}\in \mathbb{Z}^{l},\ \underline{n}\in \mathbb{Z}^{k-l}$. Then, $f$ descends via the projection map $p_k$ to a function $f':V'\longrightarrow E^{(l)}$, where $V'=p_k(V)$. If $D_l f'=0$, $f'$ is called an $E^{(l)}$ left Clifford monogenic function. \par Using the fundamental solutions of the Dirac operators, we can define the Cauchy transform on the different bundles. Let $f':V'\longrightarrow E^{(l)}$, let $S'$ be a surface lying in $V'$ and bounding a subdomain $W'$, and suppose $x'\in W'$. Then \begin{eqnarray*} T_{V'} f'(x')=\frac{1}{\omega_{n}}\int_{V'} \cot_{k,l}(x',y')f'(y')dy',\quad \overline{T_{V'}} f'(x')=\frac{1}{\omega_{n}}\int_{V'} \overline{\cot_{k,l}(x',y')}f'(y')dy'. \end{eqnarray*} Also, a non-singular boundary integral operator and its conjugate are given by \begin{eqnarray*} F_{S'}f'(x')=\frac{1}{\omega_{n}}\int_{S'}\cot_{k,l}(x',y')dp(n(y'))f'(y')d\sigma'(y'),\quad \overline{F_{S'}}f'(x')=\frac{1}{\omega_{n}}\int_{S'}\overline{\cot_{k,l}(x',y')}dp(n(y'))f'(y')d\sigma'(y'). \end{eqnarray*} Hence, the Borel-Pompeiu formula in this context is stated as follows. \begin{theorem} \cite{KR1} For $f'\in C^1(V',\mathcal{C}l_n)\cap C(\overline{ V'})$, we have \begin{align*} f'(x')=&\frac{1}{\omega_{n}}\int_{S'}\cot_{k,l}(x',y')dp(n(y'))f'(y')d\sigma'(y') +\frac{1}{\omega_{n}}\int_{V'} \cot_{k,l}(x',y')D_lf'(y')dy'. 
\end{align*} \end{theorem} As in the Euclidean case, for a function $f'$ with compact support we have $D_lT_{V'}=T_{V'}D_l=I$, and likewise $\overline{D_l}\,\overline{T_{V'}}=\overline{T_{V'}}\,\overline{D_l}=I$. \par Now we define the $\Pi$-operator on the cylinder as follows. \begin{definition} The $\Pi$-operator on the cylinder is defined by $\Pi_l=\overline{D_l}T_{V'}.$ \end{definition} Since $\Pi_l$ is induced from the $\Pi$-operator in Euclidean space, we expect results similar to those in \cite{GK}. \begin{theorem} $\Pi_l$ is an isometry on $L^2(C_k)$. \end{theorem} \begin{proof} The proof is similar to the proof of Proposition 5 in \cite{GK}. \end{proof} Next, we use a norm estimate for the $\Pi$-operator on the cylinder to establish the existence of solutions of the Beltrami equation on the cylinder. First, we state the Beltrami equation on the cylinder. \par Let $V' \subseteq C_k$ be a bounded, simply connected domain with sufficiently smooth boundary, and let $q, f': V'\longrightarrow E^{(l)}$, where $q$ is a measurable function and $f'$ is sufficiently smooth. The Beltrami equation on the cylinder reads $$ D_lf'=q\overline{D_l}f'.$$ We explained in Section 3 how an estimate of the norm of the $\Pi$-operator determines the existence and uniqueness of solutions to Beltrami equations. Next, we provide such a norm estimate for our $\Pi$-operator. \par Suppose $V=\bigcup_{i=1}^\infty V_i=p_k^{-1}(V')$, with $p_k(V_i)=V',\ i=1,2,\cdots$. Let $f:V\longrightarrow\mathcal{C}l_n$ be a piecewise continuous function with compact support, inducing $f':V'\longrightarrow E^{(l)}$. For the $\Pi$-operator on $\mathbb{R}^{n+1}$, we have $\|\Pi\|_{L^p(\mathbb{R}^{n+1})}\leq (n+1)(p^*-1)$, where $p^*=\max(p, p/(p-1))$, see \cite{NW}. \par Recall that $\Pi_l=\overline{D_l}\cot_{k,l}\ast$, where $``\ast"$ denotes the standard convolution.
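Before carrying out the estimate, it may help to spell out the fixed-point scheme of Section 3 in the present notation. The following is only a hedged sketch: the splitting $f'=\phi+T_{V'}h$ with a left monogenic summand $\phi$, and the smallness condition on $q$, are assumptions of this sketch rather than statements from the references.

```latex
% Hedged sketch of the fixed-point reduction (cf. the Introduction and
% Section 3); the monogenic summand \phi and the smallness condition on
% q are assumptions of this sketch, not results proved here.
Write $f' = \phi + T_{V'}h$ with $D_l\phi = 0$. Since $D_lT_{V'} = I$
and $\Pi_l = \overline{D_l}T_{V'}$, we obtain
\begin{eqnarray*}
D_l f' = h, \qquad
\overline{D_l} f' = \overline{D_l}\phi + \Pi_l h,
\end{eqnarray*}
so the Beltrami equation $D_l f' = q\,\overline{D_l} f'$ becomes the
fixed-point problem
\begin{eqnarray*}
h = q\,\overline{D_l}\phi + q\,\Pi_l h .
\end{eqnarray*}
If $\|q\|_{\infty}\,\|\Pi_l\|_{L^p} < 1$, the right-hand side is a
contraction in $h$, and the Banach fixed-point theorem yields a unique
solution $h \in L^p(V')$, hence a solution $f' = \phi + T_{V'}h$ of the
Beltrami equation.
```

In particular, under these assumptions a bound of the form $\|\Pi_l\|_{L^p}\leq (n+1)(p^*-1)$ gives solvability whenever $\|q\|_{\infty}<\big((n+1)(p^*-1)\big)^{-1}$.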
On each subdomain $V_i$, we have $\|\overline{D_l}\cot_{k,l}\ast f(x)\|_{L^p(V_i)}=\|\overline{D}G\ast f(x)\|_{L^p(V_i)}\leq (n+1)(p^*-1)\|f(x)\|_{L^p(V_i)}$. Hence, for the domain $V=\bigcup_{i=1}^\infty V_i$, we have $\|\overline{D_l}\cot_{k,l}\ast f(x)\|_{L^p(V)}=\|\overline{D}G\ast f(x)\|_{L^p(V)}\leq (n+1)(p^*-1)\|f(x)\|_{L^p(V)}$. Applying the projection $p_k$ on $V$, we obtain the following. \begin{theorem} $$\|\overline{D_l}\cot_{k,l}\ast f'(x')\|_{L^p(V')}\leq (n+1)(p^*-1)\|f'(x')\|_{L^p(V')}. $$ In particular, $\|\Pi_l\|_{L^p(C_k)}\leq (n+1)(p^*-1)$, where $p^*=\max(p, p/(p-1))$. \end{theorem} \subsection{$\Pi$-operators on Hopf Manifolds} A Hopf manifold is diffeomorphic to the conformally flat spin manifold $U/\Gamma=\mathbb{S}^1\times \mathbb{S}^{n}$, where $U=\mathbb{R}^{n+1}\setminus\{0\}$ and $\Gamma=\{2^k:k\in \mathbb{Z}\}$. There exists a projection $p_k:\mathbb{R}^{n+1}\setminus\{0\}\longrightarrow \mathbb{S}^1\times \mathbb{S}^{n}$ such that $p_k(2^kx)=x'$. \par Let $V\subseteq \mathbb{R}^{n+1}\setminus\{0\}$ be an open set such that $2^kx\in V$ whenever $x\in V$ and $k\in\mathbb{Z}$. Hence $p_k(V)=V'\subseteq \mathbb{S}^1\times \mathbb{S}^{n}$ is also open. A left Clifford holomorphic function $f: V\longrightarrow \mathcal{C}l_n$ satisfying $f(x)=f(2^kx)$ induces a well defined function $f':V'\longrightarrow \mathcal{C}l_n$ via the projection map $p_k$, where $f'(x')=f(x)$ for each $x'\in V'$ and $x$ is an arbitrary representative of $p_k^{-1}(x')$. \par The spinor bundle $E$ over $\mathbb{S}^1\times \mathbb{S}^{n}$ is constructed by identifying $(x, X)$ with $(2^kx, X)$ for $k\in \mathbb{Z}$ and $x\in \mathbb{R}^{n+1}\setminus\{0\}$, $X\in \mathcal{C}l_n$. By \cite{KR1}, the Cauchy kernel for $\mathbb{S}^1\times \mathbb{S}^{n}$ is given as follows. Let $C(x-y)=C_1(x-y)+2^{2-2n}C_2(x-y)$, where \begin{eqnarray*} C_1(x-y)=\sum_{k=0}^\infty G(2^kx-2^ky),\quad C_2(x-y)=G(x)\sum_{k=-\infty}^{-1} G(2^{-k}x^{-1}-2^{-k}y^{-1})G(y).
\end{eqnarray*} Here $G(x-y)=\frac{\overline{x-y}}{||x-y||^{n+1}}$ is, as before, the fundamental solution of the Euclidean Dirac operator. After applying the projection map, we obtain the Cauchy kernel $C'(x'-y')$, defined on $(\mathbb{S}^1\times \mathbb{S}^{n})\times(\mathbb{S}^1\times \mathbb{S}^{n})\setminus diagonal(\mathbb{S}^1\times \mathbb{S}^{n})$, for the Dirac operator on the Hopf manifold, which is denoted by $D'$. A function $f'$ defined on $V'\subseteq \mathbb{S}^1\times \mathbb{S}^{n}$ is left monogenic if $D'f'=0$. \par Using the kernel of the Dirac operator $D'$, we can define the Cauchy transform on $\mathbb{S}^1\times \mathbb{S}^{n}$. Let $f':V'\longrightarrow E$, and let $S'$ be a surface lying in $V'$ and bounding a subdomain $W'$. For $x'\in W'$, define \begin{eqnarray*} T_{V'} f'(x')=\frac{1}{\omega_{n+1}}\int_{V'} C'(x'-y')f'(y')dy',\quad \overline{T_{V'}} f'(x')=\frac{1}{\omega_{n+1}}\int_{V'} \overline{C'(x'-y')}f'(y')dy'. \end{eqnarray*} Also, a non-singular boundary integral operator and its conjugate are given by \begin{eqnarray*} F_{S'}f'(x')=\frac{1}{\omega_{n+1}}\int_{S'}C'(x'-y')dp(n(y'))f'(y')d\sigma'(y'),\quad \overline{F_{S'}}f'(x')=\frac{1}{\omega_{n+1}}\int_{S'}\overline{C'(x'-y')}dp(n(y'))f'(y')d\sigma'(y'). \end{eqnarray*} The Borel-Pompeiu formula is stated as follows. \begin{theorem} \cite{KR1} For $f'\in C^1(V',\mathcal{C}l_n)\cap C(\overline {V'})$, we have \begin{eqnarray*} f'(x')=\frac{1}{\omega_{n+1}}\big(\int_{S'}C'(x'-y')dp(n(y'))f'(y')d\sigma'(y')+\int_{V'} C'(x'-y')D'f'(y')dy'\big). \end{eqnarray*} \end{theorem} \begin{definition} Define the $\Pi$-operator on the Hopf manifold as $$\Pi'f'=\overline{D'}T_{V'}f'.$$ \end{definition} Since $\Pi'$ is induced from the $\Pi$-operator in Euclidean space, we expect results similar to those in \cite{GK}. \begin{theorem} $\Pi'$ is an isometry on $L^2$. \end{theorem} \begin{proof} The proof is similar to the proof of Proposition 5 in \cite{GK}. \end{proof} Now, we introduce a norm estimate for the $\Pi$-operator in this context.
Let $V' \subseteq \mathbb{S}^1\times \mathbb{S}^{n}$ be a bounded, simply connected domain with sufficiently smooth boundary, and let $q, f': V'\longrightarrow E$, where $q$ is a measurable function and $f'$ is sufficiently smooth. The Beltrami equation on the Hopf manifold reads $$ D'f'=q\overline{D'}f'.$$ Suppose $V=\bigcup_{i=1}^\infty V_i$ is the inverse image of $V'$ under $p_k$, with $p_k(V_i)=V'$. Let $f:V\longrightarrow\mathcal{C}l_n$ be a piecewise continuous function with compact support, inducing $f':V'\longrightarrow E$. For the $\Pi$-operator on $\mathbb{R}^{n+1}$, we have $\|\Pi\|_{L^p(\mathbb{R}^{n+1})}\leq (n+1)(p^*-1)$, where $p^*=\max(p, p/(p-1))$, see \cite{NW}. \par On each subdomain $V_i$, we have $\|\overline{D'}C*f(x)\|_{L^p(V_i)}=\|\overline{D}G*f(x)\|_{L^p(V_i)}$; hence for the domain $V=\bigcup_{i=1}^\infty V_i$, we have $\|\overline{D'}C*f(x)\|_{L^p(V)}=\|\overline{D}G*f(x)\|_{L^p(V)}\leq (n+1)(p^*-1)\|f(x)\|_{L^p(V)}$. Applying the projection $p_k$ on $V$, we obtain $\|\overline{D'}C'*f'(x')\|_{L^p(V')}\leq (n+1)(p^*-1)\|f'(x')\|_{L^p(V')}$, which shows the following. \begin{theorem} $\|\Pi'\|_{L^p(\mathbb{S}^1\times \mathbb{S}^{n})}\leq (n+1)(p^*-1)$, where $p^*=\max(p, p/(p-1))$. \end{theorem} \section{A $\Pi$-Operator on the Hyperbolic Upper-Half Space}\label{hyperbolic} Let $X$ be the upper-half space $\mathbb{R}^{n+1}_+$ with the hyperbolic measure. Then $H=L^2(\mathbb{R}^{n+1}_+,\mathbb{R})$ is a real Hilbert space, and $H\otimes \mathcal{C}l_n$ is a Clifford-Hilbert module with the inner product \begin{eqnarray*} \langle f,g\rangle=\int_{\Omega}\overline{f}g\frac{dx^n}{x_n^{n-1}}, \end{eqnarray*} where $\Omega$ is a subset of the upper-half space with compact closure $\overline{\Omega}$ and $f, g: \Omega\longrightarrow \mathcal{C}l_n$. The $\Pi$-operator theory on the hyperbolic upper-half space is then a special case of Section 3, as demonstrated below.
\subsection{Hyperbolic Dirac Operator} Denote the upper-half space by $\mathbb{R}^{n+1}_+=\{x_0e_0+x_1e_1+\cdots+x_ne_n : x_n>0\}$. The Poincar\'{e} half-space is the Riemannian manifold $(\mathbb{R}^{n+1}_+, ds^2)$ with the Riemannian metric $ds^2=\displaystyle \frac{dx_0^2+dx_1^2+\cdots+dx_n^2}{x_n^2}.$ The Clifford algebra $\mathcal{C}l_n$ can be expressed as $\mathcal{C}l_n=\mathcal{C}l_{n-1}+\mathcal{C}l_{n-1} e_n$. So if $A\in \mathcal{C}l_n$, there exist unique elements $B, C\in \mathcal{C}l_{n-1}$ such that $A=B+Ce_n$. This gives rise to a pair of projection maps $P$ and $Q$, where $$P:\mathcal{C}l_n\longrightarrow \mathcal{C}l_{n-1}, P(A)=B,\quad Q:\mathcal{C}l_n\longrightarrow \mathcal{C}l_{n-1}, Q(A)=C.$$ We denote $-e_nQ(A)e_n$ by $Q'(A)\in \mathcal{C}l_{n-1}$. The modified Dirac operator is defined as $$Mf=D_0f+\displaystyle\frac{n-1}{x_n}Q'f,$$ where $D_0=\sum_{i=0}^ne_i\partial_{x_i}$ is the Dirac operator on $\mathbb{R}^{n+1}$. Let $\Omega\subset\mathbb{R}^{n+1}_+$. We say that a function $f:\Omega\longrightarrow\mathcal{C}l_n$ is \emph{hypermonogenic} if $Mf(x)=0$ for each $x\in \Omega$. \par The conjugate of the modified Dirac operator is defined by $\overline{M}f=\overline{D_0}f-\frac{n-1}{x_n}Q'f,$ where $\overline{D_0}=e_0\partial_{x_0}-\sum_{i=1}^ne_i\partial_{x_i}$, see \cite{Qiao}. The next result shows the relation between $M$ and $\overline{M}$. \begin{proposition} $M^*=-\overline{M}$. \end{proposition} \begin{proof} Let $f,g \in L^2(\mathbb{R}^{n+1}_+,\mathcal{C}l_n)$ with compact support. From the decomposition $A=P(A)+Q(A)e_n$, we notice that $||f||^2_h=||Pf||^2_h+||Qf||^2_h$, where $$||f||^2_h=\int_\Omega \overline{f(x)}f(x)\frac{dx^n}{x_n^{n-1}}$$ defines the norm of $f$ in the upper-half space with the hyperbolic metric. Replacing $f$ in the previous identity by $f+g$, one easily sees that $P(f)$ is orthogonal to $Q(g)e_n$.
More specifically, \begin{eqnarray}\label{ortho} \int_\Omega \overline{P(f)}\cdot (Q(g)e_n)\frac{dx^n}{x_n^{n-1}}=0. \end{eqnarray} On the one hand, since $$\langle Mf,g\rangle=\langle\sum_{i=0}^ne_i\frac{\partial f}{\partial x_i}+\frac{n-1}{x_n}Q'f,g\rangle=\langle\sum_{i=0}^ne_i\frac{\partial f}{\partial x_i}-\frac{n-1}{x_n}e_nQfe_n,g\rangle,$$ we have \begin{align*} &\langle\sum_{i=0}^ne_i\frac{\partial f}{\partial x_i},g\rangle=\int_\Omega\overline{\sum_{i=0}^ne_i\frac{\partial f}{\partial x_i}}\cdot g\frac{dx^n}{x_n^{n-1}} =\int_\Omega\sum_{i=0}^n\overline{\frac{\partial f}{\partial x_i}}\cdot\overline{e_i}g\frac{dx^n}{x_n^{n-1}} =-\int_\Omega \overline{f}\cdot\sum_{i=0}^n\frac{\partial}{\partial x_i}(\overline{e_i}g)\frac{dx^n}{x_n^{n-1}}\\ =&-\int_\Omega \overline{f} \big(\sum_{i=0}^n \overline{e_i}\frac{\partial g}{\partial x_i}\frac{dx^n}{x_n^{n-1}}\big)-\int_\Omega \overline{f} \overline{e_n}g\frac{-(n-1)}{x_n^n}dx^n =\langle f, -\overline{D_0}g\rangle-(n-1)\int_\Omega \overline{f}\cdot e_ng\frac{dx^n}{x_n^n}. \end{align*} On the other hand, \begin{eqnarray*} \langle-\frac{n-1}{x_n}e_nQfe_n,g\rangle=-(n-1)\int_\Omega\overline{e_nQfe_n}g\frac{dx^n}{x_n^{n}} =(n-1)\int_\Omega \overline{Qfe_n}\cdot e_n g\frac{dx^n}{x_n^{n}}. \end{eqnarray*} Hence, \begin{align*} &\langle Mf,g\rangle=\langle\sum_{i=0}^ne_i\frac{\partial f}{\partial x_i}-\frac{n-1}{x_n}e_nQfe_n,g\rangle =\langle f, -\overline{D_0}g\rangle-(n-1)\int_\Omega \overline{f}\cdot e_ng\frac{dx^n}{x_n^n}+(n-1)\int_\Omega \overline{Qfe_n}\cdot e_n g\frac{dx^n}{x_n^{n}}\\ =&\langle f, -\overline{D_0}g\rangle-(n-1)\int_\Omega \overline{Pf}\cdot e_ng\frac{dx^n}{x_n^n} =\langle f, -\overline{D_0}g\rangle-(n-1)\int_\Omega \overline{Pf}\cdot e_n(Pg+Qge_n)\frac{dx^n}{x_n^n}. \end{align*} Since $e_nPg$ can be rewritten as $\pm Pge_n$, where the sign depends on whether $n$ is even or odd, it can also be regarded as $Qhe_n$ for some function $h\in L^2(\mathbb{R}^{n+1}_+,\mathcal{C}l_n)$.
Hence, from (\ref{ortho}), we can see that $Pf$ is orthogonal to $e_nPg$. Thus, the previous equation becomes \begin{eqnarray*} \langle f, -\overline{D_0}g\rangle-(n-1)\int_\Omega \overline{Pf}\cdot e_nQge_n\frac{dx^n}{x_n^n}. \end{eqnarray*} By a similar argument, the previous expression is equal to \begin{align*} &\langle f, -\overline{D_0}g\rangle-(n-1)\int_\Omega \overline{Pf+Qfe_n}\cdot e_nQge_n\frac{dx^n}{x_n^n} =\langle f, -\overline{D_0}g\rangle-(n-1)\int_\Omega \overline{f}\cdot e_nQge_n\frac{dx^n}{x_n^n}\\ =&\langle f,-\overline{D_0}g+\frac{n-1}{x_n}Q'g\rangle=\langle f,-\overline{M}g\rangle. \end{align*} Therefore, $M^*=-\overline{D_0}+\frac{n-1}{x_n}Q'=-\overline{M}$. Similarly, $\overline{M}^*=-M$. \end{proof} By a straightforward calculation, we obtain \begin{eqnarray*} M\overline{M}f=\overline{M}Mf=\Delta f-\frac{n-1}{x_n}\frac{\partial f}{\partial x_n}+(n-1)\frac{Qfe_n}{x_n^2}, \end{eqnarray*} where $\Delta$ is the Laplace operator in $\mathbb{R}^{n+1}$. In hyperbolic function theory, a \emph{hyperbolic harmonic} function $f:\Omega\longrightarrow\mathcal{C}l_n$ is defined as a solution of the equation \begin{align*} \overline{M}Mf(x)=0,\quad \text{for}\ x\in \Omega. \end{align*} Let \begin{eqnarray*} E(x,y)=\displaystyle \frac{(x-y)^{-1}}{\|x-y\|^{n-1}\|x-\widehat{y}\|^{n-1}},\ F(x,y)=\displaystyle \frac{(\widehat{x}-y)^{-1}}{\|x-y\|^{n-1}\|\widehat{x}-y\|^{n-1}}, \end{eqnarray*} where $\widehat{x}=\sum_{i=0}^{n-1}x_ie_i-x_ne_n$. The Cauchy transform is defined as \cite{EO} \begin{eqnarray*} T_\Omega f(y)=-\displaystyle \frac{2^{n-1}y_n^{n-1}}{\omega_{n+1}}\int_\Omega \big(E(x,y)f(x)-F(x,y)\widehat{f(x)}\big)dx^n, \end{eqnarray*} where $\widehat{f}=\sum_{i=0}^{n-1}f_ie_i-f_ne_n$. Also, a non-singular boundary integral operator is given by \begin{eqnarray*} F_{\partial \Omega}f(y)=\displaystyle \frac{2^{n-1}y_n^{n-1}}{\omega_{n+1}}\int_{\partial \Omega}\big(E(x,y)n(x)f(x)-F(x,y)\widehat{n}(x)\widehat{f}(x)\big)d\sigma(x).
\end{eqnarray*} Hence, we have the following Borel-Pompeiu formula. \begin{theorem}\cite{EO} Let $\Omega\subseteq \mathbb{R}^{n+1}_+$ be a bounded region with smooth boundary in $\mathbb{R}^{n+1}_+$. Suppose $f:\Omega\longrightarrow \mathcal{C}l_n$ is a $C^1$ function on $\Omega$ with a continuous extension to the closure of $\Omega$. Then for $y\in \Omega$, we have \begin{eqnarray*} f(y)=\displaystyle \frac{2^{n-1}y_n^{n-1}}{\omega_{n+1}}\int_{\partial \Omega}\big(E(x,y)n(x)f(x)-F(x,y)\widehat{n}(x)\widehat{f}(x)\big)d\sigma(x) -\displaystyle \frac{2^{n-1}y_n^{n-1}}{\omega_{n+1}}\int_\Omega \big(E(x,y)Mf(x)-F(x,y)\widehat{Mf(x)}\big)dx^n. \end{eqnarray*} \end{theorem} \begin{remark} We notice that when $f$ is a hypermonogenic function, we have \begin{eqnarray*} f(y)=\displaystyle \frac{2^{n-1}y_n^{n-1}}{\omega_{n+1}}\int_{\partial \Omega}\big(E(x,y)n(x)f(x)-F(x,y)\widehat{n}(x)\widehat{f}(x)\big)d\sigma(x). \end{eqnarray*} Further, if $f\in \wzwo(\Omega,{\mathcal{C}l_n})$, then \begin{eqnarray*} f(y)=-\displaystyle \frac{2^{n-1}y_n^{n-1}}{\omega_{n+1}}\int_\Omega \big(E(x,y)Mf(x)-F(x,y)\widehat{Mf(x)}\big)dx^n, \end{eqnarray*} in other words, $TM=I$. Applying the hyperbolic Dirac operator $M$ to both sides of this equation, we easily obtain $MT=I$. \end{remark} \subsection{Construction of the Hyperbolic $\Pi$-Operator} The generalization of the $\Pi$-operator to higher dimensions via Clifford algebras is defined as follows. \begin{definition} The hyperbolic $\Pi$-operator in $\mathbb{R}^{n+1}_+$ is defined as $\Pi_h=\overline{M}T.$ \end{definition} The following are some well-known properties of the operator $\Pi_h$.
\begin{theorem} Suppose $f\in \wzwop(\Omega)\ (1<p<\infty, k\geq 1)$. Then \begin{enumerate} \item $M\Pi_h f=\overline{M}f$, $\Pi_h Mf=\overline{M}f-\overline{M}F_{\partial\Omega}f,$ \item $F_{\partial\Omega}\Pi_h f=(\Pi_h-T\overline{M})f$, $M\Pi_h f-\Pi_h Mf=\overline{M}F_{\partial\Omega}f.$ \end{enumerate} \end{theorem} \begin{proof} The proof is a straightforward calculation. \end{proof} The following decomposition of $L^2(\Omega,\mathcal{C}l_n)$ shows that the $\Pi$-operator actually maps $L^2(\Omega,\mathcal{C}l_n)$ to $L^2(\Omega,\mathcal{C}l_n)$. \begin{theorem} \text{(\textbf{Decomposition of $L^2(\Omega,\mathcal{C}l_n)$})} $$L^2(\Omega,\mathcal{C}l_n)=L^2(\Omega,\mathcal{C}l_n)\ \cap\ Ker \overline{M}\oplus M(\wzwo(\Omega, \mathcal{C}l_n)),$$ and $$L^2(\Omega,\mathcal{C}l_n)=L^2(\Omega,\mathcal{C}l_n)\ \cap\ Ker M\oplus\overline{M}(\wzwo(\Omega, \mathcal{C}l_n)).$$ \end{theorem} \begin{remark} The proof is similar to the proof of \cite[Theorem 1]{GK}. Notice that \begin{eqnarray*} \Pi_h(L^2(\Omega,\mathcal{C}l_n)\cap Ker \overline{M})=L^2(\Omega,\mathcal{C}l_n)\cap Ker M,\quad \Pi_h(M(\wzwo(\Omega, \mathcal{C}l_n)))=\overline{M}(\wzwo(\Omega, \mathcal{C}l_n)). \end{eqnarray*} Hence, $\Pi_h$ maps $L^2(\Omega,\mathcal{C}l_n)$ to $L^2(\Omega,\mathcal{C}l_n)$. \end{remark} Further, the $\Pi$-operator is an $L^2$ isometry, which can be demonstrated as follows. \begin{theorem} For functions in $L^2(\Omega,\mathcal{C}l_n)$, we have $\Pi_h^* \Pi_h=I.$ \begin{proof} Let $f\in L^2(\Omega, \mathcal{C}l_n)$ have compact support. Then \begin{eqnarray*} \langle\Pi_h f, \Pi_h f\rangle=\langle\overline{M}Tf, \overline{M}Tf\rangle=-\langle Tf, M\overline{M}Tf\rangle =-\langle Tf, \overline{M}MTf\rangle=\langle MTf, MTf\rangle=\langle f,f\rangle. \end{eqnarray*} Here we use $\overline{M}^*=-M$.
\end{proof} \end{theorem} \begin{remark} The argument of Section 3, which relates norm estimates for $\Pi$-operators to the solvability of Beltrami equations, can also be applied here; this further illustrates the role of $\Pi$-operators in the study of Beltrami equations in the hyperbolic setting. \end{remark}
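To make the remark concrete, here is a brief hedged sketch. The equation $Mf=q\overline{M}f$ is taken as the natural hyperbolic analogue of the Beltrami equations considered above, and the hypermonogenic summand $\phi$ together with the smallness condition on $q$ are our own assumptions.

```latex
% Hedged sketch only: M f = q \overline{M} f is the assumed hyperbolic
% analogue of the Beltrami equation, and the splitting f = \phi + T h
% (with M\phi = 0) is our own ansatz.
Write $f = \phi + Th$ with $M\phi = 0$. Since $MT = I$ and
$\Pi_h = \overline{M}T$, we get $Mf = h$ and
$\overline{M}f = \overline{M}\phi + \Pi_h h$, so the equation
$Mf = q\,\overline{M}f$ becomes
\begin{eqnarray*}
h = q\,\overline{M}\phi + q\,\Pi_h h .
\end{eqnarray*}
Since $\Pi_h$ is an isometry on $L^2(\Omega,\mathcal{C}l_n)$, the map
$h \mapsto q\,\overline{M}\phi + q\,\Pi_h h$ is a contraction whenever
$\|q\|_{\infty} < 1$, and the Banach fixed-point theorem then yields a
unique $h$, hence a solution $f = \phi + Th$.
```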
\section{Introduction} Let $G$ and $H$ be groups each of which acts upon the other (on the right), \[ G\times H \rightarrow G, \; (g,h) \mapsto g^h; \; \; H\times G \rightarrow H, \; (h,g) \mapsto h^g \] and on itself by conjugation, in such a way that for all $g,g_1 \in G$ and $h,h_1 \in H$, \begin{equation} \label{eq:0} g^{\left( h^{g_1} \right) } = \left( \left( g^{g^{-1}_1} \right) ^h \right) ^{g_1} \; \; \mbox{and} \; \; h^{\left( g^{h_1}\right) } = \left( \left( h^{h_1^{-1}} \right) ^g \right) ^{h_1}. \end{equation} In this situation we say that $G$ and $H$ act {\em compatibly} on each other. The derivative of $G$ under (the action of) $H$, $[G,H]$, is defined to be the subgroup $[G,H] = \langle g^{-1}g^h \mid \ g \in G, h \in H\rangle$ of $G$. Similarly, the subgroup $[H,G] = \langle h^{-1}h^g \mid \ h \in H, g \in G \rangle$ of $H$ is called derivative of $H$ under $G$. In particular, if $G=H$ and all actions are conjugations, then the derivative $[G,H]$ becomes the derived subgroup $G'$ of $G$. Schur \cite[10.1.4]{Rob} showed that if $G$ is central-by-finite, then the derived subgroup $G'$ is finite and thus, the group $G$ is a BFC-group. Neumann \cite[14.5.11]{Rob} improved Schur's theorem in a certain way, showing that the group $G$ is a BFC-group if and only if the derived subgroup $G'$ is finite, and this occurs if and only if $G$ contains only finitely many commutators. Latter, Wiegold proved a quantitative version of Neumann's result: if $G$ contains exactly $m$ commutators, then the order of the derived subgroup $G'$ is finite with $m$-bounded order \cite[Theorem 4.7]{W}. Now, the next result can be viewed as a version of Wiegold's result in the context of actions and derivatives subgroups $[G,H]$ and $[H,G]$, where $G$ and $H$ are groups acting compatibly on each other. \begin{thmA} Let $G$ and $H$ be groups that act compatibly on each other. Suppose that the set $\{g^{-1}g^h \mid g \in G, \ h \in H\} \subseteq [G,H]$ has exactly $m$ elements. 
Then $[G,H]$ is finite, with $m$-bounded order. \end{thmA} It should be noted that the structure of derivative subgroups provides important information about the structure of the non-abelian tensor product of groups (see for instance \cite{BNR,Nak,NR01,Vis,T}). In this direction, we want to describe quantitative results for the non-abelian tensor product of groups (cf. \cite{BNR}). Let $H^{\varphi}$ be an extra copy of $H$, isomorphic via $\varphi : H \rightarrow H^{\varphi}, \; h \mapsto h^{\varphi}$, for all $h\in H$. Consider the group $\eta(G,H)$ defined in \cite{Nak} as $$\begin{array}{ll} {\eta}(G,H) = \langle G,H^{\varphi}\ | & [g,{h}^{\varphi}]^{g_1}=[{g}^{g_1},({h}^{g_1})^{\varphi}], \; [g,{h}^{\varphi}]^{h^{\varphi}_1} = [{g}^{h_1}, ({h}^{h_1})^{\varphi}] , \\ & \ \forall g,g_1 \in G, \; h, h_1 \in H \rangle . \end{array}$$ We observe that when $G=H$ and all actions are conjugations, $\eta (G,H)$ becomes the group $\nu (G)$ introduced in \cite{NR1}: $$\begin{array}{ll} {\nu}(G) = \langle G \cup G^{\varphi}\ | & [g_1,{g_2}^{\varphi}]^{g_3}=[{g_1}^{g_3},({g_2}^{g_3})^{\varphi}] = [g_1,{g_2}^{\varphi}]^{g^{\varphi}_3}, \ g_i \in G \rangle . \end{array}$$ It is a well known fact (see \cite[Proposition 2.2]{Nak}) that the subgroup $[G, H^{\varphi}]$ of $\eta(G,H)$ is canonically isomorphic with the {\em non-abelian tensor product} $G \otimes H$, as defined by Brown and Loday in their seminal paper \cite{BL}, the isomorphism being induced by $g \otimes h \mapsto [g, h^{\varphi}]$ (see also Ellis and Leonard \cite{EL}). It is clear that the subgroup $[G,H^{\varphi}]$ is normal in $\eta(G,H)$ and one has the decomposition \begin{equation} \label{eq:decomposition} \eta(G,H) = \left ( [G, H^{\varphi}] \cdot G \right ) \cdot H^{\varphi}, \end{equation} where the dots mean (internal) semidirect products. For a deeper discussion of non-abelian tensor product and related constructions we refer the reader to \cite{K,NR}. 
An element $\alpha \in \eta(G,H)$ is called a {\em tensor} if $\alpha = [a,b^{\varphi}]$ for suitable $a\in G$ and $b\in H$. We write $T_{\otimes}(G, H)$ to denote the set of all tensors (in $\eta(G,H)$). When $G = H$ and all actions are by conjugation, we simply write $T_{\otimes}(G)$ instead of $T_{\otimes}(G,G)$. The influence of the set of tensors in the general structure of the non-abelian tensor product and related constructions was considered for instance in \cite{BNR,BR1,BR2,LT,NR2}. In \cite{BNR} the authors proved that if the set of all tensors $T_{\otimes}(G,H)$ is finite, then the non-abelian tensor product $[G,H^{\varphi}]$ is finite. Here we obtain the following quantitative version: \begin{thmB} Let $G$ and $H$ be groups that act compatibly on each other. Suppose that there exist exactly $m$ tensors in $\eta(G,H)$. Then the non-abelian tensor product $[G,H^{\varphi}]$ is finite with $m$-bounded order. \end{thmB} An immediate consequence of the above theorem is a quantitative version of the a well known result due to Ellis \cite{Ellis} concerning the finiteness of the non-abelian tensor product of finite groups (cf. \cite{BNR,LT,T}). See also Theorem \ref{cor.bound} and Remark \ref{rem.finite}, below. It is well known that the finiteness of the non-abelian tensor square $G \otimes G$, does not imply that $G$ is a finite group (and so, the group $\nu(G)$ cannot be finite). A useful result, due to Parvizi and Niroomand \cite[Theorem 3.1]{NP}, provides a sufficient condition: if $G$ is a finitely generated group in which the non-abelian tensor square is finite, then $G$ is finite (see also \cite[Remark 5]{NR2} for more details). The following result is a quantitative version of the above result and is a refinement of Theorem B in the context of the non-abelian tensor square of groups. \begin{thmC} Let $G$ be a group. Suppose that there exist exactly $m$ tensors in $\nu(G)$. 
Then, \begin{itemize} \item[(a)] The non-abelian tensor square $[G,G^{\varphi}]$ is finite with $m$-bounded order. More specifically, $|[G,G^{\varphi}]| \leqslant m^{m \cdot n}$, where $n$ is the order of the derived subgroup $G'$; \item[(b)] Additionally, if the abelianization $G^{ab}$ is finitely generated, then the group $G$ is finite, with $m$-bounded order. \end{itemize} \end{thmC} Note that the assumption that the abelianization $G^{ab}$ is finitely generated is necessary. For instance, the Pr\"ufer group $C_{p^{\infty}}$ is an infinite group such that $T_{\otimes}(C_{p^{\infty}}) = \{1\} = [C_{p^{\infty}},C_{p^{\infty}}^{\varphi}]$. We also obtain a list of equivalent conditions related to the finiteness of the non-abelian tensor square and the structure of the group $\nu(G)$ (see Theorem \ref{thm.finiteness}, below). \section{Proofs} The following result is a consequence of \cite[Proposition 2.3]{BL}. \begin{prop} \label{ident} Let $G$ and $H$ be groups acting compatibly on each other. The following statements hold in $\eta(G,H)$: \begin{itemize} \item[(a)] There exists an action of the free product $G\ast H$ on $[G,H^{\varphi}]$ so that for all $g\in G$, $h\in H$, $p\in G\ast H$: $$[g,h^{\varphi}]^p=[g^p, (h^p)^{\varphi}];$$ \item[(b)] There are epimorphisms of groups $$\lambda:[G,H^{\varphi}] \to [G,H], \; \mu: [G,H^{\varphi}] \to [H,G]$$ such that $([g,h^{\varphi}])\lambda = g^{-1} g^h, \ ([g,h^{\varphi}])\mu=h^{-g}h$, for each $g\in G$, $h\in H$; \item[(c)] The actions of $G$ on $\ker (\mu)$ and of $H$ on $\ker (\lambda)$ are trivial. \end{itemize} \end{prop} The next lemma is an immediate consequence of the definition of $\eta(G,H)$ and Proposition \ref{ident}(c). \begin{lem} \label{lem.quo} If $G$ and $H$ are groups that act compatibly on each other, then $\ker(\mu) \cap \ker (\lambda)$ is a central subgroup of $\eta(G,H)$. \end{lem} For the reader's convenience we restate Theorem A. 
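As a computational aside (not part of the paper's proofs), the quantities appearing in Theorem A can be illustrated on a small example. The sketch below, in plain Python with permutations as tuples, computes the set $D=\{g^{-1}g^h\}$ and the derivative subgroup $[G,H]$ for $G=H=S_3$ acting on itself by conjugation; in this case $g^{-1}g^h$ is the ordinary commutator $[g,h]$, so $D$ is the alternating group $A_3$ and $m=|[G,H]|=3$.

```python
from itertools import permutations, product

# Permutations of {0,1,2} as tuples; compose(p, q) applies q first, then p.
def compose(p, q):
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

# With the conjugation action g^h = h^{-1} g h, the element g^{-1} g^h
# is the ordinary commutator [g, h].
D = {compose(compose(inverse(g), inverse(h)), compose(g, h))
     for g in S3 for h in S3}
m = len(D)

# Close D under composition to obtain the derivative subgroup [G, H].
GH = set(D)
changed = True
while changed:
    changed = False
    for a, b in product(list(GH), repeat=2):
        c = compose(a, b)
        if c not in GH:
            GH.add(c)
            changed = True

# Here D is already the alternating group A_3, so m = |[G, H]| = 3.
print(m, len(GH))
```

The closure step is redundant for $S_3$ (the commutators already form a subgroup), but it is needed in general, since $D$ only generates $[G,H]$.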
\begin{thmA} Let $G$ and $H$ be groups that act compatibly on each other. Suppose that the set $\{g^{-1}g^h \mid g \in G, \ h \in H\} \subseteq [G,H]$ has exactly $m$ elements. Then the derivative subgroup $[G,H]$ is finite with $m$-bounded order. \end{thmA} \begin{proof} Put $D=\{g^{-1}g^h \mid g \in G, \ h \in H\}$. For $g\in G$ and $h,k\in H$, let us write $[g,h]=g^{-1}g^h$ and $[g,h,k]=[[g,h],k]$. The compatibility of the actions gives us that $[g,h]^x=[g^x,h^x]$, for all $x,g \in G$ and $h \in H$. Thus, $D$ is a normal subset of $[G,H]$ and, as $|D|=m$, for each $\delta \in D$ we have $[[G,H]:C_{[G,H]}(\delta)]\leq m$. Consequently, $\bigcap_{\delta \in D}C_{[G,H]}(\delta)$ has finite $m$-bounded index in $[G,H]$ and, by \cite[Theorem 4.7]{W}, the derived subgroup $[G,H]'$ is finite with $m$-bounded order. Without loss of generality we may assume that $[G,H]$ is abelian. Since for all $x,g \in G$, $h,k \in H$, we have $ [[g,h],k]^x=[[g^x,h^x],k^x]$ and \[ [[g,h],k]^2=([g,h]^{-1}[g,h]^k)^2 = [g,h]^{-2}[g,h]^{2k}= [[g,h]^2, k] \in D, \] we conclude that the abelian finitely generated subgroup $[[G,H],H]$ is normal in $G$ and each generator of this subgroup has $m$-bounded order. From this we deduce that $[[G,H],H]$ is finite with $m$-bounded order and we may assume that $H$ acts trivially on $[G,H]$. Hence, for all $g \in G$ and $h\in H$, \[ [g,h]^2=[g,h][g,h]^h=g^{-1}g^hg^{-h}g^{h^2}=[g,h^2] \in D. \] Since $|D|=m$, it follows that every element $[g,h]$ has finite $m$-bounded order. We conclude that the order of the derivative subgroup $[G,H]$ is $m$-bounded. The proof is complete. \end{proof} \begin{rem} \label{rem.derivatives} Since $[G,H]$ and $[H,G]$ are epimorphic images of the non-abelian tensor product $[G,H^{\varphi}]$, the finiteness of $[G,H^{\varphi}]$ implies that $[G,H]$ and $[H,G]$ are finite. However, the converse does not hold in general. 
In fact, let $F_m$ and $F_n$ be free groups of finite ranks $m$ and $n$, respectively, where $m,n \geq 1$, and suppose that these groups act trivially on each other. Thus $[F_m,F_n]=\{1\}$ and $[F_n,F_m]=\{1\}$ are finite, but by \cite[Proposition 2.4]{BL}, $[F_m, (F_n)^{\varphi}]\cong (F_m)^{ab} \otimes_{\mathbb Z} (F_n)^{ab}$, which is not finite. \end{rem} Now we will deal with Theorem B: {\it Let $G$ and $H$ be groups that act compatibly on each other. Suppose that there exist exactly $m$ tensors in $\eta(G,H)$. Then the non-abelian tensor product $[G,H^{\varphi}]$ is finite with $m$-bounded order.} \begin{cor} \label{cor.m-bound} Let $G$ and $H$ be groups that act compatibly on each other. Suppose that the sets $\{g^{-1}g^h \mid g \in G, \ h \in H\} \subseteq [G,H]$ and $\{h^{-1}h^g \mid g \in G, \ h \in H\} \subseteq [H,G]$ have at most $m$ elements. Then the index $n = |[G,H^{\varphi}]: \ker(\lambda) \cap \ker(\mu)|$ is finite and $m$-bounded. \end{cor} \begin{proof} By Theorem A, both derivative subgroups $[G,H]$ and $[H,G]$ are finite groups with $m$-bounded orders. Since $|[G,H^{\varphi}]: \ker(\lambda)|=|[G,H]|$ and $|[G,H^{\varphi}]: \ker(\mu)|=|[H,G]|$, it follows that $\ker(\lambda) \cap \ker(\mu)$ has index at most $|[G,H]| \cdot |[H,G]|$. The proof is complete. \end{proof} \begin{lem} \label{lem.finite} Let $G$ and $H$ be groups that act compatibly on each other. Suppose that there are exactly $m$ tensors in $\eta(G,H)$. Then for every $x \in G$ and $y \in H$ we can write: $$[x,y^{\varphi}]^{n+1} = [x,(y^2)^{\varphi}][x^y,y^{\varphi}]^{n-1},$$ where $n = |[G,H^{\varphi}]/(\ker(\mu)\cap \ker(\lambda))|$. \end{lem} \begin{proof} Since $|T_{\otimes}(G,H)|=m$, each of the sets $\{g^{-1}g^h \mid g \in G, h \in H\}$ and $\{h^{-1}h^g\mid g \in G, h \in H\}$ has at most $m$ elements. By Theorem A, the derivative subgroups $[G,H]$ and $[H,G]$ are finite with $m$-bounded order. 
Moreover, the index $|[G,H^{\varphi}]: \ker(\mu) \cap \ker(\lambda)| = n$ is finite (Corollary \ref{cor.m-bound}). We conclude that for every $x \in G$ and $y \in H$ the element $[x,y^{\varphi}]^n \in \ker(\mu) \cap \ker(\lambda)$. Thus, by Lemma \ref{lem.quo}, $[x,y^{\varphi}]^n \in Z(\eta(G,H))$ and so, $[x,y^{\varphi}]^{n+1} = x^{-1}(y^{-1})^{\varphi}x[x,y^{\varphi}]^{n}y^{\varphi}$. Further, \begin{eqnarray*} [x,y^{\varphi}]^{n+1} & = & x^{-1}(y^{-1})^{\varphi}x[x,y^{\varphi}]^{n}y^{\varphi} \\ & = & [x,(y^2)^{\varphi}] (y^{-1})^{\varphi} [x,y^{\varphi}]^{n-1}y^{\varphi} \\ & = & [x,(y^2)^{\varphi}] ([x,y^{\varphi}]^{n-1})^{y^{\varphi}}\\ & = & [x,(y^2)^{\varphi}] [x^y,y^{\varphi}]^{n-1}, \ \text{by definition of $\eta(G,H)$}, \end{eqnarray*} which establishes the formula. \end{proof} We are now in a position to prove Theorem B. \begin{proof}[Proof of Theorem B] By Lemma \ref{lem.quo}, the subgroup $\ker(\mu) \cap \ker(\lambda)$ is a central subgroup of $\eta(G,H)$. Set $N=\ker(\mu) \cap \ker(\lambda)$ and $n = |[G,H^{\varphi}]/N|$. By Corollary \ref{cor.m-bound}, the index $n$ is $m$-bounded. We claim that every element in $[G,H^{\varphi}]$ can be written as a product of at most $\displaystyle{m \cdot n}$ tensors. Indeed, suppose that an element $\alpha \in [G,H^{\varphi}]$ can be expressed as a product of $r$ tensors but cannot be written as a product of fewer tensors. If $r> m \cdot n$, then one of the tensors must appear in the product at least $n+1$ times. In particular, since the set of tensors is normal and, by definition of $\eta(G,H)$, $[g,h^{\varphi}]^{x} = [g^x,(h^x)^{\varphi}]$ and $[g,h^{\varphi}]^{y^{\varphi}} = [g^y,(h^y)^{\varphi}]$, for all $g,x\in G$ and $h,y \in H$, we can write $$ \alpha = [a,b^{\varphi}]^{n+1}[a_{n+2},b_{n+2}^{\varphi}]\ldots [a_{r},b_{r}^{\varphi}],$$ where $a,a_{n+2},\ldots,a_r \in G$ and $b,b_{n+2},\ldots,b_r \in H$. 
By Lemma \ref{lem.finite}, $$[a,b^{\varphi}]^{n+1} = [a,(b^2)^{\varphi}][a^b,b^{\varphi}]^{n-1}.$$ It follows that $\alpha$ can be rewritten as a product of $r-1$ tensors, contrary to the minimality of $r$. From this we conclude that $r \leqslant m \cdot n$. Now, since there exist at most $m$ tensors, we conclude that $\vert [G,H^{\varphi}] \vert \leqslant m^{m\cdot n}$, as well. In particular, $[G,H^{\varphi}]$ is finite with $m$-bounded order. The proof is complete. \end{proof} In \cite{M}, Moravec proved that if $G$ and $H$ are locally finite groups of finite exponent acting compatibly on each other, then there is a bound for the exponent of the non-abelian tensor product $G \otimes H$ in terms of the exponents of the groups involved. This bound depends on the positive solution of the restricted Burnside problem (Zel'manov, \cite{ze1,ze2}). Using the general description of the group $\eta(G,H)$ we present an explicit bound for the exponent of the non-abelian tensor product of groups when $G$ and $H$ are finite. Moreover, we present another proof of Ellis' result \cite{Ellis}. \begin{thm} \label{cor.bound} Let $G$ and $H$ be finite groups that act compatibly on each other. Then the non-abelian tensor product $[G,H^{\varphi}]$ is finite. Moreover, the exponent $\exp([G,H^{\varphi}])$ is finite and $\{|G|,|H|\}$-bounded. \end{thm} \begin{proof} By Lemma \ref{lem.quo}, $\ker(\mu) \cap \ker(\lambda)$ is a central subgroup of $\eta(G,H)$. Set $n = [[G,H^{\varphi}]:\ker(\mu) \cap \ker(\lambda)]$. Note that $n$ divides $|G|\cdot |H|$, because $[G,H] \leqslant G$ and $[H,G] \leqslant H$. Since $|\eta(G,H)/(\ker(\mu) \cap \ker(\lambda))| = |G|\cdot |H|\cdot n$, it follows that the derived subgroup $\eta(G,H)'$ is finite and $\exp(\eta(G,H)')$ divides $|G| \cdot |H| \cdot n$ (Schur's theorem \cite[10.1.4]{Rob}). In particular, the non-abelian tensor product $[G,H^{\varphi}]$ is finite and $\exp([G,H^{\varphi}])$ divides $|G| \cdot |H| \cdot n$. 
The proof is complete. \end{proof} \begin{rem} \label{rem.finite} Since the proof of the above result is based on the general structure of $\eta(G,H)$ (cf. \cite{Nak}) and on Schur's theorem \cite[10.1.4]{Rob}, it becomes evident that it provides only a crude bound for both the order and the exponent of the non-abelian tensor product $[G,H^{\varphi}]$. However, the advantages of these results are the explicit bounds and the elementary proofs (without using homological methods). See \cite{M} for more details. Recently, other proofs of this result which are of non-homological nature have appeared (see for instance \cite{BNR,LT,T}). \end{rem} The remainder of this section will be devoted to obtaining finiteness conditions for the non-abelian tensor square of groups. \begin{lem} \label{lem.abelianization} \cite[Theorem C, (a)]{BNR} Let $G$ be a group with finitely generated abelianization. Assume that the diagonal subgroup $\Delta(G)$ is periodic. Then the abelianization $G^{ab}$ is finite. Moreover, $G^{ab}$ is isomorphic to some subgroup of $\Delta(G)$. \end{lem} For the reader's convenience we restate Theorem C: \\ \noindent {\bf Theorem C.}{ Let $G$ be a group. Suppose that there exist exactly $m$ tensors in $\nu(G)$. Then, \begin{itemize} \item[(a)] The non-abelian tensor square $[G,G^{\varphi}]$ is finite, with $m$-bounded order. More specifically, $|[G,G^{\varphi}]| \leqslant m^{m \cdot n}$, where $n$ is the order of the derived subgroup $G'$; \item[(b)] Additionally, if the abelianization $G^{ab}$ is finitely generated, then the group $G$ is finite, with $m$-bounded order. \end{itemize} } \begin{proof} \noindent (a). Applying Theorem B to $[G,G^\varphi]$ we deduce that the non-abelian tensor square is finite with $m$-bounded order. Arguing as in the proof of Theorem B we conclude that $|[G,G^{\varphi}]|\leq m^{m\cdot n}$. \\ \noindent (b). 
By the previous item, the non-abelian tensor square $[G,G^{\varphi}]$ and the derived subgroup $G'$ are finite with $m$-bounded orders. Now, it suffices to prove that the abelianization is finite with $m$-bounded order. By Lemma \ref{lem.abelianization}, the abelianization $G^{ab}$ is isomorphic to a subgroup of the diagonal subgroup $\Delta(G)$. Since $\Delta(G) \leqslant [G,G^{\varphi}]$, it follows that $\Delta(G)$ is finite with $m$-bounded order. The proof is complete. \end{proof} It should be noted that the next result makes evident an interesting relation between the construction $\nu(G)$ and the non-abelian tensor square $G \otimes G$. More precisely, we collect a list of equivalences relating the set of commutators of the group $\nu(G)$ and the set of tensors $T_{\otimes}(G)$. \begin{thm} \label{thm.finiteness} Let $G$ be a group. The following properties are equivalent. \begin{itemize} \item[(a)] $\nu(G)$ is a BFC-group; \item[(b)] The set of all commutators $\{[\alpha,\beta] \mid \alpha,\beta \in \nu(G)\}$ is finite; \item[(c)] The derived subgroup $\nu(G)'$ is finite; \item[(d)] The non-abelian tensor square $[G,G^{\varphi}]$ is finite; \item[(e)] $G$ is a BFC-group and $G^{ab} \otimes_{\mathbb{Z}}G^{ab}$ is finite; \item[(f)] The set of tensors $T_{\otimes}(G) = \{[g,h^{\varphi}] \mid g,h \in G\} \subseteq \nu(G)$ is finite. \end{itemize} \end{thm} \begin{proof} The equivalences $(a) \Leftrightarrow (b) \Leftrightarrow (c)$ are immediate consequences of Neumann's result \cite[14.5.11]{Rob}. The equivalences \ $(d) \Leftrightarrow (e)$ and $(d) \Leftrightarrow (f)$ were proved in \cite[Corollary 1.1]{BNR} and \cite[Theorem A]{BNR}, respectively. It is clear that $(b)$ implies $(f)$. Finally, if part $(f)$ holds then, from the decomposition (\ref{eq:decomposition}) and items $(d)$, $(e)$, we obtain $(a)$. The proof is complete. \end{proof} \noindent{\bf Acknowledgements.} The authors wish to thank I. 
Snopche for interesting discussions. This work was partially supported by FAPDF - Brazil, Grant: 0193.001344/2016.
# What is the least common multiple of the set of numbers 8, 12, 16, 36?

Sep 4, 2016

144

#### Explanation:

$8 = 2^3$
$12 = 2^2 \cdot 3$
$16 = 2^4$
$36 = 2^2 \cdot 3^2$

So the lowest common multiple must be $2^4 \cdot 3^2$, or 144.

Sep 5, 2016

$144$

#### Explanation:

The first thing to notice is that 8 is a factor of 16 and 12 is a factor of 36. Therefore we do not need to consider 8 and 12 at all; find the LCM of 16 and 36 by taking the product of their prime factors.

$16 = 2 \times 2 \times 2 \times 2 = 2^4$
$36 = 2 \times 2 \times 3 \times 3 = 2^2 \times 3^2$

LCM $= 2 \times 2 \times 2 \times 2 \times 3 \times 3 = 2^4 \times 3^2 = 144$

A multiple of 16 must include $2^4$, and a multiple of 36 must include $2^2$ and $3^2$. The LCM must have the highest power of each, hence $2^4 \times 3^2$.
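The arithmetic above is easy to reproduce programmatically; a minimal sketch using the standard identity lcm(a, b) = a * b / gcd(a, b), folded over the whole list:

```python
import math
from functools import reduce

def lcm(*nums):
    # lcm(a, b) = a * b // gcd(a, b), folded over all the numbers
    return reduce(lambda a, b: a * b // math.gcd(a, b), nums)

# Cross-check against the prime factorizations above: 16 = 2^4 and
# 36 = 2^2 * 3^2, so the LCM must be 2^4 * 3^2 = 144.
assert lcm(8, 12, 16, 36) == 2**4 * 3**2 == 144
print(lcm(8, 12, 16, 36))   # 144
```

Note that `lcm(16, 36)` alone gives the same answer, matching the observation that 8 and 12 can be ignored.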
Succinyl-CoA:3-ketoacid CoA transferase deficiency

Succinyl-CoA:3-ketoacid CoA transferase (SCOT) deficiency is an inherited disorder that impairs the body's ability to break down ketones, which are molecules produced in the liver during the breakdown of fats. The signs and symptoms of SCOT deficiency typically appear within the first few years of life. Affected individuals experience episodes of extreme tiredness (lethargy), appetite loss, vomiting, rapid breathing, and, occasionally, seizures. These episodes, which are called ketoacidotic attacks, sometimes lead to coma. About half of affected individuals have a ketoacidotic attack within the first 4 days of life. Affected individuals have no symptoms of the disorder between ketoacidotic attacks. People with SCOT deficiency usually have a permanently elevated level of ketones in their blood (persistent ketosis). If the level of ketones gets too high, which can be brought on by infections, fevers, or periods without food (fasting), a ketoacidotic attack can occur. The frequency of ketoacidotic attacks varies among affected individuals.

The prevalence of SCOT deficiency is unknown. More than 20 cases of this condition have been reported in the scientific literature.

Mutations in the OXCT1 gene cause SCOT deficiency. The OXCT1 gene provides instructions for making an enzyme called succinyl-CoA:3-ketoacid CoA transferase (SCOT). The SCOT enzyme is made in the energy-producing centers of cells (mitochondria). The enzyme plays a role in the breakdown of ketones, which are an important source of energy during fasting or when energy demands are increased, such as during illness or when exercising. 
OXCT1 gene mutations result in the production of a SCOT enzyme with little or no function. A reduction in the amount of functional enzyme leads to an inability to break down ketones, resulting in decreased energy production and an elevated level of ketones in the blood. If these signs become severe, a ketoacidotic attack can occur. Individuals with mutations that create an enzyme with partial function are still prone to ketoacidotic attacks, but are less likely to have persistent ketosis.

Inheritance Pattern

This condition is inherited in an autosomal recessive pattern, which means both copies of the gene in each cell have mutations. The parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene, but they typically do not show signs and symptoms of the condition. 
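The autosomal recessive pattern described above can be made concrete with a short enumeration of parental allele combinations (a generic Punnett-square sketch; the allele labels are illustrative and not specific to OXCT1):

```python
from itertools import product
from fractions import Fraction

# Each carrier parent contributes either a working allele 'A'
# or a mutated allele 'a' with equal probability.
parent1 = parent2 = ['A', 'a']

offspring = [''.join(sorted(pair)) for pair in product(parent1, parent2)]
# offspring -> ['AA', 'Aa', 'Aa', 'aa']

affected = Fraction(offspring.count('aa'), len(offspring))  # both copies mutated
carrier = Fraction(offspring.count('Aa'), len(offspring))   # one mutated copy

print(f"affected: {affected}, carrier: {carrier}")  # affected: 1/4, carrier: 1/2
```

This reflects the statement above: a child is affected only when it inherits a mutated copy from each carrier parent, which happens in 1 of the 4 equally likely combinations.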
Other Names for This Condition: 3-oxoacid CoA transferase deficiency; ketoacidosis due to SCOT deficiency; SCOT deficiency; succinyl-CoA 3-oxoacid transferase deficiency; succinyl-CoA:3-oxoacid CoA transferase deficiency; succinyl-CoA:acetoacetate transferase deficiency
Reviewed: December 2011
\section{Introduction} A tensor product surface is the image of a map ${\mathbb{P}^1 \times \mathbb{P}^1} \longrightarrow \P^3$. Such surfaces arise in geometric modeling, and it is often useful to find the implicit equation for the surface. Standard tools such as Gr\"obner bases and resultants tend to be slow, and the best current methods rely on Rees algebra techniques. The use of such methods was pioneered by the geometric modeling community (e.g. Sederberg-Chen \cite{sc}, Sederberg-Goldman-Du \cite{sgd}, Sederberg-Saito-Qi-Klimaszewski \cite{ssqk}, Cox-Goldman-Zhang \cite{cgz}). Further work on using Rees algebras in implicitization appears in Bus\'e-Jouanolou \cite{bj}, Bus\'e-Chardin \cite{bc}, Botbol \cite{bot} and Botbol-Dickenstein-Dohm \cite{bdd}; see Cox \cite{cox2} for a nice overview. A key tool is the approximation complex $\mathcal{Z}$, introduced by Herzog-Simis-Vasconcelos in \cite{hsv1}, \cite{hsv2}. \begin{defn} Let $I = \langle f_1, \ldots, f_n \rangle \subseteq R=k[x_1,\ldots x_m]$, and let $K_i \subseteq \Lambda^i(R^n)$ be the kernel of the $i^{th}$ Koszul differential on $\{f_1, \ldots, f_n\}$. The approximation complex $\mathcal{Z}$ is a complex of $S=R[y_1,\ldots, y_n]$ modules, with $i^{th}$ term $\mathcal{Z}_i = S \otimes_R K_i$, and differential the Koszul differential on $\{y_1, \ldots, y_n\}$. \end{defn} It follows from Definition 1.1 that $H_0(\mathcal{Z})$ is the symmetric algebra $S_I$ on $I$, and that $K_1$ is ${\mathrm{Syz}}(I)$. For a fixed degree $\mu$, the matrix representing the first differential $d^1$ in $\mathcal{Z}$ in degree $\mu$ is obtained by rewriting each syzygy on $I$ \[ \sum_{i=1}^n g_i e_i \mbox{ with } \sum_{i=1}^n g_i f_i=0 \] as $\sum_{i=1}^n g_i y_i$, but in terms of a choice of basis for $R_\mu$, so that the entries of $d^1_\mu$ are elements of $k[y_1,\ldots, y_n]$. This generalizes to the bigraded setting. 
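As a toy illustration of this rewriting (an aside, not taken from the paper): for $I=\langle x^2, xy, y^2\rangle \subseteq k[x,y]$, the two syzygies $(y,-x,0)$ and $(0,y,-x)$ become $yy_1-xy_2$ and $yy_2-xy_3$; expanding in the basis $\{x,y\}$ of $R_1$ gives the $2\times 2$ matrix $\left[\begin{smallmatrix} -y_2 & -y_3 \\ y_1 & y_2\end{smallmatrix}\right]$ with entries in $k[y_1,y_2,y_3]$, whose determinant $y_1y_3-y_2^2$ vanishes after substituting $y_i\mapsto f_i$. The sketch below verifies the syzygies and this vanishing numerically at random integer points.

```python
import random

# I = <x^2, xy, y^2> in k[x, y]
f = [lambda x, y: x * x, lambda x, y: x * y, lambda x, y: y * y]

# Determinant of the degree-1 matrix [[-y2, -y3], [y1, y2]] built
# from the two syzygies, i.e. y1*y3 - y2^2.
def det_d1(y1, y2, y3):
    return (-y2) * y2 - (-y3) * y1

random.seed(2)
for _ in range(100):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    f1, f2, f3 = (fi(x, y) for fi in f)
    # The syzygies (y, -x, 0) and (0, y, -x) annihilate (f1, f2, f3):
    assert y * f1 - x * f2 == 0
    assert y * f2 - x * f3 == 0
    # ...and det(d^1) vanishes after substituting y_i -> f_i:
    assert det_d1(f1, f2, f3) == 0
print("syzygies and det(d^1) check out")
```

Here the determinant recovers (up to sign) the implicit equation of the image conic, a one-variable shadow of the bigraded construction used below.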
Let $R = k[s,t,u,v]$ be a bigraded ring over an algebraically closed field $k$, with $s,t$ of degree $(1,0)$ and $u,v$ of degree $(0,1)$. Note that the bidegree $(a,b)$ graded piece $R_{a,b}$ of $R$ corresponds exactly to the global sections $H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(a,b))$. \begin{defn} Suppose $U \subseteq R_{a,b}$ has basis $\{ p_{0}, p_{1}, p_{2}, p_{3} \}$, such that the $p_i$ have no common zeroes on ${\mathbb{P}^1 \times \mathbb{P}^1}$, and let $I_U$ denote the ideal $\langle p_{0}, p_{1}, p_{2}, p_{3}\rangle \subset R$. Since the $p_i$ have no common zeroes on ${\mathbb{P}^1 \times \mathbb{P}^1}$, they define a regular map $\phi_U: {\mathbb{P}^1 \times \mathbb{P}^1} \longrightarrow \P^3$, and we write $X_U$ for $\phi_U({\mathbb{P}^1 \times \mathbb{P}^1}) \subseteq \P^3.$ \end{defn} The assumption that $U$ is basepoint free means that $\sqrt{I_U} = \langle s,t\rangle \cap \langle u,v \rangle.$ In this setting, work of \cite{bdd} gives conditions on $\mu$ so that the determinant of $d^1_\mu$ is a power of the implicit equation for $X_U$. Motivated by \cite{cds}, in \cite{ssv}, Schenck-Seceleanu-Validashti show that for tensor product surfaces of bidegree $(2,1)$, the existence of a linear syzygy on $I_U$ imposes very strong conditions on $X_U$. We show this is not specific to the bidegree $(2,1)$ case. Our main result is: \vskip .05in \noindent{\bf Theorem}: If $a,b \ge 2$ and $U$ is basepoint free, then there is at most one linear first syzygy on $I_U$. A linear first syzygy gives rise to a special pair of additional first syzygies. These three syzygies determine the degree $(2a-1,b-1)$ component of the approximation complex $\mathcal{Z}$. By \cite{bot}, the determinant of the resulting square matrix is a power of the implicit equation of $X_U$. 
\begin{exm}\label{ex1} Suppose $(a,b)=(2,2)$, and \[ U = {\mathrm{Span}} \{t^2u^2+s^2uv, t^2uv+s^2v^2, t^2v^2, s^2u^2 \} \subseteq H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(2,2)), \] which has a first syzygy of bidegree $(0,1)$. A computation shows that $I_U$ has seven minimal first syzygies, in bidegrees \[ (0,1),(2,1),(2,1),(0,3),(2,2),(4,1),(6,0). \] By Theorem~\ref{EXTRA}, the three syzygies of bidegree $(0,1),(2,1),(2,1)$ are generated by the columns of \[ \left[\begin{matrix} v &0 &s^2u \\ -u & -t^2v &0 \\ 0 & t^2u+s^2v & 0\\ 0 & 0 & -t^2u-s^2v \end{matrix} \right], \] and the bidegree $(2a-1,b-1) = (3,1)$ component of the first differential in the approximation complex is \[ \left[ \begin{matrix} x_0&0&0&0 &x_2&0&-x_3&0\\ -x_1&0&0&0 &0&0&x_0&0\\ 0&x_0&0&0 &0&x_2&0&-x_3\\ 0&-x_1&0&0 &0&0&0 &x_0\\ 0&0&x_0&0 &-x_1&0&0&0\\ 0&0&-x_1&0 &x_2&0&-x_3&0\\ 0&0&0&x_0 &0&-x_1&0&0\\ 0&0&0&-x_1 &0&x_2&0&-x_3 \end{matrix} \right] \] The determinant of this matrix is \[ (x_0^3x_2+x_1^3x_3-x_0^2x_1^2)^2. \] By Corollary~\ref{EQN} this means the implicit equation defining $X_U$ is $x_0^3x_2+x_1^3x_3-x_0^2x_1^2$, and $\phi_U$ is $2:1$ by Lemma~\ref{BLEM2}. By Corollary~\ref{SING} the codimension one singular locus of $X_U$ contains ${\bf V }(x_0,x_1)$; in fact, in this case equality holds. \end{exm} \subsection{Algebraic tools} Two results from previous work will be especially useful; for additional background on approximation complexes and bigraded commutative algebra, see \cite{ssv}. \begin{lem}\label{LS1}\cite{ssv} If $I_U$ has a linear first syzygy of bidegree $(0,1)$, then \[ I_U = \langle pu, pv, p_2,p_3 \rangle, \] where $p$ is homogeneous of bidegree $(a,b-1)$. \end{lem} A similar result holds if $I_U$ has a first syzygy of degree $(1,0)$. The lemmas below (Lemmas 7.3 and 7.4 of Botbol \cite{bot}) also play a key role. 
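Returning for a moment to Example~\ref{ex1}: the implicit equation found there can be sanity-checked numerically, since $F=x_0^3x_2+x_1^3x_3-x_0^2x_1^2$ must vanish identically on the image of $\phi_U$. A quick exact-integer check at random parameter values (an illustration of the claim, not a proof):

```python
import random

def phi(s, t, u, v):
    # The basis of U from Example ex1, bidegree (2,2)
    x0 = t*t*u*u + s*s*u*v
    x1 = t*t*u*v + s*s*v*v
    x2 = t*t*v*v
    x3 = s*s*u*u
    return x0, x1, x2, x3

def F(x0, x1, x2, x3):
    # Candidate implicit equation of X_U
    return x0**3 * x2 + x1**3 * x3 - x0**2 * x1**2

random.seed(0)
for _ in range(100):
    pt = [random.randint(-50, 50) for _ in range(4)]
    assert F(*phi(*pt)) == 0   # F vanishes on the parametrization
print("F vanishes on the image of phi_U")
```

In fact, writing $w=t^2u+s^2v$ one has $x_0=uw$, $x_1=vw$, and $F\circ\phi_U=u^2v^2w^3(t^2u+s^2v-w)=0$ identically, which the random evaluations reflect.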
Botbol notes that the local cohomology module $(H_2)_{4a-1,3b-1}$ has dimension equal to the sum of the multiplicities at the basepoints, so if $U$ is basepoint free, this module vanishes. \begin{lem}\label{BLEM1}\cite{bot} Suppose $a\le b$. If $\nu = (2a-1,b-1)$, then the determinant of the $\nu$ strand of the approximation complex is of degree $2ab-\dim(H_2)_{4a-1,3b-1}$. \end{lem} \begin{lem}\label{BLEM2}\cite{bot} If $U$ has basepoints with multiplicities $e_x$, then \[ \deg(\phi_U)\deg(F) = 2ab-\sum e_x, \mbox{ where } \langle F \rangle=I(X_U). \] \end{lem} \noindent If $U$ is basepoint free, the determinant of the $\nu$ strand is the determinant of $(d^1)_\nu$. \section{Proofs of main theorems} \begin{thm}\label{MAIN} If $a,b \ge 2$ and $U$ is basepoint free, then there can be at most one linear first syzygy on $I_U$. \end{thm} \begin{proof} Suppose $L$ is a linear syzygy of bidegree $(0,1)$ on $I_U$. By Lemma~\ref{LS1}, we may assume \[ I_U = \langle pu, pv, p_2,p_3 \rangle =\langle p_0, p_1, p_2,p_3 \rangle, \] where $p$ is homogeneous of bidegree $(a,b-1)$. Suppose there is another minimal first linear syzygy of bidegree $(0,1)$ \[ \sum\limits_{i=0}^3 p_i\cdot(a_iu+b_iv) = 0. \] Let \[ \begin{array}{ccc} \widetilde{p_2} &= &\sum a_ip_i\\ \widetilde{p_3} &= & \sum b_ip_i, \end{array} \] so $\widetilde{p_2}u+\widetilde{p_3}v = 0$. But the syzygy module on $[u,v]$ is generated by $[v,-u]$, so we must have $\widetilde{p_2}=qv, \widetilde{p_3}=-qu$ for some $q$ of bidegree $(a,b-1)$. If in addition \[ D = {\mathrm{det}} \left[\begin{matrix} a_2 & a_3 \\ b_2 & b_3 \end{matrix} \right]\mbox{ is nonzero, then } \] \[ I_U = \langle pu,pv, \widetilde{p_2},\widetilde{p_3} \rangle = \langle pu, pv, qu, qv \rangle. \] Example V.1.4.3 of \cite{h} shows that curves ${\bf V }(f)$ of bidegree $(a,b)$ and ${\bf V }(g)$ of bidegree $(c,d)$ on ${\mathbb{P}^1 \times \mathbb{P}^1}$ sharing no common component meet in $ad+bc$ points. 
If $p$ and $q$ share a common factor, then clearly $I_U$ is not basepoint free; if they do not share a common factor, then ${\bf V }(p,q)$ consists of $2ab-2a$ points; since $a,b \ge 2$, this again forces $I_U$ to have basepoints. The same argument works if the additional syzygy is of bidegree $(1,0)$, save that in this case since $q$ is of degree $(a-1,b)$, ${\bf V }(p,q)$ consists of $2ab-a-b+1$ points, and again $I_U$ is not basepoint free. Next, suppose $D=0$. If $a_2=a_3=b_2=b_3=0$, then the second minimal first syzygy involves only $pu$ and $pv$. If the syzygy is of bidegree $(0,1)$, then by Lemma~\ref{LS1}, $(pu,pv)=(qv,qu)$. Thus \[ pu=qv \Longrightarrow p=fv, q=fu \Longrightarrow fv^2=fu^2, \] a contradiction. If the syzygy is of bidegree $(1,0)$, then $(pu,pv)=(qs,qt)$, and \[ pu=qs \Longrightarrow p=fs, q=fu \Longrightarrow fsv=fut, \] again a contradiction. Finally, if $D=0$ and $a_2,a_3,b_2,b_3$ are not all zero, then $c\cdot[a_2,b_2] = [a_3,b_3]$ for some $c \ne 0$, so letting $\widetilde{p_2} = p_2+cp_3$, we may assume the syzygy involves only $pu,pv, \widetilde{p_2}$. If the syzygy is of degree $(0,1)$, letting $l_i=a_iu+b_iv$ for $i=0,1,2$, we have \[ pul_0+ pvl_1+ \widetilde{p_2}l_2=0. \] Since $\langle l_2 \rangle$ is prime, either $l_2 | ul_0+ vl_1$ or $l_2 | p$. In the former case, $ul_0+ vl_1 = l_2l_3$ for some $l_3 \in k[u,v]_1$ hence $pl_3 + \widetilde{p_2}=0$. In particular $p | \widetilde{p_2}$, so ${\bf V }(p,p_3)$ contains $2ab-a$ points and $I_U$ is not basepoint free. In the latter case, $p'l_2 =p$ for some $p' \in R_{(a-2,b)}$, so $p'l_2ul_0+ p'l_2vl_1+ \widetilde{p_2}l_2=0$. Hence $p'ul_0+ p'vl_1+ \widetilde{p_2}=0$, so $p'$ is a common factor of $p$ and $\widetilde{p_2}$ of bidegree $(a,b-2)$, so ${\bf V }(p',p_3)$ contains $2ab-2a$ points and $I_U$ is not basepoint free. A similar argument works if the additional syzygy is of bidegree $(1,0)$. 
\end{proof} \begin{thm}\label{EXTRA} If $U$ is basepoint free, $a,b \ge 2$ and there is a linear syzygy $L$ of bidegree $(0,1)$ on $I_U$, then there are two additional first syzygies $S_1,S_2$ of bidegree $(a,b-1)$, such that \[ \dim \langle L,S_1,S_2\rangle_{(2a-1,b-1)} = 2ab. \] \end{thm} \begin{proof} By Lemma~\ref{LS1} we may assume $(p_0,p_1)=(pu,pv)$. Write $p_2 = g_2v+f_2u$. Then $f_2p_0+g_2p_1-pp_2 =0$, so the kernel of $[pu, pv, p_2]$ contains the columns of the matrix \[ M = \left[\begin{matrix} v & f_2 \\ -u & g_2 \\ 0 & -p \end{matrix} \right]. \] In fact, $M$ is the syzygy matrix of $[pu, pv, p_2]$: the sequence $\{pu, p_2\}$ is not regular iff the two polynomials share a common factor. If $u|p_2$, then let $p_2'= p_2+pv$; $u|p_2'$ or $p|p_2'$ would imply $I_U$ is not basepoint free. So the depth of the ideal of $2 \times 2$ minors of $M$ is two and exactness follows from the Buchsbaum-Eisenbud criterion \cite{ebig}. Writing $p_3=f_3u+g_3v$, the syzygy module of $I_U$ contains the columns of $N={\mathrm{Span}} \{L,S_1,S_2\},$ where \[ N = \left[\begin{matrix} v & f_2 & f_3\\ -u & g_2 &g_3\\ 0 & -p &0 \\ 0 & 0 & -p \end{matrix} \right]. \] As the bottom $3 \times 3$ submatrix of $N$ is upper triangular, $\{L,S_1,S_2\}$ span a free $R$-module. The linear syzygy $L$ is of bidegree $(0,1)$, so in the degree $\nu$ strand of the approximation complex it gives rise to \[ h^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(2a-1,b-2)) = 2a(b-1) \] columns of the matrix of the first differential $d^1$. The two syzygies $S_1,S_2$ of bidegree $(a,b-1)$ each give rise to \[ h^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(a-1,0)) = a \] columns of the matrix of $d^1$. That the columns are independent follows from the fact that $\{L,S_1,S_2\}$ span a free $R$-module. Hence, these syzygies yield $2ab$ columns of the degree $\nu$ component of the matrix of $d^1$. 
\end{proof} For Theorem~\ref{MAIN} and Theorem~\ref{EXTRA} to hold, we need $a,b \ge 2$, even if $U$ is basepoint free. If either $a$ or $b$ is at most one, there can be additional minimal linear syzygies. For example, if $(a,b)=(1,1)$, then there are four minimal linear first syzygies. However, it is easy to see that the theorems both hold if $L$ is of bidegree $(1,0)$. \begin{cor}\label{EQN} If $a,b \ge 2$, $U$ is basepoint free, and $I_U$ has a linear first syzygy, then the determinant of the degree $\nu = (2a-1,b-1)$ submatrix of the first differential in the approximation complex is determined by $\{L, S_1,S_2\}$. \end{cor} \begin{proof} This follows from Lemma~\ref{BLEM1}, Lemma~\ref{BLEM2}, the remarks preceding those lemmas, and Theorem~\ref{EXTRA}. \end{proof} \begin{cor}\label{SING} If $a,b \ge 2$, $U$ is basepoint free, and $I_U$ has a linear first syzygy, then the singular locus of $X_U$ contains a line. \end{cor} \begin{proof} Let $I_U = \langle pu,pv,p_2,p_3\rangle$. By Corollary~\ref{EQN}, the matrix representing the degree $\nu$ component of $d^1$ has as its leftmost $2a(b-1)$ columns a block matrix $P$. For each monomial $m_c = s^{2a-1-c}t^c$ with $c \in \{0,\ldots,2a-1\}$, there is a $b \times (b-1)$ block $B$ corresponding to elements $m_c\cdot\{v^{b-2},\ldots, u^{b-2}\} \cdot L$, with $L = vx_0-ux_1$, hence \[ B=\left[\begin{matrix} x_0 & 0 &\hdots &0\\ -x_1 &x_0 &\ddots & \vdots\\ 0 &-x_1 &\ddots & 0 \\ \vdots & &\ddots &x_0 \\ 0 & \hdots & 0 &-x_1 \end{matrix} \right], \mbox{ and }P=\left[\begin{matrix} B & 0 & \hdots &0\\ 0 &B &\ddots & 0\\ 0 &0 &\ddots & 0 \\ 0 & \hdots & 0 & B \end{matrix} \right]. \] Computing the Laplace expansion of the determinant using the $(2ab-2a) \times (2ab-2a)$ minors of $P$ shows the implicit equation for $X_U$ takes the form \[ x_0^{2ab-2a}\cdot f_0 + x_0^{2ab-2a-1}x_1 \cdot f_1 + \cdots + x_1^{2ab-2a} \cdot f_{2ab-2a}. 
\] So $X_U$ is singular along ${\bf V }(x_0,x_1)$, with multiplicity at least $2ab-2a$. \end{proof} \begin{remark}The specific form of the implicit equation given above means that it suffices to find the $f_i$, and speeds up the computation. \end{remark} \section{Application to the bidegree $(2,2)$ case} We close with some examples in the bidegree $(2,2)$ case; without loss of generality we assume $I_U$ has a linear first syzygy of bidegree $(0,1)$, so $I_U = \langle pu,pv,p_2,p_3\rangle$. Hence $p$ is of bidegree $(2,1)$. There are three possible factorizations for $p$: \pagebreak \begin{enumerate} \item $p$ is irreducible. \item $p$ is a product of an irreducible form of bidegree $(1,1)$, and a form of bidegree $(1,0)$. So $p=ql$, where $q =a_0su+a_1sv+a_2tu+a_3tv$ and $l=b_0s+b_1t$. The locus of such forms is the image of the map \[ \P(H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(1,1))) \times \P(H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(1,0))) = \P^3 \times \P^1 \longrightarrow \P^5, \] $(a_0:a_1:a_2:a_3) \times (b_0:b_1) \mapsto (a_0b_0:a_0b_1+a_2b_0:a_2b_1:a_1b_0:a_1b_1+a_3b_0:a_3b_1)$, which is a quartic hypersurface \[ Q = {\bf V }({x}_{2}^{2} {x}_{3}^{2}-{x}_{1} {x}_{2} {x}_{3} {x}_{4}+{x}_{0} {x}_{2} {x}_{4}^{2}+{x}_{1}^{2} {x}_{3} {x}_{5}-2 {x}_{0} {x}_{2} {x}_{3} {x}_{5}-{x}_{0} {x}_{1} {x}_{4} {x}_{5}+{x}_{0}^{2} {x}_{5}^{2}). \] Note that $\Sigma_{2,1} \subseteq {\bf V }(Q)$. \item $p$ is a product of three linear forms, two of bidegree $(1,0)$ and one of bidegree $(0,1)$. Then identifying the coefficients of $p=a_0s^2u +a_1stu+a_2t^2u+a_3s^2v+a_4stv+a_5t^2v$ with a point of ${\mathbb P}^5$, such a decomposition corresponds to a point on the Segre variety $\Sigma_{2,1}$, whose ideal is defined by the two by two minors of \[ \left[\begin{matrix} x_0 & x_1 & x_2 \\ x_3 & x_4 &x_5 \end{matrix}\right]. 
\] \end{enumerate} Examples of possible bigraded betti tables for these three cases appear below, where $p_2$ and $p_3$ are chosen generically. When $p_2$ and $p_3$ are also nongeneric, there are many additional possible types of betti table. It would be interesting to prove that the tables below are always the bigraded betti tables for generic choices of $p_2$ and $p_3$, and to classify the bigraded resolutions which are possible in the $(2,2)$ case; we are working on both questions. For brevity, we denote $R(a,b)$ by $(a,b)$. In all three cases, $X_U$ has degree $2ab=8$, in contrast to Example~\ref{ex1}. \begin{exm}\label{genericCase} Suppose $p \not\in {\bf V }(Q)$. After a change of coordinates, we may assume $p$ is the point $(1:0:0:0:0:1)$, which corresponds to $p=s^2u+t^2v$. \[ 0 \leftarrow I_U \longleftarrow (-2,-2)^4 \longleftarrow \begin{array}{c} (-2,-3)\\ \oplus \\ (-4,-3)^2\\ \oplus \\ (-4,-4)\\ \oplus \\ (-3,-5)^2\\ \oplus \\ (-6,-3)\\ \oplus \\ (-8,-2) \end{array} \longleftarrow \begin{array}{c} (-4,-5)^3\\ \oplus \\ (-6,-4)^2\\ \oplus \\ (-8,-3)^2 \end{array} \longleftarrow \begin{array}{c} (-6,-5)\\ \oplus \\ (-8,-4) \end{array} \longleftarrow 0 \] The reduced singular locus of $X_U$ consists of curves of degrees $1$, $2$, and $3$. \end{exm} \pagebreak \begin{exm}\label{OnQ} Suppose $p \in {\bf V }(Q) \setminus \Sigma_{2,1}$. After a change of coordinates, we may assume $p$ is the point $(1:2:1:1:1:0)$, which corresponds to $s^2u+2stu+t^2u+s^2v+stv$. \[ 0 \leftarrow I_U \longleftarrow (-2,-2)^4 \longleftarrow \begin{array}{c} (-2,-3)\\ \oplus \\ (-4,-3)^2\\ \oplus \\ (-4,-4)\\ \oplus \\ (-3,-5)^2\\ \oplus \\ (-6,-3)\\ \oplus \\ (-7,-2) \end{array} \longleftarrow \begin{array}{c} (-4,-5)^3\\ \oplus \\ (-6,-4)^2\\ \oplus \\ (-7,-3)^2 \end{array} \longleftarrow \begin{array}{c} (-6,-5)\\ \oplus \\ (-7,-4) \end{array} \longleftarrow 0 \] The reduced singular locus of $X_U$ consists of curves of degrees $1$, $1$, and $4$. 
\end{exm} \begin{exm}\label{OnSegre} Suppose $p \in \Sigma_{2,1}$. After a change of coordinates, we may assume $p$ is the point $(1:1:1:1:1:1)$, which corresponds to $s^2u+stu+t^2u+s^2v+stv+t^2v$. \[ 0 \leftarrow I_U \longleftarrow (-2,-2)^4 \longleftarrow \begin{array}{c} (-2,-3)\\ \oplus \\ (-4,-3)^2\\ \oplus \\ (-4,-4)\\ \oplus \\ (-3,-5)^2\\ \oplus \\ (-6,-2)\\ \end{array} \longleftarrow \begin{array}{c} (-4,-5)^3\\ \oplus \\ (-6,-4)\\ \oplus \\ (-6,-3) \end{array} \longleftarrow (-6,-5) \longleftarrow 0 \] The reduced singular locus of $X_U$ consists of curves of degrees $1$ and $4$. \end{exm} \noindent{\bf Acknowledgments} We thank an anonymous referee for a careful reading of the paper, and for helpful comments. This work arose from a question asked by R. Vakil at the 2013 SIAM meeting on applied algebraic geometry, and we thank the organizers of the session on toric geometry, I. Soprunov and B. Nill. Evidence for this work was provided by many computations done using {\tt Macaulay2}, by Dan Grayson and Mike Stillman. {\tt Macaulay2} is freely available at \begin{verbatim} http://www.math.uiuc.edu/Macaulay2/ \end{verbatim} and scripts to perform the computations are available at \begin{verbatim} http://www.math.uiuc.edu/~schenck/Syzscript \end{verbatim} \bibliographystyle{amsplain}
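As a quick independent check of the quartic $Q$ from the factorization discussion (the paper's own computations used {\tt Macaulay2}; the following Python sketch is ours), one can verify in exact integer arithmetic that the image of the multiplication map $\P^3 \times \P^1 \to \P^5$ lies on ${\bf V}(Q)$:

```python
import random

# Coefficients of p = q*l in the basis (s^2 u, stu, t^2 u, s^2 v, stv, t^2 v),
# where q = a0*s*u + a1*s*v + a2*t*u + a3*t*v and l = b0*s + b1*t.
def product_coeffs(a0, a1, a2, a3, b0, b1):
    return (a0*b0, a0*b1 + a2*b0, a2*b1, a1*b0, a1*b1 + a3*b0, a3*b1)

# The quartic Q from the text.
def Q(x0, x1, x2, x3, x4, x5):
    return (x2**2*x3**2 - x1*x2*x3*x4 + x0*x2*x4**2 + x1**2*x3*x5
            - 2*x0*x2*x3*x5 - x0*x1*x4*x5 + x0**2*x5**2)

# Exact integer arithmetic: the image of the multiplication map lies on V(Q).
random.seed(0)
for _ in range(500):
    coeffs = [random.randint(-100, 100) for _ in range(6)]
    assert Q(*product_coeffs(*coeffs)) == 0
```

A symbolic expansion (e.g., in a computer algebra system) confirms that $Q$ in fact vanishes identically on the image, not just at the sampled points.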
\section{Introduction} \label{Sec: Intro} The vast realm of data-driven control methods can be classified into {\em indirect data-driven control} approaches consisting of sequential system identification and model-based control as well as {\em direct data-driven control} approaches seeking an optimal decision compatible with recorded data. Both approaches have a rich history, and they have received renewed interest cross-fertilized by novel methods and widespread interest in machine learning. Representative recent surveys are \cite{hewing2020learning,pillonetto2014kernel,chiuso2019system,hou2013model,recht2019tour,IM-FD:21-survey,hjalmarsson2005experiment,SM:22}. The pros and cons of both paradigms have often been elaborated on. {\tb The indirect approach is modular with well understood subtasks, though modeling and identification are cumbersome, their results are often not useful for control (due to, e.g., incompatible uncertainty quantifications), and practitioners often prefer end-to-end methods. Direct approaches promise to resolve these problems by learning control policies directly from data. However, they are often analytically and computationally less tractable and rarely apply to real-time and safety-critical control systems. Selected direct methods that proved themselves in theory and practice are iterative feedback tuning and virtual reference feedback tuning~\cite{HH-MG-SG-OL:98, MC-AL-SS:02,bazanella2011data}. Quite a few approaches have bridged the direct and indirect data-driven control paradigms.} Of relevance to this article, we note the literature on identification for control \cite{hjalmarsson2005experiment,hjalmarsson1996,geversaa2005,schrama1992} and control-oriented regularized identification \cite{formentin2018core}, which propose that the control objective should bias the identification task. {\tb Likewise, dual control dating to \cite{feldbaum1963dual} addresses the {exploration vs. 
exploitation} trade-offs in simultaneous identification and optimal control; see \cite{ferizbegovic2019learning,larsson2016application,iannelli2020structured} for recent contributions. Furthermore, \cite{campestrini2017data} formulates data-driven model reference control as an identification problem, where various degrees of prior information can be incorporated so that the method can range between the direct and the indirect approach.} We take a similar perspective here: the sequential identification and control tasks can be abstracted as a nested bi-level optimization problem: find the best control subject to a model, where the model is the best fit to a data set within some hypothesis class. This approach is modular and both steps admit tractable formulations, but generally it is also suboptimal: there is no separation principle -- aside from special cases, see \cite[Section 4]{hjalmarsson2005experiment} -- for these two nested optimization problems. An end-to-end direct algorithmic approach may thus outperform indirect methods, provided a tractable formulation is available. For the latter we resort to a paradigm squarely in between behavioral system theory and subspace system identification methods. Behavioral system theory \cite{willems2007,willems1991,willems1997} takes an abstract view on dynamical systems as sets of trajectories, and it does not require parametric representations, which makes it appealing from a data-centric perspective. For example, linear time-invariant (LTI) systems are characterized as shift-invariant subspaces within an ambient space of time series. The role of identification is to find such a low-dimensional feature from data. Subspace methods take a similar (albeit more algorithmic) viewpoint \cite{van2012subspace,katayama2006subspace,van1994n4sid} and extract parametric models from the range and null spaces of a low-rank data Hankel matrix. 
Both lines of work come together in a result known as the Fundamental Lemma \cite{willems2005}; see also \cite{waarde2020,IM-FD:20,IM-FD:21-survey} for recent extensions. It states that, under some assumptions, the set of all finite-length trajectories (the restricted behavior) of an LTI system equals the range space of a data Hankel matrix. This result serves as the theoretic underpinning for work in subspace identification \cite{IM-FD:20,markovsky2006,markovsky2005algorithms} and data-driven control, in particular subspace predictive control based on non-parametric models \cite{favoreel1999spc,qin2005novel,huang2008dynamic}, explicit feedback policies parametrized by data matrices \cite{berberich2020combining,van2020noisy,de2019formulas}, and data-enabled predictive control {\tb (DeePC)} seeking compatibility of predicted trajectories with the range space of a data Hankel matrix. The latter methods have first been established for deterministic LTI systems in \cite{markovsky2008,markovsky2016} and have recently been extended by suitably regularizing the optimal control problems. Closed-loop stability was certified in \cite{berberich2020data}. The regularizations were first mere heuristics \cite{JC-JL-FD:18} but have later been constructively derived by robust control and optimization \cite{JC-JL-FD:19-CDC,JC-JL-FD:20,xue2020data,LH-JZ-JL-FD:20,LH-JZ-JL-FD:01}. These approaches, albeit recent, have proved themselves in practical nonlinear problems {\tb in multiple domains} \cite{LH-JZ-JL-FD:01,LH-JC-JL-FD:19,PC-AF-SB-FD:20,EE-JC-PB-JL-FD:19,LH-JZ-JL-FD:20}. We also note the recent maximum-likelihood perspective \cite{yin2020maximum}. {\tb We refer to \cite{IM-FD:21-survey} for a survey of results surrounding the fundamental lemma}. In this paper, we explore the following questions: how does data-enabled predictive control relate to prior system identification? What are principled regularizations? And why does it work so well in the nonlinear case? 
We start our investigations from indirect data-driven control formulated as a bi-level optimization problem in the general output feedback setting. As a vehicle to transition between indirect and direct approaches, we consider a multi-criteria problem trading off identification and control objectives {\tb reminiscent of similar approaches \cite{hjalmarsson2005experiment,hjalmarsson1996,geversaa2005,schrama1992,formentin2018core,feldbaum1963dual,ferizbegovic2019learning,larsson2016application,iannelli2020structured,campestrini2017data} blending the two.} We formally show that one tail of its Pareto front corresponds to the bi-level problem, and a convex relaxation results in the regularized data-enabled predictive control formulations used in \cite{berberich2020data,JC-JL-FD:18,JC-JL-FD:19-CDC,JC-JL-FD:20,xue2020data,LH-JZ-JL-FD:20,LH-JC-JL-FD:19,PC-AF-SB-FD:20,EE-JC-PB-JL-FD:19,LH-JZ-JL-FD:01}. Most of our results are formulated in the abstract language of behavioral systems theory and parametric mathematical programs, but we also specialize our treatment to two concrete methods: subspace predictive control {\tb (SPC)} \cite{favoreel1999spc,qin2005novel,huang2008dynamic} and low-rank approximation \cite{markovsky2016}. In both cases we conclude that the direct regularized data-driven control can be derived as a convex relaxation of the indirect approach, where {\tb $(i)$ LTI complexity specifications (selecting the model class) are dropped, and $(ii)$} the projection of the data on the set of LTI systems is replaced by regularizations accounting for implicit identification. {\tb In particular, starting from indirect data-driven control based on low-rank approximation of a Hankel matrix, we arrive at a DeePC formulation with an $\ell_{1}$-regularizer (Theorem \ref{Theorem: Low-rank relaxation}). 
When formulating indirect data-driven control via the SPC framework, our analysis reveals a novel regularizer for DeePC promoting a least-square data fit by projecting on the null space of the Hankel matrix (Theorem~\ref{Theorem: SPC relaxation}).} We support our results with numerical studies illustrating the role of regularization, the superiority of the new regularizer, and comparisons of the approaches. Informed by our analysis, we hypothesize and numerically confirm that the indirect approach is superior in the case of ``variance'' error, e.g., for LTI stochastic systems, and the direct approach wins in terms of ``bias'' error, e.g., for nonlinear systems, supporting the {\tb empirical} observations in \cite{LH-JZ-JL-FD:01,LH-JC-JL-FD:19,PC-AF-SB-FD:20,EE-JC-PB-JL-FD:19,LH-JZ-JL-FD:20}. {\tb Similar bias-variance trade-offs can also be found in the recent pre-print \cite{krishnan2021direct} discussing sub-optimality of direct and indirect methods as a function of the data size.\,These findings also resonate with those of data-driven model reference control \cite{campestrini2017data} concluding that the direct approach is superior in reducing the bias whereas the indirect one gives better variance -- especially if an erroneous model class is selected.} The remainder of this paper is organized as follows: Section~\ref{Sec: Preliminaries} reviews representations of LTI systems. Section~\ref{Sec: Direct and Indirect Data-Driven Control} formulates the direct and indirect data-driven control problems, and Section~\ref{Sec: Bridging} bridges them. Section~\ref{subsec: numerical analysis} contains our numerical studies. Finally, Section~\ref{sec: conclusions} concludes the paper. {\tb Readers familiar with the behavioral approach may skip Section \ref{Sec: Preliminaries}.} \section{LTI Systems and their Representations} \label{Sec: Preliminaries} We adopt a behavioral perspective which allows for system theory independent of parametric representations. 
We aim at a concise exposition and refer to \cite{willems2007,willems1991,willems1997,IM-FD:21-survey} for details. \subsection{Behavioral Perspective on Discrete-Time LTI Systems} Consider the discrete time axis $\mathbb Z$, the signal space $\real^{q}$, and the associated {space of trajectories} $\real^{q\mathbb Z}$ consisting of all $q$-variate sequences $(\dots,w(-1),w(0), w(1),\dots)$ with $w(i) \in \real^{q}$. Consider a permutation matrix $P$ partitioning each $ w(i) = P \left[\begin{smallmatrix} u(i) \\ y(i) \end{smallmatrix}\right] $, where $u(i) \in \real^{m}$ and $y(i) \in \real^{q-m}$ are free and dependent variables that will later serve as inputs and outputs. The {\em behavior} ${\mathscr{B}}$ is defined as a subset of the space of trajectories, ${\mathscr{B}} \subset \real^{q\mathbb Z}$, and a system as the triple $(\mathbb Z,\real^{q},{\mathscr{B}})$. In what follows, we denote a system merely by its behavior ${\mathscr{B}}$, keeping the signal space $ \real^{q\mathbb Z}$ fixed throughout. A system is {\em linear} if ${\mathscr{B}}$ is a subspace of $ \real^{q\mathbb Z}$. Let $\sigma$ denote the shift operator with action $\sigma w({t}) = w({t+1})$. A system is {\em time-invariant} if ${\mathscr{B}}$ is shift-invariant: $\sigma {\mathscr{B}} = {\mathscr{B}}$. {\tb Finally, ${\mathscr{B}}_{L}$ is the {restriction of ${\mathscr{B}}$} to $ \real^{qL}$, i.e., to trajectories of length $L\in \mathbb Z_{>0}$.} \subsection{Kernel Representations and Parametric Models} \label{subsec: kernel reps} {\tb Rather than mere set-theoretic descriptions, one typically works with explicit {\em parametric representations} (colloquially termed {\em models}) of LTI systems. 
For instance, a {\em kernel representation} with {\em lag} $\ell$ specifies an LTI behavior as} \begin{equation*} {\mathscr{B}} = \text{kernel}(R(\sigma)) = \bigl\{ w \in \real^{q\mathbb Z}\,:\; R(\sigma) w = 0 \bigr\}\,, \end{equation*} where $R(\sigma) = R_{0}+R_{1} \sigma + \dots + R_{\ell} \sigma^{\ell}$ is a polynomial matrix of degree $\ell$, and the matrices $R_{0}, R_{1}, \dots, R_{\ell}$ take values in $\real^{(q-m) \times q}$. Alternatively, one can unfold the kernel representation by revealing a latent variable: the state $x(t) \in \real^{n}$. The {\em input/state/output} (or {\em state-space}) {\em representation} is \begin{align*} {\mathscr{B}} = & \bigl\{ w = P \left[\begin{smallmatrix} u \\ y\end{smallmatrix}\right] \in \real^{q\mathbb Z}\,:\; \exists x \in \real^{n\mathbb Z} \mbox{ such that } \\ & \quad \sigma x = Ax + Bu\,,\, y = Cx+Du \bigr\}\,, \end{align*} where $A \in \real^{n \times n}$, $B \in \real^{n \times m}$, $C \in \real^{(q-m) \times n}$, and $D \in \real^{(q-m) \times m}$. We assume that the lag $\ell$ (resp., the state dimension $n$) is minimal, i.e., there is no other kernel (resp., state-space) representation with smaller lag (resp., state dimension). The dimension $n$ of a minimal state-space representation manifests itself in a minimal kernel representation as $n=\sum_{i=1}^{q-m} \ell_{i}$, where $\ell_{i}$ is the {degree}\ of the $i$th row of $R(\sigma)$. 
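As a concrete illustration of the two representations (our own toy example, not taken from the paper), consider the scalar system $\sigma x = a x + b u$, $y = x$, with $q=2$, $m=1$, and $n=\ell=1$; its minimal kernel representation is $y(t+1) - a\,y(t) - b\,u(t) = 0$, i.e., $R(\sigma) = R_0 + R_1\sigma$ with $R_0 = [-b,\,-a]$ and $R_1 = [0,\,1]$ acting on $w = (u,y)$. A minimal Python sketch verifying that trajectories generated by the state-space representation satisfy the kernel representation:

```python
import random

# Hypothetical scalar system: sigma x = a*x + b*u, y = x, so q = 2, m = 1,
# n = 1, and lag ell = 1.  Kernel representation: y(t+1) - a*y(t) - b*u(t) = 0,
# i.e., R(sigma) = R0 + R1*sigma with R0 = [-b, -a], R1 = [0, 1], w = (u, y).
random.seed(0)
a, b, T = 0.5, 2.0, 50

x = 0.3                                        # initial state x(0)
u = [random.uniform(-1, 1) for _ in range(T)]  # free (input) variable
y = []
for t in range(T):
    y.append(x)           # y(t) = C x(t) + D u(t) with C = 1, D = 0
    x = a * x + b * u[t]  # state update sigma x = A x + B u

# R(sigma) w = 0 along the trajectory: R0 w(t) + R1 w(t+1) = 0 for all t.
for t in range(T - 1):
    assert abs(-b * u[t] - a * y[t] + y[t + 1]) < 1e-12
```

The same check extends verbatim to multivariate systems with matrix-valued $R_0,\dots,R_\ell$.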
\subsection{Representation-Free Estimation and Behavior Dimension} \label{subsec: estimation} {\tb Given a state-space representation with $m$ inputs, order $n$, and lag $\ell$, the extended {observability} and convolution matrices \begin{equation*} {\mathscr{O}}_{L} = \left[\begin{smallmatrix} C \\ CA \\ \vdots \\ C A^{L-1} \end{smallmatrix}\right] \quad\mbox{and}\quad {\mathscr{G}}_{L} = \left[\begin{smallmatrix} D & 0 & \cdots & & 0 \\ CB & D & 0 & \cdots & 0 \\ CAB & CB & D & \ddots & \vdots \\ \vdots & \ddots & \ddots &\ddots & 0 \\ CA^{L-2}B & \cdots & CAB & CB & D \end{smallmatrix}\right] \end{equation*} parametrize all length-$L$ trajectories in ${\mathscr{B}}_{L}$ as \begin{equation} \begin{bmatrix} u \\ y \end{bmatrix} = \begin{bmatrix} I & 0 \\ {\mathscr{G}}_{L} & {\mathscr{O}}_{L} \end{bmatrix} \begin{bmatrix} u \\ {x_{\textup{ini}}} \end{bmatrix} \,, \label{eq: IOS representation} \end{equation} where ${x_{\textup{ini}}} \in \real^{n} $ is the initial state. Recall the {\em observability problem}: given length-$L$ time series of inputs and outputs, can ${x_{\textup{ini}}}$ be reconstructed? Equation \eqref{eq: IOS representation} gives a succinct answer: namely, ${x_{\textup{ini}}}$ can be reconstructed if and only if ${\mathscr{O}}_{L}$ has full column-rank. The minimum $L$ so that ${\mathscr{O}}_{L}$ has full rank $n$ equals the {\em lag} $\ell$ of a minimal kernel representation. 
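The characterization of the lag as the smallest $L$ with $\operatorname{rank}({\mathscr{O}}_{L}) = n$ is easy to check numerically. A Python sketch (the system below is a hypothetical example of ours; with a single output, the lag equals $n$):

```python
import numpy as np

# Hypothetical observable single-output system with n = 3 states.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.1, 0.2, 0.3]])
C = np.array([[1.0, 0.0, 0.0]])
n = A.shape[0]

def obs_matrix(A, C, L):
    """Extended observability matrix O_L = [C; CA; ...; C A^(L-1)]."""
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(L)])

# The lag ell is the smallest L with rank(O_L) = n; for a single output
# the rank grows by one per block row, so ell = n = 3 here.
ranks = [np.linalg.matrix_rank(obs_matrix(A, C, L)) for L in range(1, n + 1)]
ell = next(L for L in range(1, n + 1)
           if np.linalg.matrix_rank(obs_matrix(A, C, L)) == n)
assert ranks == [1, 2, 3] and ell == 3
```

With more outputs the rank can grow faster, which is why in general $\ell \leq n$.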
As readily deducible from \eqref{eq: IOS representation} and formalized in \cite[Lemma\,1]{markovsky2008}, in a {\em representation-free} setting, the initial condition ${x_{\textup{ini}}}$ for a trajectory $w \in {\mathscr{B}}_{L}$ can be estimated via a prefix trajectory ${w_{\textup{ini}}} = \bigl(w(-{T_{\textup{ini}}}+1), \dots, w(-1), w(0) \bigr)$ of length ${T_{\textup{ini}}} \geq \ell$ so that the concatenation ${w_{\textup{ini}}} \wedge w \in {\mathscr{B}}_{{T_{\textup{ini}}}+L}$ is a valid trajectory.} Hence, an LTI system is characterized by the complexity parameters $(q,m,n,\ell)$, and we denote the corresponding class of LTI systems by ${\mathscr{L}}_{m,\ell}^{q,n}$: namely, LTI systems with $m$ inputs, $q-m$ outputs, minimal state dimension $n$, and minimal lag $\ell$. The following lemma characterizes the dimension of ${\mathscr{B}}_{L}$ for ${\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n}$ in terms of the complexity parameters $(q,m,n,\ell)$. {\tb \begin{lemma}[Dimension of ${\mathscr{B}}_{L}$] \label{Lemma: subspace dimension} Let ${\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n}$. Then ${\mathscr{B}}_{L}$ is a subspace of $ \real^{qL}$, and for $L \geq \ell$ its dimension is $mL + n$. \end{lemma} \begin{IEEEproof} Due to linearity of ${\mathscr{B}}$, ${\mathscr{B}}_{L} \subset \real^{qL}$ is a subspace. To show that the dimension of ${\mathscr{B}}_{L}$ equals $mL + n$ for $L \geq \ell$, we appeal to a minimal state-space representation of ${\mathscr{B}}$ --- a state-space-independent proof is in \cite[Sec. 3]{IM-FD:20}. We have $w = P \left[\begin{smallmatrix} u \\ y\end{smallmatrix}\right] \in {\mathscr{B}}_{L}$ if and only if \eqref{eq: IOS representation} holds for some ${x_{\textup{ini}}} \in \real^{n}$. Since the representation is minimal, ${\mathscr{O}}_{L} \in \real^{(q-m)L \times n}$ is of full column-rank for $L \geq \ell$. 
Therefore, the matrix $ \left[\begin{smallmatrix} I & 0 \\ {\mathscr{G}}_{L} & {\mathscr{O}}_{L} \end{smallmatrix}\right] \in \real^{qL \times (mL + n)}$ is of full rank $mL+n$ for $L \geq \ell$, and its columns form a basis for ${\mathscr{B}}_{L}$. Thus, ${\mathscr{B}}_{L}$ has dimension $mL+n$. \end{IEEEproof} \begin{remark}[Complexity bounds] All forthcoming results assume known complexity $(q,m,n,\ell)$. When only data and no prior information is available, it is reasonable to assume upper bounds on $(q,m,n,\ell)$. In this case, the anticipated dimension of ${\mathscr{B}}_{L}$ is at most $mL + n$, and the forthcoming rank equalities involving the behavior dimension should be replaced by inequalities. \oprocend \end{remark}} \subsection{Image Representation of Restricted Behavior}\label{sec:image-representation} The restricted behavior ${\mathscr{B}}_{L}$, the set of all trajectories of length $L$, can be described by a kernel or state-space representation. As an interesting alternative, we recall the {\em image representation} of ${\mathscr{B}}_{L}$ by a data matrix of a time series. Consider the sequence $w = \bigl(w(1),w(2),\dots,w(T)\bigr)$ with elements $w(i) \in \real^{q}$, and define the (block) {\em Hankel matrix} ${\mathscr{H}}_{L}(w) \in \real^{qL \times (T-L+1)}$ of depth $L$, for some $L \leq T$, as \begin{equation*} {\mathscr{H}}_{L}(w) = \begin{bmatrix} w(1) & w(2) & \dots & w(T-L+1) \\ w(2) & w(3) & \dots & w(T-L+2) \\ \vdots & \vdots & \ddots & \vdots \\ w(L) & w(L+1) & \dots & w(T) \end{bmatrix} \,. \end{equation*} A result due to \cite{willems2005} that became known as the {\em Fundamental Lemma} offers an image representation of the restricted behavior in terms of the column span of a data Hankel matrix. We present a necessary and sufficient version here assuming: \begin{enumerate}\addtolength{\itemindent}{5pt} \renewcommand{\labelenumi}{{\theenumi}}\renewcommand{\theenumi}{(A.\arabic{enumi})} \item \label{ass:L+n pe} rank$\left({\mathscr{H}}_L(w)\right)=mL + n$. 
\end{enumerate} \begin{lemma}\label{lemma: fundamental lemma}{(\cite[Corollary 19]{IM-FD:20})}: Consider an LTI system ${\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n}$ and an associated trajectory $w = \bigl(w(1), w(2),$ $ \dots, w(T)\bigr) \in \real^{qT}$. The following are equivalent for $L > \ell$:\vspace{-4pt} \begin{equation*} \text{colspan} \left({\mathscr{H}}_{L}(w) \right) = {\mathscr{B}}_{L} \quad\Longleftrightarrow\quad \text{Assumption \ref{ass:L+n pe}} \end{equation*} \end{lemma} In words, the Hankel matrix ${\mathscr{H}}_L(w)$ composed of a single $T$-length trajectory parametrizes all $L$-length trajectories if and only if rank$\left({\mathscr{H}}_L(w)\right)=mL + n$. A plausible reasoning leading up to Lemma~\ref{lemma: fundamental lemma} is that every column of ${\mathscr{H}}_L(w)$ is a trajectory of length $L$, and the set of all such trajectories has dimension at most $mL+n$; see Lemma~\ref{Lemma: subspace dimension}. Lemma~\ref{lemma: fundamental lemma} extends the original {\em Fundamental Lemma} \cite[Theorem 1]{willems2005} which requires input/output partitioning, controllability, and persistency of excitation of order $L+n$ (i.e., ${\mathscr{H}}_{L+n}(u)$ must have full row rank) as sufficient conditions. Lemma~\ref{lemma: fundamental lemma} also extends to mosaic Hankel, Page, and trajectory matrices \cite{IM-FD:20}. {\tb \begin{remark}[Models vs. data]\label{rem: models vs. data} It is debatable whether the image representation via the Hankel matrix ${\mathscr{H}}_{L}(w)$ should be called a ``model'', as it is readily available from raw data. Hence, we call $\text{colspan} \left({\mathscr{H}}_{L}(w) \right)$ a {\em data-driven representation} of ${\mathscr{B}}_{L}$ and reserve the term ``model'' for parametric (kernel or state-space) representations. Models are useful for many reasons: first and foremost the availability of powerful analysis and design methods. 
Another readily discernible advantage is that models are vastly compressed compared to the image representation, and the latter holds only on finite horizons unless trajectories are woven together \cite{markovsky2005algorithms}; see also Remark~\ref{rem: On data lengths}. \oprocend \end{remark}} \section{Direct and Indirect Data-Driven Control} \label{Sec: Direct and Indirect Data-Driven Control} We present different data-driven control formulations along with assumptions under which the formulations are consistent. These assumptions are used only for consistency statements and not for our main results, but they will prove insightful. \subsection{Optimal Control Problem} \label{subsec:opt-ctr} Given a plant with {\em plant behavior} ${\mathscr{B}}^P \in {\mathscr{L}}_{m,\ell}^{q,n}$, a ${T_{\textup{ini}}}$-length prefix trajectory ${w_{\textup{ini}}} = \bigl(w(-{T_{\textup{ini}}}+1), \dots, w(0) \bigr) \in {\mathscr{B}}^{P}_{{T_{\textup{ini}}}}$, an $L$-length reference trajectory $w_r\in\real^{qL}$ in a {\em reference behavior} ${\mathscr{B}}^R$, {\tb and a set of {\em admissible trajectories\/} $\mathcal W \subset \real^{qL}$,}\ consider the finite-time {\em optimal control problem} \begin{mini} {w {\tb \in \mathcal W}}{{c_\text{ctrl}}(w-w_{r}) }{\label{eq:OPT}}{\boldsymbol{C}:} \addConstraint{ {{w_{\textup{ini}}} \wedge w \in {\mathscr{B}}^{P}_{{T_{\textup{ini}}}+L}} \,. } \end{mini} For ${T_{\textup{ini}}} \geq \ell$ the prefix trajectory ${w_{\textup{ini}}}$ implicitly sets the initial condition for the optimal control problem \eqref{eq:OPT}; see Section~\ref{subsec: estimation}. In the case of an uncertain initial condition, the prefix ${w_{\textup{ini}}}$ can be made a decision variable and included via a penalty term in the cost; cf.\ \cite{berberich2020data,markovsky2016,JC-JL-FD:18,JC-JL-FD:20,JC-JL-FD:19-CDC}. 
We refrain from such extensions here.%
{\tb Typically, the cost ${c_\text{ctrl}}:\, \mathbb R^{qL} \to \real_{\geq 0}$ includes a running and a terminal cost. The set $\mathcal W \subset \real^{qL}$ captures constraints on admissible trajectories (e.g., capturing input saturation). We denote a minimizer (if it exists) of the optimization problem $\boldsymbol{C}$ in \eqref{eq:OPT} by $w^{\star}_{C}$. We make the following regularity assumptions: \begin{enumerate}\addtolength{\itemindent}{5pt} \renewcommand{\labelenumi}{{\theenumi}}\renewcommand{\theenumi}{(A.\arabic{enumi})}\addtocounter{enumi}{1} \item\label{ass:ctr cost} ${c_\text{ctrl}}:\, \mathbb R^{qL} \to \real_{\geq 0}$ is a convex function that achieves its minimum when $w = w_{r}$; $\mathcal W \subset \real^{qL}$ is closed,\,convex, and non-empty; and $(\real^{q{T_{\textup{ini}}}} \oplus \mathcal W) \cap {\mathscr{B}}^{P}_{{T_{\textup{ini}}}+L}$ is\,{non-empty.} \end{enumerate} The last assumption ensures that $\mathcal W$ is {\em viable}, i.e., a trajectory of ${\mathscr{B}}^{P}$ originating anywhere can be contained within $\mathcal W$ for $L$ steps. Problem \eqref{eq:OPT} is thus convex with closed, convex, and non-empty feasible set due to Assumption~\ref{ass:ctr cost} and because ${{\mathscr{B}}^{P}_{{T_{\textup{ini}}}+L}}$ is a subspace; see Lemma \ref{Lemma: subspace dimension}. Under further standard assumptions, existence and uniqueness of a (global) minimum can be assured, but we do not impose further structure. For problem \eqref{eq:OPT}, we do not necessarily assume ${\mathscr{B}}^P = {\mathscr{B}}^R$, since we often ask systems to track non-plant behavior (e.g., steps). Likewise, we generally do not assume feasibility: $w_{r} \in \mathcal W$. However, such assumptions connect to {model reference control} and allow us to state consistency results, as presented next. 
\begin{enumerate}\addtolength{\itemindent}{5pt} \renewcommand{\labelenumi}{{\theenumi}}\renewcommand{\theenumi}{(A.\arabic{enumi})}\addtocounter{enumi}{2} \item\label{ass: Bp = Br} ${w_{\textup{ini}}} \wedge w_{r} \in {(\real^{q{T_{\textup{ini}}}} \oplus \mathcal W)} \cap {\mathscr{B}}^{P}_{{T_{\textup{ini}}}+L}$, i.e., the reference $w_{r} \in {\mathscr{B}}^R_{L}$ is compatible with the prefix trajectory ${w_{\textup{ini}}}$, the plant ${\mathscr{B}}^{P}$, and the constraints $\mathcal W$. \end{enumerate} } \begin{fact}\label{Fact: minimum of C} Under Assumptions \ref{ass:ctr cost} and \ref{ass: Bp = Br}, the minimum of the control problem $\boldsymbol{C}$ in \eqref{eq:OPT} is achieved for $w^{\star}_{C}=w_{r}$. \end{fact} {\tb Fact~\ref{Fact: minimum of C} (and similar consistency results later) follows since $w^{\star}_{C}=w_{r}$ is feasible and achieves the minimum of the cost. Fact~\ref{Fact: minimum of C} (and consistency Assumption \ref{ass: Bp = Br}) serve to establish ground-truth for comparing different problem formulations.} Problem \eqref{eq:OPT} becomes a ``classical'' control problem if a parametric model for the plant ${\mathscr{B}}^P$ is available. The latter is usually obtained from data through system identification. \subsection{Indirect Data-Driven Control via System Identification} \label{subsec:id+ctr} Given a $T$-length trajectory $w_{d} \in\real^{qT}$ as {\em identification data}, conventional system identification and control consists of three steps. The first step, {\em model class selection}, amounts to choosing the set of candidate models, e.g., ${\mathscr{L}}_{m,\ell}^{q,n}$ specified by the complexity $(q,m,n,\ell)$. The second step, {\em model fitting}, chooses an element from the model class that fits the data best in some specified sense, e.g., the distance between the data $w_{d}$ and the model ${\mathscr{B}}$. 
This step is often synonymous with learning a parametric model (e.g., PEM), though some classic (e.g., ETFE) and modern (e.g., kernel-based) methods are non-parametric and bypass the model {\tb order} selection; see \cite{pillonetto2014kernel} for a review (and the acronyms). However, for control design the non-parametric models again have to be projected onto a behavior in ${\mathscr{L}}_{m,\ell}^{q,n}$. Both approaches can be abstracted\,as \begin{mini} {\hat w_{d},\widehat{\mathscr{B}}}{{c_\text{id}}(\hat w_{d}-w_{d}) }{\label{eq:ID}}{\boldsymbol{ID}:} \addConstraint{ \hat w_{d} \in \widehat{\mathscr{B}}_{T}\,,\; \widehat{\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n}\,. } \end{mini} It is useful to think of the identification loss ${c_\text{id}}: \mathbb R^{qT} \!\to\! \real_{\geq 0}$ as a distance. Given the data $w_{d}$, problem \eqref{eq:ID} seeks the closest LTI behavior within the class ${\mathscr{L}}_{m,\ell}^{q,n}$, i.e., the closest subspace with dimensions as in Lemma \ref{Lemma: subspace dimension}. We denote a minimizer of \eqref{eq:ID} by $\bigl(\hat w^{\star}_{d,ID},\widehat{\mathscr{B}}_{ID}^{\star}\bigr)$ and assume the following {about}\ ${c_\text{id}}(\cdot)$: \begin{enumerate}\addtolength{\itemindent}{5pt} \renewcommand{\labelenumi}{{\theenumi}}\renewcommand{\theenumi}{(A.\arabic{enumi})}\addtocounter{enumi}{3} \item \label{ass: cid} ${c_\text{id}}(\cdot)$ achieves its minimum when $\hat w_{d} \!=\! w_{d}$. \end{enumerate} {\tb Note that existence and uniqueness of minimizers of \eqref{eq:ID} hinge not only upon the regularity of cost and constraint functions, but also upon the data. In general, identification problems are non-convex. 
For now we keep problem \eqref{eq:ID} abstract and general and resort to more specific formulations in Section~\ref{Sec: Bridging}.} Exact identification of the true system requires exact data $w_{d} \in {\mathscr{B}}^P_{T}$ and an identifiability assumption \cite[Theorem 15]{IM-FD:20} which assures that ${\mathscr{B}}^P$ can be recovered from $w_{d}$: \begin{enumerate}\addtolength{\itemindent}{5pt} \renewcommand{\labelenumi}{{\theenumi}}\renewcommand{\theenumi}{(A.\arabic{enumi})}\addtocounter{enumi}{4} \item \label{ass: Bp=Bid} $w_{d} \in {\mathscr{B}}^P_{T}$, i.e., $w_{d}$ is a valid trajectory of ${\mathscr{B}}^P_{T}$; and \item \label{ass: ell + n + 1 pe} rank$\left({\mathscr{H}}_{\ell+1}(w_{d})\right)=m(\ell+1) + n$. \end{enumerate} \begin{fact}\label{Fact: minimum of ID} Under Assumptions \ref{ass: cid}--\ref{ass: ell + n + 1 pe}, the minimum value of the system identification problem $\boldsymbol{ID}$ in \eqref{eq:ID} is achieved for $\hat w_{d,ID}^{\star}=w_{d}$ and $\widehat{\mathscr{B}}^{\star}_{ID} = {\mathscr{B}}^P$. \end{fact} {\tb We again note that the (arguably strong) Assumptions \ref{ass: Bp = Br}, \ref{ass: Bp=Bid}, and \ref{ass: ell + n + 1 pe} are used only for consistency statements (such as Fact~\ref{Fact: minimum of ID}) and not for our later main results and simulations.} Finally, equipped with an identified behavior $\widehat{\mathscr{B}}^{\star} \in {\mathscr{L}}_{m,\ell}^{q,n}$, the third step is {\em certainty-equivalence control}: solve the optimal control problem \eqref{eq:OPT} subject to the identified model: \begin{mini} { w {\tb \in \mathcal W}}{{c_\text{ctrl}}( w-w_{r}) }{\label{eq:OPT-CE}}{} \addConstraint{ {{w_{\textup{ini}}} \wedge w \in \widehat{\mathscr{B}}^{\star}_{{T_{\textup{ini}}}+L}} \,. } \end{mini} In \eqref{eq:OPT-CE}, ${c_\text{ctrl}}( w-w_{r})$ is merely a {\em surrogate} (predicted) control error since $ w \in \widehat{\mathscr{B}}^{\star}$, the identified model, rather than $ w \in {\mathscr{B}}^{P}$. 
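To make the identifiability rank condition $\operatorname{rank}{\mathscr{H}}_{\ell+1}(w_{d})=m(\ell+1)+n$ concrete, the following minimal numerical sketch (ours, not part of the paper) builds a block-Hankel matrix for a hypothetical first-order SISO plant ($q=2$, $m=1$, $n=\ell=1$); the helper \texttt{block\_hankel} and all parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical first-order SISO plant: y(t+1) = 0.5*y(t) + u(t),
# so q = 2 (one input, one output), m = 1, n = 1, lag ell = 1.
rng = np.random.default_rng(0)
T = 50
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = 0.5 * y[t] + u[t]
w_d = np.column_stack([u, y])  # data trajectory, one (u, y) sample per row

def block_hankel(w, depth):
    # depth-deep block-Hankel matrix of a vector-valued signal (samples in rows)
    cols = w.shape[0] - depth + 1
    return np.vstack([w[i:i + cols].T for i in range(depth)])

m, n, ell = 1, 1, 1
H = block_hankel(w_d, ell + 1)  # here: 4 rows, T - 1 columns
# Identifiability rank condition: rank H_{ell+1}(w_d) = m*(ell+1) + n = 3.
# The single rank deficiency reflects the plant law y(t+1) = 0.5*y(t) + u(t).
assert np.linalg.matrix_rank(H) == m * (ell + 1) + n
```

For exact, persistently exciting data the Hankel matrix loses exactly one rank per scalar plant equation; with noisy data it generically becomes full rank, which is the failure mode discussed later for the direct approach.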
Putting both the system identification \eqref{eq:ID} and certainty-equivalence control \eqref{eq:OPT-CE} together, we arrive at {indirect data-driven control} formulated as the {\em bi-level problem}% \begin{mini}% {w {\tb \in \mathcal W}}{{c_\text{ctrl}}(w-w_{r}) }{\label{eq:OPT-BL}}{\!\!\!\!\boldsymbol{BL}:\!\!\!} \addConstraint{ {{w_{\textup{ini}}} \wedge w \in \widehat{\mathscr{B}}^{\star}_{{T_{\textup{ini}}}+L}} } \addConstraint{\where \quad \widehat{\mathscr{B}}^{\star} \in \argmin_{\hat w_{d},\,\,\,\widehat{{\mathscr{B}}}} \; {c_\text{id}}(\hat w_{d}-w_{d}) } \addConstraint{\qquad\qquad\! \st \quad \hat w_{d} \in\,\widehat{\mathscr{B}}_{T}\,, } \addConstraint{\qquad\qquad \qquad\qquad\quad\;\, \widehat{\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n} \,. } \end{mini} The bi-level problem structure in \eqref{eq:OPT-BL} reflects the sequential system identification and control tasks: first a model is fitted to the data in the inner identification problem, and then the model is used for control in the outer problem. We denote a minimizer for the inner problem of \eqref{eq:OPT-BL} by $\bigl(\hat w^{\star}_{d,BL},\widehat {\mathscr{B}}^{\star}_{BL}\bigr)$ and a minimizer for the outer problem of \eqref{eq:OPT-BL} by $w^{\star}_{BL}$. {\tb \begin{remark}[Further problem levels and the value of models] The bi-level formulation \eqref{eq:OPT-BL} is only the tip of the iceberg, and the overall design may feature further nested levels, e.g., optimization of the model selection hyper-parameters $(n,\ell)$, uncertainty quantification, etc. We deliberately neglect these levels here and focus on identification and control. Since our ultimate interest is control, we treat models as secondary objects, i.e., they serve a merely auxiliary purpose. Of course, models are desired for other reasons: system design, analysis, the reasons in Remark~\ref{rem: models vs. data}, etc. 
\oprocend \end{remark}} Under suitable consistency assumptions, the sequential system identification and control approach in \eqref{eq:OPT-BL} is optimal. \begin{fact}\label{Fact: minimum of BL} Consider the optimal control problem $\boldsymbol{C}$ in \eqref{eq:OPT} and the bi-level problem $\boldsymbol{BL}$ in \eqref{eq:OPT-BL}. Then \begin{enumerate} \item under Assumptions \ref{ass: cid}--\ref{ass: ell + n + 1 pe}, the bi-level problem $\boldsymbol{BL}$ reduces to the optimal control problem $\boldsymbol{C}$; and \item under the additional Assumptions \ref{ass:ctr cost} and \ref{ass: Bp = Br}, the minimum value of the bi-level problem $\boldsymbol{BL}$ is achieved for $\hat w_{d,BL}^{\star}=w_{d}$, $\widehat{\mathscr{B}}^{\star}_{BL} = {\mathscr{B}}^P$, and $w^{\star}_{BL}= w_{r}$. \end{enumerate} \end{fact} The first statement echoes the ``model as well as possible'' paradigm and a separation of control\,and\,identification, albeit in a simple setting; see \cite[Section 4.2]{hjalmarsson2005experiment} for further reading. \subsection{Direct Data-Driven Control via the Image Representation} \label{subsec: Direct Data-Driven Control via the Image Representation} The direct data-driven control approach pursued here hinges upon the Fundamental Lemma~\ref{lemma: fundamental lemma}. A direct corollary of {\tb the latter} is that the prediction and estimation trajectories have to be within the column span of the data Hankel matrix. \begin{corollary}[Direct data-driven control]\label{Corollary: Data-driven optimal control} Assume that Assumptions \ref{ass:L+n pe} and \ref{ass: Bp=Bid} hold with $L$ replaced by ${T_{\textup{ini}}}+L$; then the optimal control problem $\boldsymbol{C}$ in \eqref{eq:OPT} is equivalent to \begin{mini} {w {\tb\in \mathcal W}}{\hspace{-7pt}{c_\text{ctrl}}(w-w_{r}) }{\label{eq:OPT-D}}{\hspace{-30pt}\boldsymbol{D}\!:\!\!\!} \addConstraint{\hspace{-7pt} \begin{bmatrix} {w_{\textup{ini}}}\\w\end{bmatrix} \!\in\! 
\text{colspan} \left({\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d}) \right)\!, } \end{mini} i.e., the minimizers and minima of \eqref{eq:OPT} and \eqref{eq:OPT-D} coincide. \end{corollary} \begin{fact} Under Assumptions \ref{ass:L+n pe} with $L$ replaced by ${T_{\textup{ini}}}+L$, \ref{ass:ctr cost}, \ref{ass: Bp = Br}, and \ref{ass: Bp=Bid}, the minimum value of \eqref{eq:OPT-D} is achieved for $w^{\star}_{D} = w_{r}$. \end{fact} {\tb \begin{remark}[Data lengths]\label{rem: On data lengths} It is instructive to compare the sample complexity of the direct and indirect approaches \eqref{eq:OPT-D} and \eqref{eq:OPT-BL}. Due to Assumption \ref{ass:L+n pe}, \eqref{eq:OPT-D} requires more data than the identification Assumption {\ref{ass: ell + n + 1 pe}}. This discrepancy is due to \eqref{eq:OPT-D} seeking a multi-step predictor, whereas identification \eqref{eq:ID} seeks a single-step predictor to be applied recursively. By weaving multiple trajectories of length $\ell+1$, Assumption \ref{ass:L+n pe} can be eased so that the data lengths coincide; see \cite[Lemma 3]{markovsky2005algorithms}. \oprocend \end{remark} In comparison with system identification, the model order selection is implicit in Assumption \ref{ass:L+n pe} and encoded in the rank of the Hankel matrix ${\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d})$ -- at least, for exact data $w_{d} \in {\mathscr{B}}^{P}_{T}$. If the data $w_{d}$ is noisy, then ${\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d})$ likely has full rank, and the constraint of \eqref{eq:OPT-D} is vacuous. Thus, $w=w_{r}$ uniquely minimizes the surrogate control error, but the realized control error may be arbitrarily different. In short, certainty equivalence can fail arbitrarily poorly in direct data-driven control, and the direct approach has to be robustified. 
This is a major difference from the indirect (first identify, then control) approach \eqref{eq:OPT-BL}: one purpose of identification is to filter noisy data by projecting onto a deterministic behavior. To go beyond certainty equivalence, the DeePC approaches \cite{berberich2020data,JC-JL-FD:19-CDC,JC-JL-FD:20,xue2020data,JC-JL-FD:18,LH-JZ-JL-FD:20,LH-JZ-JL-FD:01} reformulate the constraint in \eqref{eq:OPT-D} as $\text{col}({w_{\textup{ini}}},w) = {\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d}) g$ for some $g$ and add a robustifying regularizer.}% \begin{mini} {w {\tb \in \mathcal W},g}{{c_\text{ctrl}}(w-w_{r}) + \lambda \cdot h(g)}{\label{eq:OPT-DR}}{\boldsymbol{D}_{\lambda}:} \addConstraint{ \begin{bmatrix} {w_{\textup{ini}}}\\w\end{bmatrix} = {\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d}) g } \end{mini} To provide an intuition, every column of ${\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d})$ is a trajectory of ${\mathscr{B}}^{P}_{{T_{\textup{ini}}}+L}$, and the decision variable $g$ linearly combines these columns to form the optimal trajectory $w$ -- consistent with the prefix trajectory ${w_{\textup{ini}}}$ and regularized by $h(g)$. The regularization function $h(\cdot)$ and the parameter $\lambda$ are nonnegative. Choices for $h(\cdot)$ are one-norms \cite{JC-JL-FD:18}, two-norms \cite{xue2020data}, squared two-norms \cite{berberich2020data,LH-JZ-JL-FD:20}, or arbitrary $p$-norms \cite{JC-JL-FD:19-CDC,JC-JL-FD:20,LH-JZ-JL-FD:01}. {\tb The regularizers can be related to robust optimization formulations in deterministic \cite{xue2020data,LH-JZ-JL-FD:20,LH-JZ-JL-FD:01} or stochastic settings \cite{JC-JL-FD:19-CDC,JC-JL-FD:20}, where $\lambda$ is a design parameter specifying the size of the assumed uncertainty set. 
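The mechanics of the regularized problem $\boldsymbol{D}_{\lambda}$ can be sketched numerically. The snippet below (ours, not from the paper) instantiates \eqref{eq:OPT-DR} with a quadratic tracking cost and the squared two-norm regularizer $h(g)=\|g\|_{2}^{2}$ for a hypothetical first-order SISO plant, and solves the resulting equality-constrained quadratic program via its KKT system; all plant and horizon parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical first-order SISO plant y(t+1) = 0.5*y(t) + u(t); exact data.
rng = np.random.default_rng(1)
T, T_ini, L = 60, 2, 3
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = 0.5 * y[t] + u[t]

def hankel(sig, depth):
    # depth-deep Hankel matrix of a scalar signal
    cols = len(sig) - depth + 1
    return np.vstack([sig[i:i + cols] for i in range(depth)])

Hu, Hy = hankel(u, T_ini + L), hankel(y, T_ini + L)
Up, Uf = Hu[:T_ini], Hu[T_ini:]
Yp, Yf = Hy[:T_ini], Hy[T_ini:]

# Prefix trajectory (fixes the initial condition) and a reference to track.
u_ini, y_ini = u[:T_ini], y[:T_ini]
w_ini = np.concatenate([u_ini, y_ini])
w_r = np.concatenate([np.zeros(L), np.ones(L)])  # (u_r, y_r)

# D_lambda with quadratic cost and h(g) = ||g||_2^2:
#   min_g ||F g - w_r||^2 + lam * ||g||^2   s.t.   A g = w_ini,
# solved via its KKT system (consistent but possibly rank-deficient,
# hence lstsq instead of solve).
A = np.vstack([Up, Yp])
F = np.vstack([Uf, Yf])
lam, n_g = 1e-4, Hu.shape[1]
KKT = np.block([[F.T @ F + lam * np.eye(n_g), A.T],
                [A, np.zeros((A.shape[0], A.shape[0]))]])
rhs = np.concatenate([F.T @ w_r, w_ini])
g = np.linalg.lstsq(KKT, rhs, rcond=None)[0][:n_g]
u_star, y_star = Uf @ g, Yf @ g  # planned inputs and predicted outputs
```

Since the data here are exact, any feasible $g$ appends a genuine plant trajectory to $w_{\textup{ini}}$ regardless of $\lambda$; with noisy data this no longer holds, which is where the choice of $h(\cdot)$ becomes decisive.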
The regularized formulation \eqref{eq:OPT-DR} has proved itself in practical (nonlinear) control systems \cite{LH-JZ-JL-FD:01,LH-JC-JL-FD:19,PC-AF-SB-FD:20,EE-JC-PB-JL-FD:19,LH-JZ-JL-FD:20}.} \section{Bridging Direct \& Indirect Approaches} \label{Sec: Bridging} \subsection{Multi-Objective Data-Driven Control} \label{subsec: multi-objective} From an optimization perspective, it is natural to lift the bi-level problem \eqref{eq:OPT-BL} to a {\em multi-criteria problem} simultaneously optimizing for identification and control objectives. Using weighted sum scalarization, the multi-criteria problem\,is\vspace{-5pt}% \begin{mini}% {w {\tb \in \mathcal W},\hat w_{d},\widehat{\mathscr{B}}}{\gamma \cdot {c_\text{id}}(\hat w_{d}-w_{d}) \,+\, {c_\text{ctrl}}(w-w_{r}) }{\label{eq:OPT-SMO}}{\!\!\!\boldsymbol{MC}_{\gamma}\!:\!\!\!} \addConstraint{ {{w_{\textup{ini}}} \wedge w \in \widehat{\mathscr{B}}_{{T_{\textup{ini}}}+L}}\,,\; \hat w_{d} \in \widehat{\mathscr{B}}_{T}\,,} \addConstraint{ \widehat{\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n} \,, } \end{mini} where the trade-off parameter $\gamma \geq 0$ traces the Pareto front between the identification and optimal control objectives. The multi-criteria problem \eqref{eq:OPT-SMO} can be interpreted as fitting a model $\widehat{\mathscr{B}}$ simultaneously to two data sets: the identification data $w_{d}$ and the reference $w_{r}$. From a control perspective, the identification criterion biases the solution $w \in \widehat{\mathscr{B}}$ to adhere to the observed data $w_{d}$ rather than merely matching the to-be-tracked reference $w_{r}$. Likewise, from the other side, the identification criterion is biased by the control objective. In short, control and identification {\em regularize} each other, in the spirit of identification for control \cite{hjalmarsson2005experiment,hjalmarsson1996,geversaa2005,schrama1992}. 
A similar formulation has been proposed in \cite{formentin2018core}, interpolating between PEM identification and a model-reference control objective. {\tb Likewise, the data-driven model reference control formulation in \cite{campestrini2017data} interpolates between a direct and an indirect approach. Finally, dual control approaches consider similar multi-criteria formulations balancing exploration (for identification) and exploitation (i.e., optimal control) \cite{feldbaum1963dual,ferizbegovic2019learning,larsson2016application,iannelli2020structured}.} We denote a minimizer of \eqref{eq:OPT-SMO} by $\bigl(w^{\star}_{MC},\hat w^{\star}_{d,MC},\widehat {\mathscr{B}}^{\star}_{MC}\bigr)$. \begin{fact} Under Assumptions \ref{ass:ctr cost}--\ref{ass: ell + n + 1 pe}, for any $\gamma \geq 0$ the minimum of the parametric multi-criteria problem $\boldsymbol{MC}_{\gamma}$ is achieved for $\hat w_{d,MC}^{\star}=w_{d}$, $\widehat{\mathscr{B}}^{\star}_{MC} = {\mathscr{B}}^P$, and $w^{\star}_{MC} = w_{r}$. \end{fact} Different points on the Pareto front of \eqref{eq:OPT-SMO} place different emphasis on the control and identification objectives. Below we formalize that for $\gamma$ sufficiently large, the multi-criteria problem \eqref{eq:OPT-SMO} recovers the bi-level problem \eqref{eq:OPT-BL} corresponding to sequential system identification and control. We follow standard penalty arguments from bi-level optimization \cite{ye1995,ye1997exact}, which are particularly tractable here since \eqref{eq:OPT-BL} is only weakly coupled: the inner problem does not depend on the decision variable $w$ of the outer problem. {\tb Assume that the inner problem attains a minimum (termed the value function):} \begin{mini} {\hat w_{d},\,\,\,\widehat{\mathscr{B}}}{{c_\text{id}}(\hat w_{d}-w_{d}) }{\label{eq:ID-value}}{\varphi \, = } \addConstraint{ \hat w_{d} \in\,\, \widehat{\mathscr{B}}_{T}\,,\;\, \widehat{\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n}\,. 
} \end{mini} The bi-level problem \eqref{eq:OPT-BL} then reads equivalently as% \begin{mini} {w{\tb \in \mathcal W},\hat w_{d},\widehat{\mathscr{B}}}{{c_\text{ctrl}}(w-w_{r}) }{\label{eq:OPT-BL-value}}{} \addConstraint{ {{w_{\textup{ini}}} \wedge w \in \widehat{\mathscr{B}}_{{T_{\textup{ini}}}+L}}\,,\; \hat w_{d} \in \widehat{\mathscr{B}}_{T}\,, } \addConstraint{ \widehat{\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n}\,,\; {c_\text{id}}(\hat w_{d}-w_{d}) - \varphi = 0 \,. } \end{mini} At this point the reader is encouraged to review the definition and salient properties of a constraint qualification termed partial calmness \cite{ye1995,ye1997exact}; see the appendix. If problem~\eqref{eq:OPT-BL-value} is partially calm at a local minimizer and ${c_\text{ctrl}}(\cdot)$ is continuous, then there is $\gamma^{\star}>0$ so that, for all $\gamma > \gamma^{\star}$, \eqref{eq:OPT-BL-value} equals \begin{mini} {w{\tb \in \mathcal W},\hat w_{d},\widehat{\mathscr{B}}}{\!\!\!\gamma \cdot \bigl| {c_\text{id}}(\hat w_{d}-w_{d}) - \varphi \bigr| \,+\, {c_\text{ctrl}}(w-w_{r}) }{\label{eq:OPT-BL-value-2}}{\!\!\!\!\!\!} \addConstraint{ {\!\!\!{w_{\textup{ini}}} \wedge w \in \widehat{\mathscr{B}}_{{T_{\textup{ini}}}+L}}\,,\; \hat w_{d} \in \widehat{\mathscr{B}}_{T}\,,\; \widehat{\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n} \,, } \end{mini} that is, the local minimizers of \eqref{eq:OPT-BL-value} and \eqref{eq:OPT-BL-value-2} coincide; see Proposition~\ref{Proposition: partial calmness and exact penalty}. We now drop the absolute value (since $ {c_\text{id}}(\hat w_{d}-w_{d}) - \varphi \geq 0$) and the constant $\varphi$ (which in our case does not depend on the variable $w$ of the outer problem) from the objective of \eqref{eq:OPT-BL-value-2} to recover problem \eqref{eq:OPT-SMO}. % We have thus established a chain of equivalences relating the bi-level and multi-criteria problems. We summarize our discussion below. 
\begin{proposition}[Upper tail of the Pareto front of $\boldsymbol{MC}_{\gamma}$]\label{Proposition: upper tail of the Pareto front} Consider the parametric multi-criteria problem $\boldsymbol{MC}_{\gamma}$ in \eqref{eq:OPT-SMO} and the bi-level problem $\boldsymbol{BL}$ in \eqref{eq:OPT-BL}. Assume that {\tb the inner identification problem admits a minimum as in \eqref{eq:ID-value}, \eqref{eq:OPT-BL-value} is partially calm at any local minimizer, and ${c_\text{ctrl}}(\cdot)$ is continuous.} Then there is $\gamma^{\star}>0$ so that for $\gamma > \gamma^{\star}$ the problem $\boldsymbol{MC}_{\gamma}$ is equivalent to $\boldsymbol{BL}$, i.e., $ w^{\star}_{MC} = w^{\star}_{BL}$, $\hat w^{\star}_{d,MC} = \hat w^{\star}_{d,BL}$, and $\widehat {\mathscr{B}}^{\star}_{MC} = \widehat {\mathscr{B}}^{\star}_{BL}$. Moreover, the optimal values of $\boldsymbol{MC}_{\gamma}$ and $\boldsymbol{BL}$ coincide up to the constant $\gamma \cdot \varphi$ with $\varphi$ defined in \eqref{eq:ID-value}. \end{proposition} The following comments are in order regarding partial calmness. As discussed in Proposition \ref{Proposition: partial calmness and exact penalty}, partial calmness is equivalent to the constraint $ {c_\text{id}}(\hat w_{d}-w_{d}) - \varphi \geq 0$ serving as an exact penalty. Partial calmness is satisfied, for instance, by appealing to Proposition~\ref{Proposition: LipschitzPenalty}, if the identification cost ${c_\text{id}}(\cdot)$ can be phrased as a distance (see the discussion following the identification problem \eqref{eq:ID}) and ${c_\text{ctrl}}(\cdot)$ is Lipschitz continuous over the feasible set, e.g., if the feasible set is compact (due to constraints) or the control performance is measured by a norm or Huber loss. The Lipschitz constant then serves as a lower estimate for $\gamma^{\star}$. A non-Lipschitz cost requires $\gamma \to \infty$ as a sufficient condition. 
Note that for $\gamma \to \infty$ Proposition~\ref{Proposition: upper tail of the Pareto front} holds without assumptions, since \eqref{eq:OPT-BL-value-2} is merely an indicator function reformulation of \eqref{eq:OPT-BL-value}. Our relaxations in the next sections will, among others, drop the requirement on $\gamma$ sufficiently large as well as the LTI complexity specification $\widehat{\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n}$. {\tb Even if the identification problem \eqref{eq:ID} is convex, the multi-criteria problem \eqref{eq:OPT-SMO} is not,} since it simultaneously optimizes over the to-be-identified model ${\mathscr{B}}$ and the to-be-designed trajectory $w$. This can be spotted in a kernel representation: the constraint $ w \in \widehat{\mathscr{B}}_{L}$ takes the form $\widehat R(\sigma) w = 0$, where both $\widehat R$ and $ w$ are variables. Other representations lead to the same conclusions. \begin{proposition}\label{Proposition: } Consider the multi-criteria problem \eqref{eq:OPT-SMO} and a kernel representation of the to-be-identified behavior: $\widehat{\mathscr{B}} = \text{kernel}(\widehat R(\sigma))$. Then the feasible set of \eqref{eq:OPT-SMO} is not convex. \end{proposition} We believe that the multi-criteria problem is interesting in its own right: studying its Pareto front and choosing an optimal trade-off parameter may yield superior performance. Our problem setup thus far has been conceptual rather than practically useful. Below, we consider concrete problem formulations and turn our conceptual insights into concise results. \subsection{Bridging Towards Subspace Predictive Control (SPC)} \label{subsec: subspace ARX formulations} We explain SPC from the perspective of the Fundamental Lemma~\ref{lemma: fundamental lemma}, stating that any trajectory ${w_{\textup{ini}}} \wedge w \in {\mathscr{B}}_{{T_{\textup{ini}}}+L}^{P}$ lies in $\text{colspan} \left({\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d}) \right)$. 
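The column-span property underlying this subsection can be verified directly in code. The sketch below (ours, not from the paper) simulates a hypothetical first-order SISO plant, builds ${\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d})$, and checks that a fresh trajectory of the same plant lies in its column span up to numerical precision; all names and parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical first-order SISO plant y(t+1) = 0.5*y(t) + u(t); exact data.
rng = np.random.default_rng(3)
T, depth = 60, 5  # depth = T_ini + L
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = 0.5 * y[t] + u[t]
w = np.column_stack([u, y])

def block_hankel(w, depth):
    # depth-deep block-Hankel matrix of a vector-valued signal (samples in rows)
    cols = w.shape[0] - depth + 1
    return np.vstack([w[i:i + cols].T for i in range(depth)])

H = block_hankel(w, depth)  # 10 x 56

# A fresh depth-long trajectory of the same plant ...
u2 = rng.standard_normal(depth)
y2 = np.zeros(depth)
y2[0] = -1.3  # arbitrary initial condition
for t in range(depth - 1):
    y2[t + 1] = 0.5 * y2[t] + u2[t]
traj = np.column_stack([u2, y2]).reshape(-1)  # (u, y) samples, time-interleaved

# ... lies in colspan(H): the least-squares residual is (numerically) zero.
g = np.linalg.lstsq(H, traj, rcond=None)[0]
assert np.linalg.norm(H @ g - traj) < 1e-8
```

With noisy data the same residual would generically be nonzero only because `H` becomes full rank, which echoes the vacuousness discussion above.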
Recall that ${w_{\textup{ini}}}$ is a prefix trajectory of length ${T_{\textup{ini}}} \geq \ell$ setting the initial condition, and $w$ is a future trajectory of length $L>1$ to be designed via optimal control. Accordingly, permute and partition $w$ and the Hankel matrix \begin{equation*} \begin{bmatrix} {w_{\textup{ini}}} \\ w \end{bmatrix} \sim \begin{bmatrix} {u_{\textup{ini}}} \\ u \\ \hline {y_{\textup{ini}}} \\ y \end{bmatrix} ,\,\; {\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d}) \sim \begin{bmatrix} {U_{\mathrm{p}}} \\ {U_{\mathrm{f}}} \\\hline {Y_{\mathrm{p}}} \\ {Y_{\mathrm{f}}} \end{bmatrix} = \begin{bmatrix} {\mathscr{H}}_{{T_{\textup{ini}}} + L}(u_{d}) \\\hline {\mathscr{H}}_{{T_{\textup{ini}}} + L}(y_{d}) \end{bmatrix} \,, \end{equation*} where ${u_{\textup{ini}}} \in \real^{m{T_{\textup{ini}}}}$, ${y_{\textup{ini}}} \in \real^{(q-m){T_{\textup{ini}}}}$, and $\sim$ denotes similarity under a coordinate permutation. The subscripts ``p'' and ``f'' stand for ``past'' and ``future''. We seek a linear model, i.e., a matrix $K$, relating past and future as \begin{equation} y = \underbrace{ \begin{bmatrix} K_{p} & \vline& K_{f} \end{bmatrix}}_{=K} \cdot \begin{bmatrix} {u_{\textup{ini}}} \\ {y_{\textup{ini}}} \\\hline u \end{bmatrix} \label{eq: ARX transition model} \,. \end{equation} The multi-step predictor $K$ is found from Hankel matrix data by means of the least-square criterion \cite[Section\,3.4]{huang2008dynamic} \begin{mini} {K}{ \left\|{Y_{\mathrm{f}}} - K \cdot \begin{bmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\ {U_{\mathrm{f}}} \end{bmatrix}\right\|_{F}^{2}} {\label{eq:pem}}{} \,, \end{mini} where $\|\cdot\|_{F}$ is the Frobenius norm. Via the Moore-Penrose inverse, the solution of \eqref{eq:pem} is the classic SPC predictor \cite{favoreel1999spc}% \begin{equation} K = {Y_{\mathrm{f}}} \begin{bmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\{U_{\mathrm{f}}} \end{bmatrix}^{\dagger} \,. 
\label{eq: SPC predictor} \end{equation} It is insightful to compare equation \eqref{eq: ARX transition model} and the matrices $K_{p},K_{f}$ to equation \eqref{eq: IOS representation} and the extended observability and impulse response matrices ${\mathscr{O}}_{L}$ and ${\mathscr{G}}_{L}$, respectively. One realizes that for exact data, \eqref{eq: ARX transition model} is an ARX model with rank$(K_{p})=n$ assuring LTI behavior of desired complexity and a lower block-triangular zero pattern of $K_{f}$ assuring causality. {\tb For inexact data, LTI behavior of desired complexity is promoted by low-rank approximation (typically via singular-value thresholding of $K_{p}$) \cite{favoreel1999spc}; and one aims to gain causality by heuristically thresholding $K_{f}$ towards a desired zero pattern \cite[Remark 10.1]{huang2008dynamic}, \cite[Section 3]{qin2005novel}. The causality requirement can also be omitted for offline or receding horizon control, but it is useful to condition the data on the set of causal models. These steps bring the linear relation \eqref{eq: ARX transition model} halfway towards an LTI model, though such a model has further structure, e.g., $K_{f}$ is Toeplitz, and the entries of $K_{p}$ and $K_{f}$ are coupled; see \eqref{eq: IOS representation}.} Hence, in this case the identification problem \eqref{eq:ID} {\tb is relaxed to the single, monolithic, and non-convex program} \begin{mini*} {K}{ \left\|{Y_{\mathrm{f}}} - K \cdot \begin{bmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\ {U_{\mathrm{f}}} \end{bmatrix}\right\|_{F}^{2}} {}{} \addConstraint{K = \begin{bmatrix} K_{p} & \vline& K_{f} \end{bmatrix}} \addConstraint{K_{f}\, \text{lower-block triangular}} \addConstraint{\text{rank}( K_{p}) = n}% \,,% \end{mini*}% where the lower-block-triangular specification means that all entries above the diagonal $(q-m)\times m$ blocks equal zero. 
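The pseudoinverse predictor \eqref{eq: SPC predictor} can be sanity-checked numerically. The sketch below (ours, not from the paper) forms $K = Y_{\mathrm{f}}\,[U_{\mathrm{p}};Y_{\mathrm{p}};U_{\mathrm{f}}]^{\dagger}$ from exact data of a hypothetical first-order SISO plant and verifies that it reproduces the future outputs of a fresh trajectory; the helper \texttt{hankel} and all parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical first-order SISO plant y(t+1) = 0.5*y(t) + u(t); exact data.
rng = np.random.default_rng(0)
T, T_ini, L = 60, 2, 3
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = 0.5 * y[t] + u[t]

def hankel(sig, depth):
    # depth-deep Hankel matrix of a scalar signal
    cols = len(sig) - depth + 1
    return np.vstack([sig[i:i + cols] for i in range(depth)])

Hu, Hy = hankel(u, T_ini + L), hankel(y, T_ini + L)
Up, Uf = Hu[:T_ini], Hu[T_ini:]
Yp, Yf = Hy[:T_ini], Hy[T_ini:]

# Least-square SPC predictor K = Yf * pinv([Up; Yp; Uf]).
Z = np.vstack([Up, Yp, Uf])
K = Yf @ np.linalg.pinv(Z)

# On exact data, K predicts the future outputs of a fresh trajectory.
u2 = rng.standard_normal(T_ini + L)
y2 = np.zeros(T_ini + L)
y2[0] = 0.7  # arbitrary initial condition
for t in range(T_ini + L - 1):
    y2[t + 1] = 0.5 * y2[t] + u2[t]
z = np.concatenate([u2[:T_ini], y2[:T_ini], u2[T_ini:]])
y_pred = K @ z
assert np.allclose(y_pred, y2[T_ini:], atol=1e-6)
```

For noisy data the prediction is only a least-square fit, and `K` generically violates the rank and causality structure discussed above, which is what the relaxed monolithic program tries to restore.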
{\tb We obtain a parametric version of the indirect data-driven approach \eqref{eq:OPT-BL}, where ${w_{\textup{ini}}} \wedge w \in \widehat{\mathscr{B}}^{\star}_{{T_{\textup{ini}}}+L}$ and $w \in \mathcal W = \mathcal U \times \mathcal Y$ are replaced by \eqref{eq: ARX transition model} and $(u,y) \in \mathcal U \times \mathcal Y$, respectively:}% \begin{mini} {\tb u \in \mathcal U,y \in \mathcal Y}{{c_\text{ctrl}}\left( \left[\begin{smallmatrix} y - y_{r} \\ u - u_{r} \end{smallmatrix} \right] \right) }{\!\!\!\!}{\label{eq:OPT-BL-ARX}} \addConstraint{ y = K^{\star} \cdot \begin{bmatrix} {u_{\textup{ini}}} \\ {y_{\textup{ini}}} \\ u \end{bmatrix} } \addConstraint{\where \; K^{\star} \in \argmin_{K} \; \left\|{Y_{\mathrm{f}}} - K \cdot \begin{bmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\ {U_{\mathrm{f}}} \end{bmatrix}\right\|_{F}^{2} } \addConstraint{\qquad\quad\!\; \st \quad\! \! K = \begin{bmatrix} K_{p} & \vline& K_{f} \end{bmatrix}} \addConstraint{\qquad\quad\!\; \phantom{\st} \quad\! K_{f}\, \text{lower-block triangular}} \addConstraint{\qquad\quad\!\; \phantom{\st} \quad\! \text{rank}( K_{p}) = n} \,. \end{mini}% {\tb We stress that \eqref{eq:OPT-BL-ARX} is generally not an equivalent reformulation of \eqref{eq:OPT-BL} since the inner identification does not necessarily lead to an LTI model; see the comments following equation \eqref{eq: SPC predictor}.} For comparison, consider also an instance of the direct regularized problem \eqref{eq:OPT-DR} with regularizer $h(g) = \|(I-\Pi)g\|_{p}$: \begin{mini}% {\tb u \in \mathcal U,y \in \mathcal Y,g}{{c_\text{ctrl}}\left( \left[\begin{smallmatrix} y - y_{r} \\ u - u_{r} \end{smallmatrix} \right] \right) \,+\, \lambda \cdot \|(I-\Pi)g\|_{p} }{\label{eq:ARX-SMO-final}}{} \addConstraint{ \begin{bmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\{U_{\mathrm{f}}} \\ {Y_{\mathrm{f}}} \end{bmatrix} g = \begin{bmatrix} {u_{\textup{ini}}} \\ {y_{\textup{ini}}} \\ u \\y \end{bmatrix} }\,. 
\end{mini}% Here, {\tb $\|\cdot\|_{p}$ is any $p$-norm,} $\Pi = \left[\begin{smallmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\{U_{\mathrm{f}}} \end{smallmatrix}\right]^{\dagger}\! \left[\begin{smallmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\{U_{\mathrm{f}}} \end{smallmatrix}\right]$, and $(I-\Pi)$ is an orthogonal\,projector onto the kernel of the first three block-constraint equations. {\tb The proof of Theorem~\ref{Theorem: SPC relaxation} will later show that this regularizer is in fact {\em induced} by the least-square identification~\eqref{eq:pem}, i.e., $\|(I-\Pi)g\|_{p}=0$ if and only if the least-square criterion is minimized. Hence, it robustifies the problem akin to least squares.} We state the following consistency result.% \begin{fact}\label{fact: consistency of projection} Under Assumptions \ref{ass:L+n pe} with $L$ replaced by ${T_{\textup{ini}}}+L$, \ref{ass:ctr cost}, \ref{ass: Bp = Br}, and \ref{ass: Bp=Bid}, for any $\lambda \geq 0$ the minimum of the regularized problem \eqref{eq:ARX-SMO-final} is achieved for $y^{\star} = Y_{f}g^{\star} = y_{r}$ and $u^{\star} = U_{f}g^{\star} = u_{r}$, where $\|(I-\Pi)g^{\star}\|_{p}=0$. \end{fact} {\tb \begin{remark}[Consistency of regularizers]\label{rem: consistency} Fact~\ref{fact: consistency of projection} may not appear insightful at first glance, but it highlights an important point. The projection-based regularizer $h(g)= \|(I-\Pi)g\|_{p}$ is consistent since it penalizes only the homogeneous solution to the constraint equations \eqref{eq:ARX-SMO-final} and does not affect the variables $(u,y)$. In comparison, the conventional norm-based regularizer $h(g) = \|g\|_{p}$ is {not} consistent: it also penalizes the particular solution of the constraint equations in \eqref{eq:ARX-SMO-final} and thus the variables $(u,y)$. 
Hence, even with ideal consistency Assumptions \ref{ass:L+n pe}, \ref{ass:ctr cost}, \ref{ass: Bp = Br}, and \ref{ass: Bp=Bid} in place, the norm-based regularizer $h(g) = \|g\|_{p}$ with $\lambda \neq 0$ does not lead to the ground-truth solution $y^{\star} = Y_{f}g^{\star} = y_{r}$, $u^{\star} = U_{f}g^{\star} = u_{r}$; see also Remark~\ref{rem: comments on spc theorem}. \oprocend \end{remark}} The following is the main result of this subsection. \begin{theorem}[SPC relaxation]\label{Theorem: SPC relaxation} Consider the indirect data-driven control problem \eqref{eq:OPT-BL-ARX} and the direct data-driven control problem \eqref{eq:ARX-SMO-final} parameterized by $\lambda \geq 0$. {\tb Let Assumption~\ref{ass:ctr cost} hold and} assume that ${c_\text{ctrl}}(\cdot)$ is Lipschitz continuous. For $\lambda$ sufficiently small, \eqref{eq:ARX-SMO-final} is a convex relaxation of \eqref{eq:OPT-BL-ARX}, that is, \begin{enumerate} \item[$(i)$] \eqref{eq:ARX-SMO-final} is convex, \item[$(ii)$] any feasible $(u,y)$ in \eqref{eq:OPT-BL-ARX} is feasible for \eqref{eq:ARX-SMO-final}, and \item[$(iii)$] the optimal value of \eqref{eq:ARX-SMO-final} lower-bounds that of \eqref{eq:OPT-BL-ARX}. \end{enumerate} \end{theorem} \begin{IEEEproof} First, we perform a convex relaxation by dropping the rank and block-triangularity constraints in \eqref{eq:OPT-BL-ARX}. Second, observe that the explicit solution of the inner problem, the predictor \eqref{eq: SPC predictor}, is equivalently derived as least-norm\,solution \begin{align*} y = {Y_{\mathrm{f}}} g^{\star} \;\text{ where}\; &g^{\star} = \argmin_{g} \| g\|_{2} \nonumber \\&\text{subject to} \begin{bmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\{U_{\mathrm{f}}} \end{bmatrix} g = \begin{bmatrix} {u_{\textup{ini}}} \\ {y_{\textup{ini}}} \\ u \end{bmatrix} \,. 
\end{align*} \mbox{We now insert this reformulation in the relaxation of \eqref{eq:OPT-BL-ARX}:}\vspace{-0pt}% \begin{mini}% {\tb u \in \mathcal U,y \in \mathcal Y}{{c_\text{ctrl}}\left( \left[\begin{smallmatrix} y - y_{r} \\ u - u_{r} \end{smallmatrix} \right] \right) }{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\label{eq:least-norm form}} \addConstraint{ y = {Y_{\mathrm{f}}} g^{\star}} \addConstraint{\where \quad g^{\star} \in \argmin_{ g} \; \| g\|_{2}} \addConstraint{\qquad\qquad\! \st \quad \begin{bmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\{U_{\mathrm{f}}} \end{bmatrix} g = \begin{bmatrix} {u_{\textup{ini}}} \\ {y_{\textup{ini}}} \\ u \end{bmatrix} }. \end{mini} We now follow the arguments from Section~\ref{subsec: multi-objective} to reduce the bi-level problem \eqref{eq:least-norm form} to a single-level multi-criteria problem. As in \eqref{eq:OPT-BL-value}, the inner problem can be replaced by a constraint assuring that it achieves its minimum. {\tb Here, we add an orthogonality constraint to the constraints of the inner problem: \begin{equation*} \begin{bmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\{U_{\mathrm{f}}} \end{bmatrix} g = \begin{bmatrix} {u_{\textup{ini}}} \\ {y_{\textup{ini}}} \\ u \end{bmatrix} \quad\text{and}\quad 0 = \| (I - \Pi) g \|_{p} \end{equation*}}% The {\tb orthogonality constraint $0 = \| (I - \Pi) g \|_{p}$} poses the inner optimality constraint as the distance to the subspace containing the minimizers of the inner problem. 
Retaining all constraints, \eqref{eq:least-norm form} can then be formulated as the single-level problem \begin{mini}% {\tb u \in \mathcal U,y \in \mathcal Y,g}{{c_\text{ctrl}}\left( \left[\begin{smallmatrix} y - y_{r} \\ u - u_{r} \end{smallmatrix} \right] \right) }{\label{eq:ARX-SMO}}{} \addConstraint{ \begin{bmatrix} {U_{\mathrm{p}}} \\ {Y_{\mathrm{p}}} \\{U_{\mathrm{f}}} \\ {Y_{\mathrm{f}}} \end{bmatrix} g = \begin{bmatrix} {u_{\textup{ini}}} \\ {y_{\textup{ini}}} \\ u \\y \end{bmatrix} } \addConstraint{ \| (I - \Pi) g \|_{p} = 0 } \,. \end{mini} We now apply Proposition \ref{Proposition: LipschitzPenalty}, lift the distance constraint $\| (I - \Pi) g \|_{p} = 0$ to the objective, and recover problem \eqref{eq:ARX-SMO-final} with $\lambda$ larger than the Lipschitz constant of ${c_\text{ctrl}}(\cdot)$. Hence, \eqref{eq:ARX-SMO-final} is equivalent to \eqref{eq:ARX-SMO} for $\lambda$ sufficiently large. Our final convex relaxation is to choose $\lambda$ small rather than large. Namely, from the viewpoint of the objective: it lowers the cost; or from the bi-level viewpoint: it turns the inner optimality constraint into a weaker sub-optimality constraint{\tb, i.e., we allow for solutions satisfying $\| (I - \Pi) g \|_{p} \geq 0$.} {\tb Conclusion $(i)$ now follows since \eqref{eq:ARX-SMO-final} is convex; $(ii)$ follows since we have only enlarged the feasible set when passing from \eqref{eq:OPT-BL-ARX} to \eqref{eq:ARX-SMO-final}; and $(iii)$ follows due to the enlarged feasible set, since the costs of \eqref{eq:OPT-BL-ARX} and \eqref{eq:ARX-SMO} coincide, and since \eqref{eq:ARX-SMO-final} is a relaxation of \eqref{eq:ARX-SMO} if $\lambda$ is not sufficiently large.} \end{IEEEproof} \begin{remark}[\tb Comments on Theorem~\ref{Theorem: SPC relaxation}] \label{rem: comments on spc theorem} First, we summarize the salient arguments to pass from indirect to direct data-driven control: we relaxed problem \eqref{eq:OPT-BL-ARX} by dropping causality 
(block-triangularity) and LTI complexity (rank) specifications, replaced the least-square criterion \eqref{eq:pem} by the equivalent least-norm formulation \eqref{eq:least-norm form}, and lifted the problem from bi-level to multi-criteria, where the least-square objective {\tb induces} the regularization $\|(I-\Pi)g\|_{p}$. For equivalence to the least-square objective, the proof requires $\lambda$ larger than the (global) Lipschitz constant of ${c_\text{ctrl}}(\cdot)$, similar to robustification-induced regularizations \cite{JC-JL-FD:19-CDC,JC-JL-FD:20}. If ${c_\text{ctrl}}(\cdot)$ is only locally Lipschitz, e.g., in the case of a quadratic cost, then choosing a finite (small) $\lambda$ is a relaxation that allows the predicted trajectory to not adhere to the least-square fit of the data. As we will see in Section~\ref{subsec: role of projection}, however, its effect is minor for $\lambda$ not overly small. {\tb Second, continuing on the magnitude of $\lambda$: For exact data and under consistency assumptions, \eqref{eq:ARX-SMO-final} achieves the exact minimizer for any $\lambda \geq 0$; see Fact~\ref{fact: consistency of projection}. When departing from these ideal assumptions, the least-square fit of the data is enforced only for $\lambda$ sufficiently large. Generally, $\lambda$ should be regarded as a tunable hyper-parameter chosen by the designer to control how much the predicted trajectory should adhere to the data (versus the control objective) and to ultimately improve the realized performance. The proof of Theorem~\ref{Theorem: SPC relaxation} suggests a sufficiently large value, which is also confirmed by our later empirical findings (see, e.g., Figure~\ref{fig:projected_ell2_regularization_more_data}).
} {\tb Third}, the regularization based on the projector $\|(I-\Pi)g\|_{p}$ differs from the standard $p$-norm regularizers $h(g) = \|g\|_{p}$ \cite{xue2020data,JC-JL-FD:19-CDC,JC-JL-FD:20} (or squared 2-norms $\|g\|_{2}^{2}$ \cite{LH-JZ-JL-FD:20,berberich2020data}). Actually, it is this projection which recovers the least-square criterion \eqref{eq:pem}. {\tb In contrast, norm-based regularizers $\|g\|_{p}$ are not consistent and bias the optimal solution $(u^{\star},y^{\star})$; see Remark~\ref{rem: consistency}. This is undesirable from an identification perspective: the regularizer should induce a least-square fit of the data. While for small values of $\lambda$ both regularizers have a similar effect, for sufficiently large $\lambda$ the {\em identification-induced regularizer} $\|(I-\Pi)g\|_{p}$ demonstrates a superior performance; see Figure~\ref{fig:projected_ell2_regularization_more_data} later.} {\tb Fourth}, our proof strategy reveals an entire class of regularizers. In fact, we can choose any $p$-norm $\|(I-\Pi)g\|_{p}$, use more general penalty functions such as the (squared) merit functions in \cite{ye1997exact}, or attack problem \eqref{eq:ARX-SMO} with other penalty or augmented Lagrangian methods. These degrees of freedom reflect the intuition that the Pareto-front of \eqref{eq:ARX-SMO-final} is invariant under certain (e.g., monotone) transformations of objectives such as taking squares; see \cite[Appendix A]{xu2010robust} for a formal reasoning. For our later simulations in Section~\ref{subsec: role of projection}, we choose the computationally attractive regularization $\|(I-\Pi)g\|_{2}^{2}$. {\tb Fifth and finally,} our proof arguments are obviously ``qualitative'' crossing out {\tb rank and causality constraints similar to most SPC implementations} and using non-quantifiable ``sufficiently large'' reasoning. Hence, the convex relaxation \eqref{eq:ARX-SMO-final} of \eqref{eq:OPT-BL-ARX} should not be expected to be tight. 
Nevertheless, the formulation \eqref{eq:ARX-SMO-final} (without projector) has proved itself {\tb in many case studies} and often outperforms \eqref{eq:OPT-BL-ARX}, as testified in \cite{LH-JZ-JL-FD:01,LH-JC-JL-FD:19,PC-AF-SB-FD:20,EE-JC-PB-JL-FD:19,LH-JZ-JL-FD:20}. Section~\ref{subsec: role of projection} will compare the different formulations. \oprocend \end{remark} \subsection{Bridging Towards Structured Low-Rank Approximation} \label{subsec: low-rank relaxation} We now present an entirely non-parametric problem formulation, namely a version of subspace identification based on structured low-rank approximation \cite{markovsky2016}, and we relate the resulting bi-level problem to direct data-driven control \eqref{eq:OPT-DR}. Given the model class ${\mathscr{L}}_{m,\ell}^{q,n}$, we project the identification data $w_{d} \in \real^{qT}$ onto $ \widehat{\mathscr{B}}_{{T_{\textup{ini}}}+L} \in {\mathscr{L}}_{m,\ell}^{q,n}$. By Lemma \ref{lemma: fundamental lemma}, the latter set is characterized by all trajectories $\hat w \in \real^{q({T_{\textup{ini}}}+L)}$ so that the associated Hankel matrix satisfies $\text{rank} \left({\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w) \right) \leq m({T_{\textup{ini}}}+L)+n$ for $({T_{\textup{ini}}}+L) > \ell$. An implicit assumption is, of course, $T \gg {T_{\textup{ini}}}+L$: the identification data is much longer than the estimation plus control prediction horizons. In the presence of noise, ${\mathscr{H}}_{{T_{\textup{ini}}}+L}( w_{d}) $ will not have low rank and has to be approximated by a low-rank matrix in an identification step. Thus, the identification problem \eqref{eq:ID} reads\,as% \begin{mini}% {\hat w_{d}}{{c_\text{id}}(\hat w_{d}-w_{d})}{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\label{eq:low-rank}} \addConstraint{ \text{rank} \left({\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d}) \right) {\leq} m({T_{\textup{ini}}}+L)+n.
}% \end{mini}% Problem \eqref{eq:low-rank} is to be read as a low-rank approximation problem: given the identification data arranged in a Hankel matrix ${\mathscr{H}}_{{T_{\textup{ini}}}+L}(w_{d})$, we seek the closest sequence $\hat w_{d}$ so that the Hankel matrix ${\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d})$ has rank {no more than} $m({T_{\textup{ini}}}+L)+n$. Since $\hat w_{d} \in \widehat {\mathscr{B}}_{T}$, we have $\text{rank} \left({\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d}) \right) {\leq} m({T_{\textup{ini}}}+L)+n$. Since also ${w_{\textup{ini}}} \in \widehat{\mathscr{B}}_{{T_{\textup{ini}}}}$ and $w \in \widehat{\mathscr{B}}_{L}$, we conclude \begin{equation*} \text{rank} \left(\left[{\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d}) \,~\, \text{col}({w_{\textup{ini}}},w) \right]\right) {\leq} m({T_{\textup{ini}}}+L)+n \,. \end{equation*} Assuming that $\text{rank} \left({\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d})\right) = m({T_{\textup{ini}}}+L)+n$, which is generically the case, ${\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d}) g = \text{col}({w_{\textup{ini}}},w) $ for some vector $g$. Hence, the bi-level problem \eqref{eq:OPT-BL} takes the form% \begin{mini} {w {\tb \in \mathcal W},g}{\!\!\!\!\!{c_\text{ctrl}}(w-w_{r}) }{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\label{eq:low-rank-2}} \addConstraint{\!\!\!\!\! \begin{bmatrix} {w_{\textup{ini}}}\\ w\end{bmatrix} = {\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d}^{\star})g } \addConstraint{\!\!\!\!\! \hat w_{d}^{\star} \in \argmin_{\hat w_{d}} \; {c_\text{id}}(\,\hat w_{d}-w_{d}) } \addConstraint{\!\!\!\!\! \st\, \textup{rank} ({\mathscr{H}}_{{T_{\textup{ini}}}+L}(\,\hat w_{d})) \!=\! m({T_{\textup{ini}}}\!+\!L)\!+\!n }.
\end{mini} \begin{theorem}[{\tb $\ell_{1}$-norm relaxation}] \label{Theorem: Low-rank relaxation} Consider the indirect data-driven control problem \eqref{eq:low-rank-2} and the direct data-driven control problem \eqref{eq:OPT-DR} for $h(g) = \|g\|_{1}$ and parameterized by $\lambda\geq0$. Let Assumptions~{\tb \ref{ass:ctr cost} and}~\ref{ass: cid} hold. For $\lambda$ sufficiently small, \eqref{eq:OPT-DR} is a convex relaxation of \eqref{eq:low-rank-2}, that is, \begin{enumerate} \item[$(i)$] \eqref{eq:OPT-DR} is convex, \item[$(ii)$] any feasible $(w,g)$ in \eqref{eq:low-rank-2} is also feasible for \eqref{eq:OPT-DR}, and \item[$(iii)$] the optimal value of \eqref{eq:OPT-DR} lower-bounds that of \eqref{eq:low-rank-2}. \end{enumerate} \end{theorem} \begin{IEEEproof} To prove the claim, one can resort to a proof strategy via the multi-criteria problem \eqref{eq:OPT-SMO}, as in the previous section. Instead, we present a more direct approach here. We start by massaging the rank constraint in \eqref{eq:low-rank-2}. First, since $ \text{rank} \left({\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d})\right) = m({T_{\textup{ini}}}+L)+n$, we may without loss of generality add the constraint $\|g\|_0 \leq n+ m({T_{\textup{ini}}}+L)$ to the outer problem, where $\|g\|_0$ denotes the cardinality (number of nonzero entries) of $g$. Second, we perform a convex relaxation and drop the rank constraint. Third, another convex relaxation (popular in LASSO problems \cite{hastie2015statistical}) is to replace $\|g\|_0 \leq n+ m({T_{\textup{ini}}}+L)$ by $\|g\|_{1} \leq \alpha$ for $\alpha>0$ sufficiently large. 
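As a side illustration of the first step, the cardinality bound holds because any vector in the range of a rank-$r$ matrix is a combination of at most $r$ of its columns. A minimal sketch, with a random rank-$r$ matrix standing in for the Hankel matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Low-rank stand-in for the Hankel matrix: rank r = 3, mimicking
# rank(H) = m(T_ini + L) + n for exact LTI data.
N, M, r = 8, 20, 3
H = rng.standard_normal((N, r)) @ rng.standard_normal((r, M))
b = H @ rng.standard_normal(M)            # b lies in range(H)

# Greedily pick r linearly independent columns of H.
keep = []
for j in range(M):
    if np.linalg.matrix_rank(H[:, keep + [j]]) > len(keep):
        keep.append(j)
    if len(keep) == r:
        break

# Sparse representation: b = H g with ||g||_0 <= r.
g = np.zeros(M)
g[keep] = np.linalg.lstsq(H[:, keep], b, rcond=None)[0]

assert np.allclose(H @ g, b)              # exact fit
assert np.count_nonzero(g) <= r           # cardinality bound holds
```

Hence restricting $\|g\|_{0}$ to the rank of the Hankel matrix loses no feasible trajectory, as claimed.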
As a result of these three steps, \eqref{eq:low-rank-2} is relaxed to \begin{mini} {w{\tb \in \mathcal W},g}{{c_\text{ctrl}}(w-w_{r}) }{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\label{eq:low-rank-3}} \addConstraint{ \begin{bmatrix} {w_{\textup{ini}}}\\ w\end{bmatrix} = {\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d}^{\star})g \,,\; \|g\|_1 \leq \alpha } \addConstraint{\!\where \; \hat w_{d}^{\star} \in \argmin_{\hat w_{d}} \; {c_\text{id}}(\,\hat w_{d}-w_{d}) }. \end{mini} Observe that under Assumption~\ref{ass: cid} the inner problem admits a trivial solution:\, $\hat w_{d}^{\star}=w_{d}$. Thus, \eqref{eq:low-rank-3} reduces to \begin{mini} { w{\tb \in \mathcal W},g}{{c_\text{ctrl}}( w-w_{r}) }{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\label{eq:low-rank-4}} \addConstraint{ \begin{bmatrix} {w_{\textup{ini}}}\\ w\end{bmatrix} = {\mathscr{H}}_{{T_{\textup{ini}}}+L}( w_{d})g \,,\; \|g\|_1 \leq \alpha }. \end{mini} Next, we lift the 1-norm constraint to the objective \begin{mini} { w{\tb \in \mathcal W},g}{{c_\text{ctrl}}( w-w_{r}) + \lambda \cdot \|g\|_{1} }{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\label{eq:low-rank-5}} \addConstraint{ \begin{bmatrix} {w_{\textup{ini}}}\\ w\end{bmatrix} = {\mathscr{H}}_{{T_{\textup{ini}}}+L}( w_{d})g }, \end{mini} where $\lambda\geq0$ is a scalar weight. In particular, for each value of $\alpha$ in \eqref{eq:low-rank-4}, there is $\lambda\geq0$ so that the solution of \eqref{eq:low-rank-5} coincides with \eqref{eq:low-rank-4}, and vice versa. These equivalences are standard in $\ell_{1}$-regularized problems and follow from strong duality (applicable since ${c_\text{ctrl}}(\cdot)$ is convex and Slater's condition holds) \cite{hastie2015statistical}. The precise value of $\lambda$ depends on the Lagrange multiplier of the constraint $\|g\|_1 \leq \alpha$ and thus on the data. 
In either case, there is a selection of parameters so that both problems are equivalent, and choosing $\lambda$ sufficiently small is a relaxation. Thus, we arrive at the direct data-driven control \eqref{eq:OPT-DR} for $\lambda$ sufficiently small and $h(g) = \|g\|_{1}$. {\tb Conclusion $(i)$ follows due to convexity of \eqref{eq:OPT-DR}; $(ii)$ follows since we have enlarged the feasible set when passing from \eqref{eq:low-rank-2} to \eqref{eq:OPT-DR}; and $(iii)$ follows due to the enlarged feasible set, since the costs of \eqref{eq:low-rank-2} and \eqref{eq:low-rank-4} coincide, and since \eqref{eq:OPT-DR} is a relaxation of \eqref{eq:low-rank-4} for $\lambda$ small.} \end{IEEEproof} In summary, to pass from indirect data-driven control \eqref{eq:low-rank-2} to direct data-driven control \eqref{eq:OPT-DR}, we performed a sequence of convex relaxations, effectively replacing the rank constraint of the system identification by an $\ell_{1}$-norm regularizer. Hence, the 1-norm regularizer accounts for selecting the model complexity. Similar remarks as those following Theorem~\ref{Theorem: SPC relaxation} on tightness of the relaxation apply to Theorem~\ref{Theorem: Low-rank relaxation}, too; {\tb see Remark \ref{rem: comments on spc theorem}.} \subsection{Hybrid relaxations} Theorems \ref{Theorem: SPC relaxation} and \ref{Theorem: Low-rank relaxation} reveal the roles of the two regularizers: $\|g\|_{1}$ controls the {model} complexity, whereas $\|(I-\Pi)g\|_{2}$ accounts for least-square fitting the data. To blend the two, consider a hybrid formulation of \eqref{eq:ARX-SMO-final} and \eqref{eq:low-rank-2} \begin{mini} {w{\tb \in \mathcal W},g}{\!\!\!\!\!{c_\text{ctrl}}(w-w_{r}) \,+\, \lambda_{1} \cdot \|(I-\Pi)g\|^{2}_{2}}{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\label{eq:low-rank-6}} \addConstraint{\!\!\!\!\!
\begin{bmatrix} {w_{\textup{ini}}}\\ w\end{bmatrix} = {\mathscr{H}}_{{T_{\textup{ini}}}+L}(\hat w_{d}^{\star})g } \addConstraint{\!\!\!\!\! \hat w_{d}^{\star} \in \argmin_{\hat w_{d}} \; {c_\text{id}}(\,\hat w_{d}-w_{d}) } \addConstraint{\!\!\!\!\! \st\, \textup{rank} ({\mathscr{H}}_{{T_{\textup{ini}}}+L}(\,\hat w_{d})) \!=\! m({T_{\textup{ini}}}\!+\!L)\!+\!n }, \end{mini} where $\lambda_{1} \geq 0$. Observe that this formulation is consistent: \begin{fact} Under Assumptions \ref{ass:L+n pe} with $L$ replaced by ${T_{\textup{ini}}}+L$, \ref{ass:ctr cost}, \ref{ass: Bp = Br}, and \ref{ass: Bp=Bid}, for any $\lambda_{1} \geq 0$ the minimum of \eqref{eq:low-rank-6} is achieved for $w^{\star} = w_{r}$ and $\|(I-\Pi)g^{\star}\|^{2}_{2}=0$. \end{fact} {\tb The arguments in the previous section then lead us to} \begin{mini} { w {\tb\in\mathcal W},g}{\!\!\!{c_\text{ctrl}}( w-w_{r}) \,+\, \lambda_{1} \cdot \|(I-\Pi)g\|^{2}_{2} \,+\, \lambda_{2} \cdot \|g\|_{1}}{\!\!\!\!\!}{\label{eq:low-rank-7}} \addConstraint{\!\!\! \begin{bmatrix} {w_{\textup{ini}}}\\ w\end{bmatrix} = {\mathscr{H}}_{{T_{\textup{ini}}}+L}( w_{d})g }, \end{mini} where $\lambda_{2} \geq 0$. We will validate the performance of the hybrid regularizer in Section~\ref{subsec: role of projection} below; see specifically Figure~\ref{fig:hybrid_regularization_more_data}. \subsection{Possible pitfalls of relaxations} \label{subsec: Pitfall of relaxations} Note that the two convex relaxation results in Theorems \ref{Theorem: SPC relaxation} and \ref{Theorem: Low-rank relaxation} are {\em trivially} true in the limit when $\lambda = 0$. In fact, even the abstract multi-criteria formulation \eqref{eq:OPT-SMO} can be related to a relaxation of the abstract bi-level problem \eqref{eq:OPT-BL} in the limit $\gamma = 0$.
Namely, for $\gamma = 0$, \eqref{eq:OPT-SMO} reduces to% \begin{mini}% { w,\hat w_{d},\widehat{\mathscr{B}}}{{c_\text{ctrl}}( w-w_{r}) }{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}{\label{eq:OPT-SMO-R}} \addConstraint{ {{w_{\textup{ini}}} \wedge w \in \widehat{\mathscr{B}}_{{T_{\textup{ini}}}+L}}\,,\; \hat w_{d} \in \widehat{\mathscr{B}}_{T}\,,\; \widehat{\mathscr{B}} \in {\mathscr{L}}_{m,\ell}^{q,n} \,. } \end{mini}% {\tb The variable $\hat w_{d}$ and the constraint $\hat w_{d} \in \widehat{\mathscr{B}}_{T}$ can be removed, and \eqref{eq:OPT-SMO-R} amounts to matching the model $\widehat{\mathscr{B}}$ to the reference $w_{r}$. The next result is followed by a discussion on regularizers:} \begin{corollary}\label{Corollary: trivial relaxation} Consider the indirect data-driven control \eqref{eq:OPT-BL} and multi-criteria problem \eqref{eq:OPT-SMO-R} in the limit $\gamma = 0$, {\tb and let Assumption~\ref{ass:ctr cost} hold.} Then problem \eqref{eq:OPT-SMO-R} is a relaxation of problem \eqref{eq:OPT-BL}, that is, \begin{enumerate} \item[$(i)$] any feasible $( w,\hat w_{d},\hat {\mathscr{B}})$ in \eqref{eq:OPT-BL} is also feasible for \eqref{eq:OPT-SMO-R}, \item[$(ii)$] and the optimal value of \eqref{eq:OPT-SMO-R} lower-bounds that of \eqref{eq:OPT-BL}. \end{enumerate} \end{corollary} \begin{IEEEproof} Consider the equivalent formulation \eqref{eq:OPT-BL-value} of \eqref{eq:OPT-BL}, and note that \eqref{eq:OPT-SMO-R} equals \eqref{eq:OPT-BL-value} when the inner optimality constraint ${c_\text{id}}(\hat w_{d}-w_{d}) - \varphi = 0$ is dropped. {\tb The conclusions now follow analogously as in Theorems \ref{Theorem: SPC relaxation} and \ref{Theorem: Low-rank relaxation}.} \end{IEEEproof} Analogous corollaries can be stated for Theorems \ref{Theorem: SPC relaxation} and \ref{Theorem: Low-rank relaxation} for $\lambda = 0$. 
Given such results, one may wonder whether Theorems \ref{Theorem: SPC relaxation} and \ref{Theorem: Low-rank relaxation} are vacuous since they are trivially true for $\lambda = 0$. We offer several answers. First, the limit $\lambda =0$ clearly leads to a better solution $w^{\star}$ (i.e., a lower surrogate tracking error) for the {\em open-loop} optimal control problem. However, this solution merely matches the reference $w_{r}$ and does not adhere to the identification data $w_{d}$ in the sense of meeting any fitting criterion. Hence, the optimal solution $w^{\star}$ may not be a trajectory of the true system behavior, and the actual {\em realized} control performance can be arbitrarily poor. Obviously, such a situation is not desirable, and one may want to regularize with a small but non-zero $\lambda$ -- an observation consistent with \cite{JC-JL-FD:18,JC-JL-FD:19-CDC,JC-JL-FD:20,xue2020data,LH-JZ-JL-FD:20,LH-JZ-JL-FD:01} albeit derived from a different perspective. Second, Theorems \ref{Theorem: SPC relaxation} and \ref{Theorem: Low-rank relaxation} require $\lambda$ to be sufficiently small, but not zero. According to the proofs, depending on Lipschitz constants and multipliers of the respective problems, there is a smallest value for $\lambda$ so that the behavior $\widehat{\mathscr{B}}$ matches (in the ${c_\text{id}}(\cdot)$ fitting criterion) the plant behavior ${\mathscr{B}}^{P}$. In \cite{JC-JL-FD:18,JC-JL-FD:19-CDC,JC-JL-FD:20,xue2020data,LH-JZ-JL-FD:20,LH-JZ-JL-FD:01} the coefficient $\lambda$ relates to a desired robustness level. In either case, $\lambda$ can hardly be quantified a priori and without cross-validation; {\tb see also Remark~\ref{rem: comments on spc theorem}.} We follow up on this set of questions in the next section. 
\section{Numerical Analysis and Comparisons} \label{subsec: numerical analysis} We now numerically investigate the effect of the hyper-parameter $\lambda$, confirm the superiority of the regularizer $h(g) = \|(I-\Pi)g\|_{2}^{2}$, and compare direct and indirect approaches. \subsection{Choice of Regularization Parameter} \label{subsec: choice of regularization} We first study the parameter $\lambda$ regularizing direct data-driven control \eqref{eq:OPT-DR}. Consider the benchmark single-input, single-output, 5th order, linear time-invariant system \cite{ddctr-benchmark}. Denoting the $t$-th element of the concatenated input and output by ${w}(t)=({u}(t),{y}(t))$, the control cost was chosen as $c_\textup{ctrl}({w}-w_r) = ({w}-w_r)^{\top}W({w}-w_r)$ with reference $w_r(t) = (u_r(t),y_r(t)) = (0,\sin(2\pi t/(L-1)))$ for $t\in\{0,1,\dots,L-1\}$, prediction horizon $L=20$, $W=I_L\otimes\textup{diag}(0.01,2000)$, where $I_L$ is the $L \times L$ identity, and $\otimes$ denotes the Kronecker product. {\tb In this entire section, we disregard constraints, i.e., $\mathcal W \equiv \real^{qL}$.} We used a 1-norm regularizer $h(g) = \|g\|_{1}$ in~\eqref{eq:OPT-DR} and a prefix-trajectory of length ${T_{\textup{ini}}}=5$ (see Section~\ref{subsec: estimation}). We collected one noise-free input/output time series of length {\tb$T=250$} by applying a random Gaussian input. From this noise-free data set, 100 independent noisy data sets were constructed by adding Gaussian noise with a noise-to-signal ratio of 5\%. For each data set and each value of $\lambda\in(0,10^3)$, optimal control inputs were computed from~\eqref{eq:OPT-DR}. We define the {\em predicted} error as $c_{\textup{ctrl}}(w^{\star}-w_r)$, where $w^{\star}$ is an optimizer of \eqref{eq:OPT-DR}. We define the {\em realized} error as $c_{\textup{ctrl}}(w_{\textup{true}}-w_r)$, where $w_{\textup{true}}$ is the realized trajectory of the system after applying the computed optimal inputs. 
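For concreteness, the weighting matrix and reference above can be assembled directly from the stated parameters. The sketch below does so; the interleaved ordering of the $(u(t),y(t))$ pairs is our assumption to match the Kronecker-structured weight:

```python
import numpy as np

L = 20                                              # prediction horizon
W = np.kron(np.eye(L), np.diag([0.01, 2000.0]))     # weight from the case study

t = np.arange(L)
u_r = np.zeros(L)
y_r = np.sin(2 * np.pi * t / (L - 1))
# Interleave (u(t), y(t)) pairs to align with the block-diagonal weight.
w_r = np.column_stack([u_r, y_r]).ravel()

def c_ctrl(w, w_r=w_r, W=W):
    """Quadratic control cost (w - w_r)^T W (w - w_r)."""
    e = w - w_r
    return float(e @ W @ e)

assert W.shape == (2 * L, 2 * L)
assert c_ctrl(w_r) == 0.0                  # cost vanishes at the reference
assert c_ctrl(np.zeros(2 * L)) > 0.0       # output errors weighted by 2000
```

The heavy output weight (2000 versus 0.01 on the input) makes the cost dominated by the tracking error of the sinusoidal output reference.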
The predicted and realized errors were converted to a percentage increase in error with respect to the ground-truth optimal performance (i.e., if the deterministic system was exactly known), and were averaged over the 100 independent data sets. The results are plotted in Figure~\ref{fig:error_vs_epsilon}. It is apparent that choosing $\lambda$ too small leads to an optimistic predicted error but very poor realized performance. Furthermore, the performance is poor for large values of $\lambda$ indicating that the regularization parameter should be chosen carefully (though a wide range delivers equally good results). These observations are consistent with those in~\cite{JC-JL-FD:18,xue2020data,JC-JL-FD:19-CDC,JC-JL-FD:20,LH-JZ-JL-FD:20,LH-JZ-JL-FD:01} and the hypotheses discussed at the end of Section~\ref{subsec: Pitfall of relaxations}. \begin{figure}[b] \centering \includegraphics[width=\columnwidth]{img/28-Aug-2021-1147_comparison_open_vs_closed_loop_nonzeroic-eps-converted-to.pdf} \caption{Predicted and realized errors {\tb (relative to the ground-truth optimal performance and averaged over 100 data sets) with 1-norm regularizer $\lambda \|g\|_{1}$.}} \label{fig:error_vs_epsilon} \end{figure} \subsection{Role of Projection in Two-Norm Regularization} \label{subsec: role of projection} Theorem~\ref{Theorem: SPC relaxation} suggests that the identification-induced regularizer $h(g) = \|(I-\Pi)g\|_{2}^{2}$ is superior to a two-norm regularizer $h(g) = \|g\|_{2}^{2}$ if one is interested in consistency and the predicted trajectory adhering to a least-square fit of the data. To test this hypothesis, we consider the same case study from Section~\ref{subsec: choice of regularization} and report the averaged cost in Figure~\ref{fig:projected_ell2_regularization_more_data}. 
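The consistency argument behind this comparison can be distilled into a small linear-algebra sketch (random matrices standing in for the data blocks, a ridge problem standing in for the regularized control problem): a squared 2-norm penalty biases the solution away from the exact data fit, whereas the identification-induced penalty vanishes on the least-norm solution for every $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 12))
b = A @ rng.standard_normal(12)           # exact (noise-free) data
Pi = np.linalg.pinv(A) @ A                # projector onto row space of A
lam = 10.0

# 2-norm regularization: the ridge solution is biased away from A g = b.
g_ridge = np.linalg.solve(A.T @ A + lam * np.eye(12), A.T @ b)
assert np.linalg.norm(A @ g_ridge - b) > 1e-3

# Identification-induced regularization: the least-norm solution satisfies
# (I - Pi) g = 0, so the penalty vanishes for every lam and the exact
# fit A g = b is retained (consistency).
g_proj = np.linalg.pinv(A) @ b
penalty = lam * np.linalg.norm((np.eye(12) - Pi) @ g_proj) ** 2
assert np.allclose(A @ g_proj, b)
assert penalty < 1e-12
```

This mirrors the observed behavior: the projection-based penalty leaves the least-square-consistent solution untouched even for large $\lambda$, while the plain 2-norm penalty shrinks it and degrades the fit.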
\begin{figure}[h] \centering \includegraphics[width=\columnwidth]{img/26-Aug-2021_ell2_vs_ell2projected_more_data-eps-converted-to.pdf} \caption{Comparison of the realized performance {\tb (relative to the ground-truth optimal performance and averaged over 100 data sets)} for the two-norm $\|g\|_{2}^{2}$ and identification-induced regularization $\|(I-\Pi)g\|_{2}^{2}$ as function of $\lambda$.} \label{fig:projected_ell2_regularization_more_data} \end{figure} Both regularizers perform similarly for small $\lambda$, but the identification-induced regularizer shows a superior and surprisingly constant performance for sufficiently large $\lambda$. By the proof of Theorem~\ref{Theorem: SPC relaxation}, for $\lambda$ sufficiently large, the direct and indirect problems \eqref{eq:ARX-SMO-final} and \eqref{eq:OPT-BL-ARX} are equivalent -- up to causality and complexity constraints. Thus, a sufficiently large $\lambda$ forces the least-square fit \eqref{eq:pem} and results in excellent performance independent of the specific value of $\lambda$. While there is a small window where the two-norm excels, the identification-induced regularizer shows overall much more robust performance. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{img/29-Aug-2021-0855_comparison_ell2_vs_ell2projected_vs_ell2mixed_nonzeroic-eps-converted-to.pdf} \caption{Realized error {\tb (relative to the ground-truth optimal performance and averaged over 100 data sets)} for a hybrid regularizer $\lambda_{1} \|(I-\Pi)g\|_{2}^{2} + \lambda_{2} \|g\|_{1}$} \label{fig:hybrid_regularization_more_data} \end{figure} Next we study the merits of hybrid regularization \eqref{eq:low-rank-7}. For the same case study Figure~\ref{fig:hybrid_regularization_more_data} shows the averaged realized performance plotted over the regularization parameters. The $\{\lambda_{1}=0\}$ and $\{\lambda_{2}=0\}$ slices recover Figures \ref{fig:error_vs_epsilon} and \ref{fig:projected_ell2_regularization_more_data}. 
As before, the regularizer $ \|(I-\Pi)g\|_{2}^{2}$ is more robust though a hybrid regularizer yields a minor albeit robust improvement. {\tb A closer examination of the data underlying Figure~\ref{fig:hybrid_regularization_more_data} reveals that a hybrid regularization can improve up to 15\% over the best results achievable with the regularizer $ \|(I-\Pi)g\|_{2}^{2}$ only.} {\tb \subsection{Effect of data length} We continue with the same case study and discuss the effect of data-length on direct and indirect methods. For the direct method, we used the identification-induced regularizer $h(g) = \|(I-\Pi)g\|_2^2$ with sufficiently large weight $\lambda = 10000$, as indicated in Figure \ref{fig:projected_ell2_regularization_more_data}. For the indirect method, the inner system identification problem~\eqref{eq:ID} is solved using the subspace approach N4SID~\cite{van1994n4sid} with prefix horizon ${T_{\textup{ini}}}=5$, prediction horizon $L=20$, and (correct) model-order selection $n=5$. For our case study Lemma~\ref{lemma: fundamental lemma} demands at least $T = 59$ data points. Figure~\ref{fig:performance_vs_numdata} below shows the beneficial effects of including more data on the realized {\em median} performance of the direct and indirect methods. The main findings are as follows: First, both methods are asymptotically consistent. Second, the indirect method is superior in the low data regime echoing that models are compressed and de-noised representations, see Remark~\ref{rem: models vs. data}. Third and finally, when an incorrect model-order $n=6$ is selected for the indirect method (resulting in an over-parameterization and thus a bias), then consistency is lost, and the direct method is superior. This effect is even more pronounced when studying the average (as opposed to the median) error due to several outliers of the indirect method. This third point hints at a bias-variance trade-off between the direct and indirect methods, which will be studied below. 
\begin{figure}[h] \centering \includegraphics[width=\columnwidth]{img/29-Aug-2021-0858_comparison_yalmip_number_of_data_influence-eps-converted-to.pdf} \caption{Realized median error (over 100 data sets) for the direct and indirect (with different model order selections) methods for varying amount of data.} \label{fig:performance_vs_numdata} \end{figure} } \subsection{Comparison and Bias-Variance Hypotheses} We now compare the direct and indirect approaches through two case studies. The first study evaluates the performance of both methods on the basis of ``variance'' error, i.e., on a linear system with noisy measurements. The second study evaluates the performance on the basis of ``bias'' error, i.e., on a nonlinear system with noise-free measurements. We expect the direct method to perform better on the nonlinear system since the indirect method erroneously selects a linear model class thus leading to a larger ``bias'' error. On the other hand, we expect the indirect method to perform better on the linear system with noisy outputs since the identification step filters noise thus leading to a lower ``variance'' error. \subsection*{Comparison: Stochastic Linear System} Consider the same case study as in the Section~\ref{subsec: choice of regularization}, i.e., same LTI system, cost, and reference. We collected data for varying levels of noise-to-signal ratio, i.e., we considered measurements that were affected by Gaussian noise with noise-to-signal ratio in the set $\{0\%,1\%,\dots,15\%\}$. For each noise-to-signal ratio, {\tb $T=250$} input/output data samples were collected by applying a random Gaussian input. This data was then used for both the direct and indirect methods. For the indirect method, the inner system identification problem~\eqref{eq:ID} is {\tb again} solved using N4SID~\cite{van1994n4sid} with prefix horizon ${T_{\textup{ini}}}=5$ and prediction horizon $L=20$. 
Equipped with a (correct) 5th-order identified model, optimal control inputs are computed by solving~\eqref{eq:OPT-CE}. The indirect method was compared to the direct method~\eqref{eq:OPT-DR}, with $h(g) = \|g\|_{1}$, ${T_{\textup{ini}}}=5$, and $\lambda=27$. The hyper-parameters of both methods were kept constant for all simulations below and chosen to give good realized control performance for all noise-to-signal ratios. For both methods we recorded the realized performance after applying the open-loop inputs and converted it to a percentage error with respect to the best possible performance (i.e., if the deterministic model was exactly known). For each noise-to-signal ratio, 100 simulations were conducted with different random data sets. The results are displayed in the box plot in Figure~\ref{fig:boxplot_noisy} and show that both methods perform well for low levels of noise (up to approximately $2\%$ noise-to-signal ratio). As the data becomes noisier, the performance of the direct method degrades significantly, while the performance of the indirect method remains relatively constant. We remark that a slightly better albeit qualitatively similar result is obtained with the regularizer $ \|(I-\Pi)g\|_{2}^{2}$. We attribute these observations to the fact that identification de-noises the data. These results confirm our hypothesis that the indirect method is superior in terms of ``variance'' error. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{img/27-Aug-2021-0952_comparison_yalmip_moredata_nonzeroic-eps-converted-to.pdf} \caption{Comparison of direct and indirect methods for varying noise.} \label{fig:boxplot_noisy} \end{figure} \subsection*{Comparison: Deterministic Nonlinear System} We now consider the scenario where the direct and indirect methods are subject to a ``bias'' error, but not a ``variance'' error. 
Consider the discrete-time nonlinear Lotka-Volterra dynamics considered for direct data-driven control in~\cite{kaiser2018sparse} \[ \begin{aligned} x(t_{k+1}) &= f_{\textup{nonlinear}}(x(t_k),u(t_k)) \\ &=\left[\begin{smallmatrix} x_1(t_k) + \Delta t(ax_1(t_k) - bx_1(t_k)x_2(t_k)) \\ x_2(t_k) + \Delta t(dx_1(t_k)x_2(t_k) -cx_2(t_k) + u(t_k)) \end{smallmatrix}\right]\,, \end{aligned} \] where $t_{k+1} - t_k = \Delta t = 0.01$, $a=c=0.5$, $b=0.025$, $d=0.005$, and $x(t_k) = \begin{bmatrix} x_1(t_k) & x_2(t_k)\end{bmatrix}^{\top}$. Here, $(x_1(t_k),x_2(t_k))$ denote prey and predator populations, and $u(t_k)$ is the input. A linearization about the equilibrium $(\bar{u},\bar{x}_1,\bar{x}_2)=(0,c/d,a/b)$ yields the affine {\tb linear} system \[ \begin{aligned} &x(t_{k+1}) = f_{\textup{linear}}(x(t_k),u(t_k),\bar{x}_1,\bar{x}_2) \\ &= \left[\begin{smallmatrix} x_1(t_k) + \Delta t\left((a-b\bar{x}_2)(x_1(t_k)-\bar{x}_1) - b\bar{x}_1(x_2(t_k)-\bar{x}_2)\right) \\ x_2(t_k) + \Delta t\left(d\bar{x}_2(x_1(t_k)-\bar{x}_1) +(d\bar{x}_1 -c)(x_2(t_k)-\bar{x}_2) + u(t_k)\right) \end{smallmatrix}\right]\,. \end{aligned} \] {\tb We expect direct data-driven control \eqref{eq:OPT-DR} to perform well on such a nonlinear system for two reasons: $(i)$ nonlinear systems can be well approximated by LTI systems of sufficiently high complexity; and $(ii)$ the direct method \eqref{eq:OPT-DR} does not specify the LTI system complexity (e.g., by enforcing rank constraints).} We compare the direct and indirect methods for varying degree of nonlinearity by interpolating between $f_{\textup{nonlinear}}$ and $f_{\textup{linear}}$, i.e., we study the interpolated system \begin{equation} \begin{aligned} x(t_{k+1}) &= \epsilon \cdot f_{\textup{linear}}(x(t_k),u(t_k),\bar{x}_1,\bar{x}_2)\\ &\quad + (1-\epsilon) \cdot f_{\textup{nonlinear}}(x(t_k),u(t_k)) \end{aligned} \label{eq:interpolated_dynamics} \end{equation} for $\epsilon\in [0,1]$. For $\epsilon=1$ (resp. 
$\epsilon=0$), the dynamics are purely affine (resp. nonlinear). For each $\epsilon\in\{0,0.1,\dots,1\}$, $T=2415$ data points were collected by applying a noisy sinusoidal input $u(t_k) = 2(\sin(t_k)+\sin(0.1t_k))^2 + v(t_k)$ with $v(t_k)$ sampled from a Gaussian random variable. Full state measurement was assumed. The data collection was repeated for 100 different initial conditions. For each degree of nonlinearity $\epsilon\in\{0,0.1,\dots,1\}$ and each initial condition, the data was used to compute optimal open-loop control inputs using direct and indirect methods. The control cost was chosen as $c_\textup{ctrl}({w}-w_r) = \| {w}-w_r\|_{2}^{2}$ with equilibrium reference $w_r = (0,100,20,\dots,0,100,20)$, $L=600$, and $w=(u,x)$. For the indirect method, the inner system identification optimization problem given by~\eqref{eq:ID} is solved using the subspace approach N4SID~\cite{van1994n4sid} with initial condition horizon ${T_{\textup{ini}}}=4$ and prediction horizon $L=600$. A model order of 4 was chosen, as it produced the best performance as measured by the realized control cost. Optimal control inputs were then computed by solving~\eqref{eq:OPT-CE}. For comparison, we chose the direct method~\eqref{eq:OPT-DR} with $h(g) = \|g\|_{1}$, ${T_{\textup{ini}}}=4$, and $\lambda=8000$. The performance was measured with the realized control cost after applying the open-loop inputs to system~\eqref{eq:interpolated_dynamics}.
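The interpolated dynamics \eqref{eq:interpolated_dynamics} are straightforward to reproduce from the stated parameters. The sketch below checks that the equilibrium $(\bar u,\bar x_1,\bar x_2)=(0,c/d,a/b)=(0,100,20)$, which underlies the reference $w_r$, is a fixed point for every interpolation parameter $\epsilon$:

```python
import numpy as np

dt, a, b, c, d = 0.01, 0.5, 0.025, 0.5, 0.005
x1b, x2b = c / d, a / b                    # equilibrium (100, 20), u_bar = 0

def f_nonlinear(x, u):
    """Discrete-time Lotka-Volterra step."""
    x1, x2 = x
    return np.array([x1 + dt * (a * x1 - b * x1 * x2),
                     x2 + dt * (d * x1 * x2 - c * x2 + u)])

def f_linear(x, u):
    """Affine linearization about the equilibrium."""
    x1, x2 = x
    return np.array([
        x1 + dt * ((a - b * x2b) * (x1 - x1b) - b * x1b * (x2 - x2b)),
        x2 + dt * (d * x2b * (x1 - x1b) + (d * x1b - c) * (x2 - x2b) + u)])

def f_interp(x, u, eps):
    """Interpolated dynamics: eps = 1 purely affine, eps = 0 nonlinear."""
    return eps * f_linear(x, u) + (1 - eps) * f_nonlinear(x, u)

x_eq = np.array([x1b, x2b])
for eps in np.linspace(0.0, 1.0, 11):
    assert np.allclose(f_interp(x_eq, 0.0, eps), x_eq)   # fixed point for all eps
```

Since both $f_{\textup{nonlinear}}$ and $f_{\textup{linear}}$ share this fixed point, the interpolation varies only the degree of nonlinearity away from the equilibrium, not the operating point itself.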
As before, the hyper-parameters of both direct/indirect methods were judiciously chosen and kept constant for all simulations.% \begin{figure}[tb] \centering \includegraphics[width=\columnwidth,trim=2cm 0cm 2cm 0cm,clip=true]{img/30-Nov-2020_boxplot_nonlinear_deterministic-eps-converted-to.pdf} \caption{Comparison of direct and indirect methods for varying nonlinearity.} \label{fig:boxplot_nonlinear} \end{figure} The results displayed in Figure~\ref{fig:boxplot_nonlinear} show that both methods perform well for low levels of nonlinearity: $\epsilon \in [0.7, 1]$. As the system becomes increasingly nonlinear, the performance of the indirect method degrades significantly, while the performance of the direct method remains relatively constant. We attribute this observation to the fact that the indirect method incurs a ``bias'' error from selecting a linear model class and applying certainty-equivalence control, while the direct method uses data from the nonlinear system without bias. {\tb These findings confirm our earlier bias-variance observations from Figure~\ref{fig:performance_vs_numdata}.} \section{Discussion and Conclusions} \label{sec: conclusions} We studied the relationship between indirect and direct data-driven control formulated as bi-level (first-identify, then control) and single-level regularized (based on the Fundamental Lemma) optimization problems, respectively. An intermediate multi-criteria problem allowed us to efficiently transition between both formulations. We concluded that the regularized direct approach can be viewed as a convex relaxation of the indirect approach, where the choice of regularizer depended on the problem formulation and accounted for an implicit identification step. We also discovered a novel regularizer that is consistent and accounts for least-squares identification.
Our results suggested the use of the indirect method in case of ``variance'' errors and the use of the direct method in presence of ``bias'' errors (e.g., a nonlinear system {\tb or when selecting a wrong model order}). These insights {\tb echo the bias-variance trade-offs previously encountered for direct and indirect methods in \cite{campestrini2017data,krishnan2021direct}, and they} shed some partial light on the remarkable {\tb empirical performance of (direct)} data-enabled predictive control applied to nonlinear systems. As a limitation, our results concern only the open-loop predictive control problem, though we ultimately care about the realized performance, especially in a receding horizon closed-loop implementation. {\tb Some preliminary results on the realized performance of regularized control formulations were obtained in \cite{LH-JZ-JL-FD:01} through the lens of robust optimization, but the topic remains largely open.} Moreover, we believe that the proposed multi-criteria data-driven control formulation is important in its own right and may deliver excellent performance if one were to find a convex formulation and appropriate trade-off parameter. Both of these are formidable tasks for future work. Finally, we believe that our approach is also applicable to other identification and control formulations and may deliver interesting and novel direct data-driven control formulations. \section*{Acknowledgements} The authors acknowledge their colleagues at ETH Z\"urich, in particular Miguel Picallo Cruz, for fruitful discussions.
Q: NopCommerce, calling Nop.Web controller methods in Nop.Plugins projects Trying to achieve: I am writing a restful plugin that would allow searching products from external associated system (website). And allow them to show our products on their website. Instead of writing everything from scratch, I want to use what is already written in NopCommerce. I have managed to search products, however I am not able to get product images, by using something that already is there. Inside CatalogController, I see SearchTermAutoComplete ActionResult, it does what I am looking for. But I am not able to call it from my plugin code. I don't want to copy controller files/code into plugin project. Because, then I will have to copy all the extension objects and other dependencies as well. Is there a way in MVC/NopCommerce by which I access controller code and call them from a plugin? Plugin code: public ActionResult Search(String authToken, string keywords) { if (!IsAuthTokenValid(authToken, out _tokenStatus)) return InvalidAuthToken(_tokenStatus); var _products = _productService.SearchProducts(0, int.MaxValue, null, 0, 0, 0, 0, null, false, false, false, null, null, 0, keywords, true, true, true); if (_products == null || _products.Count <= 0) return ErrorOccured("No product was found"); else return Successful(_productHelper.GetProductsJson(_products, _productPictures)); } P.S. newbie MVC/NopCommerce UPDATE: I tried to load controller and then call the method, see code below. However, I ended up getting an error in BaseController's RenderPartialViewToString method. Code: var _shoppingCartController = DependencyResolver.Current.GetService<ShoppingCartController>(); return _shoppingCartController.AddProductToCart_Catalog(productId, 1, quantity); Error upon execution: An exception of type 'System.ArgumentNullException' occurred in System.Web.Mvc.dll but was not handled in user code Additional information: Value cannot be null. 
Stack Trace: at System.Web.Mvc.ViewEngineCollection.FindPartialView(ControllerContext controllerContext, String partialViewName) at Nop.Web.Framework.Controllers.BaseController.RenderPartialViewToString(String viewName, Object model) in d:\NopCommerce\Presentation\Nop.Web.Framework\Controllers\BaseController.cs:line 67 at Nop.Web.Controllers.ShoppingCartController.AddProductToCart_Catalog(Int32 productId, Int32 shoppingCartTypeId, Int32 quantity, Boolean forceredirection) in d:\NopCommerce\Presentation\Nop.Web\Controllers\ShoppingCartController.cs:line 1547 at Nop.Plugin.RestService.Controllers.ApiController.AddProductToCart(String authToken, Int32 productId, Int32 quantity) at lambda_method(Closure , ControllerBase , Object[] ) at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) at System.Web.Mvc.Async.AsyncControllerActionInvoker.ActionInvocation.InvokeSynchronousActionMethod() at System.Web.Mvc.Async.AsyncControllerActionInvoker.<BeginInvokeSynchronousActionMethod>b__39(IAsyncResult asyncResult, ActionInvocation innerInvokeState) at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`2.CallEndDelegate(IAsyncResult asyncResult) at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End() at System.Web.Mvc.Async.AsyncResultWrapper.End[TResult](IAsyncResult asyncResult, Object tag) at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeActionMethod(IAsyncResult asyncResult) at System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.<InvokeActionMethodFilterAsynchronouslyRecursive>b__3d() at 
System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.<>c__DisplayClass46.<InvokeActionMethodFilterAsynchronouslyRecursive>b__3f()
E-Commerce • Press Release E-3 Magazine Are Your Business Critical Applications Secure? According to a new Cyberark survey, the majority of organizations (nearly 70 percent) do not prioritize the protection of the applications that their businesses depend on – such as ERP and CRM systems – any differently from how low-value data, applications or services are secured. The independent Cyberark survey interviewed 1,450 business and IT decision makers, primarily from Western European economies. Respondents indicated that even the slightest downtime affecting business critical applications would be massively disruptive, with 61 percent agreeing that the impact would be severe. Breaches affecting applications that are the lifeblood of a business can result in punitive costs, with a 2018 report estimating the average cost of an attack on an ERP system at $5.5 million USD. The threat actors that enterprises face are formidable. Organized crime was behind 50 percent of all breaches in 2018, with attackers using established tactics like privilege abuse to achieve their aims. 56 percent of organizations have experienced data loss or service disruptions affecting business critical applications in the previous two years. However, the survey found 72 percent of respondents are confident that their organization can stop data security attacks or breaches. Cyberark highlights disconnect between security and business This brings to light a remarkable disconnect between where security strategy is focused and the business value of what is most important to the organization. An attacker targeting administrative privileges for these applications could cause significant disruption and could even halt business operations.
The survey also found that 74 percent of organizations indicated they have moved business critical applications to the cloud, or will do so in the next two years. A risk-prioritized approach to protecting these assets is necessary for companies to manage this transition successfully. Further industry data shows that, globally, 69 percent of organizations are migrating data for popular ERP applications to the cloud. "From banking systems and R&D to customer service and supply chain, all businesses in all verticals run on critical applications. Accessing and disrupting these applications is a primary goal for attackers. This is due to their day-to-day operational importance and the wealth of information that resides in them, whether they are on-premises or in the cloud," said David Higgins, CyberArk. "CISOs must take a prioritized, risk-based approach that applies the most rigorous protection to these applications, securing in particular privileged access to them and assuring that, regardless of what attacks penetrate the perimeter, they continue to run uncompromised." Cyberark (external) Articles published through E-3 Magazine International. This includes press releases by our partners as well as articles and reports from the E-3 team of journalists.
SpecBegin(WLXBluetoothDeviceRegistry) __block id<WLXBluetoothDeviceRepository> mockRepository; __block NSNotificationCenter * notificationCenter; __block CBCentralManager * mockCentralManager; __block WLXBluetoothDeviceRegistry * registry; __block CBPeripheral * mockPeripheral; __block NSDate * mockDate; __block WLXBluetoothDeviceConnectionRecord * record; beforeEach(^{ mockPeripheral = mock([CBPeripheral class]); mockRepository = mockProtocol(@protocol(WLXBluetoothDeviceRepository)); notificationCenter = [NSNotificationCenter defaultCenter]; mockCentralManager = mock([CBCentralManager class]); NSDateFormatter * formatter = [[NSDateFormatter alloc] init]; [formatter setLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"es_AR"]]; [formatter setDateFormat:@"yyyy-MM-dd HH:mm:ss Z"]; mockDate = [formatter dateFromString:@"2012-03-01 22:00:00 GMT-03:00"]; id<WLXDateProvider> dateProvider = [[WLXFakeDateProvider alloc] initWithDate:mockDate]; [WLXBluetoothDeviceConnectionRecord setDateProvider:dateProvider]; registry = [[WLXBluetoothDeviceRegistry alloc] initWithRepository:mockRepository notificationCenter:notificationCenter centralManager:mockCentralManager]; NSUUID * UUID = [[NSUUID alloc] initWithUUIDString:@"68753A44-4D6F-1226-9C60-0050E4C00067"]; [MKTGiven([mockPeripheral name]) willReturn:@"Mock Peripheral"]; [MKTGiven([mockPeripheral identifier]) willReturn:UUID]; [MKTGiven([NSDate date]) willReturn:mockDate]; record = [WLXBluetoothDeviceConnectionRecord recordWithPeripheral:mockPeripheral]; [MKTGiven([mockCentralManager retrievePeripheralsWithIdentifiers:@[UUID]]) willReturn:@[mockPeripheral]]; }); afterEach(^{ mockCentralManager = nil; mockDate = nil; mockPeripheral = nil; mockRepository = nil; notificationCenter = nil; registry = nil; }); describe(@"#enabled", ^{ context(@"when the registry is enabled", ^{ beforeEach(^{ registry.enabled = YES; }); context(@"when a new connection is established", ^{ beforeEach(^{ NSDictionary * userInfo = @{ 
WLXBluetoothDevicePeripheral : mockPeripheral }; [notificationCenter postNotificationName:WLXBluetoothDeviceConnectionEstablished object:nil userInfo:userInfo]; }); it(@"saves the connected peripheral into the repository", ^{ MKTArgumentCaptor * connectionRecordCaptor = [[MKTArgumentCaptor alloc] init]; [MKTVerify(mockRepository) saveConnectionRecord:connectionRecordCaptor.capture withBlock:anything()]; WLXBluetoothDeviceConnectionRecord * connectionRecord = connectionRecordCaptor.value; expect(connectionRecord.name).to.equal(mockPeripheral.name); expect(connectionRecord.UUID).to.equal(mockPeripheral.identifier.UUIDString); expect(connectionRecord.connectionDate).to.equal(mockDate); }); }); }); context(@"when the registry is disabled", ^{ context(@"when a new connection is established", ^{ beforeEach(^{ NSDictionary * userInfo = @{ WLXBluetoothDevicePeripheral : mockPeripheral }; [notificationCenter postNotificationName:WLXBluetoothDeviceConnectionEstablished object:nil userInfo:userInfo]; }); it(@"does not save the connected peripheral into the repository", ^{ [MKTVerifyCount(mockRepository, never()) saveConnectionRecord:anything() withBlock:anything()]; }); }); }); }); describe(@"#fetchLastConnectionRecordWithBlock:", ^{ context(@"when the registry is enabled", ^{ beforeEach(^{ registry.enabled = YES; }); context(@"when a new connection is established", ^{ beforeEach(^{ NSDictionary * userInfo = @{ WLXBluetoothDevicePeripheral : mockPeripheral }; [notificationCenter postNotificationName:WLXBluetoothDeviceConnectionEstablished object:nil userInfo:userInfo]; }); it(@"returns the last connection record", ^{ [registry fetchLastConnectionRecordWithBlock:^(NSError * error, WLXBluetoothDeviceConnectionRecord * connectionRecord) { expect(connectionRecord.name).to.equal(mockPeripheral.name); expect(connectionRecord.UUID).to.equal(mockPeripheral.identifier.UUIDString); expect(connectionRecord.connectionDate).to.equal(mockDate); }]; MKTArgumentCaptor * 
connectionRecordBlockCaptor = [[MKTArgumentCaptor alloc] init]; [MKTVerify(mockRepository) fetchLastConnectionRecordWithBlock:connectionRecordBlockCaptor.capture]; void (^block)(NSError *, WLXBluetoothDeviceConnectionRecord *) = connectionRecordBlockCaptor.value; block(nil, record); }); }); }); context(@"when the registry is disabled", ^{ context(@"when a new connection is established", ^{ beforeEach(^{ NSDictionary * userInfo = @{ WLXBluetoothDevicePeripheral : mockPeripheral }; [notificationCenter postNotificationName:WLXBluetoothDeviceConnectionEstablished object:nil userInfo:userInfo]; }); it(@"returns the previous connection record", ^{ [registry fetchLastConnectionRecordWithBlock:^(NSError * error, WLXBluetoothDeviceConnectionRecord * record) { expect(record).to.beNil; }]; }); }); }); }); describe(@"#fetchLastConnectedPeripheralWithBlock:", ^{ context(@"when the registry is enabled", ^{ beforeEach(^{ registry.enabled = YES; }); context(@"when a new connection is established", ^{ beforeEach(^{ NSDictionary * userInfo = @{ WLXBluetoothDevicePeripheral : mockPeripheral }; [notificationCenter postNotificationName:WLXBluetoothDeviceConnectionEstablished object:nil userInfo:userInfo]; }); it(@"returns the last connected peripheral", ^{ [registry fetchLastConnectedPeripheralWithBlock:^(NSError * error, CBPeripheral * peripheral) { expect(peripheral).to.equal(mockPeripheral); }]; }); }); }); context(@"when the registry is disabled", ^{ context(@"when a new connection is established", ^{ beforeEach(^{ NSDictionary * userInfo = @{ WLXBluetoothDevicePeripheral : mockPeripheral }; [notificationCenter postNotificationName:WLXBluetoothDeviceConnectionEstablished object:nil userInfo:userInfo]; }); it(@"returns the previuos connected peripheral", ^{ [registry fetchLastConnectedPeripheralWithBlock:^(NSError * error, CBPeripheral * peripheral) { expect(peripheral).to.equal(nil); }]; }); }); }); }); SpecEnd
Below is the candidate list of ChristenUnie-SGP for the 2004 European Parliament elections, a joint list of the ChristenUnie and the Staatkundig Gereformeerde Partij. The list: Hans Blokland, Bas Belder, Peter van Dalen, Chris Janse, Rijk van Dam, Evert-Jan Brouwer, Hans van Dijk, Jan Verboom, Ruud van Eijle, Ton de Jong, Heleen van den Berg, Otto van der Tang, Nadine de Roode-Hof, Gerrit Holdijk, Jochem Pleijsier, Rinus Houtman, Leon Meijer, Henk Jan van Schothorst, Johannes Schenk, Roelof Bisschop. Lists of ChristenUnie politicians Lists of SGP politicians ChristenUnie-SGP
\section{Introduction} We present algorithms on context-free grammars (and also on hypergraphs and regular tree grammars, which share the same context-free derivation rule): hypergraph reachability, shortest path, and inside-outside pruning of ``relatively useless'' arcs that are unused by any near-shortest paths. \secref{notation} is optional for those already familiar with regular tree grammars (analogous to derivation trees of context-free grammars) and/or hypergraphs. \section{Notation} \label{s:notation} \label{sec1} \subsection{Strings} $\Sigma^{\star}$ are the \emph{strings over alphabet $\Sigma$}. For $s=(s_{1},\ldots,s_{n})$ the \emph{length} of $s$ is $|s|\equiv n$ and the \nth{$i$}{th} \emph{letter} is $s[i]\equiv s_{i}$, for all $i\in indices_{s}\equiv\{ i\in\naturals \st 1\leq i\leq n\}$, and the concatenation of a sequence of letters by index is $s[\seqn{f}{n}\in indices_{s}^{\star}]\equiv (s[f[1]],\ldots,s[f[n]])$. \emph{Concatenation} of strings is specified by the $\concat$ operator, where $a\concat b\equiv(a[1],\ldots,a[|a|],b[1],\ldots,b[|b|])$. \comment{Naturally, $|a\concat b|=|a|+|b|$.} \comment{: \[ \seqn{a}{n}\concat\seqn{b}{m}\equiv(a_{1},\ldots,a_{n},b_{1},\ldots,b_{m})\] } \comment{ The \emph{letters in $s$} are $letters_{s}=\{ l\st\exists i\in{indices_{s}}:s[i]=l\}$. The \emph{spans} of $s$ are $spans_{s}=\{(a,b)\in\naturals^{2}\st 1\leq a\leq b\leq n+1\}$, and the \emph{substring at span $p=(a,b)$} of $s$ is $s\downarrow p\equiv(s_{a},\ldots,s_{b-1})$, with $s\downarrow(a,a)=()$. The \emph{subsequences} of $s$ are given by a \emph{subsequence map} $f\in subseqmap_s$: \[ subseqmap_s\equiv \{\seqn{i}{n}\in indices_s^{\star}\st i_1 < \ldots < i_n \} \] A subsequence of $s$ by map $f$ is $s[f]$. (The subsequences of $s$ are $subseq_s\equiv \{s[f]\st f\in subseqmap_s\}$).
For a letter $\sigma\in \Sigma$ there is exactly one maximal subsequence consisting of repetitions of that letter, and its map is $subseqmap_s(\sigma)$: \[ subseqmap_{s}^{\geneq}(\sigma)\equiv \bigconcat_{i=1}^{|s|} \ternary{s[i]\geneq\sigma}{(i)}{()} \] Note the $\geneq$ superscript, which, if omitted, is assumed to be the usual equality ($=$). Different $\geneq$ predicates can be useful for matching on projections of $\Sigma$. This convention will be assumed throughout. We can extend a function $f:\Sigma\into \Delta$ to sequences by mapping it over each element, $f:\Sigma^{\star}\into\Delta^{\star}$, where $f(s\in\Sigma^{\star})=(f(s[1]),\ldots,f(s[|s|]))$. } \subsection{Multisets} A \emph{multiset $M$ of $S$} is a partial function $M:S\into \naturals$, or equivalently, a functional binary relation $M\subset S\times \naturals$. The class of multisets of $S$ is written $\multiset{S}$. If $M(x)=m\in{\naturals}$, we say $(x,m)\in{M}$, $x\in{M}$, and the \emph{multiplicity of $x$ in $M$} is $m$. Intuitively, the multiplicity is the number of times an element occurs. The \emph{domain of $M$} is $\domain{M}\equiv \{x\in{M}\}$. In some cases it is convenient to interpret $M$ as a total function $M:S\rightarrow \nonnegints$ where $M(x\notin{\domain{M}})\equiv 0$. A set $S$ can be interpreted as a multiset where each $x\in{S}$ has multiplicity $S(x)\equiv 1$. A sequence $V=\seqn{v}{n}\in{S^{\star}}$ can also be seen as a multiset with $V(x)\equiv \sum_{i:v_{i}=x}1$ (after all, another notation for a multiset is just a set listed without removal of duplicates, e.g.\ $\{a,b,a\}$). \comment{ The \emph{intersection}, or \emph{product}, of multisets $M$ and $N$ is $M\intersect N \equiv \{(x,ab)\st (x,a)\in{M} \logand (x,b)\in{N}\}$. Their \emph{union}, or \emph{sum}, is $M\union N$, defined by $M\union N:(\domain{M})\union (\domain{N}) \into \naturals$ where $(M\union N)(x)\equiv M(x)+N(x)$. The \emph{size} of a multiset $M$ is $|M|\equiv \sum_{x\in{M}}M(x)$.
A multiset $M$ can be \emph{scaled} by a constant $k\in \naturals$: $kM\equiv \{(x,km)\st (x,m)\in M\}$. The \emph{factorial of a multiset $M$} is the set of unique permutations $M!\subset S^{\star}$ that are equivalent to $M$ when considered as multisets. The number of unique permutations of a multiset $M$ is given by \[ |M!|=\frac{|M|!}{\prod_{(x,m)\in{M}}m!} \] since all the $M(x)!$ ways of reordering the $M(x)$ identical items $x\in M$ are indistinguishable. The multiset factorial of a sequence can be generated in the tradition of sequence permutations, except doing nothing when the two items to be swapped are equal, instead of explicitly counting the multiplicity of the unique elements. } \subsection{Trees} $T_{\Sigma}$ is the set of \emph{(rooted, ordered, labeled, finite) trees over alphabet $\Sigma$}. $T_{\Sigma}(X)$ are the \emph{trees over alphabet $\Sigma$, indexed by $X$}---the subset of $T_{\Sigma\union X}$ where only leaves may be labeled by $X$. ($T_{\Sigma}(\emptyset)=T_{\Sigma}$.) \emph{Leaves} are nodes with no children. The \emph{nodes} of a tree $t$ are identified one-to-one with its \emph{paths}: $paths_{t}\subset paths\equiv \naturals ^{\star} \equiv\bigcup_{i=0}^{\infty}\naturals^{i}$ (where $A^{0}\equiv\{()\}$ for any set $A$). The path to the root is the empty sequence $()$, and $p_{1}$ \emph{extended by} $p_{2}$ is $p_{1}\concat p_{2}$, where $\concat$ is concatenation. For $p\in{paths_{t}}$, $rank_{t}(p)$ is the number of children, or \emph{rank}, of the node at $p$ in $t$, and $label_{t}(p)\in{\Sigma} \union X$ is its \emph{label}. The \emph{root of $t$} is $root(t)=label_{t}(())$. The \emph{ranked label} of a node is the pair $labelandrank_{t}(p)\equiv(label_{t}(p),rank_{t}(p))$. For $1\leq i\leq rank_{t}(p)$, the \nth{$i$}{th} \emph{child} of the node at $p$ is located at \emph{path} $p\concat(i)$.
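For concreteness, the path notation can be mirrored in code; a small sketch (our own representation, not from the paper) encoding a tree as a (label, children) pair, with nodes addressed by tuples of 1-based child indices:

```python
# Tree t = (label, [children]); a path is a tuple of 1-based child indices.
def subtree(t, p):
    """t down p: the subtree of t at path p (p = () is the root)."""
    for i in p:
        t = t[1][i - 1]
    return t

def label(t, p):
    # label_t(p): the label of the node at path p.
    return subtree(t, p)[0]

def rank(t, p):
    # rank_t(p): the number of children of the node at path p.
    return len(subtree(t, p)[1])

# Example tree sigma(a, gamma(b)); the node at path (2, 1) is the leaf b.
t = ("sigma", [("a", []), ("gamma", [("b", [])])])
```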
The \emph{subtree at path $p$ of $t$} is $t\downarrow p$, defined by $paths_{t\downarrow p}\equiv\{ q\st p\concat q\in{paths_{t}}\}$ and $labelandrank_{t\downarrow p}(q)\equiv labelandrank_{t}(p\concat q)$. The \emph{children of $t$} are $children_t\in T_\Sigma^{\star}$, with $children_t[i]=t\downarrow (i), \forall 1\leq i \leq rank_{t}(())$. The \emph{paths to $X$ in $t$} are $paths_{t}(X)\equiv\{ p\in{paths_{t}}\st label_{t}(p)\in{X}\}$. A \emph{frontier} is a set of paths $f$ that are \emph{pairwise prefix-independent}: \[ \forall p_{1},p_{2}\in{f},p\in{paths}:p_{1}=p_{2}\concat p\logimplies p_{1}=p_{2}\] A \emph{frontier of $t$} is a frontier $f\subseteq paths_{t}$. For $t,s\in{T_{\Sigma}(X)},p\in{paths_{t}}$, $t[p\assign s]$ is the \emph{substitution of $s$ for $p$} in $t$, where the subtree at path $p$ is replaced by $s$. For a frontier $f$ of $t$ and trees $X(p)$ for $p\in f$, the \emph{mass substitution of $X$ for the frontier $f$ in $t$} is written $t[p\assign X(p),\forall p\in{f}]$ and is equivalent to substituting the $X(p)$ for the $p$ serially in any order. The \emph{yield of $X$ in} $t$ is $yield_{t}(X)$, \comment{ the concatenation (in lexicographic order\footnote{$()<_{lex}(a)$, $(a_{1})<_{lex}(a_{2}) \textrm{ iff } a_{1}<a_{2}$, $(a_{1}) \cdot b_{1}<_{lex} (a_{2})\cdot b_{2} \textrm{ iff } a_{1}<a_{2} \logor (a_{1}=a_{2} \logand b_{1}<_{lex} b_{2})$}) over paths to leaves $l\in{paths_{t}}$ (such that $rank_{t}(l)=0$) of $label_{t}(l)\in{X}$---that is, } the string formed by reading out the leaves labeled with $X$ in left-to-right order. The usual case (the \emph{yield of $t$}) is $yield_{t}\equiv yield_{t}(\Sigma)$. We may also consider the \emph{monadic strings} in $t$, $mstrings_t \subset \Sigma^{\star}$, obtained by reading off the labels along some path from the root down.
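Substitution and yield admit an equally short sketch in the same (label, children) encoding (our own illustration, not from the paper):

```python
# Substitution t[p := s] and yield, for trees as (label, [children]) pairs
# with paths as tuples of 1-based child indices.
def substitute(t, p, s):
    """t[p := s]: replace the subtree of t at path p by s (returns a new tree)."""
    if not p:
        return s
    lbl, kids = t
    i = p[0] - 1
    return (lbl, kids[:i] + [substitute(kids[i], p[1:], s)] + kids[i + 1:])

def tree_yield(t, alphabet=None):
    """yield_t(X): leaf labels in left-to-right order.

    If alphabet is None, all leaf labels are read out (the usual yield_t).
    """
    lbl, kids = t
    if not kids:
        return [lbl] if alphabet is None or lbl in alphabet else []
    return [x for k in kids for x in tree_yield(k, alphabet)]
```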
The paths that read off a monadic string $s$ in $t$ are $mpaths_{t}^{\geneq}(s)\equiv \{p\in paths_t\st \forall 1\leq i \leq |p|+1 : label_t(p\downarrow (1,i))\geneq s[i]\}$, and the string of labels along a path is $mstring_t(p\in paths_t)\equiv \bigconcat_{i=1}^{|p|+1} (label_t(p\downarrow (1,i)))$ (so $\forall p\in mpaths_t^{\geneq}(s) : mstring_t(p)\geneq s$). Then $mstrings_t \equiv \{mstring_t(p)\st p\in paths_t\}$, and for $s\in{mstrings_t}$, $t\downarrow s$ is the sequence of \emph{subtrees of $t$ along the monadic string $s$} (in lexicographic path order): \[ t\downarrow^{\geneq} s \equiv \bigconcat_{p\in mpaths_{t}^{\geneq}(s) \text{ in lexicographic order }} (t\downarrow p) \] Naturally, the path in $t$ to the \nth{$i$}{th} element of $t \downarrow s$ is the \nth{$i$}{th} element (in lexicographic order) of $mpaths_t(s)$. \comment{ The \emph{$l$-labeled-children of $t$} are contained in the subsequence $children_{t}^\geneq(l)=t\downarrow^\geneq (l)$: \[ children_t^\geneq(l)\equiv { c[subseqmap_{c,=_r}(l)] \text{ where } c\equiv children_t \text{ and } a=_rb \text{ iff } root(a)\geneq b } \] } \subsection{Regular Tree Grammars} A \emph{weighted regular tree grammar} (\cls{wRTG}) $G$ is a quadruple $(\Sigma,N,S,P)$, where $\Sigma$ is the alphabet, $N$ is the finite set of \emph{nonterminals}, $S \in{N}$ is the \emph{start (or initial) nonterminal}, and $P\subseteq N\times T_{\Sigma}(N)\times\positivereals$ is the finite set of \emph{weighted productions} ($\positivereals \equiv\{ r\in{\reals }\st r>0\}$).
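As a running illustration (our own example, not from the paper), here is a small wRTG over $\Sigma=\{\sigma,a\}$ with a single nonterminal:

```latex
% Example wRTG (illustration only): G = (\{\sigma, a\}, \{S\}, S, P) with
\[
P = \bigl\{\, p_{1}=(S,\;\sigma(S,S),\;0.4),\qquad p_{2}=(S,\;a,\;0.6) \,\bigr\}
\]
% G produces the binary trees whose internal nodes are labeled \sigma
% and whose leaves are labeled a.
```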
We define the binary relation $\derivess{G}$ (\emph{single-step derives in $G$}) on $T_{\Sigma}(N) \times (paths \times P)^{\star}$, pairs of trees and \emph{derivation histories}, which are logs of (location, production used): \[ \begin{array}{r} \derives_{G}\equiv\Bigl\{((a,h),(b,h\concat (p,(l,r,w))))\bigst \\ (l,r,w)\in{P}\logand p\in{paths_{a}(\{ l\})}\logand b=a[p\assign r]\Bigr\} \end{array} \] where $(a,h)\derivess{G}(b,h\concat (p,(l,r,w)))$ iff tree $b$ may be derived from tree $a$ by using the rule $l\transformsto^{w}r$ to replace the nonterminal leaf $l$ at path $p$ with $r$. For a derivation history $h=((p_{1},(l_{1},r_{1},w_{1})),\ldots,(p_{n},(l_{n},r_{n},w_{n})))$, the \emph{weight of $h$} is $w(h) \equiv \prod_{i=1}^{n} w_{i}$, and we call $h$ \emph{leftmost} if $L(h)\equiv \forall 1\leq i < n : p_{i+1} \nless_{lex} p_{i}$.\footnote{$()<_{lex}(a)$, $(a_{1})<_{lex}(a_{2}) \textrm{ iff } a_{1}<a_{2}$, $(a_{1}) \cdot b_{1}<_{lex} (a_{2})\cdot b_{2} \textrm{ iff } a_{1}<a_{2} \logor (a_{1}=a_{2} \logand b_{1}<_{lex} b_{2})$} The reflexive, transitive closure of $\derivess{G}$ is written $\derivesc{G}$ (\emph{derives in $G$}), and the restriction of $\derivesc{G}$ to leftmost derivation histories is $\derivesl{G}$ (\emph{leftmost derives in $G$}). The \emph{weight of $a$ becoming $b$ in $G$} is $w_{G}(a,b) \equiv \sum_{h:(a,())\derivesl{G}(b,h)}w(h)$, the sum of weights of all unique (leftmost) derivations transforming $a$ to $b$, and the \emph{weight of $t$ in $G$} is $W_{G}(t)=w_{G}(S ,t)$. The \emph{weighted regular tree language produced by $G$} is $L_{G}\equiv \{(t,w)\in T_{\Sigma} \times \positivereals \st W_{G}(t)=w \}$. The \emph{derivation tree grammar} for a \cls{wRTG} $G=(\Sigma,N,S,P)$ is $DG(G)=(P,N,S,P')$, where \[ P'\equiv\{(l,p(yield_{N}(r)),w)\st p=(l,r,w)\in{P}\} \] ($p((s_{1},\ldots,s_{n})\in N^{\star})$ is the tree with root label $p$, rank $n$, and \nth{$i$}{th} child leaf $s_{i}$).
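To make these definitions concrete, consider a two-production example (our own illustration, not from the paper): $p_{1}=(S,\sigma(S,S),0.4)$ and $p_{2}=(S,a,0.6)$. The tree $\sigma(a,a)$ has exactly one leftmost derivation:

```latex
\[
(S,())\derivess{G}(\sigma(S,S),h_{1})\derivess{G}(\sigma(a,S),h_{2})
\derivess{G}(\sigma(a,a),h_{3}),
\]
\[
h_{3}=\bigl(((),p_{1}),\,((1),p_{2}),\,((2),p_{2})\bigr),\qquad
W_{G}(\sigma(a,a))=0.4\cdot 0.6\cdot 0.6=0.144.
\]
% The corresponding derivation tree in DG(G) is p_1(p_2, p_2).
```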
The produced trees are called \emph{derivation trees} and correspond one-to-one with tree-producing derivations in $G$. \comment{ \subsection{Unordered Trees} Just like a multiset is a sequence where we don't care about the order of its elements, we can consider trees where we don't care about the order of children. Call $MT_\Sigma=\{t=(root(t)\in \Sigma,children_t\in \multiset{MT_\Sigma})\}$ the set of \emph{(rooted, labeled, finite) unordered trees over alphabet $\Sigma$}. The root's label is \emph{$root(t)$}; its rank would be $|children_t|$). Paths are not defined for unordered trees, but, as with ordered trees, we are interested in the subtrees descended along paths labeled by a monadic string, $t\downarrow s$: \[ t\downarrow^\geneq \seqn{s}{n}\equiv \ternary{n>1}{ \union_{(c,m)\in{t\downarrow (s_1)}}m(c\downarrow (s_2,\ldots,s_n)) }{\ternary{root(t)\geneq s_1}{\{t\}}{\emptyset}} \] As for ordered trees, we let $mstrings_t$ be a multiset (instead of a sequence) with multiplicity for $s$ $mstrings_t(s)\equiv |t\downarrow s|$. \comment{ Similarly, we define $children_{t}^{\geneq}(l)\equiv t\downarrow^{\geneq} (l)$. } An ordered tree $t$ can be interpreted as an unordered tree $u$ by the recursive rule $U$ that $U(t)\equiv (root(u)\equiv root(t), children_u\equiv U(children_t)$ (sequence interpreted as a multiset). Then, properties related to monadic strings $s$ of $t$ should be the same multiset in the ordered $t$ as in $u$---for example, $t \downarrow s=u \downarrow s$. } \subsection{Hypergraphs} A \emph{(directed) hypergraph $G$} is a pair $G=(V,E)$ where $V$ is a set of \emph{vertices} (or \emph{nodes}) of $G$, and $E$ are the \emph{edges} (or \emph{hyperarcs}) of $G$. An edge $e=(h_{e}\in{V},T_{e},c_{e}:\reals^{|T_{e}|}\into \reals)$ has \emph{head $h_{e}$}, \emph{tails $T_{e}$}, and \emph{cost function $c_{e}$}. The cost function for an edge maps the costs of reaching its tails to the cost of reaching the head through that edge. 
In a hypergraph, $T_{e}\subseteq V$---the tails form a subset of the vertices. \comment{For a \emph{multi-hypergraph}, $T_{e}\in{\multiset{V}}$---the tails are a multiset of vertices. } In an \emph{ordered multi-hypergraph}, $T_{e}\in{V^{\star}}$---the tails are ordered sequences. Typically hyperarc cost functions are symmetric; if not, the order of arguments is the same as the order of tails, or, for unordered hypergraphs, is fixed by some arbitrary total order $<_{G}$ on $V$. The usual cost function is given by $c_{e}\seqn{x}{n}\equiv l_{e}+\sum_{i=1}^{n}x_{i}$, where $l_{e}$ is the \emph{length} of the edge. A typical asymmetric cost function would combine tail hyperpath costs with different weights for each tail. We say there is a \emph{hyperpath from $X\subseteq V$ to $y\in{V}$ in $G=(V,E)$}, written $X\leadsto_{G}y$, if $y\in{X}\logor \exists e\in{E} : h_{e}=y \logand \forall t\in{T_{e}} : X\leadsto_{G}t$. A \emph{hyperpath-tree $t\in{(X\leadsto_{G}y)}$} is a tree labeled by edges, corresponding to a proof of $X\leadsto_{G}y$ (with a separate proof for each occurrence of a tail vertex; note that the usual B-hyperpath allows only a single incoming hyperarc/proof per vertex, so our hyperpath-trees are more like derivations in a context-free grammar). The \emph{cost} of a hyperpath-tree $p$ is written $c(p)$ and is computed bottom-up for each subtree with root label $e$ using $c_{e}$. \comment{ The hyperpath-trees of an ordered multi-hypergraph are ordered trees ($T_E$) with subtree $p\downarrow (i)$ giving the proof used for the \nth{i}{th} tail, while the hyperpath-trees of an unordered (multi-)hypergraph are unordered ($MT_E$) trees. For each node in a hyperpath-tree with edge label $e$, there is exactly one child subtree for each instance of a tail $t\in T_e$, with root edge label $e'$ having the same head $h_{e'}=t$.
There is a many-to-one cost-preserving correspondence between hyperpath-trees in an ordered multi-hypergraph $G=(V,E)$ and a derived multi-hypergraph $G'=(V,E')$ with $E'=E$ (by interpreting the tails as multisets instead of sequences). Each unordered hyperpath-tree $p:X \leadsto_{G'}y$ describes a set $O_G(p):X\leadsto_{G}y$ of unique equivalent ordered hyperpath-trees in $G$--- essentially (recursively) all permutations of $children(p)$, but with the child root edge dictating which tail positions it can attach to ($h_{label_o(k)}=T_{root(o)}[k]$). Another way to look at this is that we can specify the ordered child index $i$ as being the \nth{n}{th} least having corresponding to the tail vertex $h_{label_o((i))}$. That is, for an ordered hyperarc $e$ with $T_e\in V^{\star}$, $?_e(v\in V,n\in \naturals)\equiv subseqmap_{T_e}(v)[n]$ gives the location of a particular instance of a tail. We can compute $O_G(p)$ but need to check for identical subtrees in order to not count their inversion twice (this is done implicitly by iterating over unique items in the multiset ${p\downarrow v}$): \[ O_G(p)\equiv \left\{\begin{array}{r} (root(o),children_o) \st root(o)\equiv \text{the ordered version (from $G$) of }root(p)\;\logand \\ \forall v\in T_{root(p)} \exists{l}\in (p\downarrow ^{=_h}(v))! : children_o[subseqmap_{T_{root(o)}}^{=_h}(v)]=l \\ \text{ where } a=_h b \text{ iff } h_a=b \end{array} \right\} \] } For any derivation grammar $G'=(P,N,S,P')$ of \cls{wRTG} $G=(\Sigma,N,S,P)$, there is an equivalent ordered multi-hypergraph $H=(N\union\{\startnode\},E)$ with an edge $e\in{E}$ for each production $p=(l,r,w)\in{P'}$ such that $h_{e}=l$, $T_{e}=\ternary{yield_{N}(r)=\emptyset}{\{\startnode\}}{yield_{N}(r)}$, and the usual cost function with $l_{e}=-\ln{w}$. 
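The construction just described is mechanical enough to sketch directly; the dict-based edge encoding and the \texttt{START} sentinel name are illustrative assumptions, not notation from the paper:

```python
import math

# Sketch: one hyperarc per production of the derivation grammar, with head the
# lhs nonterminal, tails the rhs nonterminal yield (or the fictitious start
# node when the yield is empty), and length -ln(weight).

START = "*start*"

def grammar_to_hypergraph(productions):
    """productions: iterable of (lhs, rhs_nonterminal_yield, weight)."""
    edges = []
    for lhs, rhs_yield, w in productions:
        edges.append({"head": lhs,
                      "tails": list(rhs_yield) if rhs_yield else [START],
                      "length": -math.log(w)})
    return edges
```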
The hyperpath-trees $\startnode \leadsto_{H} S$ are exactly the derivation trees for $G$, with the cost of the hyperpath-tree equal to the negative $\ln$ of the weight of the tree (obviously, the labels of the hyperpath-tree are $e\in{E}$ and the labels of the derivation tree are $p\in{P}$, but there is an isomorphism between them, due to the construction of $E$). A hypergraph $(V,E)$ may be interpreted as a multigraph $(V,E')$ with an edge for every tail of each hyperarc ($E'=\{(h_e,t\in T_e,c_e)\st (h_e,T_e,c_e)\in E\}$). We can refer to \emph{simple} (or \emph{monadic}) paths corresponding to the usual paths in the graph. In fact, monadic strings $s$ of hyperarcs from a hyperpath-tree for $(V,E)$ correspond to a simple path $h_{s[|s|]}\leadsto_{(V,E')}h_{s[1]}$. \section{Pruning Along a Hyperpath-Tree} If we are only interested in hyperpath-trees $X\leadsto_G y$, we can \emph{prune $G$ along $X$ to $y$} by eliminating vertices and hyperarcs that don't appear in any (cheap) hyperpath-tree. This is analogous to the problem of reducing a context free grammar by eliminating useless nonterminals \cite{hopcroft}, except that we wish to also eliminate those useful only for high-cost hyperpath-trees. Since we care only for the existence of a (cheapest) path for each node, tails of edges may be considered as sets while addressing this problem, so that multiply appearing tails $t$ in a multi-hypergraph always reuse the same hyperpath-tree $X\leadsto_G t$. We assume the cost function $c_e(c)=l_e+\sum_{(t,m)\in T_e}w_e(t)mc(t)$, where $c(t)$ is the cost due to the hyperpath-tree $X\leadsto t$ and $w_e(t)$ is a weight given to $t$-tails of that edge. Unweighted pruning consists of first eliminating vertices (and hyperarcs they occur in) that cannot be reached from the start, and second, eliminating from the remainder all those that do not lie along any hyperpath-tree to the destination. The first step can be performed in linear time by \algref{algo_reachfrom}.
\begin{algorithm} \DontPrintSemicolon \caption{ Single-source-set hypergraph reachability } \KwIn{ A set of source nodes $X\subseteq V$ in a hypergraph $G=(V,E)$, nodes $V$, and hyperarcs $E=\{e_1,\ldots,e_m\}$ indexed by $1\leq i\leq m$. Each hyperarc has \hastails and \hashead. } \KwOut{ For all $y\in{V}$, $\reachfrom[y]=\true$ if $X\leadsto_G y$, $\false$ otherwise. Time complexity is $O(t)$ where $t$ is the total size of the input. } \Begin{ \lFor{$y\in{V}$}{ $\reachfrom[y] \assign \false$\; $\Adj[y] \assign \{\}$\; } \For{$1\leq i \leq m$,\text{ index of a hyperarc }$(T_{i}=\{x_{1},\ldots,x_{k}\}) \rightarrow \{h_{i}\}$}{ $r[i] \assign k$\; \tcc{ $r[i]$ is the number of tail nodes remaining before edge $i$ fires.} \lFor{$1\leq j \leq k$}{$\Adj[x_{j}] \assign \Adj[x_{j}] \union \{i\} $\; } } \lFor{$y\in X$}{\algname{REACH}(y)\;} } \BlankLine $\algname{REACH}(y)\equiv$ \Begin{ \If{$\neg \reachfrom[y]$}{ $\reachfrom[y]\assign \true$\; \For{$i\in{\Adj[y]}$}{ \If{$\neg \reachfrom[h_i]$}{ $r[i] \assign r[i] - 1$\; \lIf{$r[i] = 0$}{$\algname{REACH}(h_i)$\;} } } } } \label{algo_reachfrom} \end{algorithm} The weighted version of \algref{algo_reachfrom} establishes the lowest cost way of reaching each vertex from a start set (or that there is none). \algref{algo_knuth}, adapted from \cite{knuthgrammar} (first published in \cite{poweroftree}), is an extension of the graph shortest path problem \cite{dijkstra} to the hypergraph case. It works the same except that vertices are visited in increasing order of the cost of reaching them from $X$, and so requires a priority queue. Activated hyperarcs serve to potentially lower the cost of reaching their head, but visiting the head is deferred until it is certain that its minimal cost hyperpath-tree is known. This is in contrast to the simple depth first approach in the unweighted case, where the head is visited immediately with a recursive function call (using the implicit program stack for queuing nodes).
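An executable sketch of \algref{algo_reachfrom}, with an explicit stack in place of the recursive \algname{REACH} and the same per-edge counters; the edge encoding (dicts with \texttt{head} and \texttt{tails}) is an illustrative assumption:

```python
# Sketch of single-source-set hypergraph reachability with tail counters.
# Distinct tail vertices are counted once, matching the set-valued Adj.

def reach_from(sources, vertices, edges):
    reach = {v: False for v in vertices}
    remaining = [len(set(e["tails"])) for e in edges]  # distinct tails left
    adj = {v: [] for v in vertices}                    # edges with v as a tail
    for i, e in enumerate(edges):
        for t in set(e["tails"]):
            adj[t].append(i)
    stack = list(sources)
    while stack:
        y = stack.pop()
        if reach[y]:
            continue
        reach[y] = True
        for i in adj[y]:
            remaining[i] -= 1
            if remaining[i] == 0 and not reach[edges[i]["head"]]:
                stack.append(edges[i]["head"])
    return reach
```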
\newcommand\sink{\omega} \newcommand\countnonterm{{\#}} \begin{algorithm} \DontPrintSemicolon \caption{ \algname{ViterbiInside}: single-source-set, multi-destination shortest hyperpath-trees.} \KwIn{ A set of source nodes $X\subseteq V$ with initial costs $\{i_{x},\forall x\in{X}\}$, and a hypergraph with $n$ nodes $V$, and $m$ hyperarcs $\seqn{e}{m}$ indexed by $1\leq i\leq m$. Each hyperarc has \hastails, \hashead, and superior cost function $c_{i}\equiv c_{e_{i}}$ \fnote{$f$ is \emph{superior} iff $f(x_{1},\ldots,x_{k}) \geq x_{i}, \forall 1\leq i\leq k$ \cite{knuthgrammar}} of variables $T_{i}$. The cost functions are implemented by constant time operations \algname{BIND}($c_{i},y\in T_{i},\text{cost of }y$) and \algname{INF}($c_{i}$), which returns a lower bound on the cost given the variables bound so far. For a context-free grammar or regular tree grammar, introduce a fictitious sink nonterminal $\sink$ to the rhs of terminal rules. Now let $V$ be the nonterminals, and let $X$ be $\{\sink\}$. For each \nth{i}{th} rule, let $h_{i}$ be the lhs nonterminal, $T_{i}$ be the set of rhs nonterminals (or $\{\sink\}$ if there are none). Finally, initialize \algname{INF}($c_{i}$) to $w_{i}=-\log{P(i|h_i)}$, the negative log rule probability of rule $i$, and define \algname{BIND}($c_{i},y\in T_{i},c$) as increasing \algname{INF}($c_{i}$) by $\countnonterm_{i}(y)c$, where $\countnonterm_{i}(t)$ is the number of occurrences of nonterminal $t$ in rule $i$. } \KwOut{ For all $v\in{V}$, $\pi[v]=i$ is the index of the cheapest hyperarc with head $h_{i}=v$, giving the predecessor relation of the cheapest unordered hyperpath-tree $X \leadsto v$, and $\inside[v]$ is the minimum cost of reaching $v$. $\pi[v]=0$ if there is no cost-improving edge to $v$. Time complexity is $O(n\lg{n}+t)$, where $t$ is the total size of the input, if a Fibonacci heap is used, or $O(m\lg{n}+t)$ if a binary heap is used.
} \Begin{ \For{$y\in{V}$}{ \lIf{$y\in{X}$}{$\inside[y] \assign i_{y}$\;} \lElse{$\inside[y] \assign \infty$\;} $\pi[y] \assign 0$\; $\Adj[y] \assign \{\}$\; } $Q \assign \PQ{CREATE}()$\; \lFor{$x\in{X}$}{$\PQ{INSERT}(Q,x,i_{x})$\;} \For{$1\leq i \leq m$,\text{ index of a hyperarc }$(T_{i}=\{x_{1},\ldots,x_{k}\}) \rightarrow^{c_{i}} \{h_{i}\}$}{ $r[i] \assign k$\; \tcc{ $r[i]$ is the number of tail nodes remaining before edge $i$ fires.} \lFor{$1\leq j \leq k$}{$\Adj[x_{j}] \assign \Adj[x_{j}] \union \{i\} $\; } } \While{$Q \neq \emptyset$}{ $y \assign \PQ{EXTRACT-MIN}(Q)$\; \For{$i\in{\Adj[y]}$}{\tcc{ edge $i$ with $y$ as a tail} \If{$\algname{INF}(c_{i}) < \inside[h_{i}]$}{ $\algname{BIND}(c_{i},y,\inside[y])$\; $r[i] \assign r[i] - 1$\; \If{$r[i] = 0$}{ $c \assign \algname{INF}(c_{i})$\; \If{$c < \inside[h_{i}]$}{ \lIf{$\inside[h_{i}] = \infty$}{$\PQ{INSERT}(Q,h_i,c)$\;} \lElse{$\PQ{DECREASE-KEY}(Q,h_{i},c)$\;} $\pi[h_{i}] \assign i$\; $\inside[h_{i}] \assign c$\; } } } } } } \label{algo_knuth} \end{algorithm} Having eliminated parts of the hypergraph that aren't reachable from $X$, it still remains to further remove any parts that don't contribute to reaching $y$. In \algref{algo_reachto}, we perform a simple depth-first traversal from heads to tails of hyperarcs, starting with the destination $y$, ultimately saving only vertices that can help reach $y$. \newcommand\hrestrict[2]{{{#1}\langle {#2} \rangle}} To see how this works, let the \emph{restriction} of hypergraph $G=(V,E)$ to a subset of its vertices $V'\subseteq V$ be $\hrestrict{G}{V'}\equiv(V',E')$ where $E'=\{e\in E \st h_e\in V' \logand T_e\subseteq V'\}$. First, run \algref{algo_reachfrom} on $G$ to find $V'=\{v \in V \st X\leadsto_G v\}$, then second, run \algref{algo_reachto} on the resulting restriction $G'=\hrestrict{G}{V'}$ to find $V''=\{v\in V' \st \exists F\supseteq\{v\}: F \leadsto_{G'} y\}$.
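A compact executable sketch of \algname{ViterbiInside} for the usual additive cost function $c_{e}=l_{e}+\sum_{i}x_{i}$, using a binary heap with lazy deletion in place of \PQ{DECREASE-KEY}; the edge encoding is an illustrative assumption:

```python
import heapq, math

# Sketch of ViterbiInside (Knuth's generalization of Dijkstra) for additive
# edge costs. Tails are a sequence; a vertex occurring twice as a tail is
# bound (and counted) twice, matching c_e = l_e + sum over tail occurrences.

def viterbi_inside(sources, vertices, edges):
    """Return (inside, pred): cheapest cost of X ~> v, and the index of the
    hyperarc used to reach v (None for sources/unreachable vertices)."""
    inside = {v: (0.0 if v in sources else math.inf) for v in vertices}
    pred = {v: None for v in vertices}
    adj = {v: [] for v in vertices}      # edge index, once per tail occurrence
    for i, e in enumerate(edges):
        for t in e["tails"]:
            adj[t].append(i)
    remaining = [len(e["tails"]) for e in edges]
    bound = [e["length"] for e in edges]  # INF(c_i): lower bound built so far
    heap = [(0.0, v) for v in sources]
    heapq.heapify(heap)
    done = set()
    while heap:
        _, y = heapq.heappop(heap)
        if y in done:
            continue                      # stale entry (lazy deletion)
        done.add(y)
        for i in adj[y]:
            bound[i] += inside[y]         # BIND one tail occurrence
            remaining[i] -= 1
            if remaining[i] == 0:         # edge fires: all tails finalized
                h = edges[i]["head"]
                if bound[i] < inside[h]:
                    inside[h] = bound[i]
                    pred[h] = i
                    heapq.heappush(heap, (bound[i], h))
    return inside, pred
```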
Then the hypergraph $G''=\hrestrict{G'}{V''}$ has the same hyperpath-trees $X\leadsto_{G''} y$ as $G$, and is the minimal such. The order of these steps is essential - there may be vertices that only help reach $y$ through hyperarcs that are eliminated in \algref{algo_reachfrom}. In the second step, we automatically qualify each node $t\in T_e$ that is connected through $e$ to $y$ as participating in some hyperpath-tree $X \leadsto_G y$, which is sound only if we can assume some hyperpath $X \leadsto_G t'$ exists for all $t'\in T_e$. But the first step guarantees this by removing all nodes that aren't reachable from $X$. \begin{algorithm} \DontPrintSemicolon \caption{ Single-destination hypergraph reachability } \KwIn{ A destination node $y\in V$ in a hypergraph $G=(V,E)$, with $n$ nodes $V$, and $m$ hyperarcs $E=\{e_1,\ldots,e_m\}$ indexed by $1\leq i\leq m$. Each hyperarc has \hastails and \hashead. } \KwOut{ For all $x\in{V}$, $\reachto[x]=\true$ if there is a hyperpath-tree $X\leadsto_G y$ such that $x\in{X}$, $\false$ otherwise. Time complexity is $O(t)$ where $t$ is the total size of the input (this is simple depth-first search on the projected regular graph). } \Begin{ \lFor{$x\in{V}$}{ $\reachto[x] \assign \false$\; } \algname{USE}(y)\; } \BlankLine $\algname{USE}(y)\equiv$ \Begin{ $\reachto[y]\assign \true$\; \For{$i$ such that $h_{i}=y$}{ \For{$t\in T_{i}$}{ \If{$\neg \reachto[t]$}{ $\algname{USE}(t)$\; } } } } \label{algo_reachto} \end{algorithm} What we are really doing is reversing a hypergraph by interpreting it as a monadic graph consisting of all edges formed by selecting just one tail of each hyperarc, and plugging in a default rule for completing the omitted siblings. We can extend this strategy to the weighted case, using the shortest hyperpath-tree $X\leadsto v$ ($\pi[v]$) (from \algref{algo_knuth}) for each omitted sibling $v$. Then we can attribute to each monadic arc the cost of those omitted hyperpath-trees ($\inside[v]$), in addition to the cost of its original hyperarc.
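The backward step and the restriction compose into a small pruning pipeline; as before the edge encoding is illustrative, and forward reachability (\algref{algo_reachfrom}) is assumed to have been applied first:

```python
# Sketch of single-destination reachability (the USE traversal, iteratively)
# followed by restriction of the hypergraph to the kept vertices.

def reach_to(dest, vertices, edges):
    reach = {v: False for v in vertices}
    incoming = {v: [] for v in vertices}   # edge indices with head v
    for i, e in enumerate(edges):
        incoming[e["head"]].append(i)
    stack = [dest]
    while stack:
        y = stack.pop()
        if reach[y]:
            continue
        reach[y] = True
        for i in incoming[y]:
            for t in edges[i]["tails"]:
                if not reach[t]:
                    stack.append(t)
    return reach

def restrict(vertices, edges, keep):
    """Keep vertices marked in `keep` and edges whose head and tails all survive."""
    vs = {v for v in vertices if keep[v]}
    es = [e for e in edges
          if keep[e["head"]] and all(keep[t] for t in e["tails"])]
    return vs, es
```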
Then we can perform the usual single-source shortest graph paths computation~\cite{dijkstra} on this reversed monadic graph. Since any subtree of a shortest hyperpath-tree $t\in{(X\leadsto y)}$ is a shortest hyperpath-tree from $X$ to its root-head $h_{label_{t}(())}$, we can decompose the shortest hyperpath-tree using node $v$ into the shortest \emph{inside} $X\leadsto v$ plus the \emph{outside} $v\leadsto y$ formed by reconstituting a path in the monadic graph with the default interpretation of omitted siblings. The outside part is an almost-hyperpath-tree, missing only an inside subtree for $X\leadsto v$ (an outside tree would be a hyperpath-tree $X\union \{v\} \leadsto y$). This is the insight behind the inside-outside algorithm\cite{InsideOutside} for training context free string grammars, and also its extension to training tree transducers\cite{TTT}. Note that this decomposition means that the cost functions for hyperarcs must be separable into an independent sum over parts due to the tails and a part due to the arc. In \algref{algo_dijkstra}, we implicitly perform this reversal and monadification of a hypergraph and obtain for each vertex $v$ the cheapest way to complete a hyperpath-tree $X\leadsto v$ into $X\leadsto v\leadsto y$ (by that we mean adjoining some inside hyperpath-tree $X\leadsto v$ with an outside part), using parent $\psi[v]$, with total outside cost $\outside[v]$ (leaving out the cost of $X\leadsto v$). Then, the \emph{utility} of $v$, or the cost of the cheapest hyperpath-tree using it, is just $\gamma[v]\equiv \outside[v]+\inside[v]$ and the utility of hyperarc $e$ is $\gamma[e]\equiv \outside[h_e]+l_e+\sum_{(t,m)\in T_e}m\inside[t]$. It is then easy to select vertices and edges for removal based on some criteria on their utility relative to the cost of the cheapest hyperpath-tree $X\leadsto y$, which is $\inside[y]$.
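Under this separability assumption, the outside costs and utilities can be sketched as best-first search on the implicit reversed monadic graph, charging each monadic arc its hyperarc length plus the inside costs of the omitted siblings (encodings illustrative; \texttt{inside} comes from a ViterbiInside pass):

```python
import heapq, math

# Sketch: outside costs over the reversed monadic graph. For each edge with
# the popped vertex as head, charge the full default cost, then hold out one
# instance of each tail.

def viterbi_outside(dest, vertices, edges, inside):
    outside = {v: math.inf for v in vertices}
    outside[dest] = 0.0
    incoming = {v: [] for v in vertices}
    for i, e in enumerate(edges):
        incoming[e["head"]].append(i)
    heap = [(0.0, dest)]
    done = set()
    while heap:
        d, x = heapq.heappop(heap)
        if x in done:
            continue                       # stale entry (lazy deletion)
        done.add(x)
        for i in incoming[x]:
            e = edges[i]
            total = d + e["length"] + sum(inside[t] for t in e["tails"])
            for t in set(e["tails"]):
                c = total - inside[t]      # hold out one instance of tail t
                if c < outside[t]:
                    outside[t] = c
                    heapq.heappush(heap, (c, t))
    return outside

def utilities(vertices, inside, outside):
    """gamma[v]: cost of the cheapest hyperpath-tree using v."""
    return {v: inside[v] + outside[v] for v in vertices}
```

Beam pruning then keeps exactly the items whose utility is at most $\inside[y]+\delta$.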
\algref{algo_prune_relatively_useless} selects the minimal subset of the hyperarcs and vertices necessary to include the best hyperpath-tree $X\leadsto y$ with cost $\inside[y]$ and all hyperpath-trees with cost no worse than $\inside[y]+\delta$. \newcommand\holdout[3]{{\algname{COSTEXCEPT_{#1}({#2},{#3})}}} \begin{algorithm} \DontPrintSemicolon \caption{ \algname{ViterbiOutside} - single-destination, shortest outside hyperpath-trees } \KwIn{ A destination $y\in V$ and default (inside) costs $\inside[v]$ for reaching each $v\in V$ from $X$ (computed with \algname{ViterbiInside}), for a hypergraph with $n$ nodes $V$, and $m$ hyperarcs $\seqn{e}{m}$ indexed by $1\leq i\leq m$. \comment{ Each hyperarc has \hastails, \hashead, and superior cost function $c_{i}\equiv c_{e_{i}}$. The cost function is provided as an amortized constant time operation that builds up the cost of using the default cost ways to reach the tails of an edge, then taking the edge, but holding out one instance of a tail $v$, $\holdout{\inside}{i}{v\in V}$, for example, in $\holdout{\inside}{i}{v}\equiv (l_{e_i}+\sum_{(t,m)\in T_{e_i}}m\inside[t]) - \inside[v]$, everything but the last term (a constant time operation) is constant with respect to v and the constant takes just O($|\domain(T_e)|$) time to compute. } Each hyperarc has length (i.e. cost to use) $l_i\equiv l_{e_i}$, a multiset of tails $T_i\equiv T_{e_i}\in \multiset{V}$, and \hashead. The cost for a hyperpath-tree from $X\leadsto h_e$ using edge $e$ and the best hyperpath-trees from $X$ to each of its tails $t$ with cost $\inside[t]$ is $c_{e}=l_{e}+\sum_{(t,m)\in T_{e}}m\inside[t]$ (where $m$ is the number of occurrences of $t$ in the tails), but other cost functions are possible - what is important is the ability to build up the cost for using an edge assuming the default for its tails, and later subtract out the contribution from the default of a single instance of a tail.
} \KwOut{ For all $v\in V$, $\psi[v]$ is the index of the hyperarc used to reach $y$ from $v$ (or 0 if none was taken), with the minimum outside cost $\outside[v]$ given by assuming the default cost way was used to reach its siblings from $X$. Time complexity is $O(n\lg{n}+t)$, where $t$ is the total size of the input, if a Fibonacci heap is used, or $O(m\lg{n}+t)$ if a binary heap is used. } \Begin{ \For{$x\in V$}{ $\psi[x]\assign 0$\; $\outside[x]\assign \infty$\; $\Adji[x]\assign \{\}$\; } \For{$1\leq i \leq m$,\text{ index of a hyperarc } $(T_{i}=\{x_{1},\ldots,x_{k}\}) \rightarrow^{l_i} \{h_{i}\}$}{ $\Adji[h_{i}] \assign \Adji[h_{i}] \union \{i\}$\; } $\outside[y]\assign 0$\; $Q \assign \PQ{CREATE}()$\; $\PQ{INSERT}(Q,y,0)$\; \While{$Q \neq \emptyset$}{ $x\assign \PQ{EXTRACT-MIN}(Q)$\; \For{$i \in \Adji[x]$}{ \tcc{ edge $i$ with $x$ as a head} $c\assign \outside[x]+l_i+\sum_{(t,m)\in T_i}m\inside[t]$ \tcc{ c=total cost of $X\leadsto e_i \leadsto y$}\; \For{$t\in T_i$}{ $c' \assign c-\inside[t]$ \tcc{ $c'$ is the proposed improved outside cost for $t$ through $e_i$, removing $X\leadsto t$}\; \If{$c'<\outside[t]$}{ \lIf{$\outside[t] = \infty$}{$\PQ{INSERT}(Q,t,c')$\;} \lElse{$\PQ{DECREASE-KEY}(Q,t,c')$\;} $\psi[t] \assign i$\; $\outside[t] \assign c'$\; } } } } } \label{algo_dijkstra} \end{algorithm} \newcommand\goodenough{\kappa} \begin{algorithm} \DontPrintSemicolon \caption{ Prune relatively-useless vertices and hyperarcs } \KwIn{ $\inside[v]$ and $\outside[v]$, the Viterbi inside and outside costs of each vertex $v\in V$ over all hyperpath-trees $X\leadsto y$ (computed with \algname{ViterbiInside} and \algname{ViterbiOutside}) in a hypergraph $G=(V,E)$ with $m$ hyperarcs $E=\{e_1,\ldots,e_m\}$ indexed by $1\leq i\leq m$. Each hyperarc has \hastails and \hashead.
The cost for a hyperpath-tree from $X\leadsto h_e$ using edge $e$ and the best hyperpath-trees from $X$ to each of its tails $t$ with cost $\inside[t]$ is $c_{e}=l_{e}+\sum_{t\in T_{e}}m_{t}\inside[t]$, where $l_{e}$ is the weight on hyperarc $e$ and $m_{t}$ is a weight, e.g. the number of occurrences of $t$ in the rhs of a grammar production. $\delta$ is a beam (cost distance from the best hyperpath-tree). } \KwOut{ For all $x\in{V\union E}$, $\gamma[x]$ is the cost of the best hyperpath-tree $t\in{(X\leadsto_{G}y)}$ such that $x$ is used in $t$, or $\infty$ if none exists; $\goodenough[x]=\true$ iff that cost is not worse than $\delta$ from the best, $\inside[y]$. Time complexity is $O(t)$ where $t$ is the total size of the input (total complexity including \algname{ViterbiInside} is $O(n\lg{n}+t)$). } \Begin{ $l \assign \inside[y] + \delta$\; \For{$v\in{V}$}{ $\gamma[v] \assign \inside[v]+\outside[v]$\; } \For{$e\in{E}$}{ $\gamma[e] \assign \outside[h_e]+l_{e}+\sum_{t\in T_{e}}m_{t}\inside[t]$\; } \lFor{$x\in{V\union E}$}{ $\goodenough[x] \assign (\gamma[x] \leq l)$\; } } \label{algo_prune_relatively_useless} \end{algorithm} \bibliographystyle{fullname}
St Hugh's Church Foolow, Wesleyan Reform Chapel Great Hucklow, Presbyterian (Unitarian) Chapel Great Hucklow, Great Hucklow Methodist Church Wardlow, Wesleyan Methodist Mission Wardlow, Church of the Good Shepherd (formerly St Savour's) Eyam, Eyam Methodist Church ("Top Chapel") Eyam, St Lawrence's Church (formerly St Helen) (1.7m.) Litton, Christ Church Litton, Litton Methodist Church Abney, Hathersage, Wesley Chapel (Demolished) Eyam, Wesleyan Reform ("Bottom") Chapel Litton, Litton School (former Christ Church) Copyright of Andrew McCann/Alf Beard/Pete Howard St Hugh's Church, Foolow St Hugh's Church, The Green, Foolow, Derbyshire. We believe the Church does NOT have a graveyard. This Place of Worship was founded in 1888, and we understand it is still open. Kelly's Directory of 1932 describes Foolow as an ancient village and township, 2 miles west from Eyam, chiefly inhabited by farmers. St Hugh's was said to be "a small mission church, built in 1889". An account of St Hugh's Mission Church - Foolow elsewhere explains that the main body of the building was originally a smithy. This was purchased from Mr Bagshawe of Sheffield, and the trustees of his late brother, for a price such that the total cost of purchase and any necessary alterations was not expected to exceed £150. This may have been a reference to Benjamin Bagshawe of Sheffield, the son and one of the next of kin of Benjamin Bagshawe of Foolow, who died 15th April 1879, whose Will was proved on 10th November 1800. His personal estate was said to be under £1,500. The foundation stone of the Church was laid on August 15th 1888, and the opening took place on December 17th of that year. The Chancel was added the following year, and opened on Dec 17th 1889, and the porch at a later (unknown) date. The following information about the Church has been provided to accompany the photographs on the right. A list of people who have supplied the information is included in the Acknowledgements, below. 
For further reading see Take a Look at: Crosses Around The Peak and Take a Look at: Bull Rings. There is a Short History and Notes on the Fabric and Furnishings of St Hugh's Mission Church Foolow on a separate web page. [Photograph caption: St Hugh's Church (ahead) and the Wesleyan Church almost next door to one another.] A special thanks to the following people who have contributed information for this web page: 1. Information provided by Rosemary Lockie. Information last updated on 3 Jan 2015 at 14:33. This Report was created 11 Jan 2021 - 10:22:39 GMT from information held in the Derbyshire section of the Places of Worship Database. This was last updated on 6 Feb 2019 at 15:49.
Syugut () is a rural locality (a selo) in Tsakhurskoye Rural Settlement, Rutulsky District, Republic of Dagestan, Russia. The population was 167 as of 2010. There is 1 street.

Geography

Syugut is located on the Samur river, 38 km northwest of Rutul (the district's administrative centre) by road. Muslakh and Tsakhur are the nearest rural localities.

Nationalities

Tsakhurs live there.
\section{Appendix} Here we discuss how the energy landscape for a spin governed by Eq.~\ref{Ecl} depends on the anisotropy parameters $a$ and $\lambda$. (As in the main text, we measure these anisotropy parameters in units of $D$ and define $a\equiv kJ^2$.) Using standard techniques from multivariable calculus we determined the minima, maxima and saddle points of the landscape. For small values of $a$ and $\lambda$ (unshaded region in Fig.~\ref{pspace}), the landscape resembles that shown in Fig.~\ref{Effects of Lambda} with global minima at the poles ($\pm z$ directions) and saddle points along the equator ($x$-$y$ plane). As $a$ increases, the saddle points become deeper and for $a>1/4$ the saddle points transform into local minima on the equator (right shaded region in Fig.~\ref{pspace}). This limit is far from relevant for Mn$_{12}$, where $a\approx 3 \times 10^{-3}$. Increasing $\lambda$ causes pairs of saddle points on the equator to move towards each other (see Fig.~\ref{Effects of Lambda}). They merge together when $\lambda=8a$ (dashed line in Fig.~\ref{pspace}). Remarkably, even when there are only two saddle points in the potential landscape, there remain four distinct instanton paths. The solid angle between pairs of paths vanishes when $\lambda=\lambda_c$ (solid curve), which does not correspond to any notable change in the energy landscape. The vertical dashed line represents the values in this parameter space used for the results presented in Fig.~\ref{IvsLambda}, i.e. $a$ is such that $\lambda_c=1$. Finally, when $\lambda>1$ each energy minimum at a pole bifurcates into two minima, one tilted towards the $+y$ direction and the other towards the $-y$ direction. This substantially changes the nature of the problem. The analysis described in this paper applies only to the unshaded region in Fig.~\ref{pspace} and would need modification to handle the energy landscapes corresponding to the shaded regions.
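The critical-point analysis above ("standard techniques from multivariable calculus") can be made concrete numerically. Since Eq.~\ref{Ecl} itself is not reproduced in this appendix, the sketch below uses an illustrative biaxial stand-in $E(\theta,\varphi) = -\cos^2\theta + a\sin^2\theta\cos^2\varphi$ (an assumption, not the actual Mn$_{12}$ landscape); only the classification machinery, a vanishing gradient plus the sign pattern of the Hessian eigenvalues, carries over to the real problem.

```python
# Hedged sketch: classify critical points of a STAND-IN landscape
# E(theta, phi) = -cos^2(theta) + a*sin^2(theta)*cos^2(phi); this is an
# illustrative assumption, not the landscape of Eq. (Ecl) in the paper.
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
a = sp.Rational(1, 10)  # illustrative anisotropy value

E = -sp.cos(theta)**2 + a*sp.sin(theta)**2*sp.cos(phi)**2
grad = [sp.diff(E, s) for s in (theta, phi)]
H = sp.hessian(E, (theta, phi))

def classify(pt):
    """Return 'min', 'max' or 'saddle' from Hessian eigenvalues at pt."""
    subs = dict(zip((theta, phi), pt))
    # confirm pt really is a critical point before classifying it
    assert all(sp.simplify(g.subs(subs)) == 0 for g in grad)
    eigs = list(H.subs(subs).eigenvals())
    if all(e > 0 for e in eigs):
        return 'min'
    if all(e < 0 for e in eigs):
        return 'max'
    return 'saddle'
```

For this stand-in, the equatorial point $(\pi/2, \pi/2)$ classifies as a saddle and $(\pi/2, 0)$ as a local maximum, while the poles are minima along $\theta$ (the azimuthal direction is degenerate there, so the poles need the one-dimensional check $\partial_\theta^2 E > 0$ rather than the full Hessian test).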
\begin{figure}[htbp] \centering \includegraphics[width=0.40\textwidth]{parameter_space1.eps} \caption{Parameter-space plot indicating the essential features of the spin's energy landscape. The unshaded region represents the locus of parameters to which the analysis in this paper applies. In the shaded region on the right, the saddle points on the equator have become local minima. In the shaded region above $\lambda=1$, the minima at the poles have bifurcated. The dashed line indicates where equatorial saddle points merge pairwise. The solid curve shows the behavior of $\lambda_c$ and is included for reference.} \label{pspace} \end{figure} We thank W. Loinaz, K. Jagannathan, R. Benedetto, D. Velleman, R. Behrend and E. H. da Silva Neto for useful discussions. Support for this work was provided by the U.S. National Science Foundation under grant no.~DMR-0449516.
\section{Introduction} Scalar fields play an essential role in many branches of physics, from cosmology to condensed matter physics to particle physics -- there is an unremitting interest in models of self-interacting scalar fields. The rich variety of such models includes some that have been studied only recently, e.g., the so-called K-fields with a nonstandard kinetic part \cite{1}, or models with a non-smooth V-shaped self-interaction \cite{2}. The signum-Gordon model considered in the present paper is probably the simplest example from the latter class. The pertinent field potential has the form $U(\varphi)= g |\varphi|,$ where $g >0$ is a coupling constant and $|\varphi|$ is the modulus of the real scalar field $\varphi$. Such a potential is V-shaped with a sharp minimum at the vacuum field $\varphi=0$. Models of this kind were discovered while playing with the well-known classical systems of harmonically coupled pendulums in order to illustrate the phenomenon of spontaneous symmetry breaking and topological defects \cite{3}. Subsequent investigations have revealed that the V-shaped form of the potential has very interesting consequences for the dynamics of the scalar field. One of them is the existence of strictly periodic oscillons \cite{4}. The motivation, various results and further references for the V-shaped self-interaction can be found in \cite{2,3,4}. The present note is a follow-up to the paper \cite{4}. The oscillons described in that paper did not move in space (apart from the trivial uniform motion obtained by applying Lorentz boosts). Rather unexpectedly, we have found that there also exist oscillons that periodically move to and fro in space with an arbitrary constant velocity $\pm v$, where $0 < |v| \leq 1$. For the oscillons presented in \cite{4}, $v=0$. The new oscillons appear naturally when the particular solution reported in \cite{4} is put in the framework of polynomial solutions of the signum-Gordon equation.
Comparing with other oscillons discussed in the literature \cite{5}, several differences should be pointed out. First, our oscillons are strictly periodic in time; in particular, they do not emit any radiation. Second, they have strictly finite size because the field assumes the vacuum value exactly at a finite distance. Third, they have a relatively simple, explicitly given form composed of several linear and quadratic functions of the time $t$ and the spatial coordinate $x$. The swaying oscillon is reminiscent of the wobbling kink in the $\varphi^4$ model \cite{6}. However, one should note that the wobbling kink is an excitation of a static kink, while all the swaying oscillons are degenerate in energy; moreover, there is no static oscillon -- even for the non-swaying one presented in \cite{4} the field oscillates in time. The plan of our paper is as follows. Section 2 is devoted to a preliminary discussion of the signum-Gordon equation and of its solutions. The swaying oscillons are presented in Section 3. Section 4 contains the conclusion. \section{Preliminaries} The Lagrangian of the signum-Gordon model (s-G) has the form \begin{equation} L = \frac{1}{2} (\partial_t \varphi\partial_t \varphi - \partial_x \varphi\partial_x \varphi) - g\:|\varphi|, \end{equation} where $\varphi$ is a real scalar field, and $t, x$ are the time and position coordinates in the two-dimensional Minkowski space-time $M$. For convenience, $t, x, \varphi, g$ are dimensionless -- this can be achieved by redefinitions of the physical position, time, field and the coupling constant (multiplication by constants of appropriate dimensions). The signum-Gordon equation \begin{equation} \partial_t^2 \varphi - \partial_x^2 \varphi + \mbox{sign}(\varphi(x, t))=0 \end{equation} is the Euler-Lagrange equation corresponding to Lagrangian (1) (from now on we put $g=1$). The sign function $\mbox{sign}(\varphi)$ has the values $\pm 1$ for $\varphi \neq 0$ and $\mbox{sign}(0)=0$.
The simplest way to obtain Eq.\ (2) from Lagrangian (1) is first to regularize the field potential $U(\varphi)= |\varphi|$, e.g., $U(\varphi) = \sqrt{\epsilon^2 + \varphi^2}$ or $U(\varphi) = \epsilon\: \ln(\cosh(\varphi/\epsilon))$, and then to take the limit $\epsilon \rightarrow 0_+ $ in the Euler-Lagrange equation obtained from the regularized Lagrangian. Direct computation of the variation of the action $S = \int dt dx L$ is more subtle because of the $|\varphi|$ term, but it gives the signum-Gordon equation (2) too. The l.h.s. of Eq.\ (2) is not continuous with respect to $\varphi$. Because such equations are not very common in field theory, let us briefly comment on the related mathematical aspects. First, it is clear that in general one should expect non-smooth solutions: the value of at least one of the second derivatives $\partial_t^2\varphi, \partial_x^2\varphi$ has to jump when the function $\mbox{sign}(\varphi)$ changes its value. Second, the use of the stationary action principle implies that in general we consider so-called weak solutions of the Euler-Lagrange equation \cite{7}. For the weak ones it is sufficient that \[ \delta S = \int_M dt dx\; \left(\frac{\partial L}{\partial \phi}\: \delta\phi(x,t) + \frac{\partial L}{\partial (\partial_{\mu}\phi)} \; \partial_{\mu} \delta\phi(x,t)\right) =0 \] for all test functions $\delta\phi(x,t)$ from a certain class (typically one uses the class $D(M)$ of smooth functions on $M$ with compact support). This condition is equivalent to $\int_M dt dx\: {\cal E\!\!L}\: \delta \varphi =0$, where $ {\cal E\!\!L} = \partial L/\partial \phi - \partial_{\mu}(\partial L/ \partial(\partial_{\mu}\varphi))$, only if the derivative $\partial_{\mu}(\partial L/ \partial(\partial_{\mu}\varphi))$ exists for a given trial function $\varphi(x,t)$.
Then the Euler--Lagrange equation ${\cal E\!\!L}=0$, in our case the signum-Gordon Eq.\ (2), has to be satisfied at almost all points $(x,t)$ in the two-dimensional space-time $M$, but not necessarily at all points, as would be the case for strong solutions. Of course, the set of weak solutions contains the strong ones as a subset. In the case of the signum-Gordon equation the weak solutions that are not strong are ubiquitous. For instance, $\varphi_0 = x^2/2$ is a smooth static solution of Eq.\ (2) in the weak sense, but not in the strong sense. The point is that $\partial_t^2 \varphi_0 - \partial_x^2 \varphi_0 + \mbox{sign}(\varphi_0)=0$ everywhere in $M$ except on the line $x=0$. On this line $\partial_t^2 \varphi_0 - \partial_x^2 \varphi_0 + \mbox{sign}(\varphi_0)= -1 $ because $\partial_x^2 \varphi_0 =1$, $ \mbox{sign}(0) =0$. Nevertheless, \[ \int_{M} dt dx \;[\partial_t^2 \varphi_0 - \partial_x^2 \varphi_0 + \mbox{sign}(\varphi_0)] \; \delta\phi(x,t) =0 \] for an arbitrary test function $\delta\phi$. In general, it is the weak solutions that are physically relevant. To see this, consider the following simple example from the classical mechanics of a point particle on a plane with Cartesian coordinates $(x,y)$. The particle is free except when it crosses the $y$-axis, where it is subjected to a finite constant force $\vec{F}_0$ parallel to the $y$-axis. Thus, the force $\vec{F}=0$ at all points $(x, y)$ with $x\neq0$, and $\vec{F}= \vec{F}_0$ when $x =0$. It is clear that integrating Newton's equation $d\vec{p}/dt = \vec{F}$ we obtain $\vec{p} = const$ even if the trajectory crosses the $y$-axis. The physical reason is that the finite force $\vec{F}_0$ acts on the particle only during the infinitesimally short time when the particle is exactly on the $y$-axis, hence it is not able to perturb the free motion. Such trajectories are weak solutions of Newton's equation (now the test functions are denoted as $\delta\vec{r}(t)$ and we integrate over $t$).
On the other hand, the trajectories which do not intersect the $y$-axis are solutions in the strong sense. Notice that such a Newton's equation is not equivalent to the free equation, in which $\vec{F}=0$ everywhere, because our particle is accelerated if it moves along the $y$-axis. Coming back to the signum-Gordon model: when the field $\varphi$ is constant in space, Eq. (2) acquires the form of the one-dimensional Newton's equation $ \ddot{\varphi}(t) = - \mbox{sign}(\varphi)$ that describes nonlinear oscillations around $\varphi=0$. Notice that there is no linear regime even for arbitrarily small values of $\varphi$. A Newton's equation of this kind appears in the elementary problem of a ball bouncing vertically off a floor in a constant gravitational field (the elevation above the floor is given by $|\varphi|$) \cite{2}. Many examples of oscillatory systems from classical mechanics that do not have a linear small-amplitude regime can be found in \cite{8}. Because the function $\mbox{sign}(\varphi)$ is piece-wise constant, it is natural first to solve Eq.\ (2) in the regions in which $\varphi$ has a constant sign. For instance, if $\varphi <0$, Eq.\ (2) acquires the form \begin{equation} \partial_t^2 \varphi - \partial_x^2 \varphi - 1 =0. \end{equation} The oscillon solutions are constructed from second order polynomials in $x, t$. The most general second order polynomial that obeys Eq. (3) has the form \begin{equation} \varphi_2(x, t) = a_0 x^2 + a_1 t x + (a_0+\frac{1}{2}) t^2 + b_0 x + b_1 t + c_0, \end{equation} where $a_0, a_1, b_0, b_1, c_0$ are constants (beware that they are not completely arbitrary because of the condition $\varphi_2 <0$). It is a rather exceptional feature of the signum-Gordon equation that non-trivial and interesting solutions can be constructed from such simple elementary functions.
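That the ansatz (4) solves Eq.\ (3) identically in the coefficients can be confirmed directly; a minimal sympy check (the static profile (5), quoted below, is included as well):

```python
# Consistency check: the general quadratic (4) satisfies Eq. (3) for any
# choice of the constants a0, a1, b0, b1, c0, as does the static profile (5).
import sympy as sp

x, t = sp.symbols('x t', real=True)
a0, a1, b0, b1, c0 = sp.symbols('a0 a1 b0 b1 c0', real=True)

phi2 = a0*x**2 + a1*t*x + (a0 + sp.Rational(1, 2))*t**2 + b0*x + b1*t + c0
wave = lambda f: sp.diff(f, t, 2) - sp.diff(f, x, 2) - 1   # lhs of Eq. (3)

assert sp.simplify(wave(phi2)) == 0

# static solution (5): phi_s = -(x - b0)^2/2 + c0 + b0^2/2
phis = -sp.Rational(1, 2)*(x - b0)**2 + c0 + sp.Rational(1, 2)*b0**2
assert sp.simplify(wave(phis)) == 0
```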
Note that the class of functions of the form (4) is invariant with respect to Lorentz boosts, space-time translations, and the reflections $x \rightarrow -x, \; t \rightarrow -t$. It contains the static solutions of the form \begin{equation} \varphi_s = - \frac{1}{2} (x-b_0)^2 + c_0 + \frac{1}{2} b_0^2, \end{equation} where $c_0 + b_0^2/2 <0$ in order to keep $\varphi_s <0$. The oscillons are constructed by patching together several such polynomial solutions. The patching conditions have the standard form: the field $\varphi$ is continuous all over $M$, and the derivatives $\partial_t\varphi, \partial_x\varphi$ are continuous functions of $x, t$ except perhaps at the border line between two patches. If the border line is a (segment of a) characteristic line ($x = \pm t + \mbox{const}$) for the signum-Gordon equation, the derivative in the direction perpendicular to that line does not have to be continuous -- a finite jump is allowed. \section{The swaying oscillons} A hint that new oscillons may exist comes from the following procedure for constructing periodic solutions of the signum-Gordon equation. Let $\varphi_-(x,t) $ be a solution of Eq.\ (3) negative for all $t$ from an open interval $(0, T), \; T>0,$ and such that \begin{equation} \varphi_-(x, 0) = \: 0 \: = \varphi_-(x, T). \end{equation} It is clear that the function $\varphi_+$ defined by \begin{equation} \varphi_+(x, t) = - \varphi_-(x, -t) \end{equation} is a positive solution of the equation $\partial_t^2 \varphi - \partial_x^2 \varphi +1 =0 $ for all $t \in (-T,0)$. The functions $\varphi_-, \varphi_+$ as well as their time derivatives match each other at the time $t=0$: \[ \varphi_+(x, 0) = \: 0 \: = \varphi_-(x, 0), \] \[ \lim_{t \rightarrow 0_-} \partial_t\varphi_+(x, t ) = - \lim_{t \rightarrow 0_-} \partial_t\varphi_-(x, -t ) = \lim_{s \rightarrow 0_+} \partial_s\varphi_-(x, s ), \] where $s$ stands for $-t$, and $t\in (-T,0)$.
The crucial observation is that also $\varphi_+(x, -T), \: \varphi_-(x, T)$ match each other: \[ \varphi_+(x, -T) = \: 0 \: = \varphi_-(x, T), \] \[ \lim_{t \rightarrow -T_+} \partial_t\varphi_+(x, t ) = \lim_{s \rightarrow T_-} \partial_s\varphi_-(x, s ). \] Therefore, we may extend our partial solutions $\varphi_{\pm}$ to all times $t \geq T$ and $ t \leq -T$ just by applying time translations (by multiples of $\pm T$) to $\varphi_{\pm}$. In this way we obtain periodic solutions of the signum-Gordon equation (2) with the period equal to $2T$, provided that there exists $\varphi_-(x,t)$ with the properties specified above. It turns out that a class of solutions $\varphi_-(x,t)$ with the desired properties can be constructed by patching together several solutions of the form (4). The trivial solution $\varphi=0$ is also involved. The schematic picture of such a `patchwork' for the swaying oscillon is presented in Fig.\ 1. Note that so far we have not made any assumption about the behavior of $\varphi_-$ at large $|x|$. In \cite{4} certain restrictive boundary conditions were imposed right at the start of the calculations, and they forced the oscillons to stay still. The method adopted in the present note is radically different from the one used in \cite{4} -- in that paper the main tool was the d'Alembert formula for solutions with given initial data. In order to ensure finiteness of the total energy we assume that $\varphi_-(x,t) =0$ outside a certain compact region. This is related to the general observation that in the case of models with the V-shaped potential there are no exponential or long range tails. The field reaches its vacuum value rather abruptly; the tails have a parabolic shape and a strictly finite length \cite{2}. Thus, our first task is to find the polynomials of the form (4) which match the trivial solution $\varphi =0$. The matching conditions imposed on a line $x(t)$ in $M$ can be written in the form \[ \varphi_2(x(t), t) =0, \;\;\; \left.
\partial_x \varphi_2(x, t)\right|_{x = x(t)} =0. \] They give the following two equations \[ a_0 x^2(t) + a_1\: t\: x(t) + (a_0+\frac{1}{2}) t^2 + b_0 x(t) + b_1 t + c_0 =0, \;\;\; 2 a_0 x(t) + a_1 t + b_0 =0. \] Simple calculations show that the solution $x(t)$ exists only if $a_0 \neq0$, and then \begin{equation} x(t) = v t + x_0, \;\; \varphi_2(x,t) = -\frac{(x - x(t))^2}{2 (1-v^2)}, \end{equation} where $ v= - a_1/(2 a_0), \; x_0 = - b_0/(2 a_0)$, and $v^2 <1$ in order to satisfy the condition $\varphi_2 < 0$. Here we have assumed that $x(t)$ does not coincide with a characteristic line. Thus we have found that the boundary of our oscillon has to move with the constant velocity $v$, and close to the boundary the field has a parabolic shape (as expected). Note that $\varphi$ given by formula (8) coincides with the Lorentz-boosted and spatially translated static solution \[ \varphi_s = \left\{ \begin{array}{cc} 0 & x \leq 0, \\ - \frac{x^2}{2} & x>0. \end{array}\right. \] (but the swaying oscillon is not the Lorentz-boosted oscillon of \cite{4}). The structure of the solution $\varphi_-$ is shown in Fig.\ 1, in which the support of $\varphi_-$ for the swaying oscillon of unit length and vanishing total momentum is depicted. The period of this oscillon is equal to its spatial size, i.e., to 1. As discussed in \cite{4}, we may use the symmetries of the signum-Gordon equation, such as Poincar\'e or scaling transformations, in order to obtain more general oscillons. The interior of the parallelogram is divided into seven sectors $\emph{a}\div \emph{g}\:$ by the four characteristic lines drawn from its corners. Each sector has a different causal neighborhood.
For instance, the field in the triangular sector \emph{c} is completely determined by Cauchy data on the segment $[1/2 + v/2, 1]$ of the $x$ axis; the sector \emph{e} is controlled by Cauchy data in the future, i.e., on the segment $[1/2, 1+v/2]$ of the $t=1/2$ line, which lies in the future of the sector \emph{e}; the sectors \emph{a} and \emph{d} are controlled by the boundaries of the oscillon, etc. The parallelogram shown in Fig.\ 1 has a height equal to one half of its length. In this case the characteristic lines drawn from the lower (upper) corners meet at a point lying on the upper (lower) edge. In the case of the non-swaying oscillon presented in \cite{4} we have $v=0$ and a rectangle in Fig.\ 1. The parallelogram is the simplest deformation of that rectangle consistent with the conditions $ \varphi_-(x,0) = 0 = \varphi_-(x, \frac{1}{2})$, and with the fact that both sides of the oscillon have to move with a constant velocity, as has been shown above. Such a generalization -- the parallelogram instead of the rectangle -- is very suggestive in the `patchwork approach' adopted in the present paper, but it is not at all obvious in the approach based on the d'Alembert formula used in \cite{4}. Let us also note that a Lorentz boost of the oscillon considered in \cite{4} gives a uniform rectilinear motion, and not the swaying one. Moreover, it deforms the rectangle into a hyperbolically rotated parallelogram with the upper and lower sides not parallel to the $x$-axis. The building blocks of $\varphi_-$ are denoted as $\varphi_a, \ldots , \varphi_g$ after the sectors of the parallelogram. The fields $\varphi_a, \varphi_d$ in the sectors $a$ and $ d$ have the form (8) with $x(t) = v t$ or $x(t)= v t +1$, respectively, i.e., \begin{equation} \varphi_a = -\frac{(x-vt)^2}{2(1-v^2)}, \;\;\; \varphi_d = - \frac{(x-vt-1)^2}{2(1-v^2)}. \end{equation} In the regions $x \leq vt$ and $ x \geq v t +1$, i.e.
on both sides of the parallelogram, the field has the vacuum value $\varphi=0$. \begin{center} \begin{figure}[tph!] \hspace*{1.7cm} \includegraphics[height=5.5cm, width=8cm]{fig1.eps} \caption{\small The support of the solution $\varphi_-(x,t)$. The field $\varphi_-(x,t)$ vanishes on the continuous lines that form the boundary of the parallelogram. In each sector $\emph{a}\div \emph{g}$ the function $\varphi_-$ is given by a different formula. The matching conditions that relate the functions in neighboring sectors are imposed along the dotted lines. These four lines are characteristic lines of the signum-Gordon equation. They have the slopes $\pm 1$. } \end{figure} \end{center} The fields $\varphi_b, \varphi_c, \varphi_e, \varphi_f$ are determined by imposing on the solution (4) the condition $\varphi_2 =0$ on the lines $t=0$ or $t=1/2$, and the conditions of matching with $\varphi_a$ or $\varphi_d$ on the characteristic lines. As an example, let us determine $\varphi_b$. The condition $\varphi_2(x,0) =0$ gives $a_0=b_0=c_0=0$. Next, $\varphi_2 = t^2/2 + a_1 t x +b_1 t$ is compared to $\varphi_a$ on the part of the characteristic line $x=t$ with $t \in (0, 1/4 + v/4)$: \[ \frac{t^2}{2} + a_1 t^2 +b_1 t = - \frac{1-v}{2(1+v)} t^2. \] Therefore, $b_1=0$, $a_1 = -1/(1+v)$, and $\varphi_b = t^2/2 - tx/(1+v)$. Similar calculations give $\varphi_c, \varphi_e, \varphi_f$. Finally, we compute $\varphi_g$ by comparing $\varphi_2$ to $\varphi_b, \varphi_c, \varphi_e, \varphi_f$ along the four characteristic lines that form the boundary of the sector $g$.
The results have the following form: \begin{equation} \varphi_b = \frac{t^2}{2} - \frac{t x }{1+v}, \;\;\; \varphi_c = \frac{t^2}{2} + \frac{t (x-1)}{1-v}, \end{equation} \begin{equation} \varphi_e = \frac{1}{2} (t-\frac{1}{2}) \: \left( \frac{1 }{2 } + t + \frac{1-2x }{1+v}\right), \end{equation} \begin{equation} \varphi_f = \frac{1}{2} (t-\frac{1}{2}) \: \left( \frac{1 }{2 } + t + \frac{2x-1}{1-v}\right), \end{equation} \begin{equation} \varphi_g = \frac{(v x +t)^2 }{2(1-v^2)} + \frac{x^2 + t^2}{2 } + \frac{1+v}{8(1-v)} - \frac{x+t }{2(1-v)}. \end{equation} All these functions are negative inside their domains. The shape of the swaying oscillon is depicted in Fig.\ 2. \begin{center} \begin{figure}[tph!] \hspace*{1.8cm} \includegraphics[height=5.5cm, width=7cm]{fig2.eps} \caption{\small The shape of the swaying oscillon at the times $t=1/8$ (the dashed line) and $t=3/8$ (the continuous line). The velocity of the swaying motion is $v = 1/2$. } \end{figure} \end{center} The evolution of our oscillon is described by the function $\varphi_-(x, t)$ in the time interval $[0, 1/2]$, and by $\varphi_+(x,t)$, formula (7), for $t\in[-1/2,0]$. In particular, the field $\varphi_+$ at the boundaries of the oscillon has the form \[ \varphi_{+,a}(x,t) = \frac{(x+v t)^2}{2(1-v^2)}, \;\;\; \varphi_{+,d}(x,t) = \frac{(x+v t-1)^2}{2(1-v^2)}. \] We see that now the boundaries of the oscillon move with the velocity $-v$. The world-sheet of the oscillon is depicted in Fig.\ 3. Note that at the times $t=k/2$, with $k$ an integer, when the sharp turns take place, the field $\varphi$ vanishes everywhere. \begin{center} \begin{figure}[tph!] \hspace*{1.8cm} \includegraphics[height=5.5cm, width=7cm]{fig3.eps} \caption{\small The world-sheet of the swaying oscillon.
In the interior of the parallelograms $\varphi_+ >0$ and $\varphi_- <0$, whereas on their boundaries (the thick continuous lines) $\varphi_{\pm}=0$.} \end{figure} \end{center} In the case where $x(t)$ is a characteristic line we have $x(t) = v t + x_0$, where $|v| =1$. There is just one matching condition, $\varphi_2(x(t),t) =0$. Solving it we obtain relations between the constant coefficients present in $\varphi_2$, and finally \[ \varphi_2(x,t) = (x-x(t))\; \left(a_0 \:(x - x(t)) - \frac{1}{2} x(t) + (\frac{1}{2} + 2 a_0) x_0 + b_0\right) \] (in the region where $\varphi_2(x,t) <0$). The next steps are similar to those described above, but the situation is much simpler. In particular, when $v=1$, the left and right hand sides of the parallelogram in Fig.\ 1 coincide with characteristic lines. Therefore, the sectors $a, f, c, d, g$ are absent. The remaining sectors $ b, e$ meet at the line $x=1-t$. The corresponding functions $\varphi_b, \varphi_e$ are given by formulas (10), (11) with $v=1$, and they correctly match each other on that line. The total energy $E$ and momentum $P$ of the oscillon can easily be calculated from the formulas \[ E= \frac{1}{2} \int^{\infty}_{-\infty}dx\; [(\partial_t\varphi)^2 + (\partial_x\varphi)^2]+ \int^{\infty}_{-\infty}dx\; |\varphi|, \;\;\; P = - \int^{\infty}_{-\infty}dx\;\partial_t\varphi \partial_x\varphi, \] considered at the time $t=0$ when $\varphi=0= \partial_x\varphi$. We see that $P=0$, in spite of the swaying motion of the oscillon. This can be understood if we regard the swaying oscillon as a nonlinear bound state of the basic oscillon, that is, the one with $v=0$, with a wave packet traveling along the basic oscillon. If the swaying oscillon has $P=0$, the nonzero momentum of the wave packet is compensated by the momentum related to the motion of the basic oscillon. The wave packet bounces from the boundaries of the basic oscillon and does not leave its interior.
Then the basic oscillon has to move accordingly in order to keep $P=0$. In order to compute the total energy we need $\partial_t\varphi|_{t=0}$. Formulas (9), (10) give $\partial_t\varphi_b|_{t=0}= -x/(1+v)$ for $ x\in [0,(1+v)/2]$, and $\partial_t\varphi_c|_{t=0}= (x-1)/(1-v)$ for $ x\in [(1+v)/2, 1]$. In the case $v=\pm1$ the part with $\varphi_c$ is absent. Simple integration gives $E=1/24$. Thus the total energy does not depend on $v$ -- all our swaying oscillons have the same energy. We have not found any explanation for such a degeneracy. One may suspect that there exists a hidden symmetry. Note that it would be sufficient if it worked only in the subspace of the polynomial solutions $\varphi_2$, not necessarily on the level of the Lagrangian or the action. The bound state interpretation offers the following picture. The basic oscillon set in motion would have an energy larger than its rest energy, which is equal to 1/24. Apparently, the binding energy compensates the kinetic energy of the basic oscillon as well as the energy of the bouncing wave packet, so that the total energy remains equal to 1/24. \section{Conclusion} We have shown that oscillons in the (1+1)-dimensional signum-Gordon model can periodically move to and fro in space (along the $x$-line) with a constant speed $v$ from the interval $[0,1]$. The amplitude of such swaying motion is equal to $v l/2$, where $l$ is the length of the oscillon. The pertinent analytic solutions of the field equation have been constructed from second order polynomials in $t$ and $x$. The present paper is a follow-up to \cite{4}, and the remarks and comments given there apply also to the swaying oscillons. Our new findings contribute to the already substantial evidence that models of the signum-Gordon type have rather amazing properties.
In particular, it is quite surprising that one can find simple, explicit solutions that describe very nontrivial objects like oscillons or $Q$-balls \cite{9}, and this happens in spite of the unpleasant $\mbox{sign}(\varphi)$ form of the nonlinear term in the field equation.
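As a closing consistency check, the explicit patches (8)--(13) and the quoted energy $E=1/24$ can be verified symbolically; the sympy script below confirms that each patch solves Eq.\ (3), that the field matches the vacuum smoothly on the moving edges $x=vt$ and $x=vt+1$, and that the total energy at $t=0$ equals $1/24$ independently of $v$:

```python
# Symbolic verification of the patches (8)-(13) and of E = 1/24.
import sympy as sp

x, t, v = sp.symbols('x t v', real=True)
half = sp.Rational(1, 2)

phi_a = -(x - v*t)**2/(2*(1 - v**2))
phi_d = -(x - v*t - 1)**2/(2*(1 - v**2))
phi_b = t**2/2 - t*x/(1 + v)
phi_c = t**2/2 + t*(x - 1)/(1 - v)
phi_e = half*(t - half)*(half + t + (1 - 2*x)/(1 + v))
phi_f = half*(t - half)*(half + t + (2*x - 1)/(1 - v))
phi_g = ((v*x + t)**2/(2*(1 - v**2)) + (x**2 + t**2)/2
         + (1 + v)/(8*(1 - v)) - (x + t)/(2*(1 - v)))

# every patch solves Eq. (3): d_t^2 phi - d_x^2 phi - 1 = 0
for f in (phi_a, phi_b, phi_c, phi_d, phi_e, phi_f, phi_g):
    assert sp.simplify(sp.diff(f, t, 2) - sp.diff(f, x, 2) - 1) == 0

# the field and its x-derivative vanish on the moving edges
assert phi_a.subs(x, v*t) == 0 and sp.diff(phi_a, x).subs(x, v*t) == 0
assert phi_d.subs(x, v*t + 1) == 0 and sp.diff(phi_d, x).subs(x, v*t + 1) == 0

# at t = 0 the field vanishes, so E is purely kinetic; it comes out 1/24
dphib = sp.diff(phi_b, t).subs(t, 0)     # -x/(1+v)   on [0, (1+v)/2]
dphic = sp.diff(phi_c, t).subs(t, 0)     # (x-1)/(1-v) on [(1+v)/2, 1]
E = (sp.integrate(dphib**2, (x, 0, (1 + v)/2))
     + sp.integrate(dphic**2, (x, (1 + v)/2, 1)))/2
assert sp.simplify(E - sp.Rational(1, 24)) == 0
```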
# 2.5 Addition of velocities (Page 5/12)

## Making connections: relativity and Einstein

Because Einstein was able to clearly define how measurements are made (some involve light) and because the speed of light is the same for all observers, the outcomes are spectacularly unexpected. Time varies with observer, energy is stored as increased mass, and more surprises await.

## PhET explorations: motion in 2D

Try the new "Ladybug Motion 2D" simulation for the latest updated version. Learn about position, velocity, and acceleration vectors. Move the ball with the mouse or let the simulation move the ball in four types of motion (2 types of linear, simple harmonic, circle).

## Summary

- Velocities in two dimensions are added using the same analytical vector techniques, which are rewritten as
  $v_x = v\cos\theta$
  $v_y = v\sin\theta$
  $v = \sqrt{v_x^2 + v_y^2}$
  $\theta = \tan^{-1}(v_y/v_x).$
- Relative velocity is the velocity of an object as observed from a particular reference frame, and it varies dramatically with reference frame.
- Relativity is the study of how different observers measure the same phenomenon, particularly when the observers move relative to one another. Classical relativity is limited to situations where speed is less than about 1% of the speed of light (3000 km/s).

## Conceptual questions

What frame or frames of reference do you instinctively use when driving a car? When flying in a commercial jet airplane?

A basketball player dribbling down the court usually keeps his eyes fixed on the players around him. He is moving fast. Why doesn't he need to keep his eyes on the ball?

If someone is riding in the back of a pickup truck and throws a softball straight backward, is it possible for the ball to fall straight down as viewed by a person standing at the side of the road? Under what condition would this occur? How would the motion of the ball appear to the person who threw it?

The hat of a jogger running at constant velocity falls off the back of his head. Draw a sketch showing the path of the hat in the jogger's frame of reference. Draw its path as viewed by a stationary observer.

A clod of dirt falls from the bed of a moving truck. It strikes the ground directly below the end of the truck. What is the direction of its velocity relative to the truck just before it hits? Is this the same as the direction of its velocity relative to the ground just before it hits? Explain your answers.

## Problems & exercises

Bryan Allen pedaled a human-powered aircraft across the English Channel from the cliffs of Dover to Cap Gris-Nez on June 12, 1979. (a) He flew for 169 min at an average velocity of 3.53 m/s in a direction 45º south of east. What was his total displacement? (b) Allen encountered a headwind averaging 2.00 m/s almost precisely in the opposite direction of his motion relative to the Earth. What was his average velocity relative to the air? (c) What was his total displacement relative to the air mass?

(a) 35.8 km, 45º south of east

(b) 5.53 m/s, 45º south of east

(c) 56.1 km, 45º south of east

A seagull flies at a velocity of 9.00 m/s straight into the wind. (a) If it takes the bird 20.0 min to travel 6.00 km relative to the Earth, what is the velocity of the wind? (b) If the bird turns around and flies with the wind, how long will he take to return 6.00 km? (c) Discuss how the wind affects the total round-trip time compared to what it would be with no wind.
Write an expression for the number of fives.\nWhat is the expressiin for seven less than four times the number of nickels\nHow do i figure this problem out.\nhow do you translate this in Algebraic Expressions\nwhy surface tension is zero at critical temperature\nShanjida\nI think if critical temperature denote high temperature then a liquid stats boils that time the water stats to evaporate so some moles of h2o to up and due to high temp the bonding break they have low density so it can be a reason\ns.\nNeed to simplify the expresin. 3\/7 (x+y)-1\/7 (x-1)=\n. After 3 months on a diet, Lisa had lost 12% of her original weight. She lost 21 pounds. What was Lisa's original weight?\nGot questions? Join the online conversation and get instant answers!","date":"2021-05-18 10:57:05","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 11, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5206864476203918, \"perplexity\": 1001.3503603937154}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243989819.92\/warc\/CC-MAIN-20210518094809-20210518124809-00306.warc.gz\"}"}
null
null
Of Ted Bundy In One Flew Over The Cuckoo's Nest? Ted Bundy, an infamous serial killer in the 1970s, also volunteered at a suicide helpline. Using his manipulative personality, he convinced people to live. Although Bundy is better known for the dozens of women he murdered, he also made a positive impact on several people. As author Shinde Sweety said in her novel Arjun:Without a Doubt, "No person is completely wicked, just as no person is perfect. We are all grey." Similarly, In the book One Flew Over the Cuckoo's Nest by Ken Kesey, Randle Patrick McMurphy's traits and actions blur the line between good and evil. McMurphy is committed to a mental institution in the late 1950s. There he challenges the control and dominance of the unmerciful Nurse Ratched. McMurphy's traits show he is a flawed…show more content… Once McMurphy realizes the patients could use his guidance, he begins to positively impact them. Throughout the book, McMurphy gives advice to the patients to try and break them of their fears. What immediately strikes McMurphy is that the ward is devoid of laughter. Continuously McMurphy tries to get a laugh out of the patients. Bromden says, "He knew you can't really be strong until you can see a funny side to things"(Kesey,227). He knows that laughter is the best medicine and could be more therapeutic for them than a lot of the techniques the ward uses. McMurphy also gives them advice on making an effort. When the patients on the ward are too afraid to go against Nurse Ratched and have a fear of failure, McMurphy shows them it is better to try than not at all. He does this by making a bet he can do the seemingly impossible. Although he fails he says , "But I tried, though. Goddammit, I sure as hell did that much, now, didn't I"(Kesey,125)? The advice on effort is largely what drives the patients to make an attempt to change ward policy the next chance they get. Besides giving the patients advice, he improves their lives through changing the ward policies. 
The first improvement he makes is creating a second day room. He sets a good example for the other patients that perserverence and effort can have a great reward, for he does not give up on the idea of the day room when first told no. Another significant action he takes is planning a fishing trip. The trip invigorates the men and gives them greatly needed experience in the world outside the asylum. Bromden says he felt, "better than he remembered feeling since he was a kid, when everything was good and the land was still singing kids' poetry to him" (Kesey, 243). Allowing the Acutes to see life outside the asylum could promote them to leave the Experiments In Stanley Milgram's The Perils Of Obedience However, when informed he would not be held responsible he continued and even administered the most severe shock. This says that if humans feel some diffusion of responsibility, we are more willing to cause harm to another individual. There was also subjects such as Morris Braverman, a 39-year-old social worker, who offered to change places with the victim, but also laughed while taking part in the experiment. Braverman also went as far as administering the most severe shock. Furthermore, Stanley Milgram's experiments resulted in great and relevant results, which helped scientist and psychologist understand the power that authority and obedience has on Character Analysis Of Ray Bradbury's 'Fahrenheit 451' Ahmad—Showing that firemen will start burning things instead of ending fire was a very nice idea I don't know how you came up with this idea. 
Bradbury—I was thinking about the things that happen in real life but we don't see it.We always see doctors as good people because they risk our lives but not all of the doctors are good just how we think.I want you to think decently about this if you meet somebody doesn't think he is good just because he is a doctor or he is bad because he has another job that you don't like.I wrote about this in Fahrenheit 451 when Clarisse told Montage that he is not like all other firemen. Ahmad—Yes, I read this and I think it was really good that you showed this thing because today in our lives we think bad things about people that we don't know and in the reality we don't know them very good to decide if they are good or not. Bradbury —Have you read the veldt? Ahmad—Yes, I did. Hannibal Lecter Character Analysis Due to his high IQ, Hannibal can easily analyze others ' psychological intentions, but he had sinned many years. His evil spirit was under the role of doctor. He is very calm, resourceful, rational, intellectual. At the same time, he also hate the world. He ate the people in order to take revenge on the world. The Green Mile Moral Analysis Paul wants to help Melinda as she has been diagnosed with a brain tumor. As he discovers John Coffey's gift, he makes the moral decision to use John and his superior gift to help get rid of Melinda's brain tumor. This act of kindness towards Melinda, shows a case of morality within Paul. It shows morality as Paul wants to help those around him at their worst times. The choices Paul makes are based on those around him, some decisions he makes put those he cares for before him. Social Problems In Ray Bradbury's 'Fahrenheit 451' Thus, when society tries to achieve the total equality, there always could be found people who would struggle to prevent it and go against the flow. In the "Fahrenheit 451" the system of keeping the people away from any ideas that are not controlled by government starts to fail by the end of the book. 
In spite of the fact that still, the most people are kept under control, there is a group of people who found the alternative way to spread books and ideas that they try to deliver - remembering them by heart. Whereas in the "We" the same system wins due to the invention fo the medical surgery that is supposed to deprive people of the ability to imagine and dream. This is a great demonstration of how strong is the will of the human to self-expression, as the only way to stop it at all is prevent everyone from physical ability to do Relationships In Ernest Hemingway's Hills Like White Elephants The man is manipulating her through his words to get his girl to go through with the operation. First, he brings up the operation and goes on to say that "It's really an awfully simple operation"(42) hinting at the fact that it is easily done and not a big deal at all. Secondly, the man uses the idea of happiness to win her over in this decision, "That's the only thing that bothers us. It's the only thing that's made us unhappy"(50) he is manipulating her into thinking that this operation will revive their happiness they once shared in this relationship. Thirdly, he tries to normalize the operation to make her feel like it's a common thing, no big deal, he tells her she doesn't "have to be afraid. Maslow's Poem: An Introduction To Low Self-Esteem In the main way, we need to improve the self-actualization to avoid self-esteem. Based on the Maslow's hierarchy, sale-actualization is the fulfillment of greatest need for an individual, which are very meaningful in our life. Moreover, that's easy to make people feel sorry for you when their heard your pity story, but they will give the sight of sympathetic and they will shy for you because they considered that you are different with them. So, we cannot give them have the feeling, we need to love more to ourselves. A professor, Dr. Helen Johnson has suggested many ways to love ourselves. 
One Flew Over The Cuckoo's Nest Analysis In his novel One Flew Over the Cuckoo's Nest, Ken Kesey masterfully combines metaphors and imagery into a piece of art. The story is narrated from the viewpoint of Bromden, a chronic, who is the longest living member of the ward. This perspective introduces an unconventional view of what turns the gears of typical conformist society. During his confinement, Bromden is introduced to McMurphy, a rambunctious hothead who symbolically challenges the beliefs of the patients. The resulting novel uses the fog, the machine, the Combine, and religious imagery as a culminating analysis of societal problems and the people who cause them. All with the purpose of a therapeutic assistance. Professor Kaptchuk, a leading figure in the placebo studies, says: "The placebo effect is a way for your brain to tell the body what it needs to feel better". And I believe this could be the key for learning to overcome any situation. Most of the times, our lack of hope is what's actually bringing us down and stopping us from conquering our fears. It is impossible not to doubt the strength of our faith after learning more about the placebo effect since it has been proven that believing in something will make you achieve it, no matter What Important Skills Should I Have As A Hospital Essay You can also read Forbes revelations about wonderful secrets of great communicators. Ability to motivate oneself and to boost emotional endurance Serving clients should be your priority. But you should never forget to look after yourself too. Prepare to face an emotional roller coaster once you set foot on your preferred hospital. There will be challenges and if you are not tough enough, you will have the desire to quit easily. More about Of Ted Bundy In One Flew Over The Cuckoo's Nest?
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
35
\section{Introduction} Domains across science and industry are increasing the breadth and depth of their data every year, making it critical to find compact and interpretable representations of data\cite{Guyon2003}. In this paper we focus on the problem of diverse online feature selection, where diversity is defined in terms of the features themselves, and online means that features may arrive in mini-batch format or in a stream-wise fashion. In this setting, features flow into the model dynamically, either in groups or one by one, and a feature selection process is performed as the features arrive. This formulation differs from the typical online learning problem, where the feature space is assumed to remain constant while new instances are shown to the model and the weights are subsequently updated\cite{agarwal14a}. Existing techniques generally do not consider diversity and instead rely on other measures for feature selection, whether a regularizer, statistical tests or correlation measures. To this end, we propose an online feature selection approach called \emph{Diverse Online Feature Selection} (DOFS). Our framework is composed of three stages: feature sampling, a local criterion and a global criterion for feature selection. In the feature sampling stage, we sample the incoming stream of features using a conditional DPP. The local criterion assesses and selects features as they arrive: we use unsupervised, scale-invariant methods to remove redundant features and, optionally, supervised methods that introduce label information to assess feature relevance. Lastly, the global criterion uses regularization methods to select a globally optimal subset of features. This three-stage procedure continues until no more features arrive or some predefined stopping condition is met. This work makes the following contributions.
\begin{itemize} \item We propose using a conditional DPP as a means of selecting diverse features from a stream of features. In order to do so, we provide a new truncated DPP sampling algorithm. \item To evaluate a stream of features, we introduce an unsupervised, scale-invariant criterion to remove redundant features and a supervised approach to address the shortcomings of using only DPP sampling on the feature stream. \item Our proposed \emph{Diverse Online Feature Selection} (DOFS) achieves strong classification results in both the supervised and the unsupervised setting. \end{itemize} The paper is organized into the following sections. In Section 2 we lay the preliminary foundations and review related approaches to the online feature selection problem. In Section 3 we introduce our framework for Diverse Online Feature Selection (DOFS). In Section 4 we provide experimental results to demonstrate the effectiveness of DOFS. We conclude this work in Section 5. \section{Preliminaries and Related Work} In this section we first review offline feature selection and its state-of-the-art online counterparts. Representative methods reviewed are Grafting, Alpha-investing, Online Streaming Feature Selection (OSFS) and Online Group Feature Selection (OGFS). Afterwards, we provide a review of determinantal point processes and the feature sampling problem. \subsection{Feature Selection} Traditionally, feature selection has been performed in an offline setting. The feature selection problem can be framed as follows. We are given a matrix \(X = [x_1, \dots, x_n] \in \mathbb{R}^{d \times n}\) with \(n\) instances and a \(d\)-dimensional feature space \(F= [f_1, f_2, \dots, f_d] \in \mathbb{R}^d\). The goal of feature selection is to select a subset of the feature space \(U \in \mathbb{R}^l\), where \(l\) is the number of desired features and in most cases \(l < d\)\cite{wang2015online}.
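As a minimal illustration of this setup (the matrix values and the index set below are toy placeholders, not the output of any particular selector), feature selection amounts to choosing \(l\) of the \(d\) feature rows of \(X\):

```python
import numpy as np

# Toy setup: X has d features (rows) and n instances (columns),
# matching the X in R^{d x n} convention above.
d, n = 5, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(d, n))

# A feature-selection result is just an index set of l < d features;
# these indices are arbitrary stand-ins for any selector's output.
selected = [0, 2, 4]
U = X[selected, :]          # the reduced representation, l x n
```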
Offline feature selection is a widely studied topic with many reviews available\cite{Guyon2003}. Rather than provide a comprehensive review, we focus on several selected techniques and their online feature selection counterparts. We cover feature selection from two perspectives: as a filter method and as a wrapper method. From the filter perspective, we consider batch approaches using statistical significance and spectral feature selection, as well as their online variants, Online Streaming Feature Selection and Online Group Feature Selection respectively. We also consider wrapper methods in the batch setting using regularization and information criterion approaches, along with their online variants, grafting and alpha-investing respectively. For completeness, the third approach to feature selection is the embedded method, which performs feature selection in the process of training and is specific to a model. Approaches here include decision trees such as CART, which have a built-in mechanism to perform feature selection\cite{Guyon2003}. To the best of our knowledge, there are no embedded methods available from an online feature selection perspective. \subsubsection{Correlation Criteria and OSFS} The first approach uses the filter method, which evaluates features by a certain criterion and selects features by ranking their evaluation values or by some chosen threshold. One common approach is to consider correlation-related criteria\cite{Guyon2003} such as mutual information, maximum margin, or an independence criterion. Of particular interest is the conditional independence criterion, which is constructed through consideration of the relevance and redundancy of features in terms of conditional independence\cite{Koller1996}. In this setting, the process of labelling a feature as relevant or redundant is performed using statistical tests based on conditional independence.
Online Streaming Feature Selection (OSFS) uses this framework of relevance and redundancy to determine whether incoming features are added. When a feature arrives, OSFS first analyzes its correlation with the label and determines whether the feature is relevant\cite{Wu2010}. Once a feature is chosen, OSFS performs a redundancy test to determine whether previously accepted or current features have become redundant and can be removed. In this setting, redundancy analysis is a key component of the OSFS approach. \subsubsection{Spectral Feature Selection and OGFS}\label{spectral-feature-selection-and-ogfs} A similar approach, which also uses statistical tests and falls under the filter method, is spectral feature selection. In spectral feature selection a graph is constructed in which the \(i\)th vertex corresponds to \(\mathbf{x_i} \in X\), with an edge between all vertex pairs. From this graph we construct its adjacency matrix \(W\) and degree matrix \(D\). The adjacency matrix is constructed differently depending on the supervised or unsupervised context. In the spectral analysis setting the adjacency matrix can be built from any similarity metric of choice \cite{Zhao2007}, \cite{Wang2015}. For example, in the unsupervised context this can be the RBF kernel function \cite{Zhao2007}, \cite{Wang2015}, or a weighted sum of a correlation metric and a rank coefficient metric \cite{Roffo_2015_ICCV}. Once the appropriate metric is chosen, a feature ranking function is used to filter the features. This function can change depending on the context; for instance, it can be used to determine the statistical significance of each individual feature using the trace ratio criterion\cite{Grave2011}. To extend spectral feature selection to the online setting, Online Group Feature Selection (OGFS) has been proposed, which considers incoming \emph{groups} of features and applies spectral feature selection at the group level.
This determines the relevancy of the particular group of features and has been shown to extend into the online setting. \subsubsection{Regularization and Grafting} The wrapper method uses the machine learning algorithm of interest as a black box to score subsets of features. Regularization is typically labelled as a wrapper method in the feature selection framework, meaning that it uses a model algorithm to jointly build a model and select features. This is typically achieved by jointly minimizing the empirical error and a penalty term. In the context of regularization, the goal is to encourage sparsity in the feature subset. Regularizer penalties are typically framed as \cite{PerkinsA2003} $$\Omega_p(\mathbf{\theta}) = \lambda \sum_{i=1}^m \mathbf{\alpha_i} \lvert \mathbf{\theta_i} \rvert ^p $$ where \(p=1\) is typically chosen to promote sparsity, commonly referred to as the Lasso penalty. To adapt this framework to an online setting, the grafting algorithm is used. Grafting can be applied to any model to which a Lasso regularizer can be added. The idea behind grafting is to determine whether the addition of a new feature would cause the incoming feature, or alternatively any existing feature, to take a non-zero weight. With a chosen parameter \(\lambda\), the regularizer penalty is \(\lambda \lvert w_j \rvert\). Thus gradient descent will accept a new incoming feature \(w_j\) if: \[ \left\lvert \frac{\partial \mathcal{\bar{L}}}{\partial w_j} \right\rvert > \lambda \] where \(\mathcal{\bar{L}}\) is the mean loss. In other words, if the reduction in \(\mathcal{\bar{L}}\) outweighs the regularizer penalty \(\lambda \lvert w_j \rvert\), then the new incoming feature \(w_j\) will be chosen. If this test is not passed, the feature is discarded. As grafting makes no assumption on the underlying model, it can be used with both linear and non-linear models.
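The grafting test above can be sketched directly; the sketch below assumes a mean squared loss so that the gradient has a closed form, and the data are toy values constructed so the outcome is deterministic:

```python
import numpy as np

def grafting_accepts(X, y, w, x_new, lam):
    """Grafting test for an incoming feature x_new: compute the
    gradient of the mean squared loss at w_new = 0 and accept the
    feature only if its magnitude exceeds the Lasso penalty lam."""
    n = len(y)
    residual = y - X @ w                 # residual of the current model
    grad = -(x_new @ residual) / n       # d(mean loss)/d(w_new) at 0
    return abs(grad) > lam

# Toy stream: the model starts empty (w = 0); a feature aligned with y
# passes the test, while one orthogonal to the residual is discarded.
X = np.column_stack([np.ones(100), np.tile([1.0, -1.0], 50)])
y = 2.0 * X[:, 0]
w = np.zeros(2)
accept_signal = grafting_accepts(X, y, w, X[:, 0], lam=0.5)  # accepted
accept_ortho = grafting_accepts(X, y, w, X[:, 1], lam=0.5)   # discarded
```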
\subsubsection{Information Criterion and Alpha-investing} Another approach to feature selection in the wrapper sense is the use of penalized likelihoods. In the context of single-pass feature selection techniques, penalized likelihoods are preferred \cite{Zhou2006}. This set of approaches can be framed as: \[-2 \log(\text{likelihood}) + F\] where the parameter \(F\) indicates how a criterion is to penalize model complexity directly. The alpha-investing algorithm \cite{Zhou2006} makes use of this information criterion to determine whether a new incoming feature is relevant or not. It makes use of the \emph{change} in \(\text{log-likelihood}\), which is equivalent to a t-statistic, meaning that a feature is added to the model if its p-value is smaller than some threshold \(\alpha\). Alpha-investing adaptively controls this threshold for adding features: the wealth \(\alpha\) is increased when a feature is chosen, reflecting the reduced chance of incorrect inclusion of features, and wealth is ``spent'' each time a feature is assessed, which lowers the threshold in order to avoid adding additional spurious features. In contrast to the previous work, we tackle feature selection through feature sampling with determinantal point processes. \subsection{Determinantal Point Process} We begin by reviewing determinantal point processes (DPPs) and the conditional DPP. A point process \(\mathcal{P}\) on a discrete set \(\mathcal{Y} = \{ 1, 2, \dots, N \}\) is a probability measure over all \(2^{\mathcal{Y}}\) subsets. \(\mathcal{P}\) is a determinantal point process (DPP) if, when \(\boldsymbol{Y}\) ranges over finite subsets of \(\mathcal{Y}\), we have for every \(A \subseteq \mathcal{Y}\) \[P(A \subseteq \boldsymbol{Y}) = \text{det}(\mathbf{K}_{A})\] where \(K \in \mathbb{R}^{N \times N}\) is a positive semidefinite kernel matrix whose eigenvalues are all less than or equal to \(1\).
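The marginal kernel definition can be checked numerically on a toy kernel (the entries below are illustrative, chosen only so that the eigenvalues lie in \([0,1]\)); note that the pair marginal is smaller than the product of singleton marginals, which is the negative correlation that makes DPP samples diverse:

```python
import numpy as np

# A toy marginal kernel K: symmetric PSD with eigenvalues in [0, 1].
K = np.array([[0.5, 0.2],
              [0.2, 0.4]])

def inclusion_prob(K, A):
    """P(A subset of Y) = det(K_A), the determinant of the principal
    submatrix of K indexed by A."""
    idx = sorted(A)
    return np.linalg.det(K[np.ix_(idx, idx)])

p0 = inclusion_prob(K, {0})      # det([[0.5]]) = 0.5
p1 = inclusion_prob(K, {1})      # det([[0.4]]) = 0.4
p01 = inclusion_prob(K, {0, 1})  # 0.5*0.4 - 0.2*0.2 = 0.16
```

Here `p01 < p0 * p1`, so items 0 and 1 repel each other under this kernel.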
An alternative construction of a DPP is via \(L\)-ensembles, where \(L_{ij}\) is a measurement of similarity between elements \(i\) and \(j\); the DPP then assigns higher probability to subsets that are diverse. The relationship between \(K\) and \(L\) has been shown to be \cite{kulesza2011learning} \[K = (L + I)^{-1} L\] where \(I\) is the identity matrix. The probability of a specific subset \(Y\) is then \cite{kulesza2011learning} \[\mathcal{P}_L (\boldsymbol{Y} = Y) = \frac{\text{det}(L_Y)}{\text{det}(L+I)}\] \subsubsection{Conditional Determinantal Point Process}\label{conditional-determinantal-point-process} In our situation, we often wish to sample future unchosen or unseen features subject to additional constraints based on the currently chosen features. Suppose that we have input \(X\) and a set \(\mathcal{Y}(X)\) of items derived from the input. A conditional DPP \(\mathcal{P}(\boldsymbol{Y} = Y | X)\) is a conditional probability measure that assigns a probability to every possible subset \(Y \subseteq \mathcal{Y}(X)\). The model takes the form \[\mathcal{P}(\boldsymbol{Y} = Y | X) \propto \text{det}(L_Y(X))\] DPPs have demonstrated their use in discovering diverse sample points, with applications in areas such as computer vision and document summarisation \cite{kulesza2011learning}\cite{kulesza2011kdpps}. In this context we consider sampling feature vectors. \begin{algorithm} \caption{Conditional Feature Sampling using DPP} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Best candidate feature set: $X \in \mathbb{R}^{d\times n}$, new set of features $G$, reconstruction error $\alpha$ \ENSURE Sample of features from $G$ \\ \STATE Construct similarity matrix $L$ based on $X \cup G$.
\\ \STATE Sample features from $G$ conditioning on $X$ using conditional DPP \end{algorithmic} \end{algorithm} Assuming that the similarity matrix and its eigenvalue decomposition are provided, DPP sampling has been shown to have complexity \(O(k^3)\) \cite{NIPS2010_3969}, though Markov chain DPP sampling is (under certain conditions) linear in time with respect to the size of the data \cite{Li2016}. As the above algorithm is inherently unsupervised (i.e.~it makes no assumption on the response vector), this sampling approach is suitable for both supervised and unsupervised problems. Furthermore, we propose two different approaches for removing redundant features: the first works in an unsupervised, scale-invariant manner, and the second in a supervised way, leveraging the label information to improve the consistency of the chosen features. \subsection{Local Criterion} Feature sampling alone is insufficient to provide a suitable subset of features without redundancy. Although a DPP seeks to promote diversity within its features, it may not necessarily remove all redundant features. Depending on the choice of kernel, the kernel may not be scale invariant and is almost never consistent with respect to the response. In order to address both of these concerns, we turn to other criteria to promote further compactness and reduce redundancy in the feature selection framework, irrespective of the type of kernel chosen. \subsubsection{Unsupervised Criterion} In order to address the \emph{scale-invariant} aspect, we turn towards non-parametric pairwise tests to remove redundant features, such as the Wilcoxon signed-rank test\cite{wilcoxon45}. In our scenario, any two features can be viewed as a pair of measurements.
If \(N\) is the sample size, and the pairwise measurements are \(x_i, y_i\) for the \(i\)th measurement of features \(x\) and \(y\) respectively, then the test statistic is calculated by first ranking the pairs from smallest to largest absolute difference \(\lvert x_i - y_i \rvert\). Each pair is then given a rank; we define \(R_i\) to be the rank of the \(i\)th pair. The statistic is then calculated as \[W = \sum_{i=1}^N (\text{sign}(x_i - y_i) R_i)\] where \(W\) converges to an approximately normal distribution, with \(z\)-score given by \[z = W/\left(\sqrt{\frac{N(N+1)(2N+1)}{6}}\right)\] Here we propose the Wilcoxon signed-rank test to remove any incoming features which are redundant with respect to the present features. \begin{algorithm} \caption{Wilcoxon Criterion} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Best candidate feature set: $X \in \mathbb{R}^{d\times k}$, proposed single new feature $f$, significance level $\alpha$ \ENSURE Boolean, indicating whether feature $f$ is discarded or kept \\ \FOR {each feature $x$ in $X$} \STATE $p \leftarrow$ Wilcoxon signed-rank test of $x$ against $f$ \STATE If $p > \alpha$, then discard $f$ and terminate; otherwise continue \ENDFOR \STATE Keep feature $f$ \end{algorithmic} \end{algorithm} As the Wilcoxon signed-rank test requires sorting a vector of size \(d\), with all other computations being simple arithmetic, a single test has complexity \(O(d\log (d))\); since this test is repeated \(k\) times under the proposed Wilcoxon criterion, the criterion has complexity \(O(kd \log (d))\). Although redundancy is already reduced by the nature of DPP sampling, the Wilcoxon signed-rank test provides a means of removing redundant features that augments the existing approach by addressing the \emph{scale-invariant} aspect which would otherwise be missed.
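A sketch of the criterion, implementing \(W\) and the \(z\)-score exactly as defined above with a normal-approximation \(p\)-value (the toy features are illustrative, and the jitter magnitudes are made distinct so that ranks are untied; a production version would use average ranks for ties):

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_z(x, y):
    """z-score of the signed-rank statistic as defined in the text:
    W = sum(sign(x_i - y_i) * R_i), z = W / sqrt(N(N+1)(2N+1)/6)."""
    d = x - y
    d = d[d != 0]                                # drop zero differences
    N = len(d)
    ranks = np.abs(d).argsort().argsort() + 1    # ranks of |d|, 1..N
    W = np.sum(np.sign(d) * ranks)
    return W / sqrt(N * (N + 1) * (2 * N + 1) / 6)

def p_two_sided(z):
    """Two-sided p-value under the normal approximation."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

def wilcoxon_criterion(X, f, alpha=0.05):
    """Discard f if it shows no significant difference (p > alpha)
    against any currently kept feature, i.e. f looks redundant."""
    for x in X.T:
        if p_two_sided(wilcoxon_z(x, f)) > alpha:
            return False          # redundant: discard f
    return True                   # keep f

# Toy check: tiny sign-alternating jitter leaves no systematic
# difference (discarded), while a constant shift is clearly kept.
x = np.linspace(-1.0, 1.0, 60)
X = x[:, None]
jitter = 0.001 * np.arange(1, 61) * np.tile([1.0, -1.0], 30)
keeps_near_copy = wilcoxon_criterion(X, x + jitter)   # discarded
keeps_shifted = wilcoxon_criterion(X, x + 2.0)        # kept
```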
In addition to using this criterion to detect and remove redundant features in a scale-invariant way, it is also worthwhile to incorporate information from our label in order to select features that are consistent with it. \subsubsection{Supervised Criterion} Another approach is to make use of the information embedded in our label vector \(Y\). This addresses the \emph{consistency} aspect, which DPP sampling alone fails to account for. Our criteria are based on class separability in conjunction with the trace ratio criterion \cite{Nie2008} and the criteria devised by Wang et al. (2015). We define the selected feature set to be \(U\), \(S_w\) to be the within-class scatter matrix and \(S_b\) to be the between-class scatter matrix. There are several ways for class separability to be defined. First, it can be defined using the mean and variance measures of each class\cite{Mitra2002}: \[ \begin{aligned} S_w(U) &= \sum_{j=1}^c \pi_j \sigma_j\\ S_b(U) &= \sum_{j=1}^c (\mu_j - M_o)(\mu_j - M_o)^T\\ M_o(U) &= \sum_{j=1}^c \pi_j \mu_j\\ \end{aligned} \] where \(\pi_j\) is the prior probability that a pattern belongs to class \(y_j\), \(U\) is the current candidate feature vector, \(\mu_j\) is the sample mean vector of class \(y_j\), \(M_o\) is the sample mean vector of the entire data set, and \(\sigma_j\) is the sample covariance matrix of class \(y_j\). Similarly, it can be constructed through the use of any kernel defining a measure of similarity\cite{Liu2016}: \[ \begin{aligned} S_w(U) &= \frac{1}{c}\sum_{j=1}^c \frac{1}{N_j^2} \left( \sum_{k=1}^{N_j} \sum_{l=1}^{N_j} || x_k^{(j)} - x_l^{(j)} ||^2 \right)\\ S_b(U) &= \frac{2}{c(c-1)}\sum_{i=1}^c \sum_{j=1, j\neq i}^c \frac{1}{N_i N_j} \left( \sum_{k=1}^{N_i} \sum_{l=1}^{N_j} || x_k^{(i)} - x_l^{(j)} ||^2 \right) \\ \end{aligned} \] where \(c\) represents the total number of classes in the supervised classification problem.
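The first (mean/variance-based) formulation together with the trace ratio \(\mathrm{tr}(S_b)/\mathrm{tr}(S_w)\) can be sketched as follows; the two tight, well-separated clusters below are toy data chosen to make the behaviour obvious:

```python
import numpy as np

def scatter_matrices(U, y):
    """Mean/variance-based scatter: S_w = sum_j pi_j * Sigma_j,
    S_b = sum_j (mu_j - M_o)(mu_j - M_o)^T, M_o = sum_j pi_j * mu_j.
    U holds instances in rows and candidate features in columns."""
    classes = np.unique(y)
    pis = [np.mean(y == c) for c in classes]
    mus = [U[y == c].mean(axis=0) for c in classes]
    M_o = sum(p * m for p, m in zip(pis, mus))
    S_w = sum(p * np.cov(U[y == c], rowvar=False, bias=True)
              for p, c in zip(pis, classes))
    S_b = sum(np.outer(m - M_o, m - M_o) for m in mus)
    return S_w, S_b

def trace_ratio(U, y):
    S_w, S_b = scatter_matrices(U, y)
    return np.trace(S_b) / np.trace(S_w)

# Two tight clusters far apart give a large ratio; scrambling the
# labels destroys the separation and shrinks it.
U = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [5.1, 5.1]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
r_sep = trace_ratio(U, y)
r_mix = trace_ratio(U, np.array([0, 1, 0, 1, 0, 1, 0, 1]))
```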
Furthermore, class separability can also be defined using the label information directly \cite{wang2015online}, where the \((i,j)\)th entries of the scatter matrices are given by \[ \begin{aligned} (S_w)_{ij} &= \begin{cases} \frac{1}{n}-\frac{1}{n_c} & y_i = y_j = l\\ \frac{1}{n} & \text{otherwise} \end{cases} \\ (S_b)_{ij} &= \begin{cases} \frac{1}{n_c} & y_i = y_j = l\\ 0 & \text{otherwise} \end{cases} \end{aligned} \] where \(n_c\) represents the number of instances in class \(c\). Using any of the between- and within-class separation criteria defined above, we can determine whether a feature is informative or not. The feature-level criterion is defined for a single feature \(f\) as \[s(f) = \frac{S_b(f)}{S_w(f)}\] We can extend this to yield a score for a subset of features \(U\), where the goal is to maximise the following criterion: \[F(U) = \frac{\text{tr}(S_b(U))}{\text{tr}(S_w(U))}\] Both of these criteria can be used to select from a stream of features. \textbf{Supervised Criterion 1} \emph{Given \(U\), the previously selected subset, and \(f\), the newly arrived feature, feature \(f\) will be selected if} \[F(U \cup f) - F(U) > \epsilon\] where \(\epsilon\) is a small positive parameter. \textbf{Supervised Criterion 2} \emph{Given \(U\), the previously selected subset, and \(f\), the newly arrived feature, feature \(f\) will be selected if it is a significant feature with discriminative power.} The significance of the feature can be evaluated by a \(t\)-test: \[t(f, U) = \frac{\hat{\mu} - s(f)}{\hat{\sigma}/\sqrt{\lvert U \rvert}}\] where \(\hat{\mu}, \hat{\sigma}\) are the sample mean and standard deviation of the scores of all features in \(U\). If the \(t\)-statistic is significant at the chosen level (in the experiments conducted here, \(0.05\)), then the feature is assumed to be significant.
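Supervised Criterion 1 can be sketched as follows. This is an illustrative stand-in: the per-feature between/within variability below is a simple variance-based version of \(s(f)\), under which the traces in \(F(U)\) reduce to per-column sums; the paper's exact scatter construction may differ.

```python
import numpy as np

def per_feature_scatter(f, y):
    """Variance-based between- and within-class variability of one
    feature (a simple stand-in for the separability criteria above)."""
    classes = np.unique(y)
    mu = f.mean()
    s_b = sum((f[y == c].mean() - mu) ** 2 for c in classes)
    s_w = sum(f[y == c].var() for c in classes)
    return s_b, s_w

def trace_ratio(U, y):
    """F(U) = tr(S_b(U)) / tr(S_w(U)); with per-feature scatters the
    traces reduce to sums over the columns of U."""
    pairs = [per_feature_scatter(U[:, j], y) for j in range(U.shape[1])]
    return sum(b for b, _ in pairs) / sum(w for _, w in pairs)

def criterion_1(U, f, y, eps=1e-3):
    """Supervised Criterion 1: accept f only if appending it raises the
    trace ratio by more than eps."""
    return trace_ratio(np.column_stack([U, f]), y) - trace_ratio(U, y) > eps
```

A class-separated feature raises \(F(U)\) and is accepted, while a noise feature leaves the ratio essentially unchanged and is rejected.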
\begin{algorithm} \caption{Supervised Criterion} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Incoming set of features: $U \in \mathbb{R}^{d\times k}$, significance level $\alpha$ \ENSURE A set $G$, representing the set of selected features \\ \textit{Initialize:} $G = \{\}$ \\ \STATE Sort $U$ according to the scoring function $s$ \\ \FOR {each feature $f$ in $U$} \STATE If $F(G \cup f) - F(G) > \epsilon$ then $G = G \cup f$ \STATE If the $t$-test $t(f, G)$ is significant at level $\alpha$ then $G = G \cup f$ \ENDFOR \\ \STATE Return $G$ \end{algorithmic} \end{algorithm} As both of these criteria run in linear time \cite{wang2015online}, the remaining complexity comes from the construction of the class separability criteria, whose time complexity depends on the choice of criterion. The class separation criterion of Mitra et al.\ \cite{Mitra2002} relies on the construction of a covariance matrix, with all other operations being simple arithmetic. As the complexity of computing a covariance matrix is \(O(k^2d)\), and the covariance must be computed for each class, this criterion has complexity \(O(ck^2d)\), dominated by the covariance computation. Similarly, the class separation criterion which uses a kernel has time complexity \(O(c^2 N^2)\). However, if we use the class separation criterion which uses the label information directly, then it runs in linear time as well \cite{wang2015online}. In our supervised criterion, we accept features if they pass either \textbf{Supervised Criterion 1} or \textbf{Supervised Criterion 2}. It can also be used in conjunction with the unsupervised criterion to provide additional representative features. Once the chosen criteria have been run, we proceed with the global criterion to remove redundant features, assessed over both the streaming features and the previously accepted ones.
\subsection{Global Criterion} Similar to the approach used by Grafting \cite{PerkinsA2003}, we also use a regulariser to remove redundant features after the conditional sampling step is complete. This approach was also used in the OGFS algorithm under the ``inter-group selection'' criterion, which used the Lasso regulariser specifically \cite{wang2015online}. In this setting we consider an elastic net implementation as an alternative to the Lasso to promote sparsity. The regulariser penalty is framed as \[\Omega_p(\mathbf{\theta}) = \lambda \sum_{i=1}^m \alpha_i \lvert \theta_i \rvert ^p \] where the elastic net penalty is specifically \(\alpha_1 \Omega_1 + \alpha_2 \Omega_2\), typically chosen with \(\alpha_1, \alpha_2 > 0\) and \(\alpha_1 + \alpha_2 = 1\). Similar to the approach taken by Lasso methods, the elastic net can be used to select features given some tolerance \(\lambda \geq 0\) \cite{Zou05}. Without loss of generality, assume that the coefficient of a predictor for a particular feature \(f\) is \(\beta_f\); then we will remove a feature if \[\lvert \beta_f \rvert < \lambda \] Using this, we can now form our global criterion. \begin{algorithm} \caption{Global Criterion} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Incoming set of features: $U \in \mathbb{R}^{d\times n}$, tolerance level $\lambda$ \ENSURE A set $G$, representing the set of selected features \\ \textit{Initialize:} $G = \{\}$ \\ \STATE Fit a model with an elastic net regulariser \\ \FOR {each feature $f$ in $U$} \STATE If $\lvert \beta_f \rvert \geq \lambda$ then $G = G \cup f$ \ENDFOR \\ \STATE Return $G$ \end{algorithmic} \end{algorithm} \section{Framework for Diverse Online Feature Selection}\label{framework-for-diverse-online-feature-selection} The framework for online feature selection is as follows.
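The global criterion above can be sketched with scikit-learn's elastic net. The solver settings (`alpha`, `l1_ratio`) are illustrative assumptions; the paper only pins down the coefficient tolerance, not the penalty strength.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def global_criterion(U, y, lam=0.15, l1_ratio=0.5):
    """Fit an elastic-net model and keep only the features whose
    coefficient magnitude clears the tolerance lam (|beta_f| >= lam)."""
    model = ElasticNet(alpha=0.1, l1_ratio=l1_ratio, max_iter=10000).fit(U, y)
    return [j for j, b in enumerate(model.coef_) if abs(b) >= lam]
```

The tolerance \(\lambda = 0.15\) matches the value later used for Grafting and OGFS in the experimental settings.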
First, assume the current best candidate subset model matrix \(G = [x_1, \dots, x_n] \in \mathbb{R}^{d \times n}\), where \(d\) is the number of selected features and \(n\) is the number of instances. Let the incoming matrix \(\mathbf{G'}\) be \(\mathbf{G'} = [x'_1, \dots, x'_k] \in \mathbb{R}^{(d + m) \times k}\), where \(m\) is the number of newly available features. Without loss of generality we can assume that \(\mathbf{G'} \in \mathbb{R}^{(d + m) \times n}\), that is, the incoming feature stream has the same number of instances as the best subset model matrix. The difference between the new batch and the best subset is then that the new incoming stream of data contains additional features. The online feature selection problem at each iteration selects the best subset of the new features of size \(m'\), where \(0 \leq m' \leq m\). \[\begin{aligned} { \begin{array}{@{}c@{}}{ \begin{bmatrix} \multirow{2}{*}{G} \\ \vphantom{\vdots} \end{bmatrix}}_{d\times n}\\ \\ \vphantom{\vdots} \end{array} }& { \begin{array}{@{}c@{}}{ \begin{matrix} \vphantom{\vdots} \\ \longrightarrow \\ \vphantom{\vdots} \end{matrix}}\\ \\ \vphantom{\vdots} \end{array} }& \hspace{0.5cm} { \begin{array}{@{}c@{}}{ \begin{bmatrix} \multirow{3}{*}{$G^{\prime}$} \\ \vphantom{\vdots} \\ \vphantom{\vdots} \end{bmatrix}}_{(d+m)\times n}\\ \\ \end{array} } & { \begin{array}{@{}c@{}}{ \begin{matrix} \vphantom{\vdots} \\ \longrightarrow \\ \vphantom{\vdots} \end{matrix}}\\ \\ \vphantom{\vdots} \end{array} }& \hspace{0.5cm} { \begin{array}{@{}c@{}}{ \begin{bmatrix} \multirow{2}{*}{$G^{\prime\prime}$} \\ \vphantom{\vdots} \end{bmatrix}}_{(d+m')\times n}\\ \\ \vphantom{\vdots} \end{array} } \end{aligned}\] If the initial best subset was of size \(d\) and there were an additional \(m\) features available to be selected, the online feature selection algorithm will then select \(d+m'\) features.
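The shape evolution \(G \rightarrow G' \rightarrow G''\) above can be checked with a toy example. The concrete sizes here are hypothetical, chosen only to illustrate one iteration.

```python
import numpy as np

# Hypothetical sizes: d = 4 selected features, n = 10 instances,
# m = 3 newly arrived features, of which m' = 1 survives selection.
d, n, m, m_kept = 4, 10, 3, 1
G = np.zeros((d, n))                       # current best candidate subset
G_new = np.vstack([G, np.zeros((m, n))])   # stream appends m feature rows
keep = list(range(d)) + [d]                # keep old features + m' new ones
G_sel = G_new[keep]                        # G'' after one selection round
assert G_sel.shape == (d + m_kept, n)
```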
\subsection{Diverse Online Feature Selection}\label{diverse-online-feature-selection} \begin{algorithm} \caption{Diverse Online Feature Selection} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Best candidate feature set $X$, feature stream $F$, label vector $Y$ \ENSURE A set of selected features \\ \WHILE{features are arriving} \STATE $G \leftarrow$ generate new group of features \\ \STATE Sample features from $G$ using DPP to get sampled subset $G'$ conditional on $X$ \\ \FOR {$f$ in $G'$} \STATE \textbf{Local Criterion}: Evaluate feature $f$ using the unsupervised and/or supervised criteria to determine relevancy \\ \STATE \textbf{Global Criterion}: Perform a redundancy check based on the regulariser \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} As the complexity of the various parts has been discussed in the previous sections, we can put them together to obtain the overall complexity of DOFS. Suppose a single iteration has best candidate feature set \(G \in \mathbb{R}^{d \times n}\) and a stream of new data \(F \in \mathbb{R}^{(d+m)\times n}\), where \(m\) represents the number of features available to be selected from the feature stream and \(n\) the number of incoming instances used to update our feature selection. Then the complexity of DPP sampling is \(O((d+m)^3)\). The unsupervised criterion has complexity at most \(O((d+m)\log (d+m))\), and the supervised criterion has complexity at most \(O(cn^2 (d+m))\), or as little as linear time. Overall, the worst-case complexity is \(O((d+m)^3) + O(cn^2 (d+m)) = O(\max((d+m)^3, cn^2 (d+m)))\). If we use the class separation criterion with linear time complexity, then the overall complexity reduces to that of DPP sampling, i.e.\ \(O((d+m)^3)\).
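The overall loop can be summarised as a skeleton in which the DPP sampler and the criterion functions are injected as callables. This mirrors the pseudocode above rather than any particular released implementation; all names are our own.

```python
def dofs(feature_stream, dpp_sample, local_criteria, global_criterion):
    """High-level skeleton of Diverse Online Feature Selection."""
    selected = []                       # best candidate feature set X
    for group in feature_stream:        # features arrive in groups
        # sample a diverse subset, conditioned on what is already kept
        for f in dpp_sample(group, selected):
            # local criteria: unsupervised and/or supervised relevancy
            if all(crit(selected, f) for crit in local_criteria):
                selected.append(f)
        # global criterion: regulariser-based redundancy check
        selected = global_criterion(selected)
    return selected
```

With stub callables (a pass-through sampler, a keep-even-numbers local criterion, a no-op global criterion), `dofs([[1, 2, 3], [4, 5, 6]], ...)` returns `[2, 4, 6]`, showing how the three stages compose.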
\section{Experiments}\label{experiments} Various experiments were conducted to validate the efficiency of our proposed method. We used several benchmark datasets, and several state-of-the-art online feature selection methods are used for comparison, including Alpha-investing, Grafting, OSFS, and OGFS. Classification accuracy, log-loss and compactness (the number of selected features) are used to measure the performance of the algorithms in our experiments. We divide this section into three sub-sections: an introduction to our datasets, the experimental settings and the experimental comparisons. \subsection{Benchmark Data Sets} The benchmark datasets are from the UCI Machine Learning Repository and the Micro Array datasets. These datasets are described in the table below. \begin{table} \centering \begin{tabular}{|l|l|l|} \hline Data Set & \#instances & \#dim. \\ \hline Ionosphere & 351 & 34 \\ \hline Spambase & 4601 & 57 \\ \hline Spectf & 267 & 44 \\ \hline Wdbc & 567 & 30 \\ \hline Colon & 62 & 2000 \\ \hline Leukemia & 72 & 7129 \\ \hline Lung Cancer & 181 & 12533 \\ \hline Prostate & 102 & 12600 \\ \hline \end{tabular} \end{table} There are four datasets from the UCI repository (Ionosphere, Spambase, Spectf, Wdbc), and four microarray datasets (Colon, Leukemia, Lung Cancer, Prostate). \subsection{Experimental Settings}\label{experimental-settings} In our experiments, Grafting and OGFS used the elastic net setup with \(\lambda = 0.15\) for the regulariser penalty and inter-group selection parameters respectively. For OSFS, OGFS and DOFS the threshold parameter \(\alpha\) is set to \(0.05\). To simulate online group feature selection, a setup similar to that of Wang et al.\ \cite{wang2015online} was followed: the group structure of the feature space was simulated by dividing the feature space, viewed as a global feature stream, into streamed groups of size \(m\). In our experiments we set \(m \in [5, 10]\) as suggested by Wang et al.
Models were compared using existing Matlab implementations such as the LOFS library \cite{Yu2016}, whilst the DOFS implementation was completed in Python using the scikit-learn library. The DOFS models include the unsupervised variant (without consideration of class separability), and the supervised variant using the criterion which uses the label information directly. \subsection{Experimental Results} \begin{table}[h] \centering \begin{tabular}{|l|l|l|l|l|} \hline \multirow{2}{*}{Data Set} & \multicolumn{2}{l|}{Alpha-investing} & \multicolumn{2}{l|}{OSFS} \\ \cline{2-5} & \#dim & accu. & \#dim & accu. \\ \hline Ionosphere & 10 & 87.18 & 8 & 79.93 \\ \hline Spambase & 45 & 77.18 & 54 & 60.99 \\ \hline Spectf & 7 & 79.40 & 5 & 79.09 \\ \hline Wdbc & 21 & 71.53 & 10 & 62.74 \\ \hline Colon & 4 & 79.76 & 4 & 85.48 \\ \hline Leukemia & 16 & 66.67 & 5 & 91.83 \\ \hline Lung cancer & 69 & 86.67 & 7 & 83.43 \\ \hline Prostate & 25 & 97.09 & 5 & 91.84 \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \begin{tabular}{|l|l|l|l|l|} \hline \multirow{2}{*}{Data Set} & \multicolumn{2}{l|}{Grafting} & \multicolumn{2}{l|}{OGFS} \\ \cline{2-5} & \#dim & accu. & \#dim & accu. \\ \hline Ionosphere & 32 & 91.76 & 26 & 88.26 \\ \hline Spambase & 50 & 92.28 & 24 & 91.07 \\ \hline Spectf & 37 & 80.36 & 5 & 71.27 \\ \hline Wdbc & 24 & 94.82 & 18 & 96.07 \\ \hline Colon & 26 & 84.26 & 102 & 90.47 \\ \hline Leukemia & 13 & 94.53 & 63 & 100 \\ \hline Lung cancer & 19 & 96.53 & 33 & 99.44 \\ \hline Prostate & 17 & 95.53 & 96 & 98.00 \\ \hline \end{tabular} \end{table} \begin{table} \centering \begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Data Set} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}DOFS\\(DPP only)\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}DOFS\\(Unsupervised)\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}DOFS\\(Supervised)\end{tabular}} \\ \cline{2-7} & \#dim & accu. & \#dim & accu. & \#dim & accu.
\\ \hline Ionosphere & 11 & 87.75 & 9 & 88.12 & 23 & 86.47 \\ \hline Spambase & 24 & 86.44 & 10 & 82.54 & 37 & 88.26 \\ \hline Spectf & 10 & 79.40 & 31 & 79.26 & 29 & 79.57 \\ \hline Wdbc & 8 & 86.29 & 13 & 86.62 & 13 & 86.34 \\ \hline Colon & 750 & 90.32 & 47 & 95.70 & 38 & 94.52 \\ \hline Leukemia & 836 & 63.37 & 5 & 68.41 & 58 & 100 \\ \hline Lung cancer & 1366 & 94.48 & 7 & 91.08 & 88 & 98.37 \\ \hline Prostate & 1441 & 78.43 & 42 & 92.02 & 34 & 86.61 \\ \hline \end{tabular} \end{table} \textit{Comparison of DOFS variants}\label{comparison-of-dofs-variants} Considering the three variants of DOFS, the usefulness of both the supervised and unsupervised algorithms is clearly warranted: taking accuracy as the metric of interest, the supervised/unsupervised variants have better accuracy in 4 of the 8 datasets. Moreover, in the situations in which the supervised variant underperforms, the difference from the unsupervised variant is much smaller. From the results above, it is clear that the unsupervised variant promotes greater compactness than the supervised variant. This can be thought of as the supervised algorithm allowing more ``chances'' for a feature to be accepted and passed through the model. This is further highlighted by the difference when there is no redundancy check in place, as in the variant which only uses DPP. In this setting there is a distinct possibility that an extraneous set of features is selected despite the use of conditional DPP, which comes at a cost in performance, as can be observed in all the Micro Array datasets, where the number of features selected is at least 10 times, and in some cases 100 times, larger than in the other two variants. Overall, from the results above, comparing against either the supervised or unsupervised DOFS algorithm, we can see that DOFS generally has superior performance compared with the Alpha-investing and OSFS algorithms, whilst it is competitive with Grafting and OGFS.
In general there is a trade-off between compactness and performance: DOFS performs better than the Alpha-investing and OSFS algorithms whilst being less compact, and is competitive with Grafting and OGFS whilst having better compactness. What is interesting is that the DOFS algorithm demonstrates inferior performance against all methods on the Prostate dataset. \textit{DOFS vs Alpha-investing}\label{dofs-vs-alpha-investing} Both variants of DOFS manage to outperform Alpha-investing in 6 of the 8 datasets. Excluding the Prostate dataset, on Ionosphere the performance is within 2\%. When comparing compactness, Alpha-investing is generally more compact. Overall DOFS (Unsupervised) has roughly \textasciitilde{}5-7\% improvement and DOFS (Supervised) \textasciitilde{}8-10\% improvement over the Alpha-investing approach for online feature selection. In terms of compactness, the unsupervised variant has even better compactness on 5 of the 8 datasets chosen, demonstrating that the unsupervised variant of DOFS consistently outperforms Alpha-investing both in terms of accuracy and compactness. Overall our algorithm is able to select a sufficient set of features with discriminative power. \textit{DOFS vs OSFS}\label{dofs-vs-osfs} The unsupervised and supervised variants of DOFS outperform OSFS in 7 of the 8 datasets, with roughly \textasciitilde{}4-6\% improvement for the unsupervised variant and \textasciitilde{}10\% for the supervised variant. OSFS achieves greater compactness for all combinations of datasets and DOFS variants, with the exception of unsupervised DOFS on the Spambase dataset. This demonstrates the trade-off between compactness of representation and accuracy of performance in this algorithm. Overall our algorithm is able to select a sufficient set of features with discriminative power. \textit{DOFS vs Grafting}\label{dofs-vs-grafting} Across the board, Grafting appears to be a superior algorithm in terms of accuracy.
Unsupervised DOFS outperforms Grafting in only 1 of the 8 datasets, whilst the supervised variant outperforms Grafting in 3 of the 8 datasets. On average, the difference in accuracy for the supervised variant suggests that we suffer a \textasciitilde{}1-2\% loss in accuracy, demonstrating minimal loss in performance. With this in mind, in 4 of the 5 datasets where performance was worse than Grafting, supervised DOFS achieved improved compactness by \textasciitilde{}30\%. \textit{DOFS vs OGFS}\label{dofs-vs-ogfs} Compared with OGFS, the unsupervised DOFS variant outperforms in 2 of 8 datasets and the supervised variant in 3 of 8. On average, the difference in accuracy for the supervised variant again suggests a \textasciitilde{}1-2\% loss in accuracy, demonstrating minimal loss in performance. Given this trade-off, the supervised variant of DOFS manages an improved compactness of \textasciitilde{}12\%. This demonstrates that DOFS is a competitive algorithm, retaining a similar level of performance whilst promoting further compactness. \section{Conclusion} In this paper, we have presented a new algorithm called DOFS which can select diverse features in both supervised and unsupervised environments. We have explored the limitations of using DPP sampling for feature selection alone, and demonstrated the necessity and value of introducing additional redundancy checks to provide competitive performance. This framework allows us to efficiently select features that arrive in groups as well as one by one. We have divided online feature selection into three stages: DPP sampling, local criteria and a global criterion. We have designed several criteria for selecting the optimal number \(k\) of features to sample from the DPP, a trace-ratio approach for supervised learning problems, a group Wilcoxon signed-rank test, and elastic net regularisation to reduce redundancy.
Experiments on the UCI and Micro Array benchmark datasets have demonstrated that DOFS is on par with or better than other state-of-the-art online feature selection methods whilst being more compact. \section{Acknowledgment}\label{Acknowledgment} We would like to acknowledge everyone in the data science team at Suncorp Group Limited for their help and support in making this possible. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Suncorp Group Limited. \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
7,781
package org.hisp.dhis.webapi.controller; import static org.hisp.dhis.web.WebClientUtils.assertStatus; import static org.junit.jupiter.api.Assertions.assertEquals; import static org.junit.jupiter.api.Assertions.assertNotNull; import org.hisp.dhis.jsontree.JsonObject; import org.hisp.dhis.jsontree.JsonResponse; import org.hisp.dhis.web.HttpStatus; import org.hisp.dhis.webapi.DhisControllerConvenienceTest; import org.junit.jupiter.api.Test; /** * Tests the {@link org.hisp.dhis.webapi.controller.mapping.MapController} using * (mocked) REST requests. * * @author Jan Bernitt */ class MapControllerTest extends DhisControllerConvenienceTest { @Test void testPutJsonObject() { String mapId = assertStatus( HttpStatus.CREATED, POST( "/maps/", "{'name':'My map'}" ) ); assertStatus( HttpStatus.NO_CONTENT, PUT( "/maps/" + mapId, "{'name':'My updated map'}" ) ); } @Test void testPutJsonObject_NotFound() { assertWebMessage( "Not Found", 404, "ERROR", "Map does not exist: xyz", PUT( "/maps/xyz", "{'name':'My updated map'}" ).content( HttpStatus.NOT_FOUND ) ); } @Test void testGetWithMapViewAndOrgUnitField() { String attrId = assertStatus( HttpStatus.CREATED, POST( "/attributes", "{ 'name':'GeoJsonAttribute', " + "'valueType':'GEOJSON', " + "'organisationUnit':true}" ) ); String mapId = assertStatus( HttpStatus.CREATED, POST( "/maps/", "{\"name\":\"My map\", \"mapViews\":[ { \"orgUnitField\": \"" + attrId + "\", " + "\"layer\": \"thematic1\",\"renderingStrategy\": \"SINGLE\" } ]}" ) ); JsonResponse map = GET( "/maps/{uid}", mapId ).content(); assertNotNull( map.getArray( "mapViews" ) ); assertEquals( 1, map.getArray( "mapViews" ).size() ); JsonObject mapView = map.getArray( "mapViews" ).get( 0 ).as( JsonObject.class ); assertEquals( attrId, mapView.getString( "orgUnitField" ).string() ); assertEquals( "GeoJsonAttribute", mapView.getString( "orgUnitFieldDisplayName" ).string() ); } }
{ "redpajama_set_name": "RedPajamaGithub" }
6,749
Q: background-position value manipulation I have a project where I am building slides described inside of an XML file but it requires to allow image positioning of the slides based on offset values. Now I have Y offsets down pat, only problem now is that I require the ability to offset something in the X by an amount but still keep the %'age value behavior. So basically is there anyway to have background-position's x start at 50% and then offset it by a pixel amount and keep the relative behavior of the %'age( 50% + offsetInPixels)? A: You can do this, but it isn't widely supported. background-position: -moz-calc(50% - 20px) 0; background-position: calc(50% - 20px) 0; Currently (May 2011) this only works in Firefox 4 and IE9. See http://caniuse.com/#calc for compatibility. A: You can't do that with plain CSS (at this point in time, see Rich Bradshaw's answer). You could accomplish that in javascript with something like: var totalWidth = 960; var xOffset = 10; el.style.backgroundPosition = ((totalWidth/2) + xOffset) +"px 50px"; A: I'd say your best bet is sticking the background image as an image inside the containter... It's bit of a hack, but it works Also, consider (As Jesse said) adding overflow:hidden if you don't want the bg pouring out. <div id="main" > <div id="bg"><img src="http://www.google.com/images/logos/ps_logo2.png"/> </div </div> #main { width:400px; height:300px; background-color:blue; position:relative; } #bg { margin-left: 10px; position:absolute; width:100%; height:100%; margin-left:50%; } demonstrated: http://jsfiddle.net/mhy3r/10/ A: I found another solution using CSS3. However, it requires the container to have a fixed size. HTML: <div id="example">Example</div> CSS: #example { width: 200px; height: 200px; padding-left: 100px; background-origin: content-box; background-position: 10px 10%; } It's a bit of a hack I guess. 
Rather than starting the background-position from the left top corner of the border-box, it uses the content-box instead which has 50% (i.e. 100px) padding. Like I said, you will need to know the exact value of 50% padding because writing padding-left: 50%; will be interpreted as 50% of the parent element. If you need the full space inside this container you can put another <div> into it with margin-left: -100px;
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,277
{"url":"https:\/\/hypothes.is\/users\/daaronr","text":"311 Matching Annotations\n1. Feb 2021\n2. systematicreviewsjournal.biomedcentral.com systematicreviewsjournal.biomedcentral.com\n1. To deal with this, we organised all of the factors into six overarching categories, comprising three barriers and three facilitators: 1. Difficulties in accessing evidence (six studies) 2. Challenges in understanding the evidence (three studies) 3. Insufficient resources (six studies) 4. Knowledge sharing and ease of access (six studies) 5. Professional advisors and networks (three studies) 6. A broader definition of what counts as credible evidence and better standardisation of reporting (three studies).\n\nbarriers and facilitators organised - seems to miss psychological factors?\n\n#### URL\n\n3. giving-evidence.com giving-evidence.com\n1. hey run conjoint analysis: in which customers are offered goods with various combinations of characteristics and price \u2013 maybe a pink car with a stereo for \u00a31,000, a pink car without a stereo for \u00a3800, a blue car for \u00a31,100 and a blue car without a stereo for \u00a3950 \u2013 to identify how much customers value each characteristic.\n\nBut these are usually (always) hypothetical choices, I believe.\n\n2. et me tell you a story. Once upon a time, researcher Dean Karlan was investigating microloans to poor people in South Africa, and what encourages people to take them. He sent people flyers with various designs and offering loans at various rates and sizes.\u00a0It turns out\u00a0that giving consumers only one choice of loan size, rather than four, increased their take-up of loans as much as if the lender had reduced the interest rate by about 20 percent. And if the flyer features a picture of a woman, people will pay more for their loan \u2013 demand was as high as if the lender had reduced the interest rate by about a third. Nobody would say in a survey or interview that they would pay more if a flyer has a lady on it. 
But they do. Similarly, Nobel Laureate Daniel Kahneman\u00a0reports that, empirically, people are more likely to be believe a statement if it is written in red than in green. But nobody would say that in a survey, not least because we don\u2019t know it about ourselves.\n\non self-reported motivations\n\n#### URL\n\n4. towardsdatascience.com towardsdatascience.com\n1. do(cluster_summary = summary(.))\n\ndo was old dplyr syntax, replaced by something more consistent but more verbose\n\n2. gives us the best segmentation possible.\n\nthat's a bit strong\n\n3. Just like K-means and hierarchical algorithms go hand-in-hand with Euclidean distance, the Partitioning Around Medoids (PAM) algorithm goes along with the Gower distance.\n\nwhy can't I do hierarchical with Gower distance?\n\n4. The silhouette width is one of the very popular choices when it comes to selecting the optimal number of clusters. It measures the similarity of each point to its cluster, and compares that to the similarity of the point with the closest neighboring cluster. This metric ranges between -1 to 1, where a higher value implies better similarity of the points to their clusters.\n\nThis is under- explained.\n\nSilhouette width of each obs: Scaled measure of dissimilarity from (nearest) neighbor cluster relative to dissimilarity from own cluster.\n\n5. library(cluster)gower_df <- daisy(german_credit_clean, metric = \"gower\" , type = list(logratio = 2))\n\nCode needs a line\n\n mutate_if(is.character, as.factor)\n\n\nTo avoid an error\n\n6. We find that the variable amount needs a log transformation due to the positive skew in its distribution.\n\njust by visual inspection?\n\nthe others DON'T all seem normally distributed to me\n\n7. e details about the mathematics of Gower distance are quite complicated and left out for another article.\n\nI want to know\n\n8. 
Clustering datasets having both numerical and categorical variables\n\ndiscusses the vignette I used before more completely\n\n#### URL\n\n5. www.datanovia.com www.datanovia.com\n1. For each observation iii, calculate the average dissimilarity aiaia_i between iii and all other points of the cluster to which i belongs. For all other clusters CCC, to which i does not belong, calculate the average dissimilarity d(i,C)d(i,C)d(i, C) of iii to all observations of C. The smallest of these d(i,C)d(i,C)d(i,C) is defined as bi=minCd(i,C)bi=minCd(i,C)b_i= \\min_C d(i,C). The value of bibib_i can be seen as the dissimilarity between iii and its \u201cneighbor\u201d cluster, i.e., the nearest one to which it does not belong. Finally the silhouette width of the observation iii is defined by the formula: Si=(bi\u2212ai)\/max(ai,bi)Si=(bi\u2212ai)\/max(ai,bi)S_i = (b_i - a_i)\/max(a_i, b_i).\n\nSilhouette width of each obs: Scaled measure of dissimilarity from (nearest) neighbor cluster relative to dissimilarity from own cluster.\n\n2. Average silhouette method\n\nthis is not really an explanation!\n\n3. The total WSS measures the compactness of the clustering and we want it to be as small as possible.\n\nas small as possible (within sample) for a given number of clusters\n\n4. To avoid distortions caused by excessive outliers, it\u2019s possible to use PAM algorithm, which is less sensitive to outliers.\n\nanother solution to outliers?\n\n5. Next, the wss (within sum of square) is drawn according to the number of clusters. The location of a bend (knee) in the plot is generally considered as an indicator of the appropriate number of clusters.\n\nneed more explanation here. What is the value of this \"within sum of square\" and why does a 'bend' lead to the appropriate number\n\n6. 
K-means algorithm can be summarized as follow: Specify the number of clusters (K) to be created (by the analyst) Select randomly k objects from the dataset as the initial cluster centers or means Assigns each observation to their closest centroid, based on the Euclidean distance between the object and the centroid For each of the k clusters update the cluster centroid by calculating the new mean values of all the data points in the cluster. The centoid of a Kth cluster is a vector of length p containing the means of all variables for the observations in the kth cluster; p is the number of variables. Iteratively minimize the total within sum of square. That is, iterate steps 3 and 4 until the cluster assignments stop changing or the maximum number of iterations is reached. By default, the R software uses 10 as the default value for the maximum number of iterations.\n\nthe implicit claim is that this 'mean-finding' procedure will minimise the sum of squared distances\n\n7. to use correlation distance, the data are input as z-scores.\n\nnormalization to weigh each dimension the same\n\n#### URL\n\n6. en.wikipedia.org en.wikipedia.org\n1. A successful evaluation of discriminant validity shows that a test of a concept is not highly correlated with other tests designed to measure theoretically different concepts.\n\nBut what if the traits you are trying to measure are actually correlated in the real world?\n\n#### URL\n\n7. en.wikipedia.org en.wikipedia.org\n1. The remaining term, 1\u00a0\/\u00a0(1\u00a0\u2212\u00a0Rj2) is the VIF. It reflects all other factors that influence the uncertainty in the coefficient estimates. The VIF equals 1 when the vector Xj is orthogonal to each column of the design matrix for the regression of Xj on the other covariates. By contrast, the VIF is greater than 1 when the vector Xj is not orthogonal to all columns of the design matrix for the regression of Xj on the other covariates. 
Finally, note that the VIF is invariant to the scaling of the variables\n\nVIF interpretation\n\n2. It turns out that the square of this standard error, the estimated variance of the estimate of \u03b2j, can be equivalently expressed as:[3][4] var ^ ( \u03b2 ^ j ) = s 2 ( n \u2212 1 ) var ^ ( X j ) \u22c5 1 1 \u2212 R j 2 , {\\displaystyle {\\widehat {\\operatorname {var} }}({\\hat {\\beta }}_{j})={\\frac {s^{2}}{(n-1){\\widehat {\\operatorname {var} }}(X_{j})}}\\cdot {\\frac {1}{1-R_{j}^{2}}},} where Rj2 is the multiple R2 for the regression of Xj on the other covariates (a regression that does not involve the response variable Y). This identity separates the influences of several distinct factors on the variance of the coefficient estimate: s2: greater scatter in the data around the regression surface leads to proportionately more variance in the coefficient estimates n: greater sample size results in proportionately less variance in the coefficient estimates var ^ ( X j ) {\\displaystyle {\\widehat {\\operatorname {var} }}(X_{j})} : greater variability in a particular covariate leads to proportionately less variance in the corresponding coefficient estimate The remaining term, 1\u00a0\/\u00a0(1\u00a0\u2212\u00a0Rj2) is the VIF. It reflects all other factors that influence the uncertainty in the coefficient estimates\n\na useful decomposition of the variance of the estimated coefficient\n\n#### URL\n\n8. danielmiessler.com danielmiessler.com\n1. Summary: Algorithms to Live By\n\nthese annotations look like a great resource\n\n#### URL\n\n9. maxkasy.github.io maxkasy.github.io\n1. When treatment assign-ment takes place in waves, it is natural to adapt Thompson sampling by assigning a non-random number\u0007pdtNt\bof observations in wavetto treatmentd, in order to reduce ran-domness. The remainder of observations are assigned randomly so that expected sharesremain equal topdt.\n\nnot sure what this means\n\n#### URL\n\n10. en.wikipedia.org en.wikipedia.org\n1. 
$$Q = \frac{12n}{k(k+1)} \sum_{j=1}^{k} \left( \bar{r}_{\cdot j} - \frac{k+1}{2} \right)^2.$$ Note\n\nQ is something that will increase the more a certain wine tends to be ranked systematically lower or higher than average\n\n2. $$r_{ij}$$ is the rank of $$x_{ij}$$\n\nJust rank the 'scores' of the wines within each rater\n\n3. Find the values $$\bar{r}_{\cdot j} = \frac{1}{n} \sum_{i=1}^{n} r_{ij}$$\n\naverage rank of wine j across all raters\n\n#### URL\n\n11. Jan 2021\n12. egap.org egap.org\n1. For some reason I'm having trouble commenting on particular parts of this page with hypothesis\n\n#### URL\n\n13. daaronr.github.io daaronr.github.io\n1. Definitions\n\n@Jasonschukraft wrote:\n\nNot sure where to put this comment, but how are you thinking about uncertainty about effectiveness? There's a small pool of donors who deny that GiveWell has identified the most effective global poverty/health charities because (e.g.) GiveWell is too focused on "randomista" interventions and doesn't give enough weight to "systematic" interventions.\n\n2. Individual donors, governments and firms demonstrate substantial generosity (e.g., UK charity represents 0.5-1% of GDP, US charity around 2% of GDP).\n\nThings to emphasize, from Jason Shukraft conversation.\n\nDo the 'masses of donors' matter, or only the multimillionaire response? The average person … do small donations add up? Also, knowing more about how average people respond to analytical information (in an other-regarding/social context) will inform how to influence good LT decision-making. (edited) 4:05 how to get USDA to care about animals/WAW… government to care about LT\n\n#### URL\n\n14. daaronr.github.io daaronr.github.io\n1.
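The Friedman statistic above can be computed directly from a raters-by-items score table. An illustrative sketch (function name and data are my own; ties are not handled):

```python
def friedman_q(scores):
    """Friedman's Q: scores[i][j] is rater i's score for item j (e.g. a wine).
    Ranks are computed within each rater, then averaged per item."""
    n, k = len(scores), len(scores[0])
    ranks = []
    for row in scores:                       # rank the scores within each rater
        order = sorted(range(k), key=lambda j: row[j])
        r = [0] * k
        for pos, j in enumerate(order):
            r[j] = pos + 1                   # rank 1 = lowest score
        ranks.append(r)
    # average rank of item j across raters (r-bar in the formula)
    rbar = [sum(ranks[i][j] for i in range(n)) / n for j in range(k)]
    return 12 * n / (k * (k + 1)) * sum((rb - (k + 1) / 2) ** 2 for rb in rbar)
```

Systematic agreement pushes the average ranks away from (k+1)/2 and makes Q large; when the rankings cancel out, every average rank sits at (k+1)/2 and Q = 0, matching the reading in the annotation.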
how people react to the presentation of charity-effectiveness information.\n\n@JasonSchukraft wrote:\n\nMaybe. I suppose it depends on our goals. Do we want people to give to top charities for the right reason (i.e., because those charities are effective) or do we just want people to give to top charities, simpliciter? If the latter, then maybe it doesn't matter how people react to effectiveness information; we should just go with whatever marketing strategy maximizes donations.\n\n#### URL\n\n15. Dec 2020\n16. daaronr.github.io daaronr.github.io\n1. Beem101: Project, discussion of research\n\nI was asked about the 'structure' of the project. This depends on the option, on your topic choice, and on how you wish to pursue it. Nonetheless, a rough structure might look like the following:\n\nAcross the topics (more or less... it depends on the project option and topic)\n\n1. Introduce the topic, model, question, overview of what you are going to do, and why this is relevant and interesting (some combination of this)\n• The economic theory/theories and model(s) presented\n\n• with reference to academic authors (papers, textbooks)\n\n• using formal (maths) modeling, giving at least one simple but formal presentation, and explaining it clearly and in your own voice (remember to explain what all variables mean),\n\n• considering the assumptions and simplifications of the model, the economic tools/fields considered (e.g., optimisation, equilibrium)\n\n• Sensitivity of the 'predictions' to the assumptions\n\n• The justification for these assumptions\n\n• Relationship between this model and your (applied) topic or focus... are the assumptions relevant, what are the 'predictions', etc.\n\n2. The application or real-world example:\n• Explain it in practical terms and what the issues and questions are (possibly engaging the previous literature a bit, but not too much)\n• describe and express it formally\n• relate it to the model/theory and apply the model/theory to your real-world example\n\n• Try to 'model it' and derive 'results' or predictions, continually justifying the application of the model to the example\n\n3. Presenting and assessing the insights from the model for the application and vice versa\n• considering the relevance and sensitivity\n• what alternative models might be applied, how might it be adjusted\n• Discuss 'what modeling and theory achieved or did not achieve here'\n\nNote that "2" could come before or after "3"... you can present the application first, or the model first... (or there might even be a way to go between the two, presenting one part of each)\n\n#### URL\n\n17. Oct 2020\n18. globalprioritiesinstitute.org globalprioritiesinstitute.org\n1. pure' altruism or 'warm glow' altruism (Andreoni 1990; Ashraf and Bandiera 2017)\n\nThis classification is often misunderstood and misused. The Andreoni 'Warm Glow' paper was meant to consider a fairly simple general question about giving overall, not to unpick psychological motivations.\n\n2. The Global Priorities Institute's vision and mission\n\n#### URL\n\n19. en.wikipedia.org en.wikipedia.org\n1. Formula: The Y-intercept of the SML is equal to the risk-free interest rate.
The slope of the SML is equal to the market risk premium and reflects the risk-return tradeoff at a given time: $$\mathrm{SML}: E(R_i) = R_f + \beta_i [E(R_M) - R_f]$$ where: E(R_i) is the expected return on the security; E(R_M) is the expected return on market portfolio M; β_i is the nondiversifiable or systematic risk; R_M is the market rate of return; R_f is the risk-free rate\n\nThe key equation ... specifying risk vs return\n\n2. The Y-intercept of the SML is equal to the risk-free interest rate. The slope of the SML is equal to the market risk premium and reflects the risk-return tradeoff at a given time: $$\mathrm{SML}: E(R_i) = R_f + \beta_i [E(R_M) - R_f]$$ (variables as above)\n\nThis is one statement of the key relationship.\n\nThe point is that the market will have a single tradeoff between unavoidable (nondiversifiable) risk and return.\n\nAssets' returns must reflect this, according to the theory. Their prices will be bid up (or down) until this is the case ... the 'arbitrage' process.\n\nWhy? Because (assuming borrowing/lending at a risk-free rate) any investor can achieve a particular return for a given risk level simply by buying the 'diversified market basket' and leveraging this (for more risk) or investing the remainder in the risk-free asset (for less risk). (And she can do no better than this.)\n\n3. This abnormal extra return above the market's return at a given level of risk is what is called the alpha.\n\nthis is why you hear the stock-touts bragging about their 'alpha'\n\n#### URL\n\n20. en.wikipedia.org en.wikipedia.org\n1. Capital asset pricing model\n\n2.
quantity beta (β)\n\nYou hear about this 'beta' all the time as the measure of 'the correlation of the risk of an asset with the representative market basket'...\n\nbut confusingly, $$\beta$$ is also used to represent the slope of the expected return of an asset as this risk increases.\n\n3. systematic risk (beta)\n\nThe concept of "systematic risk" is crucial in order to understand the CAPM. This relates to the risk of an 'optimally diversified portfolio'\n\n#### URL\n\n21. en.wikipedia.org en.wikipedia.org\n1. If the fraction q of a one-unit (e.g. one-million-dollar) portfolio is placed in asset X and the fraction 1 − q is placed in Y, the stochastic portfolio return is qx + (1 − q)y. If x and y are uncorrelated, the variance of portfolio return is $$\text{var}(qx + (1-q)y) = q^2 \sigma_x^2 + (1-q)^2 \sigma_y^2.$$ The variance-minimizing value of q is $$q = \sigma_y^2 / [\sigma_x^2 + \sigma_y^2],$$ which is strictly between 0 and 1. Using this value of q in the expression for the variance of portfolio return gives the latter as $$\sigma_x^2 \sigma_y^2 / [\sigma_x^2 + \sigma_y^2],$$ which is less than what it would be at either of the undiversified values q = 1 and q = 0 (which respectively give portfolio return variance of $$\sigma_x^2$$ and $$\sigma_y^2$$).
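The two-asset diversification algebra quoted above can be checked numerically. A small sketch for the uncorrelated case (the function name is my own, for illustration):

```python
def min_variance_split(var_x, var_y):
    """For two uncorrelated assets, return the variance-minimizing weight q
    on asset X and the resulting portfolio variance, per the formulas above:
    q = var_y / (var_x + var_y),  portfolio variance = var_x*var_y/(var_x+var_y)."""
    q = var_y / (var_x + var_y)
    port_var = var_x * var_y / (var_x + var_y)
    return q, port_var
```

With equal variances the optimal split is 50/50 and the portfolio variance halves; in general the diversified variance is strictly below both undiversified values (q = 0 and q = 1).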
Note that the favorable effect of diversification on portfolio variance would be enhanced if x and y were negatively correlated but diminished (though not eliminated) if they were positively correlated.\n\nKey building-block formulae.\n\n• Start with 'what happens to the variance when we combine two assets (uncorrelated, with the same expected return)'\n\n• What are the variance-minimizing shares, and what is the resulting variance of the portfolio?\n\n2. Similarly, a 1985 book reported that most value from diversification comes from the first 15 or 20 different stocks in a portfolio.[6]\n\nthe conventional wisdom is that there are sharply diminishing returns to this diversification\n\n#### URL\n\n22. bookdown.org bookdown.org\n1. d(p)=(209000-130p)\n\na simple demand function ('price-response function')\n\n2. CLV Formula\n\n#### URL\n\n23. daaronr.github.io daaronr.github.io\n1. "Sue's mother" $$R_a$$ "Sue's lecturer in the UK" $$\rightarrow$$ false (so it's not 'transitive')\n\nI think this is where Andrea meant to ask her question:\n\nI wanted to ask how is this a false statement? I want to clarify. Is it that she is a mother, and this does not relate to her being a lecturer in the UK? From my understanding the theory of transitive means that there is consistency, hence from the first statement to the last it would make sense…\n\n2. intend\n\nI have a video. Need to add it!\n\n#### URL\n\n24. daaronr.github.io daaronr.github.io\n1. (Highly optional): Properties of binary relations - O-R problem 1a.\n\nI went over this in the 16 October Q&A. Available to Exeter students HERE: https://web.microsoftstream.com/video/c2e218a8-0632-4d86-8ad2-d0ab7b70ebfb\n\n#### URL\n\n25. daaronr.github.io daaronr.github.io\n1. Students\n\nA household chooses how to invest ... to lay aside money for future consumption...
which asset to buy To store this value and hopefully get “high payoffs” with little risk\n\n#### URL\n\n26. Local file Local file\n1. We say that u is 'a utility function for $$\succsim$$'.\n\nDoes "u is a utility function for $$\succsim$$" mean that the utility function 'represents' $$\succsim$$?\n\n#### Annotators\n\n27. daaronr.github.io daaronr.github.io\n1. Differentiating this wrt $$I$$ yields Engel aggregation:\n\nTODO: make video of this\n\n#### URL\n\n28. Sep 2020\n29. github.com github.com\n1. direct\n\nwhat is meant by 'direct?'\n\n#### URL\n\n30. rtcharity.org rtcharity.org\n1. Past Projects\n\nThese are not all 'past'; the survey continues\n\n#### URL\n\n31. Aug 2020\n32. forum.effectivealtruism.org forum.effectivealtruism.org\n1. That's because cause prioritization research is extremely difficult, not because no one has thought to do this.\n\nYeah, I thought the same\n\n2. 4. It is difficult to find cause-neutral funding. I think funders like to choose their cause and stick with it, so there is a lack of cause-neutral funding.\n\nA good point!\n\n3. Growth and the case against randomista development,\n\nI would say this one raised a lot of questions but didn't provide definitive answers\n\n4. me that when reading the GPI research agenda, the economics parts read like it was written by philosophers.\n\nI would agree with this\n\n5. (Also, I have never worked in academia so there may be theories of change in the academic space that others could identify.)\n\nThere are some explicit 'Impact targets' in the REF, and pots of ESRC funding for 'impact activities'.\n\nIn general I don't think we believe that our 'publications' will themselves drive change. It's more like publications $$\rightarrow$$ status $$\rightarrow$$ influence policymakers\n\n6.
But for a new organisation to solely focus on doing the research that they believed would be most useful for improving the world, it is unclear what the theory of change would be.\n\nI'm not quite sure how this is differentiated from 'for a big funder'\n\n7. I think that people are hesitant to do something new if they think it is being done, and funders want to know why the new thing is different, so the abundance of organisations that used to do cause prioritisation research, or do research that is a subcategory of cause prioritisation research, limits other organisations from starting up.\n\nVery good point. I think this happens in a lot of spheres.\n\n8. Theoretical cause selection beyond speculation. Evidence of how to reason well despite uncertainty and more comparisons of different causes.\n\nI also think this may have run into some fundamental obstacles.\n\n9. more consideration of second-order effects\n\nsuper hard to measure\n\n10. Let me give just one example: if you look at best practice in risk assessment methodologies[5] it looks very different from the naive expected value calculations used in EA\n\nI agree somewhat, but I'm not sure if the 'risk-assessment methodologies' are easily communicated, nor if they apply to the EA concerns here.\n\n11. theorists\n\nhere you are equating 'theorists' with long-termists\n\n12. e. From my point of view, I could save a human life for ~£3000. I don't want to let kids die needlessly if I can stop it. I personally think that the future is really important, but before I drop the ball on all the things I know will have an impact it would be nice to have:\n\nReasonable statement of 'risk-aversion over the impact that I have'\n\n13. (There could be experimental hits-based giving.)\n\nwhat does this mean?\n\n14. Now let's get a bit more complicated and do some more research and find other interventions and consider long-run effects and so on”.
There could be research looking for strong empirical evidence into: the second-order or long-run effects of existing interventions; how to drive economic growth, policy change, structural changes, and so forth.\n\nThese are just extremely difficult to do/learn about. Economists, political scientists, and policy analysts have been debating these for centuries. I'm not sure there are huge easy wins here.\n\n15. Looking around, it feels like there is a split down the middle of the EA community:[4] On the one hand you have the empiricals: those who believe that doing good is difficult, common sense leads you astray, and to create change we need hard data, ideally at least a few RCTs. On the other side are the theorists: those who believe you just need to think really hard, and to choose a cause we need expected value calculations, and it matters not if calculations are highly uncertain if the numbers tend to infinity. Personally I find myself somewhat drawn to the uncharted middle ground.\n\nI agree that much of the most valuable work doesn't fall into either camp\n\n16. Post community building I moved back into policy and most recently have found myself in the policy space, building support for future generations in the UK Parliament. Not research. Not waiting. But creating change.\n\nThis sounds a little self-aggrandizing. I don't think it was meant in such a way, though\n\n17. The case of the missing cause prioritisation research\n\nPutting in some Hypothes.is comments. Curious if others like this tool.\n\n18. We theoretically expect and empirically observe impact to be “heavy tailed” with some causes being orders of magnitude more impactful\n\nWhat are these 'theoretical' reasons we should expect this? Remind me.\n\n#### URL\n\n33. daaronr.github.io daaronr.github.io\n1. Students: please propose some of these as a Hypothes.is comment HERE.\n\n#### URL\n\n34. daaronr.github.io daaronr.github.io\n1. well\n\nWhat do you mean "Wel;"\n\n$$x^2=4$$\n\n2.
How individuals interact with one another, and the consequences of this (game theory and mechanism design/agency problems)\n\nWhat does this mean? Does it mean $$x^2=4$$\n\n#### URL\n\n35. egap.org egap.org\n1. sometimes put together as a measure like 'd' of 'effect relative to noise'... effect size/SD\n\n#### URL\n\n36. Jul 2020\n37. daaronr.github.io daaronr.github.io\n1. This relies heavily on:\n\nalso raw html code\n\n#### URL\n\n38. Jun 2020\n39. rethinkpriorities.freshteam.com rethinkpriorities.freshteam.com\n1. We're backed by Open Philanthropy, Effective Altruism Funds, and viewers like you.\n\nThe funders\n\n#### URL\n\n40. bookdown.org bookdown.org\n1. In typical meta-analyses, we do not have the individual data for each participant available, but only the aggregated effects, which is why we have to perform meta-regressions with predictors on a study level\n\nBut in principle we could do more if we had the raw data? This would then be a standard regression with an interaction and a study-level 'random effect', I guess.\n\n#### URL\n\n41. bookdown.org bookdown.org\n1. Same is the case once we detect statistical heterogeneity in our fixed-effect-model meta-analysis, as indicated by\n\nI think empirically I-sq will always exceed 0. It's a matter of degree.\n\n#### URL\n\n42. handbook-5-1.cochrane.org handbook-5-1.cochrane.org\n1. A useful statistic for quantifying inconsistency is $$I^2 = 100\% \times (Q - df)/Q$$, where Q is the chi-squared statistic and df is its degrees of freedom (Higgins 2002, Higgins 2003). This describes the percentage of the variability in effect estimates that is due to heterogeneity rather than sampling error (chance).\n\nI-sq measure of heterogeneity\n\n#### URL\n\n43. May 2020\n44. www.openbookpublishers.com www.openbookpublishers.com\n1. MODELS IN MICROECONOMIC THEORY\n\nCommenting as a placeholder. Hope to use this in teaching soon.\n\n#### URL\n\n45. daaronr.github.io daaronr.github.io\n1.
wasting\n\ntest comment -- I wouldn't say 'wasting'\n\n#### URL\n\n46. bookdown.org bookdown.org\n1. We can use the ecdf function to implement the ECDF in R, and then check the probability of our pooled effect being smaller than 0.30. The code looks like this.\n\nshould put this first and the plot afterwards\n\n2. We see that the posterior distributions follow a unimodal and roughly normal distribution, peaking around the values for $$\mu$$ and $$\tau$$ we saw in the output.\n\nConsider: why are the peaks not exactly these values? Mean versus mode, I guess.\n\n3. By using the ranef function, we can also extract the estimated deviation of each study's “true” effect size from the pooled effect: ranef(m.brm) ## $Author ## , , Intercept ## ## Estimate Est.Error Q2.5 Q97.5 ## Call et al. 0.07181028\n\nthese are measures of deviations. But they don't exactly equal the difference between the input effect size and the estimated pooled effect size. I assume that somewhere this estimates a true effect for each study which 'averages towards the mean' following some criteria.\n\n4. 0.09\n\nIs this like a measure of the standard deviation of the estimated intercept?\n\n5. Please be aware that Bayesian methods are much more computationally intensive compared to the standard meta-analytic techniques we covered before; it may therefore take a few minutes until the sampling is completed.\n\nI found it was the compiling of the C++ that took a bit of time\n\n6. m.brm <- brm(TE|se(seTE) ~ 1 + (1|Author), data = ThirdWave, prior = priors, iter = 4000)\n\nHere R asks me to install tools and opens this link: https://www.cnet.com/how-to/install-command-line-developer-tools-in-os-x/\n\nBut I don't know which tools I need to install\n\n7. In this example, I will use my ThirdWave dataset, which contains data of a real-world meta-analysis investigating the effects of “Third-Wave” psychotherapies in college students.
The data is identical to the madata dataset we used in Chapter 4.\n\nAgain, Bayesian analysis only seems to need the right summary stats, not the raw data\n\n#### Annotators\n\n#### URL\n\n47. r4ds.had.co.nz r4ds.had.co.nz\n1. using a sophisticated algorithm\n\nIs OLS such a sophisticated algorithm?\n\n#### Annotators\n\n#### URL\n\n48. adv-r.hadley.nz adv-r.hadley.nz\n1. call2() is often convenient to program with\n\nwhy?\n\n2. lobstr::ast(f1(f2(a, b), f3(1, f4(2))))\n\nI'm having trouble seeing the point of this.\n\n3. f <- expr(f(x = 1, y = 2)) # Add a new argument f$z <- 3 f #> f(x = 1, y = 2, z = 3)\n\nYou can 'add an argument' to an expression\n\n4. function specifically designed to capture user input in a function argument: enexpr()\n\nI think I need a more concrete example here\n\n5. expr() lets you capture code that you've typed\n\nbut what do you do with it?\n\n#### URL\n\n1. Note that when you attach another package with library(), the parent environment of the global environment changes:\n\nInstalled packages are 'between' the global and base environments. But when you create a new environment with the env command it is 'after' (a child of) the global environment?\n\n2. Unlike lists, setting an element to NULL does not remove it, because sometimes you want a name that refers to NULL. Instead, use env_unbind():\n\nsetting a list element to NULL removes it\n\n3. But you can't use [[ with numeric indices, and you can't use [:\n\nno 'element number'\n\n4. Only one environment doesn't have a parent: the empty environment.\n\npoor guy\n\n5. The current environment, or current_env(), is the environment in which code is currently executing. When you're experimenting interactively, that's usually the global environment, or global_env().
The global environment is sometimes called your “workspace”, as it's where all interactive (i.e., outside of a function) computation takes place.\n\nthis is super important\n\n6. env_print\n\nto see parent and 'bindings' of environment\n\n7. e1$d <- e1\n\nreferring to or setting a list element with "$" ... it can also contain itself. mind blower\n\n#### URL\n\nIs this book dynamically updated?\n\n#### URL\n\n51. eml.berkeley.edu eml.berkeley.edu\n1. Strong evidence for the perils of underpowered practice\n\n#### URL\n\n52. www.replicationmarkets.com www.replicationmarkets.com\n1. Replication is testing the same claims using data that was not used in the original study. That required some changes from us. Starting in Round 6, Replication Markets will no longer distinguish between “data replication” and “direct replication.”\n\nBut what if it is impossible to find data 'not used in the original study' that is still a direct test of the claims?\n\n#### URL\n\n53. bookdown.org bookdown.org\n1. It has been argued that a good approach is to use weakly informative priors (Williams, Rast, and Bürkner 2018). Weakly informative priors can be contrasted with non-informative priors.\n\n!\n\n2. integrate prior knowledge and assumptions when calculating meta-analyses.\n\nincluding uncertainty over methodological validity?\n\n#### URL\n\n54. bookdown.org bookdown.org\n1. It can either be stored as the raw data (including the Mean, N, and SD of every study arm), or it only contains the calculated effect sizes and the standard error (SE).\n\nnote that this process does not 'dig in' to the raw data, it just needs the summary statistics\n\n#### URL\n\n55. bookdown.org bookdown.org\n1.
meta and metafor packages, which do most of the heavy lifting, there are still some aspects of meta-analyses in the biomedical field and psychology which we consider important, but are not easy to do in R currently, particularly if you do not have a programming or statistics background. To fill this gap, we developed the dmetar package, which serves as the companion R package for this guide. The dmetar package has its own documentation, which can be found here. Functions of the dmetar package provide additional functionality for the meta and metafor packages (and a few other, more advanced packages), w\n\ndmetar package\n\n#### URL\n\n56. Apr 2020\n57. cran.r-project.org cran.r-project.org\n1. set_variable_labels(s1 = "Sex", s2 = "Yes or No?")\n\n2. Adding variable labels using pipe\n\n#### URL\n\n58. bookdown.org bookdown.org\n1. preview_chapter()\n\nwhen I try this I get\n\nError in files2[[format]] : attempt to select less than one element in get1index\n\nHowever, I'm also not able to use the knit function, only the 'build' function\n\n#### URL\n\n59. Mar 2020\n1. But if you end up with a very long series of chained if statements, you should consider rewriting. One useful technique is the switch() function. It allows you to evaluate selected code based on position or name.\n\n#> function(x, y, op) {\n#>   switch(op,\n#>     plus = x + y,\n#>     minus = x - y,\n#>     times = x * y,\n#>     divide = x / y,\n#>     stop("Unknown op!")\n#>   )\n#> }\n\nswitch is great!\n\n#### URL\n\n61. bookdown.org bookdown.org\n1. The second type of tutorial provides much richer feedback and assessment, but also requires considerably more effort to author. If you are primarily interested in this sort of tutorial, there are many features in learnr to support it, including exercise hints and solutions, automated exercise checkers, and multiple choice quizzes with custom feedback.\n\nfull-blown course/learning materials\n\n2.
There are two main types of tutorial documents: Tutorials that are mostly narrative and/or video content, and also include some runnable code chunks. These documents are very similar to package vignettes in that their principal goal is communicating concepts. The interactive tutorial features are then used to allow further experimentation by the reader. Tutorials that provide a structured learning experience with multiple exercises, quiz questions, and tailored feedback. The first type of tutorial is much easier to author while still being very useful. These documents will typically add exercise = TRUE to selected code chunks, and also set exercise.eval = TRUE so the chunk output is visible by default. The reader can simply look at the R code and move on, or play with it to reinforce their understanding.\n\nthe easier kind of tutorial... just content with some code chunks (some pre-populated with code) the user can play with\n\n#### URL\n\n62. bookdown.org bookdown.org\n1. button “Run Document” in RStudio, or call the function rmarkdown::run() on this Rmd file\n\nHitting the button worked for me; the script did not\n\n#### URL\n\n63. www.sciencedirect.com www.sciencedirect.com\n1. image conscience donors\n\nthey meant 'image-conscious'\n\n#### URL\n\n64. www.nytimes.com www.nytimes.com\n1. First, many health experts, including the surgeon general of the United States, told the public simultaneously that masks weren't necessary for protecting the general public and that health care workers needed the dwindling supply. This contradiction confuses an ordinary listener. How do these masks magically protect the wearers only and only if they work in a particular field?\n\nexactly what I was thinking\n\n#### URL\n\n65. www.the-brights.net www.the-brights.net\n1.
These results are in line with predictions, such that in those cases in which a consequentialist judgment does not clearly violate fairness-based principles about respecting others and not treating them as mere means, people do not infer that the agent is necessarily an untrustworthy social partner\n\nbut isn't it still a consequentialist judgement?!\n\n2. We reasoned that if deontological agents are preferred over consequentialist agents because they are perceived as more committed to social cooperation, such preferences should be lessened if consequentialist agents reported their judgments as being very difficult to make, indicating some level of commitment to cooperation (Critcher, Inbar, & Pizarro, 2013). From the process dissociation perspective (Conway & Gawronski, 2013), a person who reports that it is easy to make a characteristically consequentialist judgment can be interpreted as being high in consequentialism\n\nI'm not sure I understand or like this approach. Couldn't it just be seen as merely a stronger consequentialism if they had no doubts? And is it even a meaningful distinction ... can I like the 'presence of cold' versus the 'absence of heat'?\n\n3. In contrast to the previous studies, for the switch dilemma, consequentialist agents were rated to be no less moral (Z = -0.73, p = .47, d = 0.10) or trustworthy (Z = -1.87, p = .06, d = 0.26) than deontological agents.\n\nTo me, this seems to weigh against their main claim. In the one case in which a majority favored the consequentialist choice, the consequentialists are not disfavored! They are really playing this down. Am I missing something?\n\n4. Despite the general endorsement many people have that “ends do not justify means,” people do typically judge that sacrificing the one man by diverting the train is less morally wrong than sacrificing the man by using his body to stop the train (Foot, 1967; Greene et al., 2001).\n\nHow is this 'despite'?
It doesn't seem to be in contradiction.\n\n5. The switch case differs from the footbridge case in two critical ways\n\nBut it is still in the domain of HARMING people (more versus fewer).\n\n6. The only difference is that Adam does not push the large man, but instead pushes a button that opens a trapdoor that causes the large man to fall onto the tracks.\n\nMeh. This difference hardly seems worth bothering with.\n\n7. The amount of money participants transferred to the agent (from $0.00 to $0.30) was used as an indicator of trustworthiness, as was how much money they believed they would receive back from the agent (0% to 100%)\n\nNote that this is a very small stake. (And was it even perhaps hypothetical?)\n\n8. However, the data did not support a mere similarity effect: Our results were robust to controlling for participants' own moral judgments, such that participants who made a deontological judgment (the majority) strongly preferred a deontological agent, whereas participants who made a consequentialist judgment (the minority) showed no preference between the two\n\nBut this is a lack of a result in the context of a critical underlying assumption. Yes, the results were 'robust', but could we really be statistically confident that this was not driving the outcome? How tight are the error bounds?\n\n9. However, the central claims behind this account—that people who express deontological moral intuitions are perceived as more trustworthy and favored as cooperation partners—has not been empirically investigated.\n\nHere is where the authors claim their territory.\n\n10. the typical deontological reason for why specific actions are wrong is that they violate duties to respect persons and honor social obligations—features that are crucial when selecting a social partner.
An individual who claims that stealing is always morally wrong and believes themselves morally obligated to act in accordance with this duty seems much less likely to steal from me than an individual who believes that stealing is sometimes morally acceptable depending on the consequences. Actors who express characteristically deontological judgments may therefore be preferred to those expressing consequentialist judgments because these judgments may be more reliable indicators of stable cooperative behavior.\n\nKey point: deontological ethics signals stable cooperative behavior\n\n11. First, deontologists' prohibition of certain acts or behaviors may serve as a relevant cue for inferring trustworthiness, because the extent to which someone claims to follow rule- or action-based judgments may be associated with the reliability of their moral behavior. One piece of preliminary evidence for this comes from a study showing that agents willing to punish third parties who violate fairness principles are trusted more, and actually are more trustworthy (Jordan, Hoffman, Bloom, & Rand, 2016).\n\nBut couldn't this punishment be seen as utilitarian... as it promotes the general social good?\n\n12. One approach to explaining why moral intuitions often align with deontology comes from mutualistic partner-choice models of the evolution of morality. These models posit a cooperation market such that agents who can be relied upon to act in a mutually beneficial way are more likely to be chosen as cooperation partners, thus increasing their own fitness\n\nthis is the key theoretical argument\n\n13. intriguingly\n\n14. And recent theoretical work has demonstrated that “cooperating without looking”—that is, without considering the costs and benefits of cooperation—is a subgame-perfect equilibrium (Hoffman, Yoeli, & Nowak, 2015).
Therefore, expressing characteristically deontological judgments could constitute a behavior that enhances individual fitness in a cooperation market because these judgments are seen as reliable indicators of a specific valued behavior—cooperation

Is this relevant to the idea that '(advocating) Effective giving is a bad signal'?

Does utilitarian decision-making in 'good space' contradict this?

I'm not convinced. An 'excuse not to do something' is not the same as a 'choice to be effective'.

15. Across 5 studies, we show that people who make characteristically deontological judgments are preferred as social partners, perceived as more moral and trustworthy, and are trusted more in economic games.

But this does NOT hold in the switching case/switching study

#### URL

66. citeseerx.ist.psu.edu

1. Table 3 also suggests that conditional norm enforcement is more pronounced among the population with intermediate and high levels of education. This finding is consistent with the observation that conditional cooperation is particularly robust in lab experiments with student subject pools (see Gächter, 2007). The data further show that females tend to be more inclined to sanction, in particular deviations from the strong norms. In contrast, employed respondents are less engaged in sanctioning. All other socioeconomic characteristics do not show a clear

demographic breakdown of survey responses ... evidence

2. In a national survey conducted in Austria, respondents were confronted with eight different 'incorrect behaviors', including tax evasion, drunk driving, fare dodging or skiving off work. Respondents were then asked how they would react if an acquaintance followed such behavior. The response categories cover positive reactions – like approval (Rege and Telle, 2004) – as well as negative reactions like cooling down the contact or expressing disapproval

below... 
targeted to be nationally representative.

#### URL

67. Feb 2020

68. daaronr.github.io

1. A dissertation or final-year project allows you to explore your aptitude for, and interest in, doing economic research

This should be a separate bullet point. This is big. If you are going to do postgraduate study it WILL involve research.

Aside from the academic track, much professional work involves research.

#### URL

69. www1.essex.ac.uk

1. James, Gareth; Witten, Daniela; Hastie, Trevor; Tibshirani, Robert (2013). An Introduction to Statistical Learning: with Applications in R. New York: Springer (Springer Texts in Statistics).

This would seem to overlap the ML module?

#### URL

70. www1.essex.ac.uk

1. - construct factorial experiments in blocks;

Did they get into power calculation and design efficiency? This seems more general statistics and less experimetrics. OK, it doesn't say 'design'

#### URL

71. www1.essex.ac.uk

1. Overleaf / LaTeX

Not sure students need to know too much LaTeX anymore... markdown/R Markdown is a lot simpler, and using it with CSS and HTML bits is very flexible. (although it still helps to know how to code maths in LaTeX)

#### URL

72. declaredesign.org

1. If you can avoid assigning subjects to treatments by cluster, you should.

Sometimes clustered assignment is preferable if mixing treatments in a cluster --> contaminated treatments (e.g., because participants communicate)

2. fit_simple <- lm(Y_simple ~ Z_simple, data=hec)

'regress' the outcome on the treatment. Yields the ATE even with heterogeneity if treatment is equiprobable.

3. 
This complication is typically addressed in one of two ways: "controlling for blocks" in a regression context, or inverse probability weights (IPW), in which units are weighted by the inverse of the probability that the unit is in the condition that it is in.

I don't think these are equivalent. I believe only the latter recovers the ATE under heterogeneity... but this is just my memory.

4. The gains from a blocked design can often be realized through covariate adjustment alone.

I believe Athey and Heckman come out strongly in favor of blocking instead of covariate adjustment.

5. Of course, such heterogeneity could be explored if complete random assignment had been used, but blocking on a covariate defends a researcher (somewhat) against claims of data dredging.

A preregistration plan can accomplish this without any cost.

6. In this simulation complete random assignment led to a -0.59% decrease in sampling variability. This decrease was obtained with a small design tweak that costs the researcher essentially nothing.

This is not visible in the html. You specified too few digits.

Also, the results would be more striking if you had a smaller data set.

7. with(hec, mean(Y1 - Y0))

ATE with heterogeneity?

8. # Reveal observed potential outcomes

He means 'the outcome observed given random assignment'

9. when deploying a survey experiment on a platform like Qualtrics, simple random assignment is the only possibility due to the inflexibility of the built-in random assignment tools.

That's not entirely true

10. Since you need to know N beforehand in order to use simple_ra(), it may seem like a useless function.

this is a confusing sentence

11. depending on the random assignment, a different number of subjects might be assigned to each group.

In large samples this won't usually matter much... but still worth avoiding, to make power as high as possible.

12. 
Y0 <- rnorm(n = N, mean = (2*as.numeric(Hair) + -4*as.numeric(Eye) + -6*as.numeric(Sex)), sd = 5)

linear heterogeneity of baseline and of TE

13. hec <- within(hec, {

why does he use 'within' rather than mutate?

#### URL

73. community.spotify.com

1. Solution! Re: Export To Excel (slipstream42, 2017-07-31): another csv export link, it is quite nice: https://rawgit.com/watsonbox/exportify/master/exportify.html (code on github)

works great

#### URL

74. www.vox.com

1. As you can see, having one fewer child still comes out looking like a solid way to reduce carbon emissions — but it's absolutely nowhere near as effective as it first seemed. It no longer dwarfs the other options. On this model, instead of having one fewer kid, you can skip a couple of transatlantic flights and you'll save the same amount of carbon. That seems like a way more manageable sacrifice if you're a young person who longs to be a parent.

Even if I believed the highly optimistic predictions of very strong climate policy in the USA (which I don't), having one fewer child still reduces emissions each year more than twice as much as living car free or avoiding a trans-atlantic flight every year.

And they state it as "instead of having one fewer kid, you can skip a couple of transatlantic flights and you'll save the same amount of carbon." ... but this requires each parent to forgo 2 transatlantic flights they would have taken every year for the rest of their life, if I understand correctly.

#### URL

1. commas <- function(...) stringr::str_c(..., collapse = ", ")

no braces needed for a function on a single line

#### URL

1. it's the same as the input!

because we want to modify columns in place

2. 
Compute the mean of every column in mtcars.

output <- vector("double", ncol(mtcars))  # 1. output
for (i in seq_along(mtcars)) {            # 2. sequence
  output[[i]] <- mean(mtcars[[i]])        # 3. body
}

#### URL

77. www.fmassari.com

1. The rational expectation and the learning-from-price literatures argue that equilibrium prices are accurate because they reveal and aggregate the information of all market participants. The Market Selection Hypothesis, MSH, proposes instead that prices become accurate because they eventually reflect only the beliefs of the most accurate agent. The Wisdom of the Crowd argument, WOC, however suggests that market prices are accurate because individual, idiosyncratic errors are averaged out by the price formation mechanism

Three models (arguments for) drivers of market efficiency

#### URL

1. external fundraising page

What is meant by 'external'?

2. Fundraising Dashboard / Participant Center Visited: When a person visits their fundraising dashboard or participant center

who is the 'person' visiting the dashboard here?

3. Fundraising Page Created / Registration Complete: Upon completion of the last step of the registration flow that creates a fundraising page

by whom? which ones can be detected?

#### URL

79. crumplab.github.io

1. Contributing to the textbook: Use Hypothes.is, an amazing tool for annotating the web. Go to Hypothes.is, and "get-started"

To nudge people slightly towards this, you can add to the index.Rmd:

    includes:

And in that header_include.html file, include:

    <script async defer src="https://hypothes.is/embed.js"></script>

I do this here in my Writing Economics book

#### URL

80. dplyr.tidyverse.org

1. 
head(as.data.frame(nasa))
#>        lat   long month year cloudhigh cloudlow cloudmid ozone pressure
#> 1 36.20000 -113.8     1 1995      26.0      7.5     34.5   304      835
#> 2 33.70435 -113.8     1 1995      20.0     11.5     32.5   304      940
#> 3 31.20870 -113.8     1 1995      16.0     16.5     26.0   298      960
#> 4 28.71304 -113.8     1 1995      13.0     20.5     14.5   276      990
#> 5 26.21739 -113.8     1 1995       7.5     26.0     10.5   274     1000
#> 6 23.72174 -113.8     1 1995       8.0     30.0      9.5   264     1000
#>   surftemp temperature
#> 1    272.7       272.1
#> 2    279.5       282.2
#> 3    284.7       285.2
#> 4    289.3       290.7
#> 5    292.2       292.7
#> 6    294.1       293.6

unrolling a tbl_cube into 2 dimensions (data.frame)

#### URL

81. www.r-bloggers.com

1. Now this can be simplified using the new {{}} syntax:

summarise_groups <- function(dataframe, grouping_var, column_name){
  dataframe %>%
    group_by({{grouping_var}}) %>%
    summarise({{column_name}} := mean({{column_name}}, na.rm = TRUE))
}

Much easier and cleaner! You still have to use the := operator instead of = for the column name however. Also, from my understanding, if you want to modify the column names, for instance in this case return "mean_height" instead of height, you have to keep using the enquo()–!! syntax.

curly curly syntax

#### URL

82. www.sciencedirect.com

1. (1) "How likely do you think it is that this hypothesis will be replicated (on a scale from 0% to 100%)?" (2) "How large do you think the standardized effect size (in terms of Cohen's d) from the replication will be, relative to that in the original paper (on a scale from −50% to 200%)?", and (3) "How well do you know this topic? (Not at all; Slightly; Moderately; Very well; Extremely well.)"

pre-market survey, Many Labs 2

2. 0.506 (0.532)

can we find any measures of dispersion here?

3. For the 12 studies with an original p < 0.005, 10 (83%) replicated. For the 12 studies with an original p > 0.005, only 1 (8%) replicated. 
Further work is needed to test if prediction markets outperform predictions based only on the initial p-value, and to test if the market also aggregates other information important for reproducibility.

p-values may capture all the information?

#### URL

83. daaronr.github.io

1. At least one of my un-named research co-authors will heartily agree with this statement.

Hi Dave!

#### URL

84. Jan 2020

85. wilsonmar.github.io

1. ps f

this doesn't run on my system. However, ps -f seems to list processes started in the terminal and ps -ef lists all (?) processes

2. List previous command history: history

useful

#### URL

86. www.freecodecamp.org

1. It's worth noting that the first line of the script starts with #!. It is a special directive which Unix treats differently.

The hash-bang (#!) at the top of a bash script is NOT a comment... it is important

#### URL

87. happygitwithr.com

1. Happy Git and GitHub for the useR

Oska: can you see this note?

#### URL

88. daaronr.github.io

1. 8 Writing, argumentation, presentation, and (Economic) logic: Being clear and making sense

List of words and phrases to avoid -- what are your biggest pet peeves in student writing?

#### URL

89. pubs.aeaweb.org

1. Prediction in Policy

Relevance to my own project: can we predict who has the most to gain from admission to an HE institution. (But I'm limited in what I can report)

2. Suppose the algorithm chooses a tree that splits on education but not on age. Conditional on this tree, the estimated coefficients are consistent. But that does not imply that treatment effects do not also vary by age, as education may well covary with age; on other draws of the data, in fact, the same procedure could have chosen a tree that split on age instead

a caveat

3. 
These heterogeneous treatment effects can be used to assign treatments; Misra and Dubé (2016) illustrate this on the problem of price targeting, applying Bayesian regularized methods to a large-scale experiment where prices were randomly assigned

todo -- look into the implication for treatment assignment with heterogeneity

4. Chernozhukov, Chetverikov, Demirer, Duflo, Hansen, and Newey (2016) take care of high-dimensional controls in treatment effect estimation by solving two simultaneous prediction problems, one in the outcome and one in the treatment equation.

this seems similar to my idea of regularizing on only a subset of the variables

5. In particular, a set of papers has already introduced regularization into the first stage in a high-dimensional setting, including the LASSO (Belloni, Chen, Chernozhukov, and Hansen 2012) and ridge regression (Carrasco 2012; Hansen and Kozbur 2014

worth referencing

6. These same techniques applied here result in split-sample instrumental variables (Angrist and Krueger 1995) and "jackknife" instrumental variables

some classical solutions to IV bias are akin to ML solutions

7. Understood this way, the finite-sample biases in instrumental variables are a consequence of overfitting.

traditional 'finite sample bias of IV' is really overfitting

8. Even when we are interested in a parameter β̂, the tool we use to recover that parameter may contain (often implicitly) a prediction component. Take the case of linear instrumental variables understood as a two-stage procedure: first regress x = γ′z + δ on the instrument z, then regress y = β′x + ε on the fitted values x̂. The first stage is typically handled as an estimation step. 
But this is effectively a prediction task: only the predictions x̂ enter the second stage; the coefficients in the first stage are merely a means to these fitted values.

first stage of IV -- handled as an estimation problem, but really it's a prediction problem!

9. Prediction in the Service of Estimation

This is especially relevant to economists across the board, even the ML skeptics

10. New Data

The first application: constructing variables and meaning from high-dimensional data, especially outcome variables

- satellite images (of energy use, lights etc) --> economic activity
- cell phone data, Google Street View to measure wealth
- extract similarity of firms from 10k reports
- even traditional data ... matching individuals in historical censuses

11. Zhao and Yu (2006) who establish asymptotic model-selection consistency for the LASSO. Besides assuming that the true model is "sparse"—only a few variables are relevant—they also require the "irrepresentable condition" between observables: loosely put, none of the irrelevant covariates can be even moderately related to the set of relevant ones.

Basically unrealistic for microeconomic applications imho

12. First, it encourages the choice of less complex, but wrong models. Even if the best model uses interactions of number of bathrooms with number of rooms, regularization may lead to a choice of a simpler (but worse) model that uses only number of fireplaces. Second, it can bring with it a cousin of omitted variable bias, where we are typically concerned with correlations between observed variables and unobserved ones. Here, when regularization excludes some variables, even a correlation between observed variables and other observed (but excluded) ones can create bias in the estimated coefficients.

Is this equally a problem for procedures that do not assume sparsity, such as the Ridge model?

13. 
the variables are correlated with each other (say the number of rooms of a house and its square-footage), then such variables are substitutes in predicting house prices. Similar predictions can be produced using very different variables. Which variables are actually chosen depends on the specific finite sample.

Lasso-chosen variables are unstable because of what we usually call 'multicollinearity.' This presents a problem for making inferences from estimated coefficients.

14. Through its regularizer, LASSO produces a sparse prediction function, so that many coefficients are zero and are "not used"—in this example, we find that more than half the variables are unused in each run

This is true but they fail to mention that LASSO also shrinks the coefficients on variables that it keeps towards zero (relative to OLS). I think this is commonly misunderstood (from people I've spoken with).

15. One obvious problem that arises in making such inferences is the lack of standard errors on the coefficients. Even when machine-learning predictors produce familiar output like linear functions, forming these standard errors can be more complicated than seems at first glance as they would have to account for the model selection itself. In fact, Leeb and Pötscher (2006, 2008) develop conditions under which it is impossible to obtain (uniformly) consistent estimates of the distribution of model parameters after data-driven selection.

This is a very serious limitation for Economics academic work.

16. First, econometrics can guide design choices, such as the number of folds or the function class.

How would Econometrics guide us in this?

17. 
These choices about how to represent the features will interact with the regularizer and function class: A linear model can reproduce the log base area per room from log base area and log room number easily, while a regression tree would require many splits to do so.

The choice of 'how to represent the features' is consequential ... it's not just 'throw it all in' (kitchen sink approach)

18. Table 2: Some Machine Learning Algorithms

This is a very helpful table!

19. Picking the prediction function then involves two steps: The first step is, conditional on a level of complexity, to pick the best in-sample loss-minimizing function. The second step is to estimate the optimal level of complexity using empirical tuning (as we saw in cross-validating the depth of the tree).

ML explained while standing on one leg.

20. Regularization combines with the observability of prediction quality to allow us to fit flexible functional forms and still find generalizable structure.

But we can't really make statistical inferences about the structure, can we?

21. This procedure works because prediction quality is observable: both predictions ŷ and outcomes y are observed. Contrast this with parameter estimation, where typically we must rely on assumptions about the data-generating process to ensure consistency.

I'm not clear what the implication they are making here is. Does it in some sense 'not work' with respect to parameter estimation?

22. In empirical tuning, we create an out-of-sample experiment inside the original sample.

remember that tuning is done within the training sample

23. Performance of Different Algorithms in Predicting House Values

Any reason they didn't try a Ridge or an Elastic net model here? My instinct is that these will beat LASSO for most Economic applications.

24. We consider 10,000 randomly selected owner-occupied units from the 2011 metropolitan sample of the American Housing Survey. 
In addition to the values of each unit, we also include 150 variables that contain information about the unit and its location, such as the number of rooms, the base area, and the census region within the United States. To compare different prediction techniques, we evaluate how well each approach predicts (log) unit value on a separate hold-out set of 41,808 units from the same sample. All details on the sample and our empirical exercise can be found in an online appendix available with this paper at http://e-jep.org

Seems a useful example for trying/testing/benchmarking. But the link didn't work for me. Can anyone find it? Is it interactive? (This is why I think papers should be html and not pdfs...)

25. Making sense of complex data such as images and text often involves a prediction pre-processing step.

In using 'new kinds of data' in Economics we often need to do a 'classification step' first

26. The fundamental insight behind these breakthroughs is as much statistical as computational. Machine intelligence became possible once researchers stopped approaching intelligence tasks procedurally and began tackling them empirically.

I hadn't thought about how this unites the 'statistics to learn stuff' part of ML and the 'build a tool to do a task' part. Well-phrased.

27. Why not also use it to learn something about the "underlying model": specifically, why not use it to make inferences about the underlying data-generating process?

(they give reasons why not)

28. Economic theory and content expertise play a crucial role in guiding where the algorithm looks for structure first. This is the sense in which "simply throw it all in" is an unreasonable way to understand or run these machine learning algorithms.

At least we (Economists) hope this is the case ... motivated reasoning?

29. 
available finite-sample guidance on its implementation—such as heuristics for the number of folds (usually five to ten) or the "one standard-error rule" for tuning the LASSO (Hastie, Tibshirani, and Friedman 2009)—has a more ad-hoc flavor.

It sounds like there are big unknowns... a lot is still 'rules of thumb'
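The "one standard-error rule" heuristic mentioned in that last annotation can be sketched in a few lines. This is a minimal illustration, not the Hastie et al. implementation: the function name and the toy numbers below are made up, and in practice the means and standard errors would come from an actual k-fold cross-validation run over a tuning grid.

```python
def one_se_rule(complexities, cv_means, cv_ses):
    """Pick the least complex model whose cross-validated error is
    within one standard error of the minimum CV error.

    complexities: complexity levels, sorted from simplest to most complex
    cv_means:     mean cross-validated error at each level
    cv_ses:       standard error of the CV error at each level
    """
    # Index of the complexity level with the smallest mean CV error.
    best_idx = min(range(len(cv_means)), key=lambda i: cv_means[i])
    threshold = cv_means[best_idx] + cv_ses[best_idx]
    # Scan from simplest to most complex; the first level inside the
    # one-SE band wins (the minimizer itself always qualifies).
    for c, m in zip(complexities, cv_means):
        if m <= threshold:
            return c

# Toy example: a depth-3 tree minimizes CV error, but depth 2 is
# within one standard error of it, so the rule picks depth 2.
depths = [1, 2, 3, 4, 5]
means  = [0.40, 0.31, 0.30, 0.32, 0.35]
ses    = [0.02, 0.02, 0.02, 0.02, 0.02]
print(one_se_rule(depths, means, ses))  # -> 2
```

The point of the heuristic is exactly the "ad-hoc flavor" the quote complains about: it trades a little in-sample fit for a simpler model, on the reasoning that CV error estimates are themselves noisy.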
begin
  require File.expand_path('../testrequest', __FILE__)
  require 'rack/handler/fastcgi'

  describe Rack::Handler::FastCGI do
    extend TestRequest::Helpers

    @host = '127.0.0.1'
    @port = 9203

    if `which lighttpd` && !$?.success?
      raise "lighttpd not found"
    end

    # Keep this first.
    $pid = fork {
      ENV['RACK_ENV'] = 'deployment'
      ENV['RUBYLIB'] = [
        File.expand_path('../../lib', __FILE__),
        ENV['RUBYLIB'],
      ].compact.join(':')

      Dir.chdir(File.expand_path("../cgi", __FILE__)) do
        exec "lighttpd -D -f lighttpd.conf"
      end
    }

    should "respond" do
      sleep 1
      GET("/test")
      response.should.not.be.nil
    end

    should "respond via rackup server" do
      GET("/sample_rackup.ru")
      status.should.equal 200
    end

    should "be a lighttpd" do
      GET("/test.fcgi")
      status.should.equal 200
      response["SERVER_SOFTWARE"].should =~ /lighttpd/
      response["HTTP_VERSION"].should.equal "HTTP/1.1"
      response["SERVER_PROTOCOL"].should.equal "HTTP/1.1"
      response["SERVER_PORT"].should.equal @port.to_s
      response["SERVER_NAME"].should.equal @host
    end

    should "have rack headers" do
      GET("/test.fcgi")
      response["rack.version"].should.equal [1,3]
      response["rack.multithread"].should.be.false
      response["rack.multiprocess"].should.be.true
      response["rack.run_once"].should.be.false
    end

    should "have CGI headers on GET" do
      GET("/test.fcgi")
      response["REQUEST_METHOD"].should.equal "GET"
      response["SCRIPT_NAME"].should.equal "/test.fcgi"
      response["REQUEST_PATH"].should.equal "/"
      response["PATH_INFO"].should.equal ""
      response["QUERY_STRING"].should.equal ""
      response["test.postdata"].should.equal ""

      GET("/test.fcgi/foo?quux=1")
      response["REQUEST_METHOD"].should.equal "GET"
      response["SCRIPT_NAME"].should.equal "/test.fcgi"
      response["REQUEST_PATH"].should.equal "/"
      response["PATH_INFO"].should.equal "/foo"
      response["QUERY_STRING"].should.equal "quux=1"
    end

    should "have CGI headers on POST" do
      POST("/test.fcgi", {"rack-form-data" => "23"}, {'X-test-header' => '42'})
      status.should.equal 200
      response["REQUEST_METHOD"].should.equal "POST"
      response["SCRIPT_NAME"].should.equal "/test.fcgi"
      response["REQUEST_PATH"].should.equal "/"
      response["QUERY_STRING"].should.equal ""
      response["HTTP_X_TEST_HEADER"].should.equal "42"
      response["test.postdata"].should.equal "rack-form-data=23"
    end

    should "support HTTP auth" do
      GET("/test.fcgi", {:user => "ruth", :passwd => "secret"})
      response["HTTP_AUTHORIZATION"].should.equal "Basic cnV0aDpzZWNyZXQ="
    end

    should "set status" do
      GET("/test.fcgi?secret")
      status.should.equal 403
      response["rack.url_scheme"].should.equal "http"
    end

    # Keep this last.
    should "shutdown" do
      Process.kill 15, $pid
      Process.wait($pid).should.equal $pid
    end
  end

rescue RuntimeError
  $stderr.puts "Skipping Rack::Handler::FastCGI tests (lighttpd is required). Install lighttpd and try again."
rescue LoadError
  $stderr.puts "Skipping Rack::Handler::FastCGI tests (FCGI is required). `gem install fcgi` and try again."
end
# Everclear - Now That Its Over tab

```
Now That It's Over
Words By Art P. Alexakis
Music By Everclear
Songs From An American Movie Volume 1: Learning How To Smile

Yeah Right!
(Drum Intro x2)

(With some mandolin-y type effects)
C
e|--------------------| x4 (with sliding effects/pick scrapes overdubbed)
B|----------1---------|
G|------0------0---0--|
D|----2---2------2----|
A|--3-----------------|
E|--------------------|
One... Two... Three... Four...

C D F
e|-------------------(x4)|---------2---------------1--------|
B|----------1------------|-----3-----3---3-----1-----1---1--|
G|------0-----0---0------|---2---2-----2-----2---2-----2----|
D|----2---2-----2--------|-0---------------3----------------|
A|--3--------------------|----------------------------------|
E|-----------------------|----------------------------------|
Break down, Shake for me
Nothing ever is the way you want it to be
Nothing even tastes right now that it's over

C
e|--------------------| x2
B|----------1---------|
G|------0------0---0--|
D|----2---2------2----|
A|--3-----------------|
E|--------------------|

C D F
e|-------------------(x4)|---------2---------------1--------|
B|----------1------------|-----3-----3---3-----1-----1---1--|
G|------0-----0---0------|---2---2-----2-----2---2-----2----|
D|----2---2-----2--------|-0---------------3----------------|
A|--3--------------------|----------------------------------|
E|-----------------------|----------------------------------|
Break down, shake for me
Don't write words unless you want me to read them
Nothing really matters now that it's over

G F
e|-----------------------------------------------|
B|----------3----------------1-------------------|
G|------0-----0---0------2-----2---3-------------|
D|----0---0-----0------3---3-----3---------------|
A|-----------------------------------------------|
E|--3----------------1---------------------------|
Maybe we can be friends now that we're older. We can have

G G# G
e|-----------------------------------------------|
B|----------3------------------------------------|
G|------0-----0---0------------------------------|
D|----0---0-----0--------6-------5---------------|
A|---------------------6-------5-----------------|
E|--3----------------4-----4/3------3------------|
fun like we did in the early days. Now that it's

C
e|----------------------|
B|----------1-----------|
G|------0------0---0----|
D|----2---2------2------|
A|--3----------------3~-|
E|----------------------|
over. Yeah right!

C D F C
e|-----------------------------------------------| (with the picked riff
B|-----------------------------------------------| from the 1st verse
G|-5------------5------------7-7/10--10\5-5------| played underneath)
D|-5------------5------------7-7/10--10\5-5------|
A|-3-(let ring)-3-(let ring)-5-5/8---8-\3-3------|
E|-----------------------------------------------|
Break down, shake for me
Nothing ever seems the way it ought to be
Nothing ever seems right now that it's over

C G F G G# G C
e|----------------------------------------------------------------------|
B|----------------------------------------------------------------------|
G|-5\-----------------------------------------------------------------5-|
D|-5\-5----------------3--------------/5---------------6-------5------5-|
A|-3\-5----------------3--------------/5---------------6-------5------3-|
E|----3-(let ring)-----1-(let ring)---/3---------------4-------3--------|
|                                                                      |
e|----------------------------------------------------------------------|
B|------------3---------------1---------------3-------------------------|
G|--------0-----0---0-----2-----2---3-----0-----0---0-------------------|
D|------0---0-----0-----3---3-----3-----0---0-----0--------6-------5----|
A|-------------------------------------------------------6-------5------|
E|----3---------------1---------------3----------------4-----4/3------3-|
Yeah, now maybe we can be friends
Maybe we can be closer
We can have fun like we did in the old days
Now that it's over

Bb F
e|--------------------------------|
B|--------------------------------|
G|-3--------------10------------\-|
D|-3--------------10------------\-|
A|-1-(let ring)---8--(let ring)-\-|
E|--------------------------------|
Oh yeah...
[ Tab from: http://www.guitaretab.com/e/everclear/29570.html ]

C Bb F C
e|--------------------------------------------------------| x3
B|--------------------------------------------------------|
G|--5------------3---------------------------5------------|
D|--5------------3--------------3------------5------------|
A|--3-(let ring)-1-(let ring)---3-(let ring)-3-(let ring)-|
E|------------------------------1-------------------------|
My bad dreams just don't seem the same, baby without you
Oh, I wish you were willing to accept the blame for everything you do
My nightmares just don't scare me now, baby without you, yeah yeah

C Bb F
e|--------------------------------------------------------|
B|--------------------------------------------------------|
G|--5------------3----------------------------------------|
D|--5------------3--------------3------------3------------|
A|--3-(let ring)-1-(let ring)---3-(let ring)-3-(let ring)-|
E|------------------------------1------------1------------|
I wish that I could find the words to tell
In the best way possible, you and your friends to go to hell

e|------------------------------|
B|------------------------------|
G|--8-10--9---------------------|
D|------------------\\\\-pick---|
A|------------------\\\\-scrape-|
E|------------------\\\\--------|
Yeah right!

C D F C
e|-----------------------------------------------------|(with picked riff
B|-----------------------------------------------------| from first verse
G|-5------------5------------7---------10--10\5-5------| played underneath)
D|-5------------5------------7---------10--10\5-5------|
A|-3-(let ring)-3-(let ring)-5---------8---8-\3-3------|
E|-----------------------------------------------------|
|                                                     |
|             synth arranged for guitar               |
e|----8-6-----------8-6-------------5------------------|
B|--------8-6-5---------8-6-5---6-8---8--6-5~----------|
G|-5------------5-------------7------------------------|
D|-----------------------------------------------------|
A|-----------------------------------------------------|
E|-----------------------------------------------------|

C D F C
e|-----------------------------------------------------|(with picked riff
B|-----------------------------------------------------| from first verse
G|-5------------5------------7-------7/10--10\5-5------| played underneath)
D|-5------------5------------7-------7/10--10\5-5------|
A|-3-(let ring)-3-(let ring)-5-------5/8---8-\3-3------|
E|-----------------------------------------------------|
Whoa, breakup time is never easy to do
Nothing ever ends the way you want it to
Nothing seems to make sense now that it's over

G F G F G F G G# G C
e|----------------------------------------------------|
B|----------------------------------------------------|
G|------------------------------------------5---------|
D|-5----3----5----3----5----3---/5----6--5--5---------|
A|-5----3----5----3----5----3---/5----6--5--3---------|
E|-3----1----3----1----3----1---/3----4--3------------|
Yeah, now maybe we can be friends
Yeah, 
now that you're leaving\nYou can be nice to me\nMaybe I'm dreaming\nI am a lot better now than just okay\nMaybe I am just waking up in my own way\n\nC Bb F C\ne|--------------------------------------------------------| x5\nB|--------------------------------------------------------|\nG|--5------------3---------------------------5------------|\nD|--5------------3--------------3------------5------------|\nA|--3-(let ring)-1-(let ring)---3-(let ring)-3-(let ring)-|\nE|------------------------------1-------------------------|\nNow that it's over\nNow that it's over\n\nMy bad dreams just don't seem the same, baby without you\nI wish you were willing to accept the blame,\nyeah for all the shitty things you do\nNightmares just don't scare me now, baby without you\n\nC Bb F\ne|----------------------------------------------------------|\nB|----------------------------------------------------------|\nG|--5------------3------------------------------------------|\nD|--5------------3--------------3------------3----\\\\\\pick---|\nA|--3-(let ring)-1-(let ring)---3-(let ring)-3----\\\\\\scrape-|\nE|------------------------------1------------1----\\\\\\-------|\nI wish that I could find the words to tell\nYou to politely go fuck yourself\n\ne|--------------------| x8 (with pick scrapes over the top)\nB|----------1---------|\nG|------0------0---0--|\nD|----2---2------2----|\nA|--3-----------------|\nE|--------------------|\nYeah, now that it's over\nNow that it's over\nNow that it's over\nNow that it's over\nNow that it's over\n\nSuggestion from \"Chook R.V.\" (chookmv@hotmail.com)\n\nHey, I have a correction about your tab for Now That it's over\nIt's played an octave higher with this pattern of chord\n\n| | | | | |\n| | | | | |\n| | | | o o\n| | | o | |\n| | o | | |\nIt is 
played:\n\n1|----------------8----------|------------------10----------|\n2|----------8--------8-----8-|-----------10--------10-----10|\n3|-------9-----9--------9----|6x------11----11---------11---|\n4|---10----------------------|----12------------------------|\n\n1|-------------13------------|\n2|-------13-------13-----13--|\n3|----14----14--------14-----|\n4|-15------------------------|\n\n```\nRelated for Now That Its Over tab","date":"2017-01-23 04:53:16","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8029426336288452, \"perplexity\": 256.58970887057325}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-04\/segments\/1484560282110.46\/warc\/CC-MAIN-20170116095122-00552-ip-10-171-10-70.ec2.internal.warc.gz\"}"}
null
null
\section{Introduction} \label{intro} The Black Hole Candidate (BHC) IGR~J17091$-$3624 was discovered by {\it INTEGRAL}/IBIS during a Galactic Centre observation on 2003 April 14--15~\citep{Atel1}. At the onset of the discovery outburst, the source showed a hard spectrum with a flux of $\sim$20 mCrab in the 40--100 keV energy range. The analysis of IBIS, JEM-X, and {\it RXTE}/PCA data of the whole outburst~\citep{Cap,Lut,Lut2} revealed an indication of a hysteresis-like behaviour. The presence of a hot disc blackbody emission component during the softening of the X-ray emission of the source was also unveiled. After the {\it INTEGRAL} discovery, IGR~J17091$-$3624 was searched for in the archival data of both TTM-KVANT~\citep{Atel2} and {\it BeppoSAX}/WFC~\citep{Atel4}. In the former archive, one outburst was discovered dating back to 1994 and reaching a flux of 10 mCrab in the 3-30 keV energy band; the analysis of {\it BeppoSAX}/WFC data revealed that a second outburst had occurred in 2001, reaching a flux of 14$\div$20 mCrab (2-10 keV). IGR~J17091$-$3624 lies at 9.6$\arcmin$ from another transient X-ray binary, IGR~J17098$-$3628, discovered on 2005 March 24~\citep{Atel444} when it underwent a 4-year-long outburst~\citep{Cap2}. On 2006 August 29 and 2007 February 19, two {\it XMM-Newton} observations of the region around these two sources were performed. While IGR~J17098$-$3628 was detected in a relatively bright state in both observations, IGR~J17091$-$3624 was not detected and an X-ray upper limit of 7$\times$10$^{32}$ erg s$^{-1}$ was obtained \citep[assuming a distance of 8 kpc;][]{Cap2}. The refined position of IGR~J17091$-$3624 provided by \citet{ATEL1140} ruled out the tentative radio counterpart previously proposed for the source~\citep{Atel3,Pan}.
A re-analysis of the archival radio observations performed 9 days after the source discovery by IBIS in 2003 enabled the identification of a faint transient radio source (at the sub-mJy level at 5 GHz) that showed a flux increase in the subsequent two weeks and an inverted spectrum, a signature of a compact jet~\citep{Cap2}. This was consistent with the Low/Hard spectral state (hereafter LHS) observed by {\it INTEGRAL} in the same period~\citep{Cap}. The source behaviour during the 2007 observation campaign was typical of a BHC in outburst, even if the relatively low X-ray flux of the source (0.5--10\,keV peak flux of $\sim$2$\times$10$^{-9}$ erg~cm$^{-2}$~s$^{-1}$) hindered a detailed spectral evolution study~\citep{Cap2}. At the end of January 2011 the {\it Swift}/BAT hard X-ray transient monitor reported renewed activity from IGR~J17091$-$3624. The source flux increased from 20 mCrab on January 28 up to 60 mCrab on February 3 in the 15-50 keV energy range~\citep{Atel3144,Atel3148}. The corresponding XRT spectrum obtained with a ToO observation was well described by an absorbed power law with a photon index of 1.73$\pm$0.29~\citep{Atel3148}. On 2011 February 7, the region around IGR~J17091$-$3624 was also observed by the IBIS/ISGRI and JEM-X telescopes on board the {\it INTEGRAL} satellite. The estimated source flux in the 20-100 keV energy range was 120 mCrab. The combined ISGRI+JEM-X spectrum (5-200 keV) could be well described by an absorbed cut-off power law model with a photon index of $\sim$1.4 and a high-energy cutoff of about 110 keV. This suggested that the source was in the LHS~\citep{Atel3159}. Follow-up radio observations carried out with the ATCA telescope measured a flat spectrum \citep{Atel3150,Atel3167,Rodriguez} associated with self-absorbed compact jets, as expected in accreting black holes in the LHS.
Later on, \citet{Rodriguez} also reported the detection of a discrete jet ejection event, usually observed when a BHC undergoes a transition from the Hard Intermediate State (hereafter HIMS) to the Soft Intermediate State (hereafter SIMS). A 0.1 Hz QPO, increasing in frequency with the source flux and spectral softening, was revealed by both~\citet{Atel3168} and \citet{Atel3179}. These findings motivated a long monitoring campaign that was carried out with {\it Swift}/XRT, starting on February 28. The XRT observations were planned to be simultaneous with the {\it INTEGRAL} pointings already scheduled in the direction of the source, in order to ensure the broadest possible energy coverage (0.3-200 keV) during the entire outburst. As reported by~\citet{Atel3203}, on February 28 the XRT+IBIS joint spectrum resulted in a typical High Soft State (HSS) shape, with a prominent disc black body component (kT$_{in}\sim$1\,keV) and a power-law photon index of 2.2$\pm$0.2. No high-energy cut-off was present up to 200 keV. On 2011 March 14 (MJD 55634) a $\sim$10 mHz QPO was detected in a 3.5 ks {\it RXTE} observation~\citep{Atel3225}. One week later, {\it RXTE}/PCA showed a continuous progression of quasi-periodic flare-like events occurring at a rate between 25 and 30 mHz. This kind of variability resembles the ``heartbeat'' variation observed in the Black Hole (BH) binary GRS~1915$+$105 \citep{Atel3230,Atel3418,Pahari}. \citet{Altamirano} reported a detailed study of the behaviour of the flare-like events of IGR~J17091$-$3624 during the first 180 days of the outburst. This study classified the different types of flares with the same scheme used by~\citet{Belloni_a} for GRS~1915$+$105. In this paper we report on the {\it Swift} and {\it INTEGRAL} data analysis of the new outburst of IGR~J17091$-$3624 that started at the end of January 2011.
\section{Data reduction and analysis} \label{data} The XRT ToO follow-up observations were performed, when possible, simultaneously with the {\it INTEGRAL} ones~\citep{Atel3159}. {\it INTEGRAL} data were collected in the framework of the Galactic bulge observations\footnotemark\footnotetext[1]{http://integral.esac.esa.int/BULGE} (public data) and the open time observation of the RX J1713.7-3946 field. Due to the long duration of the outburst, {\it Swift}/XRT data were also collected during the period in which the region around IGR~J17091$-$3624 was unobservable by {\it INTEGRAL}. In this paper we made use of the whole available data set of {\it INTEGRAL} and {\it Swift} observations performed from 28 January to 14 August 2011. The XRT observations were taken in windowed timing mode in order to avoid pile-up effects. Each observation was composed of two or more segments. We report only the analysis of the first segment of each XRT observation, since the results from the other segments were always consistent with those from the first one. For the XRT data analysis we followed standard procedures~\citep{Burrows} and the technique summarized in~\citet{Bozzo}. XRT light curves and hardness-intensity diagrams were obtained by extracting events in two energy bands, 0.3-4 keV and 4-10 keV. For the {\it INTEGRAL} data analysis, we used the latest release of the standard Offline Scientific Analysis, OSA version 9.0, distributed by the ISDC~\citep{Courvoisier}, and the latest response matrices available. In particular, the IBIS response matrices were produced using the Crab observations closest in time to the 2011 outburst of IGR~J17091$-$3624. Our {\it INTEGRAL} analysis was focused on ISGRI~\citep{Lebrun}, the low-energy detector of the $\gamma$-ray telescope IBIS~\citep{Ubertini}, and on the X-ray monitor JEM-X~\citep{Lund}.
Unfortunately, due to the {\it INTEGRAL} observing strategy combined with the small JEM-X field of view (FOV), IGR~J17091$-$3624 was not in the JEM-X FOV in most of the observations. During the {\it INTEGRAL} observations both JEM-X modules were switched on. However, for the data analysis we used the second module (JEM-X2) and checked the consistency with module~1. The ISGRI and JEM-X spectra were extracted in the 20-200 keV and 3-20 keV energy ranges, respectively. A systematic error of 2\% was taken into account for the spectral analysis~\citep[see also][]{Jourdain}. Details on all the {\it Swift} and {\it INTEGRAL} data analysed in this paper are given in Table~\ref{parameters} (columns 1-4). The spectral and timing analyses were performed with the HEASOFT 6.9 package. In particular, the periods of the flare-like events were calculated with the FTOOL {\it efsearch}. The {\it rms} values were estimated from the source light curves by using an {\it ad hoc} developed tool and the {\it IDL Astronomy User's Library} procedures\footnotemark\footnotetext[2]{{http://idlastro.gsfc.nasa.gov/}}. For the {\it rms} calculation, we divided the light curves, extracted in 1\,s bins, into 140\,s chunks. For each segment we computed the fractional {\it rms} after subtracting the expected white noise. We then estimated the fractional {\it rms} of the light curves and its uncertainty from the average and standard deviation of the single determinations. The effective frequency range over which the {\it rms} is integrated is therefore 0.007-0.5 Hz. \scriptsize \onecolumn \begin{landscape} \begin{longtable}{cccccccccccc} \caption[Observations log and spectral parameters of the outburst evolution]{ Observations log and spectral parameters of the outburst evolution. Note: all the errors are at 90\% confidence level.
N is the label of each XRT observation associated with the points of Figure~\ref{HID} and Figure~\ref{RMS}; ID is the XRT observation number; Date is the date of the XRT observation; {\it rms} is the root-mean-square amplitude of each XRT observation averaged over the interval between 0.007 and 0.5 Hz. {\it INTEGRAL} REV indicates, when available, the revolution number of {\it INTEGRAL} simultaneous observations; T$_{in}$ is the inner temperature of the {\it diskbb} model in {\tt XSPEC}; NORM {\it diskbb} is the normalization of the {\it diskbb} model, proportional to the square of the inner disc radius; $\Gamma$ is the power-law photon index and E$_{c}$ is the high-energy cutoff; FLUX$_{(2-10)keV}$ is the unabsorbed flux between 2 and 10 keV.}\label{parameters} \\ \multicolumn{1}{c}{{N}} &\multicolumn{1}{c}{{ID}} & \multicolumn{1}{c}{{Date}} & \multicolumn{1}{c}{{XRT EXP}} &\multicolumn{1}{c}{{{\it INTEGRAL}}} &\multicolumn{1}{c}{{{\it rms}}} &\multicolumn{1}{c}{{T$_{in}$}} &\multicolumn{1}{c}{{NORM}} &\multicolumn{1}{c}{{$\Gamma$}} &\multicolumn{1}{c}{{E$_{c}$}} &\multicolumn{1}{c}{{FLUX$_{(2-10)keV}$}} &\multicolumn{1}{c}{{$\chi^{2}_{red}$(d.o.f.)}}\\ \multicolumn{1}{c}{{-}} &\multicolumn{1}{c}{{-}} & \multicolumn{1}{c}{{MJD}} & \multicolumn{1}{c}{{s}} &\multicolumn{1}{c}{{REV}} &\multicolumn{1}{c}{{cnt}} &\multicolumn{1}{c}{{keV}} &\multicolumn{1}{c}{{\it diskbb}} &\multicolumn{1}{c}{{-}} &\multicolumn{1}{c}{{keV}} &\multicolumn{1}{c}{{ ($\times$10$^{-10}$erg~cm$^{-2}$s$^{-1}$)}} &\multicolumn{1}{c}{{-}}\\ \hline \hline \endfirsthead \multicolumn{3}{c} {\footnotesize\itshape\tablename~\thetable: continued from previous page} \\ \multicolumn{1}{c}{{N}} &\multicolumn{1}{c}{{ID}} &\multicolumn{1}{c}{{Date}} &\multicolumn{1}{c}{{XRT EXP}} &\multicolumn{1}{c}{{{\it INTEGRAL}}} &\multicolumn{1}{c}{{{\it rms}}} &\multicolumn{1}{c}{{T$_{in}$}} &\multicolumn{1}{c}{{NORM}} &\multicolumn{1}{c}{{$\Gamma$}} &\multicolumn{1}{c}{{E$_{c}$}}
&\multicolumn{1}{c}{{FLUX$_{(2-10)keV}$}} &\multicolumn{1}{c}{{$\chi^{2}_{red.}$(d.o.f.)}}\\ \multicolumn{1}{c}{{-}} & \multicolumn{1}{c}{{-}} & \multicolumn{1}{c}{{MJD}} & \multicolumn{1}{c}{{s}} &\multicolumn{1}{c}{{REV}} &\multicolumn{1}{c}{{cnt}} &\multicolumn{1}{c}{{keV}} &\multicolumn{1}{c}{{\it diskbb}} &\multicolumn{1}{c}{{-}} &\multicolumn{1}{c}{{keV}} &\multicolumn{1}{c}{{($\times$10$^{-10}$erg~cm$^{-2}$s$^{-1}$)}} &\multicolumn{1}{c}{{-}}\\ \hline \hline \endhead \multicolumn{3}{c}{{Continued on next page}} \\ \endfoot \endlastfoot \input{tabella_referee.tex} \hline \hline \end{longtable} \end{landscape} \twocolumn \normalsize \section{Results} \label{evo} The 2011 outburst of IGR~J17091$-$3624 can be divided into two main phases: during the first one, the source underwent the typical sequence of events of a transient BH in outburst (described in Section~\ref{evo_1}); during the second part, it exhibited the ``heartbeat'' variability previously observed only in GRS~1915$+$105 (Sections~\ref{heartbeat_p} and~\ref{heartbeat_p2}). Finally, a detailed study of the Compton reflection component and of the iron line upper limit is given in Section~\ref{reflection}. \subsection{The initial phases of the outburst} \label{evo_1} The outburst of IGR~J17091$-$3624 started on MJD$\sim$55598 (Figure~\ref{xrtlctot}) and in about 12 days the X-ray flux of the source (2-10 keV) increased by about 70\%. During this initial phase, the simultaneous {\it Swift} and {\it INTEGRAL} data, when available, could be well fit by an absorbed cutoff power-law model. The source showed a typical hard state spectrum, and the photon index and high-energy cutoff remained consistent within the errors ($\Gamma\sim$1.5, E$_{c}\sim$100 keV; see Table~\ref{parameters} for details). The equivalent hydrogen column density value was consistent with the one reported by~\citet[][]{Atel3148}, N$_{H}$=(1.1$\pm$0.3)$\times$10$^{22}$cm$^{-2}$.
\begin{figure} \includegraphics[angle=0, scale=0.3] {capitanio11_fig1.ps} \hspace{5cm} \caption{Top panel: {\it Swift}/BAT (15-50 keV) count rate (bin time = 1 day). The shadowed parts represent the {\it INTEGRAL} observation periods. Second panel: XRT (0.3-4 keV) count rate (bin time = 4000\,s). Third panel: XRT (4-10 keV) count rate (bin time = 4000\,s). Bottom panel: XRT hardness ratio (defined as the ratio of the 4-10 keV to the 0.3-4 keV count rate).}\label{xrtlctot} \end{figure} Figure~\ref{LHS} shows the combined XRT-ISGRI unfolded LHS spectrum along with the residuals expressed in terms of sigmas (MJD=55603.2, observation n$^{o}$6 in Table~\ref{parameters}). On MJD$\sim$55610.2, the source displayed evidence of the beginning of a spectral transition to the softer state. The flux continued to increase, more rapidly this time: by $\sim$100\% from observation n$^{o}$12 to observation n$^{o}$15 (about 6 days). This time, however, a significant softening of the hard X-ray spectrum (see e.g. the bottom panel of Figure~\ref{xrtlctot}) was observed, together with a drop in the hard X-ray flux. During the transition the spectra became steeper and in about two days the fit required a multicolor disc blackbody component \citep[modeled with {\tt diskbb} in {\tt XSPEC},][hereafter {\tt MDBB}]{Mitsuda}. Figure~\ref{HIS} shows two spectra extracted at intermediate hardness values (HR$\sim$0.2, observations n$^{o}$13 and n$^{o}$14). An acceptable fit to these spectra could be obtained by using an absorbed cutoff power law model. Adding the {\tt MDBB} component, the F-test probability of a chance improvement is $\sim$7\% and $\sim$0.4\% for observations n$^{o}$13 and n$^{o}$14, respectively. Thus it is reasonable to add a {\tt MDBB} component only to the second spectrum.\\ The spectral parameters obtained for spectrum n$^{o}$14 are compatible with the intermediate spectral states of a BHC (see e.g.~\citet{Fender} and \citet{Remillard}, and references therein).
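The chance-improvement probabilities quoted above follow the standard F-test recipe for nested models. A minimal sketch of the test statistic; the chi-square and degrees-of-freedom values below are illustrative placeholders, not the fitted values of Table~\ref{parameters}:

```python
def f_statistic(chi2_old, dof_old, chi2_new, dof_new):
    """F statistic for adding components to a nested spectral model.

    F = (delta chi^2 / delta dof) / (chi^2_new / dof_new); the chance
    probability is then the survival function of the F distribution with
    (delta dof, dof_new) degrees of freedom (e.g. scipy.stats.f.sf).
    """
    delta_chi2 = chi2_old - chi2_new
    delta_dof = dof_old - dof_new
    return (delta_chi2 / delta_dof) / (chi2_new / dof_new)

# Illustrative numbers only (two extra free parameters for the MDBB):
fstat = f_statistic(chi2_old=100.0, dof_old=50, chi2_new=80.0, dof_new=48)
# -> 6.0; a large F (small chance probability) argues for keeping the
# additional component, as for spectrum n. 14 in the text.
```

A small chance probability (here $\sim$0.4\% for observation n$^{o}$14) justifies the extra component, while $\sim$7\% does not.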
During this transition from the hard to the soft state, the inner temperature of the {\tt MDBB} component (kT$_{in}$) increased from 0.3 keV (observation n$^{o}$14) to $\sim$1 keV (observations n$^{o}$15$\div$16), while its normalization decreased significantly\footnotemark\footnotetext[3]{In the {\tt MDBB} model~\citep{Mitsuda} the square root of the normalization constant is proportional to the apparent inner radius of the truncated disc. However, when the high-energy behaviour of the spectrum is modeled with a power law component, the evolution of the disc inner radius can be significantly underestimated~\citep[see e.g.][p. 28-29]{Done}.}. \begin{figure} \includegraphics[angle=-90, scale=0.3] {capitanio11_fig2.ps} \caption{{\it Swift}/XRT and {\it INTEGRAL}/IBIS joint unfolded spectrum at the beginning of the outburst. The source presents a typical LHS spectrum (observation n$^{o}$3 in Table~\ref{parameters}).} \label{LHS} \end{figure} \begin{figure} \includegraphics[angle=-90,scale=0.3]{capitanio11_fig3.ps} \includegraphics[angle=-90, scale=0.3] {capitanio11_fig4.ps} \caption{Two {\it Swift}/XRT and {\it INTEGRAL}/IBIS joint intermediate spectra during the transition from the Low Hard State (LHS) to the High Soft State (HSS). The two spectra were collected from data separated by two days. Top spectrum: observation n$^{o}$13 in Table~\ref{parameters}. Bottom spectrum: observation n$^{o}$14 in Table~\ref{parameters}.} \label{HIS} \end{figure} At the end of the transition to the soft state (observation n$^{o}$16), the disc temperature reached a value of about 1 keV, while the power law photon index reached $\sim$2.1, with no cutoff detectable up to about 200 keV (see Table~\ref{parameters} for details). The fractional {\it rms} amplitude of the X-ray emission from IGR~J17091$-$3624 as measured by XRT decreased from the previous values (25$\div$30\%) to about 4$\div$5\% (see Figure~\ref{RMS}).
Thus, as also reported by~\citet{Atel3203}, the source was probably in the HSS. In the following 65 days (until observation n$^{o}$42) the spectral characteristics of the source showed no significant variability. Figure~\ref{HIS2} shows the unfolded spectrum of IGR~J17091$-$3624 after the transition (observation n$^{o}$33). The fit to these data was obtained with an absorbed {\tt MDBB} plus a simple power law component. Neither a Compton reflection component from the disc surface nor an iron line model was required by the data, even though these components are usually expected to be very strong in the canonical soft state of BH binaries~\citep{Gierlinsky}. On MJD=55655.8 (observation n$^{o}$34) a short flare, reaching a peak flux of 3$\times$10$^{-9}$~erg~cm$^{-2}$~s$^{-1}$ (2-10 keV), was detected. No significant changes in the spectral properties of the source were detected during this event. \begin{figure} \includegraphics[angle=-90, scale=0.3] {capitanio11_fig5.ps} \caption{{\it Swift}/XRT, {\it INTEGRAL}/JEM-X2, and {\it INTEGRAL}/IBIS unfolded spectra of the IGR~J17091$-$3624 soft state (see Section~\ref{disc}). The fit is an absorbed {\tt MDBB} plus a power law. No reflection component is needed in the fit (the spectral parameter values are reported in Table~\ref{parameters}, observation n$^{o}$33).} \label{HIS2} \end{figure} \subsection{The appearance of the ``heartbeat''} \label{heartbeat_p} Figure~\ref{RMS} shows the fractional {\it rms} amplitude as a function of the hardness ratio, hereafter HR\footnotemark\footnotetext[4]{We define the hardness ratio as the ratio of the counts in the 4-10 keV energy band to those in the 0.3-4 keV energy band in each XRT observation.}. As mentioned above, during the transition from the hard to the soft state, the fractional {\it rms} and the HR decreased as expected for a typical transient BH entering the HSS~\citep{Fender}.
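The fractional {\it rms} estimate described in Section~\ref{data} (1\,s bins, 140\,s segments, subtraction of the expected white noise, averaging over segments) can be sketched as follows. This is a simplified illustration, not the {\it ad hoc} tool used in the analysis: in particular, the Poisson white-noise variance of a count-rate light curve is assumed here to be mean/dt:

```python
import math

def fractional_rms(rates, seg_len=140, dt=1.0):
    """Fractional rms of a count-rate light curve (ct/s, dt-s bins).

    Computed per seg_len-bin segment after subtracting the expected
    Poisson (white) noise variance, then averaged over segments.
    Returns (mean_rms, std_rms) over the segments.
    """
    vals = []
    for i in range(0, len(rates) - seg_len + 1, seg_len):
        seg = rates[i:i + seg_len]
        mean = sum(seg) / seg_len
        var = sum((r - mean) ** 2 for r in seg) / (seg_len - 1)
        noise = mean / dt  # assumed Poisson variance of a rate bin
        excess = max(var - noise, 0.0)
        vals.append(math.sqrt(excess) / mean)
    n = len(vals)
    avg = sum(vals) / n
    std = math.sqrt(sum((v - avg) ** 2 for v in vals) / (n - 1)) if n > 1 else 0.0
    return avg, std
```

With 140\,s segments of 1\,s bins, the variability sampled this way spans 0.007-0.5 Hz, matching the frequency range quoted in Section~\ref{data}.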
\begin{figure*} \centering \includegraphics[angle=0,scale=0.5]{capitanio11_fig6.ps} \caption{Hardness-rms diagram of each XRT pointing of the IGR~J17091$-$3624 outburst. For the observations with more than one segment only the first one has been considered. For the usage of the {\it rms} as a tracer of the different accretion regimes see e.g.~\citet{Munoz-D} and~\citet{Cap3}. For readability, we do not show the hardness error bars, which are instead reported in Figure~\ref{HID}.} \label{RMS} \end{figure*} However, from observation n$^{o}$26 the fractional rms amplitude moved away from the expected values and started to increase and decrease rapidly in a chaotic fashion (see Figure~\ref{RMS}). The rapid increases correspond to the observations in which the quasi-periodic flare-like events are detected in the light curves (the ``heartbeat'', in analogy with GRS~1915$+$105; see also Section~\ref{intro}). As an example, Figure~\ref{hbeats} shows a zoom of the light curve of one of the XRT observations in which the ``heartbeat'' is detected. \begin{figure} \centering \includegraphics[angle=-90,scale=0.28]{capitanio11_fig7.ps} \caption{Zoom of the XRT count rate of observation n$^{o}$28 in Table~\ref{parameters}. The time bin is 1\,s and the start time is MJD=55640.5.} \label{hbeats} \end{figure} The ``heartbeat'' oscillations vary in intensity and in hardness; in some observations they are not detected at all (in these cases lower values of the fractional rms amplitude are measured). No significant differences are observed between the spectra of the individual XRT observations with and without the ``heartbeat''. We also observed that the flare-like events lose coherence and change their period with time. Figure~\ref{periods} shows the evolution of the ``heartbeat'' period as a function of time. This behaviour is consistent with what was observed with {\it RXTE}~\citep{Atel3299,Altamirano}.
The two panels of Figure~\ref{periods2} show the ``heartbeat'' period as a function of hardness and XRT count rate, respectively. No evident correlation of the period of the flare-like events with either the count rate or the HR was found. The only peculiarity is the presence of a sort of ``forbidden zone'' in the possible period values (from $\sim$40\,s to $\sim$65\,s, Figures~\ref{periods},~\ref{periods2}). For a detailed discussion of the different ``heartbeat'' states of IGR~J17091$-$3624 see~\citet{Altamirano}. \begin{figure} \includegraphics[angle=0,scale=0.2]{capitanio11_fig8.ps} \caption{``Heartbeat'' period versus time. The dashed segments represent the three different groups of observations discussed in the text.} \label{periods} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,scale=0.2]{capitanio11_fig9.ps} \hspace{5cm} \caption{Top panel: hardness versus ``heartbeat'' period. Bottom panel: XRT count rate versus ``heartbeat'' period.} \label{periods2} \end{center} \end{figure} No significant detection of the ``heartbeat'' was found in the IBIS light curve because of the faintness of the source in the hard X-ray domain (20--200\,keV) and the relatively poor statistics. After MJD$\sim$55690 (observations n$^{o}$43-44), the ``heartbeat'' was no longer detected and, at the same time, the flux in the 15--50\,keV energy band started to increase again (see the BAT light curve in Figure~\ref{xrtlctot}). The spectral analysis of the observations collected during this period showed that the inner temperature of the {\tt MDBB} component decreased to $\sim$1~keV, and a power-law component was also required in order to obtain an acceptable fit of the XRT spectra. In the previous observations, a power-law component in addition to the {\tt MDBB} was required only when XRT and IBIS data were fitted simultaneously.
Between observations n$^{o}$37-41 the {\it INTEGRAL} data were unavailable, and thus we could not constrain the properties of the source emission in the hard X-ray domain. On MJD=55705.6 (observation n$^{o}$49), the 15--50\,keV light curve started to decrease again. Correspondingly, the soft XRT light curve increased significantly (see Figure~\ref{xrtlctot}) and the XRT spectra again reached approximately the same shape observed during the previously detected soft state. On the same date, MJD=55705.6, a second group of recurrent flare-like events appeared in the light curves. At this time the flux variation of the flare events was less pronounced and less coherent, while the periods spanned approximately the same range as in the previous group of events (see Figure~\ref{periods}). As Figure~\ref{xrtlctot} shows, from MJD$\sim$55730 until 55770 there was an increase of the XRT flux together with a sharp hardening. The consequences of the hardening of the XRT spectra are an increase of the inner disc temperature and a decrease of the normalization constant of the {\tt MDBB} model, NORM, which reached values of about $\sim$18 (see Table~\ref{parameters}). In particular, NORM is proportional to the square of the apparent inner disc radius and to cos\,{\it i}, where {\it i} is the inclination angle of the disc~\citep{Mitsuda}.\footnotemark\footnotetext[5]{The connection between the apparent inner disc radius and the inner radius itself is reported by~\citet{Kubota}.} Thus, in order to obtain a plausible inner disc radius, cos\,{\it i} should be very small. Simultaneously with the spectral hardening, the XRT light curves and the corresponding power spectra clearly showed that a third group of recurrent flare-like events started, with a remarkably decreased period (see Figure~\ref{periods}).
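The argument about cos\,{\it i} can be made quantitative by inverting the {\tt diskbb} normalization. A sketch assuming the {\tt XSPEC} convention NORM = (R$_{in}$/D$_{10}$)$^{2}$ cos\,{\it i}, with R$_{in}$ in km and D$_{10}$ the distance in units of 10 kpc; the distance and inclinations below are illustrative assumptions, not measured values:

```python
import math

def diskbb_rin_km(norm, distance_kpc, incl_deg):
    """Apparent inner disc radius (km) from the XSPEC diskbb normalization,
    NORM = (R_in / D_10)^2 * cos(i), with D_10 = distance / 10 kpc."""
    d10 = distance_kpc / 10.0
    return d10 * math.sqrt(norm / math.cos(math.radians(incl_deg)))

# With NORM ~ 18 and an assumed 10 kpc distance, moderate inclinations
# give only a few km (incl 60 deg -> ~6 km); a plausible inner radius
# requires a nearly edge-on disc, i.e. a very small cos(i).
```

This illustrates why the NORM$\sim$18 values in Table~\ref{parameters} imply a very small cos\,{\it i} (keeping in mind the caveats on the apparent radius noted in the footnotes).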
As an example, the five panels of Figure~\ref{35-17flares} show the XRT power spectra evolution from MJD=55737.5 to MJD=55759.3 (observations n$^{o}$60$\div$64). This time interval corresponds to the reappearance of the flare-like events: at MJD=55737.3 (observation n$^{o}$60) there were no flare-like events and the power spectrum presented a power-law-like behaviour (panel {\it a}). On MJD=55741.6 the flare-like events started again and a prominent and broad feature appeared in the power spectrum shape (panels {\it b} and {\it c}). The frequency of this feature changed with time from $\sim$0.72 Hz to $\sim$0.22 Hz (panels {\it d} and {\it e}). The ``heartbeat'' is detected throughout the final part of the {\it Swift} campaign, with periods that span from about 3\,s to 30\,s. The energy spectra of each single XRT observation were fitted with the same model as before (absorbed multicolor disc black body plus a power law component). However, after observation n$^{o}$65, the inner temperature of the {\tt MDBB} component decreased from $\sim$1.5 keV to $\sim$1.3 keV (see Table~\ref{parameters} for details). The {\it INTEGRAL} observations, performed during revolution 1078 (MJD=55785.0), showed that the fit of the hard part of the spectrum is consistent with a simple power law component with a photon index of $\Gamma$=2.3$\pm0.2$ (see Figure~\ref{newspe}). We also note that during the periods in which IGR~J17091$-$3624 displayed evidence of the ``heartbeat'' phenomenon, its spectral evolution remained trapped in the top left corner of the hardness-intensity diagram (hereafter HID; see Figure~\ref{HID}) and no longer followed the canonical path through the different spectral states expected for a BHC in outburst (the so-called q-track).
\begin{figure*} \centering \subfigure[id: 00035096016] {\includegraphics[angle=-90, scale=0.3] {capitanio11_fig10.ps}} \subfigure[id: 00035096017] {\includegraphics[angle=-90, scale=0.3] {capitanio11_fig11.ps}}\\ \subfigure[id: 00035096018] {\includegraphics[angle=-90, scale=0.3] {capitanio11_fig12.ps}} \subfigure[id: 00035096019] {\includegraphics[angle=-90, scale=0.3] {capitanio11_fig13.ps}}\\ \subfigure[id: 00035096020] {\includegraphics[angle=-90, scale=0.3] {capitanio11_fig14.ps}} \caption{XRT power spectra evolution of five observations (binned at 1\,s), from MJD=55737.5 to MJD=55759.3 (observations n$^{o}$60$\div$64), corresponding to the reappearance of the flare-like events in the last part of the XRT campaign of IGR~J17091$-$3624.} \label{35-17flares} \end{figure*} \begin{figure} \includegraphics[angle=-90, scale=0.3] {capitanio11_fig15.ps} \caption{{\it INTEGRAL}/JEM-X2 and {\it INTEGRAL}/IBIS averaged spectrum of IGR~J17091$-$3624 in the soft state during revolution 1078 (MJD=55785.0). The fit is an absorbed {\tt MDBB} plus a power law. The spectral parameters are reported in Table~\ref{parameters}.} \label{newspe} \end{figure} \begin{figure*} \includegraphics[angle=0, scale=0.6] {capitanio11_fig16.ps} \caption{Hardness-intensity diagram (HID) of all the XRT 2011 outburst observations of IGR~J17091$-$3624. For the observations with more than one segment only the first one has been considered.} \label{HID} \end{figure*} \subsection{Spectra from the ``heartbeat''} \label{heartbeat_p2} In order to investigate the origin of the changes in the hardness ratio during the ``heartbeat'', we extracted XRT spectra in the time intervals corresponding to the highest ($>$60 ct/s) and lowest ($<$30 ct/s) count rates of the source during the flaring activity.
For these data, we performed a rate-resolved analysis, adding up the time intervals corresponding to the peaks and to the minima of the flares in each observation (note, however, that the hardening of the different peaks was not constant; see for example the HR behaviour in Figure~\ref{hbeats}). Because of the periodicity of the light curve, the rate-resolved analysis overlaps with a phase-resolved analysis. A fit to the spectra was obtained by using an absorbed {\tt MDBB} component. The spectral parameters at the highest count rates indicated a higher inner disc temperature and a hint of a smaller inner disc radius (see Table~\ref{tab_peak} for details) than what was measured at the lower count rates. This behaviour is more evident in some observations of the first group of data showing recurrent flare-like events (between MJD$\sim$55630 and MJD$\sim$55690), where the flux variation during the flares was more pronounced. Unfortunately, due to the low data statistics, only in a few observations was it possible to constrain the {\tt MDBB} normalization constant with enough confidence. In the second group (from MJD$\sim$55700 to MJD$\sim$55730) the changes in the HR with the source count rate and the coherence of the ``heartbeat'' oscillation are less evident. We report in Table~\ref{tab_peak} the spectral parameters of five representative XRT observations selected at different time periods. The N$_{H}$ is fixed to be the same for the different phases of the same observation. The unfolded phase-resolved spectra obtained for the XRT observation n$^{o}$30 (MJD=55640.5) are shown in Figure~\ref{spepick}. We found evidence that the flares are due to an oscillation of the inner disc boundary (Table~\ref{tab_peak}): at the peak of the flare the {\tt MDBB} temperature (radius) is higher (smaller), with the disc approaching the BH event horizon. The opposite behaviour is observed during the minima of the flare. 
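The inner-radius argument above relies on the standard XSPEC diskbb convention, Norm = (R$_{in}$[km]/(D/10 kpc))$^{2}\cos i$. The sketch below inverts this relation; since the distance and inclination of IGR~J17091$-$3624 are unknown, the values used here (11 kpc, 60 degrees) and the two normalizations (52 and 63, of the order of those in the table) are assumptions for illustration only.

```python
import math

def diskbb_inner_radius_km(norm, distance_kpc, incl_deg):
    """Apparent inner-disc radius from the XSPEC diskbb normalization,
    Norm = (R_in[km] / (D / 10 kpc))**2 * cos(i), solved for R_in."""
    d10 = distance_kpc / 10.0
    return math.sqrt(norm / math.cos(math.radians(incl_deg))) * d10

# Assumed distance/inclination; normalizations at flare peak and minimum.
r_peak = diskbb_inner_radius_km(52.0, 11.0, 60.0)
r_min = diskbb_inner_radius_km(63.0, 11.0, 60.0)
print(f"apparent R_in ~ {r_peak:.1f} km at the peak, {r_min:.1f} km at the minimum")
```

A lower normalization at the flare peak thus translates into a smaller apparent inner radius, in line with the oscillation of the inner disc boundary discussed above.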
This is similar to what has been observed in the case of GRS~1915$+$105~\citep{Neilsen}. The lower X-ray flux of IGR~J17091$-$3624 with respect to GRS~1915$+$105, however, does not allow us to study the ``heartbeat'' in the same detail. Theoretical studies suggest that this phenomenon is due to the Lightman-Eardley instability, a limit cycle in the inner accretion disc dominated by radiation pressure~\citep{Lightman, Nayakshin, Szuszkiewicz}. According to this interpretation, the inner part of the disc empties and refills on a timescale of seconds~\citep{Belloni97}. \begin{figure} \includegraphics[angle=-90,scale=0.3]{capitanio11_fig17.ps} \caption{Count-rate resolved spectra of observation 00031921030. The upper spectrum was extracted during time intervals corresponding to the peaks of the flare-like events observed in this observation. The lower spectrum corresponds to the time intervals of the flares where the source count rate was at a minimum. The two spectra were fitted together with an absorbed {\tt MDBB} model (we constrained the N$_{H}$ to be the same for the two spectra and let the other parameters vary independently).} \label{spepick} \end{figure} \begin{table*} \begin{center} \caption{Spectral parameters of the different phases of five XRT observations: N is the number of the XRT observation as in Table~\ref{parameters}; {\it H}: maximum count-rate intervals ($>$60 ct/s); {\it L}: minimum count-rate intervals ($<$30 ct/s).} \label{tab_peak} \leavevmode \begin{tabular}{lccccccc} N & ID & Phase & T$_{in}$ & NORM & F$_{(0.1-10keV)}$ &$\chi^{2}_{red.}$ & d.o.f.\\ -& - & - & keV &{\it diskbb}& ($\times$10$^{-9}$erg~cm$^{-2}$s$^{-1}$) & - & -\\ \hline 26& 00031921028 & {\it H} & 1.4$^{+0.1}_{-0.1}$&69$^{+12 }_{-10}$ & 6 &1.04& 67\\ 26& 00031921028 & {\it L} & 1.1$^{+0.1}_{-0.1}$&81$^{+27 }_{-21}$& 2 &1.17& 24\\ \hline 28& 00031921030 & {\it H} & 1.49$^{+0.03}_{-0.03}$ &52$^{+4 }_{-4}$& 6 &0.99& 212\\ 28& 00031921030 & {\it L} & 1.10 
$^{+0.03}_{-0.03}$ &63$^{+8 }_{-7}$& 2 &1.02& 83\\ \hline 38& 00031921042 & {\it H} & 1.6$^{+0.1}_{-0.1}$ &37 $^{+5 }_{-4}$& 5 &0.88& 112\\ 38& 00031921042 & {\it L} & 1.00$^{+0.02}_{-0.02}$ &83$^{+8 }_{-7}$& 2 &0.98& 128\\ \hline 51& 00035096002 &{\it H}& 1.4$^{+0.1}_{-0.1}$ & 52$^{+12 }_{-10}$ & 4 & 0.85 &42\\ 51& 00035096002 &{\it L} & 1.18$^{+0.03}_{-0.03}$ & 46$^{+5 }_{-4}$&2& 1.00& 126\\ \hline 54& 00035096005 &{\it H} & 1.5$^{+0.1}_{-0.1}$ & 38$^{+8 }_{-7}$ & 4 & 1.3 &64\\ 54& 00035096005 &{\it L} & 1.23$^{+0.03}_{-0.03}$ & 40$^{+3 }_{-3}$&2& 1.10& 185\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Reflection component} \label{reflection} In order to investigate the presence of a Compton-reflection component and of the iron line in the spectra of IGR~J17091$-$3624, we used the XRT, JEM-X and IBIS joint spectra shown in Figure~\ref{newspe}. In this case the spectral parameters revealed that IGR~J17091$-$3624 is in the soft state (observation n$^{o}$33), when the highest contribution from the reflection component is expected~\citep[see e.g.][and references therein]{Ross}. The model used to fit the data is an absorbed {\tt MDBB} plus an exponentially cut-off power-law spectrum reflected by neutral material~\citep[{\it pexrav} in XSPEC;][]{Magdziarz}. Considering the distance of the source estimated by~\citet{Pahari} and~\citet{Rodriguez}, we also took into account the hypothesis that the source could belong to the Galactic halo and thus have a different metallicity with respect to the sources in the Galactic bulge, where LMXBs are normally concentrated~\citep{Grimm}. No significant changes in the spectral fits were observed by leaving the metallicity of the reflecting medium free to vary. We thus assumed two different values of the metallicity, i.e. 
the solar one (the source belongs to the Galactic bulge, Z/Z$_{\odot}$ = 1) and Z/Z$_{\odot}$ = 0.13, as reported by~\citet{Frontera} for XTE J1118+480, a BH binary that lies at very high Galactic latitudes. In both cases the estimated upper limit on the reflection component was R=0.1, and the F-test probability indicated that there is no clear evidence for a significant improvement in the $\chi^{2}$ by adding this component (the F-test probability in the two cases was 7\% and 2\%, corresponding to a detection significance of $<$2.0$\sigma$ and $<$2.5$\sigma$, respectively). {We also estimated an upper limit on the normalization of the iron line by fixing the centroid of the line at 6.7~keV. We assumed a broad line with $\sigma$=0.7 keV, as in the case of GRS~1915+105~\citep[see e.g.][and references therein]{Martocchia}. The obtained upper limit on the equivalent width is EQ$<$0.9~keV.} \section{Discussion} \label{disc} All the outbursts of IGR~J17091$-$3624 observed before 2011 were fainter and more poorly observed than the last one. However, within the limits of the instruments' capabilities, the source displayed the typical spectral and temporal evolution~\citep{Cap,Cap2} expected from a canonical BHC \citep[for details on the transient BHC outburst evolution see e.g.,][]{Fender}. The ``heartbeat'' phenomenon appeared only during the last 2011 outburst. Indeed, using all the available archival XRT observations in the direction of IGR~J17091$-$3624, we verified that no ``heartbeat'' was visible during the previous outbursts of the source. We summarize here the initial evolution phases of the outburst that occurred in 2011. 
The source underwent a transition from the LHS to the HSS, moving from the bottom right corner of the HID to the top left corner (Figure~\ref{HID}, observations n$^{o}$1$\div$15): \begin{itemize} \item during this transition, the source reached the intermediate states and the radio flare reported by~\citet{Rodriguez} should be the signature of the transition from the HIMS to the SIMS~\citep{Fender}; \item the {\it rms} amplitude, starting from values of about $\sim$30\% in the LHS, decreased significantly, reaching values that span from 6\% to 2\% (see Figure~\ref{RMS} and column n$^{o}$6 in Table~\ref{parameters}); \item the spectrum became softer, with the presence of a prominent disc blackbody component (starting from observation n$^{o}$15) and the high-energy cutoff no longer detectable up to 200 keV. \end{itemize} The source remained in the HSS for about 10 days (from MJD=55623.5 till MJD=55633.3). Starting from MJD=55635 the source no longer followed the standard evolution of a transient BHC in outburst: the properties of the X-ray spectra in each observation showed no significant variability, while the source displayed a sudden atypical timing variability in the form of flare-like events occurring at a 33\,s period (``heartbeat''). { The X-ray emission at the peak of these flares is typically harder than the average source emission (see the third panel of Figure~\ref{hbeats}). Starting from MJD=55692 we measured a progressive decrease of the {\tt MDBB} inner temperature with a corresponding hardening of the source emission. At this time the flare-like events were no longer visible in the light curve. The hardening continued uninterrupted for about two days, then the inner temperature of the disc started to increase again, leading to a clear increase in the soft X-ray flux and a decrease of the hard X-ray emission. 
At this epoch, the ``heartbeat'' became visible again.} The last part of the data analysed presented short-period oscillations (between 3\,s and 30\,s) and also a particularly hot inner disc temperature with a very small {\tt MDBB} normalization constant, which corresponds to a small apparent inner radius. Between MJD=55740 and MJD=55760, the 4-10 keV XRT flux increased significantly (a factor of 60\%). The peak in the 4-10 keV flux (see Figure~\ref{xrtlctot}) corresponds to a peak in the inner disc temperature (T$_{in}\sim$1.7 keV on MJD=55759.3). The period of the ``heartbeat'' changed with time (Figure~\ref{periods}) and seems to have a decreasing trend. \subsection{Comparison with the BH binary GRS~1915$+$105} \label{1915} As reported by~\citet{Altamirano} and by~\citet{Pahari}, the behaviour of the source resembles what is observed from GRS~1915$+$105 in the various flaring states. Thus the principal common characteristic between these two sources is the presence of pseudo-periodic flare-like events in the light curve, i.e. the so-called ``heartbeat''. The HR (bottom panel of Figure~\ref{hbeats}) of IGR~J17091$-$3624 is similar to the GRS~1915$+$105 one, in the sense that in both sources the modulation of the light curve is also imprinted in the HR~\citep{Neilsen}. However, in the GRS~1915$+$105 case the hardness variation seems more pronounced \citep[see for example][]{Naik}. Our phase-resolved energy spectra of the XRT data revealed that the hardening of the source X-ray emission at the peak and at the lower part of each flare is similar to what was measured in the case of GRS~1915$+$105~\citep[see e.g.][]{Mineo, Belloni97} and thus is probably due to the same physical phenomenon~\citep{Lightman}. The period of the ``heartbeat'' also seems to vary with time over the same range of values for the two sources, even though in GRS~1915$+$105 the period amplitude gets larger on long time scales. This does not seem to be the case for IGR~J17091$-$3624. 
Indeed, as shown in Figure~\ref{periods}, the period variation with time seems to decrease and, moreover, in the third group of observations (from MJD=55750 until the end) it reaches values of the order of a few seconds ($\sim$3$\div$5\,s). These values were not observed in GRS~1915$+$105~\citep{Neilsen}. Similarly to GRS~1915$+$105, we also measured particularly hot inner disc temperatures for IGR~J17091$-$3624 \citep[in the case of GRS~1915$+$105 the temperature can reach even higher values; see e.g.,][]{Belloni97,Muno, Fender2}~\footnotemark\footnotetext[6]{ The inner disc radius values reported for GRS~1915+105 by~\citet{Muno}, related to inner temperatures greater than 1.6 keV, are too small to be associated with the ISCO for any reasonable black hole mass. Even if the hard part of the spectrum, modeled using a power law, could underestimate the inner disc radius~\citep{Done}, it is not possible to exclude that, in these cases, the accretion geometry could be different from the one predicted by the {\tt MDBB}. However, this should not be the case for IGR J17091-3624. In fact, the spectral parameters reported in our analysis are not as extreme as the ones reported by \citet{Muno} for GRS~1915+105.}. This property, together with a small inner radius of the disc blackbody spectrum in X-ray binaries, has been directly associated with high values of the BH spin~\citep{Zhang,Devis}. Besides all these similarities between GRS~1915$+$105 and IGR~J17091$-$3624, a particularly striking difference is the X-ray flux intensity during the outbursts. This fact cannot be easily explained because, unlike GRS~1915$+$105, for IGR~J17091$-$3624 we do not have estimates of the distance, the inclination angle, the BH mass and spin, or the properties of the companion star. Some results on the optical and NIR counterpart of IGR~J17091$-$3624 have been reported by~\citet{Atel3150}. 
\citet{Chaty}, on the basis of optical and NIR photometric and spectroscopic studies of two possible counterparts of the source, suggested that the source should belong to the Galactic bulge. However,~\citet{Rodriguez} recently estimated a lower limit on the source distance from its hard-to-soft transition luminosity, concluding that, if the transition occurred at a luminosity spanning from 4\% to 10\% of the Eddington luminosity (assuming a BH mass of 10 M$_{\odot}$), IGR~J17091$-$3624 lies farther away than the Galactic bulge, at a distance that spans from about 11\,kpc up to 17\,kpc. Moreover, \citet{Pahari}, using a different method based on QPOs, estimated an even larger distance of 20\,kpc and a mass range that spans from 8M$_{\odot}$ to 11.4M$_{\odot}$. Assuming a distance range of 11-17 kpc, the bolometric luminosities of IGR~J17091$-$3624 estimated from the observation displaying the ``heartbeat'' with the highest flux would be (3-7)$\times$ 10$^{37}$ erg~s$^{-1}$, which translates into L$\sim$(3-6)\% L$_{Edd}$. However, considering the distance and the BH mass range supposed by~\citet{Pahari}, these luminosities correspond to between 1\% and 8\% of L$_{Edd}$. Since the flare-like events should occur in the Eddington-limit regime~\citep[see e.g.][and references therein]{Nayakshin,Neilsen}, if we consider the values reported above, we conclude that the faintness of IGR~J17091$-$3624 should not be due only to the source distance. For this reason,~\citet{Altamirano} supposed that the distance of the source could be even larger than 20\,kpc, otherwise the BH mass should be extremely small (less than 3$M_{\odot}$). Other peculiar differences of IGR~J17091$-$3624 with respect to GRS~1915$+$105 are the lack of detection of the Compton-reflection component and the extremely low apparent inner disc radius (see Section~\ref{heartbeat_p}). 
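The quoted Eddington fractions follow from the usual scaling L$_{Edd}\approx$ 1.26$\times$10$^{38}$ (M/M$_{\odot}$) erg s$^{-1}$. The short check below uses the luminosity range from the text and the assumed 10 M$_{\odot}$ black-hole mass, and reproduces values close to the quoted (3-6)\% range.

```python
def eddington_fraction(lum_erg_s, mass_msun):
    """L / L_Edd with L_Edd ~ 1.26e38 (M / Msun) erg/s."""
    return lum_erg_s / (1.26e38 * mass_msun)

# (3-7)e37 erg/s as in the text, assumed BH mass of 10 Msun.
lo = eddington_fraction(3e37, 10.0)
hi = eddington_fraction(7e37, 10.0)
print(f"L/L_Edd between {100.0*lo:.1f}% and {100.0*hi:.1f}%")
```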
Taking these results as a whole, we speculate that IGR~J17091$-$3624 could be a highly inclined system, and we suggest that the lower luminosity of IGR~J17091$-$3624 could also be ascribed to the spectral deformation effects due to the high inclination angle, as reported by~\citet{Cunningham}. Indeed, when a Kerr BH is seen at a high inclination angle (cos{\tt i} $<$0.25, {\tt i}$\sim$75 degrees), the source appears significantly fainter (by a factor that depends on the BH spin and mass but can reach about an order of magnitude) with respect to a system observed face-on. At odds with this hypothesis is the lack of detection of eclipses. Although we do not have any information about the system, such as the orbital period or the companion star mass, we can speculate that the lack of eclipses could be related to a small ratio between the companion star and BH masses. Using the Eggleton approximation~\citep{Eggleton}, we calculated the relation between the mass ratio, $q$ (M$_{star}$/M$_{BH}$), and the Roche lobe radius. Then, the maximum inclination angle, $i$, for which the Roche lobe does not cover the central engine along the observer's line of sight is extracted from simple geometrical considerations, giving: \begin{equation} \label{eq1} R_{L}/a < \cos{\tt i} \end{equation} where R$_{L}$ is the Roche lobe radius calculated with the Eggleton approximation, $a$ is the distance between the BH and the companion star, and $i$ is the inclination angle of the system. Plotting R$_{L}/a$ versus $q$~\citep{Eggleton} and considering equation~\ref{eq1}, we found that for $q<0.2$ the Roche lobe does not cover the central engine for inclination angles smaller than 75 degrees~\citep{Cunningham}. Moreover, the lack of information on the orbital period of the system hampers the search for the possible presence of partial eclipses via the usual light-curve folding techniques that increase the signal-to-noise ratio. 
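The geometrical condition of equation (1) can be verified numerically. The sketch below uses the Eggleton (1983) fit R$_{L}$/a = 0.49 q$^{2/3}$ / (0.6 q$^{2/3}$ + ln(1 + q$^{1/3}$)) and checks that for q = 0.2 the Roche-lobe fraction indeed stays below cos(75 degrees), as stated in the text.

```python
import math

def roche_lobe_fraction(q):
    """Eggleton (1983) approximation for R_L / a with q = M_star / M_BH."""
    q13 = q ** (1.0 / 3.0)
    return 0.49 * q13 ** 2 / (0.6 * q13 ** 2 + math.log(1.0 + q13))

# No eclipse of the central engine while R_L / a < cos(i)  (equation 1).
cos_i = math.cos(math.radians(75.0))
rl = roche_lobe_fraction(0.2)
print(f"q = 0.2: R_L/a = {rl:.3f} < cos(75 deg) = {cos_i:.3f} -> no eclipse")
```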
\section{Conclusions} \label{conclusions} The outcome of the observational campaign presented here suggests that IGR~J17091$-$3624 can no longer be considered a typical transient black hole~\citep{Fender}. After the transition from the hard to the soft state in 2011~\citep{Rodriguez}, the source did not follow the standard q-track in the HID diagram \citep[see e.g.][and references therein]{Homan,Homan2005} and, since March 2011, it has remained trapped in an oscillatory state, similar to what is observed during the flaring states of GRS~1915$+$105~\citep{Altamirano}. As mentioned above (see Section~\ref{1915}), the pseudo-periodic bursts in the light curve of GRS~1915$+$105 reach the Eddington luminosity and are believed to be related to disc oscillations. The physics that drives these inner disc oscillations is connected with both the local Eddington limit and the radiation pressure instability. If the ``heartbeat'' oscillations seen from IGR~J17091$-$3624 are interpreted as being due to the same mechanism as in GRS~1915$+$105~\citep[as also supposed by][]{Altamirano}, then the apparent ``faintness'' of IGR~J17091$-$3624 remains unexplained unless one supposes a very large distance or an extremely low BH mass~\citep{Altamirano}. In Section~\ref{1915} we noted that a reduction of the apparent luminosity by up to an order of magnitude can also be achieved if the system is seen nearly edge-on~\citep[for inclination angles $>$75 degrees,][]{Eggleton}. According to this idea, and considering also the L/L$_{Edd}$ ratio calculated for the different distance values, we can speculate that the source probably not only lies far from the Galactic bulge, in agreement with~\citet{Rodriguez}, but is observed at a high inclination angle as well. As also discussed in Section~\ref{1915}, this finding is not in contradiction with the lack of eclipses in the source light curve. 
In fact, if the companion star is small, eclipses can remain undetected even at a high inclination angle, as for example in the case of the BHC XTE J1118+480~\citep[see e.g.][]{Wagner, McClintock}. We note that at present we cannot exclude that the faintness of IGR~J17091$-$3624 is only due to a very large distance ($>$20 kpc) or to an extremely low BH mass ($<$ 3M$_{\odot}$), as suggested by \citet{Altamirano}. The large distance, unusual for low-mass X-ray binaries, which are generally concentrated in the Galactic bulge~\citep{Grimm}, could agree with the hypothesis reported by~\citet{Jonker} that the distances of LMXBs could be affected by a systematic error due to misclassification of the companion star. However, recent results reported by~\citet{King}, based on a {\it Chandra} observation campaign, support the hypothesis that IGR~J17091$-$3624 is observed at a high inclination angle. Future refined estimates of the distance and the BH mass of IGR~J17091$-$3624 might help us understand whether GRS~1915$+$105 and IGR~J17091$-$3624 are very similar objects simply observed at very different distances or inclination angles. We point out that the high inclination of the system is a possible scenario to explain the low luminosity of the source without invoking very large distances or extremely low BH masses that may challenge the Rhoades \& Ruffini limit~\citep{Ruffini}. Finally, we suggest that, as in the case of GRS~1915$+$105, IGR~J17091$-$3624 might also show a ``quasi-persistent'' outburst of the order of years. Thus the {\it INTEGRAL} and {\it Swift} observation campaign of the 2011 outburst probably caught the evolution of a transient BH in a persistent GRS~1915$+$105-like phase. \section*{Acknowledgments} FC, MDS, AP and GDC acknowledge financial support from the agreement ASI-INAF I/009/10/0. FC thanks Giorgio Matt and Piergiorgio Casella for useful scientific discussions. MDS and GDC acknowledge contribution by the grant PRIN-INAF 2009. We would like to thank N. 
Gehrels and the {\it Swift} Team for making {\it Swift} observations possible. A special thanks goes to E. Kuulkers and the {\it INTEGRAL} Galactic bulge monitoring program. {\it INTEGRAL} is an ESA project with instruments and science data centre funded by ESA member states especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain, Czech Republic and Poland, and with the participation of Russia and the USA.
1,2-Dibromoethane (also called ethylene dibromide or EDB) is a toxic bromine derivative of ethene with a mild, sweet odour. It is a colourless liquid that is poorly soluble in water. The compound occurs in small amounts in the ocean, where it is formed by algae. Nevertheless, the substance is harmful to the environment when large quantities enter the water. Synthesis 1,2-Dibromoethane is prepared by reacting ethene with dibromine: H2C=CH2 + Br2 -> C2H4Br2 Applications 1,2-Dibromoethane has an important application in organic synthesis in the formation of a Grignard reagent. There it activates the magnesium turnings and is itself converted to ethene. In the agricultural industry it is used as a pesticide on citrus fruits, vegetables and grains. In most cases, the use of 1,2-dibromoethane as a pesticide has been banned in the United States by the Environmental Protection Agency since 1984. It is also employed as an insecticide against termites and beetles. Toxicology and safety On contact with a hot surface or a flame, 1,2-dibromoethane decomposes, forming toxic and corrosive vapours of hydrogen bromide and dibromine. The compound decomposes slowly under the influence of light and moisture, forming corrosive hydrogen bromide. Moreover, 1,2-dibromoethane is highly reactive (risk of fire and explosion) in the presence of the following substances: aluminium powder, magnesium, sodium, potassium, calcium, strong bases and strong oxidising agents. External links Bromoalkane Flame retardant Insecticide Toxic substance Carcinogenic substance Environmentally hazardous substance
# Thread: find the points of the surface where the tangent plane is parallel to the xy-plane?

1. ## find the points of the surface where the tangent plane is parallel to the xy-plane?

hi

Question:
find the points on the surface xy + yz + zx - x - z^2 = 0
where the tangent plane is parallel to the xy-plane?

my solution:
tangent plane parallel to the xy-plane = normal line to the surface orthogonal to the xy-plane

since the normal line is orthogonal to the xy-plane, it intersects the xy-plane,
and the normal direction is the gradient of f.
All I will do is find the parametric equation of the normal line and put z = 0.

equation of the normal line:
x = x0 + (y+z-1) t
y = y0 + (x+z) t
z = z0 + (y+x-2z) t
putting z = 0:
-z0 = (y+x-2z) t
and then ??!!

Edit:
I saw a question like this, but it gave the point at which the normal line starts,
i.e. x0, y0 and z0 were known.

2. $f(x,y,z)=xy+yz+zx-x-z^{2}=0$

Compute the partials:

${\nabla}F(x,y,z)=(y+z-1)i+(x+z)j+(-2z+x+y)k$

Now, try to find those points where the tangent plane is horizontal.

The points where the derivatives equal 0.

Solve for x, y, and z.

3. y+z-1 = 0 ... (1)
x+z = 0 ... (2)
x + y - 2z = 0 ... (3)

from (1) ---> y = -z + 1
from (2) ---> x = -z
substituting these values in (3):
-z - z + 1 - 2z = 0
-4z + 1 = 0
z = 1/4
x = -1/4
y = 1 - (1/4) = 3/4
point is (-1/4, 3/4, 1/4)

is this right?

4. but there is a problem: in the exam he said "find the points", so will it be more than one point??

5. Oh sorry, it's the tangent plane, not the tangent line, in the title of the thread
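As a check added outside the original thread: setting all three partial derivatives to zero (posts 2-3) is the condition for a singular point of the surface, whereas a tangent plane parallel to the xy-plane only requires f_x = f_y = 0 together with the surface equation f = 0 (and f_z nonzero). That system gives two points, which answers the question in post 4; the point found in post 3 does not even lie on the surface.

```python
# f(x,y,z) = x*y + y*z + z*x - x - z^2; a horizontal tangent plane needs
# f_x = y + z - 1 = 0 and f_y = x + z = 0, i.e. y = 1 - z and x = -z.
# Substituting into f = 0 gives z - 2*z**2 = 0, so z = 0 or z = 1/2.

def f(x, y, z):
    return x*y + y*z + z*x - x - z**2

def grad(x, y, z):
    return (y + z - 1.0, x + z, x + y - 2.0*z)

points = [(0.0, 1.0, 0.0), (-0.5, 0.5, 0.5)]
for p in points:
    print(p, "f =", f(*p), "grad =", grad(*p))   # f = 0, grad = (0, 0, +-1)

print("thread's point:", f(-0.25, 0.75, 0.25))   # 0.125, not on the surface
```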
online than high-street stores, so competition is fierce and several deals are available. Signing up to the newsletters of internet sites that sell UGGs can keep you regularly informed about new products and deals, and will let you buy as soon as the deals go live. There are numerous websites that offer astounding discounts on boots such as UGGs; this may be because the company buys in bulk and can therefore offer better prices to its customers. and find the very best deal without spending too many hours on your computer? Perform a Google search for Cheap UGGs and browse the results; this is a good way to compare the best prices. and dodgy sites claiming to sell cheap authentic UGGs. Consequently, check the reviews on these websites for customer comments; these will inform you about their service as well as product authenticity. will be sold at a slashed price. Of course, be aware of reports and check the dealer's reviews first. sites are reliable and which are untrustworthy, given the popularity of cheap UGGs.
\section{Introduction} Numerical simulation represents a main tool for the investigation of dynamical systems in science, engineering and other fields of application. High-fidelity modelling is required to obtain detailed information on complex problems. However, the high-fidelity systems may have a huge number of state variables, which makes a numerical simulation expensive or even infeasible. Hence methods of model order reduction (MOR) are applied to decrease the dimensionality of the dynamical systems, see~\cite{antoulas,benner-mehrmann,schilders}. Yet the reduced system has to reproduce the quantities of interest sufficiently accurately. We consider linear systems of ordinary differential equations (ODEs), which are asymptotically stable. Projection-based MOR determines linear ODEs with a lower dimensionality. However, the reduced system may be unstable and thus useless. For unstable systems, some solutions become unbounded in the time domain. Furthermore, error bounds, which follow from the transfer functions in the frequency domain, are not available any more. Hence stability-preserving MOR methods are essential to generate appropriate reduced systems. The balanced truncation technique, see~\cite{gugercin-antoulas}, always produces stable reduced systems, while the computational effort is often relatively large. Krylov subspace techniques, see~\cite{freund}, are less expensive, whereas stability can easily be lost. Stability preservation for a Krylov subspace approach is achieved under special assumptions and methods in~\cite{ionescu}. A post-processing, which works on the poles of the transfer function, can restore stability, see~\cite{bai-freund}. We employ projection-based MOR of Galerkin type, where each scheme is defined by a single orthogonal projection matrix. Important methods are the one-sided Arnoldi algorithm and the proper orthogonal decomposition (POD), for example. 
A transformation of the system of ODEs guarantees the stability of any reduced system, see~\cite{castane-selga,prajna,pulch-arxiv}. This technique was also applied to a stochastic Galerkin projection in~\cite{pulch-augustin}, where the stability of larger systems than the original ODEs is ensured. In our MOR methods, the main effort consists in solving a single high-dimensional Lyapunov inequality, where an efficient numerical solution is critical. The Lyapunov inequality can be satisfied by the approximate solution of a high-dimensional Lyapunov equation. Therein, we make a simple but effective choice of the input matrix. We prove an error bound on the approximation, which is sufficient for achieving the Lyapunov inequality. The solution of the Lyapunov equation also represents a matrix-valued integral in the frequency domain. Phillips and Silveira~\cite{phillips-silveira} computed integrals of this type approximately by a quadrature rule. Quadrature methods were also considered for such integrals in~\cite{benner-schneider,breiten}. We use this approach to construct a stability-preserving MOR technique. Therein, not the solution of the Lyapunov equation itself is required, but rather an associated matrix-matrix product with a small number of columns. Now large sparse linear systems of algebraic equations have to be solved, where the linear dynamical system yields the coefficient matrices. Furthermore, we extend this stability-preserving MOR method to systems of differential-algebraic equations (DAEs). M\"uller~\cite{mueller} investigated a regularisation technique, which changes an asymptotically stable DAE system into an asymptotically stable ODE system. Now the stability-preserving MOR applies to this system of ODEs. In the case of DAEs with a strictly proper transfer function, we show that the additional regularisation error converges to zero as a function of a regularisation parameter. The paper is organised as follows. 
We introduce the considered MOR methods in Section~\ref{sec:problem-def}. The stability-preserving transformation and the Lyapunov equations are discussed in Section~\ref{sec:preservation}. We arrange the frequency domain integrals and analyse the usage of quadrature rules. In Section~\ref{sec:daes}, the stability-preserving approach is transferred to systems of DAEs. Finally, we present results of numerical experiments in Section~\ref{sec:examples}, where an ODE system and a DAE system are examined. \section{Model order reduction and stability} \label{sec:problem-def} Projection-based MOR of linear dynamical systems is closely related to stability properties, which are reviewed in this section. \subsection{Linear dynamical systems and stability} We consider linear time-invariant systems in the form \begin{equation} \label{linear-system} \begin{array}{rcl} E \dot{x}(t) & = & A x(t) + B u(t) \\ y(t) & = & C x(t) \\ \end{array} \end{equation} with state/inner variables $x : [0,\infty) \rightarrow \mathbbm{R}^n$, inputs $u : [0,\infty) \rightarrow \mathbbm{R}^{n_{\rm in}}$ and outputs $y : [0,\infty) \rightarrow \mathbbm{R}^{n_{\rm out}}$. The system includes constant matrices $A,E \in \mathbbm{R}^{n \times n}$, $B \in \mathbbm{R}^{n \times n_{\rm in}}$ and $C \in \mathbbm{R}^{n_{\rm out} \times n}$. If the mass matrix~$E$ is non-singular, then the system~(\ref{linear-system}) consists of ordinary differential equations (ODEs). If the mass matrix~$E$ is singular, then differential-algebraic equations (DAEs) are given. The pair $(E,A)$ is called a matrix pencil. We assume that the matrix pencil is regular, i.e., $\det ( \lambda E - A ) \neq 0$ for some $\lambda \in \mathbbm{C}$. ODEs always yield a regular matrix pencil. We add predetermined initial values $x(0)=x_0$, which are assumed to be consistent in the case of DAEs. 
In the frequency domain, a transfer function describes the input-output behaviour of the system~(\ref{linear-system}) completely, see~\cite{antoulas}. This transfer function $H : \mathbbm{C} \backslash \Sigma \rightarrow \mathbbm{C}^{n_{\rm out} \times n_{\rm in}}$ reads as \begin{equation} \label{transfer} H(s) = C ( s E - A )^{-1} B \qquad \mbox{for} \;\; s \in \mathbbm{C} \backslash \Sigma . \end{equation} The mapping~(\ref{transfer}) is a rational function with a finite set of poles~$\Sigma \subset \mathbbm{C}$. The magnitude of a transfer function can be characterised by norms in Hardy spaces. The $\mathscr{H}_2$-norm is defined by, see~\cite[p.~92]{shmaliy}, \begin{equation} \label{h2-norm} \left\| H \right\|_{\mathscr{H}_2} = \sqrt{ \frac{1}{2\pi} \int_{-\infty}^{+\infty} \left\| H({\rm i}\omega) \right\|_{\rm F}^2 \; {\rm d}\omega } \end{equation} with ${\rm i} = \sqrt{-1}$ and the Frobenius (matrix) norm $\| \cdot \|_{\rm F}$ provided that the integral exists. The stability issues of the system~(\ref{linear-system}) are independent of the definition of inputs or outputs. To discuss the stability, we recall some general properties of matrices in the following definitions. \begin{definition} \label{def:spectral} Let $A \in \mathbbm{R}^{n \times n}$ and $\lambda_1,\ldots,\lambda_n \in \mathbbm{C}$ be its eigenvalues. The {\em spectral abscissa} of the matrix~$A$ is the real number $$ \alpha (A) = \max \left\{ {\rm Re}(\lambda_1) , \ldots , {\rm Re} (\lambda_n) \right\} . $$ \end{definition} \begin{definition} \label{def:stable-pencil} A matrix pencil $(E,A)$ is called {\em stable}, if and only if each eigenvalue~$\lambda$ characterised by $\det (\lambda E - A) = 0$ has a strictly negative real part. \end{definition} \begin{definition} \label{def:stable-system} The linear dynamical system~(\ref{linear-system}) is {\em asymptotically stable} if and only if its associated matrix pencil $(E,A)$ is stable. 
\end{definition} In the case of a non-singular mass matrix, asymptotic stability of a system~(\ref{linear-system}) is equivalent to the property $\alpha (E^{-1}A) < 0$ of the spectral abscissa in Definition~\ref{def:spectral}. Concerning Definition~\ref{def:stable-pencil}, a regular matrix pencil exhibits a finite set of eigenvalues. Furthermore, Definition~\ref{def:stable-system} of asymptotic stability can also be found in~\cite[p.~376]{braun}. The asymptotic stability guarantees the existence of the transfer function~(\ref{transfer}) on the imaginary axis. The $\mathscr{H}_2$-norm~(\ref{h2-norm}) is always finite for asymptotically stable ODEs, whereas this norm may not exist in the case of (stable) DAEs. If all eigenvalues of the matrix pencil $(E,A)$ have non-positive real parts and an eigenvalue with real part zero appears, then Lyapunov stability may still be satisfied. We also consider this instance as a loss of stability, because the advantageous asymptotic stability is not valid any more. \subsection{Projection-based model order reduction} \label{sec:mor} We assume that the linear dynamical system~(\ref{linear-system}) exhibits a huge dimensionality~$n$. In such applications, the involved matrices~$A$ and~$E$ are typically sparse. The purpose of MOR is to decrease the complexity. An alternative linear dynamical system \begin{equation} \label{system-reduced} \begin{array}{rcl} \bar{E} \dot{\bar{x}}(t) & = & \bar{A} \bar{x}(t) + \bar{B} u(t) \\ \bar{y}(t) & = & \bar{C} \bar{x}(t) \\ \end{array} \end{equation} has to be constructed with state/inner variables $\bar{x} : [0,\infty) \rightarrow \mathbbm{R}^r$ and the matrices $\bar{A},\bar{E} \in \mathbbm{R}^{r \times r}$, $\bar{B} \in \mathbbm{R}^{r \times n_{\rm in}}$, $\bar{C} \in \mathbbm{R}^{n_{\rm out} \times r}$, where the dimension~$r$ is much smaller than~$n$. Initial values $\bar{x}(0)=\bar{x}_0$ are derived from the initial values $x(0) = x_0$.
Nevertheless, the output of~(\ref{system-reduced}) should be a good approximation to the output of~(\ref{linear-system}), i.e., $\bar{y}(t) \approx y(t)$ for all relevant times. The system~(\ref{system-reduced}) is called the reduced-order model (ROM) of the full-order model (FOM) given by~(\ref{linear-system}). The linear dynamical system~(\ref{system-reduced}) has its own transfer function $\bar{H} : \mathbbm{C} \backslash \bar{\Sigma} \rightarrow \mathbbm{C}^{n_{\rm out} \times n_{\rm in}}$ of the form~(\ref{transfer}). If both the original system~(\ref{linear-system}) and the reduced system~(\ref{system-reduced}) are asymptotically stable, then error bounds are available in the case of $x_0=0$ and $\bar{x}_0=0$. It holds that, see~\cite[p.~496]{benner-gugercin-willcox}, \begin{equation} \label{error-bound} \sup_{t \ge 0} \| y(t) - \bar{y}(t) \|_\infty \le \left\| H - \bar{H} \right\|_{\mathscr{H}_2} \| u \|_{\mathscr{L}_2[0,\infty)} \end{equation} with the $\mathscr{L}_2[0,\infty)$-norm \begin{equation} \label{l2-norm} \| u \|_{\mathscr{L}_2[0,\infty)} = \sqrt{ \int_0^{\infty} \| u(t) \|_2^2 \; {\rm d}t } \; , \end{equation} the $\mathscr{H}_2$-norm~(\ref{h2-norm}), the maximum (vector) norm $\| \cdot \|_\infty$ and the Euclidean (vector) norm $\| \cdot \|_2$. In projection-based MOR, see~\cite{antoulas}, each approach yields two projection matrices $V,W \in \mathbbm{R}^{n \times r}$ of full rank. We obtain the matrices of the ROM~(\ref{system-reduced}) by \begin{equation} \label{projected-matrices} \bar{A} = W^\top A V , \quad \bar{B} = W^\top B , \quad \bar{C} = C V , \quad \bar{E} = W^\top E V . \end{equation} The orthogonality $V^\top V = I_r$ and sometimes the biorthogonality $W^\top V = I_r$ are supposed with the identity matrix $I_r \in \mathbbm{R}^{r \times r}$. 
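The construction~(\ref{projected-matrices}) amounts to a few matrix products; the sketch below uses made-up full-order matrices and a random orthonormal~$V$ (a Galerkin-type projection with $W=V$) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, n_in, n_out = 8, 3, 1, 1

# Hypothetical full-order matrices, for illustration only.
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
E = np.eye(n)
B = rng.standard_normal((n, n_in))
C = rng.standard_normal((n_out, n))

# Any full-rank matrix with orthonormal columns serves as V; here via QR.
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
W = V   # Galerkin-type projection

# Reduced matrices according to (projected-matrices).
A_r = W.T @ A @ V
B_r = W.T @ B
C_r = C @ V
E_r = W.T @ E @ V

assert A_r.shape == (r, r) and E_r.shape == (r, r)
assert B_r.shape == (r, n_in) and C_r.shape == (n_out, r)
assert np.allclose(V.T @ V, np.eye(r))   # orthogonality V^T V = I_r
```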
Often the projection matrices result from the determination of subspaces, i.e., $$ \mathcal{V} = {\rm span}(V) \subset \mathbbm{R}^n \qquad \mbox{and} \qquad \mathcal{W} = {\rm span}(W) \subset \mathbbm{R}^n . $$ On the one hand, the original state/inner variables are approximated within the space~$\mathcal{V}$ by $x \approx V \bar{x}$. On the other hand, the residual \begin{equation} \label{residual} g(t) = E V \dot{\bar{x}}(t) - A V \bar{x}(t) - B u(t) \in \mathbbm{R}^n \end{equation} is kept small by the requirement $g(t) \perp \mathcal{W}$ and thus $W^\top g(t) = 0$ for all~$t$. \subsection{Galerkin-type methods} \label{sec:galerkin} A Galerkin-type projection~(\ref{projected-matrices}) is characterised by the property~$W=V$. Thus we have to determine just a suitable projection matrix~$V$. Examples of Galerkin-type MOR methods are: \begin{itemize} \item one-sided Arnoldi method, see~\cite{freund}, \item proper orthogonal decomposition (POD), see~\cite[p.~277]{antoulas}, \item multi-parameter moment matching as in~\cite{li-etal}, \item iterative improvement for the case of many outputs as in~\cite{freitas}, \item and others. \end{itemize} Moment matching methods identify an approximation of the transfer function~(\ref{transfer}) in the frequency domain. The one-sided Arnoldi scheme represents a Galerkin-type moment matching method. Alternatively, the POD technique employs information on a solution for a particular input in the time domain. We explain the one-sided Arnoldi method, because it is used for the numerical experiments in Section~\ref{sec:examples}. An expansion point $s_0 \in \mathbbm{C} \backslash \Sigma$ is chosen. The matrix \begin{equation} \label{matrix-moment} F = s_0 E - A \in \mathbbm{C}^{n \times n} \end{equation} is arranged, which includes the matrices of the linear dynamical system~(\ref{linear-system}). Let a single input ($n_{\rm in}=1$) be given without loss of generality. 
We define the matrix $G = F^{-1} E$ and the vector $z = F^{-1} B$. The Krylov subspaces belonging to the matrix and the vector read as \begin{equation} \label{krylov} \mathcal{K}_r (G,z) = {\rm span} \{ z,Gz,G^2z,\ldots,G^{r-1}z \} \subset \mathbbm{C}^n \end{equation} for $r \ge 1$. The Arnoldi algorithm computes an orthonormal basis of the subspace~(\ref{krylov}) by a specific orthogonalisation scheme. The basis is collected in a matrix $\hat{V} \in \mathbbm{C}^{n \times r}$. Hence it holds that $\hat{V}^{\htop} \hat{V} = I_r$ by construction. The projection matrix $V=\hat{V}$ becomes real-valued in the case of $s_0 \in \mathbbm{R} \backslash \Sigma$. Otherwise, the projection matrix~$V$ is obtained from a reorthogonalisation of the matrix $({\rm Re}(\hat{V}) , {\rm Im}(\hat{V})) \in \mathbbm{R}^{n \times 2r}$. This technique can be generalised straightforwardly to the case of several expansion points and multiple inputs. The computational effort of the one-sided Arnoldi method consists in two parts. Firstly, we compute an $LU$-decomposition of the high-dimensional matrix~(\ref{matrix-moment}). The sparsity of the matrices often allows for an efficient computation. Each matrix-vector multiplication $Gz'$ with some $z' \in \mathbbm{C}^n$ requires a solution of a linear system with coefficient matrix~(\ref{matrix-moment}). Thus $r$ forward and backward substitutions are performed to determine~(\ref{krylov}). Secondly, basic linear algebra operations yield the orthonormal basis similarly to the Gram-Schmidt orthogonalisation. In each MOR approach, the reduced system~(\ref{system-reduced}) is often useless if it is not at least Lyapunov stable. In particular, the error bound~(\ref{error-bound}) holds true only for asymptotically stable systems. Many moment matching techniques like Krylov subspace methods and POD do not guarantee a stable ROM.
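A minimal dense sketch of the one-sided Arnoldi procedure follows; the function name and the test matrices are invented for illustration, and a large sparse implementation would store an $LU$-decomposition of~$F$ and replace the calls to `np.linalg.solve` by forward and backward substitutions:

```python
import numpy as np

def one_sided_arnoldi(E, A, b, s0, r):
    """Orthonormal basis of the Krylov subspace K_r(G, z) with
    G = F^{-1} E, z = F^{-1} b and F = s0*E - A (one-sided Arnoldi).
    Dense sketch: a sparse implementation would factor F once and reuse
    the LU factors for every solve."""
    F = s0 * E - A
    n = len(b)
    V = np.zeros((n, r), dtype=complex)
    v = np.linalg.solve(F, b)                  # z = F^{-1} b
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, r):
        w = np.linalg.solve(F, E @ V[:, j-1])  # w = G v_{j-1}
        for i in range(j):                     # modified Gram-Schmidt
            w = w - (V[:, i].conj() @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(1)
n, r = 10, 4
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))
E = np.eye(n)
b = rng.standard_normal(n)

V = one_sided_arnoldi(E, A, b, s0=0.0, r=r)
assert np.allclose(V.conj().T @ V, np.eye(r))  # orthonormal basis
```

With the real expansion point $s_0 = 0$ the computed basis is real-valued up to round-off, in line with the remark above.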
\section{Stability preservation} \label{sec:preservation} We investigate linear dynamical systems~(\ref{linear-system}) consisting of ODEs in this section. \subsection{Stability and Lyapunov equations} We consider the Lyapunov inequality \begin{equation} \label{lyapunov-ineq} A^\top M E + E^\top M A < 0 \end{equation} with $A,E$ from the dynamical system~(\ref{linear-system}) and a (non-unique) solution $M \in \mathbbm{R}^{n \times n}$. This problem consists in finding a symmetric positive definite matrix~$M$ such that the left-hand side of~(\ref{lyapunov-ineq}) is negative definite. We can solve the problem by choosing any symmetric positive definite matrix $F \in \mathbbm{R}^{n \times n}$. It follows that the generalised Lyapunov equation, cf.~\cite{penzl}, \begin{equation} \label{lyapunov} A^\top M E + E^\top M A + F = 0 \end{equation} yields a unique symmetric positive definite solution~$M$, because the spectral abscissa from Definition~\ref{def:spectral} exhibits $\alpha(E^{-1}A)<0$. This solution~$M$ also satisfies the Lyapunov inequality~(\ref{lyapunov-ineq}). Direct methods of linear algebra compute the solution of~(\ref{lyapunov}) or its Cholesky factorisation, see~\cite{hammarling,penzl}. However, direct methods are excluded in our context, since their computational effort is $O(n^3)$. Approximate methods are available like projection methods and the alternating direction implicit (ADI) iteration, see~\cite{kramer,li-white,wolf}, for example. These methods often produce approximations $M \approx Z Z^\top$ with a low-rank factor $Z \in \mathbbm{R}^{n \times \ell}$ ($\ell \ll n$). Thus the approximation becomes a singular matrix, which makes the transformation dubious. Ill-conditioned reduced matrices arise sometimes as shown in~\cite{pulch-arxiv}. 
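For moderate dimensions, the generalised Lyapunov equation~(\ref{lyapunov}) is accessible to standard software after a reduction to a standard Lyapunov equation: with $\tilde{A} = E^{-1}A$, the matrix $X = E^\top M E$ satisfies $\tilde{A}^\top X + X \tilde{A} + F = 0$. A sketch with SciPy and small made-up stable test matrices (a dense illustration only, not the large sparse setting of the method):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 6
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable test matrix
E = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # non-singular mass matrix

At = np.linalg.solve(E, A)                   # E^{-1} A
assert max(np.linalg.eigvals(At).real) < 0   # asymptotic stability

# Solve At^T X + X At = -I, then recover M = E^{-T} X E^{-1}.
X = solve_continuous_lyapunov(At.T, -np.eye(n))
Einv = np.linalg.inv(E)
M = Einv.T @ X @ Einv
M = (M + M.T) / 2                            # symmetrise against round-off

# M solves the generalised Lyapunov equation with F = I_n ...
assert np.linalg.norm(A.T @ M @ E + E.T @ M @ A + np.eye(n)) < 1e-8
# ... hence it is positive definite and satisfies the Lyapunov inequality.
assert np.all(np.linalg.eigvalsh(M) > 0)
assert np.all(np.linalg.eigvalsh(A.T @ M @ E + E.T @ M @ A) < 0)
```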
Using the solution of the Lyapunov equation~(\ref{lyapunov}), we transform the ODEs~(\ref{linear-system}) into the equivalent system \begin{equation} \label{ode-trafo} E^\top M E \dot{x}(t) = E^\top M A x(t) + E^\top M B u(t) . \end{equation} This transformation operates only in the image space and not in the state space. Stability preservation is guaranteed for a Galerkin-type MOR of the equivalent system~(\ref{ode-trafo}). We cite a theorem, whose proof can be found in~\cite{pulch-arxiv}, for example. \begin{theorem} \label{thm:stable} Let the linear dynamical system~(\ref{linear-system}) with a non-singular mass matrix be asymptotically stable. If $M$ is a solution of the Lyapunov inequality~(\ref{lyapunov-ineq}), then each Galerkin-type projection-based MOR of the linear dynamical system~(\ref{ode-trafo}) yields an asymptotically stable reduced system~(\ref{system-reduced}). \end{theorem} Let $V \in \mathbbm{R}^{n \times r}$ with $V^\top V = I_r$ be any projection matrix constructed for a reduction of the FOM~(\ref{linear-system}). The Galerkin-type MOR with~$V$ is now applied to the transformed system~(\ref{ode-trafo}). The matrices of the associated ROM~(\ref{system-reduced}) can be written in the form~(\ref{projected-matrices}) with the projection matrix \begin{equation} \label{matrix-W} W = M E V . \end{equation} Consequently, this reduction represents a special case of a (non-Galerkin-type) MOR~(\ref{projected-matrices}) for the original system~(\ref{linear-system}). It holds that $W^\top V \neq I_r$ in general. Biorthogonality is achieved by the alternative projection matrix \begin{equation} \label{matrix-tilde-W} W' = W (V^\top W)^{-1} . \end{equation} The ROMs obtained from~(\ref{matrix-W}) and~(\ref{matrix-tilde-W}) are equivalent and thus the stability properties coincide. The matrix~$W$ in~(\ref{matrix-W}) is defined by matrix-matrix products. On the one hand, the evaluation $V'=EV$ is cheap, because~$E$ is typically sparse.
On the other hand, the matrix~$M$ is dense. We require an approximation of~$M$ such that the product $M V'$ is computable with relatively low effort. The Galerkin-type MOR for the system~(\ref{linear-system}) and the MOR for the system~(\ref{ode-trafo}) are not equivalent, since the residual~(\ref{residual}) is not invariant in the used transformations. \subsection{Frequency domain integrals} There are two analytical formulas for the solution of the generalised Lyapunov equation~(\ref{lyapunov}), see~\cite[p.~177]{antoulas}. It holds that \begin{equation} \label{lyap-time-domain} M = \int_0^\infty {\rm e}^{t (A E^{-1})^\top} E^{-\top} F E^{-1} {\rm e}^{t (A E^{-1})} \; {\rm d}t \end{equation} including the matrix exponential in the time domain. The asymptotic stability of~(\ref{linear-system}) implies $\alpha(E^{-1}A)<0$, and hence also $\alpha(A E^{-1})<0$ for the similar matrix $A E^{-1}$, so this matrix-valued integral exists. Alternatively, Parseval's theorem induces the matrix-valued integral \begin{equation} \label{lyap-frequency-domain} M = \frac{1}{2\pi} \int_{-\infty}^\infty \big( {\rm i} \omega E^\top - A^\top \big)^{-1} F \big( -{\rm i} \omega E - A \big)^{-1} \; {\rm d}\omega \end{equation} in the frequency domain. The asymptotic stability of~(\ref{linear-system}) yields the invertibility of the involved matrices. Although the integrand is complex-valued, the integral~$M$ becomes real-valued. Our idea is to apply the elementary choice~$F=I_n$ with the identity matrix in the Lyapunov equation~(\ref{lyapunov}). The identity matrix features the maximum rank~$n$. Furthermore, there is no potential for a low-rank approximation of~$I_n$, because all eigenvalues are identical to one. A symmetry of the integrand allows for a restriction of the integration to non-negative frequencies.
The frequency domain integral~(\ref{lyap-frequency-domain}) simplifies to \begin{equation} \label{lyap-frequency-domain2} M = \frac{1}{\pi} \, {\rm Re} \left[ \int_{0}^\infty S(\omega)^{-\htop} S(\omega)^{-1} \; {\rm d}\omega \right] \end{equation} using the abbreviation \begin{equation} \label{matrix-S} S(\omega) = -{\rm i} \omega E - A \in \mathbbm{C}^{n \times n} \qquad \mbox{for}\;\; \omega \in \mathbbm{R} . \end{equation} In~(\ref{lyap-time-domain}), the matrix exponential yields dense matrices. In~(\ref{lyap-frequency-domain}), the matrices $s E - A$ for $s \in \mathbbm{C}$ are often sparse, whereas the inverse matrices are always dense. Hence we never compute the inverse matrices explicitly. Nevertheless, sophisticated algorithms often produce a sparse $LU$-decomposition of a matrix $s E - A$. In an MOR with projection matrix~(\ref{matrix-W}), we require just the matrix-matrix product $M V'$ with the constant matrix $V' = EV$. It follows that \begin{equation} \label{M-times-V} W = M V' = \frac{1}{\pi} \, {\rm Re} \left[ \int_{0}^\infty S(\omega)^{-\htop} S(\omega)^{-1} V' \; {\rm d}\omega \right] , \end{equation} which represents a matrix-valued integral of size $n \times r$ in the frequency domain. We do not use the formulation~(\ref{lyap-time-domain}) in the time domain, because there is no suitable method to calculate the matrix exponential, see~\cite{moler-vanloan}. Appropriate iterative techniques to compute a matrix-vector product with the matrix exponential do exist, see~\cite{almohy-higham}. However, rough approximations would confuse the error estimation of an adaptive quadrature method. \subsection{Error condition for Lyapunov inequality} We show the following general result to characterise the influence of errors in the context of the Lyapunov inequality~(\ref{lyapunov-ineq}).
\begin{theorem} \label{thm:condition} Let $M \in \mathbbm{R}^{n \times n}$ be the solution of the Lyapunov equation~(\ref{lyapunov}) with $F=I_n$ and $\widetilde{M} \in \mathbbm{R}^{n \times n}$ be any symmetric matrix. If it holds that \begin{equation} \label{error-tolerance-abs} \| \widetilde{M} - M \| < \displaystyle \frac{1}{ \| A^\top \| \cdot \| E \| + \| A \| \cdot \| E^\top \|} \end{equation} in some subordinate matrix norm~$\| \cdot \|$, then $\widetilde{M}$ is positive definite and satisfies the Lyapunov inequality~(\ref{lyapunov-ineq}). \end{theorem} \underline{Proof:} It holds that $$ \begin{array}{rcl} A^\top M E + E^\top M A & = & - I_n \\[0.5ex] A^\top \widetilde{M} E + E^\top \widetilde{M} A & = & - G \\ \end{array} $$ with a symmetric matrix~$G$. Subtraction yields $$ A^\top (\widetilde{M}-M) E + E^\top (\widetilde{M}-M) A = I_n - G . $$ Let $\eta = \| A^\top \| \cdot \| E \| + \| A \| \cdot \| E^\top \|$. We estimate $$ \| I_n - G \| = \| A^\top (\widetilde{M}-M) E + E^\top (\widetilde{M}-M) A \| \le \eta \| \widetilde{M}-M \| . $$ Now the condition~(\ref{error-tolerance-abs}) is sufficient for $\eta \| \widetilde{M}-M \| < 1$ and thus $\| I_n - G \| < 1$. Let $\lambda_j$ and $\mu_j$ for $j=1,\ldots,n$ be the eigenvalues of~$G$ and $I_n - G$, respectively. It follows that $$ | 1 - \lambda_j | = | \mu_j | \le \| I_n - G \| < 1 \quad \mbox{for}\;\; j=1,\ldots,n . $$ We obtain $0< \lambda_j < 2$ for all $j=1,\ldots,n$. Consequently, the matrix~$G$ is positive definite and the matrix~$-G$ is negative definite. Since $\widetilde{M}$ represents the solution of a Lyapunov equation~(\ref{lyapunov}) including the symmetric positive definite matrix~$F=G$, $\widetilde{M}$~inherits the positive definiteness. \hfill $\Box$ \medskip If we employ the spectral (matrix) norm, then the above constant~$\eta$ simplifies to $\eta = 2 \| A \|_2 \| E \|_2$.
However, the evaluation of the spectral norm takes more effort in comparison to the norms $\|\cdot\|_1,\|\cdot\|_\infty$. An obvious question is whether a sufficient condition can be derived for the relative error $\frac{\| \widetilde{M} - M \|}{\| M \|}$. However, we require an upper bound on $\| M \|$ in this case, which becomes more involved. In~\cite[p.~100]{stykel-diss}, the analysis yields the estimate $$ \| M \|_{\rm F} = \| \mathcal{L}^{-1} (I_n) \|_{\rm F} \le \| \mathcal{L}^{-1} \|_{\rm F} \| I_n \|_{\rm F} = \sqrt{n} \left( \inf_{\| X \|_{\rm F} = 1} \| A^\top X E + E^\top X A \|_{\rm F} \right)^{-1} $$ including the inverse of the Lyapunov operator $\mathcal{L}$ in the Frobenius norm. Yet this upper bound cannot be simplified in the case of general matrices~$A$ and~$E$. We obtain a necessary condition on the relative error with respect to the requirement~(\ref{error-tolerance-abs}). \begin{lemma} \label{lemma:relative-error} If the solution~$M$ of the Lyapunov equation~(\ref{lyapunov}) with $F=I_n$ and a symmetric matrix~$\widetilde{M}$ satisfy the condition~(\ref{error-tolerance-abs}), then the relative error exhibits the bound \begin{equation} \label{delta-M-rel} \frac{\| \widetilde{M} - M \|}{\| M \|} < 1 \end{equation} in the used matrix norm. \end{lemma} \underline{Proof:} The Lyapunov equation~(\ref{lyapunov}) with $F=I_n$ yields $$ 1 = \| I_n \| \le \| A^\top M E + E^\top M A \| \le \eta \| M \| $$ with the constant~$\eta$ in the proof of Theorem~\ref{thm:condition}. It follows that $\frac{1}{\eta} \le \| M \|$. We obtain $$ \frac{\| \widetilde{M} - M \|}{\| M \|} \le \frac{\| \widetilde{M} - M \|}{\frac{1}{\eta}} = \eta \| \widetilde{M} - M \| < 1 $$ using the property~(\ref{error-tolerance-abs}). \hfill $\Box$ \medskip Lemma~\ref{lemma:relative-error} motivates that the condition~(\ref{error-tolerance-abs}) is not strong, because the induced relative error~(\ref{delta-M-rel}) may become up to 100\%.
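Theorem~\ref{thm:condition} can be observed numerically: any symmetric perturbation strictly below the tolerance~(\ref{error-tolerance-abs}) leaves both definiteness properties intact. A sketch with made-up matrices (here $E=I_n$ for brevity, so that the exact solution is available from SciPy, and the spectral-norm constant simplifies to $\eta = 2\|A\|_2$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable test matrix
E = np.eye(n)                                       # E = I_n for brevity

# Exact solution of A^T M + M A + I = 0.
M = solve_continuous_lyapunov(A.T, -np.eye(n))
M = (M + M.T) / 2

# Tolerance (error-tolerance-abs) in the spectral norm: eta = 2 ||A|| ||E||.
tol = 1.0 / (2.0 * np.linalg.norm(A, 2) * np.linalg.norm(E, 2))

# Any symmetric perturbation strictly below the tolerance is admissible.
D = rng.standard_normal((n, n))
D = (D + D.T) / 2
D *= 0.5 * tol / np.linalg.norm(D, 2)
Mt = M + D

G = -(A.T @ Mt @ E + E.T @ Mt @ A)
assert np.all(np.linalg.eigvalsh(Mt) > 0)  # Mt remains positive definite
assert np.all(np.linalg.eigvalsh(G) > 0)   # Lyapunov inequality holds
```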
\subsection{Quadrature methods} \label{sec:quadrature} Phillips and Silveira~\cite{phillips-silveira} investigated the integrals~(\ref{lyap-frequency-domain}) including $F = G G^\top$ with a low-rank factor $G \in \mathbbm{R}^{n \times \ell}$ ($\ell \ll n$). Therein, a quadrature method with $K$~nodes and positive weights yields an approximation $M \approx Z Z^{\htop}$ with a factor $Z \in \mathbbm{C}^{n \times (\ell K)}$. Hence both the rank~$\ell$ and the number of nodes~$K$ have to be small. This requirement disappears in our approach using $F=I_n$. We also apply a quadrature rule to our frequency domain integrals. For theoretical investigations, we define an approximation of~(\ref{lyap-frequency-domain2}) by \begin{equation} \label{approx-M} \widetilde{M} = \frac{1}{\pi} \, {\rm Re} \left[ \sum_{k=1}^K \gamma_k S(\omega_k)^{-\htop} S(\omega_k)^{-1} \right] = \frac{1}{\pi} \sum_{k=1}^K \gamma_k {\rm Re} \left[ S(\omega_k)^{-\htop} S(\omega_k)^{-1} \right] \end{equation} with nodes~$\omega_k \ge 0$ and weights~$\gamma_k > 0$ for $k=1,\ldots,K$. The quadrature introduces an error. Nevertheless, the approximations~(\ref{approx-M}) always possess the following desired properties independently of the magnitude of the error. \begin{lemma} \label{lemma:definiteness} If the quadrature rule involves positive weights only, then the approximation~$\widetilde{M}$ in~(\ref{approx-M}) is always symmetric and positive definite. \end{lemma} \underline{Proof:} \nopagebreak We define $S(\omega_k)^{-1} = X_k + {\rm i} Y_k$ with real-valued matrices $X_k,Y_k$. It follows that $$ {\rm Re} \left[ S(\omega_k)^{-\htop} S(\omega_k)^{-1} \right] = X_k^{\top} X_k + Y_k^{\top} Y_k . $$ Now the symmetry of~$\widetilde{M}$ is obvious. We show the definiteness. Let $z \in \mathbbm{R}^n \backslash \{ 0 \}$.
We obtain $$ z^\top \widetilde{M} z = \frac{1}{\pi} \sum_{k=1}^K \gamma_k \left( z^{\top} X_k^{\top} X_k z + z^{\top} Y_k^{\top} Y_k z \right) = \frac{1}{\pi} \sum_{k=1}^K \gamma_k \left( \| X_k z \|_2^2 + \| Y_k z \|_2^2 \right) \ge 0 . $$ It holds that $X_k z \neq 0$ or $Y_k z \neq 0$ for each~$k$, because otherwise $S(\omega_k)^{-1} z = 0$ would cause a contradiction to the non-singularity of the matrix $S(\omega_k)$. It follows that the above sum is strictly positive. \hfill $\Box$ \medskip The associated approximation of~(\ref{M-times-V}) reads as \begin{equation} \label{approx-W} \widetilde{W} = \widetilde{M} V' = \frac{1}{\pi} \sum_{k=1}^K \gamma_k \, {\rm Re} \left[ S(\omega_k)^{-\htop} S(\omega_k)^{-1} V' \right] . \end{equation} This approach turns out to be similar to a quadrature technique given in~\cite{benner-schneider}, where an integral of the form~(\ref{lyap-frequency-domain}) yields the Gramian of a linear dynamical system with many outputs. Lemma~\ref{lemma:definiteness} shows that the matrix~$\widetilde{M}$ is always symmetric and positive definite in~(\ref{approx-W}). Consequently, the underlying transformation is non-singular, which represents a crucial advantage in comparison to methods for Lyapunov equations~(\ref{lyapunov}) producing low-rank approximations. Theorem~\ref{thm:condition} implies that an approximation satisfying~(\ref{error-tolerance-abs}) guarantees a stability preservation in an MOR. However, the approximation errors cannot be checked in practice, because the exact solution~$M$ is unknown. We compute an approximation~(\ref{approx-W}), where an adaptive quadrature yields errors below predetermined tolerances. Yet there is no direct connection to the errors in the counterpart~(\ref{approx-M}). \subsection{Numerical solution of linear systems} Our aim is to evaluate the approximation~(\ref{approx-W}) without computations of matrices of size $n \times n$.
Thus we solve complex-valued linear systems \begin{equation} \label{linear-system-alg} S(\omega_k) S(\omega_k)^{\htop} X_k = V' \end{equation} for each~$k$ with the matrices~(\ref{matrix-S}), a predetermined matrix $V' \in \mathbbm{R}^{n \times r}$ and the unknowns $X_k \in \mathbbm{C}^{n \times r}$. We consider only direct approaches of numerical linear algebra. Iterative methods introduce an additional error, which restricts the accuracy of high-order quadrature rules. Our aim is to use as few nodes as possible. There are two possibilities to solve a linear system~(\ref{linear-system-alg}) directly now: \begin{itemize} \item[(i)] The matrix-matrix product $\hat{S}_k = S(\omega_k) S(\omega_k)^{\htop}$ is computed. An algorithm for sparse matrices generates a Cholesky-decomposition $\hat{S}_k = \hat{L}_k \hat{L}_k^{\htop}$. We determine the solution $X_k = \hat{L}_k^{-\htop} \hat{L}_k^{-1} V'$ by forward and backward substitutions for multiple right-hand sides. \item[(ii)] An $LU$-decomposition including pivoting with row as well as column reordering is applied to the matrices~(\ref{matrix-S}), i.e., \begin{equation} \label{lu-decomp} P_k S(\omega_k) Q_k = L_k U_k \end{equation} with orthogonal permutation matrices~$P_k,Q_k$. It follows that $$ Q_k^{\top} S(\omega_k)^{\htop} P_k^{\top} = U_k^{\htop} L_k^{\htop} $$ represents an $LU$-decomposition of $S(\omega_k)^{\htop}$. We obtain $$ X_k = P_k^\top L_k^{-\htop} U_k^{-\htop} U_k^{-1} L_k^{-1} P_k V' $$ using permutations, forward and backward substitutions for multiple right-hand sides. \end{itemize} We do not apply the approach~(i) in our numerical computations for two reasons: Firstly, the matrix $\hat{S}_k$ is less sparse than a matrix~(\ref{matrix-S}). Thus a Cholesky factorisation of $\hat{S}_k$ may not be (significantly) faster than an $LU$-decomposition of $S(\omega_k)$. Secondly, the condition number increases considerably due to ${\rm cond}(\hat{S}_k) = {\rm cond}(S(\omega_k))^2$ with respect to the spectral norm.
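Variant~(ii) requires only one sparse $LU$-factorisation of $S(\omega_k)$ per node, since the factorisation can be reused for the conjugate-transposed system. A sketch with an invented sparse test matrix; SciPy's `splu` wraps SuperLU, whose `trans='H'` option solves $S^{\htop}x = b$ with the factors of~$S$:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(4)
n, r = 50, 3

# Illustrative sparse matrices: tridiagonal stiffness-like A, identity E.
A = sp.diags([1.0, -4.0, 1.0], [-1, 0, 1], shape=(n, n), format='csc')
E = sp.identity(n, format='csc')
Vp = rng.standard_normal((n, r))              # stands for V' = E V

omega = 0.7
S = sp.csc_matrix(-1j * omega * E - A)        # matrix S(omega) of (matrix-S)

# One sparse LU-factorisation of S serves both solves, since SuperLU
# handles the conjugate-transposed system via trans='H'.
lu = splu(S)
Y = lu.solve(Vp.astype(complex))              # Y = S^{-1} V'
X = lu.solve(Y, trans='H')                    # X = S^{-H} Y = S^{-H} S^{-1} V'

assert np.allclose(S @ (S.conj().T @ X), Vp)  # verifies S S^H X = V'
```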
Hence we perform the $LU$-decompositions~(\ref{lu-decomp}). Efficient numerical methods are available like UMFPACK~\cite{davis}. Therein, pivoting and permutation strategies keep the factorisations as sparse as possible, while numerical stability is still achieved. \subsection{Integrals on finite intervals} \label{sec:finite-interval} Concerning the integrals~(\ref{M-times-V}), we can straightforwardly transform the infinite frequency domain $[0,\infty)$ to the finite interval $[0,1)$. The substitution $\omega = \frac{\xi}{1-\xi}$ or, equivalently, $\xi = \frac{\omega}{1+\omega}$ yields \begin{equation} \label{integral-trafo} W = \frac{1}{\pi} \, {\rm Re} \left[ \int_{0}^1 S\left(\frac{\xi}{1-\xi}\right)^{-\htop} S\left(\frac{\xi}{1-\xi}\right)^{-1} V' \frac{1}{(1-\xi)^2} \; {\rm d}\xi \right] . \end{equation} An advantage is that the evaluation of the integrand at $\xi = 1$ exists in the limit \begin{equation} \label{limit-xi} \lim_{\xi \rightarrow 1} \; S\left(\frac{\xi}{1-\xi}\right)^{-\htop} S\left(\frac{\xi}{1-\xi}\right)^{-1} \frac{1}{(1-\xi)^2} = E^{-\top} E^{-1} \end{equation} provided that the mass matrix is non-singular. Now any (open or closed) quadrature rule for finite intervals generates an approximation to the integral~(\ref{integral-trafo}). Numerical tests indicate that the Gauss-Legendre quadrature is superior for computing the integral~(\ref{integral-trafo}) in comparison to other common schemes. The reason is that the integrand is analytic in the open interval~$(0,1)$. An adaptive Gauss-Kronrod quadrature rule was considered for integrals of this type in~\cite{benner-schneider}. However, as mentioned in Section~\ref{sec:quadrature}, the quadrature is required to be sufficiently accurate not for the integrals~(\ref{integral-trafo}) but for the inherent integrals~(\ref{lyap-frequency-domain2}). An alternative is to use an adaptive refinement of nested quadrature rules until a desired set of ROMs becomes asymptotically stable.
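The transformed integral~(\ref{integral-trafo}) is well suited to Gauss--Legendre quadrature, whose nodes lie strictly inside $(0,1)$ and whose weights are positive. The sketch below uses made-up small matrices with $E=I_n$, so that the result can be compared against the exact Lyapunov solution (a dense illustration only; the factor $1/(1-\xi)^2$ is the Jacobian of the substitution $\omega = \xi/(1-\xi)$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(5)
n, r = 6, 2
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable test matrix
E = np.eye(n)                                        # E = I_n for comparison
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
Vp = E @ V                                           # V' = E V

def integrand(xi):
    """Re[S(w)^{-H} S(w)^{-1}] V' * dw/dxi with w = xi/(1-xi)."""
    w = xi / (1.0 - xi)
    Sinv = np.linalg.inv(-1j * w * E - A)
    return (Sinv.conj().T @ (Sinv @ Vp)).real / (1.0 - xi) ** 2

# Gauss-Legendre rule transplanted to [0, 1]; all weights are positive.
x, g = np.polynomial.legendre.leggauss(40)
x, g = 0.5 * (x + 1.0), 0.5 * g
W = sum(gk * integrand(xk) for xk, gk in zip(x, g)) / np.pi

# Reference: W = M V' with the exact Lyapunov solution (E = I_n here).
M = solve_continuous_lyapunov(A.T, -np.eye(n))
assert np.linalg.norm(W - M @ Vp) < 1e-8 * np.linalg.norm(M @ Vp)
```

Since the integrand extends analytically beyond the endpoints, the Gauss--Legendre error decays geometrically in the number of nodes, in line with the remark above.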
Note that it is cheap to check the stability for low-dimensional systems. An elementary adaptive method of this type can be based on the midpoint rule. The nodes read as $\xi_k = \frac{h}{2} + (k-1)h \in (0,1)$ for $k=1,\ldots,K$ with step size $h = \frac{1}{K}$. The iteration $K_i = 2^{i-1}$ for $i=1,2,3,\ldots$ induces a sequence of nested grids, where the evaluations of the integrand in~(\ref{integral-trafo}) can be reused. The iteration is terminated if all considered ROMs become stable. \section{Application to differential-algebraic equations} \label{sec:daes} The technique of Section~\ref{sec:preservation} cannot be applied directly to systems of DAEs~(\ref{linear-system}). A singular mass matrix implies $z^\top (A^\top M E + E^\top M A) z=0$ for $z \in {\rm ker}(E)$ and any~$M$. Hence the Lyapunov equation~(\ref{lyapunov}) cannot be fulfilled for any positive definite matrix~$F$. The associated integral~(\ref{lyap-frequency-domain}) does not exist, even though the integrand is always well-defined in the case of a stable matrix pencil with respect to Definition~\ref{def:stable-pencil}. Likewise, the limit~(\ref{limit-xi}) does not exist. \subsection{Kronecker normal form} A linear dynamical system~(\ref{linear-system}) with a singular mass matrix can be transformed into the Kronecker normal form, see~\cite[p.~452]{hairer2}. There are non-singular matrices $T_{\rm l},T_{\rm r} \in \mathbbm{R}^{n \times n}$ such that \begin{equation} \label{kronecker} T_{\rm l} A T_{\rm r} = \begin{pmatrix} A' & 0 \\ 0 & I_{n_2} \\ \end{pmatrix} \qquad \mbox{and} \qquad T_{\rm l} E T_{\rm r} = \begin{pmatrix} I_{n_1} & 0 \\ 0 & N \\ \end{pmatrix} \end{equation} with a matrix $A' \in \mathbbm{R}^{n_1 \times n_1}$ and a nilpotent strictly upper triangular matrix $N \in \mathbbm{R}^{n_2 \times n_2}$ ($n=n_1+n_2$).
The system~(\ref{linear-system}) splits into a slow and a fast subsystem \begin{equation} \label{dae-semi-expl} \begin{array}{rcl} \dot{z}_1(t) & = & A' z_1(t) \\ N \dot{z}_2(t) & = & z_2(t) \\ \end{array} \end{equation} with $z_1(t) \in \mathbbm{R}^{n_1}$ and $z_2(t) \in \mathbbm{R}^{n_2}$. The input terms are omitted in~(\ref{dae-semi-expl}), because they do not influence the stability properties of the systems. The smallest integer~$\nu \ge 1$ such that $N^{\nu-1} \neq 0$ and $N^{\nu}=0$ is called the nilpotency index of the DAE system. Unfortunately, there is no efficient numerical method to compute the decomposition~(\ref{kronecker}) of the matrices in the linear dynamical system~(\ref{linear-system}). Thus we must design techniques that are feasible without an explicit knowledge of the Kronecker normal form. \subsection{Regularisation} \label{sec:regularisation} In~\cite{mohaghegh}, a system of DAEs was regularised straightforwardly under the assumption of semi-explicit systems. Alternatively, we apply an approach for general descriptor systems from~\cite{mueller}, which goes back to~\cite{wang-etal}. The matrix pencil $(E,A)$ is modified into $(\hat{E},\hat{A})$ by \begin{equation} \label{matrices-regularised} \hat{E} = E - \alpha A \qquad \mbox{and} \qquad \hat{A} = A + \beta E \end{equation} with parameters $\alpha,\beta > 0$. The matrix $\hat{E}$ is non-singular for all $\alpha > 0$, because otherwise some $\lambda = \frac{1}{\alpha} > 0$ would be an eigenvalue of the stable matrix pencil $(E,A)$. The theorem below follows from the results in~\cite{mueller}. \begin{theorem} \label{thm:regularisation} Let the matrix~$E$ be singular and the matrix pencil $(E,A)$ be regular and stable.
The perturbed matrices~(\ref{matrices-regularised}) with $\alpha = \beta^2$ and \begin{equation} \label{bound-beta} 0 < \beta < \frac{1}{\rho(A')^2} \end{equation} using the spectral radius $\rho(A')$ of the matrix in the Kronecker normal form~(\ref{kronecker}) yield an asymptotically stable system of ODEs~(\ref{linear-system}). \end{theorem} The upper bound~(\ref{bound-beta}) is unknown in general, because the Kronecker normal form~(\ref{kronecker}) is not available in practice. Nevertheless, Theorem~\ref{thm:regularisation} tells us that a regularisation to a stable system is feasible for all sufficiently small~$\beta>0$. Furthermore, the sparsity pattern of $s E - A$ is identical to the sparsity pattern of $s \hat{E} - \hat{A}$. Thus the computational effort for a solution of linear systems does not change significantly. Now we apply MOR methods to the regularised system~(\ref{linear-system}) including the matrices~(\ref{matrices-regularised}), where the stability-preserving technique from Section~\ref{sec:preservation} is applicable. \subsection{Error estimates} We reduce the regularised system with matrices~(\ref{matrices-regularised}) instead of the original descriptor system~(\ref{linear-system}). This approach is appropriate if error bounds can be provided for the transfer functions. It holds that \begin{equation} \label{dae-total-error} \left\| H_{\rm DAE} - H_{\rm ROM} \right\| \le \left\| H_{\rm DAE} - H_{\rm ODE} \right\| + \left\| H_{\rm ODE} - H_{\rm ROM} \right\| \end{equation} in each norm, where $H_{\rm ODE}$ and $H_{\rm ROM}$ are the transfer functions of the regularised system and its ROM, respectively. The difference $H_{\rm ODE} - H_{\rm ROM}$ depends on the quality of the MOR. We discuss the difference $H_{\rm DAE} - H_{\rm ODE}$ in the following. A linear dynamical system (or its transfer function) is called strictly proper, if and only if the transfer function~$H$ satisfies $$ \lim_{s \rightarrow \infty} H(s) = 0 .
$$ A system of ODEs always exhibits a strictly proper transfer function. The transfer function of a general system of DAEs reads as, see~\cite{benner-stykel}, \begin{equation} \label{transfer-dae} H_{\rm DAE}(s) = H_{\rm SP}(s) + P(s) \end{equation} with a strictly proper part $H_{\rm SP} : \mathbbm{C} \backslash \Sigma \rightarrow \mathbbm{C}^{n_{\rm out} \times n_{\rm in}}$ and a polynomial part $P : \mathbbm{C} \rightarrow \mathbbm{C}^{n_{\rm out} \times n_{\rm in}}$. The polynomial part either vanishes or represents a non-zero matrix-valued polynomial of degree at most~$\nu$ with the index~$\nu$ of the system. These properties also depend on the definition of inputs and outputs in each system. If the polynomial part vanishes, then the $\mathscr{H}_2$-norm~(\ref{h2-norm}) as well as the $\mathscr{H}_{\infty}$-norm of the transfer function~(\ref{transfer-dae}) exist independent of the index. The $\mathscr{H}_{\infty}$-norm is always finite in the case of index-one systems. In~\cite{guenther}, for example, the electric circuit of the Miller integrator is modelled by a linear DAE system of index $\nu=2$, where the polynomial part becomes zero for the transfer function relating the input voltage to the output voltage. We provide an error bound for the regularisation on compact frequency intervals, which is relevant in this context. \begin{lemma} \label{lemma:error} Assume that the DAE is asymptotically stable and the ODE is given by~(\ref{matrices-regularised}) with $\alpha = \beta^2$. For each $\omega' > 0$ there are constants $K_{\omega'},L_{\omega'} > 0$ such that \begin{equation} \label{error-compact} \left\| H_{\rm DAE}({\rm i}\omega) - H_{\rm ODE}({\rm i}\omega) \right\|_2 < K_{\omega'} \beta \end{equation} uniformly for all frequencies $\omega \in [-\omega',\omega']$ provided that $\beta<L_{\omega'}$, where the spectral (matrix) norm $\| \cdot \|_2$ is used. 
\end{lemma} \underline{Proof:} Let $I = [-\omega',\omega']$, $s={\rm i}\omega$ and $\| \cdot \| = \| \cdot \|_2$ in this proof. We assume a bound $L_{\omega'} \le \min \{ \rho(A')^{-2} , 1 \}$ on~$\beta$ due to~(\ref{bound-beta}). Theorem~\ref{thm:regularisation} implies that the ODE is asymptotically stable. Thus the transfer functions exist and are continuous on the imaginary axis. The spectral norm is submultiplicative for matrices of any size. Thus we estimate $$ \left\| H_{\rm DAE}(s) - H_{\rm ODE}(s) \right\| \le \| B \| \cdot \| C \| \cdot \left\| (sE-A)^{-1} - (s\hat{E}-\hat{A})^{-1} \right\| . $$ We use the abbreviations $G(s) = s E - A$ and $\hat{G}(s) = s \hat{E} - \hat{A}$. A general estimate on the difference between inverse matrices is available in a subordinate matrix norm. It follows that $$ \left\| G(s)^{-1} - \hat{G}(s)^{-1} \right\| \le \frac{\| G(s)^{-1} \|^2 \| G(s) - \hat{G}(s) \|}{ 1- \| G(s)^{-1} \| \cdot \| G(s) - \hat{G}(s) \|} $$ provided that $\| G^{-1}(s) \| \cdot \| G(s) - \hat{G}(s) \| < 1$. On the one hand, the definition~(\ref{matrices-regularised}) with $\alpha = \beta^2$ and $\beta \le 1$ implies $$ \left\| G(s) - \hat{G}(s) \right\| = \| s \alpha A - \beta E \| \le \beta^2 |s| \cdot \| A \| + \beta \| E \| \le \Gamma \beta $$ for all $\omega \in I$ with the constant $\Gamma = \omega' \| A \| + \| E \| > 0$. On the other hand, we require the constant $$ \Theta = \max_{\omega \in I} \left\| \left( {\rm i}\omega E - A \right)^{-1} \right\| > 0 $$ such that $\| G(s)^{-1} \| \le \Theta$ for all $\omega \in I$. We obtain $$ \left\| G(s)^{-1} - \hat{G}(s)^{-1} \right\| \le 2 \Theta^2 \Gamma \beta $$ for all $\omega \in I$ provided that $\beta < \frac{1}{2 \Theta \Gamma}$. Consequently, the constants read as $K_{\omega'} := 2 \Theta^2 \Gamma \| B \| \| C \|$ and $L_{\omega'} = \min \{ \frac{1}{2\Theta\Gamma} , \frac{1}{\rho(A')^2} , 1\}$. 
\hfill $\Box$ \medskip Lemma~\ref{lemma:error} demonstrates that the error of the regularisation is low on a compact frequency domain for sufficiently small parameters $\alpha,\beta$. An error estimate of the type~(\ref{error-compact}) cannot be derived uniformly for all $\omega \in \mathbbm{R}$, because the integral~(\ref{lyap-frequency-domain}) does not exist in the limit $\beta \rightarrow 0$. Thus high frequencies represent the critical part. If a system of DAEs~(\ref{linear-system}) has a strictly proper transfer function, then this problem becomes obsolete. \begin{theorem} \label{thm:error} Let the linear dynamical system~(\ref{linear-system}) be an asymptotically stable DAE with a strictly proper transfer function. A system of ODEs is given by the perturbed matrices~(\ref{matrices-regularised}) with $\alpha = \beta^2$. For each $\varepsilon > 0$ there is a constant $L > 0$ such that the transfer functions satisfy \begin{equation} \label{error-global} \left\| H_{\rm DAE} - H_{\rm ODE} \right\|_{\mathscr{H}_2} < \varepsilon \end{equation} for all parameters $\beta$ with $0<\beta<L$. \end{theorem} \underline{Proof:} The restriction $\beta \le \beta' < \min \{ \rho(A')^{-2},1 \}$, see~(\ref{bound-beta}), for some $\beta'>0$ (close to the upper bound) and $\alpha = \beta^2$ ensures the existence of each $\mathscr{H}_2$-norm. The $\mathscr{H}_2$-norm of the difference reads as $$ \left\| H_{\rm DAE} - H_{\rm ODE} \right\|_{\mathscr{H}_2}^2 = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left\| H_{\rm DAE}({\rm i}\omega) - H_{\rm ODE}({\rm i}\omega) \right\|_{\rm F}^2 \; {\rm d}\omega $$ including the Frobenius (matrix) norm. Let $\Delta H = H_{\rm DAE} - H_{\rm ODE}$. The assumptions imply that each component of $\Delta H$ is a rational function of~$\omega$, where the degree of the numerator is less than the degree of the denominator. The coefficients of the rational functions depend continuously on the parameters~$\alpha,\beta \ge 0$.
We discuss the part for high frequencies, i.e., $$ \int_{\omega'}^{\infty} \left\| \Delta H ({\rm i}\omega) \right\|_{\rm F}^2 \; {\rm d}\omega = \sum_{i=1}^{n_{\rm in}} \sum_{j=1}^{n_{\rm out}} \int_{\omega'}^{\infty} \left| \Delta H_{ij} ({\rm i}\omega) \right|^2 \;{\rm d}\omega $$ with $\omega' \gg 1$. It follows that $$ \lim_{\omega' \rightarrow \infty} \int_{\omega'}^{\infty} \left\| \Delta H ({\rm i}\omega) \right\|_{\rm F}^2 \; {\rm d}\omega = 0 $$ for each $\beta \in \mathcal{B} = [0,\beta']$ and $\alpha = \beta^2$, where the convergence is monotone from above (for $\omega' \rightarrow \infty$). The parameter interval $\mathcal{B}$ is compact. Dini's theorem yields $$ \lim_{\omega' \rightarrow \infty} \max_{\beta \in \mathcal{B}} \int_{\omega'}^{\infty} \left\| \Delta H ({\rm i}\omega) \right\|_{\rm F}^2 \; {\rm d}\omega = 0 . $$ Hence we obtain a frequency $\omega'(\varepsilon)>0$ such that $$ \max_{\beta \in \mathcal{B}} \int_{\omega'(\varepsilon)}^{\infty} \left\| \Delta H ({\rm i}\omega) \right\|_{\rm F}^2 \; {\rm d}\omega < \frac{2\pi}{3} \varepsilon^2 . $$ The integration domain $(-\infty,-\omega'(\varepsilon))$ exhibits the same bound due to a symmetry. Now we apply Lemma~\ref{lemma:error} to the interval $[-\omega'(\varepsilon),\omega'(\varepsilon)]$ and obtain a bound~(\ref{error-compact}) with constants $K_{\omega'(\varepsilon)},L_{\omega'(\varepsilon)} > 0$. Let $m = \min \{ n_{\rm in} , n_{\rm out} \}$. The matrix norms exhibit the general bound $\| \Delta H \|_F \le \sqrt{m} \| \Delta H \|_2$. We obtain $$ \int_{-\omega'(\varepsilon)}^{\omega'(\varepsilon)} \left\| \Delta H ({\rm i}\omega) \right\|_{\rm F}^2 \; {\rm d}\omega < 2 m \omega'(\varepsilon) K_{\omega'(\varepsilon)}^2 \beta^2 < \frac{2\pi}{3} \varepsilon^2 $$ provided that $\beta < \Xi(\varepsilon)$ with the constant $$ \Xi(\varepsilon) = \min \left\{ \frac{\sqrt{\pi} \, \varepsilon}{\sqrt{3 m \omega'(\varepsilon)} K_{\omega'(\varepsilon)}} , L_{\omega'(\varepsilon)} \right\} . 
$$ Thus the estimate~(\ref{error-global}) is satisfied for all $0 < \beta < L$ with the single constant $L = \min\{ \Xi(\varepsilon), \beta' \}$ and $\beta' < \min \{ \rho(A')^{-2} , 1 \}$. \hfill $\Box$ \medskip Theorem~\ref{thm:error} illustrates that a regularisation with a low error in the $\mathscr{H}_2$-norm can be achieved provided that the parameters~$\alpha,\beta$ are chosen sufficiently small. The derived constants are pessimistic, because rough estimates appear in the proof. The magnitude of appropriate regularisation parameters depends on the system of DAEs~(\ref{linear-system}). However, tiny parameters cause problems in numerical computations due to ill-conditioned matrices, for example. If the system of DAEs is just proper, then the polynomial part in~(\ref{transfer-dae}) represents a (non-zero) constant. Consequently, both the $\mathscr{H}_2$-error and the $\mathscr{H}_{\infty}$-error of the regularisation do not become arbitrarily small. There is still some potential for using this regularisation technique. The input-output relation of the linear dynamical system~(\ref{linear-system}) is given by $Y(s) = H_{\rm DAE}(s) U(s)$ with the Laplace transforms $U,Y$ of input and output, respectively. If a particular input induces a Laplace transform with a sufficiently fast decay for high frequencies, then the same effect as in a strictly proper system emerges. \section{Numerical experiments} \label{sec:examples} We apply the stability-preserving technique from Section~\ref{sec:preservation} to two high-dimen\-sional examples now. All numerical computations were executed by the software package MATLAB~\cite{matlab2018}. \subsection{Microthruster benchmark} \label{sec:microthruster} In~\cite{morwiki}, a microthruster unit represents a test example called boundary condition independent thermal model. A spatial discretisation of the two-dimensional heat transfer partial differential equation yields a system of ODEs. More details can be found in~\cite{rudnyi}. 
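The reductions in the examples below rely on moment matching by a one-sided Arnoldi method. The following sketch is our own simplified dense variant, not the benchmark code: the toy matrices, the expansion point and the dimensions are illustrative assumptions.

```python
import numpy as np

def onesided_arnoldi(E, A, b, s0, r):
    """Orthonormal basis V of the Krylov subspace generated by the matrix
    (s0*E - A)^{-1} E and the start vector (s0*E - A)^{-1} b.
    The ROM follows from the Galerkin projection (V^T E V, V^T A V, ...)."""
    n = len(b)
    V = np.zeros((n, r))
    v = np.linalg.solve(s0 * E - A, b)       # in practice: one sparse LU, reused
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, r):
        w = np.linalg.solve(s0 * E - A, E @ V[:, j - 1])
        for i in range(j):                   # modified Gram-Schmidt
            w = w - (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

# illustrative stable toy system (not the microthruster data)
rng = np.random.default_rng(1)
n, r = 30, 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
E = np.eye(n)
b = rng.standard_normal(n)
V = onesided_arnoldi(E, A, b, s0=1.0, r=r)
```

The reduced matrices $V^{\rm T}EV$ and $V^{\rm T}AV$ then define the ROM; the stability of this ROM is what the transformation of Section~\ref{sec:preservation} is designed to enforce.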
Three parameters appear in this model, which we choose all equal to one. The linear dynamical system~(\ref{linear-system}) is single-input-multiple-output (SIMO) with $n_{\rm out} = 7$. Its Bode plot is depicted in Figure~\ref{fig:microthruster-bode}. Table~\ref{tab:micro} illustrates some properties of the system. In particular, the system is asymptotically stable. The matrix~$E$ is diagonal with positive elements. Thus we simply scale this system into explicit ODEs ($E=I_n$), which are used in the following. \begin{table} \caption{Properties of the microthruster benchmark system.\label{tab:micro}} \begin{center} \begin{tabular}{cc} \hline dimension~$n$ & 4257 \\ \# outputs & 7 \\ \# non-zero entries in~$A$ & 37465 \\ \# non-zero entries in~$E$ & 4257 \\ spectral abscissa $\alpha(E^{-1}A)$ & $-0.0013$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \begin{center} \includegraphics[width=6.5cm]{microthruster_bode_1.eps} \hspace{5mm} \includegraphics[width=6.5cm]{microthruster_bode_2.eps} \end{center} \caption{Bode plot of microthruster example with seven outputs.} \label{fig:microthruster-bode} \end{figure} We use the one-sided Arnoldi method with the single real expansion point $s_0 = 100$. The reduced systems are arranged for $r=1,2,\ldots,100$. The spectral abscissas of the reduced systems are depicted in Figure~\ref{fig:micro-spectrum}. It follows that 11 ROMs become unstable, which are all in the range $20 < r < 50$. Thus we apply the stability preserving approach from Section~\ref{sec:preservation}. \begin{figure} \begin{center} \includegraphics[width=10cm]{microthruster_abscissa.eps} \end{center} \caption{Spectral abscissa of the matrices in the ROMs from original system and transformed system in microthruster example.} \label{fig:micro-spectrum} \end{figure} A quadrature method requires the solution of linear systems~(\ref{linear-system-alg}). 
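Each quadrature node requires the solution of a shifted linear system $({\rm i}\omega E - A)x = b$, where the factorisation is computed once per node and reused for every right-hand side. In the sketch below, SciPy's {\tt splu} (SuperLU with a fill-reducing ordering) stands in for UMFPACK, and the small random matrix is purely illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def shifted_solve(E, A, omega, B):
    """Factorise i*omega*E - A once (sparse LU) and solve for all columns of B."""
    G = (1j * omega * E - A).tocsc()
    lu = spla.splu(G)    # fill-reducing ordering keeps the factors sparse
    return lu.solve(B.astype(complex))

# illustrative sparse data (not the benchmark matrices)
rng = np.random.default_rng(0)
n = 200
A = sp.random(n, n, density=0.02, random_state=0) - 5.0 * sp.identity(n)
E = sp.identity(n, format="csc")
b = rng.standard_normal((n, 1))

x = shifted_solve(E, A, 1.0, b)
res = np.linalg.norm((1j * E - A) @ x - b)   # residual of the factorised solve
```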
We investigate the sparsity in the $LU$-decomposition~(\ref{lu-decomp}) of the matrix $-{\rm i}I_n-A$ ($\omega=1$). The number of non-zeros in the $LU$-decomposition with partial pivoting ($Q=I_n$) is $395 \, 463$. Alternatively, UMFPACK achieves a factorisation with $226 \, 395$ non-zero entries, whose sparsity pattern is shown in Figure~\ref{fig:micro-sparse} (right). \begin{figure} \begin{center} \includegraphics[width=6cm]{microthruster_sparsity_1.eps} \hspace{10mm} \includegraphics[width=6cm]{microthruster_sparsity_2.eps} \end{center} \caption{Sparsity patterns: system matrix~$A$ (left) and $LU$-decomposition of $-{\rm i}I_n-A$ (right).} \label{fig:micro-sparse} \end{figure} We investigate three quadrature methods for the computation of the projection matrix~(\ref{approx-W}): \begin{itemize} \item[i)] adaptive Gauss-7-Kronrod-15 quadrature using the built-in MATLAB function {\tt integral}, see~\cite{shampine}, \item[ii)] Gauss-Legendre rule, see~\cite[p.~171]{stoerbulirsch}, with fixed numbers of nodes, \item[iii)] nested midpoint rules as described in Section~\ref{sec:finite-interval}. \end{itemize} In the adaptive quadrature (i), we choose the absolute and the relative error tolerances as $\varepsilon_{\rm abs} = \varepsilon_{\rm rel} = 0.1$. The algorithm performs $K=150$ evaluations of the integrand. The projection matrices~(\ref{matrix-tilde-W}) are used to attain biorthogonality. All reduced systems become stable now. Figure~\ref{fig:micro-spectrum} depicts the spectral abscissas of the ROMs. In the Gauss-Legendre rule~(ii), we increase the number of nodes~$K$ until all ROMs become stable, see Table~\ref{tab:micro-gauss}. Just $K=14$ nodes are sufficient to obtain always stable systems. Table~\ref{tab:micro-midpoint} shows the number of stable ROMs for the refinement in the midpoint rule~(iii). Now $127$ nodes are required to achieve the stability preservation in all reduced systems. 
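The integrals behind the projection matrix~(\ref{approx-W}) are frequency domain integrals of Gramian type. The sketch below illustrates the principle on a miniature example: a controllability Gramian is computed by Gauss-Legendre quadrature after the substitution $\omega = \tan\theta$ and compared with its exact value. The substitution and the toy data are our own illustrative choices.

```python
import numpy as np

def gramian_freq(A, B, K):
    """P = 1/(2*pi) * int (i*w*I - A)^{-1} B B^T (i*w*I - A)^{-*} dw,
    approximated with K Gauss-Legendre nodes after w = tan(theta)."""
    n = A.shape[0]
    t, w = np.polynomial.legendre.leggauss(K)   # nodes and weights on [-1, 1]
    theta = 0.5 * np.pi * t                     # map to (-pi/2, pi/2)
    weights = 0.5 * np.pi * w
    P = np.zeros((n, n))
    for th, wt in zip(theta, weights):
        om = np.tan(th)
        G = np.linalg.solve(1j * om * np.eye(n) - A, B)
        # dw = (1 + tan(theta)^2) dtheta
        P += wt * (1.0 + om**2) * (G @ G.conj().T).real / (2.0 * np.pi)
    return P

# toy system: for A = diag(-1,-2) and B = (1,1)^T the exact Gramian
# has the entries P_ij = -1/(lambda_i + lambda_j)
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
P = gramian_freq(A, B, 80)
P_exact = np.array([[0.5, 1.0 / 3.0], [1.0 / 3.0, 0.25]])
```

Since the transformed integrand is smooth, the Gauss-Legendre rule converges rapidly, which mirrors the small node numbers observed in Table~\ref{tab:micro-gauss}.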
\begin{table}[h] \caption{Number of stable reduced systems out of 100 for different numbers of nodes in Gauss-Legendre quadrature.\label{tab:micro-gauss}} \begin{center} \begin{tabular}{rcccccccccccccc} \# nodes & 1 & 2 & 4 & 5 & 6 & 7 & 9 & 11 & 14 \\ \hline \# stable systems & 91 & 92 & 93 & 94 & 96 & 97 & 98 & 99 & 100 \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{Number of stable reduced systems out of 100 for nested midpoint rule.\label{tab:micro-midpoint}} \begin{center} \begin{tabular}{rccccccc} \# nodes & 1 & 3 & 7 & 15 & 31 & 63 & 127 \\ \hline \# stable systems & 91 & 93 & 93 & 95 & 98 & 99 & 100 \end{tabular} \end{center} \end{table} We also compare the approximation quality between the ROMs obtained from the conventional reduction and the stabilisation method. Figure~\ref{fig:micro-error} illustrates the relative error in the $\mathscr{H}_2$-norm~(\ref{h2-norm}), i.e., \begin{equation} \label{relative-h2-error} E_{\rm REL} = \frac{\left\| H_{\rm FOM} - H_{\rm ROM} \right\|_{\mathscr{H}_2}}{\left\| H_{\rm FOM} \right\|_{\mathscr{H}_2}} \end{equation} including the transfer functions. We calculate approximations of $\mathscr{H}_2$-norms (\ref{h2-norm}) by the trapezoidal rule on a logarithmically spaced grid on the imaginary axis. The ROMs from the adaptive Gauss-Kronrod rule ($K=150$) and the Gauss-Legendre rule ($K=14$) are examined. We observe that the error~(\ref{relative-h2-error}) of the stabilisation method is often much lower than the conventional method for dimensions $r<50$, whereas the errors become close for $r>50$. Furthermore, the adaptive quadrature is more accurate than the Gauss-Legendre rule with the low number of nodes for $r<30$. Our important observation is that the stabilisation approach does not deteriorate the accuracy of the MOR. 
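The $\mathscr{H}_2$-norms in~(\ref{relative-h2-error}) are approximated by the trapezoidal rule on a logarithmically spaced grid, as stated above. A minimal sketch of this evaluation, where the grid limits and the toy system are illustrative assumptions:

```python
import numpy as np

def h2_norm(E, A, B, C, num=2000, wmin=1e-3, wmax=1e5):
    """Trapezoidal rule for the H2-norm on a logarithmic frequency grid;
    negative frequencies are covered by the symmetry of |H(iw)|."""
    w = np.logspace(np.log10(wmin), np.log10(wmax), num)
    vals = np.array([
        np.linalg.norm(C @ np.linalg.solve(1j * om * E - A, B)) ** 2
        for om in w
    ])
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(w))
    return np.sqrt(2.0 * integral / (2.0 * np.pi))

# toy transfer function H(s) = 1/(s+1) with exact H2-norm 1/sqrt(2)
E = np.eye(1)
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
h2 = h2_norm(E, A, B, C)
```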
\begin{figure} \begin{center} \includegraphics[width=10cm]{microthruster_error.eps} \end{center} \caption{Relative differences in $\mathscr{H}_2$-norm for three MOR approaches in microthruster example.} \label{fig:micro-error} \end{figure} \subsection{Random low-pass filter} We investigate the electric circuit of a low-pass filter in Figure~\ref{fig:filter-circuit}. The circuit includes 21 physical parameters: seven capacitances, six inductances and eight conductances. A mathematical modelling generates a system of DAEs~(\ref{linear-system}) for 14~node voltages and 6~branch currents ($n'=20$). The (nilpotency) index of this system is one. Furthermore, the system is asymptotically stable and strictly proper. The system is single-input-single-output (SISO), because a voltage source is supplied and the output is defined as the voltage at a load conductance. \begin{figure} \begin{center} \includegraphics[width=14cm]{low_pass_filter.eps} \end{center} \caption{Circuit diagram of low-pass filter.} \label{fig:filter-circuit} \end{figure} In a stochastic modelling, all physical parameters are replaced by uniformly distributed random variables with a variation of 15\% around their mean values. We use a truncated polynomial chaos expansion to approximate the random processes, see~\cite{xiu-book}. All basis polynomials are included up to total degree three, i.e., $m=2024$ basis functions depending on 21~variables. The stochastic Galerkin method yields a larger system of DAEs~(\ref{linear-system}) with dimension~$n=mn'$, whose solution approximates the unknown coefficient functions. Table~\ref{tab:filter} depicts its characteristic numbers. The system is SIMO with a large number of outputs. This linear dynamical system inherits the properties of the circuit model: index-one, asymptotically stable and strictly proper. Figure~\ref{fig:filter-bode} illustrates the Bode plot of the first output, which represents an approximation for the expected value of the output voltage. 
The magnitude of the transfer function shows that high frequencies are damped out. This test example was also used in~\cite{pulch18}. \begin{table} \caption{Properties of stochastic Galerkin system for random low-pass filter. \label{tab:filter}} \begin{center} \begin{tabular}{cc} \hline dimension~$n$ & 40480 \\ \# outputs & 2024 \\ \# non-zero entries in~$A$ & 116886 \\ \# non-zero entries in~$E$ & 32890 \\ rank($E$) & 26312 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \begin{center} \includegraphics[width=6.5cm]{lpf_bode_1.eps} \hspace{5mm} \includegraphics[width=6.5cm]{lpf_bode_2.eps} \end{center} \caption{Bode plot of first output in stochastic Galerkin system of random low-pass filter.} \label{fig:filter-bode} \end{figure} Furthermore, Figure~\ref{fig:filter-sparse} depicts the sparsity patterns of the system matrices and the $LU$-de\-composi\-tion~(\ref{lu-decomp}) for $\omega=1$. UMFPACK yields an $LU$-factorisation with about $1.8$ million non-zero entries. In contrast, the common $LU$-factorisation with partial pivoting generates about 150 million non-zero elements and thus requires much more computational work. Consequently, we employ decompositions from UMFPACK whenever linear systems of this type appear. \begin{figure} \begin{center} \includegraphics[width=6cm]{lpf_sparsity_1.eps} \hspace{18mm} \includegraphics[width=6cm]{lpf_sparsity_2.eps} \vspace{3mm} \includegraphics[width=6cm]{lpf_sparsity_3.eps} \end{center} \caption{Sparsity patterns of matrices in stochastic Galerkin system of random low-pass filter.} \label{fig:filter-sparse} \end{figure} Now the one-sided Arnoldi method with the single real expansion point $s_0 = 5 \cdot 10^5$ is used in all cases. The ROMs are always computed for dimensions $r=1,2,\ldots,100$. We directly reduce the system of DAEs first. The relative $\mathscr{H}_2$-errors~(\ref{relative-h2-error}) of the MOR are depicted in Figure~\ref{fig:galerkin-errors} (left). This error decays rapidly and becomes tiny. 
However, only 6 out of 100 ROMs inherit the asymptotic stability of the FOM. Hence a stability-preserving method is essential in this example. The system of DAEs is regularised by the technique described in Section~\ref{sec:regularisation}. We arrange the modified matrices~(\ref{matrices-regularised}) with $\alpha = \beta^2$ for different parameters~$\beta$. The total error~(\ref{dae-total-error}) is bounded by the sum of regularisation error and MOR error. Figure~\ref{fig:galerkin-errors} (right) shows the (absolute) $\mathscr{H}_2$-error of the regularisation. We recognise that this error converges exponentially to zero for $\beta$ tending to zero. In addition, numerical computations confirm that the investigated regularised systems are asymptotically stable. On the one hand, we apply the conventional Arnoldi method to the systems of ODEs for several parameters~$\beta$. On the other hand, we perform the stabilisation technique of Section~\ref{sec:preservation} for the ODEs in combination with the Arnoldi algorithm. The adaptive Gauss-Kronrod quadrature yields the associated projection matrix as in Section~\ref{sec:microthruster}. The error tolerances read as $\varepsilon_{\rm abs}=\varepsilon_{\rm rel}=0.1$ again. Table~\ref{tab:galerkin-stab} illustrates the number of stable ROMs and the number of nodes in the quadrature. The conventional approach generates more and more unstable systems for decreasing parameters~$\beta$. In contrast, the stabilised method always yields at least 95\% stable reduced systems. Moreover, the unstable ROMs occur only within dimensions $r < 10$. The number of nodes selected by the adaptive quadrature increases for decaying parameters~$\beta \rightarrow 0$. This behaviour reflects that the integral~(\ref{M-times-V}) does not exist in the limit case~$\beta=0$. However, the ratio $\beta / K$ of the regularisation parameter to the number of nodes still converges nearly exponentially to zero in the observed range.
Thus the rise in~$K$ is much lower than the decay in~$\beta$. It turns out that relaxed tolerances $\varepsilon_{\rm abs},\varepsilon_{\rm rel}$ cause more unstable systems. Furthermore, the Gauss-Legendre rule performs worse in this example. \begin{table} \caption{Number of stable ROMs for different regularisation parameters.} \begin{center} \begin{tabular}{rcccccc} parameter~$\beta$ & $10^{-2}$ & $10^{-3}$ & $10^{-4}$ & $10^{-5}$ & $10^{-6}$ & $10^{-7}$ \\ \hline \# stable, conventional & 100 & 77 & 56 & 57 & 51 & 43 \\ \# stable, stabilised & 100 & 99 & 95 & 95 & 95 & 95 \\ \# nodes in quadrature & 180 & 330 & 480 & 600 & 810 & 900 \end{tabular} \end{center} \label{tab:galerkin-stab} \end{table} \begin{figure} \begin{center} \includegraphics[width=6.5cm]{lpf_error_mor_dae.eps} \hspace{5mm} \includegraphics[width=6.5cm]{lpf_regularisation_error.eps} \end{center} \caption{Error of MOR for DAE system in relative $\mathscr{H}_2$-norm (left) and error of the regularisation in $\mathscr{H}_2$-norm (right) for stochastic Galerkin system.} \label{fig:galerkin-errors} \end{figure} Finally, we examine the total error~(\ref{dae-total-error}) of the MOR, where the system of DAEs represents the FOM. Figure~\ref{fig:galerkin-mor-ode-error} shows the relative errors with respect to the $\mathscr{H}_2$-norm in the two cases $\beta = 10^{-4},10^{-6}$. The error decreases rapidly for low reduced dimensions. Thereafter the total error stagnates, because the error of the regularisation dominates. We observe that the total error of the stability-preserving approach is always smaller than or equal to that of the conventional technique. Moreover, the stabilised MOR method yields much smaller errors in the case of low dimensions.
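The regularisation step itself is a cheap matrix operation. For completeness, a sketch of the modification~(\ref{matrices-regularised}) with $\alpha = \beta^2$, applied to an artificial index-one pencil of our own choosing:

```python
import numpy as np

def regularise(E, A, beta):
    """E_hat = E - alpha*A and A_hat = A + beta*E with alpha = beta**2."""
    alpha = beta ** 2
    return E - alpha * A, A + beta * E

# artificial pencil: singular E, regular and asymptotically stable (E, A)
E = np.diag([1.0, 0.0])
A = np.diag([-1.0, -1.0])
E_hat, A_hat = regularise(E, A, beta=1e-3)
# E_hat is non-singular, so the regularised system is a system of ODEs
lam = np.linalg.eigvals(np.linalg.solve(E_hat, A_hat))
```

In line with Theorem~\ref{thm:regularisation}, the eigenvalues of the regularised pencil lie in the open left half-plane for sufficiently small~$\beta$.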
\begin{figure} \begin{center} \includegraphics[width=6.5cm]{lpf_mor_error_beta_4.eps} \hspace{5mm} \includegraphics[width=6.5cm]{lpf_mor_error_beta_6.eps} \end{center} \caption{Error in MOR of regularised systems from stochastic Galerkin method for parameters $\beta = 10^{-4}$ (left) and $\beta = 10^{-6}$ (right).} \label{fig:galerkin-mor-ode-error} \end{figure} \clearpage \section{Conclusions} In Galerkin-type projection-based MOR, stability preservation can be achieved by a transformation of a projection matrix. The transformation is associated with a high-dimensional Lyapunov inequality, which is satisfied by solving a specific Lyapunov equation. We designed a numerical method to compute the alternative projection matrix, where quadrature methods determine approximations of integrals in the frequency domain. In contrast to other numerical solvers of high-dimensional Lyapunov equations, our frequency domain integral ensures that the inherent approximate solution is a non-singular matrix. Results of numerical computations demonstrate that our approach is efficient in the case of ODEs. We also generalised the numerical technique to DAEs by a regularisation. Again, quadrature rules applied to frequency domain integrals yield the projection matrices for the regularised ODEs. However, the quadrature methods require more and more nodes for decreasing regularisation parameters. This property restricts the efficiency of our approach to some extent in the case of DAEs. \clearpage
#ifndef DCDEBUG_H #define DCDEBUG_H #include "osconfig.h" /* make sure OS specific configuration is included first */ #include "ofstream.h" #include "ofglobal.h" extern OFGlobal<int> DcmDebugLevel; /* default 0 */ #ifdef DEBUG void DCM_dcmdata_debug_print(const char* text, ... ); // Set the debug level #define SetDebugLevel(level) DcmDebugLevel.set(level); // debug prints a debug message in param if lev <= DcmDebugLevel. param has the // format of the printf parameters (with round brackets)! #define DCM_dcmdataDebug(lev, param) \ { \ if ((lev) <= DcmDebugLevel.get()) \ { \ ofConsole.lockCerr() << __FILE__ << ", LINE " << __LINE__ << ":"; \ DCM_dcmdata_debug_print param ; \ ofConsole.unlockCerr(); \ } \ } // Cdebug does the same as debug but only if a condition cond is OFTrue #define DCM_dcmdataCDebug(lev, cond, param) \ { \ if ((lev) <= DcmDebugLevel.get() && (cond)) \ { \ ofConsole.lockCerr() << __FILE__ << ", LINE " << __LINE__ << ":"; \ DCM_dcmdata_debug_print param ; \ ofConsole.unlockCerr(); \ } \ } #else // DEBUG #define SetDebugLevel(param) #define DCM_dcmdataDebug(lev, param) #define DCM_dcmdataCDebug(lev, cond, param) #endif // DEBUG #endif // DCDEBUG_H /* * CVS/RCS Log: * $Log: dcdebug.h,v $ * Revision 1.1 2006/03/01 20:15:19 lpysher * Added dcmtkt ocvs not in xcode and fixed bug with multiple monitors * * Revision 1.13 2005/12/08 16:28:04 meichel * Changed include path schema for all DCMTK header files * * Revision 1.12 2005/11/28 15:53:16 meichel * Renamed macros in dcdebug.h * * Revision 1.11 2004/01/16 14:06:32 joergr * Removed acknowledgements with e-mail addresses from CVS log. * * Revision 1.10 2002/04/16 13:41:43 joergr * Added configurable support for C++ ANSI standard includes (e.g. streams). * * Revision 1.9 2001/06/01 15:48:35 meichel * Updated copyright header * * Revision 1.8 2000/04/14 15:45:30 meichel * Dcmdata debug facility now uses ofConsole for output. * * Revision 1.7 2000/03/08 16:26:12 meichel * Updated copyright header. 
* * Revision 1.6 2000/03/03 14:05:22 meichel * Implemented library support for redirecting error messages into memory * instead of printing them to stdout/stderr for GUI applications. * * Revision 1.5 1999/03/31 09:24:33 meichel * Updated copyright header in module dcmdata * * */
Nacho Ruiz Capillas (Madrid, 1966) is a Spanish film editor, winner of a Goya Award for Best Editing and of several Medals of the Cercle d'Escriptors Cinematogràfics for Best Editing. He began editing short films produced by Elías Querejeta in the 1980s. His first notable feature film was El aliento del diablo by Paco Lucio (1993), and in 1996 he won his first Medal of the Cercle d'Escriptors Cinematogràfics for his work on El último viaje de Robert Rylands by Gracia Querejeta. He has worked with prominent directors such as Alejandro Amenábar, Daniel Sánchez Arévalo, Fernando León de Aranoa and José Luis Cuerda. In 2001 he again won the Medal of the Cercle d'Escriptors Cinematogràfics for Intacto, in 2004 for Héctor, and in 2002 he received the Goya Award for Best Editing for The Others. In 2018 he won the prize for best editor at the Havana International Festival of New Latin American Cinema for his work on La noche de 12 años. Filmography El aliento del diablo (1993) El último viaje de Robert Rylands (1996) Lluvia en los zapatos (1998) Barrio (1998) La lengua de las mariposas (1999) Nadie conoce a nadie (1999) El bola (2000) The Others (2001) Intacto (2001) Los lunes al sol (2002) Noviembre (2003) Héctor (2004) El calentito (2005) Princesas (2005) Azuloscurocasinegro (2006) Siete mesas de billar francés (2007) Los girasoles ciegos (2008) Agora (2009) Gordos (2009) Amador (2010) Lobos de Arga (2011) Katmandú, un mirall al cel (2011) La gran familia española (2013) El olivo (2016) No culpes al karma de lo que te pasa por gilipollas (2016) Loving Pablo (2017) La noche de 12 años (2018) References Spanish film editors Goya Award for Best Editing winners Artists from Madrid
I have inherited a Pro1000 that was in parts. I have it all back together but there is a small part missing. The black plastic lever that gets actuated by a case to allow the primer thru sits on a small steel sleeve that allows the arm to move freely even when the screw is tightened. I need to know the dimensions of this steel sleeve so I can make another one. If someone can measure one for me (in mm) for all dimensions and PM me, that would be great. Look at the parts list for the Pro1000 with pictures. Lee main page->products->pro1000->serv parts for your cal. The link is for the 9mm press. The plastic lever that you are talking about is named there "case sensor TR2548", and the steel sleeve, which you need dimensions for, is "sensor bushing TR2550", right? I will try to check with my friend that has the press, but this may take a while. Maybe somebody else can post the dimensions way faster. BTW, you can order the part online from the above Lee page. The total is (in US$) 1$ for the part + 4$ for processing + 35% for international shipment. If you haven't gone to this website, you are missing out on some great instructional videos. I took apart my carrier/shellplate setup yesterday and without the video I don't think I could have done it right. Talk about dirty --- well it runs just like new now. Yeah, I can order it from Lee. But I have a Lathe and a Mill and figure I will just make one. But I need the dimensions. I have finally got a response from the friend. The sensor bushing shape is two cylinders: one (wide) on top of another one with a hole on the axis. Nothing extra. Sizes below are in millimeters. All other dimensions can be calculated from the above. The press is brand new, so the bushing dimensions should be very close to factory specs. I will be up in the shed playing on the lathe tonight. Thank your friend for me too?
using System; using Leak.Client.Swarm; using Leak.Options; using Leak.Reporting; using Pargos; namespace Leak { public class CommandLine { [Parameter, At(0)] public string Command { get; set; } [Option("--trackers")] public string[] Trackers { get; set; } [Option("--hash")] public string Hash { get; set; } [Option("--destination")] public string Destination { get; set; } [Option("--listener")] public string Listener { get; set; } [Option("--port")] public string Port { get; set; } [Option("--connector")] public string Connector { get; set; } [Option("--accept")] public string[] Accept { get; set; } [Option("--strategy")] public string Strategy { get; set; } [Option("--metadata")] public string Metadata { get; set; } [Option("--exchange")] public string Exchange { get; set; } [Option("--logging")] public string Logging { get; set; } public bool IsValid() { if (Logging != null) { switch (Logging) { case "compact": case "verbose": break; default: return false; } } switch (Command) { case "download": return DownloadOption.IsValid(this); case "seed": return SeedOption.IsValid(this); } return false; } public Reporter ToReporter() { switch (Logging) { case null: case "compact": return new ReporterCompact(Command); default: return new ReporterVerbose(Command); } } public SwarmSettings ToSettings() { SwarmSettings settings = new SwarmSettings(); if (Connector != null) { settings.Connector = Connector == "on"; } if (Listener != null) { settings.Listener = Listener == "on"; } if (settings.Listener && Port != null) { settings.ListenerPort = Int32.Parse(Port); } if (Accept != null) { settings.Filter = new GeonFilter(Accept); } if (Strategy != null) { settings.Strategy = Strategy; } if (Metadata != null) { settings.Metadata = Metadata == "on"; } if (Exchange != null) { settings.Exchange = Exchange == "on"; } return settings; } } }
TOHU AND VOHU

Human beings developed an alternative language based on symbols. Animals don't use symbols. Symbols are agreed signs with meaning, in the broad understanding; this article is full of symbols, because each letter has a symbolic meaning that can change within a word and again in a sentence. The dot at the end of this sentence has an agreed meaning as well, so it is a different kind of symbol.

In this fable the human being is a dot or comma in a sentence formed by the Solar system. This sentence contains letters and words formed by moons, planets, comets, asteroids, nebulae, constellations and other forms of celestial bodies. This sentence – the Solar system – is but one of another 100 billion sentences that form the book called the Milky Way, which is but a chapter in a larger encyclopedia named the Virgo cluster. With this line of thinking, the universe is the whole library.

A symbol, in its higher form, encapsulates a hidden connection to another level of understanding reality, or an immediate device to connect to a higher essence; it is like a key that can open locked drawers of information.

There are two enigmatic words in the first chapter of the Bible that describe the state of the universe before the first day of creation; those two words appear only once, one after the other.

TOHU VOHU

Tohu and Vohu are the elementary state of the universe surrounding us; they are the first symbols of creation. There are scientists who developed the Tohu and Vohu theory, which speaks about the basic elementary particles of creation and is related to string theory, which tries to unify all known forces at play (the weak force, the strong force, gravity, electromagnetism). In this article I am not interested in the scientific theories but in the symbolic aspect of those enigmatic words. There are no wasted words in the Bible. There is a meaning for those two words that appear together only once and disappear after the first day of creation.

In Hebrew the meaning of Tohu and Vohu is a chaotic state that exists in an uncontrolled world and is the other end of order. There was one creator that issued two forces into the universe. The convergence of those two led to the first creation, or apparent order: the separation between Earth and Sky. In the symbolic world it is the one that, for its revelation, needs to be divided into two; the unity of the two leads to creation or birth, which is symbolized by three; repetition of this creation is four; the higher form of creation is the human being, symbolized by five.

Tohu and Vohu are a non-state, a non-defined area and time, the primordial contemplation before action. Human beings are born out of the merge between a sperm and an ovum, the two that give birth to three. In each life there are Tohu and Vohu moments. There are those for whom Tohu and Vohu are an integral part of their life. The first chapter of the Bible tells us that chaos is a natural process in our lives, that Tohu and Vohu can occur frequently in daily life, and that we shouldn't be afraid of it but embrace it as a natural process of creation and development, with the notion that we are but a comma in an infinite library.

Ted Barr, October 2014, Ein Karem, Jerusalem
On New Year's Eve 2004 we were with friends and family discussing what we wanted to achieve in the next 12 months. We wanted to get civil partnershipped, but we didn't hear until 21 Feb 2005 that the law would definitely change. We had already been together 15 years and didn't know whether to have something small or large.

A priority guest was my sister in New Zealand. Then we suddenly heard that she could make it at Christmas, so we had to have a big party! However, it wasn't easy finding a venue. There was a cancellation at The Inn on the Lake in Cumbria for 29 December, so we took it.

I was so excited in the build-up to the day, I couldn't believe it was real or that we'd let ourselves in for so much work. There was a lot of planning, and neither of us has family nearby who could help.

Many people I've spoken to say they enjoyed our day more than 'straight' weddings they'd been to. Maybe, as it's not so formulaic, what is said is really meant and felt. I don't feel any different now, but it was brilliant that we were able to do it. Fifteen years ago I fell in love with Jo; since then I have wanted to shout it from the rooftops, and now I can. That's what our wedding day was all about.

It's your day, make it the best you can, but don't take responsibility for everyone's travel/sleeping/eating/breathing arrangements - it's too much! Don't watch a movie or look at photos until you have your own memories stored away in your head - or they will be replaced by snaps.
$(document).ready(function() {
    // initialize swiper when document ready
    var mySwiper = new Swiper('.swiper-container', {
        // Optional parameters
        direction: 'horizontal',
        slidesPerView: 4,
        freeMode: true,
        loop: false,
        speed: 300,
        spaceBetween: 20,
        nextButton: '.swiper-button-next',
        prevButton: '.swiper-button-prev',
    })

    // Slider
    if (jQuery().flexslider) {
        $('.flexslider').flexslider({
            smoothHeight: true,
            controlNav: false,
            directionNav: true,
            prevText: "←",
            nextText: "→",
            selector: ".slides > .slide"
        });
    };

    // Smooth scrolling - css-tricks.com
    function filterPath(string) {
        return string.replace(/^\//, '')
            .replace(/(index|default).[a-zA-Z]{3,4}$/, '')
            .replace(/\/$/, '');
    }
    var locationPath = filterPath(location.pathname);
    var scrollElem = scrollableElement('html', 'body');
    $('a[href*="#nav"]').each(function() {
        var thisPath = filterPath(this.pathname) || locationPath;
        if (locationPath == thisPath && (location.hostname == this.hostname || !this.hostname) && this.hash.replace(/#/, '')) {
            var $target = $(this.hash),
                target = this.hash;
            if (target) {
                var targetOffset = $target.offset().top;
                $(this).click(function(event) {
                    event.preventDefault();
                    $(scrollElem).animate({ scrollTop: targetOffset }, 'slow', function() {
                        location.hash = target;
                    });
                });
            }
        }
    });
    function scrollableElement(els) {
        for (var i = 0, argLength = arguments.length; i < argLength; i++) {
            var el = arguments[i],
                $scrollElement = $(el);
            if ($scrollElement.scrollTop() > 0) {
                return el;
            } else {
                $scrollElement.scrollTop(1);
                var isScrollable = $scrollElement.scrollTop() > 0;
                $scrollElement.scrollTop(0);
                if (isScrollable) {
                    return el;
                }
            }
        }
        return [];
    }

    // TOGGLES
    $('.toggle-view li').click(function() {
        var text = $(this).children('.toggle');
        if (text.is(':hidden')) {
            text.slideDown('fast');
            $(this).children('.toggle-title').addClass('tactive');
        } else {
            text.slideUp('fast');
            $(this).children('.toggle-title').removeClass('tactive');
        }
    });

    // TABS
    var tabContents = $(".tab_content").hide(),
        tabs = $("ul.tabs li");
    tabs.first().addClass("active").show();
    tabContents.first().show();
    tabs.click(function() {
        var $this = $(this),
            activeTab = $this.find('a').attr('href');
        if (!$this.hasClass('active')) {
            $this.addClass('active').siblings().removeClass('active');
            tabContents.hide().filter(activeTab).fadeIn();
        }
        return false;
    });

    // OPACITY
    $(".zoom").css({ "opacity": 0 });
    $(".zoom").hover(
        function() {
            $(this).stop().animate({ "opacity": 0.9 }, 'slow');
            $(this).siblings('img').stop().animate({ "opacity": 0.7 }, 'fast');
        },
        function() {
            $(this).stop().animate({ "opacity": 0 }, 'fast');
            $(this).siblings('img').stop().animate({ "opacity": 1 }, 'fast');
        });

    // PORTFOLIO sorting
    // NAV
    $('.works-page aside menu a').click(function() {
        $(this).addClass("buttonactive").siblings().removeClass("buttonactive")
    });

    // SELECTION
    $("#work_1").click(function() {
        $(".works figure").not(".work_1").stop().fadeTo("normal", 0.1);
        $(".work_1").stop().fadeTo("normal", 1);
    });
    $("#work_2").click(function() {
        $(".works figure").not(".work_2").stop().fadeTo("normal", 0.1);
        $(".work_2").stop().fadeTo("normal", 1);
    });
    $("#work_3").click(function() {
        $(".works figure").not(".work_3").stop().fadeTo("normal", 0.1);
        $(".work_3").stop().fadeTo("normal", 1);
    });
    $("#work_all").click(function() {
        $(".works figure").stop().fadeTo("normal", 1);
    });

    // CONTACT form validation
    if (jQuery().validate) {
        $("#contact_form").validate();
    };

    // END
});
A platform that brings together plug & play hardware and drag & drop software to allow everyone to create and invent!

623 backers pledged $105,714 to help bring this project to life.

We decided to do something a little different in this update. Check out this video and get the latest info on Cubit! We have a lot of news and pictures in today's update. Let's get started.

Hello backers! We have exciting updates for you today. Let's get to it!

Happy New Year to all of our wonderful backers! We have been working hard over the holidays to deliver the best Cubit we possibly can. Let's get to it!

Cubit is getting ever closer to release. We can see the light at the end of the tunnel and we are anxious to get Cubit into your hands.

600+ Backers, 100K Stretch Goal unlocked, and 3 hours to go!

500 backers strong and Cubit featured on The New Screen Savers!

A Big Thank You to our Early Backers!
Source: https://www.greencarcongress.com/2011/11/leicester-20111104.html

## Study finds GHG emissions associated with palm oil production have been significantly underestimated; implications for carbon intensity of biofuels as well as biofuel policies in Europe

##### 04 November 2011

A new study on greenhouse gas (GHG) emissions associated with the conversion and degradation of peatland in palm oil plantations in Southeast Asia has determined that past studies have generally significantly underestimated emissions associated with palm oil grown on peatland. This has resulted in underestimation of the indirect land use change emissions from many biofuels derived from palm oil, the study concluded.

The study led by a team from the University of Leicester (UK) suggested that 86 Mg CO2-eq ha⁻¹ yr⁻¹ (over 50 years) or 100 Mg CO2-eq ha⁻¹ yr⁻¹ (over 25 years) represent the best available estimates of typical emissions from peat decomposition in palm plantations.

> A number of recent publications have addressed the GHG emissions associated with land use conversion of tropical peat swamp forest to OP [oil palm] plantation. All conclude that while carbon losses from biomass replacement and land clearance are considerable, it is the large and sustained CO2 emissions from drained peat that contribute most to overall emissions and biofuel carbon debts. The values used to estimate peat CO2 emissions have a wide range (19 to 115 Mg CO2-eq ha⁻¹ yr⁻¹) and are derived from a variety of sources, including IPCC defaults and a limited number of scientific studies. Dependency on a limited number of flux studies, combined with inappropriate upscaling, has resulted in systematic underestimation of GHG emissions from OP plantations on tropical peat.
>
> ...In terms of an uncertainty range, we suggest that likely peat CO2 emissions should be represented by the minimum and maximum values of 54 to 115 Mg CO2-eq ha⁻¹ yr⁻¹ for the typical OP drainage depth range of 0.6 to 0.85 m. It should be noted that none of these values explicitly consider local factors promoting GHG emission other than water depth (e.g., fertilization, land use history) or regional geographical variations. The adoption of the best estimate and full uncertainty range suggested here will, however, lead to reduced uncertainty in future assessments conducted at the regional scale.
>
> The majority of previous studies aiming to assess GHG emissions from OP production systems on tropical peatlands have at best based their analyses on values below or towards the lower end of this range, and in all likelihood have significantly underestimated CO2 emissions from drained peats. In terms of biofuel production, it is likely that the true magnitude of the biofuel carbon debt for OP feedstocks produced on tropical peatlands is more substantial than has been previously assumed.
>
> —Page et al.

Tropical peatland is one of the Earth's most spatially efficient carbon sinks and largest long-term repositories of terrestrial organic carbon. Development of tropical peatland for agriculture and plantations requires radical changes in the vegetation cover. These changes reduce or remove the carbon sink capacity of the peatland system by:

- lowering of the peat water table, which ensures continuous aerobic decomposition of organic matter (plant litter and peat), resulting in high peat surface CO2 emissions; and

- greatly reducing or stopping carbon inputs to the peat from biomass.

The study was conducted for the International Council on Clean Transportation (ICCT), which wished to assess the greenhouse gas emissions associated with biodiesel production. Biodiesel mandates can increase palm oil demand directly (the European Biodiesel Board recently reported big increases in biodiesel imported from Indonesia) and also indirectly, because palm oil will replace oil from rapeseed or soy in food if they are instead used to make biodiesel.

The University of Leicester researchers carried out the first comprehensive literature review of the scale of greenhouse gas emissions from oil palm plantations on tropical peatland in Southeast Asia. In contrast to previous work, this study also provides an assessment of the scientific methods used to derive emissions estimates.

The team discovered that many previous studies were based on limited data without appropriate recognition of uncertainties and that these studies have been used to formulate current biofuel policies.

The findings have been published as an International White Paper from the ICCT: Review Of Peat Surface Greenhouse Gas Emissions From Oil Palm Plantations In Southeast Asia. This ICCT paper was produced as a consultancy report; a scientific version of the research will be submitted for publication in the peer-reviewed academic literature.

> Although the climate change impacts of palm oil production on tropical peatland are becoming more widely recognized, this research shows that estimates of emissions have been drawn from a very limited number of scientific studies, most of which have underestimated the actual scale of emissions from oil palm. These results show that biofuels causing any significant expansion of palm on tropical peat will actually increase emissions relative to petroleum fuels. When produced in this way, biofuels do not represent a sustainable fuel source.
>
> —Ross Morrison, of the University of Leicester Department of Geography

Growth in palm oil production has been a key component of meeting growing global demand for biodiesel over recent decades. This growth has been accompanied by mounting concern over the impact of the oil palm business on tropical forests and carbon dense peat swamp forests in particular. Tropical peatland is one of Earth's largest and most efficient carbon sinks. Development of tropical peatland for agriculture and plantations removes the carbon sink capacity of the peatland system with large carbon losses arising particularly from enhanced peat degradation and the loss of any future carbon sequestration by the native peat swamp forest vegetation.

Although there have been a number of assessments on greenhouse gas emissions from palm oil production systems, estimates of greenhouse gas emissions from land use have all been based on the results of a limited number of scientific studies. A general consensus has emerged that emissions from peat degradation have not yet been adequately accounted for.

The results of the Leicester study are important because an increase in the greenhouse gas emissions associated with biodiesel from palm oil, even if expansion on peat only occurs indirectly, could negate any savings relative to the use of diesel derived from fossil fuel.

> The likely underestimation of emissions from peat in previous assessments has implications for the results of the modeling of the land use impacts of biofuel policies, and hence potentially for the policies themselves. The underestimation or non-inclusion of peat emissions from oil palm expansion in most previous modeling of the iLUC [indirect land use change] impacts of biofuels was noted by JRC (2010). Based on this review, the value of 57 Mg CO2 ha⁻¹ yr⁻¹ proposed by JRC (2010) is also an underestimate (although we note that these authors also propose an upwards revised value of 112 Mg CO2 ha⁻¹ yr⁻¹, which may be an overestimate). This underestimation of peat GHG emissions in the iLUC modeling literature may have contributed significantly to an underaccounting of the indirect land use change GHG emissions of biodiesel, and in particular of biodiesel made from palm oil.
>
> For instance, Al-Riffai et al. (2010) used two emission values—5 and 40 Mg CO2-eq ha⁻¹ yr⁻¹, based on IPCC (2006), and Wetlands International (2009a), averaged to 22.5 Mg CO2-eq ha⁻¹ yr⁻¹—to find that peat emissions contributed around 4 g CO2-eq MJ⁻¹ to the carbon intensity of palm biodiesel, and perhaps under 1 g CO2-eq MJ⁻¹ to the carbon intensity of other biodiesel.
>
> With the central value suggested here, those values would have been more like 19 and 5 g CO2-eq MJ⁻¹, respectively. JRC (2010) noted that the estimate of 18% of OP expansion occurring at the expense of peat had also been set too low by Al-Riffai et al. (2010). In that case, correcting up to 33% as suggested by JRC (2010) would create a compound effect and further increase the reported peat contribution to the biodiesel carbon intensities to 35 and 9 g CO2-eq MJ⁻¹ for palm oil biodiesel and other biodiesel, an intensity increase of 31 and 8 g CO2-eq MJ⁻¹, respectively. To place this in context, an increase in carbon intensity of 31 g CO2-eq MJ⁻¹ would subtract 37% from the reportable carbon savings of palm oil biodiesel used in the European Union.
>
> —Page et al.

If these improved estimates are applied to recent International Food Policy Research Institute (IFPRI) modeling of the European biofuel market (LaBorde, 2011), they imply that on average biofuels in Europe will be as carbon intensive as gasoline, with all biodiesel from food crops worse than fossil diesel and the biggest impact being a 60% increase in the land use emissions resulting from palm oil biodiesel. Bioethanol or biodiesel from waste cooking oil, on the other hand, could still offer carbon savings.

This outcome has important implications for European Union policies on climate and renewable energy sources.

> We are very excited by the outcomes of our research—our study has already been accepted and used by several scientists, NGOs, economists and policy advisors in Europe and the USA to better represent the scale of greenhouse gas emissions from palm oil biodiesel production and consumption.
>
> The findings of this research will be used by organisations such as the US Environmental Protection Agency, European Commission and California Air Resources Board to more fully account for greenhouse gas emissions and their uncertainties from biofuel produced from palm oil. This is essential in identifying the least environmentally damaging biofuel production pathways, and the formulation of national and international biofuel and transportation policies.
>
> —Dr. Sue Page, Reader in Physical Geography at the University of Leicester

The research was commissioned by Dr. Chris Malins of the ICCT. Other contributors to the work were Professor Jack Rieley of the University of Nottingham and chair of the scientific advisory board of the International Peat Society (IPS), Dr. Aljosja Hooijer of Deltares in the Netherlands, and Dr. Jyrki Jauhiainen of the University of Helsinki.

> Peat degradation under oil palm is a major source of emissions from biodiesel production. Recognizing that emissions are larger than previously thought will help regulators such as the US Environmental Protection Agency (EPA), European Commission (EC) and California Air Resources Board (CARB) identify which biofuel pathways are likely to lead to sustainable greenhouse gas emissions reductions.
>
> —Dr. Chris Malins of the ICCT

Resources

A real eye opener!!!

What would a similar extensive study find out for Tar Sands crude?

One curious fact can be easily discovered; it releases less carbon dioxide to the air if tar sands are used to fuel automobiles and land used to grow corn is used to grow large permanent fast growing trees instead. There are areas of denuded forests in the US and other countries where trees can be started and they will absorb CO2 in very large quantities.

The WATERBOXX from Groasis can establish new trees in seemingly dry territories without permanent irrigation, and there are many places where trees once existed and can exist again without the continuing help of man.

Canada invented the CANDU reactor and it can supply all of the zero carbon heat needed to extract bitumen from tar sands.

Perhaps one of you could examine the published figures of the cost of producing electricity with CANDU reactors and then examine the cost of producing hydrogen with electrolysis, also considering that the reactor can provide heat for cheaper high temperature electrolysis. This hydrogen may well provide energy at lower cost than is provided at the world market price for oil. The hydrogen can be added to the bitumen for the production of automotive fuels from the bitumen for even lower CO2 cost fuels.

Perhaps China will build many pebble bed reactors that develop high enough temperatures to produce hydrogen from water directly with heat. Or modifications of the lead cooled Rubbia reactor, energy amplifier, can produce high enough temperatures for that purpose without complicated fuel pellets. ..HG..

Dakota Gasification in North Dakota in the US sells much of the CO2 that it produces to be pumped into oil fields in Canada. It could start producing methanol instead of methane in a few months from coal and pretend that half of the methanol produced was produced with zero carbon release from the coal source. ..HG..

HG...could the E-Cat reactor eventually do a better job to supply clean heat where required? Secondly, heat can always be transformed into electricity. Home owners could eventually have their own small E-Cat, at a price?

Nothing wrong with re-planting insect resistant trees to replace the trillions we cut down. However, it may be advantageous to recycle forests every so often. Young growing forests absorb more CO2 and are less of a fire hazard. Nanocrystalline cellulose could be extracted and used to produce lighter reinforced plastics for future electrified vehicles etc.

Reel, great to see you have gotten the light on the massive new change coming. I expect corporate resistance to various LENR power systems will retard progress for a while. But with the US Navy on board and thousands of R&D labs around the world at work on heater/generators and CHP systems - it is only a matter of time.

The GHG issue already starts to fade as we acknowledge a new global energy source that produces ZERO CO2. Once again the Earth's major source of GHG "pollution" will be its natural eco-systems, oceans, volcanoes, forest fires, and vents.

Of course this puts an end to the AGW alarm and CREATES AN OPPORTUNITY to redirect all that eco-energy to reclaiming rivers, wetlands, forests, and mountain regions scarred by hydro, oil and gas exploitation and grid transmission equipment.

It's going to be a beautiful planet again!

Reel...assuming that the production of unlimited clean energy becomes possible, what will the planet look like with over 15 billion people by the end of the current century, or 25 billion or so by 2200? Will we design clean incinerators or find a better way to recycle, Hollywood style?

HarveyD, you need to work on your negativity. Population stabilization is the entire reason for propping up China! Same for India. Two nations with huge birthrates that will begin to decline as prosperity increases. Why? Historical data shows this, and the economics of the middle class demand fewer children. Who can PAY for 'em??

2200?? Who the hell wants to hang around Earth that long? I'd rather cruise the Milky Way, maybe take a trip to Arcturus or Beedlejus (sorry spelling, too lazy to look up). Anyway, have a nice weekend Harvey.

Harvey, all it takes is water at 250 C or so to break down most organic garbage (lignocellulose) into sugars. Raise the temperature to 550 C and those sugars become light gases and C1-C3 hydrocarbons.

Both iron and nickel combine with carbon monoxide to make carbonyls at fairly low temperatures. Raise the temperature to 200 C or so and they break down to CO and metals again.

All you need to recycle this stuff is cheap heat.

If it ever works as claimed, Rossi's device could supply the low cost heat required. Of course, sugars can be transformed into various chemicals and fuels? Interesting future possibilities.

Thanks for that bit of info EP. We appear to have our cheap heat. Now to find a more efficient conversion to V than thermocouples. Possibly Hagelstein's Micron-gap Thermal Photo-Voltaics approach will yield 50% Carnot.

Or, machine the lattice to capture quantum flux in the reaction.

The comments to this entry are closed.
package com.dianping.cat.broker.api;

import org.unidal.web.mvc.AbstractModule;
import org.unidal.web.mvc.annotation.ModuleMeta;
import org.unidal.web.mvc.annotation.ModulePagesMeta;

@ModuleMeta(name = "api", defaultInboundAction = "single", defaultTransition = "default", defaultErrorAction = "default")
@ModulePagesMeta({
    com.dianping.cat.broker.api.page.single.Handler.class,

    com.dianping.cat.broker.api.page.batch.Handler.class,

    com.dianping.cat.broker.api.page.speed.Handler.class,

    com.dianping.cat.broker.api.page.js.Handler.class,

    com.dianping.cat.broker.api.page.cdn.Handler.class,

    com.dianping.cat.broker.api.page.save.Handler.class
})
public class ApiModule extends AbstractModule {
}
/*
 * SubsetTests.cpp
 *
 *  Created on: 28 Apr 2013
 *      Author: Jason
 */

#include <UnitTest11/Core.hpp>
#include <UnitTest11/Is/True.hpp>
#include <UnitTest11/Is/False.hpp>
#include <UnitTest11/Is/Empty.hpp>
#include <UnitTest11/Is/EqualTo.hpp>
#include <UnitTest11/Is/Iterable/Containing/Subset.hpp>

class IsIterableContainingSubsetTests : public ut11::TestFixture
{
public:
    virtual void Run()
    {
        Then("a vector containing the passed subset is true", []()
        {
            AssertThat(ut11::Is::Iterable::Containing::Subset(std::vector<int>({4,2}))(std::vector<int>({1,2,3,4,5})), ut11::Is::True);
        });

        Then("a vector containing an equivalent set of the passed set is true", []()
        {
            AssertThat(ut11::Is::Iterable::Containing::Subset(std::vector<int>({1,2,3,4,5}))(std::vector<int>({1,2,3,4,5})), ut11::Is::True);
        });

        Then("a vector containing a subset of the passed set is false", []()
        {
            AssertThat(ut11::Is::Iterable::Containing::Subset(std::vector<int>({1,2,3,4,5,6}))(std::vector<int>({1,2,3,4,5})), ut11::Is::False);
        });

        Then("a vector not containing the passed subset is false", []()
        {
            AssertThat(ut11::Is::Iterable::Containing::Subset(std::vector<int>({4,8}))(std::vector<int>({1,2,3,4,5})), ut11::Is::False);
        });

        Then("Is::Iterable::Containing::Subset() is an operand", []()
        {
            AssertThat(ut11::detail::IsOperand< decltype(ut11::Is::Iterable::Containing::Subset(5)) >::value, ut11::Is::True);
        });

        Then("Is::Iterable::Containing::Subset() has an error message", []()
        {
            AssertThat(ut11::Is::Iterable::Containing::Subset(5).GetErrorMessage(std::vector<int>()), ut11::Is::Not::EqualTo(""));
        });
    }
};
DeclareFixture(IsIterableContainingSubsetTests)(ut11::Category("unit"));

class IsIterableNotContainingSubsetTests : public ut11::TestFixture
{
public:
    virtual void Run()
    {
        Then("a vector containing the passed int is not true", []()
        {
            AssertThat(ut11::Is::Iterable::Not::Containing::Subset(std::vector<int>({4,2}))(std::vector<int>({1,2,3,4,5})), ut11::Is::Not::True);
        });

        Then("a vector containing an equivalent set of the passed set is not true", []()
        {
            AssertThat(ut11::Is::Iterable::Not::Containing::Subset(std::vector<int>({1,2,3,4,5}))(std::vector<int>({1,2,3,4,5})), ut11::Is::Not::True);
        });

        Then("a vector not containing the passed int is not false", []()
        {
            AssertThat(ut11::Is::Iterable::Not::Containing::Subset(std::vector<int>({4,8}))(std::vector<int>({1,2,3,4,5})), ut11::Is::Not::False);
        });

        Then("a vector containing a subset of the passed set is not false", []()
        {
            AssertThat(ut11::Is::Iterable::Not::Containing::Subset(std::vector<int>({1,2,3,4,5,6}))(std::vector<int>({1,2,3,4,5})), ut11::Is::Not::False);
        });

        Then("Is::Iterable::Not::Containing::Subset() is an operand", []()
        {
            AssertThat(ut11::detail::IsOperand< decltype(ut11::Is::Iterable::Not::Containing::Subset(5)) >::value, ut11::Is::True);
        });

        Then("Is::Iterable::Not::Containing::Subset() has an error message", []()
        {
            AssertThat(ut11::Is::Iterable::Not::Containing::Subset(5).GetErrorMessage(std::vector<int>()), ut11::Is::Not::EqualTo(""));
        });
    }
};
DeclareFixture(IsIterableNotContainingSubsetTests)(ut11::Category("unit"));
Cash for Cars - Madison Salvage - GET PAID for JUNK! At Madison Salvage & Recycling, we make it easy to get Cash for Cars in Bloomsburg! Feel free to bring your vehicle directly to the scale, or if the vehicle is not running, give us a call at (570) 458-5109 to schedule a FREE pickup. We pay top prices for your junk! That's it! Time to get paid!
Q: How to create custom pagination in Angular 2+?

<nav aria-label="Page navigation example">
  <ul class="pagination">
    <li class="page-item"><a class="page-link" href="#">Previous</a></li>
    <li class="page-item"><a class="page-link" href="#">1</a></li>
    <li class="page-item"><a class="page-link" href="#">2</a></li>
    <li class="page-item"><a class="page-link" href="#">3</a></li>
    <li class="page-item"><a class="page-link" href="#">Next</a></li>
  </ul>
</nav>

I am new to Angular, please help... I want to use the Bootstrap pagination markup above in my own custom pagination component, and when I use the pagination component in another component's template, as

<app-paginate [somedata]="somevalue" (event)="_______"></app-paginate>

it should work.

NOTE: I have already used ngx-bootstrap pagination, so please don't suggest that; I want to make my own pagination component. Sorry for my poor English. Thank you!

A: If you know of ngx-bootstrap and like their component but want a different look, you can check their implementation of the pagination component on GitHub and then adapt it to your case. Here is an example of a custom version of the Previous button, with roughly the same TypeScript code:

<li *ngIf="directionLinks" class="page-item" [class.disabled]="!hasPrevious() || disabled">
  <a aria-label="Previous" i18n-aria-label="@@pagination.previous-aria" class="page-link" href (click)="!!selectPage(page - 1)" [attr.tabindex]="(hasPrevious() ? null : '-1')">
    <myCustomIcon aria-hidden="true"></myCustomIcon>
    <span class="sr-only" i18n="@@pagination.previous"></span>
  </a>
</li>
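To complement the answer above: the page-window logic behind such a component can live in a small pure function that the template then renders with `*ngFor` over `page-item` entries, keeping Previous/Next separate. A minimal sketch in TypeScript; the function name and signature are illustrative, not part of Angular or ngx-bootstrap.

```typescript
// Compute which page numbers to render: centre a window of up to `max`
// pages on the current page and clamp it to [1, total]. Illustrative only.
function pageRange(current: number, total: number, max: number = 5): number[] {
  let start = Math.max(1, current - Math.floor(max / 2));
  const end = Math.min(total, start + max - 1);
  start = Math.max(1, end - max + 1); // re-clamp when near the last page
  const pages: number[] = [];
  for (let p = start; p <= end; p++) {
    pages.push(p);
  }
  return pages;
}

console.log(pageRange(1, 3));   // fewer pages than max: [1, 2, 3]
console.log(pageRange(5, 10));  // centred window: [3, 4, 5, 6, 7]
console.log(pageRange(10, 10)); // clamped at the end: [6, 7, 8, 9, 10]
```

In the component template this could drive something like `<li class="page-item" *ngFor="let p of pageRange(current, total)">`, with a `selectPage(p)` click handler emitting an output event to the parent.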
{ "redpajama_set_name": "RedPajamaStackExchange" }
47
{"url":"https:\/\/bethzero.com\/2018\/11\/","text":"## Promotional factory farming photos\n\n180 words\n\nOrganizations like Mercy for Animals do undercover investigations of factory farms to expose the bad circumstances there. That results in photos like these, intended to make factory farms look bad.\n\nTW: animal cruelty, continued below the fold.\n\n## Abstinence-Only Education Criticism | Part 2\n\n172 words\n\nDue to recent events, here is a part 2 to my earlier post on abstinence \u201ctreatment\u201d. Slightly more personal this time.\n\n### Eating Disorders\n\nReddit just banned \/r\/ProED and \/r\/ProEDmemes. I\u2019m not sure what to say, other than that it sucks.\n\nI\u2019ve never had an eating disorder. I\u2019ve flirted with it and my eating has never been healthy, but it has never interfered with my day-to-day functioning.\n\nI liked ProEDmemes. Many posts were relatable, and the community has helped me through some dark spots. The people were lovely and caring, it was a place to relate and to vent. It wasn\u2019t a place where eating disorders were encouraged, but one where eating disorders were accepted and everyone could work it out on their own pace, sharing and receiving help along the way.\n\nReddit is really convenient. You can easily participate in many different communities at once, so any single community can survive even when little fresh content gets posted. But for vulnerable communities, it doesn\u2019t work so well.\n\nI miss the old internet.\n\n## What am I feeling\n\n317 words\n\nWhat do normal humans look like? Have my arms always looked this out of place? Are my arms too long or too short? Too wide or too narrow? They are wrong, but I can\u2019t quite put my finger on what part is wrong.\n\nMy face looks off. All its parts have the wrong shape, size and location. My head looks fake and silly, not like heads are supposed to look.\n\nProprioception is bothering me. 
It constantly makes me aware of where my left leg is, even when I\u2019m trying to concentrate on things that have nothing to do with anyone\u2019s left leg. What\u2019s up with that? Why a leg, why only the left one?\n\nI want loud music, I want to stand outside in the cold, I want physical sensations. Anything to stop the goddamn noise.\n\nI press my tongue against my teeth. Have my teeth always been there? They feel too widely spaced and too narrowly. They are too close to the centre of my jaw. My teeth don\u2019t fit in my mouth and they definitely shouldn\u2019t be where they are now. I want to grab a hammer and smash them from my skull.\n\nHas my hair always looked this ridiculous?\n\nAll proprioception is too present right now, all over my body. My ears\u00a0are bombarding my brain even though the world is quiet. My eyes are doing something equivalent that I can\u2019t describe. My skin is crawling, itching to be cut. I wish my senses could turn off for an hour; I want some rest.\n\nEverything about my body is deformed and in the wrong place and feels like it doesn\u2019t belong.\n\nIsn\u2019t it strange that depression dampens your colour vision while also heightening proprioception? Because hypomania increases both and depression should be the opposite of hypomania.\n\nI feel like a stranger in my own body. Maybe I would feel like a stranger in any body.\n\n## Veg*n dishes versus constrained optimization\n\n313 words\n\nMathematical optimization is concerned with\u00a0 problems of the form $$\\text{maximize~} f(x) \\text{~ for ~} x \\in X$$ for some set $X \\subset Y$ and function $f : Y \\to \\mathbb{R}$. In this post, we\u2019ll think of $Y$ as the set of possible restaurant dishes, $X\\subset Y$ as the set of dishes satisfying certain constraints like being digestible, non-poisonous and not containing human flesh. 
The function $f$ to optimize is some combination of price, healthyness and taste.\n\nA first observation is that\u00a0for any $X' \\subset X$ the maximum of $f(x)$ over $X'$ is no bigger than the maximum over $X$, for if $x \\in X'$ attains the maximum of $f(x)$ over $X'$, then also $x \\in X$, so the maximum of $f$ over $X$ is at least $f(x)$. In normal words, if you restrict your diet, you can miss out on good dishes, but never gain access to better dishes than on an unrestricted diet.\n\nFrom this, we could deduce that you should never pick a vegan dish in a restaurant because the non-vegan dishes were made with fewer restrictions and hence can only be better than the vegan dish. Same for choosing recipes to cook yourself. Before I was vegan, this was my conscious reason for always choosing dishes with meat.\n\nBut is the deduction true? I don\u2019t think so. Because, unbeknownst to many, the meat dishes are actually constrained to contain meat. I don\u2019t know why, but my two prime suspects are Goodhart\u2019s law impacting the reasoning above, or meat-eaters being scared of vegetables.\n\nAs it turns out, making a good vegetarian meal takes non-zero skill, contrary to making a good meal with meat. In my experience, this causes chefs to put actual thought into their vegetarian dishes, causing these to actually be tastier than most dishes with meat. 
So the argument from mathematical optimization actually gives the wrong answer here!","date":"2019-05-26 00:39:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 17, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.22605933248996735, \"perplexity\": 2218.2171733537025}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-22\/segments\/1558232258453.85\/warc\/CC-MAIN-20190525224929-20190526010929-00162.warc.gz\"}"}
null
null
{"url":"http:\/\/mathhelpforum.com\/calculus\/15450-length-curve.html","text":"# Math Help - Length Of the curve\n\n1. ## Length Of the curve\n\nHey all, I cant seem to figure out this last question..I have the answer but I cant get the full step by step process. Please help\n\nFind the length of the curve: Y = ln x on 1 less than and equal to x less than or equal to square root 3.\n\n2. Originally Posted by sikhest\nHey all, I cant seem to figure out this last question..I have the answer but I cant get the full step by step process. Please help\n\nFind the length of the curve: Y = ln x on 1 less than and equal to x less than or equal to square root 3.\nuse the formula: $s = \\int_{1}^{ \\sqrt {3}} \\sqrt {1 + (y')^2}dx$\n\nwhere $s$ is the length of the curve on the desired interval and $y'$ is the derivative of $y$ with respect to $x$\n\n3. Hey thanks, All that is left to do is change the bounds and solve it..any ideas on how the bounds are changed?\n\nThanks\n\n4. Hello, sikhest!\n\nFind the length of the curve: . $y = \\ln x$ .on $1 \\leq x \\leq \\sqrt{3}$\n\nIf you followed Jhevon's advice: . $y' = \\frac{1}{x}$\n\nThen: . $S \\;=\\;\\int^{\\sqrt{3}}_1\\sqrt{1 + \\frac{1}{x^2}}\\:dx \\;=\\; \\int^{\\sqrt{3}}_1\\frac{\\sqrt{x^2+1}}{x}\\:dx$\n\nLet $x = \\tan\\theta\\quad\\Rightarrow\\quad dx = \\sec^2\\!\\theta\\ d\\theta$\n\n. . and we have: . $S \\;=\\;\\int\\frac{\\sec\\theta}{\\tan\\theta}(\\sec^2\\!\\th eta\\ d\\theta) \\;=\\;\\int\\frac{\\sec^3\\!\\theta}{\\tan\\theta}\\ d\\theta$\n\nTo change the limits, we have: . $\\tan\\theta = x$\n. . When $x = 1$, we have: . $\\tan\\theta = 1\\quad\\Rightarrow\\quad \\theta = \\frac{\\pi}{4}$\n. . When $x = \\sqrt{3}$, we have: . $\\tan\\theta = \\sqrt{3}\\quad\\Rightarrow\\quad\\theta = \\frac{\\pi}{3}$\n\nTherefore, the integral is: . 
$S \\;=\\;\\int^{\\frac{\\pi}{3}}_{\\frac{\\pi}{4}} \\frac{\\sec^3\\!\\theta}{\\tan\\theta}\\ d\\theta$\n\nGood luck!","date":"2015-11-26 08:41:31","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 17, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.95326167345047, \"perplexity\": 320.3672717052529}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-48\/segments\/1448398446535.72\/warc\/CC-MAIN-20151124205406-00007-ip-10-71-132-137.ec2.internal.warc.gz\"}"}
null
null
We make every effort to remain open however in the event of severe weather, power outages or other emergencies Play and Learn Centers may find it necessary to delay opening or close. Tune in to ABC 6 or KYW for your center's closing number. When possible Directors will email, leave messages on Center voicemail and post on Play & Learn's Facebook Page.
{ "redpajama_set_name": "RedPajamaC4" }
4,012
Q: Xcode 4.3.2 is missing iOS 4.0 simulator I have the latest version of Xcode (4.3.2) and have set Deployment Target to 4.0. But there is not iOS 4.0 simulator only 4.3 and above. Is there an easy way to install iOS 4.0 Simulator in Xcode? A: Apple has stopped supporting devices running operating system previous to iOS 4.3 in their Xcode 4.3.2 SDK. So for that you might need to install the previous SDK of Xcode A: Follow these steps to add a (new) simulator * *Click on Simulator icon and open simulator list. *At the end of list, there is an option to add new simulator "Add Additional Simulator". That will open 'Device & Simulator' window. *Switch to 'Simulator' tab. *There are three field in simulator tab. *Click on '+' icon, on left bottom corner of window. *Simulator Name: Enter simulator name here *Device Type: Select iPad from this dropdown list *OS Version: Select OS version from this dropdown list *Click on 'Create' A new simulator will be added in your Simulator option list. Look at this snapshot to understand flow of above steps: And if there is no simulator/OS version in simulator list, you're looking for, * *Click on Simulator icon and open simulator list. *At the end of list, there is an option to add new simulator "Download Simulator". That will open 'Component' window (from Xcode >> Preferences). *Select/click simulator from list, which you need to download. Look at this snapshot:
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,261
Adorable baby sitting in a white nursery. portrait of adorable baby in the nursery. Adorable baby boy in white sunny bedroom. Newborn child relaxing on a rug. Nursery for young children. Furniture, textile and bedding for kids. New born kid during tummy time with toys at a window.
{ "redpajama_set_name": "RedPajamaC4" }
2,119
\section{Introduction}

This paper explores approximation properties of finite smooth mixtures of normal regressions as flexible models for conditional densities. These models are a special case of mixtures of experts (ME) introduced by \citet{JacobsEtAl91}. ME have become increasingly popular in the statistical literature since they are very flexible, easy to interpret and reasonably easy to estimate. See, for example, papers by \citet{JordanJacobs94} and \citet{JordanXu95}, who employ the expectation maximization (EM) estimation algorithm, or papers by \citet{PengJacobsTanner1996}, \citet{WoodJiangTanner02}, \citet{Geweke07} and \citet{VillaniKohnGiordani07}, who use Markov chain Monte Carlo methods for estimation of ME in the Bayesian framework.

This paper contributes to the literature that provides a theoretical explanation of the success of ME models in applications. In particular, I show that large classes of conditional densities can be approximated in the Kullback--Leibler (KL) distance by finite smooth mixtures of normal regressions. Approximation results are obtained in the KL distance for the following reason: if a data-generating density is in the KL closure of a class of models, then this density can be consistently estimated from data by these models under weak regularity conditions [see, e.g., \citet{GhoshRamamoorthi03} for a textbook treatment of Schwarz's theorem on posterior consistency and \citet{RoederWasserman97} for posterior consistency results for finite mixtures of normals].

Consider a joint probability distribution $F$ on a product space $Y \times X$, $Y \subset R^d$ and $X \subset R^{d_x}$. Assume the conditional distribution $F(y|x)$ has a density $f(y|x)$ with respect to the Lebesgue measure. The marginal density of $x$ with respect to some generic measure is denoted by $f(x)$. A model $\mathcal{M}$ for the conditional density $f(y|x)$ is described by $p(y|x,\mathcal{M})$.
The KL distance between $f(y|x)f(x)$ and $p(y|x,\mathcal{M})f(x)$ is defined by
\[
d_{\mathrm{KL}}(F,\mathcal{M}) = \int\log\frac{f(y|x)}{p(y|x,\mathcal{M})} F(dy,dx).
\]
This distance can also be interpreted as the expected KL distance between the conditional distributions. Either way, this is the distance useful for obtaining estimation consistency results. Also, convergence in the KL distance implies convergence in the total variation distance. Below, I consider several different specifications of mixture of normal regressions models, $p(y|x,\mathcal{M})$, and provide conditions on $F$ under which $d_{\mathrm{KL}}(F,\mathcal{M})$ can be made arbitrarily small. I also derive rates of convergence and easy-to-interpret bounds for $d_{\mathrm{KL}}(F,\mathcal{M})$.

In general, a finite mixture of normal regressions model can be written as
\[
p(y|x,\mathcal{M}) = \sum_{j=1}^m \alpha_j^m(x) \phi(y, \mu_j^m(x),\sigma_j^m(x)),
\]
where the mixing probabilities satisfy $\alpha_j^m(x) \in[0,1]$ and $\sum_j \alpha_j^m(x) = 1$, and $\phi(y,\mu,\sigma)$ is a normal density with mean $\mu$ and standard deviation $\sigma$ evaluated at $y$ (if $y$ is multidimensional, then the variance--covariance matrix is diagonal, $\sigma^2 I$). Most of the results obtained in the paper can be easily extended to models in which general location scale densities $\sigma^{-d}K((y-\mu)/\sigma)$ are mixed instead of the normal densities $\phi(y,\mu,\sigma)$. Models in which the mixing weights depend on $x$ are referred to in this paper as smooth mixtures. In practice, the $\alpha_j^m(x)$'s are often modeled by a multinomial choice model, for example, multinomial logit [\citet{PengJacobsTanner1996}] or probit [\citet{Geweke07}], or the weights might not depend on $x$ at all. The mean $\mu_j^m(x)$ can be constant, linear or flexible, for example, polynomial, in $x$. An exponentiated polynomial or spline in $x$ can be used for modeling the standard deviation $\sigma_j^m(x)$ [\citet{VillaniKohnGiordani07}].
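For concreteness, the following numerical sketch (my own illustration; the component parameters $a$, $b$, $\mu$, $\sigma$ and the evaluation point $x$ are arbitrary assumptions, not taken from the paper) evaluates such a finite smooth mixture with multinomial-logit mixing probabilities with linear indices in $x$ and $x$-free means and standard deviations:

```python
import numpy as np

def logit_weights(x, a, b):
    """alpha_j^m(x): multinomial-logit mixing probabilities with linear indices a_j + b_j * x."""
    z = a + b * x
    z = z - z.max()                   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def smooth_mixture(y, x, a, b, mu, sigma):
    """p(y|x,M) = sum_j alpha_j(x) * phi(y; mu_j, sigma_j)."""
    alpha = logit_weights(x, a, b)
    phi = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return phi @ alpha

# illustrative (assumed) parameters for a three-component mixture
a = np.array([0.0, 0.5, -0.5]); b = np.array([1.0, -1.0, 0.3])
mu = np.array([-2.0, 0.0, 2.0]); sigma = np.array([0.5, 1.0, 0.8])
y = np.linspace(-30.0, 30.0, 60001)
dens = smooth_mixture(y, x=0.7, a=a, b=b, mu=mu, sigma=sigma)
```

For each fixed $x$ the weights sum to one, so $p(\cdot|x,\mathcal{M})$ integrates to one in $y$; the logit indices make the weights, and hence the shape of the conditional density, vary smoothly with $x$.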
To the best of my knowledge, previous literature on smooth mixtures of regressions (or experts) does not provide a theory on what specifications for $\alpha_j^m$, $\mu_j^m$ and $\sigma_j^m$ deliver a model that can approximate and consistently estimate large nonparametric classes of densities $F$. There are theoretical results on approximation of smooth functions and estimation of conditional expectations by ME [see \citet{ZeeviMeirMaiorov98} and \citet{MaiorovMeir98}]. The only paper on approximation of conditional densities by ME seems to be \citet{JiangTanner99}, who develop approximation and estimation results for target densities from a single-parameter exponential family, in which the parameter is a smooth function of covariates. A detailed comparison with results in \citet{JiangTanner99} is presented in Section \ref{sec:comparison}. In this paper, I do not restrict the functional form of $f(y|x)$ and use weak regularity conditions to describe a class of $F$ that can be approximated. Conditions on approximable classes of $f(y|x)$ and $f(x)$ that are common for different model specifications include bounded support for $f(x)$, continuity of $f(y|x)$ in $(y,x)$, finite expectation of a change of $\log f(y|x)$ in a neighborhood of $y$ and existence of the second moments of $y$. The latter restriction can be weakened by adding densities with fat tails to the mixtures in addition to normal densities.

In Section \ref{sec:lin_logit}, I show that considerable flexibility is already attained when the $\alpha_j^m$'s are modeled by multinomial logit with linear indices in $x$, and $(\mu_j^m,\sigma_j^m)$ are independent of $x$. Results in Sections \ref{sec:poly_logit} and \ref{sec:lin_logit} suggest that using polynomials in the logit specification reduces the number of mixture components $m$ required to achieve a specified approximation precision.
As shown in Section \ref{sec:flex_mean}, models for a univariate response $y$ in which the mixing probabilities and the variances of the mixed normals are independent of $x$, and the means are flexible, for example, polynomial in $x$, can approximate large classes of $f(y|x)$. Differences in quantiles of $f(y|x)$ from these classes have to be bounded above and below uniformly in $x$. These restrictions on $f(y|x)$ can be weakened if the variances of the mixed normals are modeled by flexible functions of $x$. Section \ref{sec:conclusion} summarizes the findings.

\section{Infeasible model}
\label{sec:Infeasible_model}

In this section, I explicitly construct a smooth mixture of normals model that converges to a given $F$ in the KL distance as $m$ increases. This model is not feasible in the sense that it is not based on components employed in practice, for example, logit/probit mixing probabilities. However, the results for feasible models presented in the following sections follow from this one or are similar. Let $A_j^m$, $j=0,1,\ldots,m$, be a partition of $Y$ consisting of adjacent half-open half-closed hypercubes $A_1^m,\ldots,A_m^m$ with side length $h_m$ and the rest of the space~$A_0^m$. As $m$ increases, the fine part of the partition becomes finer, $h_m \rightarrow0$. Also, it covers a larger and larger part of $Y$: for any $y \in Y$ there exists $M_0$ such that
\begin{equation}
\label{eq:con_partition_gen_case}
\forall m \geq M_0\qquad C_{\delta_m}(y) \cap A_0^m = \varnothing,
\end{equation}
where $C_{\delta_m}(y)$ is a hypercube with center $y$ and side length $\delta_m \rightarrow0$. It is always possible to construct such a partition. For example, if $Y=[0,\infty)$, let $A_0^m=[\log m, \infty)$, $A_j^m=[(j-1)\log m / m, j \log m / m)$ for $j \neq0$, and $h_m=\log m / m$.
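A minimal sketch of this construction (my own illustration; the value $m=100$ is arbitrary) builds the cell edges for $Y=[0,\infty)$ and confirms that the fine part $[0,\log m)$ is tiled exactly by $m$ cells of width $h_m=\log m/m$:

```python
import numpy as np

def partition(m):
    """Cell edges for Y = [0, inf): fine cells A_j^m = [(j-1)h_m, j h_m), j = 1..m,
    with h_m = log m / m; the coarse remainder is A_0^m = [log m, inf)."""
    h_m = np.log(m) / m
    edges = h_m * np.arange(m + 1)    # 0, h_m, 2 h_m, ..., m h_m = log m
    return h_m, edges

h_m, edges = partition(100)
```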
A candidate model $\mathcal{M}_0$ for approximating $f(y|x)$ is
\begin{equation}
\label{eq:candidate_general_case}
p(y|x,\mathcal{M}_0) = \sum_{j=1}^m F(A_j^m|x) \phi(y, \mu_j^m,\sigma_m) + F(A_0^m|x) \phi(y, 0,\sigma_0),
\end{equation}
where $\sigma_0$ is fixed, $\sigma_m$ converges to zero as $m$ increases and $\mu_j^m$ is the center of $A_j^m$. One can always construct a model $\mathcal{M}_0$ and a partition $A_j^m$ so that
\begin{equation}
\label{eq:cond_delta_sigma_h}
\delta_m \rightarrow0,\qquad \sigma_m / \delta_m \rightarrow0,\qquad \delta_m^{d-1} h_m / \sigma_m^d \rightarrow0;
\end{equation}
for instance, in the example for $Y=[0,\infty)$ from the previous paragraph, let $\sigma_m = h_m^{0.5}$ and $\delta_m = h_m^{0.25}$. For a partition satisfying (\ref{eq:con_partition_gen_case}) and (\ref{eq:cond_delta_sigma_h}), let us introduce the following restrictions on $F$.

\begin{assumption}
\label{assn:general_case}
\begin{enumerate}
\item \hypertarget{assnitem:general_case_1}{} $f(y|x)$ is continuous in $y$ a.s. $F$.
\item \hypertarget{assnitem:general_case_2}{} The second moments of $y$ are finite.
\item \hypertarget{assnitem:general_case_3}{} For any $(y,x)$ there exists a hypercube $C(r,y,x)$ with side length $r>0$ and $y \in C(r,y,x)$ such that (i)
\begin{equation}
\label{eq:IntBoundFinfF}
\int\log\frac{f(y|x)}{\inf_{z \in C(r,y,x)} f(z|x) } F(dy,dx) < \infty
\end{equation}
and (ii) there exists $M_3$ such that for any $m \geq M_3$, if $y \in A_0^m$, then $C(r,y,x) \cap A_0^m$ contains a hypercube $C_0(r,y,x)$ with side length $r/2$ and a vertex at $y$, and if $y \in Y \setminus A_0^m$, then $C(r,y,x) \cap(Y \setminus A_0^m)$ contains a hypercube $C_1(r,y,x)$ with side length $r/2$ and a vertex at $y$.
\end{enumerate}
\end{assumption}

Parameter $\sigma_0$ can always be chosen so that
\begin{equation}
\label{eq:cond_sigma0}
1>2^{-(d+1)}>\phi(y, 0,\sigma_0) \lambda(C_0(r,y,x)),
\end{equation}
where $\lambda$ is the Lebesgue measure.
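The following numerical sketch (my own check; $d=1$ and the sample values of $m$ and $r$ are arbitrary assumptions) confirms that the choices $\sigma_m=h_m^{0.5}$ and $\delta_m=h_m^{0.25}$ drive all three rates in (\ref{eq:cond_delta_sigma_h}) to zero, and exhibits one valid choice of $\sigma_0$ for (\ref{eq:cond_sigma0}), using $\sup_y\phi(y,0,\sigma_0)=1/(\sigma_0\sqrt{2\pi})$ and $\lambda(C_0(r,y,x))=(r/2)^d$:

```python
import numpy as np

d = 1                                     # dimension (assumption for this sketch)

# the three rates of condition (eq:cond_delta_sigma_h) for growing m
rates = []
for m in (10, 10**3, 10**5):
    h_m = np.log(m) / m
    sigma_m, delta_m = h_m**0.5, h_m**0.25
    rates.append((delta_m, sigma_m / delta_m, delta_m**(d - 1) * h_m / sigma_m**d))

# condition (eq:cond_sigma0): pick sigma0 with sup_y phi(y,0,sigma0)*(r/2)**d < 2**-(d+1) < 1;
# any sigma0 > 2**(d+1) * (r/2)**d / sqrt(2*pi) works
r = 0.5                                   # arbitrary side length
sigma0 = 2**(d + 1) * (r / 2)**d / np.sqrt(2 * np.pi) + 0.1
check = (r / 2)**d / (sigma0 * np.sqrt(2 * np.pi))   # sup_y phi * lambda(C_0)
```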
\begin{proposition}
\label{prp:gen_case}
If the model $p(y|x,\mathcal{M}_0)$ and the partition $A_j^m$ are constructed so that (\ref{eq:con_partition_gen_case}), (\ref{eq:candidate_general_case}), (\ref{eq:cond_delta_sigma_h}) and (\ref{eq:cond_sigma0}) hold, and $F$ satisfies Assumption \ref{assn:general_case}, then $d_{\mathrm{KL}}(F,\mathcal{M}_0) \rightarrow0$ as $m\rightarrow\infty$.
\end{proposition}

The proposition is rigorously proved in the \hyperref[app]{Appendix}. Here, I briefly describe the intuition behind the argument and the role of the assumptions. Convergence in the KL distance is proved by the dominated convergence theorem (DCT). First, I establish point-wise convergence of the integrand, $\log f(y|x)/p(y|x,\mathcal{M}_0)$, to zero, and then I derive an integrable upper bound on the integrand for the DCT applicability. Nonnegativity of the KL distance is fruitfully exploited in the proof as it allows working only with upper bounds and ignoring the lower ones in convergence arguments.

The first term on the right-hand side of (\ref{eq:candidate_general_case}) (the sum from 1 to $m$) approximates the integral
\begin{equation}
\label{eq:convol}
\int\phi(y, \mu,\sigma_m) f(\mu|x) \,d\mu = \int f(y - \sigma_m z | x) \phi(z, 0,1) \,dz,
\end{equation}
when $h_m$ is much smaller than $\sigma_m$, and the fine part of the partition is large. The integral on the right-hand side of (\ref{eq:convol}) is obtained by a change of variables. For a small $\delta_m$ and $z$ satisfying $\Vert\sigma_m z \Vert \leq \delta_m$, $f(y-\sigma_m z|x)$ is close to $f(y|x)$, as $f(y|x)$ is assumed to be continuous in $y$. Therefore, when $\sigma_m$ is much smaller than $\delta_m$, the right-hand side of (\ref{eq:convol}) should be close to $f(y|x)$. Thus, this intuitive argument explains the role of conditions (\ref{eq:cond_delta_sigma_h}) and of the continuity of $f(y|x)$. The second term on the right-hand side of (\ref{eq:candidate_general_case}) converges to zero.
This term is not needed for point-wise convergence. It can be omitted when the support of $f(y|x)$ is bounded uniformly in $x$, since in that case we can set $A_0^m=\varnothing$ and use the same variance $\sigma_m^2$ in all mixture components (there is no need to define $\sigma_0$). This term, together with part \hyperlink{assnitem:general_case_2}{2} of Assumption \ref{assn:general_case}, prevents the tails of $p(y|x,\mathcal{M}_0)$ from becoming too thin relative to $f(y|x)$ in the unbounded support case (in the absence of this term the tails would be too thin as $\sigma_m \rightarrow0$).

Parts \hyperlink{assnitem:general_case_2}{2} and \hyperlink{assnitem:general_case_3}{3} of Assumption \ref{assn:general_case} together guarantee the existence of an integrable upper bound for the DCT applicability. An upper bound on $\log f(y|x)/p(y|x,\mathcal{M}_0)$ involves a lower bound on $p(y|x,\mathcal{M}_0)$. Both terms on the right-hand side in the definition of $p(y|x,\mathcal{M}_0)$ in (\ref{eq:candidate_general_case}) can be bounded below by an expression proportional to $\inf_{z \in C(r,y,x)} f(z|x)$. That is how condition (\ref{eq:IntBoundFinfF}) is deduced. The lower bound for the second term in (\ref{eq:candidate_general_case}) also includes $\phi(y, 0,\sigma_0)$, and that is why finiteness of the second moments of $y$ is assumed.

One interpretation of condition (\ref{eq:IntBoundFinfF}) [part \hyperlink{assnitem:general_case_3}{3}(i) of Assumption \ref{assn:general_case}] is that local relative changes in $f(y|x)$ due to changes in $y$ should not be infinitely large on average. It seems difficult to think of an unconditional density that is well behaved and positive everywhere and yet violates (\ref{eq:IntBoundFinfF}). This part of the assumption can, however, be violated by reasonable conditional densities, as Example~\ref{ex:exponential} below illustrates.
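To make the proposition concrete, here is a numerical sketch (my own illustration, not from the paper; the exponential target with $\gamma=1$, the value $\sigma_0=3$, the integration grid and the values of $m$ are all assumptions) that builds the model (\ref{eq:candidate_general_case}) for the exponential target of Example~\ref{ex:exponential} and evaluates the KL integrand on a grid; the distance decreases as $m$ grows:

```python
import numpy as np

def p_M0(y, gamma, m, sigma0=3.0):
    """Infeasible model: sum_j F(A_j^m) phi(y; mu_j^m, sigma_m) + F(A_0^m) phi(y; 0, sigma0),
    specialized to the exponential target f(y) = gamma * exp(-gamma * y) on [0, inf)."""
    h_m = np.log(m) / m                       # fine-cell width
    sigma_m = np.sqrt(h_m)                    # component scale, sigma_m -> 0
    edges = h_m * np.arange(m + 1)            # cell edges; edges[-1] = log m
    mu = 0.5 * (edges[:-1] + edges[1:])       # cell centers mu_j^m
    w = np.exp(-gamma * edges[:-1]) - np.exp(-gamma * edges[1:])   # F(A_j^m)
    w0 = np.exp(-gamma * edges[-1])                                # F(A_0^m)
    phi = np.exp(-0.5 * ((y[:, None] - mu) / sigma_m) ** 2) / (sigma_m * np.sqrt(2 * np.pi))
    phi0 = np.exp(-0.5 * (y / sigma0) ** 2) / (sigma0 * np.sqrt(2 * np.pi))
    return phi @ w + w0 * phi0

def kl_to_M0(gamma, m):
    """Trapezoid approximation of int f log(f/p) dy on a truncated grid."""
    y = np.linspace(1e-6, 20.0 / gamma, 4001)
    f = gamma * np.exp(-gamma * y)
    g = f * np.log(f / p_M0(y, gamma, m))
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(y)))

kls = [kl_to_M0(1.0, m) for m in (20, 80, 320)]   # shrinks as the partition refines
```

Only the qualitative behavior matters here: with $\sigma_m = h_m^{1/2}$ the smoothing scale shrinks as $m$ grows, while the coarse component $\phi(y,0,\sigma_0)$ keeps the tails of $p$ from becoming too thin relative to $f$.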
\begin{figure}[b]
\includegraphics{765f01.eps}
\caption{Construction of $C(r,y,x)$.}
\label{fig:Cryx}
\end{figure}

When $f(y|x)$ is positive everywhere, part \hyperlink{assnitem:general_case_3}{3}(ii) of Assumption \ref{assn:general_case} is not needed. It always holds if $C(r,y,x)$ is a hypercube with center at $y$. Part \hyperlink{assnitem:general_case_3}{3}(ii) becomes important when $f(y|x)$ can be equal to zero. In particular, the sets $C_0(r,y,x)$ and $C_1(r,y,x)$ in part \hyperlink{assnitem:general_case_3}{3}(ii) of Assumption \ref{assn:general_case} are introduced to specify that $C(r,y,x)$ needs to be defined differently near the boundary of the support and in the tails if one wants to use condition (\ref{eq:IntBoundFinfF}) in its present form. This is illustrated in Figure \ref{fig:Cryx}. The support of $f(\cdot|x)$ should include $C(r,y,x)$ a.s. $F$; otherwise, part \hyperlink{assnitem:general_case_3}{3}(i) of Assumption \ref{assn:general_case} is not satisfied. Therefore, for $f(y|x)$ in Figure \ref{fig:Cryx}, it has to be the case that $C(r,y,x)=[y,y+r]$ at the boundary of the support (the intersection of the axes). Setting $C(r,y,x)=[y,y+r]$ near the boundary of the support makes the ratio $f(y|x) / \inf_{z \in C(r,y,x)} f(z|x)$ smallest possible (equal to one) and thus helps with condition (\ref{eq:IntBoundFinfF}). Parts of $Y$ near the boundary of the support are covered by the fine part of the partition $A_1^m,\ldots,A_m^m$ for all sufficiently large $m$, and part \hyperlink{assnitem:general_case_3}{3}(ii) of Assumption \ref{assn:general_case} holds for $C_1(r,y,x)=[y,y+r/2]$. Using $C(r,y,x)=[y,y+r]$ for all $y$ would not work: for any $m$ one can find $y \in A_m^m$ such that $C(r,y,x) \cap (Y\setminus A_0^m)$ is arbitrarily small, so part \hyperlink{assnitem:general_case_3}{3}(ii) of Assumption \ref{assn:general_case} fails.
Thus, for $y$ that are arbitrarily far from the boundary of the support, one has to use $C(r,y,x)=[y-r/2,y+r/2]$ eventually. Then, part \hyperlink{assnitem:general_case_3}{3}(ii) of the assumption clearly holds for $C_1(r,y,x)=[y-r/2,y]$, $C_0(r,y,x)=[y,y+r/2]$ and any $m$.

Results in this section and similar results in the following sections can be generalized in several different ways. First, the derivation of the integrable upper bound in the proof of Proposition \ref{prp:gen_case} suggests that the requirement of finite second moments of $y$ can be weakened by adding a density with thicker-than-normal tails to the mixture of normals; for example, substitute $\phi(y, 0,\sigma_0)$ in (\ref{eq:candidate_general_case}) with a Student $t$-density. Second, more general shapes of the support of $F$ can be accommodated if, instead of the hypercubes $C(r,y,x)$, $C_0(r,y,x)$ and $C_1(r,y,x)$ in Assumption \ref{assn:general_case}, different sets with positive Lebesgue measure are used. For example, if the support of $f(\cdot|x)$ is a triangle in $R^2$, then small triangles can be used instead of the squares $C(r,y,x)$, $C_0(r,y,x)$ and $C_1(r,y,x)$. Third, general location scale densities $\sigma^{-d}K((y-\mu)/\sigma)$ can be used in mixtures instead of normal densities. As long as analogs of Lemmas \ref{lm:boundRSbyInt_ESP}, \ref{lm:boundIntBy1} and \ref{lm:boundRS_EPP_new} (see the \hyperref[app]{Appendix}) are available for a particular type of density, results in this and the following sections will hold for mixtures of these densities. Lemmas \ref{lm:boundRSbyInt_ESP} and \ref{lm:boundRS_EPP_new} hold for $\sigma^{-d}K((y-\mu)/\sigma)$ if $K(z)$ is bounded and nonincreasing in $|z|$ (proofs of the lemmas use only these facts about the normal distributions). The derivation of bounds in Lemma \ref{lm:boundIntBy1} exploits normality; however, the qualitative results of the lemma hold as long as $\int_R K(z)\,dz = 1$ and $K(z)$ is positive in a neighborhood of zero.
Thus, all the results in this paper that establish $d_{\mathrm{KL}}(F,\mathcal{M}) \rightarrow0$ do not depend on the normality assumption; however, the bounds and convergence rates for $d_{\mathrm{KL}}(F,\mathcal{M})$ derived below are specific to mixtures of normal densities, and they might be different for mixtures of other densities. All these generalizations seem to be straightforward, and I do not pursue them in this paper to keep the arguments short and simple.

Examples below demonstrate that Assumption \ref{assn:general_case} is satisfied for a large class of densities. They also describe some situations in which the assumption fails.

\begin{example}
\label{ex:exponential}
Exponential distribution, $f(y|x)=\gamma(x) \exp\{-\gamma(x)y\}$, $\gamma(x)>0$. The density is continuous in $y$ (part \hyperlink{assnitem:general_case_1}{1} of Assumption \ref{assn:general_case}). Let $\int\gamma^{-2} \,dF < \infty$ so that the second moment of $y$ is finite (part \hyperlink{assnitem:general_case_2}{2} of Assumption~\ref{assn:general_case}). Define the partition $A_j^m$ and $C(r,y,x)$, $C_0(r,y,x)$ and $C_1(r,y,x)$ as shown in Figure \ref{fig:Cryx}; for example, for some $r>0$, let $C(r,y,x)=[y,y+r]$ for $y \in[0,r]$ and $C(r,y,x)=[y-r/2,y+r/2]$ for $y \in(r,\infty)$. Thus, from the discussion of Figure \ref{fig:Cryx} above it follows that part \hyperlink{assnitem:general_case_3}{3}(ii) of Assumption \ref{assn:general_case} is satisfied. Because $\log[f(y|x) / \inf_{z \in C(r,y,x)} f(z|x)] \leq r \gamma(x)$, part \hyperlink{assnitem:general_case_3}{3}(i) of Assumption \ref{assn:general_case} holds as long as $\gamma(x)$ is integrable with respect to $f(x)$. If $\gamma(x)$ is not integrable, then part \hyperlink{assnitem:general_case_3}{3}(i) of the assumption fails.
\end{example}

\begin{example}
\label{ex:inf_student}
A Student $t$-distribution in which the scale and location parameters are functions of $x$: $f(y|x) \propto[ \nu+ ((y-b(x))/c(x))^2]^{-(\nu+ 1)/2}$, $\nu>2$, and $b(x)^2$, $c(x)^{-2}$ and $c(x)^2$ are integrable w.r.t. $f(x)$. The second moment of $y$ is finite since
\begin{eqnarray*}
\int y^2 \,dF & = & \int\biggl(c(x)^2 \biggl[ \frac{y-b(x)}{c(x)} \biggr]^2 + 2 b(x)y - b(x)^2 \biggr) \,dF \\
& = & \int\biggl( c(x)^2 \frac{\nu}{\nu- 2} + 2 b(x)^2 - b(x)^2 \biggr) \,dF < \infty.
\end{eqnarray*}
As I discuss above, for densities positive everywhere, part \hyperlink{assnitem:general_case_3}{3}(ii) of Assumption \ref{assn:general_case} always holds with $C(r,y,x) = [y-r/2,y+r/2]$. Part \hyperlink{assnitem:general_case_3}{3}(i) of Assumption \ref{assn:general_case} is also satisfied because
\begin{eqnarray*}
&& \int\log\frac{f(y|x)}{\inf_{z \in C(r,y,x)} f(z|x) } F(dy,dx) \\
&&\qquad = 2 \int_X \int_{b(x)}^{\infty} -\frac{\nu+ 1}{2} \log\frac{\nu+ ((y-b(x))/c(x))^2}{\nu+ ((y+ r - b(x))/c(x) )^2} f(y|x)\, dy F(dx) \\
&&\qquad \leq(\nu+ 1) 2 \int_X \int_{b(x)}^{\infty} \bigl[\nu+ \bigl(\bigl(y + r - b(x)\bigr)/c(x) \bigr)^2\bigr] f(y|x)\, dy F(dx) < \infty,
\end{eqnarray*}
where the last inequality follows by the integrability of $(y - b(x))/c(x)$, its square and $c(x)^{-2}$.
\end{example}

\begin{example}
\label{ex:inf_cont_bddsupport}
Suppose that the conditional density $f(y|x)$ is continuous in $y$ and bounded above and away from zero, $\infty> \overline{f} \geq f(y|x) \geq\underline{f}>0$, for any $y \in Y=[a,b]$ and $x \in X$. Then we can set $A_0^m=\varnothing$. For $r \in(0, (b-a)/4)$, let $C(r,y,x) = [y, y + r]$ and $C_1(r,y,x) = [y, y + r/2]$ for $y \in [a,(a+b)/2]$, and $C(r,y,x) = [y-r, y]$ and $C_1(r,y,x) = [y-r/2, y]$ for $y \in((a+b)/2,b]$. Clearly, part \hyperlink{assnitem:general_case_3}{3}(ii) of Assumption \ref{assn:general_case} is satisfied.
Because $f(y|x)/ \inf_{z \in C(r,y,x)} f(z|x) \leq\overline{f} / \underline{f}$, part \hyperlink{assnitem:general_case_3}{3}(i) of Assumption \ref{assn:general_case} also holds. The second moment of $y$ is finite and thus all parts of Assumption \ref{assn:general_case} hold. The boundedness away from zero condition can be replaced by a monotonicity condition at the boundary of the support. For example, let $f(y|x)$ be nondecreasing on $[a, a + 2r]$, nonincreasing on $[b - 2r, b]$ and bounded below by $\underline{f} > 0$ on $[a + r, b - r]$. In this case $f(y|x)/ \inf_{z\in C(r,y,x)} f(z|x) \leq \max\{1, \overline{f}/\underline{f}\}$ for any $y \in [a, b]$. Thus, part \hyperlink{assnitem:general_case_3}{3}(i) of Assumption \ref{assn:general_case} holds. The other parts of the assumption are not affected by this change. \end{example} \begin{example} \label{ex:infeasible_unifrm} Consider a uniform distribution $f(y|x) = x^{-1} 1_{[0,x]}(y)$ and $f(x)>0$ for any $x \in[1,\infty)$. A natural choice of the partition would be $A_0^m=[m h_m, \infty)$ and $A_j^m=[(j-1)h_m,j h_m)$ for $j \in\{1,\ldots,m\}$. When $y=x$, the only reasonable choice of $C(r,y,x)$ is $C(r,y,x)=[y-r,y]$. For an arbitrary $m$ and $y=x=m h_m+r/4$, $C(r,y,x)$ violates part \hyperlink{assnitem:general_case_3}{3}(ii) of Assumption \ref{assn:general_case} since the only possible $C_0(r,y,x) = [y-r/2,y]$ is not included in $A_0^m$. For $f(x)$ with bounded support, this example would satisfy Assumption \ref{assn:general_case} since in this case we could set $A_0^m=\varnothing$. This example illustrates that Assumption \ref{assn:general_case} rules out some cases in which the support of $f(\cdot|x)$ is increasing in $x$ without a bound. In Section \ref{sec:flex_mean}, I consider model specifications in which means and variances of the mixed normals can be flexible functions of $x$.
Those specifications seem to be more promising for modeling densities $f(\cdot|x)$ with support increasing in $x$ without a bound (see Example \ref{ex:uniform_flex_mean}). \end{example} \subsection{Approximation error bounds} \label{sec:M0bounds} The proof techniques of this section can also be used to derive explicit bounds on the approximation error. The bounds for densities that are positive everywhere, and especially for differentiable $f(y|x)$, are particularly informative, and it is easy to deduce an approximation rate from them. Thus, I present below the bounds and the approximation rate for these special albeit important cases. Convergence rates and bounds for other special classes, for example, densities bounded away from zero, can be obtained in a similar way. However, rates and bounds for the general case seem to be difficult to calculate. \begin{corollary} \label{crl:gen_case_bounds} Part \textup{(i)}. Suppose the model $p(y|x ,\mathcal{M}_0)$ and the partition $A_j^m$ are constructed so that (\ref{eq:con_partition_gen_case}), (\ref{eq:candidate_general_case}), (\ref{eq:cond_delta_sigma_h}) and\vspace*{-2pt} (\ref{eq:cond_sigma0}) hold. Suppose $f(y|x)$ is positive and continuous in $y$ on $Y=R^d$ for all $x$, second moments of $y$ are finite and (\ref{eq:IntBoundFinfF}) holds with $C(r,y,x)=C_r(y)$ taken to be a hypercube with center at $y$ and radius $r$.
Then, for all sufficiently large $m$, \begin{eqnarray} \label{eq:bound1_intFinfF} d_{\mathrm{KL}}(F,\mathcal{M}_0) &\leq& \int\log\frac{f(y|x)}{\inf_{z \in C_{\delta_m}(y)} f(z|x)} F(dy,dx) \\ \label{eq:bound2_lof1_epsm} &&{} + 2 \frac{3 d^{3/2} \delta_m^{d-1} h_m }{(2 \pi)^{d/2} \sigma_m^d} + 2 \exp\biggl\{-\frac{(\delta_m/\sigma_m)^2}{8} \biggr\} \\ \label{eq:bound3_intFinfFtail} &&{} + \int_{B_{\delta_m}(A_0^m)} \log\frac{f(y|x)}{\inf_{z \in C_{r}(y)} f(z|x)} F(dy,dx) \\ \label{eq:bound4_inty2_cFtail} &&{} + \int_{B_{\delta_m}(A_0^m)} \biggl[ \frac{y^{\prime}y}{2 \sigma_0^2} - \log\frac{(r/2)^d}{(2 \pi\sigma_0^2)^{d/2}} \biggr] F(dy,dx), \end{eqnarray} where $B_{\delta_m}(A_0^m)=\{(y,x)\dvtx C_{\delta_m}(y)\cap A_0^m \neq\varnothing\}$ and the bounds in (\ref{eq:bound1_intFinfF})--(\ref{eq:bound4_inty2_cFtail}) converge to zero as $m \rightarrow\infty$. Part \textup{(ii)}. If $f(y|x)$ is continuously differentiable in $y$ for all $x$ and instead of (\ref{eq:IntBoundFinfF}) the following condition holds: \begin{equation} \label{eq:dfdy_integrable} \int\sup_{z \in C_{r}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx) < \infty, \end{equation} then for all sufficiently large $m$, \begin{eqnarray}\qquad \label{eq:bound1_intdsuplnFdz} d_{\mathrm{KL}}(F,\mathcal{M}_0) &\leq& \delta_m \cdot\frac{d^{1/2}}{2} \int\sup_{z \in C_{\delta_m}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx) \\ \label{eq:bound2_lof1_epsm2} &&{} +2 \frac{3 d^{3/2} \delta_m^{d-1} h_m }{(2 \pi)^{d/2} \sigma_m^d} +2 \exp\biggl\{-\frac{(\delta_m/\sigma_m)^2}{8} \biggr\} \\ \label{eq:bound3_intdsuplnFdztail} &&{} + \frac{r d^{1/2}}{2} \int_{B_{\delta_m}(A_0^m)} \sup_{z \in C_{r}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx) \\ \label{eq:bound4_inty2_cFtail_2} &&{} + \int_{B_{\delta_m}(A_0^m)} \biggl[ \frac{y^{\prime}y}{2 \sigma_0^2} - \log\frac{(r/2)^d}{(2 \pi\sigma_0^2)^{d/2}} \biggr] F(dy,dx), \end{eqnarray} and the bounds in (\ref{eq:bound1_intdsuplnFdz})--(\ref{eq:bound4_inty2_cFtail_2}) converge to zero as $m \rightarrow\infty$. Part \textup{(iii)}. If, in addition to the assumptions from part \textup{(ii)}, for some $q>2$ and some $i_1 \in\{1,\ldots,d\}$ \begin{equation} \label{eq:bdd_q_moment} \int|y_i|^q F(dy) < \infty,\qquad i \in\{1,\ldots,d\}, \end{equation} and \begin{equation} \label{eq:bdd_y_dlogfdz} \int|y_{i_1}|^{q-2} \sup_{z \in C_{r}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx) < \infty, \end{equation} then the approximation error bound can be written as \begin{equation} \label{eq:bd_rate} d_{\mathrm{KL}}(F,\mathcal{M}_0) \leq c \cdot\biggl(\frac{1}{m} \biggr)^{1/(d \cdot[ 2+ 1 / (q-2) + \varepsilon])}, \end{equation} where $\varepsilon>0$ can be arbitrarily close to zero and $c$ does not depend on $m$. \end{corollary} The corollary is proved in the \hyperref[app]{Appendix}. The bounds in part (i) of the corollary follow from the proof of Proposition \ref{prp:gen_case}. The bounds in part (ii) are derived from the bounds in part (i), and they are especially easy to interpret. The larger the ``average'' derivative of $\log f(y|x)$ is, the smaller $\delta_m$ has to be to achieve a prespecified level for the right-hand side of (\ref{eq:bound1_intdsuplnFdz}). The constant $h_m$ has to be much smaller than $\sigma_m$, and $\sigma_m$ has to be much smaller than $\delta_m$ [condition (\ref{eq:cond_delta_sigma_h})] so that (\ref{eq:bound2_lof1_epsm2}) becomes sufficiently small. The size of (\ref{eq:bound3_intdsuplnFdztail}) and (\ref{eq:bound4_inty2_cFtail_2}) depends on how fast and by how much the tails of $f(y|x)f(x)$ dominate $d \log f(y|x)/dy$, $y^2$, and a constant. The approximation rate in part (iii) is derived from the bounds in part (ii). Expressions in (\ref{eq:bound1_intdsuplnFdz}) and (\ref{eq:bound2_lof1_epsm2}) can be immediately converted into expressions in terms of $m$.
To convert (\ref{eq:bound3_intdsuplnFdztail}) and (\ref{eq:bound4_inty2_cFtail_2}) into expressions in terms of $m$, one seems to need slightly more than integrability of ${\sup_{z \in C_{r}(y)}} \Vert d \log f(z|x) / dz\Vert$ [condition (\ref{eq:bdd_y_dlogfdz})] and slightly more than finiteness of the second moments of $y$ [condition (\ref{eq:bdd_q_moment})]. Under these conditions, (\ref{eq:bound3_intdsuplnFdztail}) and (\ref{eq:bound4_inty2_cFtail_2}) are bounded by $(h_m m^{1/d})^{-(q-2)}$ times a constant (see the corollary proof). An upper bound on $(h_m m^{1/d})^{-(q-2)}$, (\ref{eq:bound1_intdsuplnFdz}) and (\ref{eq:bound2_lof1_epsm2}) gives the rate in (\ref{eq:bd_rate}). This upper bound has to be strictly larger than (\ref{eq:bd_rate}) with $\varepsilon=0$, as I show in the corollary proof. For distributions with exponentially declining tails, (\ref{eq:bound3_intdsuplnFdztail}) and (\ref{eq:bound4_inty2_cFtail_2}) can decrease exponentially in $h_m m^{1/d}$. In this case, one can set $q=\infty$ in (\ref{eq:bd_rate}) (see Example \ref{ex:2side_exp_flex_mean_rates} below). The dimension of $y$ enters the approximation bounds exponentially. The dimension of $x$ does not affect the bound and the approximation rate for the ``infeasible'' model because this model is constructed with the use of $F(A_j^m|x)$'s, which are unknown functions of $x$. The following sections shed some light on the role of the dimension of $x$ in approximating $f(y|x)$ by feasible models. \section{Flexible multinomial choice models for mixing probabilities} \label{sec:poly_logit} This section gives conditions under which approximation results for the ``infeasible'' model $\mathcal{M}_0$ also hold for a model with logit mixing probabilities that include polynomial terms in $x$. It also shows how to extend these results to multinomial probit and other models for mixing probabilities.
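To fix ideas, the class of models studied in this section can be sketched numerically as follows. This is an illustrative toy implementation, not part of the paper: the function names, the particular polynomials and the component parameters are all hypothetical. It builds a mixture of univariate normals whose mixing probabilities are a logit (softmax) transformation of polynomials in $x$.

```python
import math

def phi(y, mu, sigma):
    # Univariate normal density.
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def logit_weights(x, poly_coefs):
    # poly_coefs[j] holds the coefficients of a polynomial P_j(x) = sum_k c_k x^k;
    # the mixing probabilities are a numerically stabilized softmax of the P_j(x).
    scores = [sum(c * x ** k for k, c in enumerate(cs)) for cs in poly_coefs]
    s_max = max(scores)
    w = [math.exp(s - s_max) for s in scores]
    total = sum(w)
    return [v / total for v in w]

def mixture_density(y, x, poly_coefs, mus, sigmas):
    # p(y|x) = sum_j alpha_j(x) * phi(y; mu_j, sigma_j).
    alphas = logit_weights(x, poly_coefs)
    return sum(a * phi(y, mu, s) for a, mu, s in zip(alphas, mus, sigmas))

# Hypothetical components and polynomials, purely for illustration.
poly = [[0.0, 1.0], [0.5, -1.0], [0.0, 0.0]]
mus = [-1.0, 0.0, 1.0]
sigmas = [0.5, 0.5, 0.5]
weights = logit_weights(0.3, poly)
```

Any richer polynomial specification plugs into `logit_weights` unchanged; by construction the weights form a probability distribution over components for every $x$, so the mixture is a proper conditional density in $y$.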
\begin{assumption} \label{assn:XcompactLogFcont} $X$ is compact and for partitions $A_j^m$, $j=0,1,\ldots,m$ satisfying (\ref{eq:con_partition_gen_case}), $F(A_j^m|x)$ is a continuous function of $x$ on $X$ and $F(A_j^m|x)>0$ [the support of $f(\cdot|x)$ does not depend on $x$]. \end{assumption} Under this assumption (by the Stone--Weierstrass theorem) for any sequence of $\varepsilon_m \rightarrow0$, $\varepsilon_m>0$ there exist finite order polynomials in $x$, $P_j^m(x )$ such that \begin{equation} \label{eq:logFcont} |{\log F(A_j^m|x) - P_j^m(x )}| < \varepsilon_m\qquad \forall x \in X, j=1,\ldots,m. \end{equation} Let $p(y|x, \mathcal{M}_1)$ denote a model with $\sigma_j^m$ and $\mu _j^m$ independent of $x$ and logit mixing probabilities, \begin{eqnarray*} \alpha_j^m(x ,\mathcal{M}_1) & = & \frac{ \exp\{ P_j^m(x ) \}}{\sum _{k=1}^m \exp\{ P_k^m(x )\} } \nonumber\\ & = & \frac{ F(A_j^m|x) \exp\{ P_j^m(x ) - \log F(A_j^m|x) \}}{\sum _{k=1}^m F(A_k^m|x) \exp\{ P_k^m(x ) - \log F(A_k^m|x)\} }. \end{eqnarray*} Condition (\ref{eq:logFcont}) implies $\alpha_j^m(x,\mathcal{M}_1) \in(F(A_j^m|x) \exp\{-2 \varepsilon_m\}, F(A_j^m|x) \exp\{2 \varepsilon_m\})$. The following corollary immediately follows. \begin{corollary} \label{crl:logit_polynomials} If Assumption \ref{assn:XcompactLogFcont} and the conditions of Proposition \ref{prp:gen_case} hold then $d_{\mathrm{KL}}(F,\mathcal{M}_1)$ is bounded above and below by $d_{\mathrm{KL}}(F,\mathcal{M}_0) \pm2 \varepsilon _m$ and thus converges to zero. \end{corollary} It seems possible to extend this corollary to other models for mixing probabilities, in particular, to a class of multinomial choice models in which mixing probabilities have the following representation: \[ \alpha_j^m(x)=\operatorname{Pr} [(e_0,\ldots,e_m)\dvtx v_j(x)+e_j \geq v_k(x)+e_k, k \in\{0,\ldots,m\} ], \] where $v_j(x)$ are flexible functions of $x$ and $e_k$'s are i.i.d. 
Multinomial logit and probit models fall into this category with polynomial $v_j(x)$ and extreme value and normal distributions for $e_k$'s. The proof of Proposition 1 in \citet{HotzMiller93} implies that if $e_k$ are i.i.d. and have a density with respect to the Lebesgue measure, which is positive on $R$, then \[ (v_0(x),\ldots,v_{m-1}(x)) = Q(\alpha_0^m(x),\ldots,\alpha_{m-1}^m(x)), \] where $v_{m}(x)$ is normalized to 0 and $Q$ and $Q^{-1}$ are differentiable mappings defined, respectively, on $R^m$ and the interior of the $m$-dimensional simplex. Flexible functional forms for $(v_0(x),\ldots,v_{m-1}(x))$ can be used to approximate $Q(F(A_0^m|x),\ldots,F(A_{m-1}^m|x))$. Then $(\alpha_0^m(x),\ldots,\alpha_{m-1}^m(x)) = Q^{-1}(v_0(x),\ldots,v_{m-1}(x))$ will approximate $(F(A_0^m|x),\ldots,F(A_{m-1}^m|x))$. To get an analog of Corollary \ref{crl:logit_polynomials} one only needs to show that $Q^{-1}$ transfers small additive approximation errors in $v_j(x)$ into multiplicative\vspace*{-1pt} approximation errors for $\alpha_j^m(x)$ that are close to one. Since the mapping $Q^{-1}$ is continuous, this is the case as long as $F(A_j^m|x)$ are positive. Thus, it seems one does not need more than Assumption \ref{assn:XcompactLogFcont} to extend Corollary \ref{crl:logit_polynomials} to other models for mixing probabilities. Of course, Corollary \ref{crl:logit_polynomials} can be formulated for any other method for approximating continuous functions in the sup norm on compacts, for example, for splines instead of the polynomials in the logit mixing probabilities. The corollary implies that for $F$ satisfying the conditions of Corollary \ref{crl:gen_case_bounds}, bounds on the approximation error for model $\mathcal{M}_1$ are given by the bounds in the corollary for $\mathcal{M}_0$ plus $\varepsilon_m$.
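The multiplicative band behind Corollary \ref{crl:logit_polynomials}, namely that (\ref{eq:logFcont}) implies $\alpha_j^m(x,\mathcal{M}_1) \in (F(A_j^m|x)e^{-2\varepsilon_m}, F(A_j^m|x)e^{2\varepsilon_m})$, is easy to verify numerically. The sketch below (toy probabilities and hypothetical names, not from the paper) perturbs $\log F(A_j^m|x)$ by at most $\varepsilon$ and checks that the resulting softmax weights stay inside the band.

```python
import math
import random

def softmax(scores):
    # Numerically stabilized softmax.
    s_max = max(scores)
    e = [math.exp(s - s_max) for s in scores]
    total = sum(e)
    return [v / total for v in e]

random.seed(0)
m, eps = 20, 0.05
raw = [random.random() + 0.1 for _ in range(m)]
F = [v / sum(raw) for v in raw]      # stand-in for F(A_j^m|x), summing to one
# Polynomial values within eps of log F(A_j^m|x), as in condition (logFcont).
P = [math.log(f) + random.uniform(-eps, eps) for f in F]
alpha = softmax(P)
within_band = all(
    f * math.exp(-2.0 * eps) < a < f * math.exp(2.0 * eps)
    for f, a in zip(F, alpha)
)
```

The band follows from $\alpha_j = F_j e^{\delta_j} / \sum_k F_k e^{\delta_k}$ with $|\delta_j| < \varepsilon$: the numerator factor and the denominator each contribute at most a factor $e^{\varepsilon}$.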
Results from the function approximation theory [see, e.g., Section~3.3 in \citet{Rustndp96} for a survey] suggest that to achieve a worst case approximation bound $\varepsilon_m$, computable approximations to Lipschitz continuous functions must involve the number of parameters proportional to $\varepsilon_m^{-d_x}$ ($\varepsilon_m^{-d_x/n}$ if the function has bounded derivatives up to order $n+1$). Thus, the number of parameters in the polynomials (or splines) $P_j^m(x)$ depends at best exponentially on the dimension of~$x$. It might be very difficult to estimate a model with high order polynomials in the logit mixing probabilities. The following section shows that it is not necessary to use high order polynomials in logit specification to attain flexibility. However, as I discuss at the end of the following section, polynomial terms might reduce the number of mixture components required to achieve a specified approximation precision. \section{Linear indices in logit} \label{sec:lin_logit} In this section I explore an alternative approximation to $F(A_j^m|x)$ based on logit mixing probabilities that use only linear indices in~$x$. The following assumption is a slightly stricter analog of Assumption \ref{assn:general_case}. \begin{assumption} \label{assn:logitlin_case} 1. $X=[0,1]^{d_x}$ (the arguments would go through for a bounded $X$). \smallskipamount=0pt \begin{enumerate}[2.] \item[2.] \hypertarget{assnitem:logitlin_case_1} $f(y|x)$ is continuous in $(y,x)$ a.s. $F$ \item[3.] \hypertarget{assnitem:logitlin_case_2} The second moments of $y$ are finite. \item[4.] 
\hypertarget{assnitem:logitlin_case_3} For any $(y,x)$ there exists a hypercube $C(r,y,x)$ with side length $r>0$ and $y \in C(r,y,x)$ such that (i) \begin{equation} \label{eq:IntBoundFinfFyx} \int\log\frac{f(y|x)}{\inf_{z \in C(r,y,x), \Vert t-x\Vert\leq r} f(z|t) } F(dy,dx) < \infty \end{equation} and (ii) there exists $M$ such that for any $m \geq M$, if $y \in A_0^m$ then $C(r,y,x) \cap A_0^m$ contains a hypercube $C_0(r,y,x)$ with side length $r/2$ and a vertex at $y$, and if $y \in Y \setminus A_0^m$, then $C(r,y,x) \cap(Y \setminus A_0^m)$ contains a hypercube $C_1(r,y,x)$ with side length $r/2$ and a vertex at $y$. \end{enumerate} \end{assumption} Let $B_i^m$, $i=1,\ldots,N(m)$, be equal-size half-open hypercubes forming a partition of $X=[0,1]^{d_x}$. The partition becomes finer as $m$ increases, $\lambda(B_i^m)=N(m)^{-1} \rightarrow 0$. Let $x_i^m$ denote the center of $B_i^m$. Before looking at logit, let us consider an ``infeasible'' model $\mathcal{M}_2$, \[ p(y|x ,\mathcal{M}_2) = \sum_{i=1}^{N(m)} \Biggl[ \sum_{j=1}^m \alpha_{ij}^m(x ,\mathcal{M}_2)\phi(y, \mu_j^m,\sigma_m) +\alpha_{i0}^m(x ,\mathcal{M}_2)\phi(y, 0,\sigma_0) \Biggr], \] where the mixing probabilities $\alpha_{ij}^m(x ,\mathcal{M}_2) = 1_{B_i^m}(x) F(A_j^m|x_i^m)$. As the partition of $X$ becomes finer,\vspace*{-2pt} model $\mathcal{M}_2$ approximates $\mathcal{M}_0$ because $F(A_j^m|x) \approx\sum_{i=1}^{N(m)} 1_{B_i^m}(x) F(A_j^m|x_i^m)$ under continuity\vspace*{1pt} of $f(y|x)$ in $x$ (part \hyperlink{assnitem:logitlin_case_1}{2} of Assumption~\ref{assn:logitlin_case}). Since $\mathcal{M}_2$ is not interesting on its own, I do not make this argument precise here.
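The approximation $F(A_j^m|x) \approx \sum_i 1_{B_i^m}(x) F(A_j^m|x_i^m)$ underlying $\mathcal{M}_2$ can be illustrated with a toy computation. The smooth function `F_A` below is a hypothetical stand-in for $x \mapsto F(A_j^m|x)$ on $X=[0,1]$; the piecewise-constant approximation uses the value at the center of the cell containing $x$, and its sup error vanishes as the partition is refined.

```python
import math

def F_A(x):
    # Hypothetical smooth mixing probability x -> F(A_j^m|x) on X = [0,1].
    return 0.3 + 0.2 * math.sin(x)

def piecewise_const(x, N):
    # Value at the center x_i^m of the cell B_i^m (of length 1/N) containing x.
    i = min(int(x * N), N - 1)
    return F_A((i + 0.5) / N)

def sup_error(N, grid=1000):
    # Sup distance between F_A and its piecewise-constant version, on a fine grid.
    return max(abs(F_A(j / grid) - piecewise_const(j / grid, N)) for j in range(grid + 1))

errors = [sup_error(N) for N in (2, 8, 32)]
```

For a Lipschitz mixing probability the sup error is of order $1/N$, which is the continuity-in-$x$ mechanism the text appeals to.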
Instead I employ this idea to get approximation results for model $\mathcal{M}_3$ constructed similarly to $\mathcal{M}_2$ but with logit mixing probabilities, \begin{eqnarray}\label{eq:appr1Bxi} \alpha_{ij}^m(x ,\mathcal{M}_3) & = & \frac{ \exp\{ \log F(A_j^m|x_i^m) - R_m (x_i^{m \prime} x_i^m - 2 x_i^{m \prime} x ) \}} {\sum_{k,l} \exp\{\log F(A_k^m|x_l^m) - R_m (x_l^{m \prime} x_l^m - 2 x_l^{m \prime} x )\} } \nonumber\\[-8pt]\\[-8pt] & = & F(A_j^m|x_i^m) \frac{ \exp\{ - R_m (x_i^{m \prime} x_i^m - 2 x_i^{m \prime} x ) \}} {\sum_{l} \exp\{ - R_m (x_l^{m \prime} x_l^m - 2 x_l^{m \prime} x )\} }.\nonumber \end{eqnarray} In this expression, $R_m$ is a positive sequence diverging to infinity that satisfies the following condition: \begin{equation} \label{eq:CondSmRm} \exp\{-R_m s_m\}/s_m^{d_x/2} \rightarrow0\qquad \mbox{where } s_m = d_x \lambda(B_i^m)^{2/d_x} \rightarrow0 \end{equation} is the squared diagonal of $B_i^m$. This condition specifies that $R_m$ should increase fast relative to how fine the partition of $X$ becomes. It is always possible to define a sequence $R_m$ satisfying (\ref{eq:CondSmRm}), for example, $R_m=s_m^{-2}$. \begin{proposition} \label{prp:gen_linear_logit} If condition (\ref{eq:CondSmRm}), Assumption \ref{assn:logitlin_case}, and the conditions of Proposition \ref{prp:gen_case} hold, then $d_{\mathrm{KL}}(F,\mathcal{M}_3) \rightarrow0$ as $m\rightarrow\infty$. \end{proposition} The proposition is proved in the \hyperref[app]{Appendix}. The proof shows that the expression in (\ref{eq:appr1Bxi}) multiplying $F(A_j^m|x_i^m)$ behaves like $1_{B_{i}^m}(x)$ when $R_m$ becomes large and then uses the same arguments as in the proof of Proposition \ref{prp:gen_case}. Attempts to develop similar results for mixing probabilities modeled by multinomial probit [see, e.g., \citet{Geweke07} for applications] were not successful. It would not be hard to make multinomial probit mixing probabilities behave like indicator functions.
However, making them behave like an indicator times $F(A_j^m|x_i^m)$ as in (\ref{eq:appr1Bxi}) seems to be more difficult. The bounds on the approximation error for $\mathcal{M}_3$ and $f(y|x)$ positive everywhere are similar to the bounds for $\mathcal{M}_0$ obtained in Corollary \ref{crl:gen_case_bounds}. This is formalized in the following corollary. \begin{corollary} \label{crl:linlogit_case_bounds} Part \textup{(i)}. Suppose the conditions of Proposition \ref{prp:gen_linear_logit} hold, $f(y|x)$ is positive for any $y \in Y=R^d$ and any $x \in X$, $f(y|x)$ is continuously differentiable in $(y,x)$, and instead of (\ref{eq:IntBoundFinfFyx}) the following condition holds: \begin{equation} \label{eq:dfdyx_integrable} \int\sup_{z \in C_{r}(y), \Vert x-t\Vert\leq r} \biggl\Vert\frac{d \log f(z|t) }{d(z,t)}\biggr\Vert F(dy,dx) < \infty; \end{equation} then, for all sufficiently large $m$, \begin{eqnarray}\label{eq:bound1_intdsuplnFdyx}\qquad\quad d_{\mathrm{KL}}(F,\mathcal{M}_3) &\leq& \biggl(\delta_m \frac{d^{1/2}}{2} + s_m^{1/2}\biggr)\nonumber\\[-8pt]\\[-8pt] &&{} \times\int\sup_{z \in C_{\delta_m}(y), \Vert x-t\Vert\leq s_m^{1/2}} \biggl\Vert\frac{d \log f(z|t) }{d(z,t)}\biggr\Vert F(dy,dx) \nonumber\\ \label{eq:bound2_lof1_epsm2_2} &&{} +2 \frac{3 d^{3/2} \delta_m^{d-1} h_m }{(2 \pi)^{d/2} \sigma_m^d} +2 \exp\biggl\{-\frac{(\delta_m/\sigma_m)^2}{8} \biggr\} \\ \label{eq:bound3_intdsuplnFdzxtail} &&{} + \frac{r d^{1/2}}{2} \int_{B_{\delta_m}(A_0^m)} \sup_{z \in C_{r}(y), \Vert x-t\Vert\leq r} \biggl\Vert\frac{d \log f(z|t) }{d(z,t)}\biggr\Vert F(dy,dx) \\ \label{eq:bound4_inty2_cFtail_2_2} &&{} + \int_{B_{\delta_m}(A_0^m)} \biggl[ \frac{y^{\prime}y}{2 \sigma_0^2} - \log\frac{(r/2)^d}{(2 \pi\sigma_0^2)^{d/2}} \biggr] F(dy,dx) \\ \label{eq:bound5_expRmsm} &&{} + \log[1-d_x^{d_x/2}\exp\{-R_m s_m\}/s_m^{d_x/2} ], \end{eqnarray} and the bounds in (\ref{eq:bound1_intdsuplnFdyx})--(\ref{eq:bound5_expRmsm}) converge to zero as $m \rightarrow\infty$. Part \textup{(ii)}.
If, in addition to assumptions from part \textup{(i)}, for some $q>2$ and some $i_1 \in\{1,\ldots,d\}$, \begin{equation} \label{eq:bdd_q_moment_logit} \int|y_i|^q F(dy) < \infty,\qquad i \in\{1,\ldots,d\}, \end{equation} and \begin{equation} \label{eq:bdd_y_dlogfdzt} \int|y_{i_1}|^{q-2} \sup_{z \in C_{r}(y), \Vert x-t\Vert\leq r} \biggl\Vert \frac{d \log f(z|t) }{d(z,t)}\biggr\Vert F(dy,dx) < \infty, \end{equation} then the approximation error bound can be written as \begin{equation} \label{eq:bd_rate_linlogit} d_{\mathrm{KL}}(F,\mathcal{M}_3) \leq \mbox{constant} \cdot[m N(m) ]^{-1/ (d_x+d \cdot[2+1 / (q-2)+\varepsilon])}, \end{equation} where $m N(m)+1$ is the number of mixture components in $\mathcal{M}_3$ and $\varepsilon>0$ can be arbitrarily close to zero. \end{corollary} From the definition of models $\mathcal{M}_2$ and $\mathcal{M}_3$ and from the comparison of the convergence rates in (\ref{eq:bd_rate}) and (\ref{eq:bd_rate_linlogit}), it is clear that using only linear indices in $x$ in the mixing probabilities does not come without a cost. The number of mixing components in model $\mathcal{M}_3$ that approximates an infeasible model $\mathcal{M}_0$ is equal to $m N(m)+1$ while for model with polynomial terms in logit, $\mathcal{M}_1$, this number is $m+1$ (Corollary \ref{crl:logit_polynomials}). The proof of Corollary \ref{crl:linlogit_case_bounds} implies that the number of hypercubes in the partition of $X$, $N(m)$, increases exponentially with the dimensionality of $X$. Thus, the number of parameters in model $\mathcal{M}_3$ grows exponentially in the dimension of $x$ (the exponential growth of the number of parameters in $\mathcal{M}_1$ is discussed at the end of the previous section). 
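The mechanism behind Proposition \ref{prp:gen_linear_logit}, namely that the factor multiplying $F(A_j^m|x_i^m)$ in (\ref{eq:appr1Bxi}) behaves like $1_{B_i^m}(x)$ for large $R_m$, can be seen in a small numerical experiment (univariate $x$; all values hypothetical). Up to a factor that does not depend on $i$, the weight of cell $i$ is $\exp\{-R_m \Vert x-x_i^m\Vert^2\}$, so for large $R_m$ the normalized weights concentrate on the cell whose center is nearest to $x$.

```python
import math

def cell_weights(x, centers, R):
    # Normalized weights exp{-R (x_i'x_i - 2 x_i'x)} as in the linear-index logit;
    # univariate x for simplicity.  Up to a factor free of i this is exp{-R (x - x_i)^2}.
    scores = [-R * (c * c - 2.0 * c * x) for c in centers]
    s_max = max(scores)
    e = [math.exp(s - s_max) for s in scores]
    total = sum(e)
    return [v / total for v in e]

centers = [(i + 0.5) / 10.0 for i in range(10)]   # centers x_i^m of 10 cells of [0,1]
x = 0.234
nearest = min(range(10), key=lambda i: abs(x - centers[i]))
w_small = cell_weights(x, centers, R=1.0)      # weights spread over many cells
w_large = cell_weights(x, centers, R=5000.0)   # weights nearly an indicator of the nearest cell
```

Only the linear term $2 R_m x_i^{m\prime} x$ involves $x$, which is why indicator-like behavior is attainable with linear indices alone.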
Overall, approximation results for $\mathcal{M}_1$ and $\mathcal{M}_3$ do not seem to suggest which model might perform better in practice; however, they seem to identify a tradeoff between the number of components in the mixture and the flexibility of models for the mixing probabilities. \section{Flexible means and variances} \label{sec:flex_mean} In this section, I show that a finite mixture of normal regressions models, in which mixing probabilities do not depend on $x$, can be quite flexible. However, the results also suggest that specifications in which mixing probabilities are flexible functions of $x$ might perform better. There is a large literature on finite mixture of regressions models. In early work, mixtures of two normal regressions were considered [see, e.g., \citet{QuandtRamsey78} and \citet{Kiefer78}]. \citet{JonesMcLachlan92} applied the EM algorithm for estimation of finite mixtures of normal regressions. Fitting of more general finite mixtures of generalized linear models has been considered in \citet{Jansen93} and \citet {WedelDeSarbo95} among others. Many more references can be found in a comprehensive book on finite mixture models by \citet{McLachlanPeel00}. To the best of my knowledge, the literature on finite mixtures of regressions does not contain any approximation results for conditional densities. The closest analogs of the results I obtain can be found in the literature on finite mixtures of unconditional densities [see, e.g., \citet{ZeeviMeir1997} and references therein and \citet {LiBarron99}]. Even for mixtures of unconditional densities approximation results for the KL distance, which is useful for establishing consistency of Bayesian or classical maximum likelihood estimators, seem to be scarce. Approximation results in the KL distance for convex combinations of densities in \citet{ZeeviMeir1997} and \citet{LiBarron99} seem to apply to mixtures of truncated normals and to target densities that are compactly supported. 
Some of these results are very strong. For example, for target densities that are general mixtures of the densities mixed in the model, approximation error bounds obtained by \citet{LiBarron99} are proportional to $m^{-1}$. If there are no covariates $x$, then the infeasible model from Section \ref{sec:Infeasible_model} is simply a finite mixture of multivariate normals. For an elaboration on this idea in the context of joint and conditional density estimation and for consistency results for a Bayesian estimator based on this model see \citet{NoretsPelenis09}. The convergence rates obtained for this model in Section \ref {sec:M0bounds} are slower than $m^{-1}$. However, the convergence rates are not directly comparable as the target densities in \citet {LiBarron99} are different from those considered here. Model $\mathcal{M}_4$ constructed in this section is very similar to model $\mathcal{M}_0$ except for one important difference. In $\mathcal{M}_4$, fine equal probability partitions of $Y$ are used instead of fine equal length partitions in $\mathcal{M}_0$. As will be clear below, $\mathcal{M}_4$ defined in this way allows mixing probabilities to be independent of $x$. However, it requires the means of the mixed normals to be flexible functions of $x$. In this section, I~assume that the response variable is univariate: $Y \subset R$ or $d=1$ (all the results from previous sections were obtained for arbitrary $d$). If fine equal probability partitions can be well defined for distributions of multivariate random variables and if these partitions depend smoothly on covariates, then it might be possible to extend the results of this section to multivariate responses. I do not pursue this conjecture here. Define model $\mathcal{M}_4$ as follows: \[ p(y|x ,\mathcal{M}_4) = \sum_{j=1}^m \alpha_j^m \phi(y, \mu_j^m(x ),\sigma_j^m(x )). 
\] For a given $x$ let $A_j^m(x)$, $j=0,1,\ldots,m$, be a partition of $Y$ such that $\bigcup_{j=1}^m A_j^m(x)$ is a nondecreasing interval and \begin{eqnarray} \label{eq:EPPconditions} F(A_j^m(x)|x) &=& p_m,\qquad j>0,\nonumber\\[-8pt]\\[-8pt] F(A_0^m(x)|x) &=& 1 - m p_m \quad\mbox{and}\quad m p_m \rightarrow1,\nonumber \end{eqnarray} for some $p_m \in(0,m^{-1}]$ that does not depend on $x$. Define an upper bound on the length of an element of the fine part of the partition $h_m(x) \geq\break\max_{j>0} \lambda(A_j^m(x))$. The candidate mixing probabilities are given by $\alpha _j^m=F(A_j^m(x)|x)$ and $\mu_j^m(x) \in A_j^m(x)$. The standard deviations $\sigma_j^m(x )=\sigma_m(x)$ for $j>0$ and $\sigma_0^m(x )=\sigma_0(x)$ are treated as functions of $x$ which is not essential but it weakens the restrictions on $F$ (Corollaries \ref{crl:flex_mean_bddsupport} and \ref {crl:poly_mean_bddsupport} and Examples \ref{ex:exponential_flex_mean} and \ref{ex:uniform_flex_mean} below illustrate this point). Note that $\mathcal{M}_4$ is an infeasible model; in Corollary \ref{crl:poly_mean_bddsupport} below, I consider a feasible model $\mathcal{M}_5$ in which $\mu_j^m(x)$ are approximated by polynomials (see also Examples~\ref{ex:exponential_flex_mean} and~\ref{ex:uniform_flex_mean}). Suppose sequences $\delta_m(x)$, $\sigma_m(x)$, and $h_m(x)$ satisfy \begin{equation} \label{eq:cond_delta_sigma_h_x} \delta_m(x) \rightarrow0,\qquad \frac{\sigma_m(x)} {\delta_m(x)} \rightarrow0, \qquad\frac{h_m(x)} {\sigma_m(x)} \rightarrow0. \end{equation} Next, let us introduce the following restrictions on $F$. \begin{assumption} \label{assn:flexiblemu_case} 1. Partitions $A_j^m(x)$ used in construction of $p(y|x ,\mathcal {M}_4)$ satisfy (\ref{eq:EPPconditions}), and (\ref{eq:cond_delta_sigma_h_x}) holds. \smallskipamount=0pt \begin{enumerate}[2.] \item[2.] \hypertarget{assnitem:flexiblemu_case_1} $f(y|x)$ is continuous in $y$ a.s. $F$ \item[3.] 
\hypertarget{assnitem:flexiblemu_case_3} For any $(y,x)$ there exists an interval $C(r(x),y,x)$ with length $r(x)>0$ and $y \in C(r(x),y,x)$ such that (i) \begin{equation} \label{eq:IntBoundFinfFflexmu} \int\log\frac{f(y|x)}{\inf_{z \in C(r(x),y,x)} f(z|x) } F(dy,dx) < \infty \end{equation} and (ii) there exists $M$ such that for any $m \geq M$, if $y \in A_0^m(x)$, then $C(r(x),y,x) \cap A_0^m(x)$ contains an interval $C_0(r(x),y,x)$ with an end at $y$ and length $r(x)/2$, and if $y \in Y \setminus A_0^m(x)$, then $C(r(x),y,x) \cap(Y \setminus A_0^m(x))$ contains an interval $C_1(r(x),y,x)$ with an end at $y$ and length $r(x)/2$. \item[4.] $h_m(x)$, $\sigma_m(x)$, and $r(x)$ satisfy \begin{equation} \label{eq:cond_r_sigma_h_x} \sup_x \frac{\sigma_m(x)} {r(x)} \rightarrow0,\qquad \sup_x \frac{h_m(x) } {\sigma_m(x)} \rightarrow0. \end{equation} \item[5.] $\sigma_0(x)$ and $r(x)$ satisfy \begin{equation} \label{eq:cond_sigma0_x} 1> 1/4 \geq\phi(y, 0,\sigma_0(x)) r(x)/2, \end{equation} which holds, for example, when $\sigma_0(x) \geq2 (2 \pi)^{-1/2} \cdot r(x)$. \item[6.] \hypertarget{assnitem:flexiblemu_case_2} $| {\int\log[\phi(y, 0,\sigma_0(x)) r(x)/2 ] F(dy,dx)}| < \infty$. \end{enumerate} \end{assumption} \begin{proposition} \label{prp:flexiblemu_case} If Assumption \ref{assn:flexiblemu_case} holds, then $d_{\mathrm{KL}}(F,\mathcal{M}_4) \rightarrow0$ as \mbox{$m\rightarrow\infty$}. \end{proposition} The proposition is proved in the \hyperref[app]{Appendix}. The assumptions of the proposition and their role in the proof are similar to those discussed in detail in Section \ref{sec:Infeasible_model} for~$\mathcal{M}_0$. The assumptions are satisfied by a large class of densities, as illustrated by the following corollaries and examples. Approximation error bounds for $\mathcal{M}_4$ are presented below in Corollary \ref{crl:flex_mean_rate}.
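A small numerical sketch of the equal-probability partition behind $\mathcal{M}_4$ may help before the corollaries. It uses an exponential conditional density with a hypothetical rate (not part of the paper): the cells $A_j^m(x)$ are built from quantiles, so each receives conditional probability exactly $p_m$ regardless of $x$, which is what allows the mixing weights $\alpha_j^m = p_m$ to be free of $x$.

```python
import math

def quantile(p, gam):
    # F^{-1}(p|x) for the exponential density gamma(x) * exp(-gamma(x) * y).
    return -math.log(1.0 - p) / gam

def partition_probs(m, p_m, gam):
    # Conditional probabilities of A_j^m(x) = [F^{-1}((j-1)p_m|x), F^{-1}(j p_m|x)).
    edges = [quantile(j * p_m, gam) for j in range(m + 1)]
    cdf = lambda y: 1.0 - math.exp(-gam * y)
    return [cdf(edges[j + 1]) - cdf(edges[j]) for j in range(m)]

gam = 2.0                              # hypothetical value of gamma(x) at a fixed x
m = 50
p_m = (m - math.sqrt(m)) / m ** 2      # a valid choice: p_m <= 1/m and m * p_m -> 1
probs = partition_probs(m, p_m, gam)
mass_A0 = 1.0 - m * p_m                # probability left for the tail cell A_0^m(x)
```

Changing `gam` moves the cell boundaries but leaves every cell probability equal to $p_m$; the $x$-dependence is absorbed entirely by the means $\mu_j^m(x) \in A_j^m(x)$ and the scales.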
\mbox{} \begin{corollary} \label{crl:flex_mean_bddsupport} Assume: \begin{enumerate} \item$f(y|x)$ is continuous in $y$ in the interior of the support of $f(y|x)$ for all $x \in X$. \item There exists $\overline{f}<\infty$, such that $ f(y|x) \leq \overline{f}$ for all $(y,x)$. \item The support of $f(\cdot|x)$ is given by a finite interval $[a(x),b(x)]$, where $a(x)$ and $b(x)$ are square integrable. Also, for some $\underline{f} \in(0,1)$, a positive integer $n$, and\vspace*{1pt} $a(x) \leq a_1(x) \leq b_1(x) \leq b(x)$: $f(y|x) \geq\underline{f}$ on $[a_1(x) , b_1(x)]$, \begin{figure} \includegraphics{765f02.eps} \caption{Approximation of densities with bounded support by $\mathcal{M}_4$.} \label{fig:fmubs} \end{figure} $f(y|x) \geq\underline{f} \cdot[ y-a(x)]^n$ on $(a(x) , a_1(x))$, and $f(y|x) \geq\underline{f} \cdot[b(x)-y]^n$ on $(b_1(x) , b(x))$. Figure~\ref{fig:fmubs} provides an illustration for $n=1$. \item \hypertarget{crlitem:monotone} There exists $r>0$ such that $f(\cdot|x)$ is nondecreasing on $(a(x) , a_1(x)+r/2)$ and nonincreasing on $(b_1(x) - r/2, b(x))$ for all $x \in X$. \end{enumerate} Then for $\mathcal{M}_4$ constructed so that $p_m=1/m$, $A_0^m = \varnothing$, $\mu_j^m(x) \in A_j^m(x)$ and $\sigma_m(x)=p_m^{1/[4(n+1)]}$ and $\sigma_0(x) = 2 (2 \pi)^{-1/2}\cdot r$ are independent of $x$, $d_{\mathrm{KL}}(F,\mathcal{M}_4) \rightarrow0$. \end{corollary} \begin{corollary} \label{crl:poly_mean_bddsupport} Assume the conditions from Corollary \ref{crl:flex_mean_bddsupport}, $F^{-1}(p|x)$ is continuous in $x$ for all $p \in[0,1]$, and $X$ is compact. Then there exists a sequence of polynomials $P_j^m(x )$ such that $d_{\mathrm{KL}}(F,\mathcal{M}_5) \rightarrow0$ where \[ p(y|x ,\mathcal{M}_5) = \sum_{j=1}^m p_m \phi(y, P_j^m(x ),p_m^{1/8}). \] \end{corollary} \begin{pf} Let $\mu_j^m(x)=F^{-1}((j-1/2)p_m|x)$.
Note that $\mu_j^m(x) \in A_j^m(x)=[F^{-1}((j-1)p_m|x),F^{-1}(j p_m|x)]$ and \[ p_m/2 = \int_{\mu_j^m(x)}^{F^{-1}(j p_m|x)} f(y|x) \,dy \leq\bigl(F^{-1}(j p_m|x) - \mu_j^m(x)\bigr) \overline{f}. \] Similarly, $p_m/2 \leq(\mu_j^m(x) - F^{-1}((j-1) p_m|x)) \overline{f}$. Thus, for $\varepsilon_m = p_m / (2 \overline{f})$, $(\mu_j^m(x)-\varepsilon_m, \mu_j^m(x)+\varepsilon_m) \subset A_j^m(x)$. By the Stone--Weierstrass theorem there exist finite order polynomials in $x$, $P_j^m(x )$ such that $|P_j^m(x ) - \mu_j^m(x)| < \varepsilon_m$. Therefore, $P_j^m(x ) \in A_j^m(x)$, which was the only requirement on the means of the mixture components in Corollary \ref{crl:flex_mean_bddsupport}. \end{pf} \begin{example} \label{ex:exponential_flex_mean} Exponential distribution, $f(y|x) = \gamma(x) \exp\{- \gamma(x) y\}$,\break $\gamma(x) \geq\underline{\gamma} > 0$, $\gamma(x)$ is continuous, $\int\gamma \,dF < \infty$ and the second moment of $y$ is finite ($\int \gamma^{-2} \,dF < \infty$). The quantile function is given by $F^{-1}(p|x)= - \gamma(x)^{-1} \log(1-p)$. Let the partition be such that $A_0^m = [F^{-1}(m p_m|x),\infty)$. Since the exponential density is decreasing the largest interval in the fine part of the partition is given by $A_m^m = [ F^{-1}((m-1) p_m|x), F^{-1}( m p_m | x ) )$. Therefore, $h_m(x)= h_m = \underline{\gamma}^{-1} \log(1+ p_m / (1-p_m m))$. Choosing $p_m = (m-m^{0.5})/m^2$ guarantees that $h_m \rightarrow0$. For $\sigma_m= h_m^{1/4}$, and $\delta_m(x)=h_m^{1/8}$, and $r(x)= 1$ conditions (\ref{eq:EPPconditions}), (\ref{eq:cond_delta_sigma_h_x}) and (\ref{eq:cond_r_sigma_h_x}) hold. Next, let $C(1,y,x)=[y, y+1]$ if $y \in[0, 1/2]$, $C(1,y,x)=[y-1/2, y+1/2]$ if $y \in[1/2, \infty)$. Since \[ \inf_{z \in C(1,y,x)} f(z|x) \geq\gamma(x) \exp\{-\gamma(x) (y+1)\}, \] we have \[ 1 \leq f(y|x) \big/ \inf_{z \in C(1,y,x)} f(z|x) \leq\exp\{ \gamma(x) \}. 
\] Inequality (\ref{eq:IntBoundFinfFflexmu}) is satisfied since $\gamma (x)$ is assumed to be integrable. Finally, let $\sigma_0(x) = 2 (2 \pi)^{-1/2}$ so that equation (\ref{eq:cond_sigma0_x}) in Assumption \ref {assn:flexiblemu_case} holds. Then, \[ \biggl| \int\log[\phi(y, 0,\sigma_0(x)) r(x)/2 ] F(dy,dx)\biggr| = \biggl| \int\biggl[-\log(4) - \frac{y^2 \pi}{4}\biggr] F(dy,dx) \biggr| < \infty \] since the second moment of $y$ is assumed to be finite. Thus, condition \hyperlink{assnitem:flexiblemu_case_2}{6} of Assumption \ref {assn:flexiblemu_case} holds. If $X$ is compact the same argument as in the proof of Corollary \ref {crl:poly_mean_bddsupport} can be used to show that $\mu_j^m(x)$ can be polynomial in $x$ [for fixed $m$ there exists $\varepsilon_m>0$ such that $\lambda(A_j^m(x))>\varepsilon_m$ for all $x$ and $j$]. It is possible to give sufficient conditions for approximation results when $\gamma(x)$ is not bounded away from zero, for example, let $r(x)=\gamma(x)^{-1}$, $h_m(x) = \gamma(x)^{-1} \log(1+ p_m / (1-p_m m))$, etc. However, then $\sigma_m$ and $\sigma_0$ would have to be functions of $x$ [not necessarily flexible functions of $x$ but functions that would have the same order as $\gamma(x)$]. Also, $\gamma (x)^{-1}$ is not continuous and the argument I use for justifying the use of polynomial $\mu_j^m(x)$ breaks down in this case. \end{example} \begin{example} \label{ex:uniform_flex_mean} Uniform distribution, $f(y|x) = b(x)^{-1} 1_{[0,b(x)]}(y)$, $b(x)>0$ is continuous, $\int\log b \,dF < \infty$ and the second moment of $y$ is finite ($\int b^{2} \,dF < \infty$). This example demonstrates that the support of $f(y|x)$ does not have to be (un)bounded uniformly in $x$ as long as normal variances are modeled as flexible functions of $x$. Let the\vspace*{-1pt} partition be such that $A_0^m = \varnothing$ and $p_m=F(A_j^m|x)=m^{-1}$, $j>0$. Note that $h_m(x)= b(x) / m$. 
For\vspace*{1pt} $\sigma_m(x)= b(x) p_m^{1/4}$, and $\delta_m(x)=b(x) p_m^{1/8}$, and $r(x)= b(x)$ conditions (\ref{eq:EPPconditions}), (\ref{eq:cond_delta_sigma_h_x}) and (\ref{eq:cond_r_sigma_h_x}) hold. Next, let $C(r(x),y,x)=[0,b(x)]$. Note that $f(y|x) / \inf_{z \in C(r(x),y,x)} f(z|x) = 1$, and inequality (\ref{eq:IntBoundFinfFflexmu}) is satisfied. Finally, let $\sigma_0(x) = 2 (2 \pi)^{-1/2} b(x)$ so that inequality (\ref{eq:cond_sigma0_x}) in Assumption \ref {assn:flexiblemu_case} holds. Then, \[ \biggl| \int\log[\phi(y, 0,\sigma_0(x)) r(x)/2 ] F(dy,dx)\biggr| = |{{-}\log(4) - \pi/ (3 \cdot4)}| < \infty \] and condition \hyperlink{assnitem:flexiblemu_case_2}{6} of Assumption \ref {assn:flexiblemu_case} holds. If $X$ is compact and $b(x)$ is bounded away from zero then the same argument, as in the proof of Corollary \ref{crl:poly_mean_bddsupport}, can be used to show that $\mu_j^m(x)$ can be polynomial in $x$ [for fixed $m$ there exists $\varepsilon_m>0$ such that $\lambda (A_j^m(x))>\varepsilon_m$ for all $x$ and $j$]. \end{example} \begin{corollary} \label{crl:flex_mean_rate} Suppose conditions of Proposition \ref{prp:flexiblemu_case} are satisfied for $h_m(x)=h_m$, $\sigma_m(x)=\sigma_m$, $\delta_m(x)=\delta_m$ and $r(x)=r$ that do not depend on $x$. Also, suppose conditions from parts \textup{(i)} and \textup{(ii)} of Corollary \ref{crl:gen_case_bounds} hold. 
Then for all sufficiently large $m$, \begin{eqnarray} \label{eq:bound1fm_intdsuplnFdz} d_{\mathrm{KL}}(F,\mathcal{M}_4) &\leq& \delta_m \cdot\frac{d^{1/2}}{2} \int\sup_{z \in C_{\delta _m}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx) \\ \label{eq:bound2fm_lof1_epsm2} &&{} + 2 \frac{3 h_m }{(2 \pi)^{1/2} \sigma_m} + 2 \exp\biggl\{-\frac{(\delta_m/\sigma_m)^2}{8} \biggr\} \\ \label{eq:bound3fm_intdsuplnFdztail} &&{} + \frac{r}{2} \int_{B_{\delta_m}(A_0^m(x))} \sup_{z \in C_{r}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx) \\ \label{eq:bound4fm_inty2_cFtail_2} &&{} + \int_{B_{\delta_m}(A_0^m(x))} \biggl[ \frac{y^{\prime}y}{2 \sigma_0^2} - \log\frac{(r/2)}{(2 \pi\sigma_0^2)^{1/2}} \biggr] F(dy,dx), \end{eqnarray} where $B_{\delta_m}(A_0^m(x))=\{(y,x)\dvtx C_{\delta_m}(y) \cap A_0^m(x) \neq \varnothing\}$, and the bounds in (\ref{eq:bound1fm_intdsuplnFdz})--(\ref{eq:bound4fm_inty2_cFtail_2}) converge to zero as $m \rightarrow\infty$. \end{corollary} \begin{pf} The proof is identical to the proof of Corollary \ref{crl:gen_case_bounds}. \end{pf} The bounds for $\mathcal{M}_4$, (\ref{eq:bound1fm_intdsuplnFdz})--(\ref{eq:bound4fm_inty2_cFtail_2}), are almost the same as the bounds for $\mathcal{M}_0$, (\ref{eq:bound1_intdsuplnFdz})--(\ref{eq:bound4_inty2_cFtail_2}), obtained in Corollary \ref{crl:gen_case_bounds}, except for a difference between $B_{\delta_m}(A_0^m(x))$ in $\mathcal{M}_4$ and $B_{\delta_m}(A_0^m)$ in $\mathcal{M}_0$. For the same value of $h_m$, the length of the complement of $A_0^m(x)$ in $\mathcal{M}_4$ is bounded above by $m h_m$ [$h_m=\max_{j>0} \lambda(A_j^m(x))$], which is the length of the complement of $A_0^m$ in $\mathcal{M}_0$. Thus the bounds obtained for $\mathcal{M}_4$ are likely to be larger than the bounds obtained for $\mathcal{M}_0$. 
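The quantile-partition construction behind $\mathcal{M}_4$ is easy to probe numerically. The sketch below (Python; the standard normal target and the midpoint-quadrature grid are illustrative assumptions, not part of the paper) builds the equal-weight mixture with means $\mu_j^m=F^{-1}((j-1/2)p_m)$, $p_m=1/m$ and $\sigma_m=p_m^{1/4}$, and evaluates the KL distance to the target, which shrinks as $m$ grows:

```python
import math
from statistics import NormalDist

def kl_to_mixture(m, lo=-8.0, hi=8.0, n_grid=2000):
    """KL(f || p_m) for the target f = N(0,1) and an M_4-style mixture:
    equal weights p_m = 1/m, quantile-based means, sigma_m = p_m**(1/4)."""
    target = NormalDist()
    p = 1.0 / m
    mus = [target.inv_cdf((j - 0.5) * p) for j in range(1, m + 1)]
    kernel = NormalDist(0.0, p ** 0.25)          # N(0, sigma_m) kernel
    dy = (hi - lo) / n_grid
    kl = 0.0
    for i in range(n_grid):
        y = lo + (i + 0.5) * dy                  # midpoint quadrature
        f = target.pdf(y)
        pm = p * sum(kernel.pdf(y - mu) for mu in mus)
        if f > 0.0 and pm > 0.0:
            kl += f * math.log(f / pm) * dy
    return kl

for m in (5, 20, 80):
    print(m, round(kl_to_mixture(m), 4))
```

The computed KL distance decreases in $m$, in line with $d_{\mathrm{KL}}(F,\mathcal{M}_4) \rightarrow 0$.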
Compact and interpretable conditions sufficient for deriving an explicit approximation rate for $\mathcal{M}_4$ from (\ref {eq:bound1fm_intdsuplnFdz})--(\ref{eq:bound4fm_inty2_cFtail_2}) seem to be difficult to find. Instead, I show in the following example that not only bounds for $\mathcal{M}_0$ can be smaller but also that convergence for $\mathcal{M}_0$ can be slightly faster than for $\mathcal{M}_4$. \begin{example} \label{ex:2side_exp_flex_mean_rates} Laplace distribution, $f(y|x) = 0.5 \gamma(x) \exp\{- \gamma(x) |y|\}$,\break $\gamma(x) \geq\underline{\gamma} > 0$, $\gamma(x)$ is continuous, $\int\gamma \,dF < \infty$ and the second moment of $y$ is finite ($\int \gamma^{-2} \,dF < \infty$). Note that nondifferentiability of $f(y|x)$ at zero does not affect any of the theoretical results above. First\vspace*{-1pt} consider $\mathcal{M}_4$. Let $A_j^m(x)=[F^{-1}((1-p_m m)/2 + (j-1)p_m|x), F^{-1}((1-p_m m)/2 + j p_m|x))$. Note that $F^{-1}(p|x)=\log(2p)/\gamma(x)$ for $p<0.5$ and $F^{-1}(p|x)=-\log(2(1-p))/\gamma(x)$ for $p \geq0.5$. Then, \begin{eqnarray} \label{eq:h_m_def} h_m & \geq & F^{-1}\bigl((1-p_m m)/2 + p_m|x\bigr) - F^{-1}\bigl((1-p_m m)/2|x\bigr) \nonumber\\[-8pt]\\[-8pt] & = & \frac{1}{\gamma(x)} \log\biggl(1+ \frac{2 p_m}{1-p_m m} \biggr). \nonumber \end{eqnarray} Since $h_m \rightarrow0$ and $m p_m \rightarrow1$ we can write \[ p_m=\frac{1}{m+g(m)}, \] where $g(m)$ satisfies $g(m)/m \rightarrow0$ and $g(m) \rightarrow \infty$. Note that \[ B_{\delta_m}(A_0^m(x)) \subset \biggl(-\infty, \frac{\log( 1-p_m m )(1-\varepsilon_0)}{\gamma (x)} \biggr) \cup\biggl(-\frac{\log( 1-p_m m )(1-\varepsilon_0)}{\gamma(x)}, \infty \biggr) \] for any $\varepsilon_0 \in(0,1)$ and all sufficiently large $m$. 
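The closed-form quantile function used above can be verified against the Laplace cdf directly; a minimal round-trip check (Python; the unit rate $\gamma=1$ is chosen only for illustration):

```python
import math

def laplace_cdf(y, gamma=1.0):
    """cdf of the Laplace density f(y) = 0.5 * gamma * exp(-gamma * |y|)."""
    if y < 0:
        return 0.5 * math.exp(gamma * y)
    return 1.0 - 0.5 * math.exp(-gamma * y)

def laplace_quantile(p, gamma=1.0):
    """F^{-1}(p) = log(2p)/gamma for p < 0.5, -log(2(1-p))/gamma otherwise."""
    if p < 0.5:
        return math.log(2.0 * p) / gamma
    return -math.log(2.0 * (1.0 - p)) / gamma

# Round trip F(F^{-1}(p)) = p across both branches of the quantile formula.
for p in (0.01, 0.3, 0.5, 0.9):
    assert abs(laplace_cdf(laplace_quantile(p)) - p) < 1e-12
```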
A direct calculation shows that integrals in (\ref{eq:bound3fm_intdsuplnFdztail}) and (\ref{eq:bound4fm_inty2_cFtail_2}) can be bounded by \[ \mbox{constant} \cdot(1-p_m m)^{1-\varepsilon} \leq\mbox{constant} \cdot\bigl(g(m)/m\bigr)^{1-\varepsilon} \] for any $\varepsilon\in(\varepsilon_0, 1)$ and all sufficiently large $m$. From (\ref{eq:h_m_def}) and the mean value theorem, \[ h_m \geq\mbox{constant} \cdot\gamma(x)^{-1} \cdot g(m)^{-1}. \] Since the approximation error bounds increase in $h_m$, we should choose the smallest possible value for $h_m=\mbox{constant} \cdot \underline{\gamma}^{-1} \cdot g(m)^{-1}$. One can verify that the smallest upper bound for $\delta_m$, $h_m/\sigma_m$, $\exp\{-(\delta_m/\sigma_m)^2/8\}$ and $(g(m)/m)^{1-\varepsilon}$ is inside the interval $(m^{-1/3}, m^{-1/[3+\varepsilon_1]}]$ for any $\varepsilon_1>0$ and all sufficiently large $m$. Thus, \[ d_{\mathrm{KL}}(F,\mathcal{M}_4) \leq\mbox{constant} \cdot\biggl(\frac {1}{m} \biggr)^{1/[3+\varepsilon_1]}. \] Next, consider $\mathcal{M}_0$. Expressions (\ref{eq:bound3_intdsuplnFdztail}) and (\ref{eq:bound4_inty2_cFtail_2}) are exponentially decreasing in $h_m m$. Setting $h_m$ to a power of $m$, one can show that \[ d_{\mathrm{KL}}(F,\mathcal{M}_0) \leq\mbox{constant} \cdot\biggl(\frac {1}{m} \biggr)^{1/[2+\varepsilon_2]}, \] for any $\varepsilon_2 > 0$ and all sufficiently large $m$. These results suggest that $\mathcal{M}_0$ converges to the target density faster than $\mathcal{M}_4$. \end{example} It might be unfair to compare approximation errors for $\mathcal{M}_0$ and $\mathcal{M}_4$. Although both models are ``infeasible'' and include $m$ functions that need to be approximated by polynomials (or splines), the error from approximation by the polynomials enters the total approximation error in different ways. Nevertheless, the results obtained in this section do seem to suggest that models in which mixing probabilities depend on covariates might perform better in practice. 
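The order claims in the Laplace example can be confirmed numerically. The sketch below (Python; the choice $g(m)=\sqrt{m}$ is one admissible $g$ with $g(m)/m \rightarrow 0$ and $g(m) \rightarrow \infty$, picked for illustration) evaluates the lower bound (\ref{eq:h_m_def}) at $\gamma=1$ and shows $h_m \cdot g(m) \rightarrow 2$, consistent with $h_m \geq \mbox{constant} \cdot \gamma^{-1} \cdot g(m)^{-1}$:

```python
import math

def h_m_lower(m, gamma=1.0):
    """Lower bound (eq:h_m_def) on the largest cell width for the Laplace
    example, with p_m = 1/(m + g(m)) and the illustrative g(m) = sqrt(m)."""
    g = math.sqrt(m)
    p = 1.0 / (m + g)
    # With this p_m, 2 p_m / (1 - p_m m) simplifies to 2 / g(m).
    return math.log(1.0 + 2.0 * p / (1.0 - p * m)) / gamma

for m in (10**2, 10**4, 10**6):
    print(m, h_m_lower(m) * math.sqrt(m))   # tends to 2/gamma
```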
\section{\texorpdfstring{Comparison with Jiang and Tanner (\protect \citeyear{JiangTanner99})}{Comparison with Jiang and Tanner (1999)}} \label{sec:comparison} Jiang and Tanner (\citeyear{JiangTanner99}) is the only work on approximation of conditional densities by ME that I am aware of. \citet{JiangTanner99} develop approximation and estimation results for target densities of the form \begin{equation} \label{eq:exp_family} \pi(y|x;h(\cdot)) = \exp\bigl(a(h(x))y + b(h(x)) + c(y)\bigr). \end{equation} Functions $a$, $b$ and $c$ are assumed to be known, $a$ and $b$ are assumed to have nonzero derivatives, and $h(x)$ is assumed to have uniformly bounded continuous second-order derivatives. It seems that their results could still hold if $a$, $b$ and $c$ are known only up to some parameters (see their Remark 4). \citet{JiangTanner99} show that $\pi(y|x;h(\cdot))$ can be approximated in the KL distance by ME of the form \begin{equation} \label{eq:JTmodel} \sum_{j=1}^m \alpha_j^m(x) \pi(y|x;h_j(\cdot)), \end{equation} where $\pi(\cdot| \cdot;\cdot)$ is defined in (\ref{eq:exp_family}), $h_j(x)$ is a linear function of $x$, and the mixing probabilities $\alpha_j^m(x)$ can be modeled by logit (more general specifications for mixing weights are also allowed). The idea of their argument is to divide $X$ into a fine partition $B_j^m$, approximate $1_{B_j^m}(x)$ by $\alpha_j^m(x)$, and approximate $h(x)$ by a linear function $h_j(x)$ on $B_j^m$. \citet{JiangTanner99} prove that for their target class of densities a bound on the approximation error is proportional to $m^{-4/d_x}$. There are several important differences between the present work and \citet{JiangTanner99}. First, I consider multivariate responses, $y$, while \citet{JiangTanner99} consider univariate responses. Most importantly, I do not assume that the functional form of $f(y|x)$ is known, for example, known $\pi$, $a$, $b$ and $c$. 
The components of the model I employ, for example, normal densities and logit mixing probabilities, are generally not related to the true density. As Examples \ref{ex:inf_student} and \ref{ex:inf_cont_bddsupport} and Corollary \ref{crl:flex_mean_bddsupport} illustrate, many densities that are not from (\ref{eq:exp_family}) are shown to be approximable by ME models. Examples \ref{ex:exponential} and \ref{ex:exponential_flex_mean} also show that some of the densities from class (\ref{eq:exp_family}) satisfy sufficient conditions for approximation results I obtain. However, there might exist densities from (\ref{eq:exp_family}) that violate these sufficient conditions. This would not be surprising since the ``correct'' functional forms are mixed in (\ref{eq:JTmodel}). For the same reason it is not surprising that the approximation rate obtained by \citet{JiangTanner99}, $m^{-4/d_x}$, differs from the ones obtained here, for example, $m^{-1/[d_x+2+1/(q-2)+\varepsilon]}$ for model $\mathcal{M}_3$ in Corollary \ref{crl:linlogit_case_bounds}. Finally, responses in \citet{JiangTanner99} class (\ref {eq:exp_family}) can be discrete, for example, Poisson. To accommodate discrete responses in the framework of the present paper one could map the discrete values of response $y$ into a partition of $R$ and introduce a corresponding latent variable $y^* \sim p(y^*|x,\mathcal{M})$. For example, for binary $y \in\{0,1\}$ let $y^* \in(-\infty,0)$ if $y=0$ and $y^* \in[0,\infty)$ if $y=1$. Any discrete distribution can be represented by a continuously distributed latent variable in this fashion. This continuous distribution can be flexibly modeled by $p(y^*|x,\mathcal{M})$. Models with latent variables are easy to estimate in the Bayesian framework using MCMC methods [see, e.g., \citet{TannerWong87} and \citet{AlbertChibbinpoly93}]. 
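The latent-variable mapping for a binary response can be sketched in a few lines (Python; the probit-style choice $y^* \sim N(\Phi^{-1}(p), 1)$ is one convenient continuous representation, not the paper's estimator):

```python
from statistics import NormalDist

# Represent y ~ Bernoulli(p) through a continuous latent variable y*:
# y = 1  <=>  y* >= 0, with y* ~ N(mu, 1) and mu = Phi^{-1}(p).
std = NormalDist()
p = 0.73
latent = NormalDist(std.inv_cdf(p), 1.0)
print(1.0 - latent.cdf(0.0))   # recovers P(y = 1) = p
```

Any flexible continuous model for $y^*$ given $x$, such as $p(y^*|x,\mathcal{M})$, then induces a flexible model for the discrete response.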
\section{Discussion} \label{sec:conclusion} This paper shows that large classes of conditional densities can be approximated in the Kullback--Leibler distance by different specifications of finite smooth mixtures of normal densities or regressions. The theory can be generalized to smooth mixtures of location-scale densities. These results have interesting implications for applied researchers. First of all, smooth mixtures of densities or experts can be used as flexible models for estimation of multivariate conditional densities. It seems this issue has not been explored in the literature, and it would be interesting to see how the specifications studied in the paper work in these settings. Second, smooth mixtures of simple components, for example, models in which mixing probabilities are modeled by multinomial logit linear in covariates and the means and variances do not depend on covariates, can be quite flexible. A~simulation study in \citet{VillaniKohnGiordani07} suggests, though, that models with more complex components perform better in practice. This issue should be further explored in simulation studies. Third, results in Section \ref{sec:lin_logit} suggest that making mixing probabilities more flexible, for example, by using polynomials in logit, might reduce the number of necessary mixture components. However, these models are more difficult to estimate. Fourth, models in which mixing probabilities do not depend on covariates can be very flexible, at least for univariate response variables. However, they seem to require a lot of mixture components and very flexible models for the means of the mixed normals. Also, the approximation error bounds and convergence rates (Example \ref{ex:2side_exp_flex_mean_rates}) obtained in Section \ref{sec:flex_mean} suggest that models with flexible mixing probabilities might perform better in practice than models with flexible means of the mixed normals and constant mixing probabilities. 
Nevertheless, it would be interesting to see how these specifications perform in actual applications and simulation studies. On the basis of a simulation study, \citet{VillaniKohnGiordani07} generally recommend using heteroscedastic experts (mixture components with variances that depend on covariates). The theory obtained here suggests that heteroscedastic experts might be necessary when differences in quantiles of $f(\cdot|x)$ are not uniformly bounded in $x$ and, especially, when the support bounds of $f(\cdot|x)$ are increasing without a bound in $x$ (see Examples \ref{ex:infeasible_unifrm} and \ref{ex:uniform_flex_mean}). This suggestion is likely to remain useful when the differences in quantiles and/or support of $f(\cdot|x)$, although bounded, still change considerably with covariates. Practical implications of the theoretical results obtained in the paper and summarized in this section are deduced under the assumption of no estimation and parameter uncertainty. Exploring the behavior of the estimation error in addition to the approximation error would result in a more complete understanding of the ME models. This issue is left for future work. Overall, the paper provides a number of encouraging approximation results for (smooth) mixtures of densities or experts which might stimulate more theoretical and applied work in this area of research. \begin{appendix} \section*{Appendix}\label{app} \vspace*{-14pt} \begin{pf*}{Proof of Proposition \protect\ref{prp:gen_case}} Since $d_{\mathrm{KL}}$ is always nonnegative, \[ 0 \leq\int\log\frac{f(y|x)}{p(y|x ,\mathcal{M}_0)} F(dy,dx) \leq \int\log\max\biggl\{1, \frac{f(y|x)}{p(y|x ,\mathcal{M}_0)} \biggr\} F(dy,dx). \] Thus, it suffices to show that the last integral in the inequality above converges to zero as $m$ increases. The dominated convergence theorem (DCT) is used for that. First, I establish conditions for point-wise convergence of the integrand to zero a.s. $F$. 
Then, I present conditions for existence of an integrable upper bound on the integrand required by the DCT. For fixed $(y,x)$, \begin{eqnarray}\label{eq:model_lb} p(y|x ,\mathcal{M}_0) &=& \sum_{j=1}^m F(A_j^m|x) \phi(y, \mu_j^m,\sigma_m) +F(A_0^m|x) \phi(y, 0,\sigma_0) \nonumber\\[-8pt]\\[-8pt] &\geq& \inf_{z \in C_{\delta_m}(y)} f(z|x) \sum_{j\dvtx A_j^m \subset C_{\delta_m}(y)} \lambda(A_j^m) \phi(y, \mu _j^m,\sigma_m),\nonumber \end{eqnarray} where $\lambda$ is the Lebesgue measure. In Lemmas \ref{lm:boundRSbyInt_ESP} and \ref{lm:boundIntBy1}, I derive the following bounds for the Riemann sum in (\ref{eq:model_lb}) (the Riemann sum is not far from the corresponding normal integral, and the integral is not far from 1): \begin{eqnarray} \label{eq:RiemannSumBd} && \sum_{j\dvtx A_j^m \subset C_{\delta_m}(y)} \lambda(A_j^m) \phi(y, \mu _j^m,\sigma_m) \nonumber\\ &&\qquad \geq1 - \frac{3 d^{3/2} \delta_m^{d-1} h_m }{(2 \pi)^{d/2} \sigma _m^d} - \frac{8 (\sigma_m / \delta_m) }{(2 \pi)^{1/2} } \exp\biggl\{-\frac{(\delta_m/\sigma_m)^2}{8} \biggr\} \\ &&\qquad \geq1 - \frac{3 d^{3/2} \delta_m^{d-1} h_m }{(2 \pi)^{d/2} \sigma _m^d} - \exp\biggl\{-\frac{(\delta_m/\sigma_m)^2}{8} \biggr\}, \nonumber \end{eqnarray} where the last inequality holds for all sufficiently large $m$ ($\delta _m/\sigma_m \rightarrow\infty$). Given $\varepsilon> 0$ there exists $M_1$ such that for $m \geq M_1$, expressions in (\ref{eq:RiemannSumBd}) are bounded below by $(1 - \varepsilon)$. If $f(y|x)$ is continuous in $y$ at $(y,x)$ and $f(y|x)>0$ there exists $M_2$ such that for $m \geq M_2$, $[f(y|x) / \inf_{z \in C_{\delta_m}(y)} f(z|x)] \leq(1 + \varepsilon)$ since $\delta_m \rightarrow0$. For any $m \geq\max\{M_0, M_1, M_2\}$, \begin{eqnarray*} 1 &\leq& \max \biggl\{ 1, \frac{f(y|x)}{p(y|x ,\mathcal{M}_0)} \biggr\} \\ &\leq& \max\biggl\{1,\frac{f(y|x)}{\inf_{z \in C_{\delta_m}(y)} f(z|x) (1 - \varepsilon ) } \biggr\}\leq\frac{1 + \varepsilon}{1 - \varepsilon}. 
\end{eqnarray*} Thus, $\log\max\{1,f(y|x)/p(y|x ,\mathcal{M}_0)\} \rightarrow0$ a.s. $F$ as long as $f(y|x)$ is continuous in $y$ a.s. $F$ [$f(y|x)$ is always positive a.s. $F$]. Parts \hyperlink{assnitem:general_case_2}{2} and \hyperlink{assnitem:general_case_3}{3} of Assumption \ref{assn:general_case} are used for establishing an integrable upper bound for the DCT: \begin{eqnarray}\label{eq:RiemannSumInBound} p(y|x ,\mathcal{M}_0) &=& \sum_{j=1}^m F(A_j^m|x) \phi(y, \mu_j^m,\sigma_m) +F(A_0^m|x) \phi(y, 0,\sigma_0) \nonumber\\ &\geq& [1-1_{A_0^m}(y)] \nonumber\\[-8pt]\\[-8pt] &&\hspace*{0pt}{}\times\inf_{z \in C_1(r,y,x)} f(z|x) \cdot \sum_{j\dvtx A_j^m \subset C_1(r,y,x)} \lambda(A_j^m) \phi(y, \mu _j^m,\sigma_m) \nonumber\\ &&{} + 1_{A_0^m}(y) \cdot \inf_{z \in C_0(r,y,x)} f(z|x) \cdot \lambda(C_0(r,y,x)) \phi(y, 0,\sigma_0). \nonumber \end{eqnarray} Lemmas \ref{lm:boundRSbyInt_ESP} and \ref{lm:boundIntBy1} imply that the Riemann sum in (\ref{eq:RiemannSumInBound}) is bounded below by $2^{-d} - 2^{-(d+1)}=2^{-(d+1)}$ for any $m$ larger than some $M_4$. Inequalities (\ref{eq:RiemannSumInBound}) and (\ref{eq:cond_sigma0}) imply \begin{eqnarray}\label{eq:inqlty1}\qquad &&\log\max\biggl\{1,\frac{f(y|x)}{p(y|x ,\mathcal{M}_0)}\biggr\} \nonumber\\ &&\qquad\leq \log\max\biggl\{1,\frac{f(y|x)}{\inf_{z \in C(r,y,x)} f(z|x) \cdot\phi(y, 0,\sigma_0) \cdot(r/2)^d}\biggr\} \nonumber\\[-8pt]\\[-8pt] &&\qquad= \log\frac{1}{\phi(y, 0,\sigma_0) (r/2)^d} \max\biggl\{\phi(y, 0,\sigma_0) (r/2)^d,\frac{f(y|x)}{\inf_{z \in C(r,y,x)} f(z|x)}\biggr\} \nonumber\\ &&\qquad\leq -\log\bigl(\phi(y, 0,\sigma_0) (r/2)^d\bigr) + \log\frac{f(y|x)}{\inf_{z \in C(r,y,x)} f(z|x)},\nonumber \end{eqnarray} where inequality (\ref{eq:inqlty1}) follows by the first inequality in (\ref{eq:cond_sigma0}). The first expression in (\ref{eq:inqlty1}) is integrable by Assumption \ref{assn:general_case}, part \hyperlink{assnitem:general_case_2}{2}. 
The second expression in (\ref{eq:inqlty1}) is integrable by Assumption \ref{assn:general_case}, part \hyperlink{assnitem:general_case_3}{3}. Thus the proposition is proved. \end{pf*} \begin{pf*}{Proof of Corollary \protect\ref{crl:gen_case_bounds}} The proof of the first part of the proposition is a simple implication of the argument in the proof of Proposition \ref{prp:gen_case}. Note that \begin{eqnarray} \label{eq:d_kl_2_ints} d_{\mathrm{KL}}(F,\mathcal{M}_0) &=& \int_{Y\times X \setminus B_{\delta_m}(A_0^m)} \log\frac{f(y|x)}{p(y|x,\mathcal{M}_0)} F(dy,dx) \nonumber\\[-8pt]\\[-8pt] &&{} + \int_{B_{\delta_m}(A_0^m)} \log\frac{f(y|x)}{p(y|x,\mathcal{M}_0)} F(dy,dx). \nonumber \end{eqnarray} For $(y,x) \in Y\times X \setminus B_{\delta_m}(A_0^m)$, inequalities (\ref{eq:model_lb}) and (\ref{eq:RiemannSumBd}) apply. Thus, the first integral in (\ref{eq:d_kl_2_ints}) is bounded by the sum of (\ref{eq:bound1_intFinfF}) and (\ref{eq:bound2_lof1_epsm}), where the bound in (\ref{eq:bound2_lof1_epsm}) is obtained by the mean value theorem for $-\log(1-x)$ and a small positive $x$, \begin{eqnarray} \label{eq:bd_rate_log} && - \log\biggl( 1 - \frac{3 d^{3/2} \delta_m^{d-1} h_m }{(2 \pi)^{d/2} \sigma_m^d} - \exp\biggl\{-\frac{(\delta_m/\sigma_m)^2}{8} \biggr\} \biggr) \nonumber\\[-8pt]\\[-8pt] &&\qquad \leq2 \biggl( \frac{3 d^{3/2} \delta_m^{d-1} h_m }{(2 \pi)^{d/2} \sigma_m^d} + \exp\biggl\{-\frac{(\delta_m/\sigma_m)^2}{8} \biggr\} \biggr). \nonumber \end{eqnarray} By inequality (\ref{eq:RiemannSumInBound}), the second integral in (\ref{eq:d_kl_2_ints}) is bounded by the sum of (\ref{eq:bound3_intFinfFtail}) and (\ref{eq:bound4_inty2_cFtail}). Expression (\ref{eq:bound1_intFinfF}) converges to zero by the DCT. The point-wise convergence follows by the assumed continuity and positivity of $f(y|x)$. An integrable upper bound is given by (\ref{eq:IntBoundFinfF}). Expression (\ref{eq:bound2_lof1_epsm}) converges to zero by (\ref{eq:cond_delta_sigma_h}). 
Expressions (\ref{eq:bound3_intFinfFtail}) and (\ref {eq:bound4_inty2_cFtail}) converge to zero because $Y\times X \setminus B_{\delta_m}(A_0^m) \nearrow Y\times X$ and the integrands are integrable by (\ref{eq:IntBoundFinfF}) and by the assumed finiteness of the second moment of $y$. Thus, the first part of the proposition is proved. The second part of the proposition [bounds for differentiable $f(y|x)$] follows from the first part since \[ \biggl| \log\frac{f(y|x)}{\inf_{z \in C_{r}(y)} f(z|x)} \biggr| \leq \sup_{z \in C_{r}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert\frac {d^{1/2} r}{2}, \] which is implied by the multivariate mean value theorem: for any $(z_1,z_2)$ \[ |{\log f(z_1|x) - \log f(z_2|x) }| \leq\bigl\Vert f^{\prime}\bigl(c z_1 + (1-c) z_2\bigr) \bigr\Vert\Vert z_1-z_2\Vert \] for some $c \in[0,1]$. Convergence of the bounds to zero is obtained in the same way as in the first part of the proposition. To obtain the third part let us suppose that the fine part of the partition $\{A_j^m, 1 \leq j \leq m \} $ is centered at 0. 
If $(y,x) \in B_{\delta_m}(A_0^m)$, then $|y_i| \geq h_m m^{1/d} / 2 - \delta_m > h_m m^{1/d}/3$ for $i \in\{1,\ldots,d\}$ and all sufficiently large $m$ and \begin{eqnarray} \label{eq:bd_rate_y2}\qquad && \int_{B_{\delta_m}(A_0^m)} y_i^2 F(dy,dx) \nonumber\\ &&\qquad\leq \int_{\{(y,x)\dvtx|y_i| > h_m m^{1/d} / 3, \forall i \}} y_i^2 F(dy,dx)\nonumber\\[-18pt] \\ &&\qquad \leq (h_m m^{1/d} / 3)^{-(q-2)}\nonumber\\ &&\qquad\quad{}\times \int_{ \{(y,x)\dvtx|y_i| > h_m m^{1/d} / 3, \forall i \}} (h_m m^{1/d} / 3)^{q-2} y_i^2 F(dy,dx) \nonumber\\ &&\qquad \leq (h_m m^{1/d} / 3)^{-(q-2)} \int_{ Y \times X} y_i^q F(dy,dx).\nonumber \end{eqnarray} Similarly, \begin{eqnarray}\label{eq:bd_rate_dlogfdz}\qquad && \int_{B_{\delta_m}(A_0^m)} \sup_{z \in C_{r}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx) \nonumber\\ &&\qquad \leq \int_{\{(y,x)\dvtx|y_i| > h_m m^{1/d} / 3, \forall i \}} \sup_{z \in C_{r}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx) \nonumber \\ &&\qquad \leq \biggl(\int_{ \{(y,x)\dvtx|y_i| > h_m m^{1/d} / 3, \forall i \}} (h_m m^{1/d} / 3)^{q-2}\nonumber\\[-8pt]\\[-8pt] &&\hspace*{137pt}{}\times \sup_{z \in C_{r}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx)\biggr)\nonumber\\ &&\qquad\quad\hspace*{0pt}{}\times\bigl((h_m m^{1/d} / 3)^{q-2} \bigr)^{-1} \nonumber\\ &&\qquad \leq (h_m m^{1/d} / 3)^{-(q-2)} \int_{ Y \times X} y_{i_1}^{q-2} \sup_{z \in C_{r}(y)} \biggl\Vert\frac{d \log f(z|x) }{dz}\biggr\Vert F(dy,dx).\nonumber \end{eqnarray} Since integrals in (\ref{eq:bd_rate_y2}) and (\ref{eq:bd_rate_dlogfdz}) are finite by assumption, (\ref{eq:bound3_intdsuplnFdztail}) and (\ref {eq:bound4_inty2_cFtail_2}) can be bounded above by an expression proportional to $(h_m m^{1/d})^{-(q-2)}$. 
Thus, the sum of (\ref{eq:bound1_intdsuplnFdz})--(\ref {eq:bound4_inty2_cFtail_2}) is bounded by \begin{eqnarray} \label{eq:bd_rate_alphas} &&c_1 \cdot\delta_m +c_2 \cdot\exp\{- (\delta_m/\sigma_m)^2/8\} +c_3 \cdot\delta_m^{d-1} h_m / \sigma_m^d\nonumber\\[-8pt]\\[-8pt] &&\qquad{}+c_4 \cdot1/(h_m m^{1/d})^{q-2},\nonumber \end{eqnarray} where constants $c_1$, $c_2$, $c_3$ and $c_4$ do not depend on $m$. Let $b_m$ be the smallest number satisfying $b_m \geq\delta_m$, $b_m \geq\delta_m^{d-1} h_m / \sigma_m^d$, $b_m \geq1/(h_m m^{1/d})^{q-2}$ and $b_m \geq\exp\{- (\delta_m/\sigma_m)^2/8\}$. The first three of these inequalities imply \[ b_m \geq\{[(\delta_m/\sigma_m)^d]/m^{1/d}\}^{1/[2+1/(q-2)]}. \] It implies that for all sequences $\delta_m$, $\sigma_m$ and $h_m$ allowed by the corollary, \[ b_m > \biggl( \frac{1}{m} \biggr)^{1/(d\cdot[2+1/(q-2)])}. \] One can verify that \begin{equation}\qquad \label{eq:2bds_rate} b_m \leq\biggl( \frac{(4 \log m / d )^{d/2}}{m^{1/d}} \biggr)^{1/[2+1/(q-2)]} \leq\biggl( \frac{1}{m} \biggr)^{1/(d \cdot[2+1/(q-2)+\varepsilon])}, \end{equation} when $\delta_m$ equal to the first bound in (\ref{eq:2bds_rate}), $(\delta_m/\sigma_m)^2 = 4 \log m /d$ and $h_m=\delta_m^2/(\delta _m/\sigma_m)^d$. \end{pf*} \begin{pf*}{Proof of Proposition \protect\ref{prp:gen_linear_logit}} Define $I_1^m(x,s_m)=\{i\dvtx\Vert x_i^m-x\Vert^2 < s_m \}$ and $I_2^m(x,s_m)=\{ i\dvtx\Vert x_i^m-x\Vert^2 > 2 s_m \}$. For $i \in I_1^m(x,s_m)$, \begin{equation} \label{eq:I1Rmsm} [- R_m (x_i^{m \prime} x_i^m - 2 x_i^{m \prime} x )] > [-R_m (s_m - x^{ \prime} x)] \end{equation} and for $i \in I_2^m(x,s_m)$, \begin{equation} \label{eq:I2Rmsm} [- R_m (x_i^{m \prime} x_i^m - 2 x_i^{m \prime} x )] < [-R_m (2 s_m - x^{ \prime} x)]. 
\end{equation} Note that \begin{eqnarray} \label{eq:BoundSumI1}\qquad &&\frac{ \sum_{i \in I_1^m(x,s_m)} \exp\{ - R_m (x_i^{m \prime} x_i^m - 2 x_i^{m \prime} x ) \}} {\sum_{l} \exp\{ - R_m (x_l^{m \prime} x_l^m - 2 x_l^{m \prime} x )\} } \nonumber\\ &&\qquad \geq 1- \frac{ \sum_{i \in I_2^m(x,s_m)} \exp\{ - R_m (x_i^{m \prime} x_i^m - 2 x_i^{m \prime} x ) \}} {\sum_{i \in I_1^m(x,s_m)} \exp\{ - R_m (x_i^{m \prime} x_i^m - 2 x_i^{m \prime} x )\} } \\ &&\qquad \geq 1 - \frac{\mbox{card}(I_2^m(x,s_m))}{\mbox{card}(I_1^m(x,s_m))} \exp\{-R_m s_m\} \geq1 - d_x^{d_x/2} \frac{\exp\{-R_m s_m\}}{s_m^{d_x/2}}, \nonumber \end{eqnarray} where the second inequality follows from (\ref{eq:I1Rmsm}) and (\ref {eq:I2Rmsm}). The last inequality follows from the following bounds on the number of elements in $I_1^m(x,s_m)$ and $I_2^m(x,s_m)$: $\mbox{card}(I_1^m(x,s_m)) \geq1$ [$s_m$ is chosen in (\ref{eq:CondSmRm}) so that any ball in $X$ with radius $s_m^{1/2}$ has to contain at least one $x_i^m$] and \[ \mbox{card}(I_2^m(x,s_m)) \leq N(m) = d_x^{d_x/2} s_m^{-d_x/2}. \] For $i \in I_1^m(x,s_m)$ and $A_j^m \subset C_{\delta_m}(y)$, \begin{equation} \label{eq:FA_jBoindLogitLin} F(A_j^m|x_i^m) \geq\lambda(A_j^m) \inf_{z \in C_{\delta_m}(y), \Vert t-x\Vert^2 \leq s_m} f(z|t). 
\end{equation} Inequalities (\ref{eq:BoundSumI1}), (\ref{eq:FA_jBoindLogitLin}) and (\ref{eq:RiemannSumBd}) imply that $p(y|x ,\mathcal{M}_3)$ exceeds \begin{eqnarray*} && \sum_{j\dvtx A_j^m \subset C_{\delta_m}(y)} \sum_{i \in I_1^m(x,s_m)} F(A_j^m|x_i^m) \frac{ \exp\{ - R_m (x_i^{m \prime} x_i^m - 2 x_i^{m \prime} x ) \}} {\sum_{l} \exp\{ - R_m (x_l^{m \prime} x_l^m - 2 x_l^{m \prime} x )\} } \phi(y, \mu_j^m,\sigma_m) \\ &&\qquad \geq \inf_{z \in C_{\delta_m}(y), \Vert t-x\Vert^2 \leq s_m} f(z|t) \cdot \biggl[ 1 - \frac{3 d^{3/2} \delta_m^{d-1} h_m }{(2 \pi)^{d/2} \sigma_m^d}\\ &&\qquad\quad\hspace*{120.2pt}{} - \frac{8 d \sigma_m }{(2 \pi)^{1/2} \delta_m} \exp\biggl\{-\frac{(\delta_m/\sigma_m)^2}{8} \biggr\} \biggr] \\ &&\qquad\quad\hspace*{78.5pt}{} \times \biggl[1-d_x^{d_x/2}\frac{\exp\{-R_m s_m\}}{s_m^{d_x/2}} \biggr]. \end{eqnarray*} The expression on the last line of this inequality converges to $1$ by (\ref{eq:CondSmRm}). The rest of the proof is exactly the same as the proof of Proposition \ref{prp:gen_case}. \end{pf*} \begin{pf*}{Proof of Corollary \protect\ref{crl:linlogit_case_bounds}} The proof of part (i) is identical to the proof of Corollary \ref{crl:gen_case_bounds} part (ii). The proof of part (ii) is also similar to the proof of Corollary \ref{crl:gen_case_bounds} part (iii). Just set $s_m^{1/2} = \delta_m$ and note that (\ref{eq:bound5_expRmsm}) can be made arbitrarily small relative to the other parts of the bound by an appropriate choice of $R_m$. Thus, the bound is the same as in (\ref{eq:bd_rate}); we just\vspace*{1pt} need to express $m$ in terms of the number of mixture components in $\mathcal{M}_3$, $m N(m)$. From the definition of $N(m)$ and $s_m$, $N(m)=\lambda(B_i^m)^{-1}=d_x^{d_x/2} s_m^{-d_x/2}$. Since we set $s_m^{1/2} = \delta_m$ and $\delta_m=m^{-1/(d\cdot [2+1/(q-2)])}$ in the proof of Corollary \ref{crl:gen_case_bounds}, \[ m N(m)= d_x^{d_x/2} m^{1+d_x / (d\cdot[2+1/(q-2)])}. 
\] From this equation, one can express $m$ as a function of $m N(m)$ and plug it in (\ref{eq:bd_rate}) to obtain (\ref{eq:bd_rate_linlogit}). \end{pf*} \begin{pf*}{Proof of Proposition \protect\ref{prp:flexiblemu_case}} First, consider point-wise convergence a.s. $F$. For fixed $(y,x)$ and an interval $C_{\delta_m(x)}(y)$ with center $y$ and length $\delta_m(x) > 0$, \begin{eqnarray} \label{eq:M4_lb}\hspace*{32pt} p(y|x ,\mathcal{M}_4) &=& \sum_{j=1}^m F(A_j^m(x)|x) \phi(y, \mu_j^m(x),\sigma_m(x)) \nonumber\\ &&{}+F(A_0^m(x)|x) \phi(y, 0,\sigma_0(x)) \nonumber\\ &\geq& \inf_{z \in C_{\delta_m(x)}(y)} f(z|x) \sum_{j=1}^m \lambda\bigl(A_j^m(x) \cap C_{\delta_m(x)}(y)\bigr)\nonumber\\[-8pt]\\[-8pt] &&\hspace*{90.1pt}{}\times \phi(y, \mu _j^m(x),\sigma_m(x)) \nonumber\\ &\geq& \inf_{z \in C_{\delta_m(x)}(y)} f(z|x) \biggl(1 - \frac{ 6 h_m(x) } {(2 \pi)^{1/2} \sigma_m(x) }\nonumber\\ &&\hspace*{80.38pt}{} - \frac{16 \sigma_m(x) }{(2 \pi)^{1/2} \delta_m(x)}\exp\biggl\{-\frac {(\delta_m/\sigma_m)^2}{8} \biggr\}\biggr),\nonumber \end{eqnarray} where the last inequality follows from Lemma \ref{lm:boundRS_EPP_new} [if $\delta_m(x) \rightarrow0$ and $m p_m \rightarrow1$ then for any $(y,x)$ there exists $M$ such that $\forall m \geq M$, $C_{\delta_m(x)}(y) \cap A_0^m(x) = \varnothing$ and the lemma applies]. Convergence of the bound in (\ref{eq:M4_lb}) to $f(y|x)$ a.s. $F$ is implied by a.s. positivity and continuity in $y$ of $f(y|x)$ and conditions in (\ref{eq:cond_delta_sigma_h_x}). The rest of the argument establishing point-wise convergence is the same as for $\mathcal{M}_0$ [details are below (\ref {eq:cond_delta_sigma_h})]. 
Next, let us derive an integrable upper bound for the DCT, \begin{eqnarray}\label{eq:RiemannSumEPPInBound}\hspace*{28pt} p(y|x ,\mathcal{M}_4) &=& \sum_{j=1}^m F(A_j^m(x)|x) \phi(y, \mu_j^m(x),\sigma_m(x))\nonumber\\ &&{}+F(A_0^m(x)|x) \phi(y, 0,\sigma_0(x)) \nonumber\\ &\geq& [1-1_{A_0^m(x)}(y)] \nonumber\\ &&\hspace*{0pt}{}\times\inf_{z \in C_1(r(x),y,x)} f(z|x)\nonumber\\[-8pt]\\[-8pt] &&\hspace*{66pt}{}\times \sum_{j\dvtx A_j^m(x) \subset C_1(r(x),y,x)} \lambda(A_j^m(x))\nonumber\\ &&\hspace*{161pt}{}\times \phi(y, \mu _j^m(x),\sigma_m(x)) \nonumber\\ &&{} + 1_{A_0^m(x)}(y) \cdot \inf_{z \in C_0(r(x),y,x)} f(z|x) \cdot \lambda(C_0(r(x),y,x))\nonumber\\ &&\hspace*{117.2pt}{}\times \phi(y, 0,\sigma_0(x)). \nonumber \end{eqnarray} Lemma \ref{lm:boundRS_EPP_new} and condition (\ref{eq:cond_r_sigma_h_x}) imply that the sum in (\ref{eq:RiemannSumEPPInBound}) is bounded below by $1/2 - 1/4=1/4$ for all sufficiently large $m$. Equation (\ref{eq:cond_sigma0_x}) implies \begin{eqnarray}\label{eq:EPPinqlty1}\quad &&\log\max\biggl\{1,\frac{f(y|x)}{p(y|x ,\mathcal{M}_4)}\biggr\} \nonumber\\ &&\qquad\leq \log\max\biggl\{1,\frac{f(y|x) \cdot(r(x)/2)^{-1}}{\inf_{z \in C(r(x),y,x)} f(z|x) \cdot\phi(y, 0,\sigma_0(x)) }\biggr\} \nonumber\\ &&\qquad\leq \log\frac{1}{\phi(y, 0,\sigma_0(x)) (r(x)/2)} \max\biggl\{\phi(y, 0,\sigma_0(x)) \bigl(r(x)/2\bigr),\\ &&\hspace*{184pt} \frac{f(y|x)}{\inf_{z \in C(r(x),y,x)} f(z|x)}\biggr\} \nonumber\\ &&\qquad\leq -\log[\phi(y, 0,\sigma_0(x)) r(x)/2 ] + \log\frac{f(y|x)}{\inf_{z \in C(r(x),y,x)} f(z|x)}.\nonumber \end{eqnarray} Inequality (\ref{eq:EPPinqlty1}) follows by (\ref{eq:cond_sigma0_x}). The first expression in (\ref{eq:EPPinqlty1}) is integrable by Assumption \ref{assn:flexiblemu_case}, part \hyperlink {assnitem:flexiblemu_case_2}{6}. The second expression in (\ref{eq:EPPinqlty1}) is integrable by Assumption \ref{assn:flexiblemu_case}, part \hyperlink {assnitem:flexiblemu_case_3}{3}. This completes the proof of the proposition. 
\end{pf*} \begin{pf*}{Proof of Corollary \protect\ref{crl:flex_mean_bddsupport}} It suffices to show that Assumption \ref{assn:flexiblemu_case} is satisfied. First, let us obtain a suitable $h_m$. Note that \begin{equation}\qquad \label{eq:p_mbd1} p_m \geq\int_{A_j^m(x) \cap[a_1(x) , b_1(x)]} f(y|x)\,dy \geq\lambda \bigl(A_j^m(x) \cap[a_1(x) , b_1(x)]\bigr) \underline{f}. \end{equation} Also, \begin{eqnarray}\label{eq:p_mbd2} p_m &\geq& \int_{A_j^m(x) \cap[a(x),a_1(x)]} f(y|x)\,dy\nonumber\\ &\geq&\int_{A_j^m(x) \cap[a(x),a_1(x)]} \underline{f} \cdot[ y-a(x)]^n \,dy \\ &\geq& (n+1)^{-1} \lambda\bigl(A_j^m(x) \cap[a(x),a_1(x)]\bigr)^{n+1} \underline{f}\nonumber \end{eqnarray} and similarly $p_m \geq(n+1)^{-1} \lambda(A_j^m(x) \cap[b_1(x),b(x)])^{n+1} \underline{f}$. Combining this inequality with (\ref{eq:p_mbd1}) and (\ref{eq:p_mbd2}) we get for all $x$ and $j$, \begin{eqnarray*} \lambda(A_j^m(x)) &\leq& \frac{p_m}{\underline{f}} + \frac{2 \cdot (n+1)^{1/(n+1)} \cdot p_m^{1/(n+1)}}{\underline{f}^{1/(n+1)}}\\ &\leq& \frac{7 p_m^{1/(n+1)}}{\underline{f}} = h_m. \end{eqnarray*} For $\sigma_m(x)=p_m^{1/4(n+1)}$ and $\delta_m(x)=p_m^{1/8(n+1)}$ conditions (\ref{eq:EPPconditions}), (\ref{eq:cond_delta_sigma_h_x}) and (\ref{eq:cond_r_sigma_h_x}) hold. Next, let $C(r,y,x)=[y, y+r]$ if $y \in(a(x), a_1(x)+r/2)$, $C(r,y,x)=[y-r/2, y+r/2]$ if $y \in[a_1(x)+r/2, b_1(x)-r/2]$ and $C(r,y,x)=[y-r/2, y]$ if $y \in(b_1(x)-r/2, b(x))$. By condition \hyperlink{crlitem:monotone}{4} of the corollary $\inf_{z \in C(r(x),y,x)} f(z|x) = f(y|x)$ for $y \notin[a_1(x)+r/2, b_1(x)-r/2]$. For $y \in[a_1(x)+r/2, b_1(x)-r/2]$, $\inf_{z \in C(r(x),y,x)} f(z|x) \geq\underline{f}$ and \[ \int\log\frac{f(y|x)}{\inf_{z \in C(r(x),y,x)} f(z|x) } F(dy,dx) \leq\log(\overline{f}/\underline{f}) < \infty. \] Condition \hyperlink{assnitem:flexiblemu_case_1}{2} and (\ref {eq:cond_sigma0_x}) in Assumption \ref{assn:flexiblemu_case} are assumed in the corollary. 
Since $a(x)$ and $b(x)$ are assumed to be square integrable, the second moment of $y$ is finite, and condition \hyperlink{assnitem:flexiblemu_case_2}{6} of Assumption \ref {assn:flexiblemu_case} holds. \end{pf*} \begin{lemma} \label{lm:boundRSbyInt_ESP} Define a hypercube $C_{\delta}(y)=\{\mu\in R^d\dvtx y_i \leq\mu_i \leq y_i + \delta, i=1,\ldots,d\}$. Let $A_1,\ldots,A_m$ be adjacent hypercubes with centers $\mu_j$ and side length $h$ such that $C_{\delta}(y) \subset\bigcup_{j=1}^m A_j$ and $\delta> 3 d^{1/2} h $. Define $J=\{j\dvtx A_j \subset C_{\delta}(y)\} $. Then \[ \sum_{j \in J}{\lambda(A_{j})\phi(y;\mu_{j},\sigma)} \geq \int _{C_{\delta}(y)}{\phi(\mu;y;\sigma)\,d\mu}-\frac{3 d^{3/2} \delta ^{d-1}h}{(2\pi)^{d/2}\sigma^{d}}. \] By symmetry, this result holds for any hypercube with vertex at $y$ and side length~$\delta$. This implies that for hypercube $D_{\delta }(y)=\{ x\dvtx y_i - \delta/2 \leq x_i \leq y_i + \delta/2, i=1,\ldots,d \}$, \[ \sum_{j\dvtx A_j \subset D_{\delta}(y)}{\lambda(A_{j})\phi(y;\mu_{j},\sigma)} \geq \int_{D_{\delta}(y)}{\phi(\mu;y;\sigma)\,d\mu}-2^d \frac{3 d^{3/2} (\delta/2)^{d-1}h}{(2\pi)^{d/2}\sigma^{d}} \] as long as $D_{\delta}(y) \subset\bigcup_{j=1}^m A_j$ and $\delta> 6 d^{1/2} h$. \end{lemma} \begin{pf} For $j \in J$ let $B_j = \{ x\dvtx\mu_{ji} \leq x_i \leq\mu_{ji}+h, i=1,\ldots,d\}$ be a shifted and rotated version of $A_j$. Note that $\mu_j = \arg\max_{\mu\in B_j} \phi(\mu;y;\sigma)$, and therefore \begin{eqnarray*} &&\sum_{j \in J}{\lambda(A_{j})\phi(y;\mu_{j},\sigma)}\\ &&\qquad= \sum_{j \in J}{\lambda(B_{j})\phi(y;\mu_{j},\sigma)} \geq \int_{\bigcup_{j \in J} B_j}{\phi(\mu;y;\sigma)\,d\mu}\\ &&\qquad\geq \int_{C_{\delta}(y)}{\phi(\mu;y;\sigma)\,d\mu} -\int_{C_{\delta}(y) \setminus\bigcup_{j \in J} B_j}{\phi(\mu ;y;\sigma )\,d\mu}. 
\end{eqnarray*} Since\vspace*{1pt} $ \{x\dvtx\min_J \mu_{ji} \leq x_i \leq\max_J \mu_{ji}, i=1,\ldots,d \} \subset C_{\delta}(y) \cap[\bigcup_J B_j] $ and $\max_{j \in J} \mu_{ji} - \min_{j \in J} \mu_{ji} \geq\delta- 3 d^{1/2} h$, we get\vspace*{2pt} $\lambda(C_{\delta}(y) \cap[\bigcup_J B_j]) \geq(\delta- 3 d^{1/2} h)^d$ and \begin{eqnarray*} \lambda\biggl(C_{\delta}(y) \Bigm\backslash\biggl[\bigcup_J B_j\biggr]\biggr) &=& \lambda(C_{\delta}(y)) - \lambda\biggl(C_{\delta}(y) \cap\biggl[\bigcup_{j \in J} B_j\biggr]\biggr) \nonumber\\ &\leq& \delta^d - (\delta- 3 d^{1/2} h)^d \leq3 d^{3/2} h \delta^{d-1}, \end{eqnarray*} where the last inequality follows by induction. Thus, \begin{eqnarray*} \int_{C_{\delta}(y) \setminus\bigcup_J B_j}{\phi(\mu;y;\sigma)\,d\mu} &\leq& \lambda\biggl(C_{\delta}(y) \Bigm\backslash\biggl[\bigcup_J B_j\biggr]\biggr) \frac{1}{(2 \pi )^{d/2} \sigma^d} \\ &\leq& \frac{3 d^{3/2} h \delta^{d-1}}{(2 \pi)^{d/2}\sigma^d}. \end{eqnarray*} \upqed\end{pf} \begin{lemma} \label{lm:boundIntBy1} Let $C_{\delta}(y)$ be a $d$-dimensional hypercube with center $y$ and side length $\delta>0$. Then \[ \int_{C_{\delta}(y)}{\phi(\mu;y;\sigma)\,d\mu} > 1 -\frac{8d\sigma /\delta }{(2\pi)^{1/2}} \exp\biggl\{-\frac{(\delta/\sigma)^2}{8} \biggr\}. \] Note that this inequality immediately implies that for any sub-hypercube of $C_{\delta}(y)$, $\tilde{C}$, with vertex at $y$ and side length $\delta/ 2$, for example, $\tilde{C} = C_{\delta}(y) \cap[\mu\geq y]$, \begin{eqnarray*} \int_{\tilde{C}}{\phi(\mu;y;\sigma)\,d\mu} &=& \frac{1}{2^{d}} \int_{C_{\delta}(y)}{\phi(\mu;y;\sigma)\,d\mu} \\ &>& \frac{1}{2^{d}} -\frac{8d\sigma/\delta}{ 2^{d} (2\pi)^{1/2}} \exp\biggl\{-\frac{(\delta/\sigma)^2}{8} \biggr\}. 
\end{eqnarray*} \end{lemma} \begin{pf} \begin{eqnarray*} \int_{C_{\delta(y)}}{\phi(\mu;y;\sigma)\,d\mu} &=& \int_{ \bigcap_{i=1}^d [|\mu_{i}| \leq\delta/2]} {\phi(\mu;0;\sigma )\,d\mu}\\ &=& 1 - \int_{\bigcup_{i=1}^d [|\mu_{i}| \geq\delta/2]} {\phi(\mu;0;\sigma)\,d\mu} \\ &\geq& 1 - \sum_{i=1}^{d} \int_{|\mu_{i}| \geq\delta/2} {\phi(\mu_{i};0;\sigma)\,d\mu_{i}} \\ &=& 1- 2 d \int_{\delta/2 }^{\infty}{\phi(\mu_1;0;\sigma)\,d\mu_1} \\ &>& 1 - \frac{2 d}{(2 \pi)^{1/2} \sigma} \int_{\delta/2 }^{\infty}{ \exp\biggl\{-\frac{0.5 (\delta/2) \mu_1 } {\sigma^2} \biggr\}\,d\mu_1} \\ &=& 1 - \frac{2 d}{(2 \pi)^{1/2} \sigma} \frac{- \sigma^2}{0.5 (\delta/2)} \exp\{-0.5 (\delta/2) \mu_1 / \sigma^2 \} |_{\delta/2}^{\infty} \\ &=& 1 - \frac{8 d (\sigma/ \delta) }{(2 \pi)^{1/2} } \exp\biggl\{-\frac{(\delta/\sigma)^2}{8} \biggr\}. \end{eqnarray*} \upqed\end{pf} \begin{lemma} \label{lm:boundRS_EPP_new} Let $A_1,\ldots,A_m$ be a partition of an interval on $R$ such that $\lambda(A_j) \leq h$ and $\mu_j \in A_j$. Assume $C_{\delta }(y)=[y-\delta,y+\delta] \subset\cup A_j$ is an interval with center $y$ and length $\delta$. Then \[ \sum_{j=1}^m \lambda\bigl(A_j \cap C_{\delta}(y)\bigr) \phi(y, \mu_j,\sigma ) \geq 1 - \frac{ 6 h } {(2 \pi)^{1/2} \sigma} - \frac{8 (\sigma/ \delta) }{(2 \pi)^{1/2} } \exp\biggl\{-\frac{(\delta/\sigma)^2}{8} \biggr\}. \] If $C_{\delta}(y)=[y-\delta,y]$ or $C_{\delta}(y)=[y,y+\delta]$ the lower bound in the above expression should be divided by 2. \end{lemma} \begin{pf} Let $J=\{j\dvtx A_j \cap C_{\delta}(y) \subset[y-\delta,y]\}$. For any $j \in J$ and $\mu\in A_j \cap C_{\delta}(y)$, $\mu- h \leq \mu_j$ as $\lambda(A_j) < h$ and $\mu_j \in A_j$, which implies $\phi(y, \mu_j,\sigma) \geq\phi(y, \mu-h,\sigma)$. Therefore, \begin{equation} \label{eq:sum_bd_shift}\qquad \sum_{j \in J} \lambda\bigl(A_j \cap C_{\delta}(y)\bigr) \phi(y, \mu _j,\sigma) \geq \int_{\bigcup_{j \in J} [A_j \cap C_{\delta}(y)]} \phi(y, \mu -h,\sigma)\, d \mu. 
\end{equation} Note next that \begin{eqnarray*} && \int_{\bigcup_{j \in J} [A_j \cap C_{\delta}(y)]} \phi(y, \mu -h,\sigma) \,d \mu\\ &&\qquad\geq \int_{y-\delta}^{y-h} \phi(y, \mu-h,\sigma)\, d \mu= \int _{y-\delta -h}^{y-2h} \phi(y, \mu,\sigma) \,d \mu \\ &&\qquad = \int_{y-\delta}^{y} \phi(y, \mu,\sigma)\, d \mu\\ &&\qquad\quad{} - \int_{y-\delta-h}^{y-\delta} \phi(y, \mu,\sigma)\, d \mu- \int_{y-2h}^{y} \phi(y, \mu,\sigma)\, d \mu \\ &&\qquad \geq \int_{y-\delta}^{y} \phi(y, \mu,\sigma) \,d \mu- \frac{3h}{(2 \pi )^{1/2} \sigma}. \end{eqnarray*} By symmetry the same results can be obtained for $J=\{j\dvtx A_j \cap C_{\delta}(y) \subset[y,y+\delta]\}$. Thus \[ \sum_{j=1}^m \lambda\bigl(A_j \cap C_{\delta}(y)\bigr) \phi(y, \mu_j,\sigma) \geq \int_{y-\delta}^{y+\delta} \phi(y, \mu,\sigma) \,d \mu- 2 \frac{3h}{(2 \pi)^{1/2} \sigma}. \] The claim of the lemma follows by Lemma \ref{lm:boundIntBy1}. \end{pf} \end{appendix} \section*{Acknowledgments} The author is grateful to John Geweke and participants of seminars at Princeton, SBIES 09 and SITE 09 for helpful discussions. I thank Justinas Pelenis for pointing out shortcomings in several proofs. I thank an associate editor and anonymous referees for useful suggestions. All remaining errors are mine.
Ray Bradbury In Autumn Under the dusky red Martian sun a hazy glow seemed to make the rhythm of time itself slow down. The elderly couple clinked their chilled martini glasses, looked at the blue water rippling in their backyard pool, sat up straighter, scraping patio chairs on bleached white cement, and took their first sip. He was quiet for a moment. And because they could read each other's quiet like a morning newspaper, she said, "The connection thing, right?" The Martian wind blew tenderly warm off the endless red prairies onto the cool tree-lined, paved streets of the green-lawned and sidewalked settlement. The first of its kind. Now replicated like giant stepping-stones across this corner of the planet. When the first ships had arrived from earth, all those years ago, the settlements were hodgepodge mixes of architecture awkwardly blending styles from the entire American landscape circa 1930-1960. What the initial planners hadn't taken into account was that a house in Chicago looked different from one in LA, New York, or Miami. So Clarissa and Jason lived in an orange pastel one-story home perfectly suited for small town south Florida that had somehow been plunked down in the middle of a northern Illinois forest next to a deep dark scary ravine and a burbling clear stream. All of it man-made. All of it made fast. So lives could begin again. Serene and lush, but still somehow off. There really was no way at all to forget you were now on another planet. Every now and then one of them slipped and said the word "home." But they didn't use it often, because they weren't sure what it really meant. "Hey!" She laughed that same soul-lifting laugh he'd been listening to for over 40 years. "At least we can drink again and it doesn't matter because we're old!" "And we really don't even know how old either. How cool is that?" The first set of ships had in fact been pumped full of an unnamed compound that did affect memory. 
To leave one's planet without the deepest of emotional scars did take planning. One really need not remember everything. In the latter weeks of the last great war there was so very much that needed to be forgotten. That's when the migration to Mars had begun. It was with the Earth's last great gasp that Clarissa and Jason had come together. Seeing each other in the crowded hold of the ship. Strangers, who without even speaking felt that there had been some other time when they belonged together. Even if they couldn't remember when. Quiet at first, while they held back the pain of leaving. The rippling disasters across the world that had all seemed related by a bloody red thread of terror. His thoughts mired in the killing fields of Syria. The atrocities so unspeakable that even the distance of television could not blunt the pain. Her thoughts with the attack on the fresh water supplies of the United States. The drying up of drinkable water that prompted all restrictions on any kind of firearms to vanish because, as the pundits preached, "People got a right to protect their drinking water." Both of them still carrying images of the electrical wars when the power grids of nations sizzled to black, quiet and forever gone in puffs of coughing grey smoke. And then there was the day when the flowers were gone. Somewhere, something they used to call a "folk song" connected to that memory blip. But he wasn't sure how or why. No one person could remember all of it. So it was to Mars they went to try again. Because that's what humans do. The ships carried health care for all. Medicine unknown to most of those in the naked dawn. Medicine that had been kept in secret storage for the 1%, should they ever need it. Mars was a cakewalk for this crew. No one even knew how long the life span was up here. Age became an afterthought. 
That health-for-all meant that Clarissa and Jason could make love with their old bodies and somehow still feel the slippery, hard, wet, firm rhythm of strength in their blending, wrapped and exploding like a cascading tower of a joyful sun. Love that had no age. Only a rhythm. Their strange little orange pastel palace was plunked down in the settlement of Bradbury Village. An irony that made them laugh most every day. The village had a town square with a clock almost set to noon. A soda shop. A Mayor named Clem. And a 4th of July marching band. Friendly competition over whose lawn looked the best. Bradbury Village did not have everything. That's what prompted one of them to ask the other, "That connection thing." Sometimes they'd be able to tell what was missing. One of them could dream up an answer to just exactly what it was they missed about the earth. Of course there were no oceans on Mars. The rain came once every seven years. And buried deep inside their secret hearts was a memory of a city that had its own stop and start rhythm. Its parade of characters. Its snow-kissed January morning when the wind would howl and the feeling would be something they called "cold." These were times they knew where that connection was broken. But there were other times when they couldn't figure it out. Didn't know what was missing. The feeling was one of a loss they couldn't put their finger on. Those times were the toughest, but those were also the times when one of them saved the other. One of them connected the other to what had vanished in the Martian breeze. They had seen so much danger in those last days on earth, danger that overstepped every kind of boundary. Shattered all their precious crystal beliefs. They were over danger now; so much so that when they got to Bradbury Village, they created their own place of danger. They made the dark green ravine. It was a place where they could play with danger. So they'd never have to be so scared again. 
The ravine was a shadow place of tattooed October carnival barkers, circus clown shadows, and Mr. Electro, Ray Bradbury's knight from the future who would place his sword on the shoulder of all the memories of children and shout to the Martian sun, "You will live forever!" In Bradbury Village they created their danger as a way to keep themselves safe. Those first settlers built their settlements in the image of what they called back on earth, "the good old days." A manufactured world dreamed into paved roads and white picket fences by the politicians of a certain stripe in the second decade of the 21st century. A place that only really existed in dreams. But as Mars in so many ways was a dream, why not? On Mars they would say "the good old days," but no one would really know why. On earth he had been a newspaperman. The Chicago Sun Times, The Miami Herald, Albuquerque, Denver and Portland and finally the Times Picayune. All of them, of course, long gone. She had painted. Actually sold her art. Landscapes with colors that would have made O'Keeffe put down her brush and applaud. But there wasn't a lot to paint beyond their manufactured little stepping stone villages. The dry Martian landscape streaked only with the red and grey dust of other worlds that had crumbled. So there they sat with their martinis looking out at nothing. She repeated, "That connection thing," and he answered. "My love, we are okay. We do have everything we really need here. And it's not like we have to go to work every morning. We can spend our day in the soda shop on the town square. We do have everything we need. I know we're connected, to those who are left behind and those on other worlds. In fact with the earth's core cooling, someday maybe we'll even go back. We do have time my love. We do have time." To which she answered, "Okay. Nice speech, hot shot. But we're missing something. We've never talked about it. Not even once. Maybe we didn't even know how much we missed it." 
He sat up straight, swung his feet to the cement, got up and walked over to stand right in front of her. The joyful electricity of what it meant to be close, still as alive as it had always been. In both this world and the last. And he looked at her, saying nothing, but with eyes that she knew were posing the question, "Okay, now what are you talking about?" Then, saying nothing, reaching behind herself, looking up at those eyes she'd known forever, she handed him a rolled up tube of paper. And as both their hands touched this paper, something cold and dead came alive again. "My God. It's a newspaper! It is a newspaper!" "Delivered daily now. The ship will bring it in. Right before breakfast," she smiled like a sun from long ago. "With our coffee!" He ran his hands over the crinkled newsprint. As if magic was somehow now on paper. Paper where the pages turned at just the right speed. Paper and ink and print. And right then and there, Bradbury Village, Planet Mars, took one more giant step towards connecting distant stars. One more giant step towards being home. *in memory of Ray Bradbury, August 22, 1920 – June 5, 2012…on Earth. 3 Responses to "Ray Bradbury In Autumn" toritto Says: Oh Roger – I so remember reading the Chronicles in the fifties and watching "Space Cadet" on our little black and white T. V. When we landed on the moon in '69 I knew I would live to see the day a new Magellan would set a foot on Mars. I would see the great ship sail the solar wind to that distant place. It hasn't worked out that way. I won't see it…but I still can get the paper each morning. Beautiful tribute and nicely done as always. Regards. David Ramesh Says: 2013-11-15 at 1:35 am | Reply …and as I watched, the Illustrated Man turned in his deep sleep, and I noticed one last hazy red tattoo that I had somehow missed, perhaps because it was across his eyelids, visible now only because the eyes of the great man were finally, unblinkingly, closed…. Yep. . . .that's what happened. . .
https://quantumcomputing.stackexchange.com/tags/nielsen-and-chuang/new

Tag Info

0

what is being said with "a single probe within $2^n$ possible locations" It wants to say the process has translated the qubit (binary representation) into a single location in memory. What is the "probe"? And what is behind "degree of freedom"? A probe is an abstraction of a device that could visit a location (the degree of ...

1

Ancilla-free solution: replace the two controlled-SWAPs in the "summary update" of Craig Gidney's solution with controlled-$Z$s between the second and fourth qubits in the diagram, and remove the third qubit. (That is, instead of swapping $|-\rangle$ with a $|+\rangle$ state stored in the second register, conditioned on $|q\rangle$ being set to 1, ...

2

You're missing a bit of algebraic trickery. Remember that $\frac{1}{\sqrt{2}}=\sin(\pi/4)=\cos(\pi/4)$. Thus, $$\cos(\pi/8)/\sqrt{2}+\sin(\pi/8)/\sqrt{2}=\cos(\pi/8)\cos(\pi/4)+\sin(\pi/8)\sin(\pi/4)=\cos(\pi/4-\pi/8)=\cos(\pi/8)$$ by the cosine angle-difference formula. Also, be careful of signs. It might be an amplitude is $\pm\sin(\pi/8)$, but when you take the ...
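The algebraic step in that last answer is easy to verify numerically; a quick check (not from the original page) confirms that the two sides agree:

```python
import math

# cos(pi/8)/sqrt(2) + sin(pi/8)/sqrt(2) should equal cos(pi/4 - pi/8) = cos(pi/8).
lhs = math.cos(math.pi / 8) / math.sqrt(2) + math.sin(math.pi / 8) / math.sqrt(2)
rhs = math.cos(math.pi / 8)

# Agreement to floating-point precision.
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```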
<?php /** * fluxtol : float * is the maximum l2 norm for solution of the nonlinear problem. * (default is 500). */ declare(strict_types=1); namespace Inowas\Common\Modflow; class Fluxtol { /** @var float */ private $value; public static function fromFloat(float $value): Fluxtol { return new self($value); } private function __construct(float $value) { $this->value = $value; } public function toFloat(): float { return $this->value; } }
namespace MassTransit.Distributor { using System; using System.Threading; using Context; using Magnum; using Magnum.Extensions; using Messages; using Stact; using Stact.Internal; // public class Worker<TMessage> : // IWorker<TMessage>, // Consumes<WakeUpWorker>.All // where TMessage : class // { // readonly IPendingMessageTracker<Guid> _pendingMessages = new WorkerPendingMessageTracker<Guid>(); // IServiceBus _bus; // IServiceBus _controlBus; // Uri _controlUri; // Uri _dataUri; // Func<TMessage, Action<TMessage>> _getConsumer; // int _inProgress; // int _inProgressLimit = 4; // int _pendingLimit = 16; // readonly Fiber _fiber = new PoolFiber(); // UnsubscribeAction _unsubscribeAction = () => false; // bool _updatePending; // bool _wakeUpPending; // Scheduler _scheduler; // ScheduledOperation _scheduled; // // public Worker(Func<TMessage, Action<TMessage>> getConsumer) // : this(getConsumer, new WorkerSettings()) // { // } // // public Worker(Func<TMessage, Action<TMessage>> getConsumer, WorkerSettings settings) // { // if(getConsumer == null) // throw new ArgumentNullException("getConsumer"); // if(settings == null) // throw new ArgumentNullException("settings"); // // _getConsumer = getConsumer; // // _inProgress = 0; // _inProgressLimit = settings.InProgressLimit; // _pendingLimit = settings.PendingLimit; // } // // public void Consume(Distributed<TMessage> message) // { // _pendingMessages.Consumed(message.CorrelationId); // // Action<TMessage> consumer = _getConsumer(message.Payload); // // Interlocked.Increment(ref _inProgress); // try // { // RewriteResponseAddress(message.ResponseAddress); // // consumer(message.Payload); // // var consumeContext = _bus.MessageContext<Distributed<TMessage>>(); // // consumeContext.BaseContext.NotifyConsume(consumeContext, typeof (Worker<TMessage>).ToShortTypeName(), // message.CorrelationId.ToString()); // } // finally // { // Interlocked.Decrement(ref _inProgress); // // ScheduleUpdate(); // ScheduleWakeUp(); // // var 
disposal = consumer as IDisposable; // if (disposal != null) // { // disposal.Dispose(); // } // } // } // // public bool Accept(Distributed<TMessage> message) // { // if (_inProgress >= _inProgressLimit) // { // _pendingMessages.Viewed(message.CorrelationId); // return false; // } // // return true; // } // // void Consume(IConsumeContext<PingWorker> context) // { // try // { // var message = new WorkerAvailable<TMessage>(_controlUri, _dataUri, _inProgress, _inProgressLimit, // _pendingMessages.PendingMessageCount(), _pendingLimit); // _updatePending = false; // // context.Respond(message); // } // catch // { // } // } // // public void Consume(WakeUpWorker message) // { // _wakeUpPending = false; // } // // bool _disposed; // TimeSpan _availabilityInterval; // // public void Dispose() // { // Dispose(true); // } // // void Dispose(bool disposing) // { // if (_disposed) return; // if (disposing) // { // Stop(); // _fiber.Stop(); // // _controlBus = null; // _getConsumer = null; // } // // _disposed = true; // } // // public void Start(IServiceBus bus) // { // _bus = bus; // _controlBus = bus.ControlBus; // // _dataUri = _bus.Endpoint.Address.Uri; // _controlUri = _controlBus.Endpoint.Address.Uri; // // _unsubscribeAction = bus.ControlBus.SubscribeHandler<ConfigureWorker>(Consume, Accept); // _unsubscribeAction += bus.ControlBus.SubscribeContextHandler<PingWorker>(x => Consume(x)); // // _unsubscribeAction += bus.SubscribeInstance(this); // // _scheduler = new TimerScheduler(new PoolFiber()); // // _availabilityInterval = 3.Seconds(); // _scheduled = _scheduler.Schedule(_availabilityInterval, _availabilityInterval, _fiber, PublishWorkerAvailability); // } // // public void Stop() // { // if (_scheduled != null) // { // _scheduled.Cancel(); // _scheduled = null; // } // // if (_scheduler != null) // { // _scheduler.Stop(60.Seconds()); // _scheduler = null; // } // // if (_fiber != null) // { // _fiber.Shutdown(60.Seconds()); // } // // if (_unsubscribeAction != 
null) // { // _unsubscribeAction(); // _unsubscribeAction = null; // } // } // // bool Accept(ConfigureWorker message) // { // return GetType().GetGenericArguments()[0].FullName == message.MessageType; // } // // void Consume(ConfigureWorker message) // { // if (message.InProgressLimit >= 0) // _inProgressLimit = message.InProgressLimit; // // if (message.PendingLimit >= 0) // _pendingLimit = message.PendingLimit; // // ScheduleUpdate(); // } // // void ScheduleWakeUp() // { // if (!_wakeUpPending) // { // _wakeUpPending = true; // _fiber.Add(() => // { // try // { // _bus.Endpoint.Send(new WakeUpWorker()); // } // catch // { // } // }); // } // } // // void ScheduleUpdate() // { // if (!_updatePending) // { // _updatePending = true; // try // { // _fiber.Add(PublishWorkerAvailability); // } // catch // { // } // } // } // // void PublishWorkerAvailability() // { // try // { // var message = new WorkerAvailable<TMessage>(_controlUri, _dataUri, _inProgress, _inProgressLimit, // _pendingMessages.PendingMessageCount(), _pendingLimit); // _updatePending = false; // // _bus.Publish(message, context => // { // context.SetExpirationTime(SystemUtil.UtcNow + _availabilityInterval); // }); // } // catch // { // } // } // // static void RewriteResponseAddress(Uri responseAddress) // { // var context = ContextStorage.MessageContext<Distributed<TMessage>>() as ConsumeContext<Distributed<TMessage>>; // if (context != null) // { // context.SetResponseAddress(responseAddress); // } // } // } }
Q: What do you call it when you add energy to an inductor? When you add energy to a capacitor, you say that you are "charging it". (This is kind of a misnomer, since the total amount of charge in the capacitor is the same, but whatever.) But what do you call it when you put current through an inductor, and it ________es up and forms a magnetic field? Edit: Actually, using "charge" for a capacitor is not a misnomer, as shown below and in 'charge' etymology, though it leads to confusion, with people mistakenly thinking that capacitors store electric charge, when in actuality, the charge of energy just moves the electric charge from one plate to the other. A: I am going to take your question literally: " ...what do YOU call it when..." (emphasis supplied). Where I take your 'you' to be me. I call it charging (and discharging). When I was in college my teachers and fellow students called it charging (and discharging). When I was at work designing electronics, we called it charging (and discharging). The guys who I rubbed elbows with, who wound their own toroids and built power supplies, called it charging (and discharging). I also (infrequently) heard the term "energize/energizing" used; and (rarely) de-energize. That may not be politically or technically correct; but that's how the guys (and gals) I worked with, who actually made the stuff that actually flew on airplanes and spacecraft (and, indeed, enabled them to fly), talked. Nothing wrong with energize/de-energize; but charging/discharging an inductor is perfectly acceptable vernacular. Think about it from a systems or macro point of view: With a cap you push current into the device to store energy in an electric field. With an inductor you push current into the device to store energy in a magnetic field. With a battery you push current into the device to store energy in the form of a chemical reaction. Discharging extracts energy from whichever field or form is fundamental to the device. 
The inductor has the neat attribute that you can extract that energy without reversing current flow; but that fact does not demand an alternative word set to "charging/discharging," A: Putting energy into an inductor is called "energizing", and removing energy from it is "de-energizing". A: The term "charge" was used to refer to loading things with other things long before anyone knew what an electron was; the term "electric charge" derives from the earlier usage, but hardly renders the earlier usage obsolete. The act of adding compressed gas to a fire extinguisher, for example, is referred to as "charging" it, even though no electrical potential difference is induced. I would thus consider it perfectly proper to use the term "charge" with an inductor. A: I've heard the term "excitation" with respect to magnetics ... I personally use the term "ramping", as in current ramping up and ramping down. A: There is energy stored in an inductor, namely $\frac{1}{2} L I^2$. For instance, it is used as energy storage in switching power supplies. For lack of a better word, I would choose to call it charging. Also, Wikipedia does not discern between capacitance and inductance in the article time_constant, which relates to the process of releasing charge. See http://hyperphysics.phy-astr.gsu.edu/hbase/electric/indeng.html A: The word is simply energizing. It is actually used quite often when referring to superconducting magnets, which are nothing but inductors. http://en.wikipedia.org/wiki/Superconducting_magnet#Persistent_mode A: At first I was going to suggest 'currenting' since it's somewhat the opposite of 'charging' (current vs. voltage). That doesn't sound right as a verb though. I want to go with 'spin' since it connotes a continuous movement (of current) through the device.
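Since several answers lean on the stored-energy analogy, here is a small illustrative snippet (the component values are made up for the example) computing the energy "charged" into an inductor, $E = \frac{1}{2} L I^2$, alongside the capacitor analogue $E = \frac{1}{2} C V^2$:

```python
def inductor_energy(L, I):
    # Energy stored in the magnetic field of an inductor: E = (1/2) * L * I^2.
    return 0.5 * L * I ** 2

def capacitor_energy(C, V):
    # Energy stored in the electric field of a capacitor: E = (1/2) * C * V^2.
    return 0.5 * C * V ** 2

# Example: a 10 mH inductor carrying 2 A stores about 20 mJ,
print(inductor_energy(10e-3, 2.0))
# and a 100 uF capacitor charged to 20 V also stores about 20 mJ.
print(capacitor_energy(100e-6, 20.0))
```

Whichever verb you prefer, the macro picture is the same: push current in, and energy accumulates in the device's field.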
The Southern Wesleyan University Gospel Choir performs at the Nell Hobson Newton Chapel at the university Sunday afternoon. The performance was part of Gospel Sunday during the Clemson Blues Festival. (Photo: Nathan Gray) Mac Arnold stands in a field on his farm with one of his gas can guitars. (Photo: Nathan Gray) Frankie Lee Robinson, lead singer and guitarist of the Frankie's Blues Misson performs at the Nothin' but the Blues Festival at Patrick Square in Clemson Saturday afternoon. (Photo: Nathan Gray) The Greater Clemson Music Festival has undergone quite a few changes since it started in 2012. Originally known as the Nothin' but the Blues Fest, the festival covered three days and focused on the rich blues history of Clemson and some of the city's surrounding towns. In 2014 it was changed to the Clemson Blues Festival to reflect the specifics of the community it served. Last year, it became the Greater Clemson Blues Festival to showcase the many musical venues from Pendleton and Seneca to Central and Clemson. And for 2016, the name has shifted once more. Now known as the Greater Clemson Music Festival, the event features nearly two weeks of musical fun showcasing a variety of genres and covering an even larger geographical area. Not bad for a simple event that was born out of two men's ideas to highlight the Upstate's contributions to blues, jazz, gospel and more. Vincent Jackson worked with Clemson's late mayor, Larry Abernathy, to make this festival a reality. And though Abernathy passed away several years ago, Jackson kept going. "For Mayor Abernathy, this was one of his dreams," Jackson said in an interview with Vincent Harris last year. "He grew up here and went to Clemson, and there used to be a real music tradition. It was like an Athens, Georgia, or a Columbia. We had bands here. And for whatever reason, that faded away. And Larry's dream was to bring that back. He passed away before we had our first festival, and we dedicated it to him, and still do. 
It's just something very important to him and to all of us. He was instrumental in helping us in the early days." The Greater Clemson Music Festival kicks off Friday in Westminster with a concert featuring Brandon Turner and Fayssoux Starling McLean. McLean has played with famed musicians like Emmylou Harris and Kris Kristofferson as well as all over the Upstate and the country. The kickoff concert will be at the Westminster Music Hall on Main Street and tickets are available for $15 each. Other shows include new and existing events from the Jazz on the Alley concert series on Ram Cat Alley in downtown Seneca each Thursday to a special, invitation-only show featuring Wanda Johnson at Clemson Downs. There will be shows in the heart of Six Mile as well as at the Historic Hagood Mill in Pickens. "We've got a little bit of everything now," Jackson said. "We've got rock, reggae, gospel, country, and then we have blues. It's really grown." The festival will also include performances by Upstate blues legend Mac Arnold, a sold-out performance by Loretta Holloway, Freddy Vanderford and the Mill Billy Blues Band, The Tony Tidwell Band, Men of Distinction, the Clemson University Jazz Ensemble, The Wobblers and many more. The shows will be at Cox Hall, Southern Wesleyan University and Patrick Square among others. They are also featuring a series of CATBUS historical tours throughout the week with musical significance. Jackson and his staff are all volunteers, working to put the festival on for fun and to help their community. Proceeds from the Greater Clemson Music Festival have traditionally gone to the musicians and to charity, and this year is no exception. A portion of the festival's profits will benefit the Clemson Sertoma Club's Camp Sertoma, Pickens Co. Meals on Wheels and Mac Arnold's I Can Do Anything Foundation, which partners with school music departments to create events that spotlight student musicians.
This year's festival has also garnered more than 20 sponsors, ranging from the Clemson Area Chamber Of Commerce to Edward Jones and CATBUS. "I remember when it was just an idea," Johnson said. "I'm super happy to perform because I know it was Larry Abernathy's dream to have something like this. I know how much he would've liked to have seen this. And to be a part of this makes it so special to me. I can't speak of or think about (the festival) without thinking of Larry. They go hand-in-hand for me." The Greater Clemson Music Festival will begin Friday and run through April 24. For more information, visit their website at www.clemsonmusicfest.org.
Moropus is a genus of mammals (class Mammalia) in the order of odd-toed ungulates (Perissodactyla), belonging to the extinct family Chalicotheriidae. Overview Moropus (meaning "slow foot") was an extinct mammal belonging to the extinct family Chalicotheriidae. The chalicotheres were odd-toed ungulates, and so were related to horses, rhinoceroses and tapirs. Moropus lived during the Miocene epoch. Like other chalicotheres, Moropus had large claws on its front legs, which the animal used for defence and/or for foraging. Moropus stood 240 centimetres tall at the withers. Classification The genus contains the following 7 species: Moropus distans Marsh, 1877 - type species Moropus elatus Marsh, 1877 Moropus hollandi Peterson, 1907 Moropus matthewi Holland & Peterson, 1913 Moropus merriami Peterson, 1914 Moropus oregonensis Leidy, 1873 Moropus senex Marsh, 1877 Fossil sites Moropus fossils have been discovered in North America. Sources Answers.com Palmer, D., ed. (1999). The Marshall Illustrated Encyclopedia of Dinosaurs and Prehistoric Animals. London: Marshall Editions. p. 261. O. C. Marsh. 1877. Notice of some new vertebrate fossils. American Journal of Arts and Sciences 14:249-256. D. Geraads, E. Tsoukala, and N. Spassov. 2007. A skull of Ancylotherium (Chalicotheriidae, Mammalia) from the late Miocene of Thermopigi (Serres, N. Greece) and the relationships of the genus. Journal of Vertebrate Paleontology 27(2):461-466. O. A. Peterson. 1907. Annals of Carnegie Museum 4(3). M. C. Coombs, R. M. Hunt, E. Stepleton, L. B. Albright, III, and T. J. Fremd. 2001. Stratigraphy, chronology, biogeography, and taxonomy of early Miocene small chalicotheres of North America. Journal of Vertebrate Paleontology 21(3):607-620. Categories: Odd-toed ungulates | Mammal genera | Miocene fauna of North America | Fossil odd-toed ungulates
Vale de Estrela is a Portuguese parish (freguesia) in the municipality of Guarda, with an area of 13.94 km² (Carta Administrativa Oficial de Portugal, CAOP 2013, IGP Instituto Geográfico Português, http://www.dgterritorio.pt/ficheiros/cadastro/caop/caop_download/caop_2013_0/areasfregmundistcaop2013_2) and 394 inhabitants (2011). Its population density is 28.3 inhabitants/km². The parish was named Porcas until 16 January 1928. It also includes the village of Albardeiros. It was one of the first villages in the municipality of Guarda to be electrified; the Guarda Town Council was thereby repaying the parish for the water consumed in the city, which came from a spot that has always been known as "O Poço" ("The Well"), on the Quinta da Montanheira estate, where the old machinery that pumped the water piped into the city for domestic use can still be seen. Because the parish lies very close to the city and offers varied terrain and vegetation, it was also often chosen by the now-disbanded Infantry Regiment No. 12 (RI 12) of Guarda for camps and training exercises simulating the former colonial war, and military columns could frequently be seen "invading and taking the village". Near the village stands a stone cross known as the "Marco das Três Bacias" ("Landmark of the Three Basins"), erected at the point where the watersheds of the rivers Douro, Tejo and Mondego converge. 
Indeed, leaving Vale de Estrela towards Manteigas, one picks up the track (paved at first, then dirt) that leads easily to this cross (rebuilt at the end of the 20th century), which stands next to the former football field. From that height one can see that one slope drains into the Vela stream, which feeds the river Zêzere and therefore the Tejo; another slope drains into the Quinta das Cabras stream and then into the river Côa and the Douro; and the third slope drains into the Corujeira stream, which joins the Mondego. From this point one can comfortably continue, in any vehicle, along the wind turbines to "Penedo Depois" and enjoy the full 360-degree view. Continuing to the parish of Aldeia do Bispo, one turns towards Albardeiros, Fontão and Portomé and, through the valley, reaches Vela. And for those who enjoy driving on dirt roads or unusual routes: leave Guarda by the prison/cemetery road, pass the spot where the RI 12 troops practised shooting (a firing range still used by paramilitary forces) and the aforementioned intermediate water-pumping station, go down to the parish church of Vale de Estrela, and then either turn right onto a narrow paved road to reach Maçainhas or turn left halfway along to reach Corujeira. Population ★ From 1864 to 1920 the parish was named Porcas. By Decree No. 14,912 of 16 January 1928 it acquired its current name. 
Totals by year: Year:  1864 1878 1890 1900 1911 1920 1930 1940 1950 1960 1970 1981 1991 2001 2011 2021 Total:  592  663  747  701  704  574  611  650  677  532  397  408  414  418  394  355 (Age-group breakdowns are available for 2001, 2011 and 2021.) Parishes of Guarda
module.exports = {
	// Announce an update: temporarily make the updates role mentionable,
	// post the release notes as a diff code block, then lock the role again.
	process: async message => {
		const updateInfo = bot.config.bot.updates;
		if(bot.config.beta) return "This is the beta bot!";
		if(!updateInfo) return "No update object was set in the config";
		if(!updateInfo.channel) return "A channel id to release the updates in has not been set in the config";
		if(!updateInfo.guild) return "A guild id which the updates role belongs to has not been set in the config";
		if(!updateInfo.role) return "An updates role has not been set in the config";

		try {
			await bot.editRole(updateInfo.guild, updateInfo.role, { mentionable: true });
			await bot.createMessage(updateInfo.channel, `<@&${updateInfo.role}>\n${bot.utils.codeBlock(message.args[0], "diff")}`);
			await bot.editRole(updateInfo.guild, updateInfo.role, { mentionable: false });
			return "Update released";
		} catch(err) {
			return `Error during releasing update: ${err.message}`;
		}
	},
	caseSensitive: true,
	description: "Release an update",
	args: [{ type: "text", label: "update" }]
};
# Classification with I-priors

The I-prior methodology is extended from the continuous response case to the categorical response case - we call this the I-probit model. Estimation involves some form of approximation, as the marginal density cannot be found in closed form.

# Categorical Responses

Suppose that each of the response variables $y_i$ takes on one of the values from $\{1,\dots,m\}$, and that

$$y_i \sim \text{Cat}(p_{i1}, \dots, p_{im})$$

with probability mass function

$$p(y_i) = \prod_{j=1}^m p_{ij}^{y_{ij}}, \qquad y_{ij} = [y_i = j],$$

satisfying $p_{ij} \geq 0$ and $\sum_{j=1}^m p_{ij} = 1$.

The categorical distribution is a special case of the multinomial distribution, and can be seen as a generalisation of the Bernoulli distribution. Here, we have used the notation $[\cdot]$ to denote the Iverson bracket: $[A]$ equals one if the proposition $A$ is true, and zero otherwise.

The assumption of normality on $y_i$ is now highly inappropriate. In the spirit of generalised linear models, we model instead

$$\text{E}[y_{ij}] = p_{ij} = g^{-1}\big( f_j(x_{ij})\big)$$

using some link function $g:[0,1] \to \mathbb R$ and a regression function for each class $j$, on which an I-prior is specified. As we will see later, the probit link $g = \Phi^{-1}$ is preferred, where $\Phi$ is the cumulative distribution function (CDF) of a standard normal distribution.

### Binary Responses

In the simplest case, where $m=2$, each $y_i$ follows a Bernoulli distribution with success probability $p_i$. The probit link can be motivated through the use of continuous underlying latent variables $y_1^*,\dots,y_n^*$ such that

$$y_i = \begin{cases} 1 & \text{if } y_i^* \geq 0 \\ 0 & \text{if } y_i^* < 0. \end{cases}$$

We can then model these auxiliary random variables $y_i^*$ using an I-prior as usual (cf. Model 1) with fixed error precision $\Psi = I_n$. Thus,

$$p_i = \text{P}(y_i = 1) = \text{P}(y_i^* \geq 0) = \text{P}\big(f(x_i) + \epsilon_i \geq 0\big) = \Phi \big(f(x_i) \big).$$

There is no loss of generality compared with using an arbitrary threshold $\tau$ (other than zero) for the $1\text{-}0$ determination, or a precision $\Psi$ (other than identity) for the error terms $\epsilon_i$.

### Multinomial Responses

The approach we take is to model each class probability $p_{ij}$ using separate regression functions $f_j$ and separate I-priors (hence the index $j$ on the functions). In the most general setting there would be $m$ sets of hyperparameters to estimate (one for each class), though it is possible to assume some common values among classes.

Using a latent variable motivation similar to the binary case, we find that

$$p_{ij} = \text{E}_Z\Bigg[\mathop{\prod_{k=1}^m}_{k \neq j} \Phi\big(Z + f_j(x_i) - f_k(x_i)\big) \Bigg]. \tag{3}$$

For $m > 3$ this is known not to have a closed-form expression, but it is nonetheless easily evaluated using quadrature methods.

It is also possible to reparameterise the model by anchoring on one latent variable as the reference class and working with the latent differences, so that only $m - 1$ I-priors are required. It is easily seen that with $m=2$ this approach reduces to the binary model described above.

# Estimation

Unlike the normal regression model, the marginal likelihood

$$p(\mathbf y) = \int \prod_{i=1}^n \prod_{j=1}^m \left[ \big\{ g^{-1}\big(f_j(x_i)\big) \big\}^{[y_i=j]} \cdot \text{N}_n (\mathbf{f}_{0j}, \mathcal I[f_j]) \, \text{d}\mathbf f_j \right],$$

on which the posterior depends, is no longer available in closed form. Several methods can be employed to overcome this intractable integral by approximating the true posterior density, in order to obtain estimates of the hyperparameters. These are described below, in an order analogous to the methods described for the normal regression model.

### Laplace's Method

Suppose that we are interested in

$$p(\mathbf f \vert \mathbf y) \propto p(\mathbf y \vert \mathbf f) p(\mathbf f) =: e^{Q(\mathbf f)},$$

with normalising constant $p(\mathbf y) = \int e^{Q(\mathbf f)} \, \text{d}\mathbf f$ (the marginal). The Taylor expansion of $Q$ about its mode $\mathbf f^*$,

$$Q(\mathbf f) \approx Q(\mathbf f^*) - \frac{1}{2} (\mathbf f - \mathbf f^*)^\top A (\mathbf f - \mathbf f^*),$$

is recognised as the logarithm of an unnormalised Gaussian density, with $A = -\text{D}^2 Q(\mathbf f^*)$ being the negative Hessian of $Q$ evaluated at $\mathbf f^*$. Therefore, the posterior density $p(\mathbf f \vert \mathbf y)$ can be approximated by $\text{N}_n(\mathbf f^*, A^{-1})$, and the marginal by

$$p(\mathbf y) \approx (2\pi)^{n/2} \vert A \vert^{-1/2} p(\mathbf y \vert \mathbf f^*) p(\mathbf f^*).$$

The marginal density can then be maximised with respect to the hyperparameters using Newton-based methods. However, each Newton step would require finding the posterior mode $\mathbf f^*$, which is difficult for very large $n$.

### Variational Approximation

An approximation $q(\mathbf f)$ to the true posterior density $p(\mathbf f \vert \mathbf y)$ is considered, with $q$ chosen to minimise the Kullback-Leibler divergence (under certain restrictions),

$$\text{KL}(q \,\|\, p) = - \int \log \frac{p(\mathbf f \vert \mathbf y)}{q(\mathbf f)} q(\mathbf f) \, \text{d}\mathbf f.$$

The name "variational" stems from the fact that we seek to minimise a functional (the Kullback-Leibler divergence), which uses calculus-of-variations techniques. Of course, it would be impossible to minimise the KL divergence over all possible functions $q$, so some restrictions are required. We use the mean-field factorisation assumption, which considers only densities that factorise completely over their components, i.e. densities of the form $q(z_1, \dots, z_N) = \prod_{i=1}^N q(z_i)$.

By assuming priors on the hyperparameters $\mathbf \theta$, we work in a fully Bayesian setting and append these model hyperparameters to $\mathbf f$ to form $\mathbf z = (\mathbf f, \mathbf \theta)$, obtaining a variational approximation to the posterior density $p(\mathbf z \vert \mathbf y)$. The result is a sequential updating scheme similar to the EM algorithm.

This variational-EM algorithm works harmoniously with exponential-family distributions, and as such the probit link provides an advantage over other link functions such as the more popular logit. In fact, all of the required posterior densities, with the exception of those for the $y_i$, involve the normal distribution. The posterior distribution for $y_i$ is of course categorical.

The marginal likelihood is approximated by a quantity known as the variational lower bound, given by $\mathcal L = \text{E}_{\mathbf z}[\log p(\mathbf y, \mathbf z)] - \text{E}_{\mathbf z}[\log q(\mathbf z)]$, where the expectation is taken over the approximate posterior distribution $q$.

### Markov Chain Monte Carlo

In keeping with the Bayesian theme, MCMC samplers such as Gibbs or Hamiltonian Monte Carlo can also be used to estimate these I-probit models. MCMC is a form of stochastic approximation which guarantees asymptotically exact results. However, in our experience, these methods can be computationally slow, and sampling difficulties often arise which result in unreliable posterior samples.

# Modelling and Prediction

The advantages of I-priors in the normal model extend to the I-probit model. This includes being able to model various types of categorical response regression simply by choosing appropriate kernel functions for the covariates.

For prediction purposes, we can derive the posterior predictive class probabilities given a new data point $x_\text{new}$ as follows:

$$\text{P}(y_\text{new} = j \vert \mathbf y) \approx \int \prod_{j=1}^m \Big[ p(y_{\text{new},j} \, \vert \, f_{\text{new},j}) q(f_{\text{new},j}) \, \text{d} f_{\text{new},j} \Big],$$

where $f_{\text{new},j} = f_j(x_\text{new})$ and the approximate posterior density $q$ is used. This integral reduces to an expectation of products of standard normal CDFs (similar to (3)).

For examples of I-probit models used for binary and multiclass classification, meta-analysis, and spatio-temporal modelling, see the Examples section.
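Equation (3) can be evaluated numerically with one-dimensional quadrature, as noted above. The following is a minimal sketch (not part of the original page; the latent values `f` are made up for illustration) that approximates the class probabilities by Gauss-Hermite quadrature over $Z \sim \text{N}(0,1)$ and checks that they sum to one.

```python
import numpy as np
from scipy.stats import norm

def iprobit_class_probs(f, n_quad=50):
    """Approximate p_j = E_Z[ prod_{k != j} Phi(Z + f_j - f_k) ] for Z ~ N(0, 1)
    via Gauss-Hermite quadrature. `f` holds latent values (f_1(x), ..., f_m(x))."""
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    z = np.sqrt(2.0) * x                  # change of variables: e^{-x^2} weight -> N(0,1)
    m = len(f)
    p = np.empty(m)
    for j in range(m):
        diffs = f[j] - np.delete(f, j)    # f_j - f_k for all k != j
        # integrand at each quadrature node: prod_k Phi(z + f_j - f_k)
        vals = np.prod(norm.cdf(z[:, None] + diffs[None, :]), axis=1)
        p[j] = np.dot(w, vals) / np.sqrt(np.pi)
    return p

probs = iprobit_class_probs(np.array([0.5, -0.2, 0.1]))
# probabilities are positive and sum to one up to quadrature error
```

With the latent-difference reparameterisation mentioned above, the same routine applies after anchoring one class at zero.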
# Bounded Piecewise Continuous Function is Riemann Integrable

## Theorem

Let $f$ be a real function defined on the closed interval $[a \,.\,.\, b]$.

Let $f$ be piecewise continuous and bounded on $[a \,.\,.\, b]$.

Then $f$ is Riemann integrable on $[a \,.\,.\, b]$.

## Proof

We are given that $f$ is piecewise continuous and bounded on $[a \,.\,.\, b]$.

Therefore, there exists a finite subdivision $\{x_0, x_1, \ldots, x_n\}$ of $[a \,.\,.\, b]$, where $x_0 = a$ and $x_n = b$, such that for all $i \in \{1, 2, \ldots, n\}$:

$f$ is continuous on $(x_{i-1} \,.\,.\, x_i)$
$f$ is bounded on $[x_{i-1} \,.\,.\, x_i]$.

Note that $n$ is the number of intervals $(x_{i-1} \,.\,.\, x_i)$ defined by the (finite) subdivision $\{x_0, x_1, \ldots, x_n\}$.

We shall use proof by induction on these $n$ intervals.

For all $k \in \{1, 2, \ldots, n\}$, let $P(k)$ be the proposition:

$f$ is Riemann integrable on $[x_0 \,.\,.\, x_k]$.

### Basis for the Induction

$P(1)$ is the case:

$f$ is Riemann integrable on $[x_{i-1} \,.\,.\, x_i]$

for an arbitrary $i \in \{1, 2, \ldots, n\}$.

That $f$ is piecewise continuous and bounded means, for the case $n = 1$:

$f$ is continuous on $(x_{i-1} \,.\,.\, x_i)$
$f$ is bounded on $[x_{i-1} \,.\,.\, x_i]$.

By Bounded Function Continuous on Open Interval is Riemann Integrable, $f$ is Riemann integrable on $[x_{i-1} \,.\,.\, x_i]$.

Thus $P(1)$ is seen to hold.

This is the basis for the induction.

### Induction Hypothesis

Now it needs to be shown that, if $P(k)$ is true, where $k \ge 1$, then it logically follows that $P(k+1)$ is true.

So this is the induction hypothesis:

$f$ is Riemann integrable on $[x_0 \,.\,.\, x_k]$

from which it is to be shown that:

$f$ is Riemann integrable on $[x_0 \,.\,.\, x_{k+1}]$.

### Induction Step

This is the induction step:

By definition of a bounded piecewise continuous function, for every $i \in \{1, 2, \ldots, k, k+1\}$:

$f$ is continuous on $(x_{i-1} \,.\,.\, x_i)$
$f$ is bounded on $[x_{i-1} \,.\,.\, x_i]$.

By the induction hypothesis, $f$ is Riemann integrable on $[x_0 \,.\,.\, x_k]$.

From the basis for the induction, $f$ is Riemann integrable on $[x_k \,.\,.\, x_{k+1}]$.

We have that $f$ is Riemann integrable on $[x_0 \,.\,.\, x_k]$ and on $[x_k \,.\,.\, x_{k+1}]$.

Therefore, $f$ is Riemann integrable on $[x_0 \,.\,.\, x_k] \cup [x_k \,.\,.\, x_{k+1}]$ by Existence of Integral on Union of Adjacent Intervals.

We have that:

$[x_0 \,.\,.\, x_{k+1}] = [x_0 \,.\,.\, x_k] \cup [x_k \,.\,.\, x_{k+1}]$.

Accordingly, $f$ is Riemann integrable on $[x_0 \,.\,.\, x_{k+1}]$.

So $P(k) \implies P(k+1)$, and the result follows by the Principle of Mathematical Induction.

Therefore:

$f$ is Riemann integrable on $[a \,.\,.\, b]$.

$\blacksquare$
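As a quick numerical companion to the theorem (not part of the ProofWiki entry), the sketch below approximates the integral of a bounded function with a single jump discontinuity, i.e. a simple piecewise continuous function, and recovers the exact value; the step function and partition size are illustrative only.

```python
def left_riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f on [a, b] over n uniform subintervals."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def step(x):
    """Bounded, piecewise continuous on [0, 1]: jump at x = 1/2."""
    return 0.0 if x < 0.5 else 1.0

# Exact integral of `step` over [0, 1] is 1/2; the Riemann sums converge to it.
approx = left_riemann_sum(step, 0.0, 1.0, 1000)
```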
Moving Forward: Reno Reacts To Nevada Basketball's Steve Alford

KUNR Public Radio | By Stephanie Serrano
Published April 14, 2019 at 10:57 PM PDT

Nevada men's basketball coach Steve Alford speaks in front of almost 1,000 Northern Nevada community members during the announcement of his hire Friday. (Photo: Stephanie Serrano)

Wolf Pack fans have been adjusting to the news that Coach Eric Musselman has accepted a new position in Arkansas and Steve Alford has taken his place. The local basketball community gathered Friday to welcome the new coach. KUNR's Stephanie Serrano was there to capture reactions to this surprising series of events, which unfolded pretty quickly. Close to 1,000 people showed up at the Lawlor Event Center to welcome Steve Alford, who was named the Nevada men's basketball head coach during a public announcement. The Wolf Pack Marching Band opened the event while community members took their seats in the arena, staring down at the basketball court, which had a layer of blue and white confetti sprinkled on top of it. The crowd was a mix of longtime fans and students, including C.J. Christensen, a junior at the university. He says he is optimistic about the new coach, given Alford's prior experience at UCLA, where he led the Bruins to four NCAA Tournament appearances, taking them all the way to the Sweet 16 three times. Christensen says attending games in the student section is what he's looking forward to. "Muss was great, but he's [Alford] going to be right there with Muss and maybe not take his shirt off after we win the Mountain West but still be trying to win Mountain West Tournaments," Christensen said. "I've been to the Super Bowl, I've been to everything and there is nothing like being front row at the Nevada student section, screaming your head off, yelling at the referees, getting excited about a dunk; there's nothing like it."
Musselman's departure left some Nevadans surprised and disappointed, but Brian Park-Li says it is evolving into an opportunity for people to come together. "It hurt, and I think it was a little bit bitter for a while, but I think people are rallying around him leaving, like we all feel it together, 'Hey, he left all of us, not just the team,'" Park-Li said. Musselman is gone, and Alford has inherited a basketball program that has reached a new level of success over the last few years, winning the last three Mountain West regular season titles. Confetti, fireworks and Wolf Pack fans filled the Lawlor Event Center to welcome Steve Alford. (Credit: Stephanie Serrano / KUNR Public Radio) "I know today with all the fireworks and festivities, today is a lot about me, and I hope that's it," Alford said. "It's great, it's enthusiastic, and I'm excited about it, but my coaching style is about us. It's not about a single player; it's not about a single coach. It's about us doing this together. I want the team to be the focal point. They have a lot of work to do here in the spring, summer, and fall. We have a great schedule I've already put in place." In the past few years, Musselman created community involvement within the basketball program, re-energizing the local fanbase. In his first speech, Alford pledged his commitment to do the same. That's important for fans, like Dale Clark, who has been a loyal supporter for 15 years. Under Musselman, the public was able to sit in on open practices, which gave Clark the opportunity to build a personal connection with the team. "It'll be interesting to see what coach Alford's philosophy is," he said. "Musselman was very open and let the community in, and, frankly, that's pretty rare in the basketball world, so I don't know if coach Alford will have that same kind of open door philosophy, but I hope he does. I love going to those practices." Alford comes with a suitcase of experience having coached at five universities.
He's also a gold medal Olympian with experience playing in the NBA. As a college head coach, he's led his teams to the NCAA Tournament in 14 different seasons, 11 times at the Division I level. He's also coached 11 NBA draft selections, and seven of those players were chosen in the first round. Alford says he chose Nevada for three reasons: the fans, the players and the climate. The coach says he's excited to build the team's culture and identity in order to out-work and out-think the competition. Stephanie Serrano (she/her/ella) is an award-winning multimedia bilingual journalist based in Reno, Nevada. Her reporting is powered by character-driven stories and is rooted in sound-rich audio. Her storytelling works to share the experiences of unserved communities in regards to education, race, affordable housing and sports.
avr-libc-dev

## Re: [avr-libc-dev] Re: circ dep fix

From: Joerg Wunsch
Subject: Re: [avr-libc-dev] Re: circ dep fix
Date: Sun, 4 Aug 2002 23:12:43 +0200
User-agent: Mutt/1.2.5i

As Theodore A. Roth wrote:

> > The only problem is that starting with doxygen 2.1.17, it needs this
> > hack where doxygen.config gets copied into a second file, so doxygen
> > can be run another time with USE_PDFLATEX = YES.

> I think the reason for this is that pdflatex "extends" the latex
> command set. Thus, you can't just run the tex source through latex because
> of the extra commands.

As far as I can tell, they both operate on the same LaTeX input files. Regular latex either ignores them, or embeds them as \special directives to be processed by the respective output device.

Anyway, here's a patch that should do the right thing for both doxygen 1.2.17 as well as for the older version.

Please note that I'm fiddling with the TOC creation values using some sed(1) magic. The default of only including top-level section names in the TOC yields a pretty silly PDF file where you can't really use the TOC at the left side in Acrobat Reader. Bumping that level from 1 to 3 gives more detail and thus makes it more closely resemble the generated HTML code. Alas, bumping that level is not a supported doxygen option, thus the sed magic. Sorry for non-Unix users; I hope you've all got sed around.

Btw., I think even the PS version should include one additional level in the TOC.

--
J"org Wunsch Unix support engineer
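The "sed magic" described in the message might look like the sketch below. The target file name (refman.tex) and the exact \setcounter line are assumptions for illustration, not taken from the actual patch; a stand-in file is created so the snippet is self-contained.

```shell
# Hypothetical sketch: doxygen offers no supported option for a deeper TOC,
# so the generated LaTeX is patched to raise tocdepth from 1 to 3.
printf '%s\n' '\setcounter{tocdepth}{1}' > refman.tex   # stand-in for doxygen output
sed -e 's/tocdepth}{1}/tocdepth}{3}/' refman.tex > refman.tex.new
mv refman.tex.new refman.tex
```

The same substitution could be applied to the LaTeX used for the PS output, per the closing remark.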
{"url":"http:\/\/dave.thehorners.com\/tech-talk\/science-and-math\/288-cognitive-science","text":"Dave Horner's Website - Yet another perspective on things...\n132 guests\nRough Hits : 2701617\nhow did u find my site?\n\nwhat's a matter\n\n\"Whenever two men meet there are really six people present. There is each man as he sees himself, each man as the other sees him, and each man as he really is.\" -William James\n$$e = \\sum_{n=0}^\\infty \\frac{1}{n!}$$\n\n# Cognitive Science\n\nMonday, 09 April 2007 03:29\nCognitive science - Wikipedia, the free encyclopedia\n\n< Prev\u00a0 Next >\nLast Updated on Wednesday, 26 June 2013 22:56","date":"2017-01-22 14:24:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.29108095169067383, \"perplexity\": 12486.667094593267}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-04\/segments\/1484560281426.63\/warc\/CC-MAIN-20170116095121-00011-ip-10-171-10-70.ec2.internal.warc.gz\"}"}
null
null
Adverplanner | Tom Gibby

About

Originally from England, Tom joined Droga5 as a Strategy Director after nearly three years at Wieden+Kennedy New York, where he led interactive strategy on campaigns for brands including Squarespace, Southern Comfort, Heineken, Delta and Spotify. Before moving over to New York, Tom spent six years in London working at agencies such as Poke, Blue Hive and Wunderman. His work has been awarded by the likes of Cannes Lions, One Show, Clio Awards, ANDYs, D&AD and The Webby Awards.

In his own time, Tom has also worked on digital-marketing campaigns for artists including Calvin Harris, Hardwell, One Direction and The Rolling Stones, helping to launch more than six number-one singles and 12 number-one albums. He's also the co-founder of The Bot Platform, a Facebook-recommended development partner for building bots on Messenger and Workplace. A keen skier, you'll find Tom in various snow-covered mountains around the world for at least 10 days a year, most likely using SnowBuddy, a winter sports app he developed with some friends and is shamelessly plugging here.

Hello! Let's work together.

1 x Gold, 2 x Silver, 10 x Shortlist
Cannes Creative Effectiveness: Bronze x 1
8 x Gold, 2 x Silver, 11 x Merit
1 x Gold, 1 x Silver, 2 x Bronze
3 x Winner, 5 x Finalist, 4 x Honoree
1 x Graphite, 3 x Wood
1 x Gold
2 x Winner
2 x Shortlist
5 x Finalist
5 x Gold, 2 x Bronze, 2 x People's Lovie
1 x Andy Finalist
5 x FWA Grand Prix, Best Cross Platform Campaign
Creative Review Annual Winner
The golden-headed manakin (Czech: pipulka zlatohlavá) is a small bird of the suborder Tyranni.

Occurrence
It lives in South America, north of the Amazon River.

Description
It reaches a length of 9 centimetres and a weight of about 12 grams. Males have a black body with brilliant yellow plumage on the top of the head; females are coloured grey-green.

Way of life
The golden-headed manakin leads an arboreal life in rainforests up to an elevation of 1500 metres. It is considered the most abundant species among the manakins. It feeds on insects and fruit.

Reproduction
The golden-headed manakin is known for its remarkable courtship displays, in which groups of up to twelve males perform complex acrobatics and emit loud chirring sounds. Once they attract a female, mating takes place. The female then builds a sparse nest on her own, into which she lays 2 cream-coloured, brown-patterned eggs. These hatch in 12–14 days. These manakins live up to 15 years.

References

External links
http://www.oiseaux-birds.com/card-golden-headed-manakin.html
http://neotropical.birds.cornell.edu/portal/species/overview?p_p_spp=505196
https://web.archive.org/web/20140714165210/http://www.surinambirds.com/passeriformes-pipridae-golden-headed-manakin-pipra-erythrocephala

Pipridae (manakins) | Fauna of South America
Q: Which design pattern(s) to use for a dynamic/modular "event" system? I'm trying to create a kind of event system (for lack of a better term) for a game I'm making. What I need is a set of (hardcoded) 'core' functions, which I can link together in an arbitrary way, define as an "event", and then execute with some arbitrary parameters. These "events" would be relatively simple if they were hardcoded, e.g.

hardcodedEvent(int this, int that) {
    coreFunc1(this);
    coreFunc2(that);
    coreFunc3(coreFunc4(this + that));
    // etc...
}

But the entire point is that they need to be dynamic and modular, so that you would be able to, in theory, construct and represent these events in a flow-chart/diagram-like way (example: the flow-chart editor from 3D software). Eventually, I would need these events to be serializable, so that I could save and load them as files or to/from a database. I've looked at the Callback, Command, Observer and State machine patterns, but I don't know which one(s) would be best suited for something like this, and I haven't worked much with any of them before.

A: To me you have in mind exactly the kinds of design patterns that would be useful for your problem; it's mostly a matter of combining them. If I understand correctly, you are making a game with some events (messaging, information exchange, ...) and with some save/load/log logic (file save, database, ...) at some point.

State : for state-machine behavior with modular states; well suited to the game itself, according to your "flow chart" description, but less so to the event system
Observer : for a publish/subscribe logic (information sharing, ...)
Callback : for a loosely coupled logging system (syncing things, logging to file)
Command : for encapsulating requests as objects; maybe the least useful here

I would add two others to the list:

Decorator : because your game components and events seem to be "chainable" in a modular way (i.e., chaining core functions to make more evolved ones)
Builder : for the part where you want to load a game from a file. It could crawl your file and recreate the game in the state in which it was saved, encapsulating a heavy loading logic...

Hope it helps.
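To make the Command/Decorator-style chaining concrete, here is a minimal Python sketch (all names — `CORE_FUNCS`, `Event`, the sample functions — are illustrative, not from any real framework): an event is just an ordered list of core-function names, so it executes dynamically and serializes trivially.

```python
# A minimal sketch of a serializable, modular "event" built from a registry
# of hardcoded core functions. Because the event is plain data (a list of
# step names), it can be edited in a flow-chart UI or stored in a database.
import json

# Registry of the hardcoded 'core' functions, addressable by name.
CORE_FUNCS = {
    "inc":    lambda x: x + 1,
    "double": lambda x: 2 * x,
    "neg":    lambda x: -x,
}

class Event:
    """An ordered chain of core-function names, applied left to right."""

    def __init__(self, steps):
        self.steps = list(steps)

    def run(self, value):
        for name in self.steps:
            value = CORE_FUNCS[name](value)
        return value

    # Serialization: the whole event is just its list of step names.
    def to_json(self):
        return json.dumps({"steps": self.steps})

    @classmethod
    def from_json(cls, text):
        return cls(json.loads(text)["steps"])

event = Event(["inc", "double", "neg"])       # neg(double(inc(x)))
print(event.run(3))                           # -> -8
restored = Event.from_json(event.to_json())
print(restored.run(3))                        # -> -8
```

A linear chain is the simplest case; the same idea extends to a tree or DAG of named nodes (for nested calls like `coreFunc3(coreFunc4(...))`) while keeping the serialized form plain data.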
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,353
\section{Introduction} Quantum statistics for bosons and fermions are derived in the context of finite temperature field theory from the corresponding free Lagrangian; see, for example, \cite{kapusta,laine}. The main point that I wish to address in this paper is whether other kinds of statistics, introduced through different statistical weights, may be useful to describe nature. The motivation of this search lies in dark energy. The observed accelerated expansion of the universe implies a negative pressure that is attributed to dark energy \cite{hogan,planck}. But an ideal gas of fermions or bosons has positive pressure; it is necessary to assume some kind of unknown interaction in the Lagrangian to solve this problem \cite{amendola,gott}. Here I consider a different approach. We can keep the free Lagrangian of non-interacting particles but take into account the possibility of statistics other than those corresponding to fermions or bosons. The possibility that a negative pressure can be obtained from non-interacting particles with appropriate statistics was recently analyzed in reference \cite{hoyusist}. The conjecture that non-interacting particles have free diffusion in energy space was shown to lead to the known distributions of Fermi-Dirac, Bose-Einstein and Maxwell-Boltzmann, and to a new one: ewkons. The occupation number for ewkons has the shape of an exponential, like that of classical particles, but shifted by a given positive quantity. They have negative pressure \cite{hoyusist,hoyuelos}; moreover, the ratio between pressure and energy density is close to $-1$ [see Eq.\ (40) in Ref.\ \cite{hoyuelos} for a non-relativistic gas of ewkons]. 
There are several examples of previous works on exotic or intermediate statistics that go beyond fermions or bosons, useful for describing specific systems, in some cases as a way to incorporate interactions; see, for example, \cite{gentile,green,katsura,wilczek,greenberg,haldane,isakov,isakov2,poly,anghel,dai}; for a review, see \cite{khare}. I consider a system composed of non-interacting particles without spin, described by a scalar field $\phi$ that obeys the Klein-Gordon equation. The assumption of equal statistical weights for the allowed number states implies that such a system in equilibrium has an occupation number given by the Bose-Einstein distribution. Nevertheless, there may be situations in which that assumption is not appropriate. In Sect.\ \ref{variable}, I analyze the different statistical weights of number states that lead to the aforementioned distributions for bosons, fermions, classical particles and ewkons. It is possible to reproduce such distributions by evaluating the partition function in the context of quantum field theory, that is, evaluating the trace in the basis of eigenstates of the field operator. A first step is accomplished in Sect.\ \ref{0+1}, where the partition function for the harmonic oscillator is obtained; this is equivalent to a $0+1$ dimensional theory. The extension to a $d+1$ dimensional spacetime is presented in Sect.\ \ref{d+1}. Conclusions are drawn in Sect.\ \ref{conclusions}. \section{Counting operator} \label{variable} Let us consider a system in contact with a heat reservoir at temperature $T$. Besides heat, system and reservoir can also exchange particles. In the grand canonical ensemble it is assumed that the reservoir is also a reservoir of particles that imposes a constant chemical potential per particle $\mu$. 
For a quantum system with Hamiltonian $\hat{H}$ and number of particles operator $\hat{n}$, the grand partition function is \begin{equation} \mathcal{Z} = \text{tr}\, e^{-\beta (\hat{H}- \mu \hat{n})} \label{defZ} \end{equation} where the trace is evaluated using a basis of normalized states. Let us consider non-interacting particles in a harmonic oscillator with frequency $\omega$ as a preliminary step before dealing with a quantum field, which can be thought of as a set of infinitely many harmonic oscillators. The Hamiltonian is $\hat{H} = (\hat{n}+1/2)\hbar \omega$. The grand partition function is obtained by evaluating the trace in the basis of number eigenstates: \begin{equation} \mathcal{Z} = e^{-\beta \hbar \omega/2} \sum_{n=0}^{\infty} \delta_n e^{-\beta(\hbar \omega-\mu)n}, \label{partfun} \end{equation} where $\delta_n$ is a counting factor included to represent, in principle, both Bose-Einstein and Fermi-Dirac statistics in the same equation. For the first case, $\delta_n = 1$ $\forall n$; and for the second, $\delta_n = 1$ for $n=0$ or 1, and $\delta_n=0$ for $n\ge 2$. The definition \eqref{defZ} remains unchanged for these two canonical cases when the counting operator $\hat{\delta}$, introduced in Ref.\ \cite{hoyuelos}, is included: \begin{equation} \mathcal{Z} = \text{tr}\,[ \hat{\delta} e^{-\beta (\hat{H}- \mu \hat{n})}] \label{qpartfun} \end{equation} Operator $\hat{\delta}$ commutes with $\hat{n}$ and has eigenvalues equal to the counting factor mentioned before: $\langle n |\hat{\delta} |n\rangle = \delta_n$. Here I wish to explore the possibility of other kinds of particles that require eigenvalues $\delta_n$ different from 1 or 0 for their statistical description. 
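As a quick numerical sanity check (mine, not part of the original derivation), the mean occupation implied by Eq.\ \eqref{partfun}, $\bar{n} = \sum_n n\,\delta_n x^n / \sum_n \delta_n x^n$ with $x = e^{-\beta(\hbar\omega-\mu)}$, reproduces the Bose-Einstein and Fermi-Dirac distributions for the two canonical choices of the counting factor:

```python
# Check that the counting factors delta_n reproduce the standard
# distributions: nbar = sum_n n*delta_n*x**n / sum_n delta_n*x**n,
# with x = exp(-beta*(hbar*omega - mu)). Parameter values are arbitrary.
import math

def mean_occupation(delta, x, nmax=200):
    """Mean occupation for counting factors delta(n); sums truncated at nmax."""
    num = sum(n * delta(n) * x**n for n in range(nmax))
    den = sum(delta(n) * x**n for n in range(nmax))
    return num / den

x = math.exp(-1.3)  # x = exp(-beta*(hbar*omega - mu)), arbitrary value < 1

# Bose-Einstein: delta_n = 1 for all n  ->  nbar = x/(1-x) = 1/(exp(beta*(hw-mu)) - 1)
assert abs(mean_occupation(lambda n: 1.0, x) - x / (1 - x)) < 1e-12

# Fermi-Dirac: delta_n = 1 only for n = 0, 1  ->  nbar = x/(1+x) = 1/(exp(beta*(hw-mu)) + 1)
assert abs(mean_occupation(lambda n: 1.0 if n <= 1 else 0.0, x) - x / (1 + x)) < 1e-12
```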
For example, Eq.\ \eqref{qpartfun} can also represent Maxwell-Boltzmann statistics, also called Quantum Boltzmann statistics in a quantum mechanical context \cite{isakov}; in this case we have $\delta_n = 1/n!$, which is equivalent to considering a non-normalized set of number states $|\tilde{n}\rangle = |n\rangle/\sqrt{n!}$ in the definition \eqref{defZ}. The purpose of $\delta_n$, therefore, is not only to determine which states have to be considered in the evaluation of the partition function, but also to specify the statistical weight of number states. I am interested in the statistics of non-interacting particles derived from the condition of free diffusion in energy space \cite{hoyusist}. They correspond to Bose-Einstein, Fermi-Dirac, classical particles, and ewkons. There is no evidence of particles with classical or ewkon statistics at a quantum level. Nevertheless, there is no fundamental principle that forbids the possibility of particles with statistical weights of number states different from 1 or 0, and it may be useful to develop a quantum description of such particles. One reason, as stated in the introduction, is that the thermodynamic properties of ewkons share features with those of dark energy. Ewkons have the Quantum Boltzmann distribution shifted by an integer quantity $\sigma$. The ground state is not the vacuum; each energy level has at least $\sigma$ particles. This situation is represented by the following counting factor \begin{equation} \delta_n = \left\{ \begin{array}{cl} 0 & \text{if } n<\sigma \\ 1/(n-\sigma)! & \text{if } n\ge \sigma \end{array} \right.. 
\end{equation} Then, the grand partition function for ewkons in a harmonic oscillator is \begin{align} \mathcal{Z}_\text{ewk} &= e^{-\beta \hbar \omega/2}\sum_{n=\sigma}^\infty \frac{1}{(n-\sigma)!} e^{-\beta(\hbar \omega-\mu)n} \nonumber \\ &= e^{-\beta \hbar \omega/2}\sum_{m=0}^\infty \frac{1}{m!} e^{-\beta(\hbar \omega-\mu)(m+\sigma)} \nonumber \\ &= e^{-\beta \hbar \omega/2} e^{-\beta(\hbar \omega-\mu)\sigma} \exp\left( e^{-\beta(\hbar \omega-\mu)} \right). \end{align} It is easy to check that, in this case, the mean occupation number is \begin{equation} \bar{n} = \frac{1}{\beta} \frac{\partial \ln \mathcal{Z}_\text{ewk}}{\partial \mu} = \sigma + e^{-\beta(\hbar \omega-\mu)}. \end{equation} \section{Harmonic oscillator} \label{0+1} Before evaluating the partition function in the base of eigenstates of a scalar field operator for a system without interactions, it is useful to do the same in the base of eigenstates of the position operator for the harmonic oscillator. The purpose of this section is to calculate \begin{equation} \mathcal{Z} = \int dx\; \langle x| \hat{\delta}\, e^{-\beta \hat{H}} |x\rangle \label{zxx} \end{equation} with $\hat{H} = \frac{1}{2m}\hat{p}^2 + \frac{m\omega^2}{2} \hat{x}^2$; $|x\rangle$ are eigenstates of the position operator $\hat{x}$, $\hat{p}$ is the momentum operator and $m$ is the mass. As usual in quantum field theory, we first consider the case $\mu=0$; the chemical potential can be included later considering both particles and antiparticles; see, e.g., \cite[p. 25]{laine}. It is possible to calculate the partition function for different statistics in a unified manner, without a priori specifying the eigenvalues of $\hat{\delta}$. 
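The closed form $\bar{n} = \sigma + e^{-\beta(\hbar\omega-\mu)}$ can be verified directly from the ewkon counting factors; the following short numerical check (mine, with arbitrary parameter values) truncates the sums:

```python
# Check of the ewkon mean occupation: with delta_n = 1/(n - sigma)! for
# n >= sigma (and 0 otherwise), the mean occupation is nbar = sigma + x,
# where x = exp(-beta*(hbar*omega - mu)).
import math

def mean_occupation_ewkon(sigma, x, nmax=150):
    """Mean occupation with ewkon counting factors; sums truncated at nmax."""
    num = sum(n * x**n / math.factorial(n - sigma) for n in range(sigma, nmax))
    den = sum(x**n / math.factorial(n - sigma) for n in range(sigma, nmax))
    return num / den

sigma, x = 3, 0.7
nbar = mean_occupation_ewkon(sigma, x)
assert abs(nbar - (sigma + x)) < 1e-12   # nbar = sigma + x, as in the text
```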
First, we write the counting operator in terms of the Hamiltonian: \begin{align} \hat{\delta} &= \sum_{n=0}^{\infty} \delta_n |n\rangle \langle n| \nonumber \\ &= \sum_{n=0}^{\infty} \delta_n \int_{0}^{2\pi} \frac{d\varphi}{2\pi} e^{i(\hat{n}-n)\varphi} \nonumber \\ &= \sum_{n=0}^{\infty} \delta_n \int_{0}^{2\pi} \frac{d\varphi}{2\pi} e^{-i(n+1/2)\varphi} e^{i\hat{H}\varphi/\hbar\omega}, \label{deltaphi} \end{align} where the integral representation of the Kronecker delta was used for $|n\rangle \langle n|$, and $\hat{n}= \hat{H}/\hbar\omega-1/2$. Replacing \eqref{deltaphi} in \eqref{zxx}, we have \begin{equation} \mathcal{Z} = \sum_{n=0}^{\infty} \delta_n \int_{0}^{2\pi} \frac{d\varphi}{2\pi} e^{-i(n+1/2)\varphi} \underbrace{\int dx\; \langle x| e^{-(\beta-i\varphi/\hbar\omega) \hat{H}} |x\rangle}_{\mathcal{Z}'}. \label{zxx2} \end{equation} Let us focus on the integral in $x$: \begin{equation} \mathcal{Z}' = \int dx\; \langle x| e^{-(\beta-i\varphi/\hbar\omega) \hat{H}} |x\rangle. \end{equation} It can be solved using a standard procedure in which the exponential is split into a large number of factors and between each pair another factor $\int dx_i\; |x_i\rangle \langle x_i|$ or $\int dp_i\; |p_i\rangle \langle p_i|$ is inserted, transforming the expression into a path integral. It is well known that, when the term with $\varphi$ in the exponential is absent, this procedure yields the boson's partition function (see, e.g., \cite[sec.\ 1.1]{laine}): \begin{equation} \int dx\; \langle x| e^{-\beta \hat{H}} |x\rangle = \frac{e^{-\beta\hbar\omega/2}}{1-e^{-\beta\hbar\omega}}. \label{xHx} \end{equation} If we repeat the procedure with $\mathcal{Z}'$, the final result is equivalent to making the replacement $\beta \rightarrow \beta-i\varphi/\hbar\omega$ in the previous expression: \begin{equation} \mathcal{Z}' = \frac{e^{-(\beta\hbar\omega-i\varphi)/2}}{1-e^{-\beta\hbar\omega+i\varphi}}. 
\end{equation} With this result we go back to Eq.\ \eqref{zxx2} and obtain \begin{equation} \mathcal{Z} = e^{-\beta\hbar\omega/2}\sum_{n=0}^{\infty} \delta_n \int_{0}^{2\pi} \frac{d\varphi}{2\pi} \frac{e^{-i n \varphi} }{1-e^{-\beta\hbar\omega+i\varphi}} \end{equation} To solve the integral in $\varphi$, we can make the change of variable $z=e^{i\varphi}$ and integrate in the unit circle in the complex plane; the resulting integral can be calculated using the residue theorem. The result \begin{equation} \mathcal{Z} = \sum_{n=0}^{\infty} \delta_n e^{-\beta (n+1/2)\hbar\omega } \end{equation} coincides, as expected, with that obtained through the trace with number eigenstates \eqref{partfun} (with $\mu=0$). \section{Free scalar field} \label{d+1} We have now the necessary elements to analyze the possibility of generalized statistics of a free scalar field $\phi$ (eigenvalue of the field operator $\hat{\phi}$). I briefly summarize some basic concepts. In this section I consider units such that $\hbar=1$ and the speed of light is $c=1$. The free Hamiltonian density is \begin{equation} \mathcal{H} = \frac{1}{2} \pi^2 + \frac{1}{2} (\nabla \phi)^2 + \frac{1}{2} m^2 \phi^2 \end{equation} where $\pi$ is the momentum conjugate of the field, with the canonical commutation relation $[\hat{\phi}(\mathbf{x}),\hat{\pi}(\mathbf{x}')]=i \delta^3(\mathbf{x}-\mathbf{x}')$. It is convenient to use an expansion in Fourier modes: \begin{equation} \phi(\mathbf{x}) = \frac{1}{V} \sum_{\mathbf{k}} \phi_\mathbf{k}\, e^{-i\mathbf{k}\cdot\mathbf{x}}, \end{equation} where $V$ is the system's volume, and a similar expression for $\pi(\mathbf{x})$. Since $\phi(\mathbf{x}) \in \mathbb{R}$, $\phi_\mathbf{k}^* = \phi_\mathbf{-k}$, and the same for $\pi_\mathbf{k}$. 
Then, the Hamiltonian is \begin{align} H &= \int d\mathbf{x} \; \mathcal{H} \nonumber \\ &= \frac{1}{V} \sum_{\mathbf{k}} \frac{1}{2}\left( |\pi_\mathbf{k}|^2 + (m^2 + k^2) |\phi_\mathbf{k}|^2 \right) \end{align} The system consists in a collection of harmonic oscillators. The Hamiltonian operator can be written as $\hat{H} = \sum_{\mathbf{k}} \hat{H}_\mathbf{k}$ with \begin{equation} \hat{H}_\mathbf{k} = (\hat{n}_\mathbf{k} + 1/2) E_\mathbf{k} \end{equation} where $E_\mathbf{k} = \sqrt{m^2 + k^2}$, and operator $\hat{n}_\mathbf{k}$ represents the number of quanta in mode $\mathbf{k}$; see the appendix for additional details. For simplicity, I am considering a real scalar field in which only particles have to be taken into account; for a complex scalar field, operators for the number of particles and antiparticles have to be considered. We define a counting operator $\hat{\delta}_\mathbf{k}$ that represents the statistical weight of a number state $|n_\mathbf{k}\rangle$ in mode $\mathbf{k}$, as we did in the previous section for the harmonic oscillator: \begin{align} \hat{\delta}_\mathbf{k} &= \sum_{n_\mathbf{k}=0}^{\infty} \delta_{n_\mathbf{k}} |n_\mathbf{k}\rangle \langle n_\mathbf{k}| \nonumber \\ &= \sum_{n_\mathbf{k}=0}^{\infty} \delta_{n_\mathbf{k}} \int_{0}^{2\pi} \frac{d\varphi_\mathbf{k}}{2\pi} e^{-i(n_\mathbf{k}+1/2)\varphi_\mathbf{k}} e^{i\hat{H}_\mathbf{k}\varphi_\mathbf{k}/E_\mathbf{k}}, \label{deltaphik} \end{align} The goal is to evaluate the partition function in the base of the field operator eigenstates of a scalar field: \begin{equation} \mathcal{Z}_\text{sf} = \int d\phi \; \langle \phi | \hat{\delta}\, e^{-\beta \hat{H}} |\phi\rangle \end{equation} where $\hat{\delta} = \prod_{\mathbf{k}} \hat{\delta}_\mathbf{k}$ represents the statistical weight of the system's number state of $N$ modes that can be written in product form as $|n_{\mathbf{k}_1}\rangle \cdots |n_{\mathbf{k}_N}\rangle$, such that $\hat{\delta} |n_{\mathbf{k}_1}\rangle \cdots 
|n_{\mathbf{k}_N}\rangle = (\delta_{n_{\mathbf{k}_1}}\cdots \delta_{n_{\mathbf{k}_N}}) |n_{\mathbf{k}_1}\rangle \cdots |n_{\mathbf{k}_N}\rangle$. The field operator eigenstate is also written in product form as $|\phi\rangle = \prod_{\mathbf{k}} |\phi_\mathbf{k}\rangle$, and $d\phi = \prod_{\mathbf{k}} d\phi_\mathbf{k}$. Since different modes are independent, $\hat{\delta}_\mathbf{k}$ acts only on $|\phi_\mathbf{k}\rangle$, and the same for $\hat{H}_\mathbf{k}$. Then, the partition function is \begin{align} \mathcal{Z}_\text{sf} &= \prod_{\mathbf{k}} \int d\phi_\mathbf{k}\; \langle \phi_\mathbf{k} | \hat{\delta}_\mathbf{k}\, e^{-\beta \hat{H}_\mathbf{k}} |\phi_\mathbf{k}\rangle \nonumber \\ &= \prod_{\mathbf{k}}\left[ \sum_{n_\mathbf{k}=0}^{\infty} \delta_{n_\mathbf{k}} \int_{0}^{2\pi} \frac{d\varphi_\mathbf{k}}{2\pi} e^{-i(n_\mathbf{k}+1/2)\varphi_\mathbf{k}} \underbrace{\int d\phi_\mathbf{k}\; \langle \phi_\mathbf{k} | e^{-(\beta -i\varphi_\mathbf{k}/E_\mathbf{k}) \hat{H}_\mathbf{k}} |\phi_\mathbf{k}\rangle}_{\mathcal{Z}'_\text{sf}} \right], \end{align} where, in the last line, Eq.\ \eqref{deltaphik} was used for $\hat{\delta}_\mathbf{k}$. Now, the procedure goes on as in the case of the harmonic oscillator. The integral in $\phi_\mathbf{k}$, that we call $\mathcal{Z}'_\text{sf}$, can be evaluated using a path integral method. The result is equivalent to make the replacements $\beta \rightarrow \beta - i \varphi_\mathbf{k}/E_\mathbf{k}$ and $\hbar \omega \rightarrow E_\mathbf{k}$ in Eq.\ \eqref{xHx}: \begin{equation} \mathcal{Z}'_\text{sf} = \frac{e^{-(\beta E_\mathbf{k} - i\varphi_\mathbf{k})/2}}{1 - e^{-\beta E_\mathbf{k}} e^{i\varphi_\mathbf{k}}}. 
\end{equation} Then, the partition function is \begin{align} \mathcal{Z}_\text{sf} &= \prod_{\mathbf{k}} \left[ e^{-\beta E_\mathbf{k}/2} \sum_{n_\mathbf{k}=0}^{\infty} \delta_{n_\mathbf{k}} \int_{0}^{2\pi} \frac{d\varphi_\mathbf{k}}{2\pi} \frac{e^{-i n_\mathbf{k} \varphi_\mathbf{k}}}{1 - e^{-\beta E_\mathbf{k}} e^{i\varphi_\mathbf{k}}} \right] \nonumber \\ &= \prod_{\mathbf{k}} \left[ \sum_{n_\mathbf{k}=0}^{\infty} \delta_{n_\mathbf{k}} e^{-\beta (n_\mathbf{k}+1/2) E_\mathbf{k}} \right] \nonumber \end{align} where the integral in $\varphi_\mathbf{k}$ is solved using a change of variable and the residue theorem, as explained in the previous section. The final result is the same as the one that can be obtained using the base of number eigenstates for the evaluation of the trace. This calculation is intended to show the possibility of using the counting operator in the partition function in a context of quantum field theory. \section{Ewkons and dark energy} We can consider a quantum field with statistical weights \begin{equation} \delta_{n_\mathbf{k}} = \left\{ \begin{array}{cl} 0 & \text{if } n_\mathbf{k}<\sigma \\ 1/(n_\mathbf{k}-\sigma)! & \text{if } n_\mathbf{k}\ge \sigma \end{array} \right., \end{equation} associated to the number state with mode $\mathbf{k}$, such that ewkon statistics is obtained: $\mathcal{Z}_\text{ewk} = \prod_{\mathbf{k}} \mathcal{Z}_\mathbf{k}$ with \begin{equation} \mathcal{Z}_\mathbf{k} = e^{-\beta E_\mathbf{k} \sigma}\,\exp(e^{-\beta E_\mathbf{k}}). \end{equation} The vacuum energy factor, $e^{-\beta E_\mathbf{k}/2}$, was not included. It leads to inconsistencies in the evaluation, at a cosmological level, of, for example, the average energy of fermions or bosons, and is accordingly removed; see, e.g., \cite[p.\ 19]{kapusta}. Here, the same prescription is used for ewkons. 
Nevertheless, in the case of ewkons the removal of the vacuum energy factor does not qualitatively change thermodynamic properties; its inclusion is equivalent to making the replacement $\sigma \rightarrow \sigma +1/2$ in the following results. In order to evaluate the average energy density and pressure we need \begin{align} \frac{1}{V} \ln \mathcal{Z}_\text{ewk} &= \frac{1}{V} \sum_{\mathbf{k}} \ln \mathcal{Z}_\mathbf{k} \nonumber \\ &= \frac{1}{(2\pi)^3} \int d\mathbf{k}\; \ln \mathcal{Z}_\mathbf{k} \nonumber \\ &= \frac{1}{2\pi^2} \int_0^{k_\text{max}} dk \; k^2\, (e^{-\beta E_\mathbf{k}} - \beta E_\mathbf{k} \sigma) \end{align} where in the large volume limit the sum in $\mathbf{k}$ goes over to an integral, and an upper limit $k_\text{max}$ was included for the absolute value of $\mathbf{k}$ to avoid divergences; it is equivalent to an ultraviolet cutoff. Let us consider the simplest situation: a massless field, $m=0$, with $\mu=0$. The energy density is \begin{equation} \rho = - \frac{1}{V} \frac{\partial \ln \mathcal{Z}_\text{ewk}}{\partial \beta} \end{equation} and the pressure is \begin{equation} P = \frac{1}{\beta V} \ln \mathcal{Z}_\text{ewk}. \end{equation} The upper limit for the energy is $E_\text{max} = k_\text{max}$. Assuming that $\beta E_\text{max} \gg 1$, we get \begin{align} \rho &\simeq \frac{E_\text{max}^4 \sigma}{8\pi^2}\left(1 + \frac{24}{\beta^4 E_\text{max}^4\sigma} \right) \label{rho} \\ P &\simeq -\frac{E_\text{max}^4 \sigma}{8\pi^2}\left(1 - \frac{8}{\beta^4 E_\text{max}^4\sigma}\right). \end{align} The parameter $w_\text{ewk}$, which represents the cosmological equation of state for ewkons, is \begin{equation} w_\text{ewk} = \frac{P}{\rho} \simeq -1 + \frac{32}{\beta^4 E_\text{max}^4\sigma}. \label{wewk} \end{equation} The accelerated expansion of the universe implies a negative value of $w$, mainly due to the presence of dark energy. According to Table 3 in Ref.\ \cite{planck}, it is smaller than $-0.94$. 
Assuming that dark energy has the statistics of ewkons, and knowing that its energy density is $4\times 10^9$ eV/m$^3$ \cite{beringer}, using \eqref{rho} we can obtain $E_\text{max} \sigma^{1/4} \simeq 0.028$ eV, a rather small value compared to the mass of elementary particles, but two orders of magnitude larger than the present value of $1/\beta$ ($1/\beta \simeq 2.4\times 10^{-4}$ eV); and, using \eqref{wewk}, we have $w_\text{ewk} \simeq - 0.9999998$. \section{Conclusions} \label{conclusions} The inclusion of the counting operator $\hat{\delta}$ in the definition of the partition function makes it possible to go beyond Fermi-Dirac or Bose-Einstein statistics. It is diagonal in the basis of number states. An eigenvalue equal to 0 indicates that the corresponding number state is not allowed, and a value different from 0 represents its statistical weight. Using this modified definition of the partition function, particles without spin, described through a free scalar field $\phi$, may obey statistics other than Bose-Einstein's. Assuming that non-interacting particles have free diffusion in energy space, it has been shown that the possible statistics are those corresponding to bosons, fermions, classical particles and ewkons \cite{hoyusist}. The statistics of ewkons turns out to be particularly interesting for the description of dark energy, since a gas of ewkons has negative pressure, and a negative value of the parameter $w$, which is necessary for an understanding of the accelerated expansion of the universe. This possibility was analyzed in the context of quantum field theory. There is no contradiction with the spin-statistics theorem. According to this theorem, special relativity restricts the possible creation and annihilation operators to those that have commutation or anti-commutation relations; see, e.g., \cite[ch.\ 4]{srednicki}. 
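The quoted value of $w_\text{ewk}$ follows directly from Eq.\ \eqref{wewk}, since $\beta^4 E_\text{max}^4 \sigma = \left(E_\text{max}\sigma^{1/4}\,\beta\right)^4$; a one-line numerical check (illustrative, using the values stated above):

```python
# Plugging the quoted numbers into w_ewk = -1 + 32/(beta^4 E_max^4 sigma),
# using beta^4 E_max^4 sigma = (E_max sigma^(1/4) / (1/beta))^4.
inv_beta = 2.4e-4    # present value of 1/beta, in eV
emax_s14 = 0.028     # E_max * sigma**(1/4), in eV
w_ewk = -1.0 + 32.0 * (inv_beta / emax_s14) ** 4
assert abs(w_ewk - (-0.9999998)) < 1e-7   # matches the value quoted in the text
```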
The usual definition \eqref{defZ} of the partition function implies that we only have Bose-Einstein or Fermi-Dirac statistics for commutation or anti-commutation relations, respectively. Nevertheless, the definition of the partition function with the counting operator \eqref{qpartfun} leads to other possible statistics for creation and annihilation operators, associated with a scalar field, that satisfy the commutation relation (see the appendix). The choice of the definition depends on its usefulness for describing nature. Here it is argued that generalized statistics of particles without spin may be useful to describe dark energy. \section{Appendix} The number of particles operator for mode $\mathbf{k}$ is $\hat{n}_\mathbf{k} = \hat{a}^\dagger_\mathbf{k} \hat{a}_\mathbf{k}$, with the annihilation operator given by \begin{equation} \hat{a}_\mathbf{k} = \frac{1}{\sqrt{2E_\mathbf{k} V}} (E_\mathbf{k} \hat{\phi}_\mathbf{k} + i \hat{\pi}_\mathbf{k}). \end{equation} From the canonical commutation relation $[\hat{\phi}(\mathbf{x}),\hat{\pi}(\mathbf{x}')]=i \delta^3(\mathbf{x}-\mathbf{x}')$ it can be shown that \begin{equation} [\hat{\phi}^\dagger_\mathbf{k}, \hat{\pi}_{\mathbf{k}'}] = i V \delta_{\mathbf{k},\mathbf{k}'}, \end{equation} where $\delta_{\mathbf{k},\mathbf{k}'}$ is the Kronecker delta and $V$ is the system's volume. This commutation relation implies that \begin{equation} [\hat{a}_{\mathbf{k}},\hat{a}^\dagger_{\mathbf{k}'}] = \delta_{\mathbf{k},\mathbf{k}'}. \end{equation} Note that this commutation relation holds not only for Bose-Einstein statistics, but for any statistics obtained with a counting operator $\hat{\delta}_\mathbf{k}$. 
Using these relations we can obtain \begin{align} \hat{H}_\mathbf{k} &= (\hat{a}^\dagger_\mathbf{k} \hat{a}_\mathbf{k} +1/2)E_\mathbf{k} \nonumber \\ &= \frac{1}{2V}\left( \hat{\pi}_\mathbf{k}\hat{\pi}_\mathbf{k}^\dagger + E_\mathbf{k}^2\, \hat{\phi}_\mathbf{k}\hat{\phi}_\mathbf{k}^\dagger \right) + \frac{iE_\mathbf{k}}{2V}(\hat{\phi}_\mathbf{k}^\dagger \hat{\pi}_\mathbf{k} - \hat{\phi}_{-\mathbf{k}}^\dagger \hat{\pi}_{-\mathbf{k}}). \end{align} The term $\hat{\phi}_\mathbf{k}^\dagger \hat{\pi}_\mathbf{k} - \hat{\phi}_{-\mathbf{k}}^\dagger \hat{\pi}_{-\mathbf{k}}$ can be ignored in the calculations since it vanishes when the sum on $\mathbf{k}$ is performed. \section*{Acknowledgments} I acknowledge useful discussions with Pablo Sisterna and Ariel Megevand. This work was partially supported by Consejo Nacional de Investigaciones Cientí\-ficas y Técnicas (CONICET, Argentina, PIP 0021 2015-2017).
{ "redpajama_set_name": "RedPajamaArXiv" }
4,047
Trump moves to shut down WeChat in the US. But TikTok will live until after the election.

Bill Gates is spending $150 million to try to make a coronavirus vaccine as cheap as $3

Two top CEOs swear by this same technology podcast – The Australian Financial Review

As the coronavirus vaccines have rolled out across the US, the process has been confusing and disastrous. States, left by the federal government to fend for themselves, have struggled...

Virtual CES Was As Surreal As We All Suspected It Would Be
But the heart and soul of CES isn't the smooth-talking prognosticators or the journalists who follow them. It's the tech makers who make the show special, and an all-virtual...

Top Ten Technology Books Of 2020 – Forbes
There were many great technology books published in 2020, but when polling technology and digital executives for some of...

Signal has so many new users, it's stopped working
Following the Capitol riots, a privacy-minded messaging service is now the most popular app in the US. The encrypted...

Congress will finally grill Jeff Bezos. It's about time.

Microsoft Blazor gains Infragistic UI toolkit support

If Covid-19 Did Start With a Lab Leak, Would We Ever...
"We find ourselves ten months into one of the most catastrophic global health events of our lifetime," wrote Stanford University immunologist and bio-threat expert...

By clamping down on DC rioters, Airbnb is finally acting like...
Lawmakers have asked people not to travel to Washington, DC, for the inauguration. Airbnb is helping. ...
SOWW JOGGERS YS / BLACK - $34.99 USD YS / WHITE - $34.99 USD YM / BLACK - $34.99 USD YM / WHITE - $34.99 USD YL / BLACK - $34.99 USD YL / WHITE - $34.99 USD YXL / BLACK - $34.99 USD YXL / WHITE - $34.99 USD S / BLACK - $34.99 USD S / WHITE - $34.99 USD M / BLACK - $34.99 USD M / WHITE - $34.99 USD L / BLACK - $34.99 USD L / WHITE - $34.99 USD XL / BLACK - $34.99 USD XL / WHITE - $34.99 USD 2X / BLACK - $34.99 USD 2X / WHITE - $34.99 USD Soldier Sports donates a portion of every sale to Special Operations Wounded Warriors (SOWW). It is the belief of SOWW that we truly can make a difference in the life of a service member who has chosen to put their safety at risk while defending our freedoms and that has suffered personal injury in that endeavor. SOWW feels that there is not a more deserving group of individuals than our Special Operation Forces members that frequently stand in harm's way for the protection of our freedoms, often with little or no recognition.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,030
Ransomware Attacks Grow More Targeted & Dangerous Ransomware attacks used to have a broad scale. Think of the massive WannaCry attack that infected 200,000 machines in 150 countries over a handful of days in 2017. Today, ransomware is much more targeted – and losses from the attacks have risen sharply, according to an FBI alert published last week. The alert emphasizes that, since early last year, ransomware has become more targeted and more damaging to victims – even as the volume of attacks has not changed much. More about this latest rash of attacks below. Industries Hit by Ransomware Attacks on state and local government have attracted attention, but the criminals are also showing a preference for other sectors, according to the alert. Ransomware hit more than 20 local governments in Texas in a well-coordinated attack in August. Attacks in this sector are up from 55 in 2018 to more than 80 so far this year, according to Recorded Future. Three hospitals in Alabama turned away patients earlier this month after ransomware seized their systems. Leaders of the response paid an undisclosed ransom. An August ransomware attack at Wood Ranch Medical, a California-based provider, locked patient medical records and forced the practice to permanently close. A ransomware attack at a massive aluminum producer this year generated staggering losses, estimated at $58 to $70 million. This was recently eclipsed by a September attack on a major manufacturer of hearing aids with estimated losses of $90 to $95 million. Arizona Beverages, one of the largest beverage suppliers in the U.S., was also hit this year, with more than 200 servers and computers infected. Staff had to rebuild the network from scratch at a massive cost. Falcon Transport, an Ohio-based trucking company, said its permanent closure in April partly caused by a ransomware attack earlier this the year. Duie Pyle, a large Pennsylvania-based trucking company, was also hit by ransomware in June. 
Tactics Used in Attacks The FBI is receiving reports of the following tactics being used in these attacks. Email Phishing Attackers previously spammed the masses with email, hoping to land a few fish. Today's attacks are more targeted, using messages more closely tailored to the victim's context – such as their job or industry. Remote Desktop Protocol Attackers use brute-force and purchased credentials to gain remote access to the victim's system. Once breached, installing ransomware on the system is trivial. Software Vulnerabilities The FBI alert cites a recent attack exploiting flaws in the remote management tools used by managed service providers. The clients of at least three MSPs had ransomware installed on their systems once attackers controlled the RMM tools. Reports surfaced this week of a ransomware strain that exploits an iTunes vulnerability (Windows version). The flaw allows attackers to evade detection by antivirus software, according to PC Magazine. Ransomware Protection & Prevention Recommended practices from the FBI and elsewhere to prevent a ransomware disaster: Keep backups "The most important defense for any organization against ransomware is a robust system of backups," according to the FBI alert. That said, backups can help only if they are configured correctly. Test them periodically, and always keep a set offline. Plan for disaster No company can guarantee they will remain clean of ransomware – so plan for disaster before it strikes. Make contingency and remediation plans. Test the plans periodically. Train users Email phishing is the most common means of malware infection. This is partly due to the reliable incompetence of users. Raise your uses' competence. Train them on safe email practices. Automate patching Always patch operating systems, firmware, and software – especially antivirus software. Ensure end-points are patched as soon as vulnerabilities are exposed. Automate patching when possible. 
Follow Least-Privilege Follow the principle of least-privilege to limit access to privileged accounts. Users should be granted access to the only systems and resources they need to perform their duties. Administrator accounts should only be used to perform certain tasks. Standard user accounts should be used at all other times. Protect RDP Close unused RDP ports and use two-factor authentication where possible. Here are more RDP security tips. Block Bad Websites & Email Filter web traffic and email to prevent users from accessing or receiving malicious content. Restrict App Directories Use software restriction policies or other controls to prevent programs from executing in directories favored by ransomware, such as the AppData/LocalAppData folder. Restrict Allowed Apps Configure an application whitelisting solution to allow only approved software to run on workstations and servers. If ransomware reaches a machine, this can prevent it from running. Separate Data Categorize the data in your organization by value and use physical and logical separation to keep them apart. For example, customer data should not reside on the same server or network segment as a company's email environment. Source: Calyptix Security https://www.calyptix.com/top-threats/ransomware-attacks-grow-more-targeted-dangerous/ Ransomware, The Modern Extortion Scheme Used By... Cybercriminals are waiting to have a zero-day exploit...
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,828
Q: How to solve "no eligible Bundle IDs for iOS apps"? I just went to submit my first app that uses iAds, and got the following error: You have no eligible Bundle IDs for iOS apps. Register one here. My App ID is green in App Push Notification, In-App Purchase, and Game Center. What does this mean? How can I fix it? A: Have you accepted the appropriate contract, set up bank account info, etc. in iTunesConnect?
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,947
Q: PhoneGap Xcode Missing Header files on Build 'Cordova/CDVViewController.h' file not found I am using PhoneGap 2.2.0 and XCode 4.5.2. I can test my programs in the simulators, and I can put them on my devices to test them. But I simply cannot build for distribution. It always fails with the following error: my-projevt-path/Classes/AppDelegate.h:30:9: 'Cordova/CDVViewController.h' file not found I've seen this problem around the web and still can't make it work, given whatever solutions have been posted. I've changed things in Build Settings, I've reinstalled PhoneGap, I've run new lines in terminal, I've done my app over starting a new PhoneGap project from scratch, I've checked preferences in the build location in Xcode... I can't figure this out AT ALL. Please, can anyone help? I've been working on this for days. Thanks! A: Problems in Xcode If you have compilation problems related to missing headers, the build products should build into the same build directory. You may need to set the preference "Xcode Preferences -> Locations -> Derived Data -> Advanced…" to "Unique". This is the default setting for Xcode on a fresh new install, if you upgraded from older versions of Xcode, you might have a legacy preference in there that you need to update. Found the answer!!! A: Yes I am getting the same problem yeah and some help could be great..... I followed all the instructions even with the ./update_cordova_subproject path as well it does not work. Also I solved the locking problem but I could not find the solution to this problem A: The answer, in my case, had seemingly nothing to do with the error message that was being sent. Missing header files? That didn't seem to be the issue. Or, at least, not the direct cause of the issue. This was an issue with my provisioning/certificates being somehow not right. I had re-created them several times, but it continued to be an issue. 
I sent the job to another developer, who opened it on his machine, revoked my certificates and created new ones, and built it without changing anything else. He forwarded me the certificate, the provisioning, and an archive of the job. I opened the archive in xCode and validated it and uploaded it. And it was fine. If you have got this problem, be certain your certificate/provisioning is set up right. I thought mine was, but apparently it wasn't? The "Apple Process" is definitely weird, and when certificates / profiles gets messed up, problems arise. A: I was having the same problem and just solved it! First of the problem may very well be because of your distribution provisioning files... but when you look at the Project Navigator in xCode at the top level you have your Project and inside you have the CordovaLib.xcodeproj click on this file and you will see the iOS Deployment target. Make sure the proper IOS version is selected there. This is 1/2. 2) Then you need to duplicate the Release configuration and rename it Distribution. While the CordovaLib.xcodeproj is selected make a build and then build the actual project. This worked smoothly for me. A: Add this line to your Build Settings -> Header Search Paths: $(OBJROOT)/UninstalledProducts/$(PLATFORM_NAME)/include Don't replace the existing line that looks similar, that is still needed to be backwards compatible with Xcode 7 and Xcode 6.4.
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,523
\section{Introduction}\label{sec:intro} When a massive object (MO, an object with mass much larger than the typical mass of individual stars) orbits within its host galaxy, its trajectory is affected by the so-called dynamical friction \citep[DF, ][]{Chandrasekhar1942, Ostriker1999}. DF arises as a response of the environment to the passage of the perturbing mass, and typically results in the gradual inspiral of the MO. In spite of the crude assumptions under which it was first derived \citep[][]{Chandrasekhar1942,Chandrasekhar1943}, DF theory seems to properly describe the decay of many MOs, such as galaxy satellites, stellar clusters, and massive black holes (MBHs), within their host systems \citep[e.g.][]{Inoue2009, Pfister2017}. However, most studies supporting the success of DF theory model the host environment with very simplistic and idealized assumptions: the host galaxies are typically modelled as spherical and isotropic or axisymmetric systems, with smooth galactic potentials \citep[e.g.][]{Just2011, Arca-Sedda2014, Petts2015, Petts2016}. This is not surprising, as these are the systems that are typically addressed in standard (non-cosmological) astrophysical simulations \citep[e.g.][]{White1978, Bortolas2016, Gualandris2017, Capelo_Dotti_2017, Bortolas2018, Bortolas2018tr, Tamfal2018}. Perhaps owing to this, DF alone has often been referred to as the main phenomenon capable of driving the decay of an orbiting MO \citep[e.g.][]{Tremaine1975, Begelman1980}. Only recently have a number of studies started exploring the evolution of MBHs within much less idealized galactic environments, featuring, e.g. the cosmological evolution of the galaxies, the possible formation of clumps, spirals, and bars, the effect of star formation and hydrodynamics, and so forth \citep[e.g.][]{Fiacconi2013, VanWassenhove_et_al_2014, Lupi2015, Roskar2015, Tamburello2017,Souza-Lima2017, Tremmel2018, Tremmel2018b, Pfister2019, Bellovary_et_al_2019, Bortolas2020, Souza-Lima2020}.
The MO evolution within these more realistic, composite galaxies appears to be much harder to predict in the DF framework, as the aforementioned non-symmetric, time-dependent perturbations in the potential result in a much more erratic orbital evolution. \citet{Bortolas2020} evolved a set of MBHs within a typical, irregular and turbulent galaxy at $z\gtrsim 6$, embedded in a cosmological environment. They found that, once a strong bar develops in the host galaxy, the MBHs' orbital evolution is critically affected by it: owing to the bar interaction, the decay time is $\sim10$ times shorter than what DF theory would predict for four out of five MBHs, while in one case the interaction kicks the MBH into the galaxy outskirts \citep[][]{Bortolas2020}. This study further highlights that the magnitude of the global galactic torques resulting from the non-symmetric galactic mass distribution is virtually always much stronger than the DF-induced torques, suggesting that assuming DF to be the main driver of the inspiral may be inadequate for realistic galaxies \citep[][]{Bortolas2020}. The aforementioned shortcomings of DF theory are particularly relevant in view of the forthcoming opening of a low-frequency ($<0.1$ Hz) gravitational wave window, where the nano-Hz regime is being probed by Pulsar Timing Arrays (PTAs; \citealt{2016MNRAS.458.3341D,2016MNRAS.455.1751R,2019MNRAS.490.4666P,2021ApJS..252....5A}), and the milli-Hz band will be explored by the Laser Interferometer Space Antenna (LISA; \citealt{Amaro-Seoane2017,Schodel2017LISA,Barack_et_al_2019}) in the 2030s. It is therefore important to constrain the time spanning from a galaxy merger to the gravitational-wave-induced coalescence of the host's MBHs, which will be observed by the aforementioned facilities; such time-scales obviously depend strongly on the physics of the large-scale galactic inspiral.
In this paper, we aim to address more systematically how galactic bars affect the decay time-scale of MOs. Conservatively, we explore the evolution of MOs in an idealized, Milky Way-like galaxy, in which the only deviation from axisymmetry is constituted by a rotating triaxial bar of $\approx5$ kpc extension. We integrate the MO orbit with the semi-analytical code presented in \citet{Bonetti2020, Bonetti2021}, whose novel treatment of DF guarantees remarkable agreement with $N$-body simulations of composite galaxies. We perform a large number of numerical experiments, comparing the decay time-scale in the barred galaxy to its value in an analogous, axisymmetric galaxy not featuring the bar. In Sec.~\ref{sec:methods}, we detail the methodology adopted for the orbit initialization and integration; Sec.~\ref{sec:theory} briefly reviews the theoretical aspects related to the orbital evolution within a uniformly rotating, non-axisymmetric potential; in Sec.~\ref{sec:results}, we present the results of our study, which are then discussed and summarized in Sec.~\ref{sec:concl}.
\section{Methods}\label{sec:methods} \subsection{Galaxy potential} \begin{table*} \centering \caption{Reference galaxy structural parameters} \label{tab:mwstruct} \begin{center} \begin{tabular}{ccccc} \hline Component & Model & Mass [$\ensuremath{\, \mathrm M_{\sun{}}}$] & scale length [kpc] & others \\ \hline Bulge$^\ast$ & \citet{Dehnen1993} & $M_{\rm B} = 5\times 10^9$ & $r_{\rm B} = 0.7$ & $\gamma = 1$ \\ Disc$^\ast$ & Exponential \citep{BinneyTremaine2008} & $M_{\rm D} = 3\times 10^{10}$ & $(r_{\rm D}, z_{\rm D}) = (3, 0.3)$ & -- \\ Halo & \citet*{Navarro1996} & $M_{\rm H} = 4.317\times 10^{11}$, $M_{\rm V} = 8\times 10^{11}$ & $r_{\rm H} = 16$, $r_{\rm V} = 245$ & $c_{\rm H} = 15.3$ \\ Bar & \textit{Softened Needle} \citep{Long1992} & $M_{\rm bar} = 1.8\times 10^{10}$ & $(a,b,c)_{\rm bar} = (5,2,0.3)$ & $\omega_{\rm bar} = 40$ km s$^{-1}$ kpc$^{-1}$ \\ \hline \end{tabular} \end{center} \justifying {\footnotesize $^\ast$ Note that the disc and bulge masses shown here refer to the case in which the bar is present, and have to be enhanced as discussed in the text for the integrations not featuring a bar.} \end{table*} We first introduce the reference parameters adopted for the study of the Milky Way-like galaxy. We model the galaxy by considering components of different shape and nature, specifically: a stellar spherical bulge, a stellar disc, and (in part of our runs) a stellar bar, all of them embedded in a dark matter halo. The properties of the different Galactic components adopted here are in agreement with recent literature on the topic, and in particular with \citet{Bovy2015}; the properties of the Galactic bar are taken from \citet{Portail2017}. Table~\ref{tab:mwstruct} reports the relevant values adopted for the galaxy initialization.
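For convenience in numerical experiments, the parameter values of Table~\ref{tab:mwstruct} can be gathered into a single structure; below is a minimal Python sketch (the dictionary layout and key names are ours, purely illustrative):

```python
# Structural parameters of the reference Milky Way-like model (masses in
# Msun, lengths in kpc, bar pattern speed in km/s/kpc); key names are
# illustrative and not tied to any library.
GALAXY = {
    "bulge": {"M": 5.0e9,    "r_s": 0.7,  "gamma": 1.0},   # Dehnen (1993)
    "disc":  {"M": 3.0e10,   "r_s": 3.0,  "z_s": 0.3},     # exponential
    "halo":  {"M": 4.317e11, "r_s": 16.0, "c": 15.3},      # NFW
    "bar":   {"M": 1.8e10,   "a": 5.0, "b": 2.0, "c": 0.3, "omega": 40.0},
}
```

Keeping the structural constants in one place makes it easy to switch between the barred and the axisymmetric configuration discussed below.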
Specifically, we represent the central bulge using a \citet{Dehnen1993} potential well, whose associated density profile reads \begin{equation} \rho_{\rm B}(r) = \dfrac{(3-\gamma)M_{\rm B}}{4\pi} \dfrac{r_{\rm B}}{r^\gamma (r+r_{\rm B})^{4-\gamma}}, \label{eq:bulge} \end{equation} where $M_{\rm B}$ is the bulge total mass, $r_{\rm B}$ is its characteristic radius, and $\gamma$ represents the inner density slope of the model; finally, $r$ is the distance from the centre. We choose $\gamma=1$, which corresponds to a \citet{Hernquist1990} profile. The disc is modelled with an exponential profile \citep{Spitzer1942,BinneyTremaine2008}: \begin{equation} \rho_{\rm D}(R,z) = \dfrac{M_{\rm D}}{4\pi r_{\rm D}^2 z_{\rm D}} {\rm e}^{-R/r_{\rm D}} {\rm sech}^2\left(\dfrac{z}{z_{\rm D}}\right), \label{eq:disc} \end{equation} where $R$ represents the cylindrical radius, $z$ is the coordinate perpendicular to the disc, $r_{\rm D}$ is the disc scale length, $z_{\rm D}$ is the disc scale height, and $M_{\rm D}$ is the total mass of the disc. Given that an analytical expression for the associated disc potential does not exist, the integrator obtains the accelerations induced by the disc potential numerically, as described in detail in \citet{Bonetti2021}. In order to speed up the computation of the disc acceleration, the $R$ and $z$ components of the acceleration are tabulated in an adaptive grid spanning several orders of magnitude in both $R$ and $z$, and the acceleration along the integration is obtained by interpolating the tabulated values for the $R$ and $z$ values needed at each timestep.
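Both density laws are straightforward to evaluate; here is a minimal Python sketch of the Dehnen bulge (with $\gamma=1$, i.e. the Hernquist case, as default) and of the exponential disc, with parameter defaults taken from Table~\ref{tab:mwstruct} (function names are ours):

```python
import numpy as np

def rho_bulge(r, M=5.0e9, r_s=0.7, gamma=1.0):
    # Dehnen (1993) density profile; gamma = 1 recovers Hernquist (1990).
    # r in kpc, M in Msun -> density in Msun / kpc^3.
    return (3.0 - gamma) * M * r_s / (4.0 * np.pi * r**gamma * (r + r_s)**(4.0 - gamma))

def rho_disc(R, z, M=3.0e10, r_s=3.0, z_s=0.3):
    # Exponential disc with sech^2 vertical structure.
    return M / (4.0 * np.pi * r_s**2 * z_s) * np.exp(-R / r_s) / np.cosh(z / z_s)**2
```

A quick sanity check is that integrating $4\pi r^2 \rho_{\rm B}$ over radius recovers the total bulge mass $M_{\rm B}$.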
The dark matter halo is described via a \citet{Navarro1996} potential, whose associated density profile is \begin{equation} \rho_{\rm H}(r) = \dfrac{M_{\rm H}}{4 \pi r_{\rm H}^3} \dfrac{r_{\rm H}}{r (1+r/r_{\rm H})^2}, \label{eq:halo} \end{equation} where $M_{\rm H}$ is the mass scale of the model and $r_{\rm H}$ its scale radius; the model virial mass can be expressed as $M_{\rm V}= M_{\rm H} [\ln(1+c_{\rm H})-c_{\rm H}/(1+c_{\rm H})]$, with the concentration parameter $c_{\rm H}$ defined as the ratio between the galaxy virial radius $r_{\rm V}$ and $r_{\rm H}$. The above three components (central bulge, disc, and halo) are always accounted for in our study; in many of our runs, we additionally account for the presence of a galactic bar. We model the bar as a \textit{softened needle} profile \citep{Long1992}, whose potential has the form \begin{align} \Phi_{\rm bar}(x, y, z) &= \dfrac{GM_{\rm bar}}{2a_{\rm bar}}\ln{\left(\dfrac{x-a_{\rm bar}+T_-}{x+a_{\rm bar}+T_+}\right)}\\ T_{\pm} &=\{(a_{\rm bar}\pm x)^2 + y^2 + [b_{\rm bar} + (c_{\rm bar}^2+z^2)^{1/2}]^2\}^{1/2}, \label{eq:bar_potential} \end{align} where $G$ is the gravitational constant, $M_{\rm bar}$ is the total mass of the bar, and $(a_{\rm bar}, b_{\rm bar},c_{\rm bar})$ are the scale lengths in the direction of the $(x,y,z)$ Cartesian coordinates. We assume the bar to initially lie along the $x$ direction. The bar rotates in the disc plane with a constant orbital frequency $\omega_{\rm bar}$. The parameters associated with the bar are also shown in Table~\ref{tab:mwstruct}. Note that the masses of the disc and bulge are adjusted depending on the presence (or absence) of the bar.
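The closed form of the softened-needle potential makes it easy to tabulate or test; a Python sketch of Eq.~(\ref{eq:bar_potential}), with $G$ in kpc km$^2$ s$^{-2}$ per solar mass and the bar instantaneously aligned with the $x$ axis, could read:

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def phi_bar(x, y, z, M=1.8e10, a=5.0, b=2.0, c=0.3):
    # Softened-needle bar potential (Long 1992) as given in the text;
    # the bar lies along x, lengths in kpc, output in (km/s)^2.
    s = b + np.sqrt(c**2 + z**2)
    t_plus = np.sqrt((a + x)**2 + y**2 + s**2)
    t_minus = np.sqrt((a - x)**2 + y**2 + s**2)
    return G * M / (2.0 * a) * np.log((x - a + t_minus) / (x + a + t_plus))
```

Far from the bar the potential approaches the monopole $-GM_{\rm bar}/r$, and it is symmetric under $x \rightarrow -x$; an inertial-frame evaluation at time $t$ simply rotates the $(x,y)$ coordinates by $-\omega_{\rm bar}t$ before the call.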
In order to disentangle the effect of the bar alone on the evolution of MOs, we run all our integrations in two analogous galaxy models, one featuring the galaxy bar described above, and another one which is purely axisymmetric, and in which the mass assigned to the bar is re-distributed between the bulge and disc components. We perform this latter task by redistributing the bar mass\footnote{Note that the prescription we propose for redistributing the bar mass into the disc and bulge is by no means general: we found it to work well for the galaxy considered here, but it may fail if different galaxy potentials are adopted.} as: $M_{\rm B} \rightarrow M_{\rm B}+0.1\times M_{\rm bar} (r_{\rm B}/a_{\rm bar})$, $M_{\rm D} \rightarrow M_{\rm D}+ M_{\rm bar}(1 - 0.1\times r_{\rm B}/a_{\rm bar})$. We find that this choice allows us to maintain a very similar rotation curve in the disc plane for the two models: if the circular velocity of the barred galaxy in the disc plane is averaged over all possible bar orientations, our prescription keeps the deviation between the two always below 4 per cent (see Fig.~\ref{fig:rot_curve} in the Appendix). For clarity, in the following we will always refer to the galaxy rotation curve as that associated with the barred galaxy. The characteristic resonances of the galaxy are shown in Table~\ref{tab:resonances}. The profiles of the epicyclic frequency and the orbital frequency are computed in the non-barred galaxy, and the bar orbital frequency is used to define the various resonances reported in the Table. \subsection{Dynamical friction prescriptions} The implementation for the DF-induced deceleration suffered by the MO along its orbital evolution is described in detail in \citet{Bonetti2020, Bonetti2021}. Here we report the key aspects of the implementation, and we refer the reader to the aforementioned papers for more details.
The DF acting on the MO is computed as a sum of the DF associated with the different galactic components. Each of the spherically symmetric components induces a deceleration with the form \begin{equation}\label{eq:DF_sph} \mathbf{a}_{\rm df,sph} = -2\pi G^2 \ln(1+\Lambda^2) m_{\rm p} \rho(r) \left({\rm erf}(X) - \dfrac{2 X{\rm e}^{-X^2}}{\sqrt{\pi}}\right) \dfrac{\mathbf{v}_{\rm p}}{|\mathbf{v}_{\rm p}|^3}, \end{equation} where $m_{\rm p}$ and $\mathbf{v}_{\rm p}$ are the MO mass and instantaneous velocity, respectively, $\rho(r)$ is the local background density associated with the given spherical component, and $X=v_{\rm p}/(\sqrt{2}\sigma(r))$, with $\sigma(r)$ being the local velocity dispersion of the considered galactic component. The argument of the Coulomb logarithm in the equation is given by the ratio between the maximum and minimum impact parameters, $\Lambda = p_{\rm max}/p_{\rm min}$, computed as $p_{\rm max} = r\left(- ~{\rm d} \ln \rho/~{\rm d} \ln r\right)^{-1}$ and $p_{\rm min} = \max[{G m_{\rm p}}/\left({v_{\rm p}^2+\sigma(r)^2}\right), D_{\rm p}]$, where $D_{\rm p}$ is the physical radius of the MO (which we set to zero in the present integration, as we always assume non-extended MOs, such as MBHs). The DF associated with the rotating disc is modelled as \begin{align}\label{eq:adf_disc} \mathbf{a}_{\rm df,disc} = -2\pi G^2 \ln(1+\Lambda^2) & m_{\rm p} \rho_{\rm D}(R,z) \ \times \nonumber\\ & \times\left({\rm erf}(X_{\rm D}) - \dfrac{2 X_{\rm D}{\rm e}^{-X_{\rm D}^2}}{\sqrt{\pi}}\right) \dfrac{\mathbf{v}_{\rm rel}}{|\mathbf{v}_{\rm rel}|^3}, \end{align} where $\mathbf{v}_{\rm rel} = \mathbf{v}_{\rm p} - \mathbf{v}_{\rm rot}(R)$, and $\mathbf{v}_{\rm rot}(R)$ is the rotational velocity in the disc, generally not equal to the circular velocity of the disc, as we assume that it is not fully rotationally supported. Finally, $X_{\rm D} = v_{\rm rel}/(\sqrt{2}\sigma_{\rm R})$, where $\sigma_{\rm R}$ denotes the radial velocity dispersion of the disc.
The details for the computation of $\mathbf{v}_{\rm rot}(R)$ and $\sigma_{\rm R}$ can be found in \citet[][]{Bonetti2021}. In the above expression, the minimum impact parameter entering the Coulomb logarithm is computed as $ p_{\rm min, D} = G m_{\rm p}/(v_{\rm rel}^2 + \sigma_{\rm R}^2)$, whereas $p_{\rm max}$ is chosen equal to the disc scale height. Physically, the expression adopted for the DF in the rotating disc accounts for the fact that the MO is moving within a medium featuring a net rotational velocity, so that it is the relative velocity between the MO and the typical rotational velocity of the disc at each given radius that has to be accounted for when estimating the MO deceleration. This treatment has proven to work very well in rotating environments \citep[][]{Bonetti2021}, and it reproduces the so-called drag-towards-circular-corotation (see Sec.~\ref{sec:counterrotating} for a description). The DF induced by the bar, when present, is very difficult to describe starting from first principles. Here we consider only the effect produced by the enhanced density: the additional DF caused by the bar is simply obtained by adding the bar density to the disc component in the equation for the deceleration (Eq.~\ref{eq:adf_disc}). Note that this assumption can in principle be inaccurate and impact our results. In Appendix~\ref{sec:appB}, we thus compare our semi-analytical treatment with full $N$-body simulations. The stochasticity induced by merely changing the number of particles in the $N$-body run is significant, thus suggesting that the detailed implementation of a more accurate DF prescription would probably not severely impact the evolution, as the stochasticity induced by the chaotic nature of the orbits appears to be the main factor in determining the decay time-scale. \subsection{Initial conditions for the orbit} In order to explore the effect of bars on the MO dynamics, we perform a large number of orbital integrations.
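The spherical-component drag of Eq.~(\ref{eq:DF_sph}) reduces to a one-line formula for its magnitude (the direction being $-\mathbf{v}_{\rm p}/|\mathbf{v}_{\rm p}|$); a Python sketch, with unit conventions of our own, is:

```python
import numpy as np
from math import erf

G = 4.301e-6  # kpc (km/s)^2 / Msun

def a_df_spherical(v_p, rho, sigma, m_p, Lambda):
    # Magnitude of the Chandrasekhar-like drag from one spherical component,
    # as in the text: v_p, sigma in km/s, rho in Msun/kpc^3, m_p in Msun;
    # returns (km/s)^2 per kpc, i.e. an acceleration in these mixed units.
    X = v_p / (np.sqrt(2.0) * sigma)
    bracket = erf(X) - 2.0 * X * np.exp(-X**2) / np.sqrt(np.pi)
    return 2.0 * np.pi * G**2 * np.log(1.0 + Lambda**2) * m_p * rho * bracket / v_p**2
```

For $X \ll 1$ the bracketed velocity factor reduces to $4X^3/(3\sqrt{\pi})$, so slow MOs feel a drag proportional to their velocity, while for $X \gg 1$ it saturates at unity and the drag falls off as $v_{\rm p}^{-2}$.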
Each simulation is always performed with the very same initial conditions in the two galaxy models, i.e. with and without the rotating bar. The MO does not suffer any mass variation during the evolution; this is obviously a simplification, and we plan to implement the effect of the MO mass loss in a forthcoming study. The initialization of the orbit of the MO is characterized by a series of variables that serve to uniquely determine the initial position and velocity of the MO, and specifically we will mainly use the following: \begin{itemize} \item $r_0$, the initial distance of the MO from the centre of the system; \item $f_{\rm circ} \in (0,1]$: if $v_{\rm c}$ is the circular velocity at $r_0$, we assign to the MO a tangential velocity equal to $v = f_{\rm circ}v_{\rm c}$, and zero radial velocity, meaning that the orbital evolution in the axisymmetric case and in the disc plane is always initialized at the apocentre;\footnote{Note that the apocentre is not well defined out of the disc plane and in the barred case.} so $f_{\rm circ} \simeq 0$ means an almost radial orbit, and $f_{\rm circ}=1$ corresponds to an ideally circular orbit (when the MO is in the disc plane). Note that $v_{\rm c}$ is always taken in the disc plane, regardless of whether the MO actually starts its evolution within the disc; \item $\phi \in [0, 180)$ degrees, the azimuthal angle; in principle, this should run from 0 to 360 degrees, but we limit its range for symmetry reasons. Note that this angle can be neglected for the non-barred galaxy, as the potential is axisymmetric.
In the barred case, $\phi=0$ means that the MO initially sits along the bar longest principal axis, $a_{\rm bar}$; \item $\theta \in [0, 180]$ degrees, the angle between the disc ($x-y$) plane and the MO initial position vector; \item $\alpha \in [0, 360)$ degrees, the angle between the initial velocity vector and the $x-y$ plane; recall that the initial velocity vector is always perpendicular to the position vector of the particle; \item $i \in [0, 180)$ degrees, the inclination of the initial orbit with respect to the disc plane. Note that this variable is degenerate with the previous three angular variables, but we will refer to it as well in some situations. \end{itemize} Fig.~\ref{fig:orbit_initialization} shows most of the aforementioned quantities in the three-dimensional space. In what follows, we define the inspiral to be completed once the MO stably remains below a separation of $10$ pc from the centre; we always stop the integration when the simulation time reaches a Hubble time (assumed to be 13.7 Gyr). \begin{figure} \centering \includegraphics[ width=0.45\textwidth]{img/orbit_initialization.png} \caption{The image shows the relevant variables adopted to initialize the orbit of the MO in the presented integrations. The point P $(x_0, y_0, z_0)$ in which the MO is initialized is defined by the polar and azimuthal angles $\theta$ and $\phi$, and by the length $r_0$ of the position vector. The velocity (${\rm vel}$, indicated as $v_{\rm p}$ in the text) always lies in the plane perpendicular to the position vector associated with P, and its orientation is defined by the angle $\alpha$, which is defined to be $0$ if the velocity lies parallel to the $x-y$ plane.
} \label{fig:orbit_initialization} \end{figure} \begin{table} \centering \caption{Characteristic scales for resonances} \label{tab:resonances} \begin{center} \begin{tabular}{lc} \hline Label & Value \\ \hline Bar semi-major axis & 5.0000 kpc \\ Co-rotation radius & 5.0041 kpc \\ Inner Lindblad resonance & 0.5746 kpc \\ Outer Lindblad resonance & 8.8870 kpc \\ Saddle radius ($\Phi_{\rm eff}$) & 5.3165 kpc \\ Maxima radius ($\Phi_{\rm eff}$) & 4.9620 kpc \\ \hline \end{tabular} \end{center} \justifying {\footnotesize The table displays the characteristic scales at which the bar resonates with the galaxy characteristic orbital frequencies, and the radii of the saddle points and maxima associated with the effective potential shown in Fig.~\ref{fig:bar_contour}.} \end{table} \section{Theoretical background}\label{sec:theory} \begin{figure} \centering \includegraphics[ width=0.45\textwidth]{img/bar_contour} \caption{The colour-coded map displays the effective galactic potential $\Phi_{\rm eff}$ in the plane of the disc ($z=0$), measured in units of $4.301\times10^4$ km$^2$ s$^{-2}$. The central white point at the origin marks a central minimum, the two red `+' are the two potential maxima ($x=0; y\approx\pm 4.962$~kpc), and the green `$\times$' are the two saddle points ($x\approx\pm 5.316$~kpc; $y=0$). The cyan line describes a circle of radius 5 kpc, i.e. the spatial extension of the bar; the arrows indicate the direction of the gravitational force associated with the effective potential displayed, with their length being proportional to its magnitude.} \label{fig:bar_contour} \end{figure} In order to understand the behaviour of the MO evolution, we recall that the gravitational potential of a uniformly rotating, non-axisymmetric density distribution is usefully described in a framework that co-rotates with the triaxial perturbation.
In particular, it is useful to define the effective potential \citep[e.g.][]{Sellwood1993} \begin{equation} \Phi_{\rm eff} = \Phi-\frac 1 2 \omega_{\rm bar}^2 r^2, \end{equation} where $\Phi$ is the conservative, space-dependent galactic potential of the barred galaxy, $\omega_{\rm bar}$ is the rotational frequency of the bar, and $r$ is the distance from the centre. Even neglecting the Coriolis force (which depends on the velocity of the moving mass), the gradient of the effective potential gives a good estimate of the force experienced by a test mass in the rotating frame within the disc plane. Fig.~\ref{fig:bar_contour} displays a map of the effective potential in the plane of the disc, for our barred galaxy model; the plot additionally shows the magnitude and direction of the associated effective force. This effective potential can be thought of as a `volcano' \citep{Prendergast1983}, with a minimum (crater) at the centre, a rim whose height varies slightly, and a slope that descends at larger radii. In this framework, neglecting DF, the so-called Jacobi integral (rather than the standard energy) of a test mass in the disc plane is conserved in time. This quantity can be expressed as \begin{equation}\label{eq:EJ} E_{\rm J} = E - \omega_{\rm bar}J_z = v_{\rm p}^2/2 + \Phi_{\rm eff}, \end{equation} where $E$ and $J_z$ are, respectively, the energy and $z$ component of the angular momentum per unit mass, both measured in the non-rotating frame, while in the last equality $v_{\rm p}$ denotes the velocity magnitude measured in the frame co-rotating with the bar; note that $E_{\rm J}$ is defined in the plane of the disc. In the absence of DF, $E_{\rm J}$ would determine whether a mass is limited to orbits in a particular region of space: only if $E_{\rm J}$ is larger than the maxima of $\Phi_{\rm eff}$ can it in principle explore the entire galaxy plane.
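A convenient property of $E_{\rm J}$ is that it can be evaluated from inertial-frame quantities alone, which makes it easy to monitor along an orbit integration; a minimal Python sketch (per unit mass, with a caller-supplied potential) is:

```python
def jacobi_integral(pos, vel, phi, omega_bar):
    # Jacobi integral E_J = E - omega_bar * J_z per unit mass, with pos and
    # vel measured in the non-rotating frame and phi the (barred) potential
    # evaluated at the instantaneous bar orientation.
    x, y, z = pos
    vx, vy, vz = vel
    E = 0.5 * (vx**2 + vy**2 + vz**2) + phi(x, y, z)
    Jz = x * vy - y * vx
    return E - omega_bar * Jz
```

Equivalently, $E_{\rm J}$ equals the kinetic energy measured in the frame co-rotating with the bar plus $\Phi_{\rm eff}$, which is how the `volcano' picture above is usually read.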
It is also relevant to note that the two saddle points are unstable equilibrium points, whereas in the present galaxy model the two potential maxima and the central minimum are stable points, meaning that a test mass can stably sit there or orbit these points in the absence of perturbations. More details on the orbits of subject masses in triaxial, rotating potentials can be found in, e.g. \citet[][especially their sec.~4.3.2]{Sellwood1993}. In our present framework, the otherwise conserved $E_{\rm J}$, which determines the orbit of a subject mass, can vary due to the effect of DF. The above considerations allow us to better understand the orbital behaviour of MOs subject to the combined effect of the galactic potential, the rotating bar, and DF. \section{Results}\label{sec:results} \begin{figure*} \centering \includegraphics[ width=0.49\textwidth]{img/ex0} \includegraphics[ width=0.49\textwidth]{img/ex1} \caption{The plot shows various quantities associated with the orbital evolution of an MO in the non-barred (left-hand panels) and barred (right-hand) galactic potential. In both cases, the $5\times10^6\ensuremath{\, \mathrm M_{\sun{}}}$ MO is initially at 8 kpc from the centre, with $f_{\rm circ}=0.3$, $\theta=i=15^\circ$, and initial velocity parallel to the disc plane. In the barred case, $\phi=24^\circ$ (the system has been rotated in the right-hand image, so that the coordinates of the initial MO position are the same in the barred and unbarred case). For each scenario, the three panels on the left-hand side show the projections of the orbit in time in three different directions, with $x-y$ being the plane of the disc. The initial 150 Myr of the orbital evolution are highlighted in red.
The four panels on the right-hand side show, from top to bottom, (i) the distance of the MO from the centre of the system, (ii) the orbital energy per unit mass, measured in units of $4.301\times 10^4$ km$^2$ s$^{-2}$, and (iii - iv) the $x-y$ and $z$ components of the orbital angular momentum per unit mass, measured in internal units of 207.4 kpc km s$^{-1}$. The dashed line in each plot marks the starting value of each of the displayed quantities. Interestingly, in the run with the bar, the MO gets dragged towards the centre faster thanks to the interactions with the bar, which allow it to reach the centre in less than a Hubble time, contrary to the non-barred case. } \label{fig:orbit_example} \end{figure*} Fig.~\ref{fig:orbit_example} reports an illustrative example of how the bar may affect the decay time-scale. An MO of $5\times 10^6\ensuremath{\, \mathrm M_{\sun{}}}$ on a relatively low-angular-momentum orbit, initially decaying from $r_0=8$ kpc with an initial inclination $i=15^\circ$, needs $<10$ Gyr to reach the centre if the bar is present, while it needs more than a Hubble time in the non-barred scenario. The plots also display some recurrent features of the orbital evolution: in the non-barred scenario, the evolution is much smoother and more predictable, in contrast to the stochastic evolution that characterizes the barred case; in both runs, the orbit circularizes and is dragged into the disc plane (as can be seen by looking at the different angular momentum components) at kpc-scale separation. \subsection{Systematic orbital sampling}\label{sec:systematic_sampling} \begin{figure*} \centering \includegraphics[ width=0.93\textwidth]{img/theta0} \caption{ All runs shown here assume an MO coplanar with the disc and co-rotating with the bar and galactic disc (i.e. $\theta=\alpha=i=0$).
The plots show the time for an MO of $5\times 10^6 \ensuremath{\, \mathrm M_{\sun{}}}$ to reach the galaxy centre in a range of initial configurations: different rows consider a distinct initial separation from the centre ($r_0$ decreasing from 12 to 3 kpc, from top to bottom), whereas different columns correspond to a different initial velocity, expressed as a fraction of the circular velocity ($f_{\rm circ}$ increasing from 0.1 to 1, from left to right). In each panel, the green horizontal line marks the time needed by the MO to complete the inspiral in the non-barred galaxy; blue circles refer to the runs with the bar and show the time needed for the MO to inspiral as a function of the phase $\phi$ (note that $\phi=0$ when the MO initially sits along the bar longest axis). The red triangles (and the orange dashed line, for the cases without a bar) mark the configurations for which the MO does not complete the inspiral within a Hubble time. } \label{fig:td_inplane} \end{figure*} \begin{figure*} \centering \includegraphics[ width=0.97\textwidth]{img/mess0} \caption{The matrix is the same as in Fig.~\ref{fig:td_inplane}. Each panel shows the distance of an MO from the centre as a function of time; the black line refers to the run without the bar; the coloured lines in each panel refer to runs with the bar, each with a different phase, mapped in the colour-bar in the bottom-right panel. The two magenta horizontal lines mark the co-rotation radius (lower line) and the outer Lindblad resonance radius (upper line). All runs shown here assume an MO coplanar with the disc and co-rotating with the bar and galactic disc.} \label{fig:mess0} \end{figure*} As a first test, we explore the orbital evolution of a $5\times 10^6\ensuremath{\, \mathrm M_{\sun{}}}$ MO in the galaxy. This mass is a compromise between the typical mass an intruder MBH would have, if brought in the Milky Way by a minor merger, and the whole mass of the satellite galaxy that could host it.
This value also allows a reasonable fraction of MO orbital decays to be completed within a Hubble time. \subsubsection{In-plane, prograde orbits} Fig.~\ref{fig:td_inplane} shows the time needed by the MO to complete its inspiral for in-plane, prograde orbits (i.e. whose angular momentum has the same direction as that of the bar and the disc). In each sub-plot, the inspiral time is shown as a function of the phase $\phi$ (sampled as $\phi=0, 6, 12, ..., 174$ degrees) if the bar is present, while it is represented as a dashed line for the equivalent non-barred galaxy case. A more detailed view of the inspiral can be found in Fig.~\ref{fig:mess0}, which shows the MO distance from the centre as a function of time for the same runs referenced in Fig.~\ref{fig:td_inplane}. The effect of the bar on the orbital evolution and decay time-scale is particularly relevant for orbits that cross or initially remain close to the co-rotation radius, which roughly coincides with the bar major axis $a_{\rm bar}$ ($5$ kpc); at these scales, the bar reduces the decay time for orbits initialized near the edges of its major axis, while the decay time tends to be larger for initial phases near 90 degrees. As expected, the effect of the bar weakens for orbits which are initially close to the size of the second axis $b_{\rm bar}=2$ kpc, as can be seen by looking at the decay time-scales of MOs starting from small $r_0$ and small $f_{\rm circ}$ in Fig.~\ref{fig:td_inplane}. At scales of the order of the outer Lindblad resonance (Table~\ref{tab:resonances}), the interaction with the bar becomes less predictable and, in some cases, the bar keeps the MO outside $\approx 9$ kpc, preventing any inspiral and quashing the effect of DF, as can be seen in Fig.~\ref{fig:mess0}.
\begin{figure*} \centering \includegraphics[ width=0.24\textwidth]{img/R9.000_fcirc0.800_phase000_theta000_alpha000} \includegraphics[ width=0.24\textwidth]{img/R10.000_fcirc0.200_phase000_theta000_alpha000} \includegraphics[ width=0.24\textwidth]{img/R9.000_fcirc0.800_phase090_theta000_alpha000} \includegraphics[ width=0.24\textwidth]{img/R6.000_fcirc0.800_phase090_theta000_alpha000} \caption{The plots show different aspects of the orbital evolution for a $5\times10^6\ensuremath{\, \mathrm M_{\sun{}}}$ MO evolving in the barred potential. Each column refers to a different run, whose initialization variables are displayed in the bottom panels. The top plots show the MO orbital evolution in the rotating frame of the bar, and the colour of the line refers to the time in the evolution, as mapped in the two bottom panels; the dotted grey lines are effective potential isocontours, the same as in Fig.~\ref{fig:bar_contour}. The second panel shows the $z$ component of the torque experienced by the MO, averaged over a full azimuthal oscillation in the rotating frame, measured in units of $4.4985\times 10^4$ kpc$^2$ Gyr$^{-2}$; we distinguish between the DF-induced torque, the global torque due to the `potential' of all components in the galaxy (see Footnote~\ref{fn:torque}), and the total torque experienced by the MO (the sum of the aforementioned ones); note that, when the orbit is too irregular, it is almost impossible to get a proper orbit average of the torque, so this quantity is not shown for all time ranges. The third panel shows the value of the Jacobi integral (Eq.~\ref{eq:EJ}, measured in units of $4.301\times10^4$ km$^2$ s$^{-2}$) as a function of time, the black dashed horizontal line being the value of the effective potential at the saddle points. The bottom panel shows the distance of the MO from the galaxy centre as a function of time. All runs shown refer to prograde and in-plane MOs.
} \label{fig:orbs_in_rotating_frame} \end{figure*} In order to better understand the aforementioned behaviour, we show in Fig.~\ref{fig:orbs_in_rotating_frame} the orbital evolution of the MO in the rotating frame for four different runs. The same Figure also shows the evolution of the different contributions to the $z$ component of the torque (averaged over a radial oscillation, $\tau_z$), of the Jacobi integral (Eq.~\ref{eq:EJ}), and of the orbital radius of the MO. By examining Figs~\ref{fig:td_inplane}, \ref{fig:mess0}, and \ref{fig:orbs_in_rotating_frame}, we can see that the orbital evolution of MOs exhibits some recurrent behaviours: if an object starts from a large $r_0$, with $f_{\rm circ}\approx 1$, it may remain trapped in a nearly circular orbit near the outer Lindblad resonance, characterized by a nearly constant $E_{\rm J}\approx -4$ (in units of $4.301\times10^4$~km$^2$~s$^{-2}$), without experiencing any net decay. This behaviour is due to the positive bar-induced torque that, over a full orbit, counteracts the effect of DF, as shown in the first column of Fig.~\ref{fig:orbs_in_rotating_frame}. Accordingly, Fig.~\ref{fig:mess0} clearly shows that several MOs starting from a large separation remain trapped there, as they do not experience any net decay in about a Hubble time. Fig.~\ref{fig:orbs_in_rotating_frame} displays other typical configurations for the evolution: if the MO starts with an initial $E_{\rm J}$ larger than the maximum of the effective potential, then it can in principle explore the whole galaxy. Owing to the drag of DF, though, $E_{\rm J}$ gets smaller and smaller, so that the orbit typically remains confined in a given region about one of the bar stable Lagrangian points (i.e.
about one of the two effective potential maxima,\footnote{We stress that the maxima of the effective potential are not maxima of the gravitational potential, and therefore stable orbits can exist around these two Lagrangian points \citep[][]{Sellwood1993}.} or about the origin). This is, for instance, what is shown in the second column of Fig.~\ref{fig:orbs_in_rotating_frame}: the MO is initially wandering freely in the inner 10~kpc but, owing to the DF energy loss, it remains trapped about one maximum. While orbiting the maximum, it slowly decreases its $E_{\rm J}$ due to DF and increases its eccentricity (as happens in the run in the fourth column of the same Figure; in that case, however, the MO spends a Hubble time orbiting the effective potential maximum), until it manages to go through one of the two saddle points; from this moment, the DF-driven inspiral proceeds within the eye-shaped central crater, and the MO successfully inspirals towards the centre. Analogously, in the run shown in the third column of Fig.~\ref{fig:orbs_in_rotating_frame}, the MO wanders with an $E_{\rm J}$ close to the value of the effective potential at the saddle; since it immediately manages to pass close to one saddle point, its inspiral proceeds smoothly and effectively in the inner eye-shaped hollow of the effective potential. Note that the torque induced by DF and the global torque\footnote{\label{fn:torque} From this moment on, we will denote the torque experienced by the MO owing to the effect of the non-spherical and rotating galaxy potential (as opposed to the dissipative torque due to DF) as the \textit{global} torque. } may work against each other outside the central crater (as the rotating bar tends to increase the angular momentum of the MO), while they both promote the inspiral within the central hollow.
The aforementioned behaviours allow for a better interpretation of the time-scales in Fig.~\ref{fig:td_inplane}: orbits with initial $\phi\approx0, 180^\circ$ starting their evolution with $r_0\approx5$~kpc and $f_{\rm circ}\approx 1$ (or, analogously, with $r_0\gtrsim5$ kpc, but with $f_{\rm circ}<1$) start from a point that is very close to a saddle point, so that they can easily cross it and enter the region in which both $\tau_z$ from DF and the galaxy potential promote the inspiral. On the other hand, the evolution of these MOs, if it starts from $\phi\approx90^\circ$, is necessarily delayed as they are initially `trapped' near a potential maximum, and they remain there until their $E_{\rm J}$ becomes small enough so that they can cross a saddle point and proceed with the central inspiral. The behaviour of the decay time-scales for $r_0\gtrsim 8$ kpc is much less predictable, but it essentially boils down to understanding whether the MO starts oscillating about a potential maximum, so that it can eventually cross a saddle point and reach the centre, or whether it remains trapped in a circular orbit at the outer Lindblad resonance, not experiencing any net decay, as in the first column of Fig.~\ref{fig:orbs_in_rotating_frame}. As a matter of fact, for orbits with $r_0\gtrsim7$ kpc and $f_{\rm circ}\gtrsim0.6$, the decay is typically possible if they start from $\phi\approx90^\circ$ (see, e.g. the case with $r_0=10$ kpc, $f_{\rm circ}=0.8$, or $r_0=12$ kpc, $f_{\rm circ}=0.6$), as the effective potential at that location is slightly higher than that at the saddle point. On the other hand, the potential evaluated at the same initial radius for $\phi\approx0, 180^\circ$ is lower and therefore, for such values of $\phi$, the Jacobi integral is too small to allow for the crossing of the saddles. As a consequence, the associated orbits are more likely to remain trapped about the outer Lindblad resonance. 
Summarizing, the different behaviour of the MO orbiting near the co-rotation or outer Lindblad resonance can be understood as follows. Near co-rotation, the MO would remain trapped about the ridges in the effective potential in the absence of DF. As shown in the right-most panels of Fig.~\ref{fig:orbs_in_rotating_frame}, the orbit-averaged torque due to DF (which is always negative in this run) and the oscillating bar-induced one (which is positive, once orbit-averaged) nearly balance each other along the evolution; the two torques combine in such a way that, if the MO starts from near the top of the ridge, it then descends while exhibiting wider and wider oscillations about the ridge top. As these oscillations grow larger, the non-averaged global torque grows in modulus, due to the fact that the MO can get closer and closer to the bar. This descent eventually brings the MO out of the rim area, so that it can cross the saddle point. Orbits trapped about the outer Lindblad resonance (see the left-most panels in Fig.~\ref{fig:orbs_in_rotating_frame}) behave quite differently. There, both the bar torque and DF oscillate between positive and negative values along each orbit,\footnote{DF can also induce an acceleration in rotating discs, see \citet{Chandrasekhar1942} and \citet{Bonetti2020}.} and the net torque oscillates significantly as well. The net torque over each orbit is nearly zero, and the MO orbit does not drift along the evolution, once in the \textit{trap} orbit, because the angular momentum transfer from the bar to the MO compensates for the loss due to DF. Note that we tried to evolve the MO on this orbit for a hundred Hubble times, and we found no net decay. We further note that those trap orbits exist only for relatively light intruders, since DF becomes much stronger for significantly more massive MOs, overwhelming the bar-induced torque.
\subsubsection{Counter-rotating orbits} \label{sec:counterrotating} Figs~\ref{fig:td_inplane_counter} and~\ref{fig:mess180} of Appendix~\ref{sec:appendix_fig} show the analogues of Figs~\ref{fig:td_inplane} and~\ref{fig:mess0}, respectively, but initializing the MO orbit so that it initially counter-rotates with respect to the galaxy angular momentum. The bar impact on the decay time-scale is relatively modest in the counter-rotating cases, as the inspiral times remain very similar for runs with and without the bar. This is due to the following: when the MO is initially counter-rotating, its velocity relative to the bar is much larger than in the prograde case, so the bar does not manage to effectively torque the MO. This is true as long as the MO does not reverse the sign of its angular momentum. Indeed, a retrograde MO embedded in a rotating system has been shown to experience the so-called drag towards circular co-rotation: this means that the MO would progressively lose angular momentum via DF, until its orbit gets very radial and its angular momentum reverses sign; from this moment on, DF would promote the circularization of the now prograde MO \citep{Dotti2007, Bonetti2020}. In the present framework, this means that the MO experiences little effect from the bar until its orbit turns prograde: at that point the evolution can be assimilated to the prograde one, described above, and the effect of the bar becomes significant. We find that MOs that switch the sign of their angular momentum earlier in the evolution consistently take a longer time to complete their inspiral, for a given value of $r_0$ and $f_{\rm circ}$. This is likely due to the fact that, once the angular momentum reverses, circularization occurs promptly, thus the MO spends more time on a nearly circular orbit that does not reach the dense central regions where DF would be more efficient.
On the other hand, if the angular momentum reversal never occurs, or occurs when the inspiral is nearly completed, the MO orbit stays more eccentric, so that, along each orbit, it penetrates the denser regions near the centre, experiencing a stronger DF. In addition, we found that the angular momentum sign reversal almost never happens if $f_{\rm circ}\gtrsim0.6$; this is likely due to the fact that DF is less efficient when the relative velocity between the MO and the background is larger; given that retrograde, nearly circular orbits maximise this relative velocity, the effect of DF is weak and circularization is not effectively promoted. \subsubsection{Off-plane orbits} Finally, we also explore the inspiral time-scale of the same MO for off-plane orbits. We find that the bar effect gets weaker as the initial orbit gets more off-plane. In general, the orbital evolution time-scale is very stochastic if the bar is present, and it is not easy to define a clear trend for the decay. We report the map illustrating the decay time-scale in a set of off-plane runs in Fig.~\ref{fig:spectra000}. Note that the off-plane MOs tend to get gradually dragged into the disc plane, where the evolution is analogous to what is presented in the previous Sections. \subsection{Monte Carlo orbital sampling} \begin{figure*} \centering \includegraphics[ width=0.45\textwidth]{img/m_all} \includegraphics[ width=0.45\textwidth]{img/tau_z} \includegraphics[ width=0.45\textwidth]{img/time_ratio.pdf} \includegraphics[ width=0.45\textwidth]{img/stochasticity_measure.pdf} \caption{ The {\bf {top-left}} corner plot displays whether it is more probable that the bar promotes (blue) or demotes (red) the MO inspiral.
More specifically, each region of the parameter space is colour-coded with the variable $f_{\rm b}=(n_{\rm promote}- n_{\rm demote})/n_{\rm tot}$, where $n_{\rm tot}$ is the total number of simulations in that given region of the parameter space, among which $n_{\rm promote}$ is the number of runs for which the barred inspiral time-scale is 0.75 or less times the non-barred one; on the contrary, $n_{\rm demote}$ is the number of runs for which the \textit{non-barred} inspiral time-scale is 0.75 or less times the \textit{barred} one. Runs taking more than a Hubble time are assigned an infinite inspiral time; note that if we instead assume such runs to take exactly a Hubble time, the plot looks similar. The {\bf {top-right}} corner plot shows the average magnitude of the $z$-component of the global torque in the barred runs: the time average of the torque is computed over each run, and this value is averaged over all runs that belong to each different region of the displayed maps. { The {\bf bottom-left} corner plot shows, for each given region of the parameter space, the ratio between the average MO inspiral time in the barred galaxy and the same quantity in the equivalent unbarred system, so that the red (blue) regions mark the portions of the parameter space in which the inspiral is slower with (without) accounting for the bar. The {\bf bottom-right} corner plot compares the degree of stochasticity associated with the inspiral time-scale in barred and unbarred systems, with the red (blue) colours showing the regions in which the inspiral time-scale gets more stochastic with (without) the bar.
More specifically, the colour map refers to the quantity $(\sigma_t/\langle t\rangle)_{\rm bar} - (\sigma_{t}/\langle t\rangle)_{\rm no\ bar}$, where $\langle t\rangle$ and $\sigma_t$ respectively represent the average inspiral time and its standard deviation, and the subscript refers to whether we are considering runs with or without the bar. In the bottom panels, inspirals taking more than a Hubble time have been set to take 16 Gyr for the computation of the colour-coded quantities; we checked that this somewhat arbitrary choice does not appreciably affect our findings. In all plots,} the vertical lines mark the co-rotation radius and the position of the outer Lindblad resonance. } \label{fig:competition} \end{figure*} In addition to the aforementioned simulations, we perform a series of runs initializing the MO so that its initial position is isotropically distributed in a sphere of radius $r_0$, where $r_0$ is extracted uniformly in the range $[2, 14]$ kpc. We sampled the angle $\alpha$ uniformly in $[0, 360)$ degrees and $f_{\rm circ}$ uniformly in $[0.03, 1.0]$. We additionally sampled the MO mass from a log-uniform distribution between $5\times10^6\ensuremath{\, \mathrm M_{\sun{}}}$ and $10^8\ensuremath{\, \mathrm M_{\sun{}}}$, in order to understand the dependence of the inspiral also on the intruder's mass. For each set of extracted initial conditions,\footnote{Note that the distributions from which these quantities have been extracted for the Monte Carlo sampling have no claim to be representative of a sample of MOs entering a real galaxy. } we run a simulation both in the barred and in the unbarred galaxy ($\approx 13,000$ runs). Fig.~\ref{fig:competition} shows {a set of corner plots: the left-hand ones show} whether the bar promotes (blue) or hinders (red) the inspiral for several combinations of parameters used in the orbit initialization.
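The Monte Carlo initialization just described can be sketched as follows. This is a minimal illustration of the quoted distributions only; the function and variable names are ours, and the exact implementation in the paper's code is not known to us:

```python
import numpy as np

rng = np.random.default_rng(1)  # seeded for reproducibility of the example

def sample_initial_conditions(n):
    """Draw n sets of MO initial conditions following the distributions
    quoted in the text (names and structure are illustrative)."""
    r0 = rng.uniform(2.0, 14.0, n)            # kpc, uniform in [2, 14]
    # isotropic orientation of the initial position at radius r0
    cos_theta = rng.uniform(-1.0, 1.0, n)     # cosine of the polar angle
    phi = rng.uniform(0.0, 360.0, n)          # azimuthal angle, degrees
    alpha = rng.uniform(0.0, 360.0, n)        # velocity angle, degrees
    f_circ = rng.uniform(0.03, 1.0, n)        # fraction of the circular velocity
    # log-uniform MO mass between 5e6 and 1e8 Msun
    m_p = 10.0**rng.uniform(np.log10(5e6), np.log10(1e8), n)
    return r0, cos_theta, phi, alpha, f_circ, m_p

r0, cth, phi, alpha, fc, m = sample_initial_conditions(13000)
```

Sampling the cosine of the polar angle uniformly (rather than the angle itself) is what makes the position distribution isotropic on each sphere of radius $r_0$.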
In particular, {in the top-left plot, } each area in the parameter space is colour-coded depending on the value of $f_{\rm b}=(n_{\rm promote}- n_{\rm demote})/n_{\rm tot}$, with $n_{\rm promote}$ the number of runs for which the barred inspiral time-scale is 0.75 or less times that of the non-barred case, $n_{\rm demote}$ the number of runs for which the non-barred inspiral time-scale is 0.75 or less times that of the barred case (see the caption for more details), and $n_{\rm tot}$ the total number of runs in that given region of the parameter space. {The bottom-left panel, instead, is colour-coded with the ratio of the average inspiral time-scale in the barred and unbarred scenario. Both left-hand plots show very similar features.} In general, there is a region near $r_0\approx5$ kpc and $\phi\approx 90^\circ$ that shows the slow-down in the inspiral induced by the `trap' near the effective potential maxima. It is also clear that the bar tends to promote the inspiral of prograde MOs within the disc plane ($\cos(i)\approx1$), at least within the outer Lindblad resonance. Massive MOs are less likely to sink in the barred case, especially for large $r_0$ and $f_{\rm circ}$. Indeed, the fractions of inspirals that are not completed in a Hubble time with and without the bar within our complete Monte Carlo sample amount to 31 and 26 per cent, respectively; however, the same fraction amounts to 23.5 (9.6) per cent in the barred (non-barred) scenario if we limit our analysis to MOs with $m_{\rm p}>10^{7.5} \ensuremath{\, \mathrm M_{\sun{}}}$ and $r_0>7.5$ kpc. This is likely due to the fact that the DF-induced deceleration increases linearly with the MO mass, whereas the effect of global torques is independent of the MO mass.
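For concreteness, the classification entering $f_{\rm b}$ and the stochasticity measure of the bottom-right panel can be written down explicitly. The sketch below assumes matched arrays of inspiral times from barred and unbarred runs; the thresholds (0.75, the 16 Gyr cap) follow the text, while the function names, the assumed Hubble time value, and the population-standard-deviation convention are ours:

```python
import numpy as np

T_HUBBLE = 13.8  # Gyr; assumed value for the Hubble time in this example

def f_b(t_bar, t_nobar, ratio=0.75):
    """f_b = (n_promote - n_demote)/n_tot for matched pairs of runs;
    inspirals exceeding a Hubble time are treated as taking infinite time."""
    t_bar = np.where(np.asarray(t_bar, float) > T_HUBBLE, np.inf, t_bar)
    t_nobar = np.where(np.asarray(t_nobar, float) > T_HUBBLE, np.inf, t_nobar)
    # a run counts as promote/demote only if the shorter time-scale is finite
    n_promote = np.sum(np.isfinite(t_bar) & (t_bar <= ratio*t_nobar))
    n_demote = np.sum(np.isfinite(t_nobar) & (t_nobar <= ratio*t_bar))
    return (n_promote - n_demote)/len(t_bar)

def stochasticity_excess(t_bar, t_nobar, cap=16.0):
    """(sigma_t/<t>)_bar - (sigma_t/<t>)_no bar, with inspirals longer
    than a Hubble time capped at 16 Gyr, as in the bottom panels."""
    tb = np.minimum(np.asarray(t_bar, float), cap)
    tn = np.minimum(np.asarray(t_nobar, float), cap)
    return tb.std()/tb.mean() - tn.std()/tn.mean()
```

By construction $f_{\rm b}$ is antisymmetric under swapping the barred and unbarred samples, and a pair in which neither run completes within a Hubble time contributes to neither $n_{\rm promote}$ nor $n_{\rm demote}$.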
More massive MOs thus sink more promptly in an axisymmetric, static potential where they experience DF alone; however, if the bar is present, the effect of DF is hampered by global torques induced by the rotating triaxial structure: those typically hinder the inspiral at large scales. \\ The results shown in the left-hand {panels} of Fig.~\ref{fig:competition} can be almost completely explained in terms of the $z$ component of the global (bar) torque for the barred cases. Indeed, in the {top-right} panel of the same Figure, we display the $z$ component of the (bar-induced) torque, time-averaged for every run, and then averaged over all runs in a given region of the corner plot. This map nearly reproduces the left-hand ones, with averaged negative (positive) $z$ torques mapping the regions in which the bar promotes (demotes) the inspiral. {Furthermore, the bottom-right panel of Fig.~\ref{fig:competition} compares the degree of stochasticity associated with barred and unbarred runs. In particular, the plot is colour-coded according to the quantity $(\sigma_t/\langle t\rangle)_{\rm bar} - (\sigma_{t}/\langle t\rangle)_{\rm no\ bar}$, where $\langle t\rangle$ and $\sigma_t$ respectively represent the average decay time and its standard deviation within a given region of the parameter space, and the subscript refers to whether we are considering runs with or without the bar. This means that red (blue) regions mark the portion of the parameter space in which the decay time-scale is more stochastic with (without) the bar. In most cases, the bar presence enhances the stochasticity in the same regions where the inspiral takes longer if the bar is present, and the bar average torque is positive. An exception is the region in which $\cos(i)=1$, mapping initially nearly prograde MOs.
Those tend to have a faster inspiral in the barred case, at least for moderately light MOs starting from relatively small $r_0$; however, all coplanar runs accounting for the bar appear to have an enhanced degree of stochasticity, as for those runs the randomizing effect of the bar appears to be stronger.} \section{Discussion and Conclusion}\label{sec:concl} In this paper, we explored the orbital evolution of massive objects (MOs) in a barred Milky Way galaxy model, and we compared it to the evolution of MOs in an analogous, non-barred galaxy. We performed a large number of runs adopting a very accurate orbit integrator that features a careful treatment of the galaxy potential (including a bulge, a disc, a dark matter halo, and -- in some configurations -- a rotating bar) and of dynamical friction (DF), which properly recovers the results of $N$-body simulations even in rotationally supported galaxy discs \citep{Bonetti2020, Bonetti2021}. We found that the presence of a typical galactic rotating bar, within an otherwise axisymmetric galaxy model, makes the MO orbital evolution more stochastic, and can significantly affect its orbital decay time-scale. In particular, the effect of the bar is more prominent for MOs that spend most of their evolution on a prograde orbit co-planar with the disc: in these situations, the inspiral time with and without the bar can vary by a factor of a few. These results are remarkable, especially considering that the chosen Milky Way-like galaxy does not feature an extremely prominent bar. Rather, its properties, such as the mass, are compatible with the Milky Way bar including its pseudo-bulge component \citep{Portail2017}.
The morphology of the considered system is analogous to that of many other spirals in the local Universe \citep{Kormendy2004, Drory2007}, in which pseudo-bulges are ubiquitous and are believed to originate from the bar itself, suggesting that our results should apply to typical late-type spirals, one of the most common classes of galaxies in the Universe. In our runs, the bar presence often promotes the orbital decay but, in some configurations (especially if the MO is initialized on a large prograde orbit co-planar with the disc), it does induce the stalling of an MO at large separations. This is in line with the results in \citet{Bortolas2020}: in their zoom-in cosmological simulation, MBHs were found to typically promptly inspiral when a bar develops in the host galaxy but, in one case, the bar instead scatters an MBH on a wide, large angular momentum orbit, hampering its further orbital decay. \\ Our semi-analytical approach implies an idealized treatment of the host galaxy and in particular of the DF drag, which is accounted for based on the \citeauthor{Chandrasekhar1943} implementation, a treatment that has its own limitations. Among those is the fact that the Coulomb logarithm entering the DF drag should be allowed to vary along the evolution \citep{Petts2015}, rather than being kept fixed; this was taken into account in our implementation. In addition, the standard DF treatment does not include, in the braking effect, the contribution of objects moving faster than the MO, potentially resulting in an inaccurate evolution in some situations \citep[e.g.][]{Read2006, Antonini2012, Petts2015, Petts2016}. To constrain the impact of this approximation, we checked that fast-moving stars do not crucially contribute to the friction in our implementation \citep{Bonetti2021}.
It is also important to mention that \citeauthor{Chandrasekhar1943}'s DF treatment assumes the response of the host galaxy to the passage of the MO to be rather local, while in reality the whole host reacts to, and \textit{resonates} with, the MO perturbation \citep{Tremaine1984}. Taking into account this aspect is very important \citep{Tamfal2020, Vasiliev2021}, especially if the mass ratio between the host and the MO is not too far from unity. On the other hand, MOs with a very low mass compared to the host, as those adopted in the presented study, are likely to result in a negligible global response from the host, so that the local treatment is good enough \citep{Bonetti2021,Vasiliev2021}. Finally, our implementation of DF is relatively simplistic, especially for the axisymmetric and triaxial structures. In particular, in the triaxial case, the bar may substantially affect the main moments of the velocity distribution; as a result, the prescription adopted here may be systematically affected. Given that our results in the barred scenario are critically impacted by the interplay between DF and global torques, adopting a more accurate prescription for DF would affect the MO probability of approaching a resonance and remaining trapped in it or not. Nonetheless, it is clear that stochasticity plays a critical role in the orbital evolution of MOs in barred galaxies, as demonstrated by the $N$-body simulations presented in Appendix~\ref{sec:appB}. Thus, the limitations of the presented DF implementation do not threaten the qualitative finding that bar resonances induce stochasticity in the orbital evolution of MOs. Another important caveat concerns the temporal evolution of the galaxy (and its bar, when present): in our runs, the bar and galaxy properties were kept fixed, while in reality both would evolve significantly with time \citep[e.g.][]{Sellwood2014, Zana18a, Zana19}.
A \textit{live} bar, as opposed to the rigid bar potential adopted here, may change its properties in time, possibly getting stronger (e.g. \citealt{Athanassoula2013}). This could not be taken into account in our semi-analytical treatment, and can only be addressed via dedicated numerical simulations. Related to this, it is worth mentioning that the same galaxy merger that brings an MO into the outskirts of a larger galaxy may influence the presence of a bar, possibly triggering its formation, delaying it, or weakening/destroying a bar already in place \citep[e.g.][]{Pfenniger1990, Zana18b}. Furthermore, (disc) galaxies may well feature further deviations from axisymmetry, the most obvious being spiral structures \citep[e.g.][]{Bertin1989}. The bar, when present, is generally the most prominent deviation, thus it would reasonably have the most relevant impact on the orbit of MOs; {spirals are generally transient structures and may be strongly fluctuating, and a recent study shows that their angular momentum transfer to the halo is negligible \citep{Sellwood2021}, suggesting the MO may be virtually unaffected by the spirals}. Additional torquing sources could be represented by clumps \citep[][]{Tamburello2017}, tidal perturbations to the galaxy \citep[][]{Bortolas2020} and many others; our simplified treatment represents a lower limit to the sources of stochasticity that may affect the inspiral of MOs. Finally, another important limitation of our study is the fact that we consider only point-mass MOs with fixed mass and negligible extension, an approximation that is valid when the considered MO is an MBH or a very compact cluster of stars.\footnote{As an example, a nearly naked MBH might be wandering in the outskirts of a galaxy if e.g. it was ejected from the centre as a result of the gravitational-wave recoil following the merger between two MBHs \citep[e.g.][]{Nasim2021}. 
Another possibility is to have a secondary galaxy that is severely ram-pressure stripped by the galaxy host, so that the intruder MBH remains nearly naked \citep{Capelo2015}. } If the MO were an extended and relatively diluted dwarf galaxy, or a stellar cluster, it would get trimmed by tidal forces along the evolution, depending on its properties with respect to the host's; modelling the effect of stripping, however, is beyond the scope of the present work. In spite of these limitations, our treatment allows us to pinpoint the sole effect of the bar on the orbital evolution of an MO, and we defer the implementation of additional physical processes that may affect the inspiral to a forthcoming study. To conclude, it is worth highlighting that we found the most massive MOs in our sample ($\gtrsim 10^{7.5}\ensuremath{\, \mathrm M_{\sun{}}}$) that start their evolution from relatively large radii ($\gtrsim 8$ kpc) to be less likely to successfully complete the inspiral within the barred galaxy, compared to the axisymmetric case; when the bar is present, we find that the number of stalled MOs may double. This aspect is particularly relevant considering that, in a realistic scenario, one expects a relatively massive intruder galaxy to start interacting from large separations. In particular, MOs might be delivered by minor mergers, in the form of cores of a galaxy companion, and since these might easily be dropped at the outskirts of the galaxy when the companion is tidally dissolved (e.g. \citealt{Callegari2009, Callegari2011}), this outcome might not be rare. {Our runs suggest that, in the limit of minor mergers probed in this work, the most massive MOs, which are the most affected by DF, are also those whose large-scale inspiral is most effectively hindered by the bar, implying that barred galaxies involved in minor mergers are likely to feature lower rates of MBH mergers. 
However, the statistical relevance of this } should be evaluated with the help of cosmological simulations or semi-analytical models of galaxy formation modelling a large sample of systems. {In general, we stress that} the presence of bars and other deviations from axisymmetry should be taken into account when exploring the accretion of galaxy satellites onto more massive systems, and when making predictions of MBH mergers, as the rates of gravitational-wave-driven MBH coalescences are closely connected with the efficiency of inspiral of their parent systems. For example, MBHs detectable by LISA should be abundant in the mass range $10^5$--$10^7 \ensuremath{\, \mathrm M_{\sun{}}}$, which is the typical mass of MBHs hosted by late-type spirals, the class of galaxies in which the dynamical processes discussed in this paper are most relevant. In this context, our semi-analytical framework can be implemented into semi-analytical models of galaxy and MBH formation and evolution, to better evaluate the impact of bars on the formation and evolution time-scales of MBH binaries, which is critical in estimating the formation rate of gravitational-wave sources detectable by forthcoming low-frequency gravitational wave facilities such as LISA \citep{Bonetti2019}. \section*{Acknowledgements} We thank the anonymous referee for their useful comments and suggestions. We warmly thank Eugene Vasiliev for his help and support with the usage of the AGAMA tool. EB and AS acknowledge support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme ERC-2018-COG under grant agreement N.~818691 (B~Massive). EB, PRC, and LM acknowledge support from the Swiss National Science Foundation under Grant 200020\_178949. MD, MB, and AL acknowledge funding from MIUR under the grant PRIN 2017-MB8AEZ. \section*{Data Availability Statement} The data underlying this article will be shared on reasonable request to the corresponding author.
package de.fred4jupiter.fredbet.data;

import de.fred4jupiter.fredbet.domain.*;
import de.fred4jupiter.fredbet.props.FredBetProfile;
import de.fred4jupiter.fredbet.props.FredbetConstants;
import de.fred4jupiter.fredbet.props.FredbetProperties;
import de.fred4jupiter.fredbet.security.FredBetUserGroup;
import de.fred4jupiter.fredbet.service.BettingService;
import de.fred4jupiter.fredbet.service.InfoService;
import de.fred4jupiter.fredbet.service.JokerService;
import de.fred4jupiter.fredbet.service.MatchService;
import de.fred4jupiter.fredbet.service.image.ImageAdministrationService;
import de.fred4jupiter.fredbet.service.user.UserAlreadyExistsException;
import de.fred4jupiter.fredbet.service.user.UserService;
import de.fred4jupiter.fredbet.web.info.InfoType;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang3.tuple.ImmutablePair;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.env.Environment;
import org.springframework.core.env.Profiles;
import org.springframework.core.io.ClassPathResource;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime;
import java.util.List;

@Component
public class DatabasePopulator {

    private static final int NUMBER_OF_DEMO_USERS = 12;

    private static final String DEFAULT_PASSWORD_ADMIN_USER = FredbetConstants.TECHNICAL_USERNAME;

    private static final Logger LOG = LoggerFactory.getLogger(DatabasePopulator.class);

    private final MatchService matchService;
    private final Environment environment;
    private final UserService userService;
    private final BettingService bettingService;
    private final RandomValueGenerator randomValueGenerator;
    private final InfoService infoService;
    private final ImageAdministrationService imageAdministrationService;
    private final JokerService jokerService;
    private final FredbetProperties fredbetProperties;
    private final FakeDataPopulator fakeDataPopulator;

    public DatabasePopulator(MatchService matchService, Environment environment, UserService userService,
                             BettingService bettingService, RandomValueGenerator randomValueGenerator,
                             InfoService infoService, ImageAdministrationService imageAdministrationService,
                             JokerService jokerService, FredbetProperties fredbetProperties,
                             FakeDataPopulator fakeDataPopulator) {
        this.matchService = matchService;
        this.environment = environment;
        this.userService = userService;
        this.bettingService = bettingService;
        this.randomValueGenerator = randomValueGenerator;
        this.infoService = infoService;
        this.imageAdministrationService = imageAdministrationService;
        this.jokerService = jokerService;
        this.fredbetProperties = fredbetProperties;
        this.fakeDataPopulator = fakeDataPopulator;
    }

    public void initDatabaseWithDemoData() {
        if (!isRunInIntegrationTest()) {
            createDefaultUsers();
            addRulesIfEmpty();
        }

        if (!isRunInIntegrationTest() && fredbetProperties.isCreateDemoData()) {
            createDemoUsers(NUMBER_OF_DEMO_USERS);
            createRandomMatches();
        }

        imageAdministrationService.createDefaultImageGroup();
    }

    private boolean isRunInIntegrationTest() {
        return environment.acceptsProfiles(Profiles.of(FredBetProfile.INTEGRATION_TEST));
    }

    public void createRandomMatches() {
        bettingService.deleteAllBets();
        matchService.deleteAllMatches();

        LocalDateTime localDateTime = LocalDateTime.now().plusHours(1);
        localDateTime = createRandomForGroup(localDateTime, Group.GROUP_A, 4);
        localDateTime = createRandomForGroup(localDateTime, Group.GROUP_B, 4);
        localDateTime = createRandomForGroup(localDateTime, Group.GROUP_C, 4);
        localDateTime = createRandomForGroup(localDateTime, Group.GROUP_D, 4);
        localDateTime = createRandomForGroup(localDateTime, Group.GROUP_E, 4);
        localDateTime = createRandomForGroup(localDateTime, Group.GROUP_F, 4);
        localDateTime = createRandomForGroup(localDateTime, Group.GROUP_G, 4);
        localDateTime = createRandomForGroup(localDateTime, Group.GROUP_H, 4);
        localDateTime = createRandomForGroup(localDateTime, Group.ROUND_OF_SIXTEEN, 8);
        localDateTime = createRandomForGroup(localDateTime, Group.QUARTER_FINAL, 4);
        localDateTime = createRandomForGroup(localDateTime, Group.SEMI_FINAL, 2);
        localDateTime = createRandomForGroup(localDateTime, Group.FINAL, 1);
        createRandomForGroup(localDateTime, Group.GAME_FOR_THIRD, 1);
    }

    private LocalDateTime createRandomForGroup(LocalDateTime localDateTime, Group group, int numberOfMatches) {
        LocalDateTime tmpTime = localDateTime;
        for (int i = 0; i < numberOfMatches; i++) {
            ImmutablePair<Country, Country> teamPair = randomValueGenerator.generateTeamPair();
            Match match = MatchBuilder.create().withTeams(teamPair.getLeft(), teamPair.getRight()).withGroup(group)
                    .withStadium("Somewhere").withKickOffDate(tmpTime).build();
            matchService.save(match);
            tmpTime = tmpTime.plusDays(1).plusMinutes(10);
        }
        return tmpTime;
    }

    public void createDemoBetsForAllUsers() {
        LOG.info("createDemoBetsForAllUsers...");
        bettingService.deleteAllBets();

        List<Match> allMatches = matchService.findAll();
        List<AppUser> users = userService.findAll();
        users.forEach(appUser -> {
            for (Match match : allMatches) {
                boolean jokerAllowed = false;
                if (randomValueGenerator.generateRandomBoolean()) {
                    jokerAllowed = jokerService.isSettingJokerAllowed(appUser.getUsername(), match.getId());
                }
                createBetForUser(appUser, match, jokerAllowed);
            }
            bettingService.createExtraBetForUser(appUser.getUsername());
        });
        LOG.debug("created demo bets for all users finished.");
    }

    private void createBetForUser(AppUser appUser, Match match, boolean joker) {
        Integer goalsTeamOne = randomValueGenerator.generateRandomValue();
        Integer goalsTeamTwo = randomValueGenerator.generateRandomValue();
        bettingService.createAndSaveBetting(appUser.getUsername(), match, goalsTeamOne, goalsTeamTwo, joker);
    }

    public void createDemoResultsForAllMatches() {
        LOG.info("createDemoResultsForAllUsers...");
        matchService.enterMatchResultsForAllMatches(match -> {
            match.setGoalsTeamOne(randomValueGenerator.generateRandomValue());
            match.setGoalsTeamTwo(randomValueGenerator.generateRandomValue());
        });
    }

    private void addRulesIfEmpty() {
        ClassPathResource classPathResource = new ClassPathResource("content/rules_de.txt");
        try (ByteArrayOutputStream byteOut = new ByteArrayOutputStream()) {
            IOUtils.copyLarge(classPathResource.getInputStream(), byteOut);
            String rulesInGerman = byteOut.toString(StandardCharsets.UTF_8);
            infoService.saveInfoContentIfNotPresent(InfoType.RULES, rulesInGerman, "de");
        } catch (IOException e) {
            LOG.error(e.getMessage(), e);
        }
    }

    public void createDemoUsers(int numberOfDemoUsers) {
        LOG.info("createAdditionalUsers: creating {} additional demo users ...", numberOfDemoUsers);

        final byte[] demoImage = loadDemoUserProfileImage();

        for (int i = 1; i <= numberOfDemoUsers; i++) {
            final String usernameAndPassword = this.fakeDataPopulator.nextRandomUsername();
            // final String usernameAndPassword = RandomStringUtils.randomAlphanumeric(6);
            AppUser user = AppUserBuilder.create().withUsernameAndPassword(usernameAndPassword, usernameAndPassword)
                    .withUserGroup(FredBetUserGroup.ROLE_USER).build();
            boolean isNewUser = saveIfNotPresent(user);
            if (isNewUser && fakeDataPopulator.nextRandomBoolean()) {
                this.imageAdministrationService.saveUserProfileImage(demoImage, user);
            }
        }
    }

    private byte[] loadDemoUserProfileImage() {
        ClassPathResource classPathResource = new ClassPathResource("static/images/profile_demo_image.jpg");
        try (InputStream in = classPathResource.getInputStream()) {
            return IOUtils.toByteArray(in);
        } catch (IOException e) {
            throw new IllegalStateException("Could not load demo image from classpath. " + e.getMessage());
        }
    }

    private void createDefaultUsers() {
        LOG.info("createDefaultUsers: creating default users ...");

        saveIfNotPresent(AppUserBuilder.create()
                .withUsernameAndPassword(FredbetConstants.TECHNICAL_USERNAME, DEFAULT_PASSWORD_ADMIN_USER)
                .withUserGroup(FredBetUserGroup.ROLE_ADMIN).deletable(false).build());

        List<String> additionalAdminUsers = fredbetProperties.getAdditionalAdminUsers();
        if (additionalAdminUsers != null && !additionalAdminUsers.isEmpty()) {
            additionalAdminUsers.forEach(username -> {
                saveIfNotPresent(AppUserBuilder.create().withUsernameAndPassword(username, username)
                        .withUserGroup(FredBetUserGroup.ROLE_ADMIN).build());
            });
        }
    }

    public boolean saveIfNotPresent(AppUser appUser) {
        try {
            userService.createUser(appUser);
            return true;
        } catch (UserAlreadyExistsException e) {
            LOG.debug(e.getMessage());
            return false;
        }
    }

    @Transactional
    public void deleteAllBetsAndMatches() {
        bettingService.deleteAllBets();
        matchService.deleteAllMatches();
    }
}
\section{Introduction} Stochastic models are a common tool in epidemiological research, where public health interventions aim at the reduction of fluctuating counts of infected or infective individuals \cite{bailey1975}, and models are used in explaining, predicting, and responding to acute and chronic diseases of public health significance. A fundamental result is the presence of a critical value of the basic reproduction number $R_0$, defined as the expected number of secondary cases resulting from a single infective case in an otherwise susceptible population. Supercritical diseases, those with $R_0>1$, tend to stabilize around a positive number of infectives that can persist for very long times, while in subcritical cases ($R_0<1$) the infective count declines to zero on a relatively short timescale. In either case, the long-term, stationary probability distribution of number of infectives is trivial, as all epidemics in finite-population stochastic transmission models must eventually die out due to chance fluctuations, but the quasistationary distribution---the distribution conditional on non-extinction of the disease---can be very informative about the behavior of the system within finite time intervals. When $R_0<1$, the quasistationary distribution of number infective in simple transmission models is often approximately geometric, with probability of $I$ infectives proportional to $(R_0)^I$ \cite{nasell_quasi-stationary_1996,lambert2008population}. Prevalences consistent with the geometric distribution, when analyzed statistically across multiple locations simultaneously, have been observed in trachoma elimination trials at times in which the disease's dynamics are subcritical \cite{lietman_epidemiological_2011,lietman-gebre-abdou2015,rahman_distribution_2015}. 
Such statistics of case-count distributions observed in multiple communities at a single time may help provide an assessment of the dynamics of a disease, possibly of its basic reproduction number, and hence of the future time course of the disease. An approximately geometric distribution of prevalences also implies that there will be more high-prevalence communities than there would be in a lighter-tailed distribution, even when the mean prevalence is low and declining. This suggests that an exceptionally high-prevalence community may be simply a statistical outlier, which can be expected to regress to the mean without intervention, rather than a ``transmission hotspot'' calling for intensified intervention \cite{lietman-gebre-abdou2015}. While the quasistationary distribution of a specific stochastic model can be calculated as an eigenvector of a Markov transition matrix, the equations for the entries of that vector cannot be solved explicitly even for very simple models, so research has focused on approximations \cite[for example]{cavender_quasi-stationary_1978,kryscio_extinction_1989,nasell_quasi-stationary_1996,naasell2003moment}. Barbour and Pollett \cite{barbour2010total} established that the quasistationary distribution is a fixed point of a certain map defined on probability mass functions, allowing efficient approximation techniques \cite{van2013quasi}. The fixed point of that map can also be found using a ``ratio of means'' approach built on waiting times rather than transition rates \cite{artalejo_quasi-stationary_2010}, which can aid in calculation. Quasistationary approximations for diffusion processes and branching processes are also well developed and are the subject of active research and development \cite{van2013quasi,lambert2008population,meleard2012quasi}. 
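The eigenvector characterization mentioned above is straightforward to compute directly for a small system. The sketch below (ours, not part of the paper) builds the generator of the SIS model defined later in the paper, restricted to the transient states $1,\dots,N$, extracts its leading left eigenvector, and checks that in a subcritical case it is close to the geometric distribution with ratio $R_0$:

```python
import numpy as np

def sis_quasistationary(N, R0):
    """Quasistationary distribution of the SIS model on states 1..N,
    computed as the leading left eigenvector of the generator
    restricted to the transient (non-extinct) states."""
    B = lambda k: R0 * (1.0 - k / N) * k   # infection ("birth") rate
    D = lambda k: float(k)                 # recovery ("death") rate
    Q = np.zeros((N, N))                   # states 1..N -> indices 0..N-1
    for k in range(1, N + 1):
        i = k - 1
        Q[i, i] = -(B(k) + D(k))           # transition 1 -> 0 leaves the set
        if k < N:
            Q[i, i + 1] = B(k)
        if k > 1:
            Q[i, i - 1] = D(k)
    vals, vecs = np.linalg.eig(Q.T)        # left eigenvectors of Q
    lead = np.argmax(vals.real)            # slowest-decaying mode
    q = np.abs(vecs[:, lead].real)
    return q / q.sum()

q = sis_quasistationary(N=200, R0=0.5)
geom = np.array([0.5 ** k for k in range(1, 201)])  # geometric, ratio R0
geom /= geom.sum()
```

For $N=200$ and $R_0=0.5$ the two distributions agree closely over the bulk of their support, consistent with the approximately geometric quasistationary distributions cited above.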
In this paper we introduce a method of approximating the quasistationary distribution of a stochastic model in the subcritical regime, using a technique that has been used previously to approximate rare large-deviation events in supercritical dynamics \cite{dykman_disease_2008,ovaskainen_stochastic_2010,schwartz_converging_2011}. This technique takes a large-population limit of the model dynamics in a way that yields a Hamilton-Jacobi equation, which can be understood by analyzing the geometry of an associated Hamiltonian ODE system. This Hamiltonian approach to stochastic mechanics, introduced by Graham \cite{graham_weak-noise_1984} for diffusion equations and extended by Hu \cite{hu_stationary_1987} to master equations, has primarily been used to study stationary solutions of the limiting stochastic process, by locating special solutions of the Hamiltonian ODE system, characterized by $H=0$ where $H$ is the Hamiltonian. The Hamiltonian ODE system includes the deterministic limit of the stochastic model as an invariant subsystem within the equipotential ($H=0$) set, and at each limit set of the deterministic system, the equipotential set extends outwards into the non-deterministic regions of the Hamiltonian system's phase space. Those extensions reveal quantitative information about the system's stochastic behavior near attractors. Thus they are used to analyze stationary probability densities associated with attractors and other limit sets of the deterministic system, and the frequencies and paths of rare escape events from one attractor to another \cite{ovaskainen_stochastic_2010,black_wkb_2011,forgoston2011maximal,lindley_iterative_2013,lindley_rare-event_2014}. This geometric structure, which encodes characteristics of the deterministic limit of the stochastic system and the probability distribution of deviations from the deterministic limit, is unusual in comparison to the structures seen in Hamiltonian systems from physics, and is much less well understood. 
Here we investigate the use of structures within the equipotential set, but at a distance from the deterministic subsystem, to analyze a stochastic model's behavior. We identify such a structure far from the deterministic subsystem with the quasistationary behavior of an epidemic model, in contrast to the use of structures intersecting the deterministic subsystem to analyze stationary behavior. \section{Limiting behavior of birth-death process} \label{sec:bd} Many models of stochastic epidemic dynamics, biological population dynamics more generally, and branching processes, are included in the category of birth-death processes. Here we apply the analysis of Hu \cite{hu_stationary_1987} to this class of processes, and below we will apply it to specific example models. A stochastic birth-death process models the size of a single population, altered by events in which the size either increases by one or decreases by one. The rate of increase from size $k$ is labeled $B(k)$ and the rate of decrease from size $k$ is labeled $D(k)$. Writing $P(k,t)$ for the probability that the size is $k$ at time $t$, the change in probability over time is governed by a master equation: \begin{dmath} \label{eqn:BD-master} \frac{dP(k,t)}{dt} = B(k-1)P(k-1,t) + D(k+1)P(k+1,t) - B(k)P(k,t) - D(k)P(k,t) \quad\text{for each }k. \end{dmath} Taking $D(0)=0$ and $B(-1)P(-1,t)=0$ for all $t$, the dynamics of the master equation is confined to nonnegative values of $k$. In order to take a large-system-size limit, let $\Omega$ be a measure of system size (for example, a maximum population size), chosen so that, as we consider increasingly large birth-death systems in which both $\Omega$ and $k$ become unboundedly large, the ratio $k/\Omega$ remains finite. For example, in a system with finite population size $N$, we can use $\Omega=N$, as we will see below. 
Then letting $x=k/\Omega$, we obtain a transformed master equation \begin{dmath*} \frac1\Omega \frac{dP(x,t)}{dt} = b\left(x-\frac1\Omega\right) P\left(x-\frac1\Omega,t\right) + d\left(x+\frac1\Omega\right) P\left(x+\frac1\Omega,t\right) - b(x)P(x,t) - d(x)P(x,t), \end{dmath*} where $b(x)=(1/\Omega)B(\Omega x)$ and $d(x)=(1/\Omega)D(\Omega x)$. Let the functions $b$ and $d$ be smooth functions of $x$ for each $\Omega$, with a smooth limit as $\Omega\to\infty$. Additionally, let $\phi(x,t)$ be a probability density function that is smooth in $x$ and $t$, such that $\phi(k/\Omega,t)=\Omega\,P(k/\Omega,t)$. Following Hu \cite{hu_stationary_1987}, this allows construction of a Kramers-Moyal expansion of the dynamics, by substituting and Taylor expanding the master equation around $x$ so that it is expressed using only values at $x$: \begin{dmath}[label=BD-KM] \frac1\Omega \frac{\partial \phi(x,t)}{\partial t} = \sum_{n=1}^\infty \frac1{n!} \left( - \frac1\Omega \right)^n \frac{\partial^n}{\partial x^n} \left( b(x) \phi(x,t) \right) + \sum_{n=1}^\infty \frac1{n!} \left( \frac1\Omega \right)^n \frac{\partial^n}{\partial x^n} \left( d(x) \phi(x,t) \right). \end{dmath} To derive a partial differential equation in the large-system limit, we rewrite the density as an exponential expression: \begin{dmath}[label=fs] \phi(x,t) = \Omega e^{-\Omega U(x,t)}. \end{dmath} Assume that the function $U$ can be expanded in powers of $\Omega$ on $0<x<1$: \begin{dmath*} U(x,t) = u(x,t) + \frac1\Omega u_1(x,t) + \frac{1}{\Omega^2}u_2(x,t) + \cdots, \end{dmath*} and that the terms of that expansion other than $u(x,t)$ vanish asymptotically as $\Omega$ approaches infinity. This \emph{ansatz}, known as the WKB approximation \cite{hu_stationary_1987,bender_orzag_1978}, makes it possible to generate a partial differential equation in $u$. 
With these assumptions, derivatives of products of $\phi$ take on a simplified form, \begin{dmath*} \left[-\frac{1}{\Omega}\right]^n\frac{\partial^n}{\partial x^n}F(x,t)e^{-\Omega U(x,t)} = e^{-\Omega U(x,t)}F(x,t)\left(\frac{\partial u}{\partial x}\right)^n + \mathcal{O}\left(\frac1\Omega\right). \end{dmath*} Substituting, the expansion of (\ref{BD-KM}) to first order is \begin{dmath*} \frac1\Omega \frac{\partial \phi(x,t)}{\partial t} = \Omega \left[ e^{-\Omega U(x,t)} \left( b(x) \sum_{n=1}^\infty \frac1{n!} \left(\frac{\partial u}{\partial x}\right)^n + d(x) \sum_{n=1}^\infty \frac1{n!} \left(-\frac{\partial u}{\partial x}\right)^n \right) + \mathcal{O}\left(\frac1\Omega\right) \right]. \end{dmath*} Thus, in the large-size limit, (\ref{BD-KM}) becomes a partial differential equation for $u$: \begin{dmath}[label=BD-HJ] \frac{\partial u(x,t)}{\partial t} = - \left( b(x) \left(e^{{\partial u}/{\partial x}}-1\right) + d(x) \left(e^{-{\partial u}/{\partial x}}-1\right) \right). \end{dmath} \subsection{The associated Hamiltonian system} Because the right hand side of (\ref{BD-HJ}) contains only first partial derivatives of $u$, it has the form of a Hamilton-Jacobi equation of classical mechanics \cite{courant_methods_1989}, \begin{dmath*} \frac{\partial u(x,t)}{\partial t} = - H\left(x,\frac{\partial u}{\partial x}\right), \end{dmath*} with the consequence that it can be analyzed using characteristic curves described by an associated system of ordinary differential equations \cite{hu_stationary_1987}. This analysis is based on the Hamiltonian function \begin{dmath*} H\left(x,\frac{\partial u}{\partial x}\right) = b(x) \left(e^{{\partial u}/{\partial x}}-1\right) + d(x) \left(e^{-{\partial u}/{\partial x}}-1\right). \end{dmath*} From that Hamiltonian can be written a two-dimensional dynamical system, whose state variables are $x$, the scaled population size, and a conjugate variable $p$, which takes the place of ${\partial u}/{\partial x}$ in the Hamiltonian. 
The associated Hamiltonian dynamical system is \begin{dmath}[label=BD-Hamiltonian-dynamics] \begin{array}{r@{}r@{}l} \dfrac{dx}{dt} ={}& \dfrac{\partial}{\partial p}H(x,p) &{}= b(x) e^p - d(x) e^{-p} \\[12pt] \dfrac{dp}{dt} ={}& - \dfrac{\partial}{\partial x}H(x,p) &{}= - b'(x) \left(e^p-1\right) - d'(x) \left(e^{-p}-1\right). \end{array} \end{dmath} Trajectories of this system do not correspond to realizations of the stochastic birth-death process, but rather trace out curves along the surface of $u$ versus $x$ and $t$, which can be used to analyze the behavior of $u$ over time. Thus we can gain information about birth-death processes in the large size limit by using this associated system to analyze the Hamilton-Jacobi equation (\ref{BD-HJ}). Stationary solutions of the master equation, characterized by the equilibrium condition $d\phi(x,t)/dt=0$, are identified with curves on the $(x,p)$ plane on which $H(x,p)=0$. In the case of this one-dimensional system, though not in the general master-equation case, the Hamiltonian has two factors, \begin{dmath}\label{eqn:BD-H} H(x,p) = \left( b(x) - d(x)\,e^{-p} \right) \left(e^p - 1\right), \end{dmath} which contribute two solution sets to the solution of $H=0$. The flat subspace $p=0$ is always a solution set for $H=0$ in Hamiltonian systems constructed from master equations in this way \cite{hu_stationary_1987}. The dynamics within this set are the dynamics of the ODE approximation to the stochastic dynamics, and fixed points and other limit sets of the Hamiltonian system located in this set correspond to fixed points and other limit sets of this deterministic subsystem. Other solutions to the equation $H=0$ pass transversely through those limit sets, and can reveal information about the stochastic behavior of the master equation system, as we will see in the treatment of the supercritical SIS model, below. 
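Because (\ref{BD-Hamiltonian-dynamics}) is Hamiltonian, $H$ is constant along its trajectories, so the level set $H=0$ is in particular invariant. This can be checked numerically; the following sketch (ours, not part of the paper) integrates Hamilton's equations with a fourth-order Runge-Kutta step for the SIS rates introduced below, $b(x)=R_0(1-x)x$ and $d(x)=x$:

```python
import math

R0 = 2.0
b      = lambda x: R0 * (1.0 - x) * x      # scaled birth rate b(x)
d      = lambda x: x                       # scaled death rate d(x)
bprime = lambda x: R0 * (1.0 - 2.0 * x)    # b'(x)
dprime = lambda x: 1.0                     # d'(x)

def H(x, p):
    return b(x) * (math.exp(p) - 1.0) + d(x) * (math.exp(-p) - 1.0)

def rhs(x, p):
    # Hamilton's equations (BD-Hamiltonian-dynamics)
    dxdt = b(x) * math.exp(p) - d(x) * math.exp(-p)
    dpdt = -bprime(x) * (math.exp(p) - 1.0) - dprime(x) * (math.exp(-p) - 1.0)
    return dxdt, dpdt

def rk4_step(x, p, h):
    k1 = rhs(x, p)
    k2 = rhs(x + 0.5 * h * k1[0], p + 0.5 * h * k1[1])
    k3 = rhs(x + 0.5 * h * k2[0], p + 0.5 * h * k2[1])
    k4 = rhs(x + h * k3[0], p + h * k3[1])
    return (x + h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p + h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, p = 0.3, 0.2          # arbitrary initial point, not on the H = 0 set
H0 = H(x, p)
for _ in range(1000):    # integrate to t = 1
    x, p = rk4_step(x, p, 1e-3)
drift = abs(H(x, p) - H0)  # numerical drift of the conserved quantity
```

The drift in $H$ stays at the level of the integrator's truncation error, confirming that trajectories remain on level sets of $H$.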
In the birth-death systems we consider here, in which $k=0$ is an absorbing state, a common factor of $x$ can be taken out of $b(x)$ and $d(x)$, allowing us to describe three components of the solution set in all. \section{The SIS model} The SIS (susceptible-infective-susceptible) model provides a simple representation of infectious disease processes in the absence of immunity \cite{hethcote1976}. Classically, this model describes the number of susceptibles $S$ and infectives $I$ in a population of fixed size, where increase in the infective class is driven by infective-susceptible contact events, and infectives return to the susceptible class at a rate independent of contact with others. SIS models have been used to describe a range of diseases, including trachoma \cite{lietman-porco-dawson1999,liu-porco-amza2015b,liu-porco-amza2015,liu-porco-mkocha2014,liu-porco-ray2013} and sexually transmitted infections \cite{lajmanovich-yorke1976,hethcote-yorke1984}. In population biology, a model identical in form to this one is known as a stochastic logistic model \cite{nasell_extinction_2011}. In the basic SIS model, the infective class increases at a rate $\beta S (I / N)$, which is proportional to a quadratic susceptible-infective contact rate, and decreases at a per capita constant rate $\gamma I$, with $S=N-I$, and total population $N$ held fixed. Thus it is the number infective, $I$, that is the stochastically varying state variable of the model. Infective cases are added by transmission events, at rate $\beta\,(S/N)\,I$, where $\beta$ is the transmission rate per susceptible-infective pair \cite{bailey1975}. Cases return to the susceptible class at rate $\gamma\,I$, where $\gamma$ is the per capita removal rate. 
The parameters can be combined into one nondimensional value by rescaling the time variable by a factor of $\gamma$, after which the birth and death rates are \[ B(I) = R_0 \left(1-\frac{I}N\right) I, \qquad D(I) = I, \] where $R_0=\beta/\gamma$ is the basic reproduction number \cite{hethcote2000mathematics}. \commentout{ The primary line of inquiry into the quasistationary behavior of this model begins with Cavender's \cite{cavender_quasi-stationary_1978} construction of the stationary distribution of a closely-related process, a modification of the SIS process in which the rate of transition from one infective individual to none is set to zero. The stationary distribution $\mathbf{p}^{(0)}$ of this process is used as an approximation to the quasistationary distribution of the SIS model. That approximation is commonly studied together with the one introduced by Kryscio and Lef\`evre \cite{kryscio_extinction_1989}, which takes the stationary distribution $\mathbf{p}^{(1)}$ of the SIS process modified by introducing one permanently infective individual as an approximation to $\mathbf{q}$. More recently, N{\aa}sell has defined a ``uniform approximation'', constructed by applying an iterated contracting map to the starting vectors $\mathbf{p}^{(0)}$ and $\mathbf{p}^{(1)}$, which approximates the true quasistationary solution quite well in the body and left tail of the distribution \cite{nasell_extinction_2007,nasell_extinction_2011}. Many of the other approaches discussed in our introduction have also been applied to the SIS model. } Using system size $\Omega=N$, the analysis we have presented for birth-death systems applies to the SIS model, with Hamiltonian \begin{dmath*} H(x,p) = R_0 (1-x) x \left( e^p - 1 \right) - x \left( e^{-p} - 1 \right), \end{dmath*} where $x=I/N$ is the infective fraction of the population. 
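The rescaled process defined by $B(I)$ and $D(I)$ above can be simulated exactly with Gillespie's algorithm; the following is a minimal sketch of ours (function and parameter names are our own choices):

```python
import random

def gillespie_sis(N, R0, I0, t_max, seed=1):
    """Sample path of the (time-rescaled) stochastic SIS model with
    birth rate B(I) = R0*(1 - I/N)*I and death rate D(I) = I.
    Returns the embedded jump chain as a list of (time, I) pairs."""
    random.seed(seed)
    t, I = 0.0, I0
    path = [(t, I)]
    while t < t_max and I > 0:          # I = 0 is absorbing
        B = R0 * (1.0 - I / N) * I
        D = float(I)
        total = B + D
        t += random.expovariate(total)  # exponential waiting time
        I += 1 if random.random() < B / total else -1
        path.append((t, I))
    return path

path = gillespie_sis(N=100, R0=0.8, I0=10, t_max=50.0)
```

In the subcritical regime ($R_0<1$), sample paths such as this one typically hit the absorbing state on a short timescale, which is why the quasistationary (rather than stationary) distribution is the informative object.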
\subsection{The supercritical case} In the supercritical ($R_0>1$) case, the SIS process is attracted to a positive, or endemic, equilibrium value $x=1-1/R_0$, at which the birth and death rates are equal. The probability density of the fraction infective concentrates around that value. On very long time scales, however, in finite systems, stochastic fluctuation will bring the fraction infective to zero, which is an absorbing state from which the epidemic cannot return. Thus the stationary distribution of the process is a point mass at $x=0$, and the density function concentrated around the endemic equilibrium, while it is a stationary distribution in the infinite-size limit, is the quasistationary distribution in the finite cases. The Hamiltonian analysis of the supercritical SIS model has been treated in detail elsewhere \cite{schwartz_converging_2011,forgoston2011maximal}. The phase plane of the Hamiltonian system is shown in figure~\ref{fig:super-plane}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{supercritical-phaseplane.pdf} \caption{\label{fig:super-plane} {\bf Phase plane of the Hamiltonian dynamical system (\ref{BD-Hamiltonian-dynamics}), for a supercritical SIS model} ($R_0=2$). Arrows depict the flow of the dynamics of $x$ and $p$. The three invariant curves of the dynamics (solution curves of $H=0$) are shown in gray: the two axes of the space, and one nontrivial curve. The nontrivial curve corresponds to the quasistationary solution of the stochastic SIS model, as discussed in the text. } \end{figure} Stationary solutions of the PDE correspond to solutions of $H(x,p)=0$ on this plane, when $p$ is interpreted as ${\partial u}/{\partial x}$. 
The Hamiltonian factors into three parts: \begin{dmath*} H(x,p) = x(R_0(1-x) - e^{-p})(e^p-1), \end{dmath*} which directly identifies the three solution curves of $H=0$ in the plane: two trivial solutions, \begin{dgroup*} \begin{dmath*} x = 0, \end{dmath*}\begin{dmath*} p = 0, \end{dmath*} \end{dgroup*} and one nontrivial solution, \begin{dmath} \label{eqn:SIS-curve} p = -\ln( R_0 (1-x) ), \end{dmath} shown in figure~\ref{fig:super-plane}. These curves are trajectories of the Hamiltonian dynamical system (\ref{BD-Hamiltonian-dynamics}). The horizontal axis of the phase plane, which is the $p=0$ solution, is isomorphic to the deterministic SIS system. Two of the fixed points of the Hamiltonian system are the fixed points of that deterministic system --- the disease-free equilibrium at $(0,0)$ and the endemic equilibrium at $(1-1/R_0,0)$. They are located at the points where the horizontal axis intersects the other two solution curves. A third fixed point, at $(0,-\ln R_0)$, also corresponds to the disease-free state ($x=0$), but is at the intersection of solution curves away from the horizontal axis. The nontrivial solution curve (\ref{eqn:SIS-curve}) corresponds to the stationary solution of $u(x)$ on which probability concentrates around the endemic equilibrium, and the fixed points on it describe the probability density at the endemic and disease-free equilibria. That solution is a function $u(x)$ that solves \begin{dmath*} \frac{\partial u(x)}{\partial x} = -\ln(R_0(1-x)). \end{dmath*} Changing variables to $s=1-x$ and integrating produces a closed-form solution, \begin{dmath*} u(s) = s\ln(R_0 s) - s + C_0. \end{dmath*} This provides a closed-form solution for the quasistationary probability density: \begin{dmath}[label=eq:approx] \phi(s) = N e^{-Nu(s)}\ \hiderel{=}\ C_1\left(\frac{e}{R_0 s}\right)^{Ns}. \end{dmath} The constant $C_1$ is determined by the constraint that $\int_0^1 \phi(s) ds = 1$. 
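As a numerical check of this closed form (our own sketch; the grid resolution and the values $R_0=2$, $N=100$ are illustrative choices), the density can be normalized on a grid and its mode compared with the endemic value $s=1/R_0$:

```python
import math

# Our own sketch: normalize the closed-form quasistationary density
# phi(s) = C1 (e / (R0 s))^(N s), s = 1 - x, on a midpoint grid over (0, 1].
R0, N = 2.0, 100
M = 10000
ds = 1.0 / M
s_vals = [(k + 0.5) * ds for k in range(M)]
unnorm = [(math.e / (R0 * s)) ** (N * s) for s in s_vals]
C1 = 1.0 / (sum(unnorm) * ds)
phi = [C1 * w for w in unnorm]

# In the supercritical case the density peaks at the endemic value
# s = 1/R0, i.e. x = 1 - 1/R0.
s_mode = s_vals[max(range(M), key=lambda k: phi[k])]
assert abs(s_mode - 1.0 / R0) < 0.01
```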
In supercritical models in general, the equipotential surfaces (solutions of $H=0$) near the nontrivial solution of the deterministic subsystem describe the behavior of the probability distribution of rare events, which are located in the tail of the stationary distribution. The above stationary solution approximates the quasistationary density in the finite-$N$ SIS system, in which extinction is a rare event given large $N$. It provides an approximation for the time to extinction in the stochastic dynamics. The function $u$ is the {\it action} of classical mechanics. The most probable path to extinction is obtained by extremizing the action $u(x)$; the extremal trajectories lie on the equipotential surfaces $H=0$. The path is explicitly calculated by integrating along the $H=0$ curves, both in this SIS case and in more complex models (\emph{e.g.}\ \cite{schwartz_converging_2011}). \section{Subcritical dynamics} In the deterministic SIS system in the subcritical case, $x$ relaxes to zero for all initial conditions $0\leq x\leq1$. The master equation solution also relaxes to $x=0$, with probability mass declining to zero at all other values of $x$ \cite{nasell_quasi-stationary_1996}. In this case, the quasistationary distribution is not stationary even in the large-$N$ limit due to the deterministic attraction of the origin. The WKB hypothesis that the probability current near the absorbing state $x=0$ vanishes when the system size $N$ grows without bound is not satisfied, and we do not use the stationary behavior of the PDE (which relaxes to a point mass) to analyze the quasistationary behavior of the master equations. Instead we use the transient behavior of the PDE to identify the equilibrium structure in the Hamiltonian phase plane that describes the master equation's quasistationary solution. 
\subsection{Using the phase plane to analyze dynamics of the Hamilton-Jacobi equation} In the Hamiltonian phase plane for the subcritical model, the same three solution curves for $H=0$ are present as in the supercritical case, but they fall in different places on the phase plane, as shown in figure~\ref{fig:sub-plane}. \begin{figure} \centering \includegraphics[height=0.5\textwidth]{subcritical-phaseplane.pdf} \caption{ \label{fig:sub-plane} {\bf Phase plane of Hamiltonian dynamical system for subcritical SIS system} ($R_0=0.5$). Flow is represented by arrows and the three invariant curves of the dynamics (solution curves of $H=0$) are shown in gray, as in Figure~\ref{fig:super-plane}. In this case, the nontrivial curve is shifted to a different position, and its intersections with the axes are located above and to the left of the origin, where in the supercritical case (Figure~\ref{fig:super-plane}) they are below and to the right of the origin. This leads to qualitatively different dynamics, requiring a different analysis to explain the quasistationary behavior of the model. } \end{figure} In this case, the point of intersection of the nontrivial curve (\ref{eqn:SIS-curve}) and the horizontal axis is shifted to the left of the origin. The endemic equilibrium represented by that point is lost in a transcritical bifurcation when $R_0$ declines below 1, and the origin becomes the attracting solution for the stochastic SIS system. The intercept where the nontrivial curve (\ref{eqn:SIS-curve}) meets the vertical axis, at $p=-\ln R_0$, is now above $p=0$. Because of this bifurcation, in the subcritical case we cannot apply the analysis used for the supercritical case, as the system is drawn to a singular value of $x$ at which the $H=0$ curve crossing the horizontal axis is vertical, and cannot be translated to values of ${\partial u}/{\partial x}$ as a function of $x$. To study the quasistationary distribution of this system requires further analysis. 
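The fixed-point structure described above can be verified directly from the partial derivatives of $H$ (a sketch of our own; note that for $R_0<1$ the endemic point $(1-1/R_0,0)$ lies at negative $x$, matching the leftward shift described in the text):

```python
import math

# Our own check of the fixed points: zeros of (dH/dp, -dH/dx) for
# H(x,p) = R0 (1-x) x (e^p - 1) + x (e^{-p} - 1).
def Hx(x, p, R0):  # partial derivative of H with respect to x
    return R0 * (1.0 - 2.0 * x) * (math.exp(p) - 1.0) + (math.exp(-p) - 1.0)

def Hp(x, p, R0):  # partial derivative of H with respect to p
    return R0 * (1.0 - x) * x * math.exp(p) - x * math.exp(-p)

for R0 in (0.5, 2.0):  # subcritical and supercritical examples
    # disease-free, endemic, and off-axis disease-free fixed points
    for x, p in ((0.0, 0.0), (1.0 - 1.0 / R0, 0.0), (0.0, -math.log(R0))):
        assert abs(Hx(x, p, R0)) < 1e-12
        assert abs(Hp(x, p, R0)) < 1e-12
```

For $R_0=0.5$ the off-axis point sits at $p=-\ln R_0=\ln 2>0$, above the horizontal axis, as in figure~\ref{fig:sub-plane}.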
\begin{figure} \centering \includegraphics[height=0.5\textwidth]{subcritical-init-phaseplane.pdf} \caption{ \label{fig:initial-u} {\bf Initial condition for the subcritical SIS system} on the Hamiltonian phase plane, represented by a curve of $p$ values as a function of $x$. In this and following figures, the initial condition used is a $\beta$ distribution with $\alpha=\beta=2$, i.e. $\phi_0(x)=6x(1-x)$, and using $N=100$, transformed to a curve in the $x$-$p$ plane using the relations $u(x)=-\ln({\phi_0(x)}/N)/N$ and $p=\partial u/\partial x$. } \end{figure} Any smooth initial distribution $\phi(x)$ can be mapped onto a curve in the $(x,p)$ plane on which $p={\partial u}/{\partial x}$ at every value of $x$, where $u$ is defined by $\phi(x) = Ne^{-Nu(x)}$ as above. This curve for an example initial distribution is plotted in figure~\ref{fig:initial-u}. Integrating points of this curve forward along trajectories of this system produces a geometric representation of the time evolution of the system as a moving curve in the phase plane, on which the changing shape of ${\partial u}/{\partial x}$ is visible, and that relation between $\partial u/\partial x$ and $x$ provides information about the form of the function $u(x)$. In terms of Hamiltonian dynamics, the function $u(x,t)$ is the \emph{action} of the system, a scalar quantity that can be evaluated by integrating along its trajectories: \begin{dmath*} \frac{du(x,t)}{dt} = \frac{\partial u}{\partial x}\frac{dx}{dt} + \frac{\partial u}{\partial t} = p \frac{\partial H}{\partial p} - H. 
\end{dmath*} For convenience, it is possible to calculate $u$ directly when integrating the Hamiltonian dynamics numerically, by extending the dynamical system to include $u$ as a state variable: \begin{dmath*} \frac{\partial}{\partial t} \begin{pmatrix} x \vphantom{\displaystyle\frac{\partial H}{\partial p}}\\[12pt] p \vphantom{\displaystyle\frac{\partial H}{\partial p}}\\[12pt] u \vphantom{\displaystyle\frac{\partial H}{\partial p}} \end{pmatrix} = \begin{pmatrix} \displaystyle \frac{\partial H}{\partial p} \\[12pt] \displaystyle -\frac{\partial H}{\partial x} \\[12pt] \displaystyle p\frac{\partial H}{\partial p} - H \end{pmatrix}. \end{dmath*} Assigning $u(x,0)=u_0(x)$ at each point of the initial curve and integrating forward then yields values of $u(x,t)$ explicitly with increasing $t$. \subsection{Evolution of the subcritical system from initial conditions} As time passes, the points of the $p$-versus-$x$ curve move on the phase plane according to the Hamiltonian dynamics. Their evolution stretches and translates the curve across the phase plane, as shown in figure~\ref{fig:s-evolution}. While individual points may move in complicated ways, with many tending to infinity toward the upper right, the curve as a whole moves smoothly to the left, approaching the vertical line $x=0$ and the gray curve that extends into the first quadrant. \begin{figure} \centering \includegraphics[height=0.5\textwidth]{subcritical-transient-phaseplane.pdf} \caption{ \label{fig:s-evolution} {\bf Transient dynamics of the subcritical SIS system} on the Hamiltonian phase plane, evolving from the initial condition depicted in figure~\ref{fig:initial-u} (red) toward later states (yellow, green, blue), as each point of the initial curve moves according to the Hamiltonian dynamics (\ref{BD-Hamiltonian-dynamics}). 
} \end{figure} From the moving points $(x,p,u)$ of this curve, a plot of $u$ versus $x$ can be constructed, or of $\phi=Ne^{-Nu}$ versus $x$, at each time $t$. Figure~\ref{fig:phi-vs-t} presents this plot of $\phi$ versus $x$ in time. The peak of the probability density moves asymptotically toward $x=0$, and there is a declining tail to the right of the peak. A number of features of the evolution of $u(x,t)$ versus $x$ are visible in this view of the dynamics. As discussed above, the dynamics on the horizontal axis of the phase plane is identical to the usual deterministic ODE for the SIS system. When $p$ is read as ${\partial u}/{\partial x}$, it follows that that horizontal axis, where $p=0$, corresponds to the extrema of the potential function $u(x,t)$ with respect to $x$. In the case pictured in these figures, the only extremum is a minimum of $u(x,t)$, which is a maximum of $\phi(x,t)$. This implies that the maximum point of the probability density function $\phi$, which is the mode of the probability distribution, in the large-system approximation we are using (\ref{BD-HJ}), moves in exact accordance with the deterministic SIS dynamics. Regions of $x$ values for which a curve in the $x$-$p$ plane is below the horizontal axis are regions where ${\partial u}/{\partial x}<0$, and equivalently on which $\phi(x,t)$ is increasing in $x$, and regions where the curve is above the axis are where $\phi(x,t)$ is decreasing in $x$. Near the vertical axis, the $p$-versus-$x$ curve diverges to $p=-\infty$. The fact that $p$, representing ${\partial u}/{\partial x}$, becomes negatively infinite there strongly suggests that $u(x)$ is divergent to $+\infty$ at $x=0$, and so that $\lim_{x\to0^+}\phi(x,t)=0$, at least in cases like the one illustrated in which $\phi(0)$ is zero in the initial conditions. 
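The extended $(x,p,u)$ system introduced above can be integrated with any standard ODE scheme. The following sketch (our own, with a fixed-step RK4 integrator and an illustrative initial point) uses conservation of $H$ along trajectories of the autonomous system as a built-in accuracy check:

```python
import math

# Our own sketch: fixed-step RK4 integration of the extended system
# (x, p, u) for the subcritical SIS Hamiltonian.  H is conserved along
# trajectories, which serves as an accuracy check.
R0 = 0.5

def H(x, p):
    return R0 * (1 - x) * x * (math.exp(p) - 1) + x * (math.exp(-p) - 1)

def Hx(x, p):
    return R0 * (1 - 2 * x) * (math.exp(p) - 1) + (math.exp(-p) - 1)

def Hp(x, p):
    return R0 * (1 - x) * x * math.exp(p) - x * math.exp(-p)

def rhs(state):
    x, p, u = state
    # (dx/dt, dp/dt, du/dt) as in the extended system above
    return (Hp(x, p), -Hx(x, p), p * Hp(x, p) - H(x, p))

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt * (a + 2 * b + 2 * c + d) / 6.0
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.4, 0.2, 0.0)  # one point of an initial p-versus-x curve
H0 = H(state[0], state[1])
for _ in range(200):  # integrate to t = 1
    state = rk4_step(state, 0.005)
assert abs(H(state[0], state[1]) - H0) < 1e-8
```

A production computation would track a whole curve of such points, reconstructing $u(x,t)$ and $\phi=Ne^{-Nu}$ from the moving ensemble.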
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{subcritical-transient-phi-normalized.pdf} \caption{\label{fig:phi-vs-t} {\bf Transient dynamics of probability density} in the subcritical SIS system, displayed as $\phi(x,t) = Ce^{-Nu(x,t)}$ versus $x$ using the same data points as in figure~\ref{fig:s-evolution}, with $N=100$. Each curve is normalized to total probability one. The quasistationary distribution (\ref{eqn:SIS-QS}) is plotted in gray. } \end{figure} If the Hamilton-Jacobi PDE (\ref{BD-HJ}) is used to approximate any finite-$N$ system, by grouping the probability density into bins of width $1/N$, the result will be that probability mass accumulates in the bin that includes $x=0$, and all the other bins contain a tail that is decreasing in $x$, and whose total mass declines asymptotically to zero as $t\to\infty$. Figure~\ref{fig:s-evolution} demonstrates that in the long term, the $p$-versus-$x$ curve becomes asymptotically close to the union of the vertical axis below the positive-$p$ equilibrium and the nontrivial $H=0$ curve (\ref{eqn:SIS-curve}) at and above that equilibrium. We conclude that as the probability density accumulates near $x=0$, the shape of the tail of the density on $x>0$ approaches a function described by the diagonal curve, which is the nontrivial solution (\ref{eqn:SIS-curve}) of $H=0$. That tail defines the conditional distribution of $x$ given $x>0$, and therefore the limiting curve (\ref{eqn:SIS-curve}) should provide an approximation for the quasistationary distribution of the SIS master equations. \subsection{Explicit approximation for the quasistationary distribution} From the above analysis we conclude that the quasistationary probability density function of the master equation system (\ref{eqn:BD-master}) is approximated by the density function represented by the nontrivial $H=0$ curve (\ref{eqn:SIS-curve}). 
This is solved in the same way as in the supercritical case: \begin{dmath}\label{eqn:SIS-QS} \phi(s) = C_1 \left(\frac{e}{R_0s}\right)^{Ns}, \end{dmath} where $s=1-x$. While in the supercritical case this density function has a mode at the endemic value $s=1/R_0$, in this case the density is greatest at $x=0$ ($s=1$), as the function is monotonic decreasing on the interval $0<x<1$. Changing variables back to the number infective, $I=Nx=N(1-s)$, the quasistationary approximation becomes \begin{dmath} \label{eqn:SIS-QS-discrete} P(I) \hiderel{=} \frac{1}{N}\phi(1-I/N) = C_2 \left(\frac{eN}{R_0(N-I)}\right)^{N-I}, \end{dmath} using the appropriate normalizing factor $C_2$ for this discrete probability mass function. This quasistationary approximation is closely related to the classical approximation $p^{(1)}$ of Kryscio and Lef\`evre \cite{kryscio_extinction_1989,nasell_quasi-stationary_1996}:% \footnote{We are thankful to an anonymous reviewer for this observation (and see also \cite{kurtz1971limit}).} their approximation, \begin{dmath*} p^{(1)}(I) = C_3 \frac{1}{(N-I)!} \left(\frac{R_0}{N}\right)^I, \end{dmath*} when transformed using Stirling's approximation for factorials, \begin{dmath*} \ln n! \approx n\ln n - n, \end{dmath*} yields the approximation we have derived: \begin{dmath*} p^{(1)}(I) \approx C_3 \left(\frac{e}{N-I}\right)^{(N-I)} \left(\frac{N}{R_0}\right)^{-I} \approx C_4 \left(\frac{eN}{R_0(N-I)}\right)^{N-I} \end{dmath*} (where $C_3$, $C_4$ are normalizing constants). Previous approximations and numeric evaluation have established \cite{cavender_quasi-stationary_1978,kryscio_extinction_1989,nasell_quasi-stationary_1996} that the quasistationary distribution of the subcritical SIS system is approximately geometric near $I=0$, with the probabilities of successive values of $I$ having ratio $R_0$. Thus the approximating geometric distribution has the form \begin{dmath*} \Gamma(I) = C_5 (R_0)^{I}. 
\end{dmath*} The geometric distribution is characterized by the constant slope of its logarithm: \begin{dmath*} \frac{d}{dI}\ln\Gamma(I) = \frac{d}{dI}\left[\ln C_5 + I\ln R_0\right] = \ln R_0. \end{dmath*} Comparing to our approximation $P$, the slope of $\ln P$ is not constant: \begin{dmath*} \frac{d}{dI}\ln P(I) = \frac{d}{dI}\left[ \ln C_2 + (N-I)\left( 1 + \ln N - \ln R_0 - \ln(N-I) \right) \right] = - \left( 1 + \ln N - \ln R_0 - \ln(N-I) \right) + (N-I) \left( \frac{1}{N-I} \right) = \ln R_0 + \ln\frac{N-I}{N}. \end{dmath*} However, near $I=0$, the non-constant term is approximately zero, and the slope of the logarithm is approximately $\ln R_0$, with the consequence that the distribution is approximately geometric with the desired ratio when $I\ll N$. Since the ratio $(N-I)/N$ is smaller than one when $0<I<N$, and thus its logarithm is negative, it follows that the probability mass function $P$ decreases to zero more rapidly than the geometric function $\Gamma$ does as $I$ increases. In an appendix we compare the SIS process to a birth-death process that has the transmission and removal rates of the SIS model without the effect of depletion of susceptibles, and whose quasistationary distribution is exactly the geometric distribution that approximates the above distribution. The phase plane analysis of the birth-death process provides visual evidence that the parameter characterizing the approximating geometric distribution by its rate of decay is determined by the intercept where the nontrivial curve (\ref{eqn:SIS-curve}) crosses the vertical axis. 
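This comparison can be made concrete numerically (a sketch of our own, with illustrative values $R_0=0.5$ and $N=1000$; the computation is done in log space to avoid overflowing the large powers):

```python
import math

# Our own numerical comparison of the approximation P(I) with a geometric
# distribution of ratio R0, working in log space.
R0, N = 0.5, 1000

def logP(I):  # unnormalized log of P(I) = C2 (eN / (R0 (N-I)))^(N-I)
    return (N - I) * (1.0 + math.log(N) - math.log(R0) - math.log(N - I))

# Near I = 0, successive ratios are close to R0 (approximately geometric)...
ratio0 = math.exp(logP(1) - logP(0))
assert abs(ratio0 - R0) < 0.005
# ...and the ratio R0 (N-I)/N shrinks as I grows, so P decays faster than
# the geometric comparison distribution.
ratio_mid = math.exp(logP(501) - logP(500))
assert ratio_mid < ratio0
```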
\section{Statistics of declining trachoma case counts} While the SIS model has proven a theoretically interesting, simple model of disease transmission, as discussed above, it has also been used in practice in trachoma research: to assess treatment frequency needed for elimination \cite{lietman-porco-dawson1999}, efficacy of antibiotic treatment \cite{liu-porco-mkocha2014}, and waning of immunity \cite{Liu2013}, and in forecasting \cite{liu-porco-amza2015}, among other applications \cite{melese-chidambaram-alemayehu2004,Ray2007,Ray2009,Lietman2011,liu-porco-mkocha2014,liu-porco-amza2015b,Gao2016}. Trachoma is a common subclinical childhood infection in certain regions of the less-developed world. Repeated infection results in scarring of the eyelid and trichiasis (turning inward of the eyelashes, so that they scrape against the cornea). Millions of cases of blindness have resulted. The causative agent, \emph{Chlamydia trachomatis}, can be cleared with high efficacy with a single dose of azithromycin \cite{Schachter1999,Chidambaram2006}. The World Health Organization currently recommends annual mass treatment in affected communities as a public health control measure \cite{melese-chidambaram-alemayehu2004,solomon2006trachoma,Chidambaram2006,House2009}. During a clinical trial of timing of mass administration of azithromycin in the Amhara Region of Ethiopia \cite{House2009,Stoller2011,Gebre2012}, village-level prevalence data were collected. At baseline the probability distribution of village-level prevalences, omitting zero values, had a mean of 0.39 (range 0.08--0.62) (figure~\ref{fig:tana-density}, top plot). After the initiation of mass treatment at or exceeding recommended WHO levels, the mean prevalence declined, and the distributions became indistinguishable from exponential \cite{lietman-gebre-abdou2015} (figure~\ref{fig:tana-density}, subsequent plots). 
This finding is consistent with the approximately exponential distributions predicted by simple epidemic models, as discussed above. The matter is of more than theoretical interest, as mentioned in our introduction: the long tail of the exponential distribution implies that during an elimination campaign, some communities may have unexpectedly large prevalence and appear to be outliers when in fact they are entirely consistent with the variation expected. Figure~\ref{fig:tana-phaseplane} displays these probability density functions $\phi(x)$ transformed to the phase-plane representation defined above, $p(x) = -\frac{d}{dx}\ln(\phi(x)/N)/N$. We assume a population size $N=100$ per village, which is approximately the number of children at risk in one of these villages \cite{Gebre2012}. In this plot, the same motion from lower right to upper left is visible, with convergence to the vertical axis and possibly to a curve leaving that axis in the positive quadrant. More abundant data may permit location of such a limiting curve that would intersect the vertical axis in this representation of the data. That curve would provide an estimate of the quasistationary behavior of the disease, and its intercept would provide an estimate of the disease's $R_0$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fit-tanachi-beta.pdf} \caption{ \label{fig:tana-density} {\bf Changing trachoma prevalence} at baseline, and at 6-month intervals during the TANA trial of mass administration of azithromycin \cite{Gebre2012}. As the trial progresses, the prevalences become smaller and become more closely approximated by the exponential \cite{lietman-gebre-abdou2015}. (Individual village prevalences are shown in tick marks on the horizontal axis. Curves result from beta distribution kernel density smoothing \cite{chen1999beta}, with smoothing parameter determined from leave-one-out cross-validation \cite{burnham1998model}.) 
} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fit-tanachi-beta-phaseplane.pdf} \caption{ \label{fig:tana-phaseplane} {\bf Phase plane representation of changing trachoma prevalence} data from TANA trial shown in figure~\ref{fig:tana-density}. Each curve on the plot corresponds to one of the distributions shown in figure~\ref{fig:tana-density}, transformed to the $x$-$p$ plane as in earlier figures (see text for details). Over time the curves shift upward and to the left, moving close to the vertical axis for smaller values of $p$ and diverging from it at larger values of $p$, similar to the motion seen in the Hamiltonian analysis of the SIS model (figure~\ref{fig:s-evolution}). Each curve in this figure is restricted to the range of the nonzero prevalence values. } \end{figure} \section{Summary} Hamiltonian structures describing master equation and diffusion equation systems are the subject of ongoing exploration in stochastic processes research, where the solution sets of $H=0$ near the deterministic subspace are used to model quasistationary behaviors and rare transition events, such as switching between states or noise-induced extinctions. We have presented an application of these structures far away from the deterministic subsystem, to approximate the probability distribution of a process near an absorbing singular point, where the WKB hypothesis does not hold and transient dynamics of the limiting PDE rather than its large-time limit behavior must be used to identify the structure corresponding to the quasistationary probability distribution of the finite-size system. Quasistationary solutions in epidemic models can generally not be solved exactly, so approximation techniques are crucial in analysis of these processes. We present an alternative approach to this approximation problem, which may be extensible to other similar model settings and whose full usefulness is yet to be discovered. 
The WKB approximation and the Hamiltonian and Lagrangian techniques of analysis that it makes available are powerful and flexible, and may have applications in subcritical disease settings that go well beyond the quasistationary distribution. Our exploration of cross-sectional prevalence data from trachoma trials, when the prevalence distributions are represented as curves on the Hamiltonian phase plane, reveals a pattern of motion consistent with the motion on the phase plane predicted by this analysis for a subcritical transmission model. Thus it is consistent, at least qualitatively, with a hypothesis that trachoma transmission in that trial setting is in fact subcritical and stochastic. This analysis fails to disconfirm that hypothesis, though other explanations are possible. In epidemiological settings where more data are available, it may become possible to observe an upper limiting curve in such a plot as well as the convergence to the vertical axis. By revealing an emerging shape of the tail of the prevalence distribution, information about that curve could contribute to description of the quasistationary behavior of the disease. Such information also may contribute to an estimate of its basic reproduction number, arrived at independently of any estimate based on temporal change in prevalences. Beyond the one-variable birth-death models that we have analyzed, the techniques that we explore here for study of quasistationary dynamics may be of use with models with more stages of disease progression or differing transition rates, multitype models, models with patch or network structure (cf.\ \cite{hindes_epidemic2016}), and other cases that are more complex than the simple models presented here. In population biology, the SIS model we have discussed is also known as a stochastic logistic model \cite{nasell1999}, and this analysis has promise for population biology models that are similar but not identical to this model. 
While the primary goal in conservation biology is to preserve the populations in question, rather than to eradicate them as in epidemiology, declining populations are clearly of interest and the models in use may benefit from a similar analysis. This analysis may be of use in other applications as well, where quasistationary dynamics near an absorbing state is of interest. \section{Acknowledgments} This study was supported by a Models of Infectious Disease Agent Study (MIDAS) grant from the US NIH/NIGMS to the University of California, San Francisco (U01GM087728), by US NEI R01-EY025350, and by a Research to Prevent Blindness Award. IBS was supported by NRL base funding (N0001414WX00023) and Office of Naval Research (N0001414WX20610). We are grateful to two anonymous reviewers for helpful comments on an earlier version of this manuscript. \section{Conflict of Interest Statement} The authors declare they each have no conflicts of interest. \section{References} \bibliographystyle{elsarticle-num}
{ "redpajama_set_name": "RedPajamaArXiv" }
6,708
Boris Mitawski (; * 12. August 1948 in Leningrad; eigentlich Boris Nikolajewitsch Iwanow) ist ein russischer Kunstmaler. Er war maßgeblich an Organisation und Durchführung der ersten in einer Privatwohnung veranstalteten Ausstellung russischer Künstler in Leningrad beteiligt. Leben Nach einem abgeschlossenen Studium zum Diplom-Chemiker studierte er Kunst an der Staatlichen Kunsthochschule für Malerei, Bildhauerei und Architektur I. Repin in Leningrad, der führenden Kunsthochschule des Landes. Zusammen mit Sergei Kowalski und Wiktor Bogorad organisierte er 1973 die erste nichtamtliche Kunstausstellung Sankt Petersburgs, in der seine Werke auch erstmals der Öffentlichkeit präsentiert wurden. Die Ausstellung dieser inoffiziellen Künstlergruppe Inaki fand in einer nur 11 m² großen Wohnung im Baskow-Weg statt. Im weiteren Verlauf seiner Künstlertätigkeit trat er als Mitbegründer der zweiten Welle der Nonkonformistischen Kunst der Sowjetunion in Erscheinung. 1978 entstand die Künstlergruppe Letopis (= Chronik), die Ausstellungen "in der Natur" und in einer vernachlässigten Kirche veranstaltete. Eine 1981 stattfindende Ausstellung dieser Gruppe im Sankt Petersburger Jugendpalast endete in einem Skandal, als die Verwaltung den Kontakt zwischen den Künstlern und den Besuchern verbot und die Ausstellung drei Tage früher beendete. Im selben Jahr wurde von Boris Mitavski und Jaroslaw Suchow die Künstlergruppe Ostrow (Die Insel) gegründet. Sein künstlerisches Schaffen beschränkte sich jedoch nicht nur auf Nonkonformismus, seine früheren Werke zeigen eine Reise durch nahezu alle Stilistiken der zeitgenössischen Kunst, unter anderem Abstraktionismus, Konstruktivismus, Expressionismus, Surrealismus sowie jegliche Formen der Graphik. Nebenbei arbeitete er einige Jahre als Industrie-Designer. 
Internationale Ausstellungen und Auktionen 1988: Teilnahme an der Versteigerung des New Yorker Auktionshauses Guernsey's: "Artwork of the Soviet Union" 1989: Gruppenausstellung Sankt Petersburger Künstler in Göttingen 1990: Teilnahme an der Versteigerung des Pariser Auktionshauses Drout: "Leningrad - Tradition et Perestroika" 1997: Einzelausstellung in Stadthagen 1998: Einzelausstellung in der LVA Hannover 1999: Teilnahme an der Ausstellung im Rahmen der russischen Kulturtage in Hannover: "Puschkin 1799 - 1999" 2003: Teilnahme an der Ausstellung im Rahmen der russischen Kulturtage in Berlin 2008: Teilnahme an der Ausstellung der Künstlergruppe Insel "20 Jahre danach" Sankt Petersburg Quellen Sergej Kovalski: Chronik der inoffiziellen Kunst Leningrads Sergej Kovalski: Vom Selbstausdruck zur Selbstverwirklichung Sergey Kovalski: Apartment Exhibitions of Underground of Russion Avant-Gard Art Biografie von Boris Mitavski Weblinks Biographie (ru) Homepage (de) MySpace (ru) Maler (Russland) Russe Geboren 1948 Mann
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,924
{"url":"https:\/\/www.physicsforums.com\/threads\/conservation-of-energy-momentum-tensor.980236\/","text":"# Conservation of Energy momentum tensor\n\nkent davidge\nUnfortunetly, I found across the web only the case where there's no source, in which case ##\\partial_\\alpha T^{\\alpha \\beta} = 0##. I'm considering Minkowski space with Minkowski coordinates here.\n\nWhen there's source, is it true that ##\\partial_\\alpha (T^{\\alpha \\beta}) = 0## or is it ##\\int \\partial_\\alpha (T^{\\alpha \\beta}) = 0##? Where now this latter ##T^{\\alpha \\beta}## is the result of the variation of the complete action (source included).\n\n2022 Award\n##\\nabla_\\alpha T^{\\alpha\\beta}=0## (edit: and not ##\\partial_\\alpha T^{\\alpha\\beta}=0##, as I incorrectly typed originally). Since the stress-energy tensor is the same as the Einstein tensor give or take a constant factor, this turns out to be simply a statement of the Bianchi identity.\n\nLast edited:\nkent davidge\nMentor\nWhen there's source, is it true that ##\\partial_\\alpha (T^{\\alpha \\beta}) = 0##\n\nNo, because if there's stress-energy present, spacetime is not flat, so you have to use the correct curved spacetime equation, which is\n\n$$\\nabla_\\alpha T^{\\alpha \\beta} = 0$$\n\n##\\partial_\\alpha T^{\\alpha\\beta}=0##. Since the stress-energy tensor is the same as the Einstein tensor give or take a constant factor, this turns out to be simply a statement of the Bianchi identity.\n\nCareful! If there is a non-zero stress-energy tensor, spacetime isn't flat. 
If spacetime is flat, then it is true that ##\\partial_\\alpha T^{\\alpha \\beta} = 0##, but only in the vacuous sense that ##T^{\\alpha \\beta} = 0##.\n\nIbix","date":"2023-03-24 23:18:06","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9518109560012817, \"perplexity\": 1408.188270273507}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296945289.9\/warc\/CC-MAIN-20230324211121-20230325001121-00028.warc.gz\"}"}
null
null
AUI.add('aui-video', function(A) { var Lang = A.Lang, UA = A.UA, getClassName = A.getClassName, NAME = 'video', CSS_VIDEO = getClassName(NAME), CSS_VIDEO_NODE = getClassName(NAME, 'node'), DEFAULT_PLAYER_PATH = A.config.base + 'aui-video/assets/player.swf?t=' + Lang.now(), DOC = A.config.doc, TPL_VIDEO = '<video id="{0}" controls="controls" class="' + CSS_VIDEO_NODE + '"></video>', TPL_VIDEO_FALLBACK = '<div class="' + CSS_VIDEO_NODE + '"></div>'; var Video = A.Component.create( { NAME: NAME, ATTRS: { url: { value: '' }, ogvUrl: { value: '' }, swfUrl: { value: DEFAULT_PLAYER_PATH }, poster: { value: '' }, fixedAttributes: { value: {} }, flashVars: { value: {} }, render: { value: true } }, BIND_UI_ATTRS: ['url', 'poster', 'ogvUrl', 'swfUrl', 'fixedAttributes', 'flashVars'], SYNC_UI_ATTRS: ['url', 'poster', 'ogvUrl'], prototype: { renderUI: function () { var instance = this; instance._renderVideoTask = A.debounce(instance._renderVideo, 1, instance); instance._renderSwfTask = A.debounce(instance._renderSwf, 1, instance); instance._renderVideo(!instance.get('ogvUrl')); }, bindUI: function () { var instance = this; instance.publish( 'videoReady', { fireOnce: true } ); }, _createSource: function(type) { var instance = this; var sourceNode = new A.Node(DOC.createElement('source')); sourceNode.attr('type', type); return sourceNode; }, _renderSwf: function () { var instance = this; var swfUrl = instance.get('swfUrl'); if (swfUrl) { var videoUrl = instance.get('url'); var posterUrl = instance.get('poster'); var flashVars = instance.get('flashVars'); A.mix( flashVars, { controls: true, src: videoUrl, poster: posterUrl } ); var flashVarString = A.QueryString.stringify(flashVars); if (instance._swfId) { instance._video.removeChild(A.one('#' + instance._swfId)); } else { instance._swfId = A.guid(); } var tplObj = '<object id="' + instance._swfId + '" '; if (UA.ie) { tplObj += 'classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" '; } else { tplObj += 
'type="application/x-shockwave-flash" data="' + swfUrl + '" '; } tplObj += 'height="100%" width="100%">'; if (UA.ie) { tplObj += '<param name="movie" value="' + swfUrl + '"/>'; } var fixedAttributes = instance.get('fixedAttributes'); for (var i in fixedAttributes) { tplObj += '<param name="' + i + '" value="' + fixedAttributes[i] + '" />'; } if (flashVarString) { tplObj += '<param name="flashVars" value="' + flashVarString + '" />'; } if (posterUrl != '') { tplObj += '<img src="' + posterUrl + '" alt="" />'; } tplObj += '</object>'; instance._video.append(tplObj); } }, _renderVideo: function(fallback) { var instance = this; var tpl = TPL_VIDEO; if (UA.gecko && fallback) { tpl = TPL_VIDEO_FALLBACK; } var tplObj = Lang.sub(tpl, [A.guid()]); var video = A.Node.create(tplObj); instance.get('contentBox').append(video); instance._video = video; }, _uiSetFixedAttributes: function (val) { var instance = this; instance._renderSwfTask(); }, _uiSetFlashVars: function (val) { var instance = this; instance._renderSwfTask(); }, _uiSetOgvUrl: function (val) { var instance = this; if (UA.gecko || UA.opera) { var video = instance._video; var usingVideo = instance._usingVideo(); if ((!val && usingVideo) || (val && !usingVideo)) { video.remove(true); instance._renderVideoTask(!val); } if (!val) { instance._renderSwfTask(); } else { var sourceOgv = instance._sourceOgv; if (!sourceOgv) { sourceOgv = instance._createSource('video/ogg; codecs="theora, vorbis"'); video.append(sourceOgv); instance._sourceOgv = sourceOgv; } sourceOgv.attr('src', val); } } }, _uiSetPoster: function (val) { var instance = this; var video = instance._video; if (instance._usingVideo()) { video.setAttribute('poster', val); } instance._renderSwfTask(); }, _uiSetSwfUrl: function (val) { var instance = this; instance._renderSwfTask(); }, _uiSetUrl: function (val) { var instance = this; var ogvUrl = instance.get('ogvUrl'); var video = instance._video; var sourceMp4 = instance._sourceMp4; if (UA.gecko && 
!instance._usingVideo()) { if (sourceMp4 != null) { sourceMp4.remove(true); instance._sourceMp4 = null; } } else { if (video || !ogvUrl) { if (!sourceMp4) { sourceMp4 = instance._createSource('video/mp4;'); video.append(sourceMp4); instance._sourceMp4 = sourceMp4; } sourceMp4.attr('src', val); } } instance._renderSwfTask(); }, _usingVideo: function() { var instance = this; return (instance._video.get('nodeName').toLowerCase() == 'video'); } } } ); A.Video = Video; }, '@VERSION@' ,{skinnable:true, requires:['aui-base','querystring-stringify-simple']});
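The widget above routes its `_uiSet*` attribute handlers through `A.debounce(..., 1, instance)` so that a burst of attribute updates triggers only one re-render. A minimal plain-JavaScript sketch of that debounce pattern (illustrative only — this is not AUI's actual implementation):

```javascript
// Collapse a burst of calls into a single trailing invocation,
// mirroring how the component defers _renderSwf/_renderVideo.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer); // cancel any pending call
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Example: three rapid "attribute changes" schedule only one render.
let renders = 0;
const renderTask = debounce(() => { renders += 1; }, 10);
renderTask(); renderTask(); renderTask();
```

Once the timer fires, `renders` is 1, which is why the component can safely call `_renderSwfTask()` from several `_uiSet*` handlers in a row without redundant DOM churn.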
.class Lcom/google/android/gms/internal/zzfb$1;
.super Ljava/lang/Object;

# interfaces
.implements Landroid/content/DialogInterface$OnClickListener;

# annotations
.annotation system Ldalvik/annotation/EnclosingMethod;
    value = Lcom/google/android/gms/internal/zzfb;->execute()V
.end annotation

.annotation system Ldalvik/annotation/InnerClass;
    accessFlags = 0x0
    name = null
.end annotation

# instance fields
.field final synthetic zzAa:Lcom/google/android/gms/internal/zzfb;

# direct methods
.method constructor <init>(Lcom/google/android/gms/internal/zzfb;)V
    .registers 2

    iput-object p1, p0, Lcom/google/android/gms/internal/zzfb$1;->zzAa:Lcom/google/android/gms/internal/zzfb;
    invoke-direct {p0}, Ljava/lang/Object;-><init>()V
    return-void
.end method

# virtual methods
.method public onClick(Landroid/content/DialogInterface;I)V
    .registers 6
    .param p1, "dialog"    # Landroid/content/DialogInterface;
    .param p2, "which"     # I

    .prologue
    iget-object v0, p0, Lcom/google/android/gms/internal/zzfb$1;->zzAa:Lcom/google/android/gms/internal/zzfb;
    invoke-virtual {v0}, Lcom/google/android/gms/internal/zzfb;->createIntent()Landroid/content/Intent;
    move-result-object v0
    invoke-static {}, Lcom/google/android/gms/ads/internal/zzp;->zzbv()Lcom/google/android/gms/internal/zzid;
    move-result-object v1
    iget-object v2, p0, Lcom/google/android/gms/internal/zzfb$1;->zzAa:Lcom/google/android/gms/internal/zzfb;
    invoke-static {v2}, Lcom/google/android/gms/internal/zzfb;->zza(Lcom/google/android/gms/internal/zzfb;)Landroid/content/Context;
    move-result-object v2
    invoke-virtual {v1, v2, v0}, Lcom/google/android/gms/internal/zzid;->zzb(Landroid/content/Context;Landroid/content/Intent;)V
    return-void
.end method
Still Plate (IB3) — item page from the Infinity Blade wiki ("906 pages on this wiki"). Locked: No; Used by: Person. Level-1 stats: Cost 42,000; HP +50; XP 30,000; Evade +5. Values for levels 2–20 and the remaining fields were unrecorded (???).
Next wave of pitching prospects emerging in the minors
August 13, 2015 by Mike · 333 Comments

Kaprielian. (Presswire)

Coming into the season, the Yankees had a very position player heavy farm system, with only two of their top ten prospects doing their work on the mound. One was Luis Severino, who is currently in the big league rotation, and the other was Ian Clarkin, who has not pitched in an official minor league game this season due to an ongoing elbow problem. Clarkin is currently on a throwing program, supposedly.

Beyond Severino and Clarkin, the Yankees had a lot of interesting arms in the lower levels of the minors but not much else. The kind of pitching prospects every team has, really. It didn't help that Domingo German, the team's third best pitching prospect coming into 2015, blew out his elbow in Spring Training and needed Tommy John surgery. That's two of their three best pitching prospects down for the season. Yikes.

Thankfully, a new wave of pitching prospects has emerged this summer, giving the Yankees more potential rotation help in the near future. First and foremost, the Yankees added to their pitching inventory by selecting UCLA righty James Kaprielian in the first round of June's draft. He has yet to pitch in a game since turning pro but was scheduled to do so this week. (That didn't happen for some reason, I think because the team didn't want him pitching with the threat of rain in Tampa.) Assuming Severino throws more than 50 innings with the Yankees down the stretch, Kaprielian takes over as New York's top pitching prospect, and he could be big league ready next August or September a la Ian Kennedy in 2007.
Kaprielian is not quite as refined as Kennedy but he has better pure stuff and the Yankees were very aggressive with Severino, so I assume they will be with Kaprielian as well. There's no reason to select a pitcher like this only to take it slow as he climbs the ladder. Behind Kaprielian, both Brady Lail and Rookie Davis have stepped forward this summer to establish themselves as no doubt rotation prospects, albeit with different styles. Lail is closer to the big leagues — he was promoted to Triple-A not too long ago — and is more of a command and control guy than a big stuff guy. The Yankees did a great job developing him into a legitimate prospect after drafting him as a raw Utah high schooler. Davis is a classic fastball/curveball power pitcher whose control has improved tremendously as a pro. He spent most of the year at High-A Tampa and was recently moved up to Double-A Trenton, replacing Lail in the rotation. Lail could help as soon as next season in a David Phelps/Adam Warren role, assuming the Yankees are willing to put him on the 40-man roster at some point. He is not Rule 5 Draft eligible this winter. Davis is. While Davis and to a slightly lesser extent Lail are the Yankees' top two pitching development successes this year, they aren't the only ones. Jordan Montgomery and Jonathan Holder, two mid-round draft picks last year, have handled Single-A ball well. That's not surprising for Montgomery after he spent three years in an SEC rotation. Holder is a reliever turned starter however, and he's had success in his new role. Both guys figure to join Davis in the Double-A rotation to open 2016. For the most part the Yankees have had their starters stay healthy this year. Masahiro Tanaka spent a month on the DL and Michael Pineda is expected to miss about a month as well, but that's it. In the grand scheme of things, two starters missing a month each is nothing. Last year almost the entire rotation was on the DL with long-ish term injuries by May, remember. 
That led to Shane Greene getting a chance as well as the Brandon McCarthy and Chris Capuano pickups. The Yankees could have used another starter at the deadline but they weren't desperate like last year, when they were out of viable rotation arms. That's a good thing because outside of Severino and Warren, the Yankees didn't have much upper level rotation depth in the minors. That does not figure to be the case next year, with Lail set for Triple-A and the trio of Davis, Holder, and Montgomery set for Double-A. Kaprielian is on the way too.

Do the Yankees have a bunch of budding aces in the minors? No, of course not. No team does. (Except the Mets the last few years, I guess.) What the Yankees do have now is a collection of competent pitching prospects reaching the upper levels of the minors, putting them in position to step in and help very soon. They didn't have those guys coming into 2015. It was Severino and that's it. A new batch of arms emerged this year and the Yankees will surely need 'em going forward.

Filed Under: Minors · Tagged With: Brady Lail, James Kaprielian, Jonathan Holder, Jordan Montgomery, Rookie Davis
A temp track is an existing piece of music or audio which is used during the editing phase of television and film production, serving as a guideline for the tempo, mood or atmosphere the director is looking for in a scene. It is also referred to as scratch music, temp score or temp music. The track is usually replaced before release by an original soundtrack composed specifically for the film. While some feel that having to follow a temp track can be limiting for a composer, it can be a useful tool in finding the right style of music for a particular scene and can be a time-saver for both the composer and director.
# Thread: Prove this by whole cube expansion

1. ## Prove this by whole cube expansion

prove that (cos theta + i sin theta)^3 equals cos 3theta + i sin 3theta by expanding the whole cube

2. ## Re: Prove this by whole cube expansion

Well, have you tried expanding the cube?

3. ## Re: Prove this by whole cube expansion

yeah i did........ actually we had to it for square and cube, i did for the square but for the cube i can not derive this, i can not prove it using x^3 + y^3 + 3xy^2 + 3yx^2

4. ## Re: Prove this by whole cube expansion

so kindly prove it for me

5. ## Re: Prove this by whole cube expansion

I'll do no such thing, you'll prove it yourself. Start by substituting "cos(theta)" for x and "i sin(theta)" for y. Simplify and collect your real parts and imaginary parts.

7. ## Re: Prove this by whole cube expansion

thanks man, ibdutt, thanks a lot
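Following the hint in reply 5, the expansion can be carried through in a few lines; a sketch (the worked algebra and the triple-angle identities below are mine, not from the thread):

```latex
\begin{align*}
(\cos\theta + i\sin\theta)^3
  &= \cos^3\theta + 3\cos^2\theta\,(i\sin\theta)
     + 3\cos\theta\,(i\sin\theta)^2 + (i\sin\theta)^3 \\
  &= \left(\cos^3\theta - 3\cos\theta\sin^2\theta\right)
     + i\left(3\cos^2\theta\sin\theta - \sin^3\theta\right) \\
  &= \cos 3\theta + i\sin 3\theta,
\end{align*}
```

where the second line uses $i^2 = -1$, $i^3 = -i$, and the last step uses the standard triple-angle identities $\cos 3\theta = \cos^3\theta - 3\cos\theta\sin^2\theta$ and $\sin 3\theta = 3\cos^2\theta\sin\theta - \sin^3\theta$.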
Tobson predicts: Melodifestivalen 2018

The tour has reached its end - Melodifestivalen has arrived in Stockholm (or Solna to be more precise) and Sweden will find itself a song for Lisbon. If you read my previous posts on the semi finals, you will be aware that I have been less than thrilled with the overall quality this year, concerning the songs as well as the show itself. The criticism this year has been deafening and the ratings have not been what they should be. Will the viewers come back for the final or will we have the lowest number of people watching in years?

As I listened through the 12 finalists this morning I have to admit they are a better bunch than I would have remembered. The overall quality is fine and there is no entirely hopeless entry here. Not everything's my cup of tea but that is a very different thing indeed. Had this been a national final in Slovenia or Spain it would have been sensational. Melodifestivalen is still the most solid national final around. But I think Sweden has lost its grip a little bit.

Sweden aims at being a super power at Eurovision. The clear aim every year is to win. After a few years of tremendous success, it feels like Sweden lost the lead. Other countries are pushing the contest in other directions - who would have thought Belgium would be one of the countries pushing the envelope? - and Sweden is no longer the clear big brother. The Zeitgeist passed from the slightly generic but super effective songs into something else. Something touched by the fingers of the Sobral siblings, possibly. Most of the songs chosen for 2018 have personality and something of their own, something fresh, while Sweden is still in the same place as they were two years ago. The same radio friendly sound, the same preoccupation with staging over songwriting. Maybe tonight's winner will still do well in Lisbon. A top ten is in no way out of reach.
With a bit of luck the Swedish entry could fill a gap in the lineup and snatch another top five spot. But Eric, Loreen, Sanna, Måns and Frans all went in and showed everyone how it should be done. They didn't rely on luck. And that is a major shift. Maybe SVT won a bit too much and got a bit too content with themselves and thought the audience would stay put regardless. Some quotes from the production suggest this could be the case. Maybe a real shakeup in the team could do some good for 2019?

But tonight, then? Who would be the best choice for Lisbon? Jessica Andersson would have been a good choice had the song worked better live. A surprising lack of energy in the semi just underlined the impression of her being a budget version of Helene Fischer. Samir & Viktor have the most infectious jam going on but they have no interest in taking it abroad. Felix Sandman has momentum but would be perceived too much like last year's flavour come May. John Lundvik, then? There is a buzz around him that can't be denied and he has a very inviting aura and a fantastic voice. But will the international jury go for something as traditional? They shouldn't.

Tobson's prediction: For the second year running Sweden will go for a pop guy with a smooth song and stunning visuals. It is highly unusual that Sweden goes for two such similar packages in consecutive years, but Benjamin Ingrosso has a much better song than last year's entry. "Dance You Off" is also the song that feels most like an idea of its own and could possibly be a nod towards more creativity among songwriters next year.

And of course I will be on Twitter tonight. Join me there!

Labels: 2018, melodifestivalen, national final, prediction, Sweden

UMK 2018: it all depends on Saara

This week I am completely ditching Sweden and Andra Chansen in order to focus on Estonia (I haven't had time to catch up there so no review coming up, unfortunately) and above all Finland.
Since the three competing entries were revealed, there has been a certain amount of buzz around Suomi. Is it time for me to eat my words from when Saara Aalto was selected? Seriously, folks. It is not. The songs lined up all have their qualities and are solid efforts so everything comes down to Saara Aalto herself in the end. And that is where my worries were in the first place.

The performance costumes - presented with much pomp and circumstance earlier this week - suggest all three performances could be pretty OTT and that is a concern of mine. What will we see tonight? Will there be huge show numbers that can't be recreated in Lisbon or are tonight's performances what would be shown on stage in May as well? If there is too much madness going on show-wise, is that an attempt to highlight Saara or could she get lost in the middle of everything?

Saara is in many ways a Finnish Linda Bengtzing who wants to do well so badly that she loses her cool every time it really matters. If Saara can keep a lid on herself tonight - and even more importantly in Portugal later in spring - Finland could be a contender. It all comes down to whether she can deliver and actually be the performer she wants to be.

What about the songs, then?

1. Monsters

The first song revealed also turned out to be the strongest of the bunch. A contemporary pop number with a clear hook and possibly not too much space for Saara to go bonkers. Staying controlled is a key word in all three performances. My main objection is that it wouldn't have hurt had it been a bit heavier. Is it too late to remix it for Eurovision?

2. Domino

A very good song but also a relic from a time gone by. This is the perfect G:son/Ljunggren ballad that proved outdated already in 2012 when the fabulous "Quédate conmigo" somehow barely made it into the top ten with a similar sound and a similar high note climax. Here is also a big risk of Saara running away with herself and over-performing vocally like there was no tomorrow.

3. Queens

I had hoped the last song would be the real killer track but instead it is a surprisingly scattered effort that mainly sounds like something Britney Spears might have toyed with a few years ago. Not having any crystal clear artistic identity as yet, Saara should definitely refrain from straying into someone else's territory and running the risk of being labeled a copycat.

Prediction: While "Domino" is perhaps the closest to what Saara really wants to be, "Monsters" would clearly be Finland's best shot at ESC success. I predict - and hope - that tonight's result will be the same as the running order. And more than anything, I hope Saara will keep herself composed and pull this off. As per usual, I will also live tweet during the show. Feel free to follow and discuss and talk back at me.

Labels: 2018, Finland, national final, prediction, Tobson review
LOMBOK, AN UNDISCOVERED PARADISE

Lombok, although only 70 miles to the east of Bali, feels as if it's almost another country in another time. Lombok's beautiful landscapes are some of the finest in the Indonesian archipelago, with only a fraction of the hustle and bustle of its neighbors, Bali and Java. Villagers commute unhurriedly via cidomos (horse drawn carts) through small family farms from town to town. There is also a diverse selection of hotels to stay in on Lombok, giving you more choice and better value! The time in Lombok is GMT + 8 hours.

Lombok Island is truly an undiscovered paradise. Lombok and Sumbawa, the two main islands of West Nusa Tenggara, and other small islands known as The Gilis, invite you with their charming variety of landscapes, places of interest, and exciting outdoor activities for tourists.

The view from our open-air dining terrace.

Exotic tropical islands and coastlines lined with pristine beaches, a wonderful climate, and the natural beauty of this island combine to create a truly unique tropical paradise that everyone should visit at least once. In addition, Lombok also has ideal waters for all underwater activities (such as swimming, sunbathing, sailing, diving, surfing, fishing and much more). Finish your trek off with a relaxing swim in the ocean, or try any of the exciting water sports we have on offer here!

Lombok is situated between 115°45 and 119°10 east and to the south of the equator between 8°52 and 9°52 south. As such, this island is perfectly located within the "golden triangle" that contains many of Indonesia's best-kept secrets: East Nusa Tenggara (Komodo Island), Bali (the Island of Gods), and Toraja Tribal Island in Sulawesi. It comprises an area of 20,153 km². For the most part, it is mountainous and hilly, with low and high plains from the western part to the eastern end of Sumbawa Island. Its length from west to east is 80 km.
Sumbawa is 300 km from west to east and 100 km from north to south; its coastline extends 2,500 km, and its territorial waters cover 29,000 km², including 137 islands, of which 70 are inhabited. Sumbawa, with 75% of the land area, holds only 25% of the total population; Lombok is the opposite, holding 75% of the population on only 25% of the land.

The climate on Lombok is generally similar to other areas with a tropical climate in Indonesia. The temperature ranges from 21° to 33° Celsius. There are two seasons – namely the wet and dry seasons. Here on Lombok, the wet season runs from October to March and the dry season runs from April to September. January in particular can be very rainy and windy, reaching its peak in February.

Marine ecology has played an important role in forming beautiful coral reefs with sea grass and seaweed vegetation; the majority of these reefs are found in the Nusa Tenggara region. In the future, these areas could potentially accelerate the development of tourism, fisheries, and pharmaceutical products.

With the seasonal nature of the monsoon, the vegetation on Nusa Tenggara is influenced by the distribution of rainfall throughout the year, rather than by the total annual rainfall. The flora and fauna of Lombok are divided by the well-known "Wallace Line" that separates the flora and fauna of Asia from those of Australia, which runs north to south between the islands of Lombok and Bali. Forests here are found mostly in mountain areas. Huge parts of Lombok are coastal wetlands, irrigated fields and other wetlands – while extensive dry land areas can be found on the island of Sumbawa. Many species of plants growing here have specific importance that influences the life of our people (such as kesambi, bungur, sonokeling, mahogany, teak, kelicung, pala, ipil, bamboo, and tutul).
There are also several types of animals spread over the area according to climate and natural conditions (such as wild pigs, small deer, deer, iguanas, porcupines, turtles, and many kinds of snakes and birds).

The original inhabitants of Lombok formerly followed an animistic belief system. The Sasak, a local tribe of Lombok, is believed to have come from northwest India or Myanmar. Nowadays, the people of Lombok are 90% Muslim but people of other religions can also be found, including Balinese Hindus (who mostly live in the western part), Christians, and Buddhists. It's fascinating to see such varied forms of religious life with different worship places and practices coexisting with mutual respect for one another – it's one of the beautiful things about our culture here on Lombok.

Lombok's ethnic groups have their own respective costumes and traditions, which are still alive today. The marriage custom ceremonies are the most dominant ones, and the time-honored tradition of catching sea worms (nyale) usually takes place on the southern coast of Lombok (which comprises Seger Desa Beach, Kuta, Selong Blanak, Mawun and Kaliantan Beach).

As a province, this autonomous region is governed by a Governor of the 1st Level Regional Administration, who is elected for a term of 5 years by the Regional Parliament and confirmed by the President of the Republic of Indonesia. The Governor plays an important role in conducting administration, coordinating, planning, and developing and promoting all aspects of social life.

There are seven regencies and two municipalities in West Nusa Tenggara:

West Lombok Regency with its capital in Gerung
Central Lombok Regency with its capital in Praya
East Lombok Regency with its capital in Selong
West Sumbawa Regency with its capital in Taliwang
Sumbawa Regency with its capital in Sumbawa Besar
Dompu Regency with its capital in Dompu
Bima Regency with its capital in Bima

In addition, there are two municipalities: Mataram and Bima City.
I am NOT a Halloweenie – part two.

Sometimes, I feel like I am battling alone. Take today, for example. You know that I'm no Halloween lover. We have always had a rule that if (IF!) we were going to allow the children to go out in the neighborhood for All Hallow's Eve, they would be dressed as angels or saints – as some sort of a visible sign of light in the proverbial (and eventually literal) darkness. It's actually been really neat to hear my children's responses to folks' age-old Halloween question as they open their front door: "And who might you be?" We've had smiles, questions, and, generally speaking, friendly comments to their responses of "St. John the Baptist," "St. Queen Margaret of Scotland," "St. Francis of Assisi," etc.

This year, we just haven't been able to get anything together. Between working and a non-sleeping infant and a few different ailments that have kept us off our "A" game, there are no costumes. And tomorrow is Halloween. Honestly, I would prefer that my kids have a Harvest Party, or Holyween Party, or messy activities involving apples and caramel and leaf bouquets – ANYTHING other than going out and being subject to the ghouls and goblins and other junk that's out there. However, again, as I said, we don't have anything together. And they – the kids – know it.

So, this afternoon comes time for the conversation with hubby – what are we going to do for Halloween? It's the day before and we're – deep breath yet no surprise here – unprepared. We're unprepared for responding to the genuinely nice invitation from the classmate's Mom inviting our eldest out to trick-or-treat. We're unprepared to come to consensus on whether they can trick-or-treat at all this year. We're unprepared to follow through on our own years-old rule. We're just flat out tired. And unprepared. Lord, are you kidding me??

In my post about Halloween, I know I said something about praying before allowing our kids to even go around the block.
I was pretty much asking – begging – people – friends and perfect strangers alike – to push back against the tide of consumerism, peer pressure, and cultural norms and tell the whole Halloween thing to take a hike. And now I'm weak. And I am unprepared. And I don't know if I can push back against it by myself this year. Because sometimes I feel like I'm battling alone. So … I am potentially a Pharisee and a hypocrite. Both.

My last ditch effort of desperate soul shielding this afternoon went something like, "Can't they all wear red and each have a letter? They could be L-O-V-E. Or – I know! They could wear white and be L-I-G-H-T. I would be "T." The eldest actually cried – CRIED at this idea. You see, Dad had really, truly almost already sounded like he was letting her be a princess this year.

I know there are a TON of people who would think – what is the BIG DEAL?? So what? Your kids go out and they get candy and they have fun – it's not like they'll be dressing like witches or vampires, zombies or skeletons. Quit being such a whiny, uptight, rigid, religious, fanatical, stick-in-the-mud already, and let your kids be kids!! Yup. That's me. I am uptight, rigid, religious, fanatical, and possibly on this point – a stick-in-the-mud – about this. Because it's their souls. And I'm the one who has to answer for those.

Everyone is welcome to my house for HOLYWEEN next year. Sigh. Still not sure about what will happen this year. Since I don't want to be a COMPLETE downer, tomorrow (when I most assuredly will have my aforementioned yet erstwhile missing "A" game back) I'll post a wonderful picture of my eldest from All Hallow's Eve a couple of years ago. She was determined to be St. Francis of Assisi. And, for all those who opened the door to her smiling face that crisp Autumnal night, she was.

I am NOT a Halloweenie!
Born: 6-Feb-1564
Birthplace: Canterbury, England
Died: 30-May-1593
Location of death: London, England
Cause of death: Murder
Remains: Buried, St. Nicholas Churchyard, Deptford, London, England
Religion: Atheist
Sexual orientation: Gay
Executive summary: Doctor Faustus

English dramatist, the father of English tragedy and the first practitioner of English dramatic blank verse, the eldest son of a shoemaker at Canterbury, was born in that city on the 6th of February 1564. He was christened at St. George's Church, Canterbury, on the 26th of February, 1563/4, some two months before William Shakespeare's baptism at Stratford-on-Avon. His father, John Marlowe, is said to have been the grandson of John Morley or Marlowe, a substantial tanner of Canterbury. The father, who survived by a dozen years or so his illustrious son, married on the 22nd of May 1561 Catherine, daughter of Christopher Arthur, at one time rector of St. Peter's, Canterbury, who had been ejected by Queen Mary as a married minister. The dramatist received the rudiments of his education at the King's School, Canterbury, which he entered at Michaelmas 1578, and where he had as his fellow-pupils Richard Boyle, afterwards known as the great Earl of Cork, and Will Lyly, the brother of the dramatist John Lyly. Stephen Gosson entered the same school a little before, and William Harvey, the famous physician, a little after Marlowe. He went to Cambridge as one of Archbishop Parker's scholars from the King's School, and matriculated at Benet (Corpus Christi) College on the 17th of March 1581, taking his B.A. degree in 1584, and that of M.A. three or four years later. Francis Kett, the mystic, burnt in 1589 for heresy, was a fellow and tutor of his college, and may have had some share in developing Marlowe's opinions in religious matters.
Marlowe's classical acquirements were of a kind which was then extremely common, being based for the most part upon a minute acquaintance with Roman mythology, as revealed in Ovid's Metamorphoses. His spirited translation of Ovid's Amores (printed 1596), which was at any rate commenced at Cambridge, does not seem to point to any very intimate acquaintance with the grammar and syntax of the Latin tongue. Before 1587 he seems to have quitted Cambridge for London, where he attached himself to the Lord Admiral's Company of Players, under the leadership of the famed actor Edward Alleyn, and almost at once began writing for the stage. Of Marlowe's career in London, apart from his four great theatrical successes, we know hardly anything; but he evidently knew Thomas Kyd, who shared his unorthodox opinions. Thomas Nashe criticized his verse, Robert Greene affected to shudder at his atheism; Gabriel Harvey maligned his memory. On the other hand Marlowe was intimate with the Walsinghams of Scadbury, Chiselhurst, kinsmen of Sir Francis Walsingham; he was also the personal friend of Sir Walter Raleigh, and perhaps of the poetical earl of Oxford, with both of whom, and with such men as Walter Warner and Robert Hughes the mathematicians, Thomas Harriott the notable astronomer, and Matthew Royden, the dramatist is said to have met in free converse. Either this free converse or the licentious character of some of the young dramatist's tirades seems to have sown a suspicion among the strait-laced that his morals left everything to be desired. It is probable enough that this attitude of reprobation drove a man of so exalted a disposition as Marlowe into a more insurgent attitude than he would have otherwise adopted. He seems at any rate to have been associated with what was denounced as Sir Walter Raleigh's school of atheism, and to have dallied with opinions which were then regarded as putting a man outside the pale of civilized humanity. 
As the result of some depositions made by Thomas Kyd under the influence of torture, the Privy Council were upon the eve of investigating some serious charges against Marlowe when his career was abruptly and somewhat scandalously terminated. The order had already been issued for his arrest, when he was slain in a quarrel by a man variously named (Archer and Ingram) at Deptford, at the end of May 1593, and he was buried on the 1st of June in the churchyard of St. Nicholas at Deptford. The following September Gabriel Harvey referred to him as "dead of the plague." The disgraceful particulars attached to the tragedy of Marlowe in the popular mind would not seem to have appeared until four years later (1597) when Thomas Beard, the Puritan author of The Theatre of God's Judgements, used the death of this playmaker and atheist as one of his warning examples of the vengeance of God. Upon the embellishments of this story, such as that of Francis Meres the critic, in 1598, that Marlowe came to be "stabbed to death by a bawdy servingman, a rival of his in his lewde love", or that of William Vaughan in the Golden Grove of 1600, in which the unfortunate poet's dagger is thrust into his own eye in prevention of his felonious assault upon an innocent man, his guest, it is impossible now to pronounce. We really do not know the circumstances of Marlowe's death. The probability is he was killed in a brawl, and his atheism must be interpreted not according to the ex parte accusation of one Richard Baines, a professional informer (among the Privy Council records), but as a species of rationalistic antinomianism, dialectic in character, and closely related to the deflection from conventional orthodoxy for which Kett was burnt at Norwich in 1589. 
A few months before the end of his life there is reason to believe that he transferred his services from the Lord Admiral's to Lord Strange's Company, and may have thus been brought into communication with Shakespeare, who in such plays as Richard II and Richard III owed not a little to the influence of his romantic predecessor. Marlowe's career as a dramatist lies between the years 1587 and 1593, and the four great plays to which reference has been made were Tamburlaine the Great, an heroic epic in dramatic form divided into two parts of five acts each (1587, printed in 1590); Dr. Faustus (1588, entered at Stationers' Hall 1601); The Famous Tragedy of the Rich Jew of Malta (dating perhaps from 1589, acted in 1592, printed in 1633); and Edward the Second (printed 1594). The very first words of Tamburlaine sound the trumpet note of attack in the older order of things dramatic:

From jigging veins of riming mother wits
And such conceits as clownage keeps in pay
We'll lead you to the stately tent of war,
Where you shall hear the Scythian Tamburlaine
Threatening the world with high astounding terms
And scourging kingdoms with his conquering sword.

It leapt with a bound to a place beside Kyd's Spanish Tragedy, and few plays have been more imitated by rivals (Greene's Alphonsus of Aragon, George Peele's Battle of Alcazar, Selimus, Scanderbeg) or more keenly satirized by the jealousy and prejudice of out-distanced competitors. The majestic and exquisite excellence of various lines and passages in Marlowe's first play must be admitted to relieve, if it cannot be allowed to redeem, the stormy monotony of Titanic truculence which blusters like a simoom through the noisy course of its ten fierce acts. With many and heavy faults, there is something of genuine greatness in Tamburlaine the Great; and for two grave reasons it must always be remembered with distinction and mentioned with honor.
It is the first poem ever written in English blank verse, as distinguished from mere rhymeless decasyllabics; and it contains one of the noblest passages, perhaps indeed the noblest, in the literature of the world, ever written by one of the greatest masters of poetry in loving praise of the glorious delights and sublime submission to the everlasting limits of his art. In its highest and most distinctive qualities, in unfaltering and infallible command of the right note of music and the proper tone of color for the finest touches of poetic execution, no poet of the most elaborate modern school, working at ease upon every consummate resource of luxurious learning and leisurely refinement, has ever excelled the best and most representative work of a man who had literally no models before him and probably or evidently was often if not always compelled to write against time for his living. The just and generous judgment passed by Goethe on the Faustus of his English predecessor in tragic treatment of the same subject is somewhat more than sufficient to counterbalance the slighting or the sneering references to that magnificent poem which might have been expected from the ignorance of Lord Byron or the incompetence of Hallam. And the particular note of merit observed, the special point of the praise conferred, by the great German poet should be no less sufficient to dispose of the vulgar misconception yet lingering among sciolists and pretenders to criticism, which regards a writer than whom no man was ever born with a finer or a stronger instinct for perfection of excellence in execution as a mere noble savage of letters, a rough self-taught sketcher or scribbler of crude and rude genius, whose unhewn blocks of verse had in them some veins of rare enough metal to be quarried and polished by Shakespeare. 
What most impressed the author of Faust in the work of Marlowe was a quality the want of which in the author of Manfred is proof enough to consign his best work to the second or third class at most. "How greatly it is all planned!" the first requisite of all great work, and one of which the highest genius possible to a greatly gifted barbarian could by no possibility understand the nature or conceive the existence. That Goethe "had thought of translating it" is perhaps hardly less precious a tribute to its greatness than the fact that it has been actually and admirably translated by the matchless translator of Shakespeare -- the son of Victor Hugo; whose labor of love may thus be said to have made another point in common, and forged as it were another link of union, between Shakespeare and the young master of Shakespeare's youth. Of all great poems in dramatic form it is perhaps the most remarkable for absolute singleness of aim and simplicity of construction; yet is it wholly free from all possible imputation of monotony or aridity. Tamburlaine is monotonous in the general roll and flow of its stately and sonorous verse through a noisy wilderness of perpetual bluster and slaughter; but the unity of tone and purpose in Doctor Faustus is not unrelieved by change of manner and variety of incident. The comic scenes, written evidently with as little of labor as of relish, are for the most part scarcely more than transcripts, thrown into the form of dialogue, from a popular prose History of Dr. Faustus, and therefore should be set down as little to the discredit as to the credit of the poet. Few masterpieces of any age in any language can stand beside this tragic poem -- it has hardly the structure of a play -- for the qualities of terror and splendor, for intensity of purpose and sublimity of note. 
In the vision of Helen, for example, the intense perception of loveliness gives actual sublimity to the sweetness and radiance of mere beauty in the passionate and spontaneous selection of words the most choice and perfect; and in like manner the sublimity of simplicity in Marlowe's conception and expression of the agonies endured by Faustus under the immediate imminence of his doom gives the highest note of beauty, the quality of absolute fitness and propriety, to the sheer straightforwardness of speech in which his agonizing horror finds vent ever more and more terrible from the first to the last equally beautiful and fearful verse of that tremendous monologue which has no parallel in all the range of tragedy. It is now a commonplace of criticism to observe and regret the decline of power and interest after the opening acts of The Jew of Malta. This decline is undeniable, though even the latter part of the play (the text of which is very corrupt) is not wanting in rough energy; but the first two acts would be sufficient foundation for the durable fame of a dramatic poet. In the blank verse of John Milton alone -- who perhaps was hardly less indebted than Shakespeare was before him to Marlowe as the first English master of word-music in its grander forms -- has the glory or the melody of passages in the opening soliloquy of Barabbas been possibly surpassed. The figure of the hero before it degenerates into caricature is as finely touched as the poetic execution is excellent; and the rude and rapid sketches of the minor characters show at least some vigor and vivacity of touch. In Edward the Second the interest rises and the execution improves as visibly and as greatly with the course of the advancing story as they decline in The Jew of Malta. The scene of the king's deposition at Kenilworth is almost as much finer in tragic effect and poetic quality as it is shorter and less elaborate than the corresponding scene in Shakespeare's King Richard II. 
The terror of the death scene undoubtedly rises into horror; but this horror is with skilful simplicity of treatment preserved from passing into disgust. In pure poetry, in sublime and splendid imagination, this tragedy is excelled by Doctor Faustus; in dramatic power and positive impression of natural effect it is certainly the masterpiece of Marlowe. It was almost inevitable, in the hands of any poet but Shakespeare, that none of the characters represented should be capable of securing or even exciting any finer sympathy or more serious interest than attends on the mere evolution of successive events or the mere display of emotions (except always in the great scene of the deposition) rather animal than spiritual in their expression of rage or tenderness or suffering. The exact balance of mutual effect, the final note of scenic harmony, between ideal conception and realistic execution is not yet struck with perfect accuracy of touch and security of hand; but on this point also Marlowe has here come nearer by many degrees to Shakespeare than any of his other predecessors have ever come near to Marlowe. Of The Massacre at Paris (acted in 1593, printed around 1600) it is impossible to judge fairly from the garbled fragment of its genuine text which is all that has come down to us. To Mr. Collier, among numberless other obligations, we owe the discovery of a noble passage excised in the piratical edition which gives us the only version extant of this unlucky play, and which, it must be allowed, contains nothing of quite equal value. This is obviously an occasional and polemical work, and being as it is overcharged with the anti-Catholic passion of the time has a typical quality which gives it some empirical significance and interest. That antipapal ardor is indeed the only note of unity in a rough and ragged chronicle which shambles and stumbles onward from the death of Queen Jeanne of Navarre to the murder of the last Valois. 
It is possible to conjecture, what it would be fruitless to affirm, that it gave a hint in the next century to Nathaniel Lee for his far superior and really admirable tragedy on the same subject, issued ninety-seven years after the death of Marlowe. In the tragedy of Dido Queen of Carthage (completed by Thomas Nashe, produced and printed 1594), a servile fidelity to the text of Virgil's narrative has naturally resulted in the failure which might have been expected from an attempt at once to transcribe what is essentially inimitable and to reproduce it under the hopelessly alien conditions of dramatic adaptation. The one really noble passage in a generally feeble and incomposite piece of work is, however, uninspired by the unattainable model to which the dramatists have been only too obsequious in their subservience. It is as nearly certain as anything can be which depends chiefly upon cumulative and collateral evidence that the better part of what is best in the serious scenes of King Henry VI is mainly the work of Marlowe. That he is at any rate the principal author of the second and third plays passing under that name among the works of Shakespeare, but first and imperfectly printed as The Contention between the two Famous Houses of York and Lancaster, can hardly be now a matter of debate among competent judges. The crucial difficulty of criticism in this matter is to determine, if indeed we should not rather say to conjecture, the authorship of the humorous scenes in prose, showing as they generally do a power of comparatively high and pure comic realism to which nothing in the acknowledged works of any pre-Shakespearian dramatist is even remotely comparable. 
Yet, especially in the original text of these scenes as they stand unpurified by the ultimate revision of Shakespeare or his editors, there are tones and touches which recall rather the clownish horseplay and homely ribaldry of his predecessors than anything in the lighter interludes of his very earliest plays. We find the same sort of thing which we find in their writings, only better done than they usually do it, rather than such work as Shakespeare's a little worse done than usual. And even in the final text of the tragic or metrical scenes the highest note struck is always, with one magnificent and unquestionable exception, rather in the key of Marlowe at his best than of Shakespeare while yet in great measure his disciple. A Taming of a Shrew, the play on which Shakespeare's comedy was founded, has been attributed, without good reason, to Marlowe. The passages in the play borrowed from Marlowe's works provide an argument against, rather than for, his authorship; while the humorous character of the play is not in keeping with his other work. He may have had a share in The Troublesome Raigne of King John (1591), and Fleay conjectured that the plays Edward III and Richard III usually included in editions of Shakespeare are at least based on plays by Marlowe. Lust's Dominion, printed in 1657, was incorrectly ascribed to him, and a play no longer extant, The True History of George Scanderbage, was assumed by Fleay on the authority of an obscure passage of Gabriel Harvey to be his work. The Maiden's Holiday, assigned to Day and Marlowe, was destroyed by Warburton's cook. Day was considerably Marlowe's junior, and collaboration between the two is not probable. Had every copy of Marlowe's boyish version or perversion of Ovid's Elegies (P. Ovidii Nasonis Amorum compressed into three books) deservedly perished in the flames to which it was judicially condemned by the sentence of a brace of prelates, it is possible that an occasional bookworm, it is certain that no poetical student, would have deplored its destruction, if its demerits could in that case have been imagined. His translation of the first book of Lucan alternately rises above the original and falls short of it, often inferior to the Latin in point and weight of expressive rhetoric, now and then brightened by a clearer note of poetry and lifted into a higher mood of verse. Its terseness, vigor and purity of style would in any case have been praiseworthy, but are nothing less than admirable, if not wonderful, when we consider how close the translator has on the whole (in spite of occasional slips into inaccuracy) kept himself to the most rigid limit of literal representation, phrase by phrase and often line by line. The really startling force and felicity of occasional verses are worthier of remark than the inevitable stiffness and heaviness of others, when the technical difficulty of such a task is duly taken into account. One of the most faultless lyrics and one of the loveliest fragments in the whole range of descriptive and fanciful poetry would have secured a place for Marlowe among the memorable men of his epoch, even if his plays had perished with himself. His Passionate Shepherd remains ever since unrivalled in its way -- a way of pure fancy and radiant melody without break or lapse. The untitled fragment, on the other hand, has been very closely rivalled, perhaps very happily imitated, but only by the greatest lyric poet of England -- by Shelley alone.
Marlowe's poem of Hero and Leander (entered at Stationers' Hall in September 1593; completed and brought out by George Chapman, who divided Marlowe's work into two sestiads and added four of his own, 1598), closing with the sunrise which closes the night of the lovers' union, stands alone in its age, and far ahead of the work of any possible competitor between the death of Spenser and the dawn of Milton. In clear mastery of narrative and presentation, in melodious ease and simplicity of strength, it is not less pre-eminent than in the adorable beauty and impeccable perfection of separate lines or passages. It is doubtful whether the heroic couplet has ever been more finely handled. The place and the value of Christopher Marlowe as a leader among English poets it would be almost impossible for historical criticism to overestimate. To none of them all, perhaps, have so many of the greatest among them been so deeply and so directly indebted. Nor was ever any great writer's influence upon his fellows more utterly and unmixedly an influence for good. He first, and he alone, guided Shakespeare into the right way of work; his music, in which there is no echo of any man's before him, found its own echo in the more prolonged but hardly more exalted harmony of Milton's. He is the greatest discoverer, the most daring and inspired pioneer, in all our poetic literature. Before him there was neither genuine blank verse nor a genuine tragedy in our language. After his arrival the way was prepared, the paths were made straight, for Shakespeare.

Father: John Marlowe (shoemaker)
Mother: Katherine Arthur
High School: King's School, Canterbury (1578-)
University: BA, Benet College, Cambridge University (1584)
University: MA, Benet College, Cambridge University (1587)
Murder arrested 18-Sep-1589, charges dropped
Treason indicted 18-Mar-1593
Heresy indicted 18-Mar-1593
Stabbed
You are such a dear friend. I feel so blessed to have you in my life. Thank you so much for joining me and my family to celebrate my birthday. Thank you also for the sweet birthday card filled with glitter and a cash gift. This weekend I'm planning to go shopping for summer clothes and your thoughtful money gift will help me purchase something really nice. I was nearly speechless when I opened up my birthday card and saw the check you enclosed. I had to sit down and take it all in. Thank you so much for this incredible cash gift for my 18th birthday. You are so generous and thoughtful. I read the enclosed note that you would like me to use the money to purchase a new Mac computer so I'll be all set for college. I never thought in a billion years that I would be able to afford such a luxury. I still can't believe this is happening. My heart is filled with so much gratitude and joy. Thanks a million!
Beyond Efficiency: Noesis Expands Into Commercial Solar and Possibly Battery Storage
The startup continues evolving to match the demands of the market.
Stephen Lacey, October 02, 2014

Noesis Energy, the startup that uses a combination of software analytics, online matchmaking and traditional finance to broker efficiency deals, has always modeled itself after similar companies in the solar industry. Now Noesis is actually in the solar business itself. According to CEO Scott Harmon, roughly one-third of the commercial efficiency projects closed on Noesis' online marketplace now include solar. The company has seen an increasing number of developers pairing small commercial solar PV projects with lighting, HVAC and building controls retrofits over the last year. "There are a lot more hybrid proposals," said Harmon. "The economics of blending can be very attractive." The change in demand may signal another natural course shift for Noesis. Although the company is firmly committed to the commercial and industrial efficiency space, it expects to get deeper in the solar market -- and possibly help execute battery storage projects as they become more realistic for commercial applications. The Texas-based startup, which has raised $19 million since 2011, has gone through a couple of organic changes. The company's first product was an online marketplace that allowed building owners to analyze their energy consumption and allowed energy service providers to use that data to bid on projects. It wasn't long until Noesis realized that financing was the missing piece for closing small and mid-sized commercial projects. So the company pulled together a handful of financing partners to help execute proposals on the marketplace, eventually raising a $30 million fund. Harmon often cited the residential solar marketplace Clean Power Finance as a model for what Noesis was trying to build.
Based on the number of PV projects now getting integrated with efficiency retrofits, Noesis appears to have built the equivalent marketplace for the commercial and industrial market. That could help developers in the small commercial solar market -- also known as the middle market -- which have struggled to get financing and keep pace with residential growth. Middle market projects are typically sized between 50 kilowatts and 1 megawatt. However, the smaller companies developing these systems often don't have credit scores that might allow them to secure debt. And because of their size, they also have a hard time securing project finance. The market also suffers from a lack of standards, making it even less attractive to investors. But what if small commercial systems could be blended together with efficiency retrofits? Harmon said a hybrid model can very often pull solar out of limbo. Noesis does not deal with tax equity or structure power-purchase agreements. Rather, it simply offers leases. And the company's financing partners are responding. "We have five lenders that are starting to like those deals. With very straightforward leasing, mixing solar and efficiency together, you can accelerate the combined payback to about eight years," said Harmon. A commercial lease exclusively for solar may not be attractive to investors. But mixing it together with other building equipment retrofits can improve the payback by five or more years, he said. Noesis is seeing three types of solar offerings: installers hiring efficiency experts; energy service professionals hiring solar installers or business development experts to help them expand; and partnerships between solar installers and traditional energy service companies. Noesis isn't limiting itself to specific technologies. The company wants to help broker whatever types of projects building owners are asking for and project developers are providing, assuming they're within the middle market. 
Behind-the-meter commercial battery storage, which could become a 720-megawatt market by the end of the decade, is another area of interest for the company. In fact, the distributed storage provider Stem is a partner on the Noesis platform. "The market is absolutely going to shift, and we plan to be squarely in the middle," said Harmon. "That's the leading edge. It's not just efficiency -- it's energy optimization for C&I customers."
The Desert Inn, also known as the D.I., was a hotel and casino on the Las Vegas Strip in Paradise, Nevada, which operated from April 24, 1950, to August 28, 2000. Designed by architect Hugh Taylor, with interior design by Jac Lessman, it was the fifth resort to open on the Strip, the first four being El Rancho Vegas, The New Frontier, the still-operating Flamingo, and the now-defunct El Rancho (then known as the Thunderbird). It was situated between Desert Inn Road and Sands Avenue. The Desert Inn opened with 300 rooms and the Sky Room restaurant, headed by a chef formerly of the Ritz Paris, which once had the highest vantage point on the Las Vegas Strip. The casino was one of the largest in Nevada at the time. The nine-story St. Andrews Tower was completed during the first renovation in 1963, and the 14-story Augusta Tower became the Desert Inn's main tower when it was completed in 1978 along with the seven-story Wimbledon Tower. The Palms Tower was completed in 1997 with the second and final renovation. The Desert Inn was the first hotel in Las Vegas to feature a fountain at the entrance. In 1997, the Desert Inn underwent a $200 million renovation and expansion, but after it was purchased for $270 million by Steve Wynn in 2000, he decided to demolish it and build a new hotel, resort and casino. The remaining towers of the Desert Inn were imploded in 2004. Today, the Wynn and Encore are where the Desert Inn once stood. The original performance venue at the Desert Inn was the Painted Desert Room, later the Crystal Room, which opened in 1950 with 450 seats. Frank Sinatra made his Las Vegas debut there on September 13, 1951, and became a regular performer. The property included an 18-hole golf course which hosted the PGA Tour Tournament of Champions from 1953 to 1966. The golf course is now a part of the Wynn resort.

History

The hotel was situated at 3145 Las Vegas Boulevard South, between Desert Inn Road and Sands Avenue.
The original name was Wilbur Clark's Desert Inn. Wilbur Clark, described by Frank Sinatra biographer James Kaplan as a "onetime San Diego bellhop and Reno craps dealer", originally began building the resort with his brother in 1947 with $250,000, but ran out of money. Author Hal Rothman notes that "for nearly two years the framed structure sat in the hot desert sun, looking more like an ancient relic than a nascent casino". Clark approached the Reconstruction Finance Corporation for investment, but it was struggling financially. In 1949, he met with Moe Dalitz, the head of the notorious Cleveland Syndicate, which had ties to the Mayfield Road Mob, and Dalitz agreed to fund 75% of the project with $1.3 million, and construction resumed. Much of the financing came from the American National Insurance Company (ANICO), though Clark became the public frontman of the resort while Dalitz remained quietly in the background as the principal owner. The resort would eventually be renamed Desert Inn and was called the "D.I." by Las Vegas locals and regular guests. The Desert Inn opened formally on April 24, 1950, at a two-day gala which was heavily publicized nationally. Journalists from all of the major newspapers and magazines were invited, and the hotel paid $5,700 to cover air tickets. 150 invitations were sent out by Clark to VIPs with a credit limit of $10,000. About half the attendees at the opening were from California and Nevada. At the opening show in the Painted Desert Room were performers such as Edgar Bergen and Charlie McCarthy, Vivian Blaine, Pat Patrick, The Donn Arden Dancers, Van Heflin, Abbott and Costello, and the Desert Inn Orchestra, led by Ray Noble. In attendance were a number of mafiosi, including Black Bill Tocco, Joe Massei, Sam Maceo, Peter Licavoli, and Frank Malone in a gala which Barbara Greenspun believed marked the beginning of heavy involvement of the mafia in the development of Las Vegas. Sidney Korshak was one of its early investors. 
The Desert Inn became known for its "opulence" and top-notch service. The first manager of the Desert Inn had previously worked as the manager at the Clift Hotel in San Francisco. Lew and Edie Wasserman were frequent guests of the hotel. During the 1950s, the hotel often hosted the Duke and Duchess of Windsor, Winston Churchill, Adlai Stevenson, Senator John F. Kennedy, and former President Harry S. Truman. In the mid 1940s and early 1950s the city and its Chamber of Commerce worked to keep the Vegas nickname of the "Atomic City" going to attract tourists. After the Desert Inn opened, so called "bomb parties" famously took place in the hotel's panoramic Sky Room, where patrons could view the detonations from a relatively safe distance while drinking Atomic Cocktails. In 1959, Lawrence Wien, owner of New York City's Plaza Hotel purchased the hotel, but signed a management deal for Clark to remain as manager. In the early 1960s, the mafia-financed casino hotels of the Las Vegas Strip and Nevada came under close scrutiny by the FBI, and they placed increased pressure on the Nevada Gaming Control Board to force the mobsters out of Las Vegas. After Sam Giancana was spotted on the premises of Frank Sinatra's Cal Neva Lodge & Casino at Lake Tahoe, his gambling license was removed by the Board and he was forced to sell up and forfeit his share in the Sands Hotel and Casino. The Desert Inn faced similar scrutiny by the FBI, attracting controversy at the same time for the involvement of Dalitz and his mobster associates, but simultaneously called for the prosecution of the FBI for illegal wiretapping. In 1964, Clark sold his remaining share in the hotel to Dalitz and business associates Morris Kleinman, Thomas McGinty and Sam Tucker. He died of a heart attack the following year. The bell captain of the Desert Inn, Jack Butler, remembered Clark: "Wilbur was the greatest guy. Without him this town never would've got off the ground. 
Everyone came into the club just to see him and he was all over the postcards. He was the only boss who would agree to have his picture taken". The Desert Inn's most famous guest, businessman Howard Hughes, arrived on Thanksgiving Day 1966, renting the hotel's entire top two floors. After staying past his initial ten-day reservation, he was asked to leave in December so that the resort could accommodate the high rollers who were expected for New Year's Eve. Instead of leaving, Hughes started negotiations to buy the Desert Inn. On March 27, 1967, Hughes purchased the resort from Dalitz for $6.2 million in cash and $7 million in loans. This was the first of many Las Vegas resort purchases by Hughes, including the Sands Hotel and Casino ($14.6 million) and the Frontier Hotel and Casino ($23 million). However, Hughes refused to include the PGA Tour Tournament of Champions in the deal, so Dalitz moved the tournament to his Stardust Resort and Casino in 1967 and 1968. The reclusive Hughes continued to live in his penthouse suite at the Desert Inn for four years, never leaving his bedroom. Usually unclothed, he spent his time "negotiating purchases and business deals with the curtains drawn and windows and doors sealed shut with tape", and did not allow anyone from the hotel staff to come in and clean his room. On the eve of Thanksgiving 1970, he was removed from his room on a stretcher and flown to the Bahamas. After Hughes's death in 1976, the hotel remained under the Summa Corporation, which completed the extensive renovation that he had ordered. Summa sold the hotel to Kirk Kerkorian and the Tracinda Corporation in 1986, and it became known as the MGM Desert Inn. Kerkorian sold it to ITT-Sheraton in 1993 for $160 million.

Modern history

In 1992, Frank Sinatra celebrated his 77th birthday at the hotel in an event that generated much media attention.
Dick Taylor, the CEO of public relations firm Rogers & Cowan recalled: "We had the stars assemble in the casino's presidential suite and then took them in limos to the entrance of the hotel, where the press and hundreds of fans were gathered, like a Hollywood movie premiere. The stars were interviewed on the red carpet and in they went to the famed Crystal Room. It was a very big deal." The property was sold to ITT Sheraton in 1993 for $160 million and renamed the Sheraton Desert Inn. Four years later, in 1997, ITT Sheraton undertook a $200 million renovation of the Augusta Tower and St. Andrews Tower and expansion, with the building and completion of the Palms Tower. The resort was returned to its historic name, The Desert Inn, dropping the Sheraton name, and was placed in the ITT Sheraton Luxury Collection division. ITT Sheraton itself was sold the following year to Starwood. Due to losing money, Starwood immediately put The Desert Inn up for sale, and contracted a sale to Sun International Hotels Ltd. on May 19, 1999, for $275 million. The sale to Sun International fell through the following March, however. Also in 1999, Sinatra's and the Rat Pack's estate managers, Sheffield Enterprises Inc., sued the Desert Inn, claiming an infringement of rights in their use of Sinatra's name and persona in its advertising and sales, including the words "Frank", "Ol' Blue Eyes", "the chairman of the Board" and "The Rat Pack". Sinatra's estate specifically objected to their use in "billboard advertising, marquees, alcoholic beverages and wine menus, and on the front and back of tee-shirts and caps at its gift shop" and alleged photographs of Sinatra and his signature on the walls behind the bar near the entrance to the Starlight Lounge of the Desert Inn. The Desert Inn celebrated its 50th anniversary on April 24, 2000. 
Celebrations were held for a week, including a celebrity golf tournament with the likes of Robert Loggia, Chris O'Donnell, Robert Urich, Susan Anton, Vincent Van Patten and Tony Curtis. As part of the festivities, a time capsule was buried in a granite burial chamber on April 25, to be reopened on April 25, 2050. Three days later, on April 27, Steve Wynn purchased the resort from Starwood for $270 million. Wynn closed the Desert Inn at 2:00 a.m. on August 28, 2000. On October 23, 2001, the Augusta Tower, the Desert Inn's southernmost building, was imploded to make room for a mega-resort that Wynn planned to build. Coming a month after the September 11 attacks, the implosion was marked with less fanfare than previous Las Vegas demolition spectacles due to its similarity to the collapse of the Twin Towers. Originally intended to be named Le Rêve, the new project opened as Wynn Las Vegas. The remaining two towers, the St. Andrews Tower and the Palms Tower, were both temporarily used as the Wynn Gallery, displaying some of Wynn's art collection. The St. Andrews Tower and Palms Tower were finally imploded on November 16, 2004.

Architecture and features

The initial hotel, a $6.5 million property set in 200 acres, was designed by Hugh Taylor, who was hired after Wilbur Clark and Wayne McAllister could not agree on the design. Interiors were by noted New York architect Jac Lessman. The property conveyed the image of a "southwestern spa" that was "half ranch house, half nightclub". It was built of "cinder blocks but trimmed with sandstone and finished throughout the inside with redwood". The logo of the hotel was a Joshua tree cactus. The driveway into the hotel passed under an "old-fashioned ranch sign" bearing the name Wilbur Clark's Desert Inn in scripted letters. The Desert Inn was the first hotel in Las Vegas to feature a fountain at the entrance. A "Dancing Waters" show involved the fountain jets choreographed to music. 
The interior of the hotel was finished in redwood with flagstone flooring. The public space included a registration area, a casino, two bars, a coffee shop, a restaurant, various commercial shops and services, and a broadcasting station for K-RAM radio. Guest rooms were located in wings situated behind the main building, surrounding the figure-eight swimming pool. The hotel originally had 300 rooms, each outfitted with air conditioning and an individual thermostat. The lounge was located in a three-story, glass-sided tower at the front of the hotel known as the Sky Room, which was the largest structure on the Strip at the time of its construction and commanded views of the mountains and desert all around, as well as overlooking the "Dancing Waters" feature. The Sky Room restaurant was headed by a chef formerly of the Ritz Paris. The original performance venue at the Desert Inn was the 450-seat Painted Desert Room, later renamed the Crystal Room, which opened in 1950. Charles Cobelle created the handpainted murals, and a "band car" was used to move the orchestra within the showroom. Next door was a restaurant, the Cactus Room. The Kachina Doll Ranch was a supervised play area for guests' children. The hotel had a ladies salon and health club from the outset. Another performance venue at the hotel was the Lady Luck Lounge. The hotel first underwent renovation in the early 1960s, during which the St. Andrews Tower was built in 1963. In the 1970s, the hotel underwent a $54-million renovation under Howard Hughes, which resumed under the responsibility of the Summa Corporation after his death in 1976. The 14-story Augusta Tower became the Desert Inn's main tower when it was completed in 1978. The seven-story Wimbledon Tower contained duplex suites, and resembled a modern version of a Mayan pyramid. It overlooked the golf course and was built at the same time, bringing the total room count to 825. 
By 1978, most of the 1950s structures on the property had been replaced with modern buildings and the property was renamed the Desert Inn and Country Club. It featured full country club amenities open to guests of the hotel, including a club house, driving range, pro shop, restaurant and lounge at the golf club; 10 tournament-class outdoor tennis courts; and a spa. Three restaurants were added: the "small, intimate" Monte Carlo Room, the "gourmet" Portofino Room, and the Ho Wan Chinese restaurant. At the time of its sale to ITT-Sheraton in 1993, the Desert Inn had the largest frontage of any casino hotel on the Las Vegas Strip, measuring feet. In 1997, the Desert Inn underwent a $200 million renovation and expansion by Steelman Partners, giving it a new Mediterranean-looking exterior with white stucco and red clay tile roofs. The room count was reduced to 715 to provide more luxurious accommodations. The nine-story Palm Tower was completed, the lagoon-style pool was added, and notable changes were made to the Grand Lobby Atrium, Starlight Lounge, Villas Del Lago, and new golf shop and country club. The seven-story lobby, fully built in marble, was also a major part of the renovation.

Casino

At its opening in 1950, the casino, at , was one of the largest in Nevada at the time. The windowless room included "five crap tables, three roulette wheels, four black jack tables and 75 slot machines", together with a sportsbook. Hundreds of coin-operated gambling machines – including slot machines, video poker, 21, and keno – were installed during the 1978 renovation. The casino acquired a reputation for attracting the high rollers. On January 27, 2000, the Megabucks jackpot record for Las Vegas was broken when $34,955,489 was won by an anonymous gambler at the Desert Inn, playing a bank of six Megabucks machines near the hotel's coffee shop.

Golf course and country club

The 18-hole, par-72 Desert Inn Golf Club opened in 1952. 
Initially, Dalitz had pushed the idea of opening a golf course next to the hotel with an entrance off the Strip, which would be accessible to other hotels and boost the city's profile as a resort destination. When other hotel owners rejected this idea, Dalitz built the course on the hotel premises. He also opened an outdoor dining area, to accommodate golfers and swimmers who might prefer a more informal atmosphere. The course hosted the PGA Tour Tournament of Champions from 1953 to 1966, attracting professional golfers such as Sam Snead, Arnold Palmer, and Jack Nicklaus. Allard Roen was director of the tournament for many years, and was instrumental in breaking down the racial barrier on the Strip. He broke the all-white club convention by permitting Sammy Davis, Jr. to play on the course. From 1958 it hosted the Golf Cup Golf Tournament, the largest tournament in the world for amateur golfers. According to the Las Vegas Sun, the course "held the distinction of being the only golf course in the United States to have annually hosted three championship tour events – the PGA Tour's Las Vegas Invitational, the Las Vegas Senior Classic and the LPGA Las Vegas International". The Panasonic Las Vegas Invitational, now the Las Vegas Invitational, returned to the Desert Inn in 1983, and became known as the wealthiest PGA event in the world. It has since been won by the likes of Fuzzy Zoeller, Curtis Strange, Greg Norman, and Paul Azinger. The Las Vegas Senior Classic event at the Desert Inn was added to the Senior PGA Tour in 1986, and has since been won by Bruce Crampton (1986), Al Geiberger, who equaled the course record at the time of 62 (1987), Larry Mowry (1988) and Lee Trevino (1992). Wilbur Clark was the first to build a home on the golf course in the 1950s. Additional homes were added to the Desert Inn Country Club Estates from the 1960s on. During his ownership of the hotel, Howard Hughes built 100 residential units on the property. 
After Steve Wynn purchased the resort in 2000 and announced that the real estate was too valuable to leave as a golf course, homeowners were forced to sell their properties to Wynn and his property developer Irwin Molasky. Molasky bought homes closest to the golf course for $2 million each, and homes on the perimeter of the resort for $900,000 to $1.2 million each. The Junior League of Las Vegas convinced Wynn to save one house from demolition and moved it to a lot in downtown Las Vegas to serve as its headquarters. This was the Morelli House, designed by architect Hugh Taylor for Antonio Morelli, a "rare example of modernist architecture in Las Vegas". The house was subsequently listed on the City and State historic registers.

Performances

Almost every major star of the latter half of the 20th century played at the Desert Inn. Frank Sinatra made his Las Vegas debut at the Desert Inn on September 13, 1951. He later said of it: "Wilbur Clark gave me my first job in Las Vegas. That was in 1951. For six bucks you got a filet mignon dinner and me". Noël Coward performed at the Inn on one occasion for an entire month. In 1954, after a performance at the Desert Inn, Betty Hutton announced one of her several retirements. In 1958, Tony Martin was signed to a five-year deal at $25,000 per week, making him the highest paid performer in Las Vegas. Eddie Fisher was heckled by a disguised Elizabeth Taylor during a 1961 performance, in a year which saw Dinah Shore booked for her fourth performance and debut Vegas performances at the Desert Inn by both Benny Goodman and Rosemary Clooney. In 1979, Jet magazine noted that Wayne Newton was "enthroned" at the Desert Inn as "king of entertainment idols", earning $10 million a year, which made him the highest-paid nightclub performer of all time. Other performers in its famous "crystal showroom" over the years included Patti Page, Ted Lewis, Joe E. 
Lewis, Bobby Darin, Jimmy Durante, Tony Bennett, Paul Anka, Dionne Warwick, Louise Mandrell, and more. Louis Prima and Keely Smith recorded their 1960 Dot Records LP On Stage live at the Desert Inn. Bobby Darin's famous album Live! At the Desert Inn was recorded at the hotel in February 1971. In 1992, a week-long celebration of Frank Sinatra's 77th birthday at the Desert Inn was held, and later in January it was announced that Sinatra, Liza Minnelli, Paul Anka, Shirley MacLaine, Dean Martin, Steve Lawrence and Eydie Gorme had all signed a two-year engagement agreeing to perform at least five weeks annually.

Film and television

Portions of Ocean's 11 were shot at the Desert Inn. It is one of the five Las Vegas hotels robbed on New Year's Eve by the characters played by Frank Sinatra, Dean Martin and others in the film. Orson Welles' film F for Fake covers, among other topics, the scandal of a fake biography of Howard Hughes, and the billionaire's Desert Inn residence is illustrated by Welles. In the 1985 film Lost in America, Julie Hagerty's character Linda Howard loses the couple's "nest egg" at the Desert Inn, leading to a memorable scene in which Albert Brooks' character David Howard tries to convince the casino manager (Garry Marshall) to give them their money back. David, an ad man, proposes a campaign centered around the generosity of the casino in his case, replete with a jingle: "The Desert Inn has heart... The Desert Inn has heart." The opening scene to the 1993 film Sister Act 2: Back in the Habit took place in the Grand Ballroom of the hotel. The Desert Inn saw its last commercial use in the 2001 film Rush Hour 2, shortly before it was imploded. It was converted into the "Red Dragon", an Asian-themed casino set. The hotel served as the primary backdrop for the TV show Vega$ which aired on ABC from 1978 to 1981. The 1980s Aaron Spelling soap opera Dynasty included footage of the hotel, and use of the Presidential Suite. 
The hit 1980s NBC TV series Remington Steele filmed its Las Vegas-set 60th episode at the inn, where both the exterior and interior are shown regularly throughout the episode.

Legacy

The closure of the Desert Inn in 2000 and its subsequent demolition were unpopular with many, as they seemed to mark the end of old Las Vegas. Historian Michael Green stated: "To a lot of people outside of Las Vegas, these two places (the Desert Inn and the Sands) really meant Las Vegas. These were the places that represent the images of Las Vegas, in a far greater way than the Dunes, the Aladdin, the Hacienda and the Landmark". Robert Maheu, Howard Hughes's head of Nevada operations and publicist for many years, remarked that the "Desert Inn was the gem of Las Vegas". The hotel remained popular with locals until the end, as the heavily tourism-driven modern Las Vegas emerged in the 1990s.

Desert Inn Road

Desert Inn Road is a 17¼-mile west–east road that is part of the Las Vegas Valley grid road system. It travels through residential, commercial, and industrial areas and serves as a major thoroughfare in the area. At the Las Vegas Strip, a 2½-mile expressway portion of the road, officially called the Desert Inn Road Super Arterial, acts as an arterial road between Winchester and Paradise. The expressway opened in 1996 and had a construction cost of US$84 million.
package de.tud.vcd.votedevice.municipalElection.view;

import java.awt.Color;
import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.RenderingHints;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.io.IOException;
import java.net.URL;
import java.util.ArrayList;

import javax.imageio.ImageIO;
import javax.swing.JComponent;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

import de.tud.vcd.votedevice.municipalElection.model.Candidate;

/**
 * Draws a candidate onto the surface. The size is again computed dynamically.
 *
 * @author Roman Jöris <roman.joeris@googlemail.com>
 */
public class ShowACandiate extends JPanel {

    private static final long serialVersionUID = 1L;
    private Color foreground;
    private Color crossedColor;
    private String id;
    private int id_maxSize;
    private String name;
    private String prename;
    private boolean crossed;
    private int votes;
    private int manualVotes;
    private Font fontId;
    private Font fontName;
    private Font fontForename;
    private int maxVotes;
    private Image imgBallotChecked;
    private Image imgBallotUnchecked;
    private Image imgBallotCheckedGray;
    boolean pressed;
    private ArrayList<ActionListener> al;
    private String actionCommand;
    private JComponent comp;

    /**
     * Creates a drawing object.
     *
     * @param width
     * @param height
     * @param foreground
     * @param crossedColor
     * @param maxVotes
     */
    public ShowACandiate(int width, int height, Color foreground, Color crossedColor, int maxVotes) {
        this.foreground = foreground;
        this.crossedColor = crossedColor;
        setOpaque(false);
        setSize(width, height);

        // fonts are scaled relative to the component height
        fontId = new Font("SansSerif", Font.PLAIN, (int) (this.getSize().height * 0.7));
        fontName = new Font(fontId.getFamily(), Font.BOLD, (int) (this.getSize().height * 0.577));
        fontForename = new Font(fontId.getFamily(), Font.PLAIN, (int) (this.getSize().height * 0.5));

        id = "0000";
        FontMetrics fm = getFontMetrics(fontId);
        id_maxSize = fm.stringWidth(id);
        name = "Name";
        prename = "Prename";
        crossed = false;
        votes = 0;
        manualVotes = 0;
        this.maxVotes = maxVotes;

        // load the ballot-box images from the classpath
        URL imageURLl = getClass().getClassLoader().getResource("ballotChecked.gif");
        imgBallotChecked = null;
        if (imageURLl != null) {
            try {
                imgBallotChecked = ImageIO.read(imageURLl);
            } catch (IOException e) {
                //e.printStackTrace();
            }
        }
        imageURLl = getClass().getClassLoader().getResource("ballotUnchecked.gif");
        imgBallotUnchecked = null;
        if (imageURLl != null) {
            try {
                imgBallotUnchecked = ImageIO.read(imageURLl);
            } catch (IOException e) {
                //e.printStackTrace();
            }
        }
        imageURLl = getClass().getClassLoader().getResource("ballotCheckedGray.gif");
        imgBallotCheckedGray = null;
        if (imageURLl != null) {
            try {
                imgBallotCheckedGray = ImageIO.read(imageURLl);
            } catch (IOException e) {
                //e.printStackTrace();
            }
        }

        al = new ArrayList<ActionListener>();
        addMouseListener(new MouseAdapter() {
            public void mousePressed(MouseEvent e) {
                if (SwingUtilities.isLeftMouseButton(e) && contains(e.getPoint())) {
                    pressed = true;
                    //repaint();
                }
            }

            public void mouseReleased(MouseEvent e) {
                if (SwingUtilities.isLeftMouseButton(e) && pressed) {
                    // call all actionPerformed:
                    for (ActionListener a : al) {
                        a.actionPerformed(new ActionEvent(comp, ActionEvent.ACTION_PERFORMED, actionCommand));
                    }
                    pressed = false;
                    //repaint();
                }
            }
        });
    }

    /**
     * @param al
     */
    public void addActionListener(ActionListener al) {
        this.al.add(al);
    }

    /**
     * @param actionCommand
     */
    public void setActionCommand(String actionCommand) {
        this.actionCommand = actionCommand;
    }

    /**
     * @return
     */
    public String getActionCommmand() {
        return actionCommand;
    }

    /* (non-Javadoc)
     * @see javax.swing.JComponent#paintComponent(java.awt.Graphics)
     */
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        // upcast --> more functions in Graphics2D
        Graphics2D g2d = (Graphics2D) g;
        // turn antialiasing on
        g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);

        g2d.setColor(foreground);
        g2d.drawLine(0, this.getSize().height - 1, this.getSize().width, this.getSize().height - 1);

        // print id, name and forename along the baseline
        g2d.setFont(fontId);
        g2d.drawString(id, 0, this.getSize().height - 5);
        g2d.setFont(fontName);
        g2d.drawString(name, id_maxSize + 5, this.getSize().height - 5);
        FontMetrics fm = getFontMetrics(fontName);
        int name_Size = fm.stringWidth(name);
        g2d.setFont(fontForename);
        g2d.drawString(", " + prename, id_maxSize + 5 + name_Size, this.getSize().height - 5);

        // paint the ballot boxes, right-aligned
        int imgSize = (int) (this.getSize().height * 0.77);      // 20/26
        int imgMarginTop = (int) (this.getSize().height * 0.15); // 4/26
        int imgPlace = (int) (this.getSize().height * 0.846);    // (20+2)/26
        for (int i = 0; i < maxVotes; i++) {
            if (i < votes) {
                if (manualVotes <= i) {
                    g2d.drawImage(imgBallotCheckedGray,
                            this.getSize().width - (maxVotes * imgPlace - i * imgPlace),
                            imgMarginTop, imgSize, imgSize, null, null);
                } else {
                    g2d.drawImage(imgBallotChecked,
                            this.getSize().width - (maxVotes * imgPlace - i * imgPlace),
                            imgMarginTop, imgSize, imgSize, null, null);
                }
            } else {
                g2d.drawImage(imgBallotUnchecked,
                        this.getSize().width - (maxVotes * imgPlace - i * imgPlace),
                        imgMarginTop, imgSize, imgSize, null, null);
            }
        }

        if (crossed) {
            g2d.setColor(crossedColor);
            g2d.drawLine(0, this.getSize().height / 2, this.getSize().width, this.getSize().height / 2);
            g2d.drawLine(0, this.getSize().height / 2 + 1, this.getSize().width, this.getSize().height / 2 + 1);
        }
    }

    /**
     * @param c
     */
    public void setCandidate(Candidate c) {
        if (!(c == null)) {
            id = c.getId() + "";
            actionCommand = c.getId() + "";
            name = c.getName();
            prename = c.getPrename();
            votes = c.getCountedVotes();
            manualVotes = c.getVotes();
            crossed = c.isCrossedOut();
            setVisible(true);
        } else {
            id = "0";
            name = "";
            prename = "";
            crossed = false;
            votes = 0;
            manualVotes = 0;
            setVisible(false);
        }
        repaint();
    }

    /**
     * @return
     */
    public int getCandidateId() {
        return Integer.parseInt(this.id);
    }
}
<?php

namespace app\controllers;

use Yii;
use app\models\Izinpenggilinganpadi;
use app\models\IzinpenggilinganpadiSearch;
use yii\web\Controller;
use yii\web\NotFoundHttpException;
use yii\filters\VerbFilter;

/**
 * IzinpenggilinganpadiController implements the CRUD actions for Izinpenggilinganpadi model.
 */
class IzinpenggilinganpadiController extends Controller
{
    public $layout = "defaultadmin.php";

    public function behaviors()
    {
        return [
            'verbs' => [
                'class' => VerbFilter::className(),
                'actions' => [
                    'delete' => ['post'],
                ],
            ],
        ];
    }

    /**
     * Lists all Izinpenggilinganpadi models.
     * @return mixed
     */
    public function actionIndex()
    {
        $searchModel = new IzinpenggilinganpadiSearch();
        $dataProvider = $searchModel->search(Yii::$app->request->queryParams);

        return $this->render('index', [
            'searchModel' => $searchModel,
            'dataProvider' => $dataProvider,
        ]);
    }

    /**
     * Displays a single Izinpenggilinganpadi model.
     * @param integer $id
     * @return mixed
     */
    public function actionView($id)
    {
        return $this->render('view', [
            'model' => $this->findModel($id),
        ]);
    }

    /**
     * Creates a new Izinpenggilinganpadi model.
     * If creation is successful, the browser will be redirected to the 'view' page.
     * @return mixed
     */
    public function actionCreate()
    {
        $model = new Izinpenggilinganpadi();

        if ($model->load(Yii::$app->request->post()) && $model->save()) {
            return $this->redirect(['view', 'id' => $model->id_ipdhdpb]);
        } else {
            return $this->render('create', [
                'model' => $model,
            ]);
        }
    }

    /**
     * Updates an existing Izinpenggilinganpadi model.
     * If update is successful, the browser will be redirected to the 'view' page.
     * @param integer $id
     * @return mixed
     */
    public function actionUpdate($id)
    {
        $model = $this->findModel($id);

        if ($model->load(Yii::$app->request->post()) && $model->save()) {
            return $this->redirect(['view', 'id' => $model->id_ipdhdpb]);
        } else {
            return $this->render('update', [
                'model' => $model,
            ]);
        }
    }

    /**
     * Deletes an existing Izinpenggilinganpadi model.
     * If deletion is successful, the browser will be redirected to the 'index' page.
     * @param integer $id
     * @return mixed
     */
    public function actionDelete($id)
    {
        $this->findModel($id)->delete();

        return $this->redirect(['index']);
    }

    /**
     * Finds the Izinpenggilinganpadi model based on its primary key value.
     * If the model is not found, a 404 HTTP exception will be thrown.
     * @param integer $id
     * @return Izinpenggilinganpadi the loaded model
     * @throws NotFoundHttpException if the model cannot be found
     */
    protected function findModel($id)
    {
        if (($model = Izinpenggilinganpadi::findOne($id)) !== null) {
            return $model;
        } else {
            throw new NotFoundHttpException('The requested page does not exist.');
        }
    }
}
Sollers Ford launches Ford Transit subscription

Sollers Ford launches Russia's first commercial vehicle subscription service from an automaker. The entire line of Ford Transit commercial vehicles is involved in the program, the Sollers Ford press service reports. The monthly payment starts from 46,024 rubles for a subscription to the all-metal Ford Transit van and depends on the subscription period and the annual mileage, which can be up to 100 thousand km per year. You can subscribe to a car at any Ford dealership or online at www.fordonline.ru. A Ford Transit subscriber gets a completely new car for a period of 12 to 60 months, already registered with the traffic police and insured under OSAGO and CASCO. The subscription includes a wide range of services: vehicle maintenance and repairs at authorized Ford dealers; roadside assistance; round-the-clock technical support; and telematics with full vehicle control via a mobile app or web portal. As AUTOSTAT reported earlier, in July Russian dealers of Ford sold 1,789 Ford Transit light commercial vehicles, 5% more than a year earlier and the model's best July result ever on the Russian market. Over the first seven months of 2021, sales of the Ford Transit in Russia totaled 10,094 vehicles, up 65% on the same period of the previous year.
Thuja standishii foliage and cones

Kingdom: Plantae; Class: Pinopsida; Order: Pinales; Family: Cupressaceae; Genus: Thuja.

Arborvitae is the common name for any of the coniferous evergreen trees or shrubs comprising the genus Thuja (pronounced "thoo-ya" or "thoo-ja") in the cypress family, Cupressaceae. There are five species in the genus, two native to North America and three from Eastern Asia. Some are colloquially known as cedars, such as the Western redcedar, Thuja plicata, one of the largest trees in total volume in the world. (The "true cedars," however, are the trees of the genus Cedrus.) Two species in other genera, Platycladus and Thujopsis, also have the common name of arborvitae. Platycladus orientalis is known as the Chinese or oriental arborvitae, while Thujopsis dolabrata is known variously as the false arborvitae or as Hiba arborvitae. Thuja plicata, the giant arborvitae or Western redcedar (or western red cedar), is a popular timber tree, as is Thuja occidentalis, the Northern white cedar or American arborvitae. Arborvitae are also popular as ornamental trees, particularly given their rapid growth and value for hedges, and various parts have been used for medicinal purposes. These values, along with their ecological importance, reflect the principle of interdependence, whereby species not only advance their own individual purpose of survival and reproduction, but also provide a larger value (for the ecosystem, humans).

The name arborvitae comes from the Latin for "tree of life." Arborvitae (Thuja) is a type of conifer. The conifers comprise division Pinophyta (also known as division Coniferae), one of 13 or 14 division-level taxa within the Plant Kingdom (Plantae). They are cone-bearing seed plants (specifically gymnosperms) with vascular tissue. 
All living conifers are woody plants, the great majority being trees with just a few being shrubs. Typical examples of conifers include cedars, cypresses, firs, junipers, pines, redwoods, spruces, and yews. As gymnosperms, conifers bear their seeds "naked," not covered by an ovary. The other type of seed plants, the angiosperms (flowering plants), cover their seeds by including them in a true fruit. Most conifers have a monopodial growth form (a single, straight trunk with side branches) with strong apical dominance (the trunk is dominant over the branches). Arborvitae belong to the cypress family, Cupressaceae. This is a conifer family with worldwide distribution, including about 27 to 30 genera with about 130-140 species. The bark of mature trees is commonly orange- to red-brown and of stringy texture, often flaking or peeling in vertical strips, but smooth, scaly or hard and square-cracked in some species. The leaves are arranged either spirally, in decussate pairs (opposite pairs, each pair at 90° to the previous pair) or in decussate whorls of three or four, depending on the genus. On young plants, the leaves are needle-like, becoming small and scale-like on mature plants of many (but not all) genera; some genera and species retain needle-like leaves throughout their life. Most are evergreen with the leaves persisting 2-10 years, but three genera (Glyptostrobus, Metasequoia, Taxodium) are deciduous or include deciduous species. The seeds are mostly small and somewhat flattened, with two narrow wings, one down each side of the seed; rarely (e.g., Actinostrobus) triangular in section with three wings; in some genera (e.g. Glyptostrobus, Libocedrus) one of the wings is significantly larger than the other, and in some others (e.g., Juniperus, Microbiota, Platycladus, Taxodium) the seed is larger and wingless. 
The pollen cones are more uniform in structure across the family, 1-20 mm long, with the scales again arranged spirally, decussate (opposite) or whorled, depending on the genus. The Cupressaceae family is notable for including the largest, tallest, and stoutest individual trees in the world, and also the second longest-lived species in the world. The largest is the Giant Sequoia (1486.9 m³ trunk volume), the tallest is the Coast Redwood (115.55 meters tall), the stoutest is the Montezuma Cypress or Ahuehuete (11.42 meters in diameter), and the second oldest is the Alerce (3622 years).

Quinault Lake Redcedar, third largest tree in volume in the world

The leaves of Thuja are evergreen, opposite, and scale-like, except on young seedlings, where they are needle-like. The scales are arranged in four rows along the twigs. The branches are flattened and spraylike or fanlike. The male cones are small, inconspicuous, and are located at the tips of the twigs. The female cones start out similarly inconspicuous, but grow to about 1-2 centimeters (cm) long with 6-12 overlapping, thin, leathery scales. The outer bark is thin and scaly. Thuja species can grow three to five feet (1 to 1.5 meters) per year and can attain heights of nearly 50 feet (15 meters). A western redcedar, Thuja plicata, known as the Quinault Lake Redcedar, has an estimated total volume of 500 m³, making it the third largest tree in total volume, after a giant sequoia, Sequoiadendron giganteum (the General Sherman tree), and a coast redwood, Sequoia sempervirens (the Del Norte Titan tree).

Species of Thuja

Thuja koraiensis - Korean Thuja
Thuja occidentalis - Eastern Arborvitae, Northern Whitecedar, American arborvitae
Thuja plicata - Western Redcedar, giant arborvitae, red cedar
Thuja standishii - Japanese Thuja
Thuja sutchuenensis - Sichuan Thuja

A hybrid between T. standishii and T. plicata has been named as the cultivar Thuja "Green Giant." 
Another very distinct and only distantly related species, formerly treated as Thuja orientalis (oriental or Chinese arborvitae), is now treated in a genus of its own, as Platycladus orientalis. The closest relatives of Thuja are Thujopsis dolabrata, distinct in its thicker foliage and stouter cones, and Tetraclinis articulata, distinct in its quadrangular (not flattened) foliage and cones with four thick, woody scales.

Arborvitae have commercial, medicinal, and aesthetic uses. The wood of Thujas is light, soft, durable, and aromatic. It can be easily split and resists decay. The wood has been used for many applications, from making chests that repel moths to shingles. Arborvitae poles are also often used to make fence posts and rails. The wood of the giant arborvitae, Thuja plicata, is commonly used for guitar soundboards. Overall, Thuja plicata is a particularly important timber tree, and the American arborvitae (Thuja occidentalis) is also popular.

The foliage of Thujas is rich in vitamin C and was used by Native Americans and early European explorers as a cure for scurvy. The leaves have been used as a treatment for rheumatism. Oil of arborvitae is often cited as a topical herbal remedy for HPV, genital, or common warts; however, clinical evidence for this action is lacking. Thuja also serves as a popular homeopathic remedy, used to treat a variety of psychological and physiological conditions.

Arborvitae are popular ornamental trees. As very fast-growing trees, they are particularly popular for their ability to create a natural privacy fence in a very short time. Thuja species are used as food plants by the larvae of some Lepidoptera species, including the Autumnal Moth, The Engrailed, and the Juniper Pug.
According to Emile Albouy, the chronological data (46) of Camille Jullian concerning the siege of Uxellodunum are limited to two dates, that of the defeat of Drappès and that of the surrender of the place, which the historian suggested to be, respectively, "in July" and "in August". As a starting point, we should read these as "around mid-July" and "around mid-August". The dates indicated by Champollion-Figeac (47) are "mid-September" and "mid-October". Between the two chronologies the difference is two months, but the duration of the event is the same in both cases: about a month.

According to Camille Jullian (48), Drappès was defeated "at harvest time", on which basis the historian places the event in July. But it must be taken into account that the Gauls had resupplied with grain and not with ears, and moreover that harvesting before mid-July is an exceptional occurrence in Quercy. The date of the event must therefore be moved later, to around mid-August. As for the date of the surrender of Uxellodunum, Camille Jullian places it (page 563-3) in August since, in chapter (BG: VIII, 46), it is said that after the surrender of the place Caesar went to Aquitaine with two legions to spend the last days of the summer campaign there. But summer ends neither in August nor in October but in September, and a "summer campaign" would end practically at the end of September. Caesar's departure, as a consequence of the surrender of Uxellodunum, must therefore have taken place in mid- to late September. We thus arrive at the dates of the corrected chronology, and at this conclusion: the dates indicated by Camille Jullian need to be delayed by one month or, which amounts to the same thing, the corresponding dates of the Champollion-Figeac chronology must be advanced accordingly.
It is also easy to see that the elapsed time between the two benchmark events (one month) cannot be increased or decreased by more than ten days without one or other of the conditions of the problem being invalidated. That is to say, unless there is an unexpected interpretation of the words "summer campaign" in chapter (B.G.: VIII, 46), or the wheat harvest of the year 51 BC was exceptionally early, the timing is accurate to within ten days or so. With regard to the timing of the events that occurred during that year, as well as the position in time of the few other facts prior to the defeat of Drappès that Champollion-Figeac identified in his chronology, this revised chronology can only be inaccurate by a few days at most. Uxellodunum capitulates towards the end of September or, at the latest, at the beginning of October.

Eloi Itard and André Noché (49) arrive at the same conclusion by putting forward the following sequence of facts, spanning 49 to 76 days, from the end of July or beginning of August to the end of September 51 BC:
- The arrival of Lucterios and Drappès at Puy d'Issolud.
- The arrival of Caninius, the installation of three camps, and the start of defensive construction works.
- The decision of Lucterios and Drappès; their departure by night.
- Requisition of grain.
- Repeated nocturnal attacks on the Romans' defensive structures so that convoys of supplies could be taken up to the oppidum.
- During the last convoy, Lucterios fled; the camp of the supply convoys was destroyed and Drappès taken prisoner.
- Couriers sent to Caesar, who was with the Carnutes (Chartres region).
- Work on the defences restarted; arrival of Fabius; the works completed.
- The couriers having been received at Chartres, Caesar leaves for Uxellodunum with his cavalry and arrives with the two legions of Caninius.
- Caesar weighs up the situation and decides on a siege, cutting off the Gauls' water supply.
- Constructions put in place to prevent the Gauls' access to water from the river, leaving them no choice but to get water from the Loulié spring.
- Construction of the platform (agger), 17 m high, and a tower 27 m high; at the same time, the digging of the tunnels.
- Firing from the top of the tower is partly successful: animals and men suffer and perish, but the Gauls continue to resist.
- Operation "barrels": the setting of fire to the platform and tower, followed by a simulated attack by the Romans. The besieged Gauls remain tenacious despite their thirst.
- Decisive success of the tunneling: the water source dries up.