{"url":"https:\/\/math.stackexchange.com\/questions\/3229517\/what-is-the-radical-of-upper-triangular-matrices","text":"# What is the radical of upper triangular matrices?\n\nLet $$B$$ be the algebraic group of upper triangular matrices with entires in some algebraically closed field. I would like to know what is the radical of this group is... Any explanation would be appreciated. thanks you.\n\nThe unipotent radical of the group $$B_n(K)$$, which is the standard Borel subgroup of $$GL_n(K)$$, consists of unitriangular uppertriagular matrices, i.e., with all diagonal elements equal to $$1$$. The (solvable) radical of $$B_n(K)$$ equals $$B_n(K)$$ itself, since the Borel subgroup is solvable.","date":"2021-06-19 13:24:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 6, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9200955033302307, \"perplexity\": 74.59778084353219}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-25\/segments\/1623487648194.49\/warc\/CC-MAIN-20210619111846-20210619141846-00571.warc.gz\"}"}
\section{Introduction} \setcounter{equation}{0} \ \indent SU(2) BPS monopoles are topological soliton solutions of a Yang-Mills-Higgs gauge theory in three space dimensions. They are Bogomolny solitons, in that they attain a topological lower bound on the total energy, and so can be obtained as solutions of a first order equation (the Bogomolny equation) rather than the more general second order field equations. In this letter we shall mainly be concerned with static monopoles. The ingredients of the static theory are the Higgs field $\Phi$, and the gauge potential $A_i, \ i=1,2,3$, both of which are {\sl su(2)}-valued. The static theory can be defined by its energy density \begin{equation} {\cal E}=-\frac{1}{2}\mbox{tr}(D_i\Phi)(D_i\Phi)-\frac{1}{4} \mbox{tr}(F_{ij}F_{ij}) \label{energy} \end{equation} where $D_i=\frac{\partial}{\partial x_i}+[A_i,\,\cdot\,]$ is the covariant derivative and $F_{jk}$ the gauge field. Integrating the energy density over all $\mathbb{R}^3$ gives the energy $E$ of any configuration. The boundary condition \begin{equation} \|\Phi\|\rightarrow 1 \hskip 10pt\mbox{as} \hskip 10pt r\rightarrow\infty \label{bc} \end{equation} where $r=\vert\mbox{\boldmath $x$}\vert$, $\|\Phi\|^2=-\frac{1}{2}\mbox{tr}\Phi^2$, is imposed and may be thought of as a residual finite energy condition derived from a vanishing Higgs potential. The topological aspect arises because the Higgs field at infinity induces a map between spheres: \begin{equation}\Phi:S^2(\infty)\rightarrow S^2(1)\end{equation} where $S^2(\infty)$ is the two-sphere at spatial infinity and $S^2(1)$ is the two-sphere of vacuum configurations given by $\{\Phi\in su(2) : \|\Phi\|=1\}.$ The degree of this map is an integer $k$, the winding number, which (in suitable units) is the total magnetic charge of the monopole. We shall refer to a monopole with magnetic charge $k$ as a $k$-monopole. The Bogomolny bound $$E\ge 8\pi \vert k\vert$$ gives a lower bound on the total energy of a configuration in terms of its winding number $k$. For each $k$ this bound can be attained, with the relevant configuration being a solution of the first order Bogomolny equation \begin{equation} D_i\Phi=\pm\frac{1}{2}\epsilon_{ijk}F_{jk} \label{bog} \end{equation} where the lower sign corresponds to positive $k$, with solutions called monopoles, and the upper sign corresponds to $k$ being negative, where solutions are called anti-monopoles. We now choose to consider monopoles, i.e.\ $k>0$, and so fix the sign in (\ref{bog}) to be the lower one. By topological considerations the total number of zeros of the Higgs field {\sl counted with multiplicity} is $k$ for a $k$-monopole. These zeros need not be distinct; for example, axially symmetric monopoles exist for all $k\ge 2$ \cite{W,P} for which there is a single zero that has multiplicity $k$. When the zeros are distinct, and well separated, the $k$-monopole solution has a natural interpretation as $k$ well-separated unit charge monopoles, each one centred at a zero of the Higgs field. Such solutions exist because physically there are no static forces between equally charged monopoles \cite{Ma}. The moduli space of charge $k$ monopoles is $4k$ dimensional \cite{CG}, and this is again consistent with the well-separated picture, where each individual charge one monopole has three position degrees of freedom and one internal phase.
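For orientation, we sketch the standard completion-of-the-square argument behind the Bogomolny bound (a well-known computation, included here only for completeness). Completing the square in (\ref{energy}) gives
\begin{equation*}
E=\int\Big[-\frac{1}{2}\mbox{tr}\,\big(D_i\Phi\mp\tfrac{1}{2}\epsilon_{ijk}F_{jk}\big)\big(D_i\Phi\mp\tfrac{1}{2}\epsilon_{ilm}F_{lm}\big)\Big]\,d^3x
\mp\frac{1}{2}\int\epsilon_{ijk}\,\mbox{tr}\,(D_i\Phi\, F_{jk})\,d^3x.
\end{equation*}
The first integrand is non-negative, since $-\mbox{tr}(X^2)\ge0$ for any {\sl su(2)}-valued $X$, while the Bianchi identity turns the second integrand into the total derivative $\partial_i[\epsilon_{ijk}\mbox{tr}(\Phi F_{jk})]$, whose integral is a surface term at infinity proportional to the winding number; with the appropriate choice of sign it evaluates to $8\pi|k|$. Hence $E\ge 8\pi|k|$, with equality precisely when (\ref{bog}) holds.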
So the general picture appears to be that the Higgs field of a charge $k$ monopole has $k$ simple zeros, which may be thought of as the locations of the individual monopoles, and these zeros can coalesce as the monopoles merge. Although this picture has never been rigorously proved, it is widely accepted as true. Indeed there are at least three compelling reasons for believing the above. Firstly, all the known explicit monopole solutions do indeed have a Higgs field of this form. Secondly, in the analogous two dimensional case of abelian Higgs vortices at critical coupling, it has been proved that the total number of zeros of the Higgs field is bounded by the number of vortices \cite{JT}. Furthermore, in other models with topological solitons, such as the O(3) $\sigma$-model in the plane, the general static $k$-soliton solution can be given explicitly and a suitable field shown to have the above structure of zeros \cite{BP}. Finally, if the total number of Higgs zeros (ignoring multiplicities) were greater than $k$ then this would imply that zeros with {\sl negative multiplicity} must exist, for the summed multiplicities to equal $k$. From now on we shall refer to a zero with a negative multiplicity as an anti-zero. An example of a configuration with an anti-zero is of course a single anti-monopole, which has $k=-1$. So it would seem highly unlikely that a monopole configuration with $k>0$ could contain anti-zeros which were well-separated from other zeros, since we could interpret such a configuration as composed of monopoles and anti-monopoles. Since there are attractive forces between monopoles of opposite charge such a configuration could not saturate the Bogomolny energy bound. Despite this wealth of circumstantial evidence, it was argued in a recent paper \cite{HSc}, with the aid of numerical results, that a positive charge monopole solution exists which contains anti-zeros. In this letter we briefly review this result for the tetrahedral 3-monopole and then investigate the presence of anti-zeros for the remaining Platonic monopoles. Other aspects of anti-zeros will also be discussed, such as a signal for their occurrence and their relevance to skyrmions. \section{Zeros of Platonic Monopoles} \setcounter{equation}{0} \ \indent Recently it has been shown that monopoles exist which have the symmetries of the Platonic solids \cite{HMM,HSb}. The actual monopole fields $\Phi,A_i$, were not calculated explicitly but rather a twistor approach was taken in which monopole solutions can be shown to be equivalent to certain algebraic objects, called spectral curves \cite{Ha}. The spectral curves were explicitly found, from which the existence and symmetries of the monopoles follow. Using a numerical implementation of the twistor transform \cite{HSa}, the Higgs field and energy density of these monopoles can be computed. The results were displayed graphically in the form of a three-dimensional plot of a surface of constant energy density. For each of the four newly discovered monopoles it was found that a surface of constant energy density resembled a Platonic solid. The results are summarized in Table 1, where we give the monopole charge $k$ and the Platonic solid it resembles. In each case the energy density takes its maximum values on the vertices of the relevant Platonic solid. \begin{center} \begin{tabular}{|c|c|} \hline Charge $k$ & Platonic solid \\ \hline 3 & Tetrahedron \\ 4 & Cube \\ 5 & Octahedron \\ 7 & Dodecahedron \\ \hline \end{tabular} \end{center} \begin{center} Table 1.
{\sl Charges of Platonic Monopoles } \end{center}\ In these papers the Higgs field and its zeros were not studied, since if there are $k$ zeros then in each case the symmetry group acting implies that all $k$ zeros must be at the origin. For example, for the tetrahedral monopole $k=3$ and if there are only three zeros then in order to arrange three points with tetrahedral symmetry all three points must be at the origin. However, using the moduli space approximation \cite{M,S} the dynamics of $k$ slowly moving monopoles can be approximated by geodesic motion on the monopole moduli space ${\cal M}_k$. In \cite{HSc} we presented a totally geodesic one dimensional submanifold of the 3-monopole moduli space, which contains the tetrahedral 3-monopole. This geodesic may therefore be interpreted in terms of the scattering of three monopoles which instantaneously form the tetrahedral 3-monopole. These results make it appear very unnatural (see \cite{HSc} for more details) that there are three zeros at the origin and so we examined the Higgs field in more detail. Writing the Higgs field in terms of Pauli matrices as \begin{equation} \Phi=i\sigma_1\varphi_1+i\sigma_2\varphi_2 +i\sigma_3\varphi_3 \end{equation} we plotted the components $\varphi_1,\varphi_2,\varphi_3$ along the line $x_1=x_2=x_3=L$, which goes through a vertex (at a negative value of $L$) and the centre of a face (at a positive value of $L$) of the tetrahedron associated with the tetrahedral monopole. Fig 1. shows the results, and it is clear that along this line there are two points at which all the components of the Higgs field vanish. One point is the origin ($L=0$) and the second occurs at a negative value of $L$, which indicates it is associated with a vertex of the tetrahedron, rather than a face. There are another three similar lines, going through the remaining vertices of the tetrahedron, and these were the only other lines along which Higgs zeros were found. So the result is that there are five Higgs zeros, one associated with each vertex of the tetrahedron and one at the origin. Since the monopole charge is three, the zero at the origin must be an anti-zero. This can be checked numerically (see \cite{HSc} for details of the scheme) by computing the winding number $Q(r)$ of the unit 3-vector \begin{equation} \psi=(\varphi_1,\varphi_2,\varphi_3) \frac{1}{\sqrt{\varphi_1^2+\varphi_2^2+\varphi_3^2}} \end{equation} corresponding to the normalized Higgs field on a two-sphere of radius $r$, centred at the origin. Note that by definition $Q(R)=k$ if $R$ is sufficiently large, so that all zeros of the Higgs field are contained within the ball of radius $R$ centred at the origin. Such a calculation gives $Q(0.2)=-1$ and $Q(1.0)=+3$, confirming that there is indeed an anti-zero at the origin. Having briefly reviewed the results for the tetrahedral monopole we now go on to investigate the Higgs zeros of the other Platonic monopoles. We begin with the cubic 4-monopole. First of all we explicitly prove that the Higgs field of the cubic monopole is zero at the origin. It is useful to give the details of this calculation, since it demonstrates the kind of work required to prove the results which the numerical evidence suggests.
Recall the ADHMN construction \cite{N,Hb}, which is an equivalence between $k$-monopoles and Nahm data $(T_1,T_2,T_3)$, three $k\times k$ matrices which depend on a real parameter $s\in[0,2]$ and satisfy the following:\\ \newcounter{con} \setcounter{con}{1} (\roman{con}) Nahm's equation \begin{equation} \frac{dT_i}{ds}=\frac{1}{2}\epsilon_{ijk}[T_j,T_k], \nonumber \end{equation}\\ \addtocounter{con}{1} (\roman{con}) $T_i(s)$ is regular for $s\in(0,2)$ and has simple poles at $s=0$ and $s=2$,\\ \addtocounter{con}{1} (\roman{con}) the matrix residues of $(T_1,T_2,T_3)$ at each pole form the irreducible $k$-dimensional representation of SU(2),\\ \addtocounter{con}{1} (\roman{con}) $T_i(s)=-T_i^\dagger(s)$,\\ \addtocounter{con}{1} (\roman{con}) $T_i(s)=T_i^t(2-s)$.\\ Finding the Nahm data effectively solves the nonlinear part of the monopole construction and is enough to prove existence of the monopole and compute its spectral curve. In fact this is how the spectral curves of the Platonic monopoles were calculated. However in order to calculate the Higgs field the linear part of the ADHMN construction must also be implemented. Given Nahm data $(T_1,T_2,T_3)$ for a $k$-monopole we must solve the ordinary differential equation \begin{equation} \Big(\mathbf{1}_{2k}\frac{d}{ds}+\mathbf{1}_k\otimes x_j\sigma_j +iT_j\otimes\sigma_j\Big){\bf v}=0 \label{lin} \end{equation} for the complex $2k$-vector ${\bf v}(s)$, where $\mathbf{1}_k$ denotes the $k\times k$ identity matrix, $\sigma_j$ are the Pauli matrices and ${\bf x}=(x_1,x_2,x_3)$ is the point in space at which the Higgs field is to be calculated. Introducing the inner product \begin{equation} \langle{\bf v}_1,{\bf v}_2\rangle =\int_0^2 {\bf v}_1^\dagger{\bf v}_2\ ds \label{ip} \end{equation} the solutions of (\ref{lin}) which we require are those which are normalizable with respect to (\ref{ip}). It can be shown that the space of normalizable solutions to (\ref{lin}) has (complex) dimension 2. If $\widehat {\bf v}_1,\widehat {\bf v}_2$ is an orthonormal basis for this space then the Higgs field $\Phi$ is given by \begin{equation} \Phi=i\left[ \begin{array}{cc} \langle(s-1)\widehat {\bf v}_1,\widehat {\bf v}_1\rangle & \langle(s-1)\widehat {\bf v}_1,\widehat {\bf v}_2\rangle \\ \langle(s-1)\widehat {\bf v}_2,\widehat {\bf v}_1\rangle & \langle(s-1)\widehat {\bf v}_2,\widehat {\bf v}_2\rangle \end{array} \right]. \label{higgs} \end{equation} For the cubic monopole $k=4$ and the Nahm data is explicitly known \cite{HMM}. Writing ${\bf v}=(v_1,v_2,v_3,v_4,v_5,v_6,v_7,v_8)^t$, (\ref{lin}) becomes the set of equations \begin{eqnarray} & &\dot v_1+x_3v_1+(x_1+ix_2)v_2 +(4y+3x)v_1+20yv_8=0 \nonumber\\ & &\dot v_2-x_3v_2+(x_1-ix_2)v_1 -(4y+3x)v_2+2\sqrt{3}(-2y+x)v_3=0 \nonumber\\ & &\dot v_3+x_3v_3+(x_1+ix_2)v_4 +2\sqrt{3}(-2y+x)v_2+(-12y+x)v_3=0 \nonumber\\ & &\dot v_4-x_3v_4+(x_1-ix_2)v_3 +(12y-x)v_4+4(3y+x)v_5=0 \nonumber\\ & &\dot v_5+x_3v_5+(x_1+ix_2)v_6 +4(3y+x)v_4+(12y-x)v_5=0 \nonumber\\ & &\dot v_6-x_3v_6+(x_1-ix_2)v_5 +(-12y+x)v_6+2\sqrt{3}(-2y+x)v_7=0 \nonumber\\ & &\dot v_7+x_3v_7+(x_1+ix_2)v_8 +2\sqrt{3}(-2y+x)v_6-(4y+3x)v_7=0 \nonumber\\ & &\dot v_8-x_3v_8+(x_1-ix_2)v_7 +20yv_1+(4y+3x)v_8=0.
\end{eqnarray} where dot denotes differentiation with respect to $s$, and \begin{eqnarray} x(s)&=&\frac{\kappa}{5}\left(-2\sqrt{\wp(\kappa s)}+\frac{1}{4}\frac{\wp^\prime(\kappa s)}{\wp(\kappa s)}\right)\\ y(s)&=&\frac{\kappa}{20}\left(\sqrt{\wp(\kappa s)}+\frac{1}{2} \frac{\wp^\prime(\kappa s)}{\wp(\kappa s)}\right) \end{eqnarray} with $\kappa$ a known constant and $\wp$ the elliptic function satisfying \begin{equation} \wp^{\prime 2}=4\wp^3-4\wp.\end{equation} Here prime denotes differentiation with respect to the argument. The constant $\kappa=\Gamma(1/4)^2/\sqrt{8\pi}$ is such that the real period of the elliptic function is $2\kappa$. We wish to calculate the Higgs field at the origin so we set $x_1=x_2=x_3=0$. Then the first and last equations decouple from the rest, so we may look for a solution with $v_2=v_3=v_4=v_5=v_6=v_7=0$ to give \begin{eqnarray} & &\dot v_1 +(4y+3x)v_1+20yv_8=0 \nonumber\\ & &\dot v_8 +(4y+3x)v_8+20yv_1=0. \end{eqnarray} The symmetry of this system allows the reduction $v_8=v_1$ which brings us to the single equation \begin{equation} \dot v_1+(24y+3x)v_1=0. \end{equation} Now \begin{equation} 24y+3x=\frac{3\kappa}{4}\frac{\wp^\prime}{\wp}=\frac{3}{4} \frac{\dot\wp}{\wp} \end{equation} so the equation is \begin{equation} \dot v_1+\frac{3}{4}\frac{\dot\wp}{\wp} v_1=0 \end{equation} with solution \begin{equation} v_1=A\wp^{-3/4} \end{equation} where $A$ is a constant. The properties of the elliptic function $\wp$ are such that $v_1$ is finite for $s\in[0,2]$. Hence we have our first unit norm solution \begin{equation} \widehat{\bf v}_1=B^{-1}\wp^{-3/4}(1,0,0,0,0,0,0,1)^t \end{equation} where $B$ is the constant \begin{equation} B^2=2\int_0^2 \wp^{-3/2}\ ds. \end{equation} In a similar way the fourth and fifth equations in (\ref{lin}) decouple to give \begin{equation} \widehat{\bf v}_2=B^{-1}\wp^{-3/4}(0,0,0,1,1,0,0,0)^t. \end{equation} Substituting these solutions into (\ref{higgs}) we have that \begin{equation} \Phi=i2B^{-2} \mathbf{1}_2 \int_0^2 (s-1)\wp^{-3/2}\ ds. \label{phizero} \end{equation} However, $\wp(\kappa s)$ is symmetric on the real line about its half period at $s=1$, so the integrand in (\ref{phizero}) is antisymmetric about $s=1$: under the substitution $s\mapsto 2-s$, \begin{equation*} \int_0^2 (s-1)\wp^{-3/2}\ ds=-\int_0^2 (s-1)\wp^{-3/2}\ ds=0, \end{equation*} and hence $\Phi=0$. So finally, we have proved that the Higgs field of the cubic monopole has a zero at the origin. To find the Higgs field at points other than the origin requires a similar calculation, but it is more involved, since for a general point the set of equations no longer decouples in a simple way. This is why it is a difficult task to prove that the tetrahedral 3-monopole has five zeros, and instead we rely on numerical results. Essentially the numerical scheme \cite{HSa} solves the linear differential system (\ref{lin}), extracts an orthonormal basis for the normalizable solutions and performs the required integrations to obtain the Higgs field. Returning to a numerical investigation of the cubic monopole we plot, in Fig 2. (the solid curve), the norm squared of the Higgs field $\|\Phi\|^2$ along the line $x_1=x_2=x_3=L$. This line goes through two vertices of the cube associated with the cubic monopole. It is not easy to see exactly where this function is zero, so we also plot in Fig 3. the component $\varphi_2$ (solid curve). From this plot it seems relatively clear that the cubic monopole has a zero of the Higgs field only at the origin, and not at points associated with the vertices of a cube.
Calculation of the Higgs field along other lines, for example through the centre of a face, did not reveal any other zeros. So it would seem that the cubic monopole is not like the tetrahedral monopole, and does not possess anti-zeros. Supporting evidence comes from a calculation of the winding number around the origin, which gives $Q(0.1)=+4$, in agreement with the cubic monopole having just four zeros, which are all located at the origin. Having demonstrated that the tetrahedral monopole appears to have an anti-zero, it may now seem surprising that the cubic monopole has no anti-zeros. However, there are several reasons why we might expect the cubic monopole to be unlike the tetrahedral monopole. The first is obvious, in that if the cubic monopole had a zero at each vertex then, since it has charge four, there would have to be four anti-zeros at the origin, i.e.\ an anti-zero with local winding $-4$. This requirement of multiple anti-zeros is not impossible, but it does seem a little contrived. A second reason comes from the study of monopole scattering, which can be addressed using the moduli space approximation, as mentioned earlier. There is a totally geodesic one dimensional submanifold of ${\cal M}_3$ which contains the tetrahedral monopole \cite{HSc}. The Nahm data associated with this submanifold involves the family of elliptic curves \begin{equation} y^2=4x^3-3(a^2-4)^{2/3}x-4 \label{curvea} \end{equation} where $a\in\mathbb{R}$ is a parameter, such that $a=\pm 2$ gives the tetrahedral monopole. Now there are points on the geodesic which correspond to three well-separated unit charge monopoles, so we know that at such points the corresponding monopole configuration can have no anti-zeros. Since the tetrahedral monopole has anti-zeros there must be special \lq splitting points\rq\ along the geodesic at which anti-zeros appear/disappear. The discriminant of the elliptic curve (\ref{curvea}) is \begin{equation} \Delta=27a^2(a^2-8) \end{equation} which vanishes at the three points $a=0,\pm\sqrt{8}$. The numerical results are consistent with the conjecture that the splitting points occur at these three parameter values, at which the elliptic curve is singular. There is a geodesic in ${\cal M}_4$ which contains the cubic monopole \cite{HSa,Sa} and is associated with the family of elliptic curves \begin{equation} y^2=4x^3-4x+12a^2 \label{curveb} \end{equation} where $a\in(-3^{-5/4}\sqrt{2},+3^{-5/4}\sqrt{2})$. The discriminant of this elliptic curve is \begin{equation} \Delta=16(4-3^5a^4) \end{equation} which never vanishes for $a$ in its allowed range. Hence, if the above conjecture is correct, then there are no splitting points along this geodesic. Since the geodesic contains points which correspond to four well-separated unit charge monopoles, this implies that the cubic monopole has no anti-zeros. In Fig 2. and Fig 3. we plot $\|\Phi\|^2$ and $\varphi_2$ (dashed lines) along the line $x_1=x_2=x_3=L$ for two monopole configurations corresponding to two other points on the geodesic in ${\cal M}_4$. From these (and similar) plots it can be seen that in each case there is only one zero along this line, which can be tracked as it moves in from infinity, through the origin and back out to infinity in the opposite direction along the line. There is no signature of any splitting of zeros taking place.
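As an illustrative aside (this is our own minimal sketch, not the numerical scheme of \cite{HSa}), winding numbers such as $Q(r)$ can be computed by integrating the pullback of the area form of $S^2(1)$ by the normalized field $\psi$ over a sphere of radius $r$; the hedgehog field $\psi={\bf x}/r$, with winding number one, serves as a test case:
\begin{verbatim}
import numpy as np

def winding_number(psi, r, n_theta=200, n_phi=200):
    # Degree of psi: S^2(r) -> S^2(1), computed by quadrature of
    # (1/4pi) \int psi . (d_theta psi x d_phi psi) dtheta dphi.
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    T, P = np.meshgrid(theta, phi, indexing="ij")
    x = r * np.stack([np.sin(T) * np.cos(P),
                      np.sin(T) * np.sin(P),
                      np.cos(T)])
    u = psi(x)                          # field evaluated on the sphere
    u = u / np.linalg.norm(u, axis=0)   # normalize, as in the text
    du_dth = np.gradient(u, dth, axis=1)
    du_dph = np.gradient(u, dph, axis=2)
    integrand = np.einsum("i...,i...->...", u,
                          np.cross(du_dth, du_dph, axis=0))
    return integrand.sum() * dth * dph / (4.0 * np.pi)

hedgehog = lambda x: x                  # psi = x/r after normalization
print(round(winding_number(hedgehog, r=1.0)))   # prints 1
\end{verbatim}
For a monopole field one would replace the hedgehog by the numerically computed components $(\varphi_1,\varphi_2,\varphi_3)$; the quoted values $Q(0.1)$, $Q(0.2)$, etc.\ are of exactly this type.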
A final indication that the tetrahedral and cubic monopoles have a different structure in their zeros comes from an analogy with another kind of topological soliton, the skyrmion. Numerical evidence suggests \cite{BTC} that the minimal energy 3-skyrmion and the minimal energy 4-skyrmion resemble a tetrahedron and a cube respectively, so there is some similarity between monopoles and skyrmions. Using instanton generated skyrmions \cite{LM} these kinds of configurations were investigated in more detail, and it was found that the tetrahedral skyrmion has regions in which the baryon density is negative, but no such regions were found for the cubic skyrmion. For skyrmions the baryon density is the quantity which when integrated over all $\mathbb{R}^3$ gives the number of skyrmions, i.e.\ it is the topological charge density. For an anti-skyrmion the baryon density is negative, so for a skyrmion configuration with positive topological charge to have a region in which the baryon density is negative is analogous in the monopole context to a region containing an anti-zero. Hence, if the tetrahedral monopole contains anti-zeros, but the cubic monopole does not, then this is yet another parallel between monopoles and skyrmions. Having looked for anti-zeros in the tetrahedral and cubic monopoles and found apparently different answers for each, it is by no means clear what the situation will be for the other two Platonic monopoles. Note that for the octahedron, since the charge is five, a zero at each vertex would imply that only a single anti-zero is required at the origin. Thus in this respect the octahedral monopole is like the tetrahedral monopole rather than the cubic monopole, and is a candidate for anti-zeros. Fig 4. shows a plot of $\|\Phi\|^2$ for the octahedral monopole, along the line $x_1=x_2=0, \ x_3=L$, which passes through two vertices of the associated octahedron. This clearly suggests that there are three zeros along this line. There are two other similar lines, so we find that the octahedral monopole has a zero on each of the six vertices of the octahedron and an anti-zero at the origin. This conclusion is supported by a winding number calculation which gives $Q(3.0)=+5$ and $Q(0.1)=-1$. Numerical results for the dodecahedral 7-monopole are not as conclusive as for the other three Platonic monopoles, but seem to suggest that it is like the cubic 4-monopole in not possessing anti-zeros. This would seem the most acceptable result, since if the dodecahedral monopole had anti-zeros in the same manner as the tetrahedral and octahedral monopoles then this would require multiple anti-zeros (in fact thirteen) at the origin. It would clearly be desirable to test the conjecture, relating splitting points to singular elliptic curves, with other examples. In particular it would be instructive if Nahm data could be found which corresponds to a geodesic in ${\cal M}_5$ that includes the octahedral monopole. The conjecture implies that the associated family of elliptic curves should contain singular curves. Two appropriate one-dimensional totally geodesic submanifolds of ${\cal M}_5$ are known \cite{HSb,HSc}, but unfortunately the computation of the associated Nahm data appears not to be a tractable problem. However, a more suitable candidate does appear to exist and is obtained by imposing tetrahedral symmetry on five monopoles.
This should be investigated, as it could provide a simple counterexample proving the conjecture false, if it could be shown that such a geodesic exists and its associated family of elliptic curves contains no singular curves. \section{Conclusion} \setcounter{equation}{0} \ \indent In this letter we point out that monopole solutions appear to exist which saturate the Bogomolny energy bound and yet have more zeros of the Higgs field than the number of monopoles. We refer to such spurious zeros as anti-zeros, since they have a local winding whose sign is opposite to that of the total charge of the monopole. Whether such monopole configurations could be interpreted as BPS monopole anti-monopole states is not yet known, since such an interpretation would require a local definition of magnetic charge density (because the zeros and anti-zeros are close together). At present no useful definition exists, since the standard definition relies upon a consideration of the asymptotic field far from the monopole, where the non-abelian symmetry is broken to an abelian symmetry which can be identified with electromagnetism. Some discussion of a signature for the appearance of anti-zeros has been given, and a conjecture made relating this to the singular behaviour of certain elliptic curves. Further work needs to be done on checking this conjecture with other examples, on proving that anti-zeros do exist, and on finding indications for their existence in other approaches, such as rational maps and spectral curves. \newpage \noindent{\bf Acknowledgements} Many thanks to Conor Houghton for useful discussions.
{ "redpajama_set_name": "RedPajamaArXiv" }
6,832
Conchocele disjuncta is a species of bivalve described by William More Gabb in 1866. Conchocele disjuncta belongs to the genus Conchocele and the family Thyasiridae. No subspecies are listed in the Catalogue of Life.
{ "redpajama_set_name": "RedPajamaWikipedia" }
416
\section{Introduction}\label{Int1} In this paper we will investigate the equivalence of various geometric inequalities on gradient shrinking Ricci solitons. As applications, we apply the Sobolev inequality to give some integral gap theorems for compact shrinking Ricci solitons. Recall that an $n$-dimensional Riemannian manifold $(M,g)$ is called a \emph{gradient shrinking Ricci soliton or shrinker} (see \cite{[Ham]}) if there exists a smooth function $f$ on $M$ such that the Ricci curvature $\text{Ric}$ and the Hessian of $f$ satisfy \begin{align}\label{Eq0} \Ric+\mathrm{Hess}\,f=\lambda g \end{align} for some positive number $\lambda$. The function $f$ is often called a \emph{potential} of the shrinker. For simplicity, we often normalize $\lambda=\frac 12$ by scaling the metric $g$, so that \begin{align}\label{Eq1} \Ric +\mathrm{Hess}\,f=\frac 12g. \end{align} According to the work of \cite{[Ham],[CaNi]}, without loss of generality, adding a constant to $f$ if necessary, we can assume that equation \eqref{Eq1} simultaneously satisfies \begin{equation}\label{Eq2} \R+|\nabla f|^2=f \quad\mathrm{and}\quad (4\pi)^{-\frac n2}\int_Me^{-f} dv=e^{\mu}, \end{equation} where $\R$ is the scalar curvature of $(M,g)$ and $\mu=\mu(g,1)$ is the entropy functional of Perelman \cite{[Pe]}; see also the detailed explanation in \cite{[LLW]} or \cite{[W20], [W20b]}. For a compact shrinker, $\mu$ has a lower bound; but for the non-compact case, we generally need to assume $\mu>-\infty$ so that our discussion makes sense. In particular, for the Euclidean space we have $\mu=0$: on the Gaussian shrinker $(\mathbb{R}^n,\delta_{ij},\frac{|x|^2}{4})$ one checks that $\mathrm{Hess}\,f=\frac12\delta_{ij}$, $\R=0$, $\R+|\nabla f|^2=\frac{|x|^2}{4}=f$ and $(4\pi)^{-\frac n2}\int_{\mathbb{R}^n}e^{-|x|^2/4}\,dv=1$, so that $\mu=0$. In \cite{[LLW]}, Li, Li and Wang proved that $e^{\mu}$ is nearly equivalent to $V(p_0,1)$, i.e., the volume of the geodesic ball $B(p_0,1)$ centered at a point $p_0\in M$ with radius $1$. Here $p_0\in M$ is a point where $f$ attains its infimum, which always exists on shrinkers but is possibly not unique; see \cite{[HaMu]}. \vspace{.1in} \emph{In this paper we always let the triple $(M, g, f)$ denote a shrinking gradient Ricci soliton (or shrinker) satisfying \eqref{Eq1} and \eqref{Eq2} with $\mu>-\infty$.} \vspace{.1in} Shrinkers are natural extensions of Einstein manifolds and can be regarded as critical points of Perelman's $\mathcal{W}$-functional \cite{[Pe]}. In addition, shrinkers are self-similar solutions to the Ricci flow and arise naturally in the singularity analysis of the Ricci flow \cite{[Ham]}. For example, Enders, M\"uller and Topping \cite{[EMT]} proved that the proper rescaling limits at a type-I singularity point always converge to non-trivial shrinkers. At present, one of the main issues in Ricci flow theory is the understanding of the geometry and classification of shrinkers. For dimensions 2 and 3, the classification is complete by the works of \cite{[Ha88]}, \cite{[Ivey]}, \cite{[Pe]}, \cite{[NW]} and \cite{[CCZ]}. For dimension 4 and higher, the classification remains open, though much progress has been made. The interested reader can refer to \cite{[Cao]} for an excellent survey. \vspace{.1in} In recent years, many geometric and analytic results about shrinkers have been investigated. Wylie \cite{[Wy]} proved that any complete shrinker has finite fundamental group (the compact case is due to Derdzi\'nski \cite{[De]}).
Chen \cite{[Chen]} showed that the scalar curvature satisfies $\mathrm{R}\geq0$; Pigola, Rimoldi and Setti \cite{[PiRS]} proved that $\mathrm{R}>0$ unless $(M,g,f)$ is the Gaussian shrinker; Chow, Lu and Yang \cite{[CLY]} showed that the scalar curvature of non-trivial shrinkers has at least quadratic decay in the distance function. Cao and Zhou \cite{[CaZh]} showed that the potential function $f$ is uniformly equivalent to the distance squared; they \cite{[CaZh]} also showed that all shrinkers have at most Euclidean volume growth by combining an observation of Munteanu \cite{[Mun]}. Later, Munteanu and Wang \cite{[MuW14]} proved that shrinkers have at least linear volume growth. These volume growth properties are similar to those of manifolds with nonnegative Ricci curvature. \vspace{.1in} On the other hand, Haslhofer and M\"uller \cite{[HaMu],[HaMu2]} proved a Cheeger-Gromov compactness theorem for shrinkers. Huang \cite{[Hua]} proved an $\epsilon$-regularity theorem for $4$-dimensional shrinkers, which was later improved by Ge and Jiang \cite{[GeJi]}. Their result gives an answer to Cheeger-Tian's question \cite{[CT]}. In \cite{[W15]}, the author applied gradient estimate techniques to prove a Liouville type theorem for ancient solutions to the weighted heat equation on shrinkers. In \cite{[WW15], [WW16]}, P. Wu and the author applied weighted heat kernel upper estimates to give a sharp weighted $L^1$-Liouville theorem for weighted subharmonic functions on shrinkers. \vspace{.1in} By analyzing Perelman's functional under the Ricci flow, Li and Wang \cite{[LiWa]} obtained a sharp logarithmic Sobolev inequality on complete (possibly non-compact) shrinkers, which says that for any $\tau>0$, \begin{equation}\label{LSI} \int_M\varphi^2\ln \varphi^2dv\leq\tau\int_M\left(4|\nabla\varphi|^2+\mathrm{R}\varphi^2\right)dv -\left(\mu+n+\frac n2\ln(4\pi\tau)\right) \end{equation} for any compactly supported locally Lipschitz function $\varphi$ with $\int_M\varphi^2dv=1$. The equality case can be attained when $\tau=1$ (see Carrillo and Ni \cite{[CaNi]}). This sharp inequality is useful for understanding shrinkers. Indeed the author \cite{[W20b]} was able to apply \eqref{LSI} to give sharp upper diameter bounds for compact shrinkers in terms of the integral of the scalar curvature. The author \cite{[W20]} also used \eqref{LSI} to study the Schr\"odinger heat kernel on shrinkers. Here the definition of the Schr\"odinger heat kernel is similar to that of the classical heat kernel. That is, for each $y\in M$, we say that $H^{\mathrm{R}}(x,y,t)$ is the Schr\"odinger heat kernel if $H^{\mathrm{R}}(x,y,t)=u(x,t)$ is a minimal positive smooth solution of the Schr\"odinger heat equation \[ -\Delta u+\tfrac{\R}{4} u+\partial_t u=0 \] satisfying $\lim_{t\to0}u(x,t)=\delta_y(x)$, where $\delta_y(x)$ is the delta function defined by \[ \int_M\phi(x)\delta_y(x)dv=\phi(y) \] for any $\phi\in C_0^{\infty}(M)$. The Schr\"odinger heat kernel always exists on compact shrinkers. For non-compact shrinkers, we do not know if it still exists without any assumption, but it does exist when the scalar curvature of $(M,g,f)$ is bounded. The Schr\"odinger heat kernel of shrinkers shares many properties with the classical Laplacian heat kernel on manifolds; see \cite{[W20]}. In this paper, we always assume that the Schr\"odinger heat kernel exists on the $n$-dimensional complete shrinker $(M,g,f)$.
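As a simple and standard illustration (added here for the reader's convenience): on the Gaussian shrinker $(\mathbb{R}^n,\delta_{ij},\frac{|x|^2}{4})$ we have $\R=0$ and $\mu=0$, so $-\Delta+\tfrac{\R}{4}=-\Delta$ and the Schr\"odinger heat kernel is the classical Gauss-Weierstrass kernel \[ H^{\R}(x,y,t)=\frac{1}{(4\pi t)^{\frac n2}}\exp\left(-\frac{|x-y|^2}{4t}\right), \] which at $x=y$ saturates the sharp upper bound $e^{-\mu}(4\pi t)^{-\frac n2}$ recalled below.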
\vspace{.1in} In \cite{[W20]} the author applied \eqref{LSI} to prove that \begin{equation}\label{upp2} H^{\R}(x,y,t)\le\frac{e^{-\mu}}{(4\pi t)^{\frac n2}} \end{equation} for all $x,y\in M$ and $t>0$. We remark that this type of upper bound for the conjugate heat kernel under the Ricci flow was obtained by Li and Wang \cite{[LiWa]}. The author also obtained Gaussian type upper bounds by an iteration argument. That is, for any $\alpha>4$, the author showed that there exists a constant $A=A(n,\alpha)$ depending on $n$ and $\alpha$ such that \begin{equation}\label{upp} H^{\R}(x,y,t)\le\frac{A e^{-\mu}}{(4\pi t)^{\frac n2}}\exp\left(-\frac{d^2(x,y)}{\alpha t}\right) \end{equation} for all $x,y\in M$ and $t>0$, where $d(x,y)$ denotes the geodesic distance between $x$ and $y$. Comparing with the classical Laplace heat kernel of Euclidean space, estimate \eqref{upp} is obviously sharp. Moreover the heat kernel estimate is useful for analyzing eigenvalues of the Schr\"odinger operator. Indeed the author \cite{[W20]} used the upper bounds to get lower bounds for the eigenvalues. Namely, for any open relatively compact set $\Omega\subset M$, let $0<\lambda_1(\Omega)\le\lambda_2(\Omega)\le\ldots$ be the Dirichlet eigenvalues of the Schr\"odinger operator in $\Omega$. Then we have \begin{equation}\label{multie} \lambda_k(\Omega)\ge\frac{2n\pi}{e}\left(\frac{k\,e^\mu}{V(\Omega)}\right)^{2/n},\quad k\ge1, \end{equation} where $V(\Omega)$ is the volume of $\Omega$. Recall that the classical Weyl asymptotic formula for the $k$-th Dirichlet eigenvalue of the Laplacian in an open relatively compact set $\Omega\subset\mathbb{R}^n$ states that \[ \lambda_k(\Omega)\sim c(n)\left(\frac{k}{V(\Omega)}\right)^{2/n},\quad k\to\infty, \] which indicates that \eqref{multie} is sharp for the exponent $2/n$. We remark that by the Rozenblum-Cwikel-Lieb inequality \cite{[Ro],[Cw],[Lie]} (see also estimate \eqref{RCLnum} in Section \ref{sec2pre}), we easily get that the eigenvalues of the Schr\"odinger operator $-\Delta+\frac{\mathrm{R}}{4}$ on non-trivial shrinkers are all positive. \vspace{.1in} Besides the above results, combining \eqref{LSI} and the Markov semigroup technique of Davies \cite{[Dav]}, Li and Wang \cite{[LiWa]} proved a local Sobolev inequality on shrinkers. Namely, for each compactly supported locally Lipschitz function $\varphi$ in $M$, \begin{equation}\label{sobo} \left(\int_M \varphi^{\frac{2n}{n-2}}\,dv\right)^{\frac{n-2}{n}}\le C(n)e^{-\frac{2\mu}{n}} \int_M\left(4|\nabla \varphi|^2+\R\varphi^2\right) dv \end{equation} for some constant $C(n)$ depending only on $n$. Here $e^{\mu}$ can be viewed as the volume of a unit geodesic ball, and hence this Sobolev inequality is very similar to the classical Sobolev inequality on manifolds, which plays an important role in many PDE arguments. For example, P. Wu and the author \cite{[WW19]} applied \eqref{sobo} to study dimension estimates for the spaces of harmonic functions and Schr\"odinger functions with polynomial growth. In \cite{[W19]}, the author used \eqref{sobo} to derive a mean value type inequality and further study the analyticity in time of solutions of the heat equation on shrinkers. \vspace{.1in} In this paper we continue to study geometric inequalities and their relations on shrinkers.
We first show that the above geometric inequalities, the Nash inequality and the Rozenblum-Cwikel-Lieb inequality all hold equivalently on shrinkers, which may be regarded as natural generalizations of the case of manifolds with nonnegative Ricci curvature \cite{[Gr],[Sa],[Zhq]}. \begin{theorem}\label{thmequ} Let $(M,g, f)$ be an $n$-dimensional complete (compact or noncompact) shrinker. The following six properties are equivalent up to constants. \begin{itemize} \item [(I)] The Sobolev inequality \eqref{sobo} holds. \item [(II)] The logarithmic Sobolev inequality \eqref{LSI} holds. \item [(III)] The Schr\"odinger heat kernel upper bound \eqref{upp} holds. \item [(IV)] The Faber-Krahn inequality holds. That is, for every open relatively compact set $\Omega\subset M$ with smooth boundary, \[ \lambda_1(\Omega)\ge\frac{2n\pi}{e}\left(\frac{e^\mu}{V(\Omega)}\right)^{\frac 2n}, \] where $\lambda_1(\Omega)$ is the lowest Dirichlet eigenvalue of the Schr\"odinger operator in $\Omega$. \item [(V)] The Nash inequality holds. That is, there exists a constant $c(n)$ depending on $n$ such that \begin{equation}\label{Nash} \parallel\varphi\parallel^{2+\frac 4n}_2\le c(n)e^{-\frac{2\mu}{n}}\parallel\varphi\parallel^{\frac 4n}_1 \int_M\left(4|\nabla \varphi|^2+\R\varphi^2\right)dv \end{equation} for any compactly supported locally Lipschitz function $\varphi$ in $M$. \item [(VI)] The Rozenblum-Cwikel-Lieb inequality holds. That is, there exists a constant $c(n)$ depending on $n$ such that \begin{equation}\label{RCL} \mathcal{N}\left(-\Delta+\tfrac{\R}{4}+V\right)\le c(n)e^{-\mu}\int_M V^{\frac n2}_{-}dv \end{equation} for any function $V\in L^1_{loc}(M)$, where $V_{-}:=\max\{0,-V\}\in L^{n/2}(M)$ is the negative part of $V$, and $\mathcal{N}(A)$ is the number of non-positive $L^2$-eigenvalues of the operator $A$, counting multiplicity. \end{itemize} \end{theorem} \begin{remark}\label{inesca} For (III), (IV) and (VI), we need to assume the existence of the Schr\"odinger heat kernel on complete shrinkers because our proof involves the Schr\"odinger heat kernel. For compact shrinkers, the Schr\"odinger heat kernel always exists; for the non-compact case, we only know that it exists when the scalar curvature is bounded (see \cite{[W20]}). \end{remark} \begin{remark}\label{inequvar} We point out that (III) is equivalent to \eqref{upp2}. Indeed, (III) $\Rightarrow$ \eqref{upp2} is obvious, while \eqref{upp2} $\Rightarrow$ (III) is due to the work \cite{[W20]}. We also point out that (IV) is equivalent to \eqref{multie}. Indeed, from Theorem \ref{thmequ}, we first have (IV) $\Rightarrow$ (III), and from the work \cite{[W20]}, we then know (III) $\Rightarrow$ \eqref{multie}. Combining the two parts yields (IV) $\Rightarrow$ \eqref{multie}. The converse is trivial. \end{remark} \begin{remark}\label{inequvar2} The estimates in (III) and (IV) are both sharp; see \cite{[W20]}. In addition, \eqref{RCL} is also sharp in some sense. Indeed, on the Gaussian shrinker $(\mathbb{R}^n, \delta_{ij}, \frac{|x|^2}{4})$, where $\delta_{ij}$ is the standard flat Euclidean metric, we have $\mathrm{R}=0$ and $\mu=0$. For a large parameter $\alpha$, replacing $V$ by $\alpha V$ in \eqref{RCL} gives \[ \alpha^{-\frac n2}\mathcal{N}\left(-\Delta+\alpha V\right)\le c(n)\int_{\mathbb{R}^n}V^{\frac n2}_{-}dv.
\] It turns out that for any potential $V$ with $V_{-}\in L^{n/2}(\mathbb{R}^n)$ and $V\in L^1_{loc}(\mathbb{R}^n)$, we have the Weyl asymptotics \[ \lim_{\alpha\to \infty}\alpha^{-\frac n2}\mathcal{N}\left(-\Delta+\alpha V\right)=\frac{(2\sqrt{\pi})^{-n}}{\Gamma(1+\frac n2)}\int_{\mathbb{R}^n}V^{\frac n2}_{-}dv. \] This indicates that \eqref{RCL} is sharp in the order of $\alpha$ for a class of potentials $V$. \end{remark} The proof strategy of Theorem \ref{thmequ} is as follows. Li and Wang \cite{[LiWa]} proved that (II) holds on shrinkers and they confirmed that (II) $\Rightarrow$ (I). The author \cite{[W20]} proved that (II) $\Rightarrow$ (III) $\Rightarrow$ (IV). The rest is the following. \begin{itemize} \item [(1)] We will apply the Jensen inequality to confirm (I) $\Rightarrow$ (II). \item [(2)] We shall apply Davies' argument \cite{[Dav]} and the Markov semigroup to reprove Li-Wang's Sobolev inequality \eqref{sobo}, i.e., (III) $\Rightarrow$ (I). In particular we will discuss the range of the Sobolev constant, which will be useful for studying gap theorems. \item [(3)] We will apply the Schr\"odinger heat kernel and the level set method to prove (IV) $\Rightarrow$ (III) by an approximation argument. \item [(4)] We will use the H\"older inequality to prove (I) $\Rightarrow$ (V). \item [(5)] By an approximation argument, we only need to apply the Schr\"odinger heat kernel and analytic techniques to prove (V) $\Rightarrow$ (III) with the Dirichlet condition. \item [(6)] We will apply Schr\"odinger heat kernel properties and some functional theory to prove (I) $\Leftrightarrow$ (VI). \end{itemize} We remark that the proof of (III) $\Rightarrow$ (I) will be provided separately in Section \ref{sec2}; the proof of (I) $\Leftrightarrow$ (VI) will be given in Section \ref{sec2pre}; the remaining cases will be discussed in Section \ref{sec3}. \vspace{.1in} As applications, we will apply the Sobolev inequality of shrinkers to give integral gap results for the Weyl tensor on compact shrinkers by adapting the proof strategy of \cite{[Cati]}. To state the result, we fix some notation. We denote by $W$, $\rd$ and $V(M)$ the Weyl tensor, the traceless Ricci tensor and the volume of the manifold $(M,g)$ respectively. The norm of a $(k,s)$-tensor $T$ of $(M,g)$ is defined by $|T|^2_g:=g^{i_1m_1}\cdots g^{i_km_k}g_{j_1n_1}\cdots g_{j_sn_s} T^{j_1\ldots j_s}_{i_1\ldots i_k}T^{n_1\ldots n_s}_{m_1\ldots m_k}$. \begin{theorem}\label{pingap} Let $(M,g, f)$ be an $n$-dimensional, $4\le n\le 8$, compact shrinker. If \begin{align*} &\left(\int_M\Big|W+\frac{\sqrt{2}}{\sqrt{n}(n-2)}\rd\circ g \Big|^{\frac n2}dv\right)^{\frac 2n}+\left(\frac{\sqrt{n}}{2}-\sqrt{\frac{n-1}{2(n-2)}}\,\right)V(M)^{\frac 2n}\\ &\qquad<\left(\frac{1}{\sqrt{n}}-\frac 1n \sqrt{\frac{2(n-2)}{n-1}}\,\right)\frac{e^{\frac{2\mu}{n}}}{C(n)}, \end{align*} where $\circ$ denotes the Kulkarni-Nomizu product and $C(n)$ is the constant in the Sobolev inequality \eqref{sobo}, then $(M,g,f)$ is isometric to a quotient of the round sphere. \end{theorem} \begin{remark} Chang, Gursky and Yang \cite{[CGY]} obtained integral gap results for compact manifolds in terms of the Yamabe constant. Catino \cite{[Cati]} proved some integral gap results for compact shrinkers, which were later improved in \cite{[FX]}. Our result involves the Sobolev constant of shrinkers rather than the Yamabe constant.
\end{remark} The main ingredients in the proof of Theorem \ref{pingap} are Bochner-Weitzenb\"ock type formulas for the norms of curvature tensors, algebraic curvature inequalities and Kato type inequalities. The above pinching assumption cannot hold when $n\ge 9$. Indeed we will see that the constant $C(n)$ in the Sobolev inequality \eqref{sobo} cannot be sufficiently small (see Remark \ref{xian}), i.e., \[ C(n)\ge\frac{n-1}{2n(n-2)\pi e}. \] This range obviously affects the dimensions for which the pinching assumption is valid. On the other hand, algebraic curvature inequalities and the elliptic equation for the norm of the traceless Ricci tensor also restrict the choice of dimension $n$, such as inequality \eqref{picond} in Section \ref{sec4}. \vspace{.1in} In particular, inspired by Cao-Tran's result \cite{[CaTr]}, we apply the Sobolev inequality of shrinkers to get an integral gap result for the half Weyl tensor on compact four-dimensional shrinkers. \begin{theorem}\label{hagap} Let $(M,g, f)$ be a four-dimensional oriented compact shrinker. Let $C(4)$ denote the constant in the Sobolev inequality \eqref{sobo} on $(M,g, f)$. If \[ \left(\int_M|W^{\pm}|^2dv\right)^{\frac{1}{2}}<\frac{e^{\frac{\mu}{2}}}{4\sqrt{6}C(4)} \] and \[ \int_M|\delta W^{\pm}|^2dv\le\frac{1}{8}\int_M{\R}|W^{\pm}|^2dv, \] then $W^{\pm}\equiv0$ and hence $(M^4,g, f)$ is isometric to a finite quotient of the round sphere or the complex projective space. \end{theorem} \begin{remark} For the relevant notation, see Section \ref{half}. Gursky \cite{[Gu]} proved an integral pinching result for four-dimensional Einstein manifolds involving the norm of the half Weyl tensor in terms of the Euler characteristic and the signature. Cao and Tran \cite{[CaTr]} generalized Gursky's result to shrinkers. Our gap result depends on the Sobolev constant of shrinkers rather than on topological invariants. \end{remark} Inspired by Catino's result \cite{[Cati]}, if we use the Yamabe constant instead of the Sobolev inequality, then we have another integral gap result. \begin{theorem}\label{hagap2} Let $(M,g, f)$ be a four-dimensional oriented compact shrinker satisfying \eqref{Eq0}. If \begin{align*} 216\int_M|W^{\pm}|^2dv+12\int_M|\rd|^2dv\le\int_M{\R}^2dv \end{align*} and \begin{align*} \int_M|\delta W^{\pm}|^2dv\le\frac{1}{6}\int_M{\R}|W^{\pm}|^2dv, \end{align*} then $(M,g, f)$ is isometric to a finite quotient of the round sphere or the complex projective space. \end{theorem} \begin{remark} In Theorems \ref{hagap} and \ref{hagap2}, the first assumption is a pinching condition on the half Weyl tensor, while the second is a pinching condition involving the divergence $\delta W^{\pm}$ of the half Weyl tensor. It is an interesting question whether the second assumption can be removed. \end{remark} There are many gap results for Einstein manifolds, Ricci solitons and closed manifolds, such as Catino and Mastrolia \cite{[CaMa]}, Hebey and Vaugon \cite{[HV]}, Li and Wang \cite{[LiW9], [LiWa]}, Munteanu and Wang \cite{[MuW17]}, Petersen and Wylie \cite{[PW]}, Singer \cite{[Si]}, Tran \cite{[Tr]}, Zhang \cite{[Zhz]} and their references. In this paper we provide a different gap criterion, which depends on the constant $C(n)$ of the Sobolev inequality. It is an interesting question to estimate the best upper bound of $C(n)$ on shrinkers. \vspace{.1in} The structure of this paper is the following. In Section \ref{sec2}, we will recall some basic results about algebraic inequalities of curvature tensors and some formulas for shrinkers.
In particular, we will reprove the Sobolev inequality by the Schr\"odinger heat kernel upper bound. Meanwhile, we will discuss the best Sobolev constant of shrinkers. In Section \ref{sec2pre}, we will apply the Schr\"odinger heat kernel to study the equivalence between (I) and (VI) of Theorem \ref{thmequ}. In Section \ref{sec3}, we will prove the remaining cases of Theorem \ref{thmequ}. In Section \ref{sec4}, we will apply the Sobolev inequality of shrinkers and Weitzenb\"ock formulas for curvature tensors to prove Theorem \ref{pingap}. In Section \ref{half}, we will study the gap results for the half Weyl tensor. We shall prove Theorems \ref{hagap} and \ref{hagap2}. \vspace{.1in} \textbf{Acknowledgement}. This work is supported by the NSFC (11671141) and the Natural Science Foundation of Shanghai (17ZR1412800). \section{Decomposition and Sobolev inequality}\label{sec2} In this section we first give a brief introduction to the curvature notation of the Riemannian manifold $(M^n,g)$ and some algebraic inequalities of curvature tensors. Then we review some geometric equations and formulas for shrinkers, especially the Sobolev inequality and its explicit coefficient. These results will be used in the following sections. For more related results, see \cite{[LiWa]}, \cite{[W20]}. We use $g_{ij}$ to denote the local components of the metric $g$ and $g^{ij}$ those of its inverse. Let $\Rm$ be the $(4,0)$ Riemannian curvature tensor, whose local components are denoted by $\R_{ijkl}$. Let $\Ric$ denote the Ricci curvature with local components $\R_{ik}=g^{jl}\R_{ijkl}$, and let $\R=g^{ik}\R_{ik}$ be the scalar curvature. The traceless Ricci tensor is denoted by \[ \rd=\Ric-\frac 1n\R g, \] whose local components are \[ \rdc_{ik}=\R_{ik}-\frac 1n\R g_{ik}. \] When $n\ge 4$, the Weyl tensor $W$ is defined by the orthogonal decomposition \[ W=\Rm-\frac{\R}{2n(n-1)}g\circ g-\frac{1}{n-2} \rd\circ g, \] where $\circ$ denotes the Kulkarni-Nomizu product of two symmetric tensors $A$ and $B$, defined as \[ (A\circ B)_{ijkl}=A_{ik}B_{jl}-A_{il}B_{jk}-A_{jk}B_{il}+A_{jl}B_{ik}. \] In local coordinates, we can write $W$ as \begin{equation*} \begin{aligned} W_{ijkl}=& \R_{ijkl}-\frac{1}{n-2}(g_{ik}\R_{jl}+g_{jl}\R_{ik} -g_{il}\R_{jk}-g_{jk}\R_{il})\\ &+\frac{1}{(n-1)(n-2)}\R(g_{ik}g_{jl}-g_{il}g_{jk}). \end{aligned} \end{equation*} The Weyl tensor has the same algebraic symmetries as the Riemannian curvature tensor. It is well-known that the Weyl tensor is totally trace-free and conformally invariant: \[ W(e^{2\varphi}g)=e^{2\varphi}W(g) \] for any smooth function $\varphi$ on $M$. \vspace{.1in} In \cite{[Cati]}, Catino proved two algebraic curvature inequalities for any $n$-dimensional Riemannian manifold, which will be used in the gap theorems. \begin{lemma}\label{prlg} Each $n$-dimensional Riemannian manifold $(M^n,g)$ satisfies the estimate \[ \left|-W_{ijkl}\rdc_{ik}\rdc_{jl}+\frac{2}{n-2}\rdc_{ij}\rdc_{jk}\rdc_{ik}\right|\le \sqrt{\frac{n-2}{2(n-1)}}\left(|W|^2+\frac{8|\rd|^2}{n(n-2)}\right)^{\frac 12} |\rd|^2. \] \end{lemma} \begin{lemma}\label{leal2} On an $n$-dimensional Riemannian manifold $(M^n,g)$, there exists a positive constant $c(n)$ such that \[ 2W_{ijkl}W_{ipkq}W_{pjql}+\frac 12W_{ijkl}W_{klpq}W_{pqij} \le c(n)|W|^3. \] We can take $c(4)=\frac{\sqrt{6}}{4}$, $c(5)=1$, $c(6)=\frac{\sqrt{70}}{2\sqrt{3}}$ and $c(n)=\frac 52$ for $n\geq 7$.
\end{lemma} \vspace{.1in} On a shrinker $(M,g,f)$, by Proposition 2.1 in \cite{[ELM]}, we have the following basic formulas, which will also be used in the proofs of the gap theorems. \begin{lemma}\label{formu} Let $(M,g, f)$ be an $n$-dimensional complete shrinker. Then, \[ \Delta f=\frac n2-\R, \] \[ \Delta_f\R=\R-2|\Ric|^2, \] \begin{align*} \Delta_f\R_{ik}&=\R_{ik}-2W_{ijkl}\R_{jl}+\frac{2}{(n-1)(n-2)}\\ &\quad\times\Big(\R^2g_{ik}-n\R\R_{ik}+2(n-1)g^{mn}\R_{im}\R_{nk}-2(n-1)|\Ric|^2g_{ik}\Big), \end{align*} where $\Delta_f:=\Delta-\nabla f\cdot\nabla$. \end{lemma} At the end of this section, we give the following Sobolev inequality of shrinkers, which was proved by Li and Wang \cite{[LiWa]} using the logarithmic Sobolev inequality. Here we shall reprove it by using the upper bound of the Schr\"odinger heat kernel. Meanwhile we will provide an explicit Sobolev constant and discuss its range. This constant will play a key role in proving gap results for shrinkers. \begin{lemma}\label{lemm2} Let $(M,g, f)$ be an $n$-dimensional complete shrinker. Then for each compactly supported locally Lipschitz function $\varphi$ in $M$, \begin{equation}\label{sobo2} \left(\int_M\varphi^{\frac{2n}{n-2}}\,dv\right)^{\frac{n-2}{n}} \le C(n)e^{-\frac{2\mu}{n}}\int_M\left(4|\nabla \varphi|^2+\R\varphi^2\right)dv \end{equation} for some dimensional constant $C(n)$. In particular, we can take \[ C(n)=\frac{1}{\pi^2}\left(\frac{2}{n-2}\right)^{\frac{4}{n}}. \] \end{lemma} \begin{remark}\label{xian} For an $n$-sphere $S^n$ of radius $\sqrt{2(n-1)}$ with its standard metric, we have $\mathrm{Ric}=\tfrac1 2 g$. Recall that on $S^n$, Aubin \cite{[Au]} (see also Proposition 4.21 in \cite{[He]}) proved that for any $\varphi\in W^{1,2}(S^n)$, \begin{equation}\label{sph} \left(\int_{S^n}\varphi^{\frac{2n}{n-2}}\,dv\right)^{\frac{n-2}{n}}\le \frac{8(n-1)}{n(n-2)}V(S^n)^{-2/n}\int_{S^n}|\nabla \varphi|^2dv+V(S^n)^{-2/n}\int_{S^n}\varphi^2 dv. \end{equation} This inequality is optimal in the sense that the two constants $\frac{8(n-1)}{n(n-2)}V(S^n)^{-2/n}$ and $V(S^n)^{-2/n}$ cannot be lowered. On the other hand, from \eqref{Eq2}, we see that \[ \mathrm{R}=f=\frac n2\quad \mathrm{and}\quad (4\pi e)^{-\frac n2}V(S^n)=e^{\mu}. \] Substituting these into \eqref{sobo2} and comparing with \eqref{sph}, we easily conclude that \[ C(n)\ge\frac{n-1}{2n(n-2)\pi e}. \] \end{remark} \begin{proof}[Proof of Lemma \ref{lemm2}] We essentially follow the argument of Davies \cite{[Dav]} (see also \cite{[LiWa],[Zhq]}). Since $H^{\mathrm{R}}=H^{\mathrm{R}}(x,y,t)$ is the Schr\"odinger heat kernel of the operator $-\Delta+\frac{\R}{4}$, we have \[ \int_MH^{\mathrm{R}}(x,y,t)dv(y)\leq 1. \] In \cite{[W20]} we proved the upper bound \[ H^{\mathrm{R}}(x,y,t)\le\frac{e^{-\mu}}{(4\pi t)^{\frac n2}} \] for the Schr\"odinger heat kernel, valid for all $x,y\in M$ and $t>0$. In the following we will use this estimate to prove \eqref{sobo2}. \vspace{.1in} Now using the H\"older inequality, for any $u(x)\in L^2(M)$, we have \begin{equation*} \begin{aligned} \parallel H^{\R}\ast u\parallel_{\infty}&=\sup_{x\in M}\left|\int_MH^{\R}(x,y,t) u(y)dv(y)\right|\\ &\le\sup_{x\in M}\left(\int_M(H^{\R})^2(x,y,t)dv(y)\right)^{\frac 12}\cdot\parallel u\parallel_2.
\end{aligned} \end{equation*} By \eqref{upp2}, we further have \begin{equation*} \begin{aligned} \parallel H^{\R}\ast u\parallel_{\infty} &\le\frac{e^{-\frac{\mu}{2}}}{(4\pi t)^{\frac n4}}\sup_{x\in M}\left(\int_M H^{\R}(x,y,t)dv(y)\right)^{\frac 12}\cdot\parallel u\parallel_2\\ &\le\frac{c^{1/2}_1}{t^{n/4}}\parallel u\parallel_2, \end{aligned} \end{equation*} where $c_1=\frac{e^{-\mu}}{(4\pi)^{\frac n2}}$. Similarly, by the H\"older inequality, for any $u(x)\in L^q(M)$, $q\in[1,n)$ and $\widetilde{q}=q/(q-1)$, we may obtain another estimate \begin{equation*} \begin{aligned} \parallel H^{\R}\ast u\parallel_{\infty}&\le\sup_{x\in M}\left(\int_M(H^{\R})^{\widetilde{q}}(x,y,t)dv(y)\right)^{1/\widetilde{q}}\cdot\parallel u\parallel_q\\ &\le\sup_{x\in M}\left(\sup_{y\in M}(H^{\R})^{\frac{1}{q-1}}(x,y,t)\int_MH^{\R}(x,y,t)dv(y)\right)^{1/\widetilde{q}}\cdot\parallel u\parallel_q\\ &\le\sup_{x\in M}\left(\sup_{y\in M}(H^{\mathrm{R}})^{\frac{1}{q-1}}(x,y,t)\right)^{1/\widetilde{q}}\cdot\sup_{x\in M}\left(\int_MH^{\R}(x,y,t)dv(y)\right)^{1/\widetilde{q}}\cdot\parallel u\parallel_q. \end{aligned} \end{equation*} Using \eqref{upp2} again, for any $u(x)\in L^q(M)$ and $q\in[1,n)$, we finally get \begin{equation}\label{semineq} \parallel H^{\R}\ast u\parallel_{\infty}\le\frac{c^{1/q}_1}{t^{\frac{n}{2q}}}\parallel u\parallel_q. \end{equation} Now we consider the integral operator \[ L:=\left(\sqrt{-\Delta+\tfrac{\R}{4}}\right)^{-1}. \] Since $-\Delta+\tfrac{\R}{4}$ is a self-adjoint operator, by the eigenfunction expansion, for any $u(x)\in C_0^\infty(M)$, we have \begin{align*} (Lu)(x)&=\Gamma(1/2)^{-1}\int^{\infty}_0 t^{-\frac 12}\big(e^{(\Delta-\mathrm{R}/4)t}u\big)(x,t)dt\\ &=\Gamma(1/2)^{-1}\int^{\infty}_0 t^{-\frac 12}\big(H^{\mathrm{R}}\ast u\big)(x,t)dt, \end{align*} where $e^{(\Delta-\mathrm{R}/4)t}u$ denotes the semigroup of $H^{\mathrm{R}}\ast u$. Fix $T>0$, which will be determined later and let \begin{equation*} \begin{aligned} (Lu)(x)&=\Gamma(\tfrac12)^{-1}\int^T_0 t^{-\frac 12}\big(H^{\mathrm{R}}\ast u\big)(x,t)dt +\Gamma(\tfrac12)^{-1}\int^{\infty}_T t^{-\frac 12}\big(H^{\mathrm{R}}\ast u\big)(x,t)dt\\ &:=(L_1u)(x)+(L_2u)(x). \end{aligned} \end{equation*} For any $\lambda>0$, we see that \begin{equation}\label{set} \left|\{x\big|\,|(Lu)(x)|\ge\lambda\}\right|\le\left|\{x\big|\,|(L_1u)(x)|\ge\lambda/2\}\right| +\left|\{x\big|\,|(L_2u)(x)|>\lambda/2\}\right|. \end{equation} By \eqref{semineq} and the definition of $L_2u$, since $\Gamma(\tfrac12)=\sqrt{\pi}$, we have \begin{equation*} \begin{aligned} \parallel L_2 u\parallel_{\infty}&\le\Gamma(1/2)^{-1}\int^{\infty}_T t^{-\frac 12}\left(\frac{c^{1/q}_1}{t^{\frac{n}{2q}}}\parallel u\parallel_q\right)dt\\ &=\frac{2qc_1^{1/q}}{(n-q)\sqrt{\pi}}\cdot T^{\frac 12-\frac{n}{2q}}\parallel u\parallel_q. \end{aligned} \end{equation*} We now choose $T$ such that \[ \frac{\lambda}{2}=\frac{2qc_1^{1/q}}{(n-q)\sqrt{\pi}}\cdot T^{\frac 12-\frac{n}{2q}}\parallel u\parallel_q. \] Then the set $\{x\big|\,|(L_2u)(x)|>\lambda/2\}=\emptyset$ and \eqref{set} becomes \begin{equation*} \begin{aligned} \left|\{x\big|\,|(Lu)(x)|\ge\lambda\}\right|&\le\left|\{x\big|\,|(L_1u)(x)|\ge\lambda/2\}\right|\\ &\le(\lambda/2)^{-q}\int_M|(L_1u)(x)|^qdv(x). \end{aligned} \end{equation*} We will estimate the right hand side of the above inequality. 
By the Minkowski inequality for two measure spaces and the bound $\parallel H^{\mathrm{R}}\ast u(\cdot,t)\parallel_q\le\sup_{x\in M}\parallel H^{\mathrm{R}}(x,\cdot,t)\parallel_1\cdot\parallel u\parallel_q$ (which follows from the symmetry of $H^{\mathrm{R}}$ and $\int_MH^{\mathrm{R}}dv\le1$), we get that
\begin{equation*}
\begin{aligned}
\parallel L_1u\parallel_q&\le\Gamma(\tfrac12)^{-1}\int^T_0 t^{-\frac 12}\parallel H^{\mathrm{R}}\ast u(\cdot,t)\parallel_q dt\\
&\le\Gamma(\tfrac12)^{-1}\int^T_0 t^{-\frac 12}\sup_{x\in M}\parallel H^{\mathrm{R}}(x,\cdot,t)\parallel_1\cdot\parallel u\parallel_q dt\\
&\le\frac{2}{\sqrt{\pi}}T^{1/2}\parallel u\parallel_q.
\end{aligned}
\end{equation*}
Hence,
\[
\left|\{x\big|\,|(Lu)(x)|\ge\lambda\}\right|\le\left(\frac{4}{\sqrt{\pi}}\right)^q\lambda^{-q}T^{q/2}\parallel u\parallel^q_q.
\]
According to the above choice of $T$, we have
\[
\left|\{x\big|\,|(Lu)(x)|\ge\lambda\}\right|\le c(n,q)c_1^{\frac{q}{n-q}}\lambda^{-r}\parallel u\parallel^r_q,
\]
where $r=qn/(n-q)$ and
\[
c(n,q)=\left(\frac{4}{\sqrt{\pi}}\right)^{\frac{nq}{n-q}}\left(\frac{q}{n-q}\right)^{\frac{q^2}{n-q}}.
\]
We see that for all $q\in[1,n)$, the linear operator $L$ maps the space $L^q(M)$ into the weak $L^r(M)$ space. That is,
\begin{align*}
\parallel L u\parallel_{r,w}&\le c(n,q)^{\frac 1r}c_1^{\frac 1n}\parallel u\parallel_q\\
&=\frac{4}{\sqrt{\pi}}\left(\frac{q}{n-q}\right)^{\frac{q}{n}} c_1^{\frac 1n}\parallel u\parallel_q.
\end{align*}
For any $0<\epsilon\ll1$, letting $q_1=q-\epsilon$, $q_2=q+\epsilon$, $r_i=q_in/(n-q_i)$ ($i=1,2$), we indeed have
\[
\parallel L u\parallel_{r_i,w}\le\frac{4}{\sqrt{\pi}}\left(\frac{q_i}{n-q_i}\right)^{\frac{q_i}{n}} c_1^{\frac1n}\parallel u\parallel_{q_i}.
\]
Applying the Marcinkiewicz interpolation theorem to the above case, for any $0<\theta<1$ (we write $\theta$ for the interpolation parameter to avoid confusion with the time variable $t$), we get that
\[
\parallel L u\parallel_b\le\frac{4}{\sqrt{\pi}}\left[\left(\frac{q_1}{n-q_1}\right)^{\frac{q_1}{n}}\right]^{\theta}
\left[\left(\frac{q_2}{n-q_2}\right)^{\frac{q_2}{n}}\right]^{1-\theta} c_1^{\frac1n}\parallel u\parallel_a,
\]
where
\[
\frac1a=\frac{\theta}{q_1}+\frac{1-\theta}{q_2},\quad \frac{1}{b}=\frac{\theta}{r_1}+\frac{1-\theta}{r_2}.
\]
Since the coefficient is continuous with respect to $\epsilon$ at $\epsilon=0$, letting $\epsilon\to 0^+$ with $q=2$, so that $a\to 2$ and $b\to p:=2n/(n-2)$, we finally get
\[
\parallel L u\parallel_p\le\frac{4}{\sqrt{\pi}}\left(\frac{2}{n-2}\right)^{\frac{2}{n}}c_1^{\frac1n}\parallel u\parallel_2,
\]
where $c_1=\frac{e^{-\mu}}{(4\pi)^{\frac n2}}$. Let $\varphi=Lu$; then $u=L^{-1}\varphi$ and
\begin{equation*}
\begin{aligned}
\parallel u\parallel^2_2&=\langle L^{-1}\varphi,L^{-1}\varphi\rangle\\
&=\langle L^{-2}\varphi,\varphi\rangle\\
&=\left\langle-\Delta\varphi+\tfrac{\R}{4} \varphi,\varphi\right\rangle\\
&=\int_M\left(|\nabla\varphi|^2+\tfrac{\R}{4}\varphi^2\right)dv.
\end{aligned}
\end{equation*}
Substituting this into the above inequality and squaring, since $c_1^{\frac2n}=\frac{e^{-\frac{2\mu}{n}}}{4\pi}$, we obtain
\[
\left(\int_M\varphi^{\frac{2n}{n-2}}\,dv\right)^{\frac{n-2}{n}}
\le\frac{16}{\pi}\left(\frac{2}{n-2}\right)^{\frac4n}\cdot\frac{e^{-\frac{2\mu}{n}}}{4\pi}\int_M\left(|\nabla\varphi|^2+\tfrac{\R}{4}\varphi^2\right)dv
=\frac{1}{\pi^2}\left(\frac{2}{n-2}\right)^{\frac4n}e^{-\frac{2\mu}{n}}\int_M\left(4|\nabla\varphi|^2+\R\varphi^2\right)dv,
\]
which is \eqref{sobo2} with the claimed constant $C(n)$. This proves the lemma.
\end{proof}
In the course of the above proof, if we instead let $q_1=3$, $q_2=1$, $\theta=3/4$, $r_1=\frac{3n}{n-3}$ and $r_2=\frac{n}{n-1}$ (which requires $n\ge4$), we can also take
\[
C(n)=\frac{1}{\pi^2}\left(\frac{3}{n-3}\right)^{\frac{9}{2n}}\left(\frac{1}{n-1}\right)^{\frac{1}{2n}}.
\]
Obviously, when $4\le n\le 14$, this constant is bigger than the one in Lemma \ref{lemm2}, but when $n\ge15$ the comparison is reversed. It is then natural to ask
\vspace{.1in}
\noindent \textbf{Question}. \emph{For a complete gradient shrinking Ricci soliton $(M, g, f)$, especially in the compact case, what is the best constant $C(n)$?}
\section{Rozenblum-Cwikel-Lieb inequality}\label{sec2pre}
It is well known that the spectrum of the Laplacian $-\Delta$ on a complete Riemannian manifold $(M,g)$ is contained in the interval $[0,\infty)$.
On Euclidean space this can be proved by taking the Fourier transform and using the Plancherel theorem; in general it follows from the nonnegativity of the Dirichlet form $\int_M|\nabla\varphi|^2dv$. If one considers the Schr\"odinger operator $-\Delta+V$ for some function $V$ on $M$, then $-\Delta+V$ may have some negative spectrum. However, if we have some restriction on $V$, like decay conditions at infinity, we may hope that the essential spectra of $-\Delta+V$ and $-\Delta$ coincide. In this case, the negative spectrum of $-\Delta+V$ is a discrete set with possibly an accumulation point at $0$ (if $0$ is indeed the bottom of the spectrum of $-\Delta$). It is an important question in mathematical physics to estimate the number of these negative eigenvalues. One of the most beautiful results on this question is the Rozenblum-Cwikel-Lieb (RCL) inequality
\begin{equation}\label{RCLnum}
\mathcal{N}\left(-\Delta+V\right)\le c(n)\int_M V^{\frac n2}_{-}dv,
\end{equation}
where $V_{-}$ is the negative part of the function $V\in L^1_{loc}(M)$, and $\mathcal{N}(A)$ is the number of non-positive $L^2$-eigenvalues of the operator $A$. The RCL inequality was first established by Rozenblum \cite{[Ro]}, and it was independently found by Lieb \cite{[Lie]} and Cwikel \cite{[Cw]} for $n\ge 3$. Afterwards, another remarkable proof, with a sharper constant, was provided by Li and Yau \cite{[LiY]}; their proof relies only on the positivity of the heat kernel and the Sobolev inequality.
\vspace{.1in}
On a shrinker $(M,g,f)$, one would like to consider the special Schr\"odinger operator
\[
-\Delta^{\R}:=-\Delta+\tfrac{\R}{4}
\]
instead of the usual Laplacian. Since the scalar curvature satisfies $\R\ge0$ on $(M,g,f)$, the quadratic form of $-\Delta^{\R}$ is nonnegative, and hence its spectrum is contained in $[0,\infty)$. If one considers a perturbation of $-\Delta^{\R}$ by a real-valued potential $V$ and defines another Schr\"odinger operator $-\Delta^{\R}+V$, then the nonnegativity of the spectrum is not necessarily preserved. Naturally one may ask:
\vspace{.1in}
\emph{What assumption on the function $V$ will imply a bound on the number of negative eigenvalues of $-\Delta^{\R}+V$ on shrinkers?}
\vspace{.1in}
In the following we will give an answer to this question, i.e., Theorem \ref{thmequ}: (I) $\Rightarrow$ (VI). To prove this result, we start with an important proposition, which is a key step in proving the RCL type inequality on shrinkers.
\begin{proposition}\label{proeige}
Let $D$ be a bounded domain in a shrinker $(M^n,g,f)$, where $n\ge 3$. Assume that $q(x)$ is a positive function defined on $D$. Let $\lambda_k$ be the $k$-th eigenvalue of the equation
\[
-\Delta^{\R}\phi(x)=\lambda q(x)\phi(x)
\]
on $D$ with the Dirichlet boundary condition $\phi|_{\partial D}\equiv 0$. Then,
\[
\lambda_k^{n/2}\int_Dq^{n/2}(x)dv(x)\ge c(n)\,e^{\mu}\, k
\]
for some dimensional constant $c(n)$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{proeige}]
Inspired by the Li-Yau work \cite{[LiY]}, we consider the ``heat'' kernel of the parabolic operator
\[
-\Delta^{\R}/{q}+\partial_t
\]
on the shrinker $(M,g,f)$. Let $\{\phi_i(x)\}^{\infty}_{i=1}$ be a set of orthonormal eigenfunctions such that
\[
-\Delta^{\R}\phi_i=\lambda_iq\phi_i,
\]
where $\lambda_i$ denotes the eigenvalue corresponding to the eigenfunction $\phi_i$. Then the kernel $\widetilde{H}(x,y,t)$ of $-\Delta^{\R}/q+\partial_t$ must have the following expression
\[
\widetilde{H}(x,y,t)=\sum^{\infty}_{i=1}e^{-\lambda_it}\phi_i(x)\phi_i(y).
\]
By the properties of the Schr\"odinger heat kernel $H^{\R}(x,y,t)$ (see \cite{[W20]}), we have $\widetilde{H}(x,y,t)>0$ in the interior of $D\times D$ and $\widetilde{H}(x,y,t)\equiv0$ on $\partial D\times \partial D$ for any $t$. Here the $L^2$-norm is taken with respect to the weighted measure $q(x)dv$, so that
\[
\int_D\phi_i(x)\phi_j(x)q(x)dv=\delta_{ij}.
\]
Since
\begin{equation}\label{defheat}
h(t):=\sum^{\infty}_{i=1}e^{-2\lambda_it}=\int_D\int_D\widetilde{H}^2(x,y,t)q(x)q(y)dv(x)dv(y),
\end{equation}
we have
\begin{equation}
\begin{aligned}\label{heatev}
\frac{d h}{d t}&=2\int_D\int_D\widetilde{H}(x,y,t)\widetilde{H}_t(x,y,t)q(x)q(y)dv(x)dv(y)\\
&=2\int_D\int_D\widetilde{H}(x,y,t)\Delta^{\R}_{y}\widetilde{H}(x,y,t)q(x)dv(y)dv(x)\\
&=-2\int_D\int_D\left(|\nabla\widetilde{H}(x,y,t)|^2+\frac{\R}{4}\widetilde{H}^2(x,y,t)\right)q(x)dv(y)dv(x),
\end{aligned}
\end{equation}
where we used
\[
\left(\frac{\Delta^{\R}_y}{q(y)}-\partial_t\right)\widetilde{H}(x,y,t)=0.
\]
On the other hand, using the H\"older inequality twice (first in the variable $y$ with exponents $\frac{n+2}{n-2}$ and $\frac{n+2}{4}$, then in the variable $x$ with exponents $\frac{n+2}{n}$ and $\frac{n+2}{2}$), we have
\begin{equation}
\begin{aligned}\label{evol}
h(t)&\le\left[\int_Dq(x)\left(\int_D\widetilde{H}^{\frac{2n}{n-2}}(x,y,t)dv(y)\right)^{\frac{n-2}{n}}dv(x)\right]^{\frac{n}{n+2}}\\
&\quad\times\left[\int_Dq(x)\left(\int_D\widetilde{H}(x,y,t)q^{\frac{n+2}{4}}(y)dv(y)\right)^2dv(x)\right]^{\frac{2}{n+2}}.
\end{aligned}
\end{equation}
Let us now analyze the above inequality. Consider the quantity
\[
Q(x,t):=\int_D\widetilde{H}(x,y,t)q^{\frac{n+2}{4}}(y)dv(y),
\]
which satisfies
\[
\left(\frac{\Delta^{\R}_x}{q(x)}-\partial_t\right)Q(x,t)=0
\]
with $Q(x,t)\equiv0$ on $\partial D$ for $t>0$ and $Q(x,0)=q^{\frac{n-2}{4}}(x)$. We observe that
\begin{align*}
\partial_t\int_D q(x)Q^2(x,t)dv(x)&=2\int_Dq(x)Q(x,t)\partial_tQ(x,t)dv(x)\\
&=2\int_DQ(x,t)\Delta^{\R}_xQ(x,t)dv(x)\\
&=-2\int_D\left(|\nabla_xQ(x,t)|^2+\frac{\R}{4}Q^2(x,t)\right)dv(x)\\
&\le0,
\end{align*}
where we used the fact that the scalar curvature satisfies $\R\ge 0$ on shrinkers. This implies that
\begin{align*}
\int_D q(x)Q^2(x,t)dv(x)&\le\int_D q(x)Q^2(x,0)dv(x)\\
&=\int_Dq^{n/2}(x)dv(x).
\end{align*}
Using this, from \eqref{evol} we have
\begin{equation}\label{evol3}
h^{\frac{n+2}{n}}(t)\left(\int_Dq^{n/2}(x)dv\right)^{-\frac2n}\le\int_Dq(x)\left(\int_D\widetilde{H}^{\frac{2n}{n-2}}(x,y,t)dv(y)\right)^{\frac{n-2}{n}}dv(x).
\end{equation}
Recall that the Sobolev inequality \eqref{sobo} for shrinkers, applied to $\varphi=\widetilde{H}(x,y,t)$ as a function of $y$, says that
\[
\left(\int_D |\widetilde{H}|^{\frac{2n}{n-2}}\,dv(y)\right)^{\frac{n-2}{n}}\le C(n)e^{-\frac{2\mu}{n}} \int_D\left(4|\nabla\widetilde{H}|^2+\R\widetilde{H}^2\right) dv(y).
\]
Combining this with \eqref{evol3} and \eqref{heatev} yields
\[
\frac{d h}{d t}\le -\frac{e^{\frac{2\mu}{n}}}{2C(n)}\left(\int_Dq^{n/2}(x)dv(x)\right)^{-\frac2n}\cdot h^{\frac{n+2}{n}}(t).
\]
Dividing this by $h^{\frac{n+2}{n}}(t)$ and integrating with respect to $t$ (note that $h(t)^{-\frac2n}\to0$ as $t\to0^+$),
\[
h(t)\le (nC(n))^{\frac n2} e^{-\mu} \left(\int_Dq^{n/2}(x)dv(x)\right) t^{-\frac n2}.
\]
Combining this with \eqref{defheat}, we get
\[
\sum^{\infty}_{i=1}e^{-2\lambda_it}\le(nC(n))^{\frac n2} e^{-\mu} \left(\int_Dq^{n/2}(x)dv(x)\right) t^{-\frac n2}.
\]
Setting $t=\frac{n}{4\lambda_k}$, we conclude that
\[
\sum^{\infty}_{i=1}e^{-\frac{n\lambda_i}{2\lambda_k}}\le(nC(n))^{\frac n2} e^{-\mu}\int_Dq^{n/2}(x)dv(x) \left(\frac{n}{4\lambda_k}\right)^{-\frac n2}.
\]
Noticing that
\[
\sum^{\infty}_{i=1}e^{-\frac{n\lambda_i}{2\lambda_k}}\ge ke^{-n/2},
\]
which holds since $\lambda_i\le\lambda_k$ for $1\le i\le k$, we have
\[
\lambda_k^{n/2}\int_Dq^{n/2}(x)dv(x)\ge c(n)\,e^{\mu}\, k
\]
for some dimensional constant $c(n)$.
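We remark that, tracing the constants through the last three displays, one admissible explicit value is
\[
c(n)=\big(4e\,C(n)\big)^{-\frac n2},
\]
where $C(n)$ is the Sobolev constant of \eqref{sobo}; this is merely what the above argument yields and has not been optimized.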
\end{proof}
Now we will apply Proposition \ref{proeige} to prove Theorem \ref{thmequ}: (I) $\Rightarrow$ (VI).
\begin{proof}[Proof of Theorem \ref{thmequ}: (I) $\Rightarrow$ (VI)]
By the monotonicity of $\mathcal{N}\left(-\Delta^{\R}+V\right)$ with respect to the function $V(x)$ on the shrinker $(M,g,f)$, we may assume $V(x)\le 0$ by replacing $V(x)$ by $-V_{-}(x)$. Moreover, $-V_{-}(x)$ can be approximated by a sequence of strictly negative functions, so we may assume $V(x)<0$ for all $x\in M$. By the exhaustion argument (see Lemma 3.2 in \cite{[OP]}), we only need to prove
\[
\mathcal{N}\left(-\Delta^{\R}+V\right)\le c(n)\int_D V_{-}^{\frac n2}dv
\]
for the equation
\begin{equation}\label{numest}
\left(-\Delta^{\R}+V\right)\phi=\lambda\phi
\end{equation}
with $\phi|_{\partial D}\equiv 0$ for any fixed domain $D\subset M$. It is easy to see that the number of non-positive eigenvalues $\mathcal{N}\left(-\Delta^{\R}+V\right)$ for \eqref{numest}, counting multiplicity, is equal to the number of eigenvalues not exceeding $1$ in Proposition \ref{proeige} with the choice
\[
q(x)=-V(x).
\]
Indeed, since $V(x)<0$, from the relation
\[
\frac{\int_D\left(|\nabla \phi|^2+\frac{\R}{4}\phi^2\right)dv+\int_DV\phi^2dv}{\int_D\phi^2dv}
=\frac{\int_D|V|\phi^2dv}{\int_D\phi^2dv}\left(\frac{\int_D\left(|\nabla \phi|^2+\frac{\R}{4}\phi^2\right)dv}{\int_D|V|\phi^2dv}-1\right),
\]
we conclude that the dimension of the subspace on which the left hand side is non-positive is equal to the dimension of the subspace on which the quadratic form
\[
\frac{\int_D\left(|\nabla \phi|^2+\frac{\R}{4}\phi^2\right)dv}{\int_D|V|\phi^2dv}
\]
associated to Proposition \ref{proeige} is not more than $1$. Now we let $\lambda_k$ be the greatest eigenvalue which is not more than $1$. By Proposition \ref{proeige}, we have
\begin{align*}
\int_D|V|^{n/2}dv(x)&\ge\lambda_k^{n/2}\int_D|V|^{n/2}dv(x)\\
&\ge c(n)\,e^{\mu}\, k\\
&\ge c(n)\,e^{\mu}\cdot\mathcal{N}(-\Delta^{\R}+V),
\end{align*}
which completes the proof of (I) $\Rightarrow$ (VI).
\end{proof}
Next we will prove the easy implication (VI) $\Rightarrow$ (I).
\begin{proof}[Proof of Theorem \ref{thmequ}: (VI) $\Rightarrow$ (I)]
We assume that \eqref{RCL} holds for all potentials $V\in C^{\infty}_0(M)$. Then for all $V\in C^{\infty}_0(M)$ satisfying
\[
\|V\|_{n/2}< c(n)e^{\frac{2\mu}{n}},
\]
we know that $-\Delta^{\R}+V$ is a non-negative operator. That is,
\[
\int_M\left(|\nabla \varphi|^2+\frac{\R}{4}\varphi^2\right) dv+\int_MV\varphi^2dv\ge 0
\]
for all such $V$ and all $\varphi\in C^{\infty}_0(M)$. In other words, for all $\varphi\in C^{\infty}_0(M)$,
\[
\int_M\left(|\nabla \varphi|^2+\frac{\R}{4}\varphi^2\right) dv\ge\sup\left\{\int_M(-V)\varphi^2dv:\ V\in C^{\infty}_0(M),\ \|V\|_{n/2}<c(n)e^{\frac{2\mu}{n}}\right\}.
\]
Since the dual of $L^{n/2}(M)$ is $L^{n/(n-2)}(M)$, the above functional inequality implies
\[
c(n)e^{-\frac{2\mu}{n}}\int_M\left(|\nabla \varphi|^2+\frac{\R}{4}\varphi^2\right)dv\ge\left(\int_M \varphi^{\frac{2n}{n-2}}\,dv\right)^{\frac{n-2}{n}}
\]
and Theorem \ref{thmequ} (I) follows.
\end{proof}
\section{Equivalence of geometric inequalities}\label{sec3}
In this section, we will give the remaining proofs of Theorem \ref{thmequ}. This part of the theorem says that the (logarithmic) Sobolev inequality, the Schr\"odinger heat kernel upper bound, the Faber-Krahn inequality and the Nash inequality are all equivalent up to possibly different constants. Notice that (II) $\Rightarrow$ (I) was proved in \cite{[LiWa]}; (II) $\Rightarrow$ (III) $\Rightarrow$ (IV) was proved in \cite{[W20]}.
So we only need to consider the following cases: (I) $\Rightarrow$ (II), (III) $\Rightarrow$ (I), (IV) $\Rightarrow$ (III), (I) $\Rightarrow$ (V), (V) $\Rightarrow$ (III).
\begin{proof}[Proof of Theorem \ref{thmequ}]
(I) $\Rightarrow$ (II): We assume that \eqref{sobo} holds on shrinkers. That is, for each compactly supported locally Lipschitz function $\varphi$ in $(M,g,f)$,
\[
\left(\int_M\varphi^{\frac{2n}{n-2}}\,dv\right)^{\frac{n-2}{n}}
\le C(n)e^{-\frac{2\mu}{n}}\int_M\left(4|\nabla \varphi|^2+\R \varphi^2\right)dv
\]
for some dimensional constant $C(n)$. Given a function $\varphi$ with $\|\varphi\|_2=1$, we introduce the weighted measure $d\nu=\varphi^2dv$ on the shrinker $(M,g,f)$ (we write $\nu$ rather than $\mu$, since the latter denotes the entropy); then $\int_M d\nu=1$. Since $\ln G$ is concave with respect to $G$, letting $G=\varphi^{q-2}$, where $q=\frac{2n}{n-2}$, and applying the Jensen inequality
\[
\int_M\ln G\,d\nu\le\ln\left(\int_MG\,d\nu\right),
\]
we have that
\begin{align*}
\int_M(\ln \varphi^{q-2})\varphi^2dv&\le\ln\left(\int_M\varphi^{q-2}\varphi^2dv\right)\\
&=\ln \parallel\varphi\parallel^q_q.
\end{align*}
That is,
\begin{align*}
\int_M\varphi^2\ln\varphi dv&\le\frac{q}{q-2}\ln \parallel\varphi\parallel_q\\
&=\frac n2\ln \parallel\varphi\parallel_q.
\end{align*}
Combining this with the Sobolev inequality \eqref{sobo}, we get
\begin{align*}
\int_M\varphi^2\ln\varphi^2dv&\le\frac n2\ln \parallel\varphi\parallel^2_q\\
&\le\frac n2\ln\left[C(n)e^{-\frac{2\mu}{n}}\int_M\left(4|\nabla \varphi|^2+\mathrm{R}\varphi^2\right)dv\right]\\
&=\frac n2\ln C(n)-\mu+\frac n2\ln\left[\int_M\left(4|\nabla \varphi|^2+\mathrm{R}\varphi^2\right)dv\right].
\end{align*}
Using the elementary inequality
\[
\ln x\leq \sigma x-(1+\ln\sigma)
\]
for any $\sigma>0$ and $x>0$ (the tangent line inequality for the concave function $\ln$ at $x=\sigma^{-1}$), the above estimate can be further reduced to
\[
\int_M\varphi^2\ln\varphi^2dv(x)\le\frac n2\ln C(n)-\mu+\frac{n\sigma}{2}\int_M\left(4|\nabla \varphi|^2+\mathrm{R}\,\varphi^2\right)dv-\frac n2(1+\ln\sigma).
\]
Setting $\tau=\frac{n\sigma}{2}$, we obtain
\[
\int_M\varphi^2\ln \varphi^2dv\le\tau\int_M\left(4|\nabla\varphi|^2+\mathrm{R}\varphi^2\right)dv-\mu-n-\frac n2\ln(4\pi\tau)
+\frac n2\ln(2ne\pi\cdot C(n))
\]
and hence (II) follows with possibly different constants.
\vspace{.1in}
(III) $\Rightarrow$ (I): Since (III) implies \eqref{upp2} with different constants, \eqref{upp2} further implies (I) by following the proof of Lemma \ref{lemm2} in Section \ref{sec2}.
\vspace{.1in}
(IV) $\Rightarrow$ (III): Since \eqref{upp2} is equivalent to (III), we only need to apply (IV) to prove \eqref{upp2}. By the approximation argument, we only need to prove \eqref{upp2} for the Dirichlet Schr\"odinger heat kernel $H_{\Omega}^\mathrm{R}(x,y,t)$ of any relatively compact set $\Omega$ in $(M,g,f)$. In fact, let $\Omega_i$, $i=1,2,\ldots$, be an exhaustion of $M$ by relatively compact sets such that $\overline{\Omega}_i\subset\Omega_{i+1}$ and $\cup_i\Omega_i=M$. If we are able to prove \eqref{upp2} for the Dirichlet Schr\"odinger heat kernel $H_{\Omega_i}^\mathrm{R}(x,y,t)$ for any $i$, then the result follows by letting $i\to\infty$. For a fixed point $y\in \Omega$, let $u=u(x,t)=H_{\Omega}^{\mathrm{R}}(x,y,t)$ and consider the integral
\[
I(t):=\int_{\Omega}u^2(x,t)dv.
\]
Then,
\begin{equation}\label{deriv}
I'(t)=2\int_{\Omega}uu_tdv=-2\int_{\Omega}\left(|\nabla u|^2+\frac14\R u^2\right)dv.
\end{equation}
For any positive number $s$, we have
\[
u^2\le (u-s)^2_{+}+2s u
\]
and therefore,
\[
I(t)\le\int_{\Omega}(u-s)_{+}^2dv+2s\int_{\Omega} u dv.
\]
Now for fixed $s,\, t>0$, consider the set
\[
D(s,t):=\{x\,|\,x\in M,\ u(x,t)>s\}
\]
and its first eigenvalue
\[
\lambda(D(s,t))=\inf_{0\neq \varphi\in C^{\infty}_0(D(s,t))}\frac{\int_{D(s,t)}(|\nabla\varphi|^2+\frac{\R}{4} \varphi^2)dv}{\int_{D(s,t)}\varphi^2dv}.
\]
Letting $\varphi=(u-s)_{+}$, then
\begin{align*}
\lambda(D(s,t))\left(I(t)-2s\right)&\le \int_{D(s,t)}\left(|\nabla(u-s)_{+}|^2+\frac{\R}{4}\left((u-s)_{+}\right)^2\right)dv\\
&\le\int_{\Omega}\left(|\nabla u|^2+\frac{\R}{4} u^2\right)dv.
\end{align*}
Note that in the first inequality above we discarded a nonnegative term and used the Schr\"odinger heat kernel property $\int_{\Omega}u(x,t)dv\le 1$. By Chebyshev's inequality, this property also indicates that
\[
V(D(s,t))\le s^{-1}.
\]
On the other hand, by the Faber-Krahn inequality, we have
\begin{align*}
\lambda(D(s,t))&\ge\frac{2n\pi}{e}\left(\frac{e^\mu}{V(D(s,t))}\right)^{\frac 2n}\\
&\ge\frac{2n\pi}{e}\left(e^\mu\cdot s\right)^{\frac 2n}.
\end{align*}
We remark that if the set $D(s,t)$ is not relatively compact, we can choose a sequence of relatively compact sets converging to it, so that the Faber-Krahn inequality remains valid for $D(s,t)$. Combining the above two inequalities, we obtain
\[
I(t)\le\frac{e^{1-2\mu/n}}{2n\pi}\int_{\Omega}\left(|\nabla u|^2+\frac{\R}{4} u^2\right)dv\cdot s^{-2/n}+2s.
\]
Minimizing the right hand side of the above inequality over $s>0$,
\[
I(t)\le c(n)e^{-\frac{2\mu}{n+2}}\left[\int_{\Omega}\left(|\nabla u|^2+\frac{\R}{4} u^2\right)dv\right]^{\frac{n}{n+2}}.
\]
Combining this with \eqref{deriv}, we have
\[
I'(t)\le -c(n)e^{\frac{2\mu}{n}}I^{\frac{n+2}{n}}.
\]
Integrating this from $t/2$ to $t$,
\[
I(t)\le c(n)\frac{e^{-\mu}}{t^{n/2}}
\]
for $t>0$. In other words, we in fact get
\[
\int_{\Omega}\left(H_{\Omega}^{\mathrm{R}}\right)^2(x,y,t)dv(y)\le c(n)\frac{e^{-\mu}}{t^{n/2}}.
\]
By the Schr\"odinger heat kernel property, we indeed show that
\begin{align*}
H_{\Omega}^{\mathrm{R}}(x,x,2t)&=\int_{\Omega}H_{\Omega}^{\mathrm{R}}(x,y,t)H_{\Omega}^{\mathrm{R}}(y,x,t)dv(y)\\
&\le c(n)\frac{e^{-\mu}}{t^{n/2}}.
\end{align*}
This further implies
\begin{align*}
H_{\Omega}^{\mathrm{R}}(x,y,t)&=\int_{\Omega}H_{\Omega}^{\mathrm{R}}(x,z,t/2)H_{\Omega}^{\mathrm{R}}(z,y,t/2)dv(z)\\
&\le\left(\int_{\Omega}(H_{\Omega}^{\mathrm{R}})^2(x,z,t/2)dv(z)\right)^{1/2}
\left(\int_{\Omega}(H_{\Omega}^{\mathrm{R}})^2(z,y,t/2)dv(z)\right)^{1/2}\\
&=(H_{\Omega}^{\mathrm{R}})^{1/2}(x,x,t)(H_{\Omega}^{\mathrm{R}})^{1/2}(y,y,t)\\
&\le c(n)\frac{e^{-\mu}}{t^{n/2}}.
\end{align*}
Next we apply the same argument as in the proof of Theorem 1.1 in \cite{[W20]} to get an upper bound with a Gaussian exponential factor, and finally (III) follows.
\vspace{.1in}
(I) $\Rightarrow$ (V): We remark that the Nash inequality can be viewed as an interpolation between the H\"older inequality and the Sobolev inequality. Assume that $(M,g,f)$ admits \eqref{sobo}. By the H\"older inequality with the exponents $p_1=\frac{n+2}{n-2}$ and $p_2=\frac{n+2}{4}$, we have
\begin{align*}
\int_M\varphi^2dv&=\int_M\varphi^{\frac{2n}{n+2}}\varphi^{\frac{4}{n+2}}dv\\
&\le \left(\int_M\varphi^{\frac{2n}{n+2}p_1}dv\right)^{1/{p_1}}\left(\int_M\varphi^{\frac{4}{n+2}p_2}dv\right)^{1/{p_2}}\\
&=\left(\int_M\varphi^{\frac{2n}{n-2}}dv\right)^{\frac{n-2}{n+2}}\left(\int_M|\varphi|dv\right)^{\frac{4}{n+2}}
\end{align*}
and hence,
\[
\parallel\varphi\parallel^{2+\frac 4n}_2\le \left(\int_M\varphi^{\frac{2n}{n-2}}dv\right)^{\frac{n-2}{n}}\left(\int_M|\varphi|dv\right)^{\frac 4n}.
\]
Combining this with the Sobolev inequality \eqref{sobo} gives the Nash inequality \eqref{Nash}.
\vspace{.1in}
(V) $\Rightarrow$ (III): Using the approximation argument, it suffices to prove \eqref{upp} for the Dirichlet Schr\"odinger heat kernel $H_{\Omega}^\mathrm{R}(x,y,t)$ of any relatively compact set $\Omega$ in $(M,g,f)$. For any $y\in M$, let $\varphi=\varphi(x,t)=H_{\Omega}^{\mathrm{R}}(x,y,t)$. Then
\begin{align*}
\frac{\partial}{\partial t}\left(\int_{\Omega}\varphi^2dv\right)&=\int_{\Omega}2\varphi\varphi_tdv\\
&=\int_{\Omega}2\varphi(\Delta\varphi-\frac 14\R\varphi)dv\\
&=-\frac 12\int_{\Omega}\left(4|\nabla \varphi|^2+\R\varphi^2\right)dv.
\end{align*}
Since the Dirichlet Schr\"odinger heat kernel satisfies $\parallel\varphi(\cdot,t)\parallel_1\le1$ for all $t>0$, the Nash inequality (V) yields
\[
\parallel\varphi\parallel^{2+\frac 4n}_2\le c(n)e^{-\frac{2\mu}{n}} \int_{\Omega}\left(4|\nabla \varphi|^2+\mathrm{R}\,\varphi^2\right)dv.
\]
Combining the above estimates, we have
\[
\frac{\partial}{\partial t}\left(\int_{\Omega}\varphi^2dv\right)\le-\frac{e^{\frac{2\mu}{n}}}{2c(n)}\parallel\varphi\parallel^{2+\frac 4n}_2.
\]
Let
\[
F(s):=\int_{\Omega}\varphi^2(x,s)dv,
\]
where $s\in(0,t]$. Then
\[
\frac{\partial}{\partial s}F(s)\le-\frac{e^{\frac{2\mu}{n}}}{2c(n)}F(s)^{1+\frac 2n}.
\]
Integrating it from $t/2$ to $t$ yields
\[
-\frac n2\left(F(t)^{-\frac 2n}-F(t/2)^{-\frac 2n}\right)\le-\frac{e^{\frac{2\mu}{n}}}{2c(n)}\cdot\frac t2,
\]
which implies that
\[
F(t)\le\left[2nc(n)\right]^{n/2}\frac{e^{-\mu}}{t^{n/2}}.
\]
This estimate is the same as the one for $I(t)$ in the proof of the case (IV) $\Rightarrow$ (III). Therefore we need only use the same strategy as in the proof of Theorem 1.1 in \cite{[W20]} to get an upper bound with a Gaussian exponential factor and finally prove (III).
\end{proof}
\section{Gap result for Weyl tensor}\label{sec4}
In this section we will prove Theorem \ref{pingap} by using the arguments of \cite{[Cati],[FX]}. We first recall the elliptic equation for the norm of the traceless Ricci tensor on shrinkers, which can be computed directly from Lemma \ref{formu} (see also Lemma 3.2 in \cite{[Cati]}).
\begin{lemma}\label{formulas2}
Let $(M^n,g, f)$ be an $n$-dimensional shrinker satisfying \eqref{Eq1}. Then
\[
\frac 12\Delta_f |\rd|^2=|\nabla\rd|^2 +|\rd|^2 -2W_{ijkl}\rdc_{ik}\rdc_{jl}+\frac{4}{n-2}\rdc_{ij}\rdc_{jk}\rdc_{ik}-\frac{2(n-2)}{n(n-1)}\mathrm{R}|\rd|^2.
\]
\end{lemma}
In the following we will apply arguments similar to those of \cite{[Cati],[FX]} to prove gap theorems on shrinkers. In our case, we need to carefully deal with the constant $C(n)$ of the Sobolev inequality \eqref{sobo}.
\begin{proof}[Proof of Theorem \ref{pingap}]
By Lemma \ref{formulas2}, using
\[
\frac 12\Delta|\rd|^2=|\nabla|\rd||^2+|\rd|\Delta|\rd|
\]
and the Kato inequality
\[
|\nabla\rd|^2\geq |\nabla|\rd||^2
\]
at each point where $|\rd|\neq 0$, we obtain
\begin{equation}\label{ricci}
|\rd|\Delta|\rd|\ge |\rd|^2-2W_{ijkl}\rdc_{ik}\rdc_{jl}+\frac {4}{n-2}\rdc_{ij}\rdc_{jk}\rdc_{ki}-\frac {2(n-2)}{n(n-1)}\R |\rd|^2+\frac 12\langle\nabla f, \nabla|\rd|^2\rangle.
\end{equation}
To simplify the notation, we let $u:=|\rd|$.
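In this notation, at each point where $u\neq0$, \eqref{ricci} reads
\[
u\Delta u\ge u^2-2W_{ijkl}\rdc_{ik}\rdc_{jl}+\frac{4}{n-2}\rdc_{ij}\rdc_{jk}\rdc_{ki}-\frac{2(n-2)}{n(n-1)}\R u^2+\frac 12\langle\nabla f, \nabla u^2\rangle;
\]
this is the form used in the computation below.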
Then for any positive number $s$, which will be determined later, by \eqref{ricci}, we compute that
\begin{align*}
u^s\Delta u^s&=s(s-1)u^{2s-2}|\nabla u|^2+s u^{2s-1}\Delta u\\
&=\left(1-\frac{1}{s}\right)|\nabla u^s|^2+s u^{2s-2}\,u\Delta u\\
&\ge\left(1-\frac{1}{s}\right)|\nabla u^s|^2+su^{2s}+s\left(-2W_{ijkl}\rdc_{ik}\rdc_{jl}+\frac{4}{n-2}\rdc_{ij}\rdc_{jk}\rdc_{ki}\right)u^{2s-2}\\
&\quad-\frac {2(n-2)}{n(n-1)}s \R u^{2s}+\frac{s}{2} u^{2s-2}\langle\nabla f, \nabla u^2\rangle.
\end{align*}
Using Lemma \ref{prlg}, we further have
\begin{align*}
u^s\Delta u^s
&\ge\left(1-\frac{1}{s}\right)|\nabla u^s|^2+s u^{2s}-\sqrt{\frac {2(n-2)}{n-1}}s\left(|W|^2+\frac{8u^2}{n(n-2)}\right)^{\frac 12}u^{2s}\\
&\quad-\frac {2(n-2)}{n(n-1)}s\R u^{2s}+\frac 12\langle\nabla f, \nabla u^{2s}\rangle.
\end{align*}
Since $M^n$ is closed, integrating by parts over $M^n$ and using the equality
\[
\Delta f=\frac n2-\R
\]
from Lemma \ref{formu}, we have that
\begin{align*}
0&\ge\left(2-\frac{1}{s}\right)\int_M |\nabla u^s|^2dv+s\int_M u^{2s}dv
-\sqrt{\frac{2(n-2)}{n-1}}s\int_M\left(|W|^2+\frac{8 u^2}{n(n-2)}\right)^{\frac 12}u^{2s}dv\\
&\quad-\frac{2(n-2)}{n(n-1)}s\int_M \R u^{2s}dv
-\frac 12 \int_M u^{2s}\Delta fdv\\
&=\left(2-\frac{1}{s}\right)\int_M |\nabla u^s|^2dv
-\sqrt{\frac {2(n-2)}{n-1}}s\int_M\left(|W|^2+\frac{8 u^2}{n(n-2)}\right)^{\frac 12}u^{2s}dv\\
&\quad-\left(\frac n4-s\right)\int_M u^{2s}dv+\frac{n(n-1)-4(n-2)s}{2n(n-1)}\int_M \R u^{2s}dv.
\end{align*}
For $2-{1}/{s}>0$, the Sobolev inequality \eqref{sobo} for shrinkers, applied with $\varphi=u^s$ and rearranged, gives
\begin{equation}\label{sobinequ}
\int_M|\nabla u^s|^2dv\ge\frac{e^{\frac{2\mu}{n}}}{4C(n)}\left(\int_Mu^{\frac{2ns}{n-2}}\,dv\right)^{\frac{n-2}{n}}
-\frac 14\int_M\R u^{2s}dv,
\end{equation}
so the above inequality becomes
\begin{align*}
0&\ge\left(2-\frac{1}{s}\right)\frac{e^{\frac{2\mu}{n}}}{4C(n)}\left(\int_M u^{\frac{2ns}{n-2}}dv\right)^{\frac{n-2}{n}}
-\sqrt{\frac {2(n-2)}{n-1}}s\int_M\left(|W|^2+\frac{8 u^2}{n(n-2)}\right)^{\frac 12}u^{2s}dv\\
&\quad-\left(\frac n4-s\right)\int_M u^{2s}dv+\frac{n(n-1)-8(n-2)s^2}{4n(n-1)s}\int_M \R u^{2s}dv.
\end{align*}
By the H\"{o}lder inequality, for $n-4s\ge0$, we get that
\begin{align*}
0&\ge\left\{\left(2-\frac{1}{s}\right)\frac{e^{\frac{2\mu}{n}}}{4C(n)}-\sqrt{\frac {2(n-2)}{n-1}}s\left[\int_M\left(|W|^2+\frac{8 u^2}{n(n-2)}\right)^{\frac n4}dv\right]^{\frac 2n}-\left(\frac n4-s\right)V(M)^{\frac 2n}\right\}\\
&\quad\times\left(\int_M u^{\frac{2ns}{n-2}}dv\right)^{\frac{n-2}{n}}
+\frac{n(n-1)-8(n-2)s^2}{4n(n-1)s}\int_M \R u^{2s}dv.
\end{align*}
Now we choose
\[
s=\sqrt{\frac{n(n-1)}{8(n-2)}}\in\left(\frac 12,\,\,\frac n4\right]
\]
(the inclusion $s\le\frac n4$ holds precisely when $n\ge4$), so that the last term of the above inequality vanishes. Moreover, notice that the curvature integral assumption of the theorem is equivalent to
\begin{equation}\label{pinchco}
\left(2-\frac{1}{s}\right)\frac{e^{\frac{2\mu}{n}}}{4C(n)}-\sqrt{\frac {2(n-2)}{n-1}}s\left[\int_M \left(|W|^2+\frac{8 u^2}{n(n-2)}\right)^{\frac n4}dv\right]^{\frac 2n}-\left(\frac n4-s\right)V(M)^{\frac 2n}>0,
\end{equation}
where we used the equality
\[
\left|W+\frac{\sqrt{2}}{\sqrt{n}(n-2)}\rd\circ g\right|^2=|W|^2+\frac{8}{n(n-2)}|\rd|^2,
\]
which holds since $W$ is totally trace-free. Therefore, we conclude that $|\rd|\equiv 0$ and $(M,g, f)$ is Einstein. Now we have $\Ric=\frac12 g$.
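Indeed, once $\rd\equiv 0$, we have $\Ric=\frac{\R}{n}g$ with $\R$ constant by Schur's lemma, and integrating the identity
\[
\Delta f=\frac n2-\R
\]
from Lemma \ref{formu} over the closed manifold $M$ gives $\R\equiv\frac n2$, hence $\Ric=\frac{\R}{n}g=\frac12g$.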
By \eqref{Eq2}, we know
\[
f=\frac n2 \quad\mathrm{and}\quad (4\pi e)^{-\frac n2}V(M)=e^{\mu},
\]
and the pinching condition \eqref{pinchco} reduces to
\begin{equation}\label{pic}
\left({\bbint}_M|W|^{\frac n2}dv\right)^{\frac 2n}\le\epsilon_1(n):=
\sqrt{\frac{n-1}{32(n-2)}}\left[\left(\frac{2}{s}-\frac{1}{s^2}\right)\frac{C(n)^{-1}}{4\pi e}+4-\frac{n}{s}\right],
\end{equation}
where ${\bbint}_M$ denotes the average integral, i.e.,
\[
{\bbint}_M|W|^{\frac n2}dv=\frac{1}{V(M)}\int_M|W|^{\frac n2}dv.
\]
By Remark \ref{xian}, we see
\[
C(n)\ge\frac{n-1}{2n(n-2)\pi e}
\]
and hence
\[
\epsilon_1(n)\leq\sqrt{\frac{n-1}{32(n-2)}}\left[\left(\frac{2}{s}-\frac{1}{s^2}\right)\frac{n(n-2)}{2(n-1)}
+4-\frac{n}{s}\right].
\]
Notice that the right hand side of the above inequality is nonnegative if
\begin{equation}\label{picond}
s\ge\frac{n+\sqrt{n^2+8n(n-1)(n-2)}}{8(n-1)}.
\end{equation}
Since $s=\sqrt{\frac{n(n-1)}{8(n-2)}}$, it is easy to check that the above inequality holds only when $4\le n\le 8$. We then carefully calculate the constants as follows:
\begin{align*}
\epsilon_1(4)&\le\frac{5\sqrt{3}-6}{18}\approx 0.1478, \quad\epsilon_1(5)\le\frac{7\sqrt{6}-6\sqrt{5}}{48}\approx 0.0778, \quad\epsilon_1(6)\le \frac{9\sqrt{10}-10\sqrt{6}}{100}\approx 0.0397,\\
\epsilon_1(7)&\le\frac{\sqrt{3}(11-\sqrt{105})}{36\sqrt{5}}\approx 0.0162, \quad\epsilon_1(8)\le\frac{13\sqrt{21}-42\sqrt{2}}{294}\approx 6\times 10^{-4}.
\end{align*}
These constants in \eqref{pic} are strictly smaller than the corresponding constants in the following Proposition \ref{Eisrig} (applied with $k=\frac12$), and hence $(M,g,f)$ is isometric to a quotient of the sphere.
\end{proof}
The following result is a gap result for Einstein manifolds, which was essentially proved by Catino \cite{[Cati]}. The present version of the pinching constants is a little better than that of Theorem 3.3 in \cite{[Cati]} and is more suitable for our purposes.
\begin{proposition}\label{Eisrig}
Let $(M^{n},g)$ be an $n$-dimensional Einstein manifold with $\mathrm{Ric}=k g$, where $k>0$ is a constant. There exists a positive constant $\epsilon_2(n)$ depending only on $n$ such that if
\[
\left(\bbint_M|W|^{\frac n2}dv\right)^{\frac 2n}<\epsilon_2(n),
\]
where ${\bbint}_M|W|^{\frac n2}dv=\frac{1}{V(M)}\int_M|W|^{\frac n2}dv$, then $(M^{n},g)$ is isometric to a quotient of the round sphere of radius $\sqrt{\frac{n-1}{k}}$. We can take $\epsilon_2(4)=\frac{14}{5\sqrt{6}}k$, $\epsilon_2(5)=\frac{4}{5}k$, $\epsilon_2(6)=\frac{16\sqrt{3}}{9\sqrt{70}}k$, $\epsilon_2(7)=\frac{49}{125}k$, $\epsilon_2(8)=\frac{267}{625}k$, $\epsilon_2(9)=\frac{23}{50}k$ and $\epsilon_2(n)=\frac{2n}{5(n-1)}k$ if $n\ge 10$.
\end{proposition}
\begin{proof}[Proof of Proposition \ref{Eisrig}]
Following the argument in \cite{[HV],[Cati]}, we have the Bochner type formula for $|W|^2$:
\[
\frac{1}{2}\Delta|W|^2=|\nabla W|^2+2k|W|^2-2\left(2W_{ijkl}W_{ipkq}W_{pjql}+\frac{1}{2}W_{ijkl}W_{klpq}W_{pqij}\right).
\]
Since
\[
\frac 12\Delta|W|^2=|\nabla|W||^2+|W|\Delta|W|,
\]
we then have
\[
|W|\Delta|W|=|\nabla W|^2-|\nabla|W||^2+2k|W|^2-2\left(2W_{ijkl}W_{ipkq}W_{pjql}+\frac{1}{2}W_{ijkl}W_{klpq}W_{pqij}\right).
\]
Using Lemma \ref{leal2} and the refined Kato inequality
\[
|\nabla W|^2\ge\frac{n+1}{n-1}|\nabla|W||^2
\]
at every point where $|W|\neq 0$, we get
\begin{equation}\label{wye}
|W|\Delta|W|\ge\frac{2}{n-1}|\nabla|W||^2+2k|W|^2-2c(n)|W|^3,
\end{equation}
where $c(n)$ is a dimensional constant, which is defined by $c(4)=\frac{\sqrt{6}}{4}$, $c(5)=1$, $c(6)=\frac{\sqrt{70}}{2\sqrt{3}}$ and $c(n)=\frac{5}{2}$ for $n\geq 7$. Similarly to the preceding computation, we set $u:=|W|$ and consider the quantity $u^s$, where $s$ is a positive number to be chosen later. Using \eqref{wye}, we compute
\begin{align*}
u^s\Delta u^s&=\left(1-\frac{1}{s}\right)|\nabla u^s|^2+s u^{2s-2}\,u\Delta u\\
&\ge\left(1-\frac{1}{s}\right)|\nabla u^s|^2+\frac{2s}{n-1}u^{2s-2}|\nabla u|^2+2ksu^{2s}-2c(n)su^{2s+1}\\
&=\left(1-\frac{n-3}{(n-1)s}\right)|\nabla u^s|^2+2ksu^{2s}-2c(n)su^{2s+1}.
\end{align*}
Since $M^n$ is closed, integrating the above inequality over $M^n$ yields
\begin{equation}\label{wye2}
0\ge\left(2-\frac{n-3}{(n-1)s}\right)\int_M|\nabla u^s|^2dv+2ks\int_Mu^{2s}dv-2c(n)s\int_{M}u^{2s+1}dv.
\end{equation}
Recall that Ilias \cite{[Il]} proved a Sobolev inequality on manifolds satisfying $\mathrm{Ric}\ge kg$ with $k>0$, which, applied to the function $u^s$, reads in our setting
\[
\left(\int_M u^{\frac{2ns}{n-2}}\,dv\right)^{\frac{n-2}{n}}\le \frac{4(n-1)}{n(n-2)k}V(M)^{-2/n}\int_M|\nabla u^s|^2dv+V(M)^{-2/n}\int_M u^{2s} dv.
\]
Applying the H\"older inequality and the above Sobolev inequality, \eqref{wye2} becomes
\begin{align*}
0\ge&\left(2-\frac{n-3}{(n-1)s}\right)\int_M|\nabla u^s|^2dv+2ks\int_Mu^{2s}dv
-2c(n)s\left(\int_{M}u^{\frac n2}dv\right)^{\frac 2n}\left(\int_Mu^{\frac{2ns}{n-2}}dv\right)^{\frac{n-2}{n}}\\
\ge&\left(2-\frac{n-3}{(n-1)s}\right)\int_M|\nabla u^s|^2dv+2ks\int_Mu^{2s}dv\\
&-2c(n)sV(M)^{-2/n}\left(\int_Mu^{\frac n2}dv\right)^{\frac 2n}\left(\frac{4(n-1)}{n(n-2)k}\int_M|\nabla u^s|^2dv
+\int_Mu^{2s}dv \right).
\end{align*}
The assumption of the proposition can be written in the equivalent form
\[
\left(\int_Mu^{\frac n2}dv\right)^{\frac 2n}< \epsilon_2(n)V(M)^{2/n}.
\]
Therefore, for $s>0$, if $\epsilon_2(n)$ satisfies
\begin{equation*}
\begin{cases}
2-\frac{n-3}{(n-1)s}-8c(n)s\epsilon_2(n)\frac{(n-1)}{n(n-2)k} \,\geq\, 0,\\
2ks-2c(n)\epsilon_2(n) \,\geq\, 0,
\end{cases}
\end{equation*}
we immediately have $W\equiv 0$ and $g$ has constant positive sectional curvature. Here we give explicit constants such that the above two inequalities hold. When $n=4$ and $s=\frac{7}{10}$, since $c(4)=\frac{\sqrt{6}}{4}$, we can take $\epsilon_2(4)=\frac{14}{5\sqrt{6}}k$. When $n=5$ and $s=\frac{4}{5}$, since $c(5)=1$, we can take $\epsilon_2(5)=\frac{4}{5}k$. When $n=6$ and $s=\frac{9}{10}$, since $c(6)=\frac{\sqrt{70}}{2\sqrt{3}}$, we can take $\epsilon_2(6)=\frac{16\sqrt{3}}{9\sqrt{70}}k$. When $n=7$ and $s=\frac{49}{50}$, since $c(7)=\frac{5}{2}$, we can take $\epsilon_2(7)=\frac{49}{125}k$. When $n=8$ and $s=\frac{267}{250}$, since $c(8)=\frac{5}{2}$, we can take $\epsilon_2(8)=\frac{267}{625}k$. When $n=9$ and $s=\frac{23}{20}$, since $c(9)=\frac{5}{2}$, we can take $\epsilon_2(9)=\frac{23}{50}k$. When $n\ge10$ and $s=\frac{n}{n-1}$, since $c(n)=\frac{5}{2}$, we can take $\epsilon_2(n)=\frac{2n}{5(n-1)}k$.
\end{proof}
\section{Gap result for half Weyl tensor}\label{half}
In this section we will apply arguments similar to those of Section \ref{sec4} to discuss a gap phenomenon for shrinkers under an integral condition on the half Weyl tensor.
Recall that, on an oriented $4$-dimensional shrinker $(M,g,f)$, the bundle of $2$-forms $\wedge^2 M$ can be decomposed as a direct sum
\[
\wedge^2 M=\wedge^+ M\oplus\wedge^-M,
\]
where $\wedge^{\pm} M$ is the $(\pm 1)$-eigenspace of the Hodge star operator
\[
\star: \wedge^2 M \rightarrow \wedge^2 M.
\]
Let $\{e_i\}^4_{i=1}$ be an oriented orthonormal basis of the tangent bundle $T M$. For any pair $(ij)$, $1\leq i\neq j\leq 4$, let $(i'j')$ denote the dual of $(ij)$, i.e., the pair such that
\[
e_i\wedge e_j\pm e_{i'}\wedge e_{j'}\in \wedge^{\pm}M.
\]
In other words, $(iji'j')=\sigma(1234)$ for some even permutation $\sigma\in S_4$. For the Weyl tensor $W$, its (anti-)self-dual part is
\begin{equation*}
W^{\pm}_{ijkl}=\frac{1}{4}\left(W_{ijkl}\pm W_{ijk'l'}\pm W_{i'j'kl}+W_{i'j'k'l'}\right).
\end{equation*}
It is easy to check that
\[
W^{\pm}_{ijkl}=\pm W^{\pm}_{ijk'l'}=\pm W^{\pm}_{i'j'kl}=W^{\pm}_{i'j'k'l'}=\frac{1}{2}(W_{ijkl}{\pm}W_{ijk'l'}).
\]
On shrinkers, we have the following Weitzenb\"ock formula for $W^{\pm}$ (see \cite{[CaTr]} or its generalization \cite{[Wp]}), which is useful for analyzing the structure of shrinkers.
\begin{lemma} \label{weitenbock}
Let $(M, g, f)$ be a four-dimensional shrinker satisfying \eqref{Eq0}. Then
\[
\frac 12\Delta_f|W^{\pm}|^2=|\nabla W^{\pm}|^2+2\lambda|W^{\pm}|^2-18\det W^{\pm}-\frac 12\langle(\overset{\circ}{\mathrm{Ric}}\circ\overset{\circ}{\mathrm{Ric}})^{\pm},W^{\pm}\rangle.
\]
\end{lemma}
Using Lemma \ref{weitenbock}, we can prove Theorem \ref{hagap} in the introduction.
\begin{proof}[Proof of Theorem \ref{hagap}]
By Lemma \ref{weitenbock}, using the following algebraic inequality observed by Cao and Tran \cite{[CaTr]}
\[
\det W^{\pm}\le \frac{\sqrt{6}}{18}|W^{\pm}|^3
\]
and the Kato inequality
\[
|\nabla W^{\pm}|^2\ge|\nabla|W^{\pm}||^2
\]
at every point where $|W^{\pm}|\neq 0$, we get
\[
\frac 12\Delta_f|W^{\pm}|^2\ge|\nabla|W^{\pm}||^2+2\lambda|W^{\pm}|^2-\sqrt{6}|W^{\pm}|^3-\frac 12\langle(\overset{\circ}{\mathrm{Ric}}\circ\overset{\circ}{\mathrm{Ric}})^{\pm},W^{\pm}\rangle.
\]
Since
\[
\frac 12\Delta|W^\pm|^2=|\nabla|W^\pm||^2+|W^\pm|\Delta|W^\pm|,
\]
we then have
\[
|W^\pm|\Delta|W^\pm|\ge 2\lambda|W^{\pm}|^2-\sqrt{6}|W^{\pm}|^3-\frac 12\langle(\overset{\circ}{\mathrm{Ric}}\circ\overset{\circ}{\mathrm{Ric}})^{\pm},W^{\pm}\rangle+\frac 12\langle\nabla f, \nabla|W^{\pm}|^2\rangle.
\]
Since $M$ is closed, integrating the above inequality over $M$ and integrating by parts, we have
\begin{equation}
\begin{aligned}\label{inteineqs}
0&\ge\int_M|\nabla |W^\pm||^2dv+2\lambda\int_M|W^{\pm}|^2dv-\sqrt{6}\int_M|W^{\pm}|^3dv\\
&\quad-\frac 12\int_M\langle(\overset{\circ}{\mathrm{Ric}}\circ\overset{\circ}{\mathrm{Ric}})^{\pm},W^{\pm}\rangle dv
-\frac 12\int_M |W^{\pm}|^2 \Delta fdv.
\end{aligned}
\end{equation}
Note that Cao and Tran (Corollary 5.8 in \cite{[CaTr]}) observed
\[
\int_M\langle(\overset{\circ}{\mathrm{Ric}}\circ\overset{\circ}{\mathrm{Ric}})^{\pm},W^{\pm}\rangle dv
=4\int_M|\delta W^{\pm}|^2dv,
\]
and hence the second assumption of the theorem is in fact
\[
\int_M\langle(\overset{\circ}{\mathrm{Ric}}\circ\overset{\circ}{\mathrm{Ric}})^{\pm},W^{\pm}\rangle dv
\le\frac{1}{2}\int_M{\R}|W^{\pm}|^2dv.
\]
Using this, \eqref{inteineqs} becomes
\begin{align*}
0&\ge\int_M|\nabla |W^\pm||^2dv+2\lambda\int_M|W^{\pm}|^2dv-\sqrt{6}\int_M|W^{\pm}|^3dv\\
&\quad-\frac{1}{4}\int_M{\R}|W^{\pm}|^2dv-\frac 12\int_M |W^{\pm}|^2 \Delta fdv.
\end{align*}
Using the equality for shrinkers
\[
\Delta f=4\lambda-\R,
\]
we further have
\begin{equation}\label{evoineqty}
0\ge\int_M|\nabla |W^\pm||^2dv-\sqrt{6}\int_M|W^{\pm}|^3dv+\frac{1}{4}\int_M{\R}|W^{\pm}|^2dv.
\end{equation}
By the Sobolev inequality \eqref{sobo} for shrinkers with $\varphi=|W^{\pm}|$ (here $n=4$),
\[
\int_M|\nabla |W^{\pm}||^2dv\ge\frac{e^{\frac{\mu}{2}}}{4C(4)}\left(\int_M|W^{\pm}|^4dv\right)^{\frac{1}{2}}
-\frac 14\int_M\R |W^{\pm}|^2dv,
\]
so \eqref{evoineqty} can be simplified as
\begin{align*}
0&\ge\frac{e^{\frac{\mu}{2}}}{4C(4)}\left(\int_M|W^{\pm}|^4dv\right)^{\frac{1}{2}}-\sqrt{6}\int_M|W^{\pm}|^3dv.
\end{align*}
Using the H\"{o}lder inequality,
\begin{align*}
0&\ge\left[\frac{e^{\frac{\mu}{2}}}{4C(4)}-\sqrt{6}\left(\int_M|W^{\pm}|^2dv\right)^{\frac{1}{2}}\right]
\left(\int_M|W^{\pm}|^4dv\right)^{\frac{1}{2}}.
\end{align*}
By the first assumption of the theorem, we immediately get $W^{\pm}\equiv0$. Finally we apply the classification of Chen and Wang \cite{[ChWa]} to conclude that the shrinker is isometric to a finite quotient of the round sphere or the complex projective space.
\end{proof}
At the end of this section, following the argument of Catino \cite{[Cati]}, we can apply the Yamabe constant to give another gap result, i.e., Theorem \ref{hagap2} in the introduction.
\begin{proof}[Proof of Theorem \ref{hagap2}]
Similarly to the argument of Theorem \ref{hagap}, using the second assumption of the theorem, \eqref{inteineqs} can also be written as
\begin{align*}
0&\ge\int_M|\nabla |W^\pm||^2dv+2\lambda\int_M|W^{\pm}|^2dv-\sqrt{6}\int_M|W^{\pm}|^3dv\\
&\quad-\frac 13\int_M{\R}|W^{\pm}|^2dv-\frac 12\int_M |W^{\pm}|^2 \Delta fdv.
\end{align*}
Using the shrinker equality
\[
\Delta f=4\lambda-\R,
\]
we obtain
\begin{equation}\label{evoineqtyd}
0\ge\int_M|\nabla |W^\pm||^2dv-\sqrt{6}\int_M|W^{\pm}|^3dv+\frac 16\int_M{\R}|W^{\pm}|^2dv.
\end{equation}
We will apply the Yamabe constant to estimate the first gradient term in the above inequality. Recall that the Yamabe constant $Y(M,[g])$ is defined by
\[
Y(M,[g]):=\inf_{\varphi\in W^{1,2}(M)}\frac{\frac{4(n-1)}{n-2}\int_M|\nabla\varphi|^2dv_g+\int_M\R \varphi^2dv_g}
{(\int_M|\varphi|^{2n/(n-2)}dv_g)^{(n-2)/n}},
\]
where $[g]$ denotes the conformal class of $g$. As is well known, $Y(M,[g])$ is positive on a compact manifold if and only if there exists a conformal metric in $[g]$ whose scalar curvature is positive everywhere. Since a compact shrinker has positive scalar curvature, it has positive Yamabe constant $Y(M,[g])$. If we let $\varphi=|W^{\pm}|$ on a four-dimensional compact shrinker, then the definition of the Yamabe constant $Y(M,[g])$ (with $n=4$) implies the following inequality
\[
\int_M|\nabla |W^{\pm}||^2dv\ge\frac{Y(M,[g])}{6}\left(\int_M|W^{\pm}|^4dv\right)^{\frac{1}{2}}
-\frac 16\int_M\R |W^{\pm}|^2dv.
\]
Using this, \eqref{evoineqtyd} can be reduced to
\[
0\ge\frac{Y(M,[g])}{6}\left(\int_M|W^{\pm}|^4dv\right)^{\frac{1}{2}}
-\sqrt{6}\int_M|W^{\pm}|^3dv.
\]
By the H\"{o}lder inequality, we have
\begin{equation}\label{evqya}
0\ge\left[\frac{Y(M,[g])}{6}-\sqrt{6}\left(\int_M|W^{\pm}|^2dv\right)^{\frac{1}{2}}\right]
\left(\int_M|W^{\pm}|^4dv\right)^{\frac{1}{2}}.
\end{equation}
Recall that Gursky \cite{[Gu94]} proved the following estimate on a compact four-dimensional manifold:
\[
\int_M\R^2dv-12\int_M|\rd|^2dv\le Y^2(M,[g]).
\]
Here this inequality is strict unless the manifold is conformally Einstein. Combining this with the first assumption of the theorem, we have
\[
6\sqrt{6}\left(\int_M|W^{\pm}|^2dv\right)^{\frac{1}{2}}\le Y(M,[g]).
\]
Combining this with \eqref{evqya}, we conclude that $W^{\pm}\equiv 0$ or $(M,g)$ is conformally Einstein. When $W^{\pm}\equiv0$, by Theorem \ref{hagap}, $(M^4,g, f)$ is isometric to a finite quotient of the round sphere or the complex projective space. When $(M,g)$ is conformally Einstein, it is Bach-flat (see Proposition 4.78 in \cite{[Be]}) and hence Einstein (see Theorem 1.1 in \cite{[CaCh]}). Now since $(M^4,g)$ is Einstein, combining the first pinching condition of the theorem with a gap result of Gursky and LeBrun (see Theorem 1 in \cite{[GL]}), we again get $W^{\pm}\equiv0$, and hence $(M^4,g, f)$ is isometric to a finite quotient of the round sphere or the complex projective space.
\end{proof}
\bibliographystyle{amsplain}
{ "redpajama_set_name": "RedPajamaArXiv" }
8,014
Indonesian Parliament Calls for Urgent Action to End Violence Against Women in Politics Ten signers of the declaration who represent Women's Parliamentary Caucus, the House of RegionalRepresentatives, and eight party fractions of the parliament in Indonesia committed to endingviolence against women in politics in the country. Photo: Fajar Nur Cahyadi TEMPO.CO, Jakarta - The Women's Parliamentary Caucus of the Republic of Indonesia signed a declaration today, November 29, 2022, to "condemn any form of gender-based violence that hinders women from fulfilling their equal rights" and to urge all groups to allow women to safely participate in politics. The document, signed at the parliament building, is the first-ever official declaration in the country that explicitly addresses gender-based violence in politics, one of the biggest obstacles to women achieving full political rights. The Women's Parliamentary Caucus comprises all 167 women members of the House of Representatives and the House of Regional Representatives. The signing was a part of the "parliamentarians standing up to violence against women in politics" event, organized by Westminster Foundation for Democracy and UN Women Indonesia to mark the United Nations 16 Days of Activism against Gender-Based Violence. Puan Maharani, Speaker of the House of Representatives (the country's first female Speaker), signed the declaration virtually. "From gendered double standards to sexual harassment, the unique obstacles faced by women running for offices need to be brought into sharp relief. Today, we gather here to convey a clear message: we must act together to break the culture of silence that perpetuates violence against women," stated Puan. Diah Pitaloka, Chairperson of the Presidium of the Women's Parliamentary Caucus, called on all parties to immediately ensurethe protection of women from all forms of violence as citizens who actively participate in both general and regional elections. During the event, representatives from the parliament, the National Commission on Violence against Women, leaders of political parties, and civil society activists discussed the structural and normative barriers to women in politics. The panelists spoke of how they themselves faced discrimination and hostility while running for office. Women's representation in the parliament has been increasing – they now occupy almost 22 percent of seats, compared to nine per cent in the country's first democratic election in 1999. However, Agus Wijayanto, Indonesia Country Representative of Westminster Foundation for Democracy, said that women make up almost 50 percent of the Indonesian population, yet they have not been adequately represented in our parliament. "Having more women win parliamentary seats is crucial to allow them to be fully involved in decision-making to benefit all women and girls. At WFD, we are committed to helping remove barriers to Indonesian women entering politics," said Agus. Jamshed Kazi, Representative and Liaison to ASEAN of UN Women Indonesia stated more men are needed to walk alongside women – as allies, peers and enablers – to break the glass ceilings that hinder women's meaningful political participation. "To ensure that all spaces where decision-making takes place are free from discrimination and violence against women in politics," added Kazi. 
A donation booth was set up at the event in partnership with Pundi Perempuan, a women's trust fund for responding to cases of violence against women and girls, to allow members of the parliament to support survivors of violence – the first step in turning their declaration into action. After the half-day signing event, the members of parliament resumed discussing the enforcement of the Sexual Violence Crime Bill, which the parliament passed earlier in April. womenPoliticssexual violence Afghanistan Opens Market for Women to Run Businesses This is the second market opened for women in Afghanistan's Herat province over the past couple of months. Gibran Rakabuming Discloses Kaesang Pangarep's Political Plans Solo Mayor Gibran Rakabuming Raka confirmed that Kaesang Pangarep expressed interest to go into politics. President Jokowi's Son Kaesang Hints Interest Towards Politics Jokowi's son, Solo Mayor Gibran Rakabuming, opens up on the chances of younger sibling Kaesang entering Indonesia's politics. Sri Mulyani Honored on Forbes '50 Over 50: Asia 2023' List Indonesia's Finance Minister Sri Mulyani Indrawati is among the women honored on the Forbes '50 Over 50: Asia 2023' List Young Women at Risk of Getting Heart Attack, Expert Explains Researches find that young women who experience heart attacks are more commonly known to have a history of high blood pressure. An All-Women Flogging Squad in Indonesia Aceh has introduced a female flogging squad to deter crimes. But activists say it is a distraction from bigger problems. Oligarchy May Keeps Controlling Politics: Economist on Job Creation Perppu Senior economist Didin S Damanhuri predicts that the economic oligarchy will increasingly control Indonesian politics in the next two years. BKKBN Advises Women to Avoid Pregnancy After 30 BKKBN advises women to avoid getting pregnant after they turn 30 years old to prevent the risk of fatality Sexual Harassment at Gunadarma University, Women Commission: Cannot Be Solved Amicably Komnas Perempuan says solving a sexual harassment case cannot be done in an amicable approach, such as what happened at the Gunadarma University case. Indonesian Female Comedians and Musicians Unite to Put an End to Violence Against Women The UNiTE event is held to spread awareness of gender-based violence through music and stand-up comedy.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,330
Tannenburg ist der Name folgender Burgen: Tannenburg (Bühlertann) (auch Schloss Tannenburg), Burg und Ortsteil der Gemeinde Bühlertann im Landkreis Schwäbisch Hall in Baden-Württemberg Burg Tannenberg (Nentershausen) (auch Tannenburg), Burg in der Gemeinde Nentershausen im Landkreis Hersfeld-Rotenburg in Hessen der Burgstall Alt-Tannenburg, am Ortsrand von Nentershausen im Richelsdorfer Gebirge, vermutlicher Vorgänger der Tannenburg Tannenburg (Schönau vor dem Walde), eine hochmittelalterliche Befestigungsanlage in der heutigen Gemeinde Leinatal im Landkreis Gotha in Thüringen Siehe auch Tannenberg
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,399
Q: How can I tell if a machine has PAE? I want to help someone upgrade an oldish laptop from 11.10 to 12.04, which requires PAE. I am not sure whether they have PAE or not. I know it is likely that they do have it after all, but how can I tell before trying to upgrade? A: Another option (which uses a GUI) involves using Hardinfo (System Profiler and Benchmark. * *Under devices, select Processor. *From here, you can see the processor's capabilities (along with their simple descriptions). *If PAE is not listed, your processor does not support it. * *As you can tell, the processor in this example does not. * *and this one does. A: From the terminal, simply type the following. cat /proc/cpuinfo Scroll down and check the flags. PAE will be listed in the flags if supported. A: Citing https://help.ubuntu.com/community/EnablingPAE: To check if your processor supports PAE, try grep --color=always -i PAE /proc/cpuinfo If it outputs something, you have PAE support. Otherwise, the output will be empty. A: Try: inxi -f Or, to see it in red color (if present): inxi -f | grep -i pae
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,991
Pedestrians and transit riders come last [VIDEO] By Sarah Goodyear on Jul 22, 2011 [vodpod id=ExternalVideo.1011881&w=525&h=350&fv=video%3D1550369887%26player%3Dviral%26end%3D501829%26lr_admap%3Din%3Apbs%3A0] After my post yesterday about the devastating case of Raquel Nelson — the Atlanta-area woman who was convicted of vehicular homicide after her son was struck by a driver while they were crossing a busy road — I've been hearing from a lot of readers. Commenter pdxcityscape posted a link to a PBS Blueprint America segment I can't believe I hadn't seen yet. It exposes the dangerous design flaws on another Atlanta-area road, Buford Highway, and explains how outdated, auto-centric planning standards fail to serve an increasingly poor and carless suburban population. The results are often fatal. It's a terrific report. If you care about this stuff, watch the whole thing. Another reader, Alison Stone, sent me a link to a story from back in the late '90s that underlines the classism inherent in the way transit riders and pedestrians are treated. Seventeen-year-old Cynthia Wiggins was hit and killed by a dump truck driver as she crossed a busy road from a public bus stop to her job at the Galleria, an upscale mall near Buffalo, N.Y. Why? Because although mall management let tour buses stop in its parking lot, they wouldn't let the public bus coming from the inner city stop there: According to regional transit officials, the mall's developers refused to allow [the no. 6 bus] on their property, which meant anyone coming from central Buffalo had to disembark 300 yards away, on the other side of seven-lane Walden Avenue, a highway feeding into the New York State Thruway without a sidewalk or crossing. Then, just before Christmas, a young, single mother from Buffalo was killed by a dump truck as she walked from the No. 6 bus stop to her job at the mall. Cynthia Wiggins became a cause celebre. Her death was taken up in local newspapers and on talk radio. A boycott was threatened. Yesterday morning, after a concession by Galleria officials, the first No. 6 bus stopped in front of the Lord & Taylor department store. "Does it make a difference?" asked Michelle Simmons, as she stepped off the bus after the 40-minute ride from downtown Buffalo. "Yes, it makes a difference. I don't have to cross that damned street any more and then walk across the parking lot." As the PBS report shows, such small victories are hard to come by. Because apparently, the very lives of poor people aren't worth as much as the mere convenience of people with enough money to own a car. More in Sprawl All Sprawl This abandoned Walmart has been reclaimed as a public library Visualize a shorter commute — or a better job These elderly fatality statistics may spoil your affection for big-box stores Read a prophetic Ray Bradbury story about car culture
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,465
BMX The Game | Latest news: about arcade mode and new stadium! You are here: Home / Blog / Latest news: about arcade mode and new stadium! Latest news: about arcade mode and new stadium! Hello everyone! We're back one more Friday to let you know the latest BMX The Game development news produced here at Barspin Studios. As you asked us last week, today we are explaining what will be the arcade mode you could see in the menu that we are preparing. The main difference between the arcade mode and the simulator mode will revolve around the physics, also affecting the control, since the simulator, as you know, will require a higher ability with the controller. The arcade mode is designed for those beginners who start to play for the first time as well as for those who want a more fun and fast style, without requirements. Have in mind that the tournaments and competitions will be only in simulation mode, as it is aimed at those who want a challenge. This functionality is under development and you will be able to test it (in a work in progress state) on the coming beta. Finally, we show you a few images (work in progress) of the Palau San Jordi that we are recreating to get inside the game. Among other things, it will be scene of the park editor. For more details, stay tuned to the next posts on the blog. That's all for now. Have a nice weekend! Latest news: new setting menu! Latest news: reaction time comparative video!
{ "redpajama_set_name": "RedPajamaC4" }
794
Q: How do I use UISearchController in iOS 8 where the UISearchBar is in my navigation bar and has scope buttons? I'm trying to use the new UISearchController from iOS 8, and embed its UISearchBar in my UINavigationBar. That's easily done as follows: searchController = UISearchController(searchResultsController: nil) searchController.searchResultsUpdater = self searchController.delegate = self searchController.searchBar.delegate = self searchController.dimsBackgroundDuringPresentation = false searchController.hidesNavigationBarDuringPresentation = false navigationItem.titleView = searchController.searchBar But when I add the scope buttons: searchController.searchBar.showsScopeBar = true searchController.searchBar.scopeButtonTitles = ["Posts, Users, Subreddits"] It adds the buttons behind the UISearchBar and obviously looks very odd. How should I be doing this? A: You're bumping into a "design issue" where the scopeBar is expected to be hidden when the searchController is not active. The scope bar buttons appear behind (underneath) the search bar since that's their location when the search bar becomes active and animates itself up into the navigation bar. When the search is not active, a visible scope bar would take up space on the screen, distract from the content, and confuse the user (since the scope buttons have no results to filter). Since your searchBar is already located in the titleView, the (navigation and search) bar animation that reveals the scope bar doesn't occur. * *The easiest option is to locate the search bar below the navigation bar, and let the searchBar animate up into the title area when activated. The navigation bar will animate its height, making room to include the scope bar that was hidden. This will all be handled by the search controller. *The second option, almost as easy, is to use a Search bar button icon, which will animate the searchBar and scopeBar down into view over the navigation bar. - (IBAction)searchButtonClicked:(UIBarButtonItem *)__unused sender { self.searchController = [[UISearchController alloc] initWithSearchResultsController:nil]; self.searchController.searchResultsUpdater = self; self.searchController.hidesNavigationBarDuringPresentation = NO; self.searchController.dimsBackgroundDuringPresentation = NO; self.definesPresentationContext = YES; self.searchController.searchBar.scopeButtonTitles = @[@"Posts", @"Users", @"Subreddits"]; [self presentViewController:self.searchController animated:YES completion:nil]; } *If you want the searchBar to remain in the titleView, an animation to do what you want is not built in. You'll have to roll your own code to handle the navigationBar height change and display your own scope bar (or hook into the internals, and animate the built-in scopeBar down and into view). If you're fortunate, someone else has written willPresentSearchController: code to handle the transition you want. *If you want to always see a searchBar and scopeBar, you'll probably have to ditch using the built-in scopeBar, and replace it with a UISegmentedControl which the user will always see, even when the search controller is not active. Update: This answer suggested subclassing UISearchController to change its searchBar's height.
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,610
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog" xmlns:ext="http://www.liquibase.org/xml/ns/dbchangelog-ext" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd http://www.liquibase.org/xml/ns/dbchangelog-ext http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-ext.xsd"> <changeSet id="1" author="nvoxland"> <createTable tableName="extTest1"> <column name="id" type="int"/> </createTable> </changeSet> <changeSet id="2" author="nvoxland"> <ext:sampleChange/> </changeSet> <changeSet id="3" author="nvoxland"> <ext:changeWithNestedTags name="bob"> <ext:sampleSubValue name="standard"> <ext:farNested columnName="asdf"/> </ext:sampleSubValue> <ext:otherSampleValue name="second"> <ext:farNested columnName="asdf2"> <ext:greatGrandChild name="greatGrandChild"/> </ext:farNested> </ext:otherSampleValue> </ext:changeWithNestedTags> </changeSet> <changeSet id="4" author="nvoxland"> <createTable tableName="extTest2"> <column name="id" type="int"/> </createTable> </changeSet> </databaseChangeLog>
{ "redpajama_set_name": "RedPajamaGithub" }
1,877
{"url":"https:\/\/hippocampus-garden.com\/pandas_sparse\/","text":"# Hippocampus's Garden\n\nUnder the sea, in the hippocampus's garden...\n\nSearch by\n\n# Meet Pandas: Converting DataFrame to CSR Matrix\n\nMarch 08, 2022\u00a0 | \u00a03 min read\u00a0 | \u00a0318 views\n\nWelcome back to the \ud83d\udc3cMeet Pandas\ud83d\udc3c series (a.k.a. my memorandum for learning Pandas)!\n\nWhen working with big data, we often encounter interactions between users and items. Examples of such data include:\n\n\u2022 User ratings of movies, restaurants, or marchandise\n\u2022 Number of times a song or video is played by each user\n\nRepresenting these data as a dense matrix, where each row represents a user and each column represents an item, can lead to prohibitively large memory consumption. And, since interaction data are usually sparse, there must be more efficient ways to store the data. In such cases, representing the data as a sparse matrix is a good choice.\n\nIn this post, I will briefly show how to convert a DataFrame of user-item interactions to a compressed sparse row (CSR) matrix, the most common format for sparse matrices.\n\nAs an example, we use the MovieLens dataset provided here. This is a collection of rmovie ratings by users. Movies are rated by a small fraction of users, so this is a perfect use case for a sparse matrix. MovieLens can be loaded by the following code:\n\nfrom urllib.request import urlretrieve\nimport zipfile\n\nimport pandas as pd\n\nurlretrieve(\"http:\/\/files.grouplens.org\/datasets\/movielens\/ml-100k.zip\", \"movielens.zip\")\nzip_ref = zipfile.ZipFile('movielens.zip', \"r\")\nzip_ref.extractall()\n\n'ml-100k\/u.data', sep='\\t', names=['user_id', 'movie_id', 'rating', 'timestamp'], encoding='latin-1'\n)\ndf\n\nThe dataframe should look something like this (a screenshot from Colaboratory):\n\n## Converting to CSR Matrix\n\nTo convert a DataFrame to a CSR matrix, you first need to create indices for users and movies. Then, you can perform conversion with the sparse.csr_matrix function. It is a bit faster to convert via a coordinate (COO) matrix.\n\nfrom pandas.api.types import CategoricalDtype\nfrom scipy import sparse\n\nusers = df[\"user_id\"].unique()\nmovies = df[\"movie_id\"].unique()\nshape = (len(users), len(movies))\n\n# Create indices for users and movies\nuser_cat = CategoricalDtype(categories=sorted(users), ordered=True)\nmovie_cat = CategoricalDtype(categories=sorted(movies), ordered=True)\nuser_index = df[\"user_id\"].astype(user_cat).cat.codes\nmovie_index = df[\"movie_id\"].astype(movie_cat).cat.codes\n\n# Conversion via COO matrix\ncoo = sparse.coo_matrix((df[\"rating\"], (user_index, movie_index)), shape=shape)\ncsr = coo.tocsr()\n\nFor your information, I compare a dense matrix and its COO and CSR format in the figure below:\n\nCSR format consumes far less memory than its dense format for sparse matrices. Also, SciPy\u2019s CSR matrix is compatible with many other libraries such as scikit-learn and XGBoost. For a more detailed explanation about sparse matrices, I refer readers to this post.\n\nCSR matrix can be converted back to COO matrix by .tocoo() method and to dense matrix by todense().\n\n## References\n\nWritten by Shion Honda. If you like this, please share!\n\nHippocampus's Garden \u00a9 2022, Shion Honda. 
Built with Gatsby","date":"2022-05-21 05:42:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.28831952810287476, \"perplexity\": 4285.118323658939}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652662538646.33\/warc\/CC-MAIN-20220521045616-20220521075616-00512.warc.gz\"}"}
null
null
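If you want to check the memory footprint yourself, here is a quick, approximate sketch (it assumes the coo and csr objects created above and counts only the main underlying arrays):

import numpy as np  # only for the arrays' nbytes attribute

# Approximate memory usage of each representation, in bytes.
# Dense stores every entry; COO stores (value, row, col) triplets;
# CSR stores values, column indices, and row pointers.
dense_bytes = coo.toarray().nbytes
coo_bytes = coo.data.nbytes + coo.row.nbytes + coo.col.nbytes
csr_bytes = csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes
print(f"dense: {dense_bytes:,} B, COO: {coo_bytes:,} B, CSR: {csr_bytes:,} B")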
<resources> <string name="app_name">SectionLibrary</string> </resources>
{ "redpajama_set_name": "RedPajamaGithub" }
6,876
Q: Sensor simulator's NetworkOnMainThreadException I followed these instructions to debug my app on the emulator using the sensor simulator: http://code.google.com/p/openintents/wiki/SensorSimulator#How_to_use_the_in_your_application I completed all the steps above, but in my case everything was not as simple as described in the instructions. I got NetworkOnMainThreadException when trying to connect, register listeners, etc. So I created an async task to solve this problem. Now, I have the following connection to the simulator:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    ...
    sensorManager = SensorManagerSimulator.getSystemService(this, SENSOR_SERVICE);
    this.new ConnectToSimulator().execute();
}

Listener registration in onResume():

@Override
public void onResume() {
    super.onResume();
    this.new RegisterToSimulator().execute();
}

Finally, my tasks:

class ConnectToSimulator extends AsyncTask<Object, Object, Object> {
    @Override
    protected Object doInBackground(Object... arg0) {
        try {
            sensorManager.connectSimulator();
        } catch (Exception e) {
            Log.i("error", e.getMessage());
        }
        return null;
    }
}

class RegisterToSimulator extends AsyncTask<Object, Object, Object> {
    @Override
    protected Object doInBackground(Object... arg0) {
        try {
            sensorManager.registerListener(Compass.this, sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER), SensorManager.SENSOR_DELAY_FASTEST);
            sensorManager.registerListener(Compass.this, sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD), SensorManager.SENSOR_DELAY_FASTEST);
            sensorManager.registerListener(Compass.this, sensorManager.getDefaultSensor(Sensor.TYPE_ORIENTATION), SensorManager.SENSOR_DELAY_FASTEST);
        } catch (Exception e) {
            Log.i("error", e.getMessage());
        }
        return null;
    }
}

But even after that, I still receive the error when trying to launch the app on the emulator:

12-15 14:23:28.877: E/AndroidRuntime(3724): FATAL EXCEPTION: main
12-15 14:23:28.877: E/AndroidRuntime(3724): android.os.NetworkOnMainThreadException
12-15 14:23:28.877: E/AndroidRuntime(3724): at android.os.StrictMode$AndroidBlockGuardPolicy.onNetwork(StrictMode.java:1117)
12-15 14:23:28.877: E/AndroidRuntime(3724): at libcore.io.BlockGuardOs.recvfrom(BlockGuardOs.java:163)
12-15 14:23:28.877: E/AndroidRuntime(3724): at libcore.io.IoBridge.recvfrom(IoBridge.java:513)
12-15 14:23:28.877: E/AndroidRuntime(3724): at java.net.PlainSocketImpl.read(PlainSocketImpl.java:488)
12-15 14:23:28.877: E/AndroidRuntime(3724): at java.net.PlainSocketImpl.access$000(PlainSocketImpl.java:46)
12-15 14:23:28.877: E/AndroidRuntime(3724): at java.net.PlainSocketImpl$PlainSocketInputStream.read(PlainSocketImpl.java:240)
12-15 14:23:28.877: E/AndroidRuntime(3724): at java.io.InputStreamReader.read(InputStreamReader.java:244)
12-15 14:23:28.877: E/AndroidRuntime(3724): at java.io.BufferedReader.fillBuf(BufferedReader.java:130)
12-15 14:23:28.877: E/AndroidRuntime(3724): at java.io.BufferedReader.readLine(BufferedReader.java:354)
12-15 14:23:28.877: E/AndroidRuntime(3724): at org.openintents.sensorsimulator.hardware.SensorSimulatorClient.readSensor(SensorSimulatorClient.java:654)
12-15 14:23:28.877: E/AndroidRuntime(3724): at org.openintents.sensorsimulator.hardware.SensorSimulatorClient.readSensor(SensorSimulatorClient.java:571)
12-15 14:23:28.877: E/AndroidRuntime(3724): at org.openintents.sensorsimulator.hardware.SensorSimulatorClient.access$1000(SensorSimulatorClient.java:53)
12-15 14:23:28.877: E/AndroidRuntime(3724): at org.openintents.sensorsimulator.hardware.SensorSimulatorClient$1.handleMessage(SensorSimulatorClient.java:505)
12-15 14:23:28.877: E/AndroidRuntime(3724): at android.os.Handler.dispatchMessage(Handler.java:99)
12-15 14:23:28.877: E/AndroidRuntime(3724): at android.os.Looper.loop(Looper.java:137)
12-15 14:23:28.877: E/AndroidRuntime(3724): at android.app.ActivityThread.main(ActivityThread.java:5039)
12-15 14:23:28.877: E/AndroidRuntime(3724): at java.lang.reflect.Method.invokeNative(Native Method)
12-15 14:23:28.877: E/AndroidRuntime(3724): at java.lang.reflect.Method.invoke(Method.java:511)
12-15 14:23:28.877: E/AndroidRuntime(3724): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793)
12-15 14:23:28.877: E/AndroidRuntime(3724): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560)
12-15 14:23:28.877: E/AndroidRuntime(3724): at dalvik.system.NativeStart.main(Native Method)

Can someone clarify why I get such a strange error? Moreover, there is no info in the simulator instructions about the necessity of using an async task, so maybe I am doing something wrong. Please help, thanks in advance.

A: Unfortunately this error cannot be bypassed, because the library is not implemented with the "new" SDK limits in mind. The simplest solution is to set minSdkVersion in the manifest to something before Honeycomb (e.g. android:minSdkVersion="8"). The reason for this is that although you did connect on a background thread, you still created the object on your UI thread, which in turn means SensorSimulatorClient will use socket communication on your UI thread.
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,384
{"url":"http:\/\/new-contents.com\/Washington\/estimate-standard-error-of-regression.html","text":"Address PO Box 19345, Seattle, WA 98109 (206) 414-8904 http:\/\/www.apluscomputertech.com\n\n# estimate standard error of regression Bellevue, Washington\n\nOften, researchers choose 90%, 95%, or 99% confidence levels; but any percentage can be used. Pearson's Correlation Coefficient Privacy policy. For example, let's sat your t value was -2.51 and your b value was -.067. There's not much I can conclude without understanding the data and the specific terms in the model.\n\nDiese Funktion ist zurzeit nicht verf\u00c3\u00bcgbar. An unbiased estimate of the standard deviation of the true errors is given by the standard error of the regression, denoted by s. Find critical value. Difference Between a Statistic and a Parameter 3.\n\nThis would be quite a bit longer without the matrix algebra. Actually: $\\hat{\\mathbf{\\beta}} = (\\mathbf{X}^{\\prime} \\mathbf{X})^{-1} \\mathbf{X}^{\\prime} \\mathbf{y} - (\\mathbf{X}^{\\prime} \\mathbf{X})^{-1} \\mathbf{X}^{\\prime} \\mathbf{\\epsilon}.$ $E(\\hat{\\mathbf{\\beta}}) = (\\mathbf{X}^{\\prime} \\mathbf{X})^{-1} \\mathbf{X}^{\\prime} \\mathbf{y}.$ And the comment of the first answer shows that more explanation of variance Why I Like the Standard Error of the Regression (S) In many cases, I prefer the standard error of the regression over R-squared. That's it!\n\nI love the practical, intuitiveness of using the natural units of the response variable. Return to top of page. I was looking for something that would make my fundamentals crystal clear. r regression standard-error lm share|improve this question edited Aug 2 '13 at 15:20 gung 74.1k19160309 asked Dec 1 '12 at 10:16 ako 378146 good question, many people know the\n\nThe second column (Y) is predicted by the first column (X). Rather, the sum of squared errors is divided by n-1 rather than n under the square root sign because this adjusts for the fact that a \"degree of freedom for error\u2033 How do I help minimize interruptions during group meetings as a student? Wird geladen...\n\nI would really appreciate your thoughts and insights. Andale Post authorApril 2, 2016 at 11:31 am You're right! Since we are trying to estimate the slope of the true regression line, we use the regression coefficient for home size (i.e., the sample estimate of slope) as the sample statistic. Schlie\u00c3\u0178en Weitere Informationen View this message in English Du siehst YouTube auf Deutsch.\n\nThe correlation coefficient is equal to the average product of the standardized values of the two variables: It is intuitively obvious that this statistic will be positive [negative] if X and Smaller is better, other things being equal: we want the model to explain as much of the variation as possible. The following R code computes the coefficient estimates and their standard errors manually dfData <- as.data.frame( read.csv(\"http:\/\/www.stat.tamu.edu\/~sheather\/book\/docs\/datasets\/MichelinNY.csv\", header=T)) # using direct calculations vY <- as.matrix(dfData[, -2])[, 5] # dependent variable mX Similar formulas are used when the standard error of the estimate is computed from a sample rather than a population.\n\nAbout all I can say is: The model fits 14 to terms to 21 data points and it explains 98% of the variability of the response data around its mean. 
The standard error of the forecast for Y at a given value of X is the square root of the sum of squares of the standard error of the regression and Bitte versuche es sp\u00c3\u00a4ter erneut. You'll see S there.\n\ncurrent community blog chat Cross Validated Cross Validated Meta your communities Sign up or log in to customize your list. Sign Me Up > You Might Also Like: How to Predict with Minitab: Using BMI to Predict the Body Fat Percentage, Part 2 How High Should R-squared Be in Regression asked 3 years ago viewed 67511 times active 2 months ago Linked 0 calculate regression standard error by hand 0 On distance between parameters in Ridge regression 1 Least Squares Regression Similar formulas are used when the standard error of the estimate is computed from a sample rather than a population.\n\nEvenSt-ring C ode - g ol!f Is intelligence the \"natural\" product of evolution? It is calculated through the equation ; therefore, the means of both variables in the sample and the value of b must be known before a can be calculated. Formulas for a sample comparable to the ones for a population are shown below. The simple regression model reduces to the mean model in the special case where the estimated slope is exactly zero.\n\nWhen must I use #!\/bin\/bash and when #!\/bin\/sh? Wird geladen... However, you can\u2019t use R-squared to assess the precision, which ultimately leaves it unhelpful. The 20 pounds of nitrogen is the x or value of the predictor variable.\n\nIdentify a sample statistic. Suppose our requirement is that the predictions must be within +\/- 5% of the actual value. Describe multiple linear regression. 6. A Hendrix April 1, 2016 at 8:48 am This is not correct!\n\nTherefore, the 99% confidence interval is -0.08 to 1.18. Example: A farmer wised to know how many bushels of corn would result from application of 20 pounds of nitrogen. Due to the assumption of linearity, we must be careful about predicting beyond our data. Was there something more specific you were wondering about?\n\nA model does not always improve when more variables are added: adjusted R-squared can go down (even go negative) if irrelevant variables are added. 8. However, as I will keep saying, the standard error of the regression is the real \"bottom line\" in your analysis: it measures the variations in the data that are not explained However, with more than one predictor, it's not possible to graph the higher-dimensions that are required! The only difference is that the denominator is N-2 rather than N.\n\nPlease answer the questions: feedback The Minitab Blog Data Analysis Quality Improvement Project Tools Minitab.com Regression Analysis Regression Analysis: How to Interpret S, the Standard Error of the Often X is a variable which logically can never go to zero, or even close to it, given the way it is defined. The regression model produces an R-squared of 76.1% and S is 3.53399% body fat. In a simple regression model, the percentage of variance \"explained\" by the model, which is called R-squared, is the square of the correlation between Y and X.\n\nKey. Step 6: Find the \"t\" value and the \"b\" value. Wiedergabeliste Warteschlange __count__\/__total__ Standard Error of the Estimate used in Regression Analysis (Mean Square Error) statisticsfun AbonnierenAbonniertAbo beenden50.42050\u00c2\u00a0Tsd. The accuracy of a forecast is measured by the standard error of the forecast, which (for both the mean model and a regression model) is the square root of the sum\n\nPlease help. 
For each value of X, the probability distribution of Y has the same standard deviation \u03c3. Find the margin of error. The standard error for the forecast for Y for a given value of X is then computed in exactly the same way as it was for the mean model:\n\nThe slope and Y intercept of the regression line are 3.2716 and 7.1526 respectively. Definition Equation = a = b = 3.","date":"2019-01-17 18:00:09","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7145469188690186, \"perplexity\": 793.0104193992663}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-04\/segments\/1547583659056.44\/warc\/CC-MAIN-20190117163938-20190117185938-00301.warc.gz\"}"}
null
null
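As a concrete illustration of these formulas, here is a short numerical sketch (hypothetical data, not from the source page) that computes the slope, the intercept, the standard error of the regression, and the standard error of the slope with NumPy:

import numpy as np

# Hypothetical data: pounds of nitrogen (x) vs. bushels of corn (y)
x = np.array([10., 15., 20., 25., 30.])
y = np.array([41., 57., 73., 88., 106.])
n = len(x)

# Least-squares slope b, then intercept a = ybar - b*xbar
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()

# Standard error of the regression: s = sqrt(SSE / (n - 2))
residuals = y - (a + b * x)
s = np.sqrt(np.sum(residuals ** 2) / (n - 2))

# Standard error of the slope
se_b = s / np.sqrt(np.sum((x - x.mean()) ** 2))
print(b, a, s, se_b)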
{"url":"http:\/\/cvgmt.sns.it\/paper\/4762\/","text":"# Rectifiability and approximate differentiability of higher order for sets\n\ncreated by santilli on 21 Jul 2020\n\n[BibTeX]\n\nPublished Paper\n\nInserted: 21 jul 2020\nLast Updated: 21 jul 2020\n\nJournal: Indiana Univ. Math. J\nVolume: 68 (2019)\nYear: 2017\nDoi: 10.1512\/iumj.2019.68.7645\n\nAbstract:\n\nThe main goal of this paper is to develop a concept of approximate differentiability of higher order for subsets of the Euclidean space that allows to characterize higher order rectifiable sets, extending somehow well known facts for functions. We emphasize that for every subset $A$ of the Euclidean space and for every integer $k \\geq 2$ we introduce the approximate differential of order $k$ of $A$ and we prove it is a Borel map whose domain is a (possibly empty) Borel set. This concept could be helpful to deal with higher order rectifiable sets in applications.\n\nCredits | Cookie policy | HTML 5 | CSS 2.1","date":"2020-08-05 08:35:45","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6068958640098572, \"perplexity\": 718.2729911789154}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-34\/segments\/1596439735916.91\/warc\/CC-MAIN-20200805065524-20200805095524-00587.warc.gz\"}"}
null
null
\section{Introduction}
\label{sec:intro}
In a variety of applications in public policy, governance, medicine, economics, education, energy, and e-commerce, a main goal is to make better decisions that are both {\em personalized} and {\em dynamic}. This requires learning from a data set which actions to choose and when to apply them given the dynamic conditions of each subject (e.g., an individual). One of the main factors that makes this learning process challenging is that one needs to estimate the impact of an alternative sequence of actions that {\em could have been} used in order to improve outcomes. This requires {\em causal reasoning}, as the estimand---the effect of an alternative sequence of actions---is a {\em counterfactual} quantity \citep[see, e.g.,][]{Murphy2001, Murphy2003, Namkoong2020}. Dynamic Treatment Regimes (DTRs) have been widely studied for this goal, enabling the discovery of effective alternative policies from observational data \citep{Robins1986, Robins1997, Murphy2001, Murphy2003, Robins2004, Zhao2015, Zhang2018, Wang2018, Tsiatis2019, Kosorok2019, Luckett2020, Nie2021, Leqi2021}. A DTR is, in essence, a set of rules that prescribe individualized sequences of actions by mapping a subject's history to a series of recommended treatments \citep{Murphy2003, Chakraborty2014, Tsiatis2019, Luckett2020, Xu2020}.

Using available results on finding effective DTRs, however, requires making strong assumptions that might not hold in real-world applications, especially when the data in hand is {\em observational}. Notably, one needs to assume {\em sequential ignorability}\footnote{This assumption has also appeared in the literature under other names such as ``sequential randomization" \citep{Tsiatis2019} and ``sequential backdoor criterion" \citep{PearlRobins1995}.} \citep{Robins1986, Robins1997, Murphy2001, Murphy2003, Robins2004}, meaning that the data is rich enough, and hence, unobserved/latent/unmeasured confounding variables either do not exist or their effects can be ignored. When using observational data sets, this assumption is often violated. Even in some secondary analyses of experimental data sets (e.g., those obtained under Micro-Randomized Trials (MRTs) in some mobile health studies where the goal is to study the effect of users following a treatment regime and not just being assigned to it), various practical challenges (e.g., user habituation, user engagement, and/or user compliance) may lead to unobserved confounding; see, e.g., \cite{Saghafian2021} for some discussions on scientific challenges in mobile health applications. Furthermore, unobserved confounders are typically time-varying in most applications: they are themselves affected by the previous actions taken. Adjusting for them, thus, is a perplexing task, making standard approaches for adjustment of confounding erroneous \citep[see, e.g.,][]{Robins2000}.
Correctly adjusting for unobserved time-varying confounding can be managed if one assumes a specific causal model for the data generating process.\footnote{For example, this can be done under an assumed model for the dynamics of unobserved confounders (e.g., how they are affected by actions taken) and their relationship to observed values (e.g., how unobserved time-varying confounders affect the actions under which data is generated).} Assuming such a model allows estimating a distribution for potential trajectories under any alternative decision-making policy (i.e., treatment regime), which is central to estimating its effect. However, since time-varying confounders are often unobserved, estimating and assuming any such model is subject to significant misspecifications (a.k.a. model ambiguity). We address this challenge by extending the analyses of DTRs to a new class termed {\em Ambiguous DTRs (ADTRs)}, in which the impact of any sequence of actions is evaluated based on a ``cloud" of potential data generating models as opposed to a single one. Specifically, we allow for non-probabilistic ambiguity (a.k.a. Knightian uncertainty) about the true data generating model, while (similar to the literature on DTRs) we assume that under any given potential model, there is a certain probabilistic understanding of how data is generated (see, e.g., \cite{Saghafian2018} and Chapter 11 of \cite{Manski2007} for further discussions, \cite{Stoy2011} for an axiomatic treatment of statistical decision-making under these conditions, and \cite{Saghafian20161} for an information entropy view of data-driven decision-making under ambiguity).\footnote{This view of data-driven decision-making under ambiguity has also been shown useful in various applications, including in designing and optimizing queueing systems under model ambiguity \citep{BrenSaghafian2019} and medical decision-making \citep{Boloori2020}.} This allows for (a) directly taking into account potential model misspecifications when estimating causal impacts, and (b) distinguishing between {\em ambiguity} (lack of knowledge about the true model) and {\em risk} (probabilistic consequences of decisions under a known model).\footnote{This view is also aligned with that of \cite{Arrow1951} who stated: ``There are two types of uncertainty: one as to the hypothesis, which is expressed by saying that the hypothesis is known to belong to a certain class or model, and one as to the future events or observations given the hypothesis, which is expressed by a probability distribution.".}

In extending DTRs to ADTRs, we are particularly motivated by our various collaborations with our partner hospital, the Mayo Clinic. In various studies \citep[see, e.g.,][]{Boloori2015, Boloori2020, Munshi2020b, Munshi2020, Munshi2021}, we have collected data sets from our partner hospital and have examined clinical decisions for patients who undergo a solid organ transplantation and develop what is known as {\em New Onset Diabetes After Transplantation (NODAT)}. In practice, physicians often use an intensive amount of an immunosuppressive drug (e.g., tacrolimus) to reduce the risk of organ rejection post-transplant \citep[see, e.g.,][]{Boloori2015, Boloori2020}. Due to a well-established effect known as the diabetogenic effect, this can increase the risk of NODAT, which prompts physicians to use a glucose control drug (e.g., insulin).
Learning better ways to prescribe these drugs (e.g., tacrolimus and insulin) in both a personalized and dynamic way to jointly control the risks of NODAT and organ rejection is not an easy endeavor; the available data sets are only observational, the main health states are hidden \citep[see, e.g.,][]{Boloori2020}, and the existence of unobserved confounders that are time-varying disallows the use of existing methods.

Our approach in extending DTRs to ADTRs and analyzing them involves the following three main steps. (1) We make use of a utility function that is appropriate under model ambiguity (instead of the expected value of outcomes widely used in the literature). (2) We generalize traditional importance sampling methods to accommodate model ambiguity. (3) We connect ADTRs to {\em Ambiguous Partially Observable Markov Decision Processes} (APOMDPs) proposed by \cite{Saghafian2018}, and develop Reinforcement Learning (RL) algorithms that allow learning an optimal treatment regime from the observed data in efficient ways.

The utility function we use is based on a generalization of the traditional {\em maximin expected utility (MEU)} theory (a.k.a. Wald's or robust optimization criterion). The MEU theory posits that outcomes should be evaluated with respect to the worst possible member of the ambiguity set (the cloud of potential causal models in our setting). In most applications, using the MEU approach yields overly conservative decisions \citep[for related discussions, see, e.g.,][and the references therein]{Saghafian2018}, and furthermore, does not allow for representing meaningful human choices such as those of ambiguity-seeking individuals established in some behavioral studies \citep[see, e.g.,][]{Bhide2000, Heath1991, Ahn2014}. This was also recognized in the seminal work of \cite{Savage1951} who wrote that this criterion is ``ultrapessimistic" and ``can lead to absurd conclusion[s]". The generalization we use is known as {\em $\alpha$-maximin expected utility ($\alpha$-MEU)}, which allows for both optimistic and pessimistic views of the world \citep{ArrowHurwitcz, Hurwicz1951a, Hurwicz1951b, Ghiradato2004, Saghafian2018}. Unlike studies that use the MEU criterion, using the $\alpha$-MEU criterion avoids overly conservative decisions by allowing for a controllable {\em pessimism level} (denoted by the parameter $\alpha$) that can take values in [0, 1].

Within the utility theory literature, early studies \citep[see, e.g.,][]{ArrowHurwitcz, Hurwicz1951a, Hurwicz1951b} provided four axioms that a choice operator must satisfy. These axioms allowed such studies to show that, under complete ignorance, one can focus merely on two extreme cases: the best case and the worst case. Later studies \citep[see, e.g.,][]{Ghiradato2004, Marinacci2002} further axiomatized preferences under the $\alpha$-MEU criterion and also highlighted another important benefit of using the $\alpha$-MEU criterion in decision-making: it allows for differentiating between the {\em inherent ambiguity} (a property related to the true causal model) and the {\em ambiguity attitude} (a property related to the decision-maker). In our study, using the $\alpha$-MEU criterion not only allows us to provide an alternative to the expectation operator---the conventional measure of performance used in the literature surrounding DTRs\footnote{For studies in this literature that consider other measures instead of the expected value of outcomes, we refer to \cite{Wang2018} (quantile performance) and \cite{Leqi2021} (median performance).
These studies, however, do not consider model ambiguity, the existence of unobserved confounders, or the other challenges we aim to address. While by using the $\alpha$-MEU criterion we primarily generalize the expected value of outcomes, it should be noted that our results can also be used to study generalizations of other measures such as the quantile or median measures.}---but also allows finding treatment regimes that are tailored to the preferences and attitudes of the decision-maker. Importantly, this means that our work enables a {\em two-way personalization}: treatment regimes are personalized based on both the subject's and the decision-maker's characteristics. This is important in various applications such as medicine, where not only the treatment plan needs to be customized for each patient, but also the physician in charge should be given the ability to include his/her preferences in providing the best course of treatment.

We start our analyses by showing how a generalization of importance sampling methods (a.k.a. inverse-probability-weighting) widely used in the literature \citep[see, e.g.,][]{Robins2000, Precup2000, Murphy2005, Tsiatis2019} can be utilized to find optimal regimes for ADTRs without requiring the dynamics of observed or unobserved variables to be memoryless (i.e., satisfy the Markov property).\footnote{See also \cite{Zhang2019} for more discussions related to finding the optimal treatment regime under model ambiguity without a Markovian structure.} Specifically, we start by generalizing importance sampling methods by allowing sampling across a {\em cloud} of potential data generating models (a.k.a. an ambiguity set). We show that under some conditions the resulting method, which we term {\em Generalized Sequential Importance Sampling (GSIS)}, provides a baseline for estimating the causal impact of any dynamic treatment regime, and hence, finding the optimal one.

When the dynamics of variables satisfy the Markov property, we connect ADTRs to APOMDPs recently introduced by \cite{Saghafian2018}. APOMDPs generalize traditional POMDPs by allowing model ambiguity. APOMDPs, however, were proposed without any causal inference application in mind. In this paper, for the first time, we make use of them through a causal inference lens. Notably, by connecting ADTRs to APOMDPs, we consider time-varying unobserved confounders as dynamic latent states while allowing ambiguity regarding the true (data generating) causal model.\footnote{Since APOMDPs generalize POMDPs, our results can also be viewed as generalizations of those in the literature that use a POMDP setting to perform off-policy evaluation \citep[see, e.g.,][and the references therein]{Tennenholtz2019, Xu2020, Bennett2021proximal, Hu2021, Thomas2016}.} We then make use of known structural results for APOMDPs (e.g., piecewise linearity and continuity of the value function) established in the literature \citep{Saghafian2018}, and develop two RL approaches that can efficiently provide effective treatment regimes.

In developing these RL approaches, as is common, we view the problem of finding an effective treatment regime as an off-policy RL problem. However, in contrast to main RL methods such as Q-Learning (an approximate dynamic programming approach that uses regression to learn the ``quality" function) and A-Learning (which tries to learn the ``advantage" function), our approaches try to learn the value function directly. Thus, roughly speaking, they are within the V-Learning methods \citep[see, e.g.,][]{Luckett2020, Xu2020}.
We term our proposed learning algorithms {\em Direct Augmented V-Learning} ($\texttt{DAV-Learning}$) and {\em Safe Augmented V-Learning} ($\texttt{SAV-Learning}$), as they augment the V-Learning methods by (a) making use of the structural properties of the value function, and (b) incorporating model ambiguity (in a direct and a safe way, respectively). For our proposed learning approaches, we establish important theoretical results, including weak consistency and asymptotic normality of both the estimated optimal treatment regime and the associated overall gain. To establish these results, we require specific but relatively common ``regularity" conditions, including conditions on (a) basic ``complexity" properties of the class of allowable policies (measured by entropy-based versions of the Donsker theorems with bracketing integrals), and (b) absolute regularity of the underlying empirical processes.

We also examine the performance of our proposed approaches by applying them to a clinical data set of over 63,000 observations of patients who underwent kidney transplantation in our partner hospital and faced NODAT. We find promising results, indicating that using $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ yields notable improvements over the treatment regime used in practice; depending on the decision-maker's pessimism level, these improvements are in the ranges (10\%, 42\%) and (10\%, 32\%) for $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$, respectively. Furthermore, we observe that the performance of the $\texttt{SAV-Learning}$ regime is much more robust to the value of the pessimism level (parameter $\alpha$) than that of $\texttt{DAV-Learning}$, and hence, a decision-maker who uses $\texttt{SAV-Learning}$ does not need to be worried about the value of $\alpha$ s/he uses in obtaining an optimal treatment regime.

We further investigate the performance of our proposed approaches using simulation experiments (synthetic data). Our results show that $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ can improve the observed regime by an amount that ranges in (1\%,\,37\%) and (1\%,\,8\%), respectively. Furthermore, we make use of our simulation experiments to quantify the robustness of our approaches to model ambiguity, and find that $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ are able to strongly shield against model ambiguity: the gain loss under these approaches compared to an imaginary oracle who knows both the true data generating model and the optimal treatment regime under that model is very low (below 0.6\%), regardless of the value of $\alpha$. Thus, a decision-maker who is facing model ambiguity can make use of our proposed approaches and obtain a treatment policy that has a performance similar to that of an imaginary decision-maker who knows both the true data generating model and the optimal policy under that model. Finally, our results show that the gain loss compared to such an imaginary decision-maker has a U-shaped curve in the pessimism level: the minimum loss for both $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ is obtained at a mid-value of $\alpha$. This implies that (a) using the extreme cases of $\alpha =0$ (a maximax view) or $\alpha=1$ (a maximin view) is almost never {\em robustness-maximizing}, and (b) by viewing $\alpha$ as a tuning parameter (when needed) in our proposed approaches, one can efficiently obtain a treatment regime that performs best across all possible pessimism levels.
In closing this section, we note that our work in incorporating model ambiguity a priori in the analyses not only provides robustness to potential misspecifications, but more broadly, can bridge the gap between two philosophical views of decision-making using causal inference: {\em model-based} and {\em model-free}. The former postulates that any sensible causal reasoning for decision-making needs to be based on a specific model and set of assumptions in addition to data, while the latter advocates that it needs to rely only on data. We hope that our work in taking a middle ground and considering a cloud of models can serve as a step for future research in trying to further bridge the gap between the two. The importance of doing so has its roots in seminal work in Statistical Decision Theory \citep[see, e.g.,][]{Wald1939, Wald1945, Wald1950}, but has also been highlighted in various more recent studies. For example, \cite{Manski2021} emphasizes that ``models can at most approximate actualities" and highlights that statistical inference for decision-making needs to be performed across all feasible models. Similarly, referring to the famous quote from \cite{Box1979}, \cite{Watson2016} state that ``statisticians are taught from an early stage that essentially all models are wrong, but some are useful," and stress that decision-making needs to rely on a set of models that are misspecified (hence ``wrong") but useful in that they can be ``helpful for aiding actions (taking decisions)."

\section{The Framework}
\label{s-framework}
Throughout the paper, the notation $``\triangleq"$ is used to differentiate between definitions and equations. For a set $\mathscr T\triangleq\{1, 2, 3,\cdots, T\}$, the notations $(X_t)_{t\in\mathscr T}$ and $\mathscr T_{\leq t}$ are used to represent the vector $(X_1, X_2,\cdots,X_T)$ and the set $\mathscr T\setminus \{t+1,t+2,\cdots, T\}$, respectively. All vectors are considered to be in the column format (e.g., $(X_t)_{t\in\mathscr T}$ is $|\mathscr T| \times 1$). For any finite set $\Xi\subset \mathbb R$, we let $\Delta_\Xi$ denote the probability simplex induced by $\Xi$. The notations $\overset{p}{\to}$ and $\overset{d}{\to}$ denote convergence in probability and distribution, respectively. The set $\mathscr I$ represents the interval $[0,1]$.

We let the observed data be a collection of $n\in\mathbb N$ i.i.d. realizations (called trajectories) of the vector of variables $(O_t, A_t)_{t\in \mathscr T}$. For a realized trajectory, $(o_t, a_t)_{t\in \mathscr T}$, $o_t\in\mathscr O$ is the observation made about a subject (e.g., a patient's observed covariates or an observed health state serving as a summary of them) at time $t\in\mathscr T$, and $a_t\in\mathscr A$ denotes the action/treatment assigned at time $t\in\mathscr T$, where $\mathscr T\triangleq\{1, 2, 3,\cdots, T\}$ is the set of time periods (e.g., patients' visits/follow-ups).\footnote{We do not assume that time points are evenly distributed or homogeneous across patient trajectories. Importantly, in some applications, the treatment times are random. For simplicity, we assume treatment times are fixed.
However, extending our results to scenarios with random treatment times is relatively straightforward.} For example, in our study of NODAT patients, observations made about each patient ($O_t$) include various test results, demographic information, and other observed risk factors such as diabetes history, body mass index, blood pressure, triglyceride, uric acid, and lipoprotein information (see Table~\ref{Table:Observations}). Actions taken ($A_t$) include low-dose (non-aggressive) or high-dose (aggressive) tacrolimus prescriptions as well as information on whether insulin has been used (see Table \ref{Table:Actions}). Finally, $\mathscr T\triangleq\{1, 2, 3,\cdots, 12\}$, since patient follow-ups are monthly for a year after transplantation.

Besides the observed data, there are often unobserved variables that might have affected what is observed in the data. Let $S_t$ denote a summary of them at time $t$, and let $\mathscr S$ be the support of $S_t$. For example, in mHealth applications, $S_t$ might include information relating to the patient's habituation level \citep[see, e.g.,][]{Saghafian2021} and/or the patient's true health state, both of which are often unobserved. In our case study of NODAT patients, $S_t$ is a nine-level variable that summarizes the unobserved health state of the patient in terms of both transplantation and diabetes conditions (see Table \ref{Table:States}).

We denote the observable history up to each time $t\in\mathscr T$ by $\mathbf{H}^o_t\triangleq(O_1, A_1, O_2, A_2,\cdots, O_t)$ and let $\mathscr H_t^o$ be the support of $\mathbf{H}^o_t$. Similarly, we denote the (partially) unobservable history up to each time $t\in\mathscr T$ by $\mathbf{H}^u_t\triangleq(S_1, O_1, A_1, S_2, O_2, A_2, \cdots, S_t, O_t)$ and let $\mathscr H_t^u$ be the support of $\mathbf{H}^u_t$. It is important to note that, in general, both variables $S_t$ and $O_t$ depend on the previous treatments. However, for notational simplicity, we suppress the dependency of $S_t$ and $O_t$ on the vector $(a_t)_{t\in\mathscr T_{\leq t-1}}\triangleq (a_1, a_2, \cdots, a_{t-1})$.

We assume the latent state summaries $(S_t)_{t\in\mathscr T}$ are such that the immediate gain in each decision epoch depends on the history only through them. This can always be achieved with an appropriate definition of the variables $(S_t)_{t\in\mathscr T}$ \citep[see, e.g.,][]{Xu2020}. For example, in our case study, the immediate gains are based on predefined {\em Quality of Life (QoL)} scores that depend only on patient summaries defined by $S_t$ (see Table \ref{Table:Gain}). Thus, we denote the immediate gain at time $t$ by $G_t\triangleq g (S_t, A_t)\in\mathbb R$, where $g$ is a known function.\footnote{It should be noted that $S_t$, in general, depends on the history up to time $t$. Thus, $G_t\triangleq g (S_t, A_t)$ also depends on the history. But this dependency is only through $S_t$, which, as noted earlier, can always be achieved with an appropriate definition of the summary variables $(S_t)_{t\in\mathscr T}$ \citep[see, e.g.,][]{Xu2020}.} The set of all possible immediate gains can be denoted by $\mathscr G\triangleq\{\mathbf{g}^a\in \mathbb R^{|\mathscr S|}:\, \forall a\in \mathscr A\}$, where $\mathbf{g}^a\triangleq (g(s, a))_{s\in\mathscr S}$.
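To make the notation concrete, the following is a minimal illustrative sketch (with hypothetical values; it is not part of the formal development) of a single observed trajectory and its observable histories $\mathbf{H}^o_t$:
\begin{verbatim}
# One observed trajectory with hypothetical values: observations o_t, actions a_t
o = ["o1", "o2", "o3"]                      # O_1, O_2, O_3
a = ["low-dose", "high-dose", "low-dose"]   # A_1, A_2, A_3

def observable_history(o, a, t):
    """H^o_t = (o_1, a_1, o_2, a_2, ..., o_t): actions enter only up to t-1."""
    h = []
    for i in range(t):
        h.append(o[i])
        if i < t - 1:
            h.append(a[i])
    return tuple(h)

print(observable_history(o, a, 2))  # ('o1', 'low-dose', 'o2')
\end{verbatim}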
A treatment regime (hereafter also ``policy" for simplicity) $\boldsymbol{\lambda}\triangleq(\lambda_t)_{t\in\mathscr T}$ in this setting is a vector of time-dependent mappings from the available history at each time $t$ to the probability simplex induced by actions, $\Delta_\mathscr A$. It defines the probability of assigning each action/treatment at each decision epoch given the available history up to that point. Policies are compared using the overall gain they generate. The overall gain of a policy $\boldsymbol{\lambda}$ is defined by the discounted sum of immediate gains it generates, which we denote by
\begin{equation}\label{disgain}
\Gamma_T(\boldsymbol{\lambda})\triangleq\sum_{t\in\mathscr T} \beta^{t-1} G_t^{\boldsymbol{\lambda}},
\end{equation}
where $\beta\in\mathscr I\setminus\{1\}$ is a discount factor. Similarly, the long-run impact of $\boldsymbol{\lambda}$ can be analyzed using $\Gamma_\infty(\boldsymbol{\lambda})\triangleq\lim_{T\to\infty} \Gamma_T(\boldsymbol{\lambda})$.\footnote{While we focus on the discounted sum of immediate gains, we note that many of our results readily extend to the average overall gain $\bar\Gamma(\boldsymbol{\lambda})\triangleq\frac{1}{T}\sum_{t\in\mathscr T} G_t^{\boldsymbol{\lambda}}$, and in particular, to its long-run counterpart $\lim\inf_{T\to\infty} \bar\Gamma(\boldsymbol{\lambda})$. This is because under some mild conditions $\lim_{T\to\infty} \bar\Gamma(\boldsymbol{\lambda})= \lim_{T\to\infty} \lim_{\beta\to1} \frac{\Gamma_T(\boldsymbol{\lambda})}{1-\beta}$.} Here, we shall note that $G_t^{\boldsymbol{\lambda}}$, and hence $\Gamma_T(\boldsymbol{\lambda})$, should be viewed through a {\em potential outcomes} lens \citep[for more discussions, see, e.g.,][]{Robins1986, Rubin1986, Angrist1996, Robins1997, Murphy2001}; equivalently, in the language of {\em do}-calculus, $G_t^{\boldsymbol{\lambda}}$ and $\Gamma_T(\boldsymbol{\lambda})$ should be viewed as $G_t| do(\boldsymbol{\lambda})$ and $\Gamma_T| do(\boldsymbol{\lambda})$, respectively \citep[see, e.g.,][]{Pearl2009}. In addition, our notation also implicitly implies {\em consistency}\footnote{This assumption links the counterfactual data with the factual one \citep{Robins1997}, and can be violated if the treatment of a subject impacts another subject's variables (e.g., vaccinating a group of individuals may decrease the exposure of others to a disease).}, which is a standard assumption in the causal inference literature with time-varying variables \citep[see, e.g.,][]{Robins1997, Murphy2001} and holds in our motivating study of NODAT patients.

In settings we consider, however, the distribution of $\Gamma_T(\boldsymbol{\lambda})$ cannot be identified from the observed data alone. In fact, there are often a variety of plausible data generating models all agreeing with the observed part of the data, but with different implications about the distribution of $\Gamma_T(\boldsymbol{\lambda})$. We let $\mathscr M$ denote the set of all such models (a.k.a. an ambiguity set). Each given model $m\in\mathscr M$ implies a distribution for $\Gamma_T(\boldsymbol{\lambda})$, which we denote by $f_m\in\mathscr F$, where $\mathscr F\triangleq\{f_m: m\in\mathscr M\}$. Finally, since the distribution of $\Gamma_T(\boldsymbol{\lambda})$ varies across the models in $\mathscr M$, we define a utility function that allows us to compare the performance of different policies.
To this end, we make use of the $\alpha$-MEU utility, which is suitable for decision-making under ambiguity \citep[see, e.g.,][]{Ghiradato2004, Marinacci2002, Saghafian2018}. Specifically, by considering $Y(\boldsymbol{\lambda})\triangleq\Gamma_T(\boldsymbol{\lambda})$ or $Y(\boldsymbol{\lambda})\triangleq\Gamma_\infty(\boldsymbol{\lambda})$ as our main outcome variable of interest, we make use of
\begin{equation}\label{MEU}
MEU_\alpha [Y (\boldsymbol{\lambda})] \triangleq \alpha\,\inf_{f^m\in\mathscr F} \mathbb E^{f^m} [Y (\boldsymbol{\lambda})]+ (1-\alpha) \sup_{f^m\in\mathscr F} \mathbb E^{f^m} [Y (\boldsymbol{\lambda})]\ \ \ \ \ \alpha\in\mathscr I,
\end{equation}
as the utility of $Y(\boldsymbol{\lambda})$, where $\alpha$ represents the pessimism level and $\mathbb E^{f^m}$ denotes the expectation operator with respect to the distribution $f^m$. For example, at $\alpha=1$ (100\% pessimism level), policies are compared with respect to their worst-case performance. At $\alpha=0$ (0\% pessimism level), on the other hand, policies are compared with respect to their best-case performance. Of note, when $|\mathscr M|=1$, $MEU_\alpha [Y (\boldsymbol{\lambda})]$ returns the expected value of $Y (\boldsymbol{\lambda})$, and hence, the utility function in (\ref{MEU}) provides a generalization of the traditional expectation operator that is widely used in the causal inference literature. We say that the effect of treatment policy $\boldsymbol{\lambda}$ is ``$\alpha$-MEU identifiable" if $\big|MEU_\alpha [Y (\boldsymbol{\lambda})]\big|<\infty$ and $MEU_\alpha [Y (\boldsymbol{\lambda})]$ can be identified given $\mathscr M$.

Since a main goal is to learn the optimal policy, we next define the following notion of optimality in ADTRs, which is a generalization of the traditional notion of optimality used in analyzing DTRs.
\begin{definition} [\textbf{Optimality}]\label{def:opt}
Let $\Lambda$ be the set of all $\alpha$-MEU identifiable policies. We say that a policy ${\boldsymbol{\lambda}}^*\in\Lambda$ is optimal if, with $Y(\boldsymbol{\lambda})\triangleq\Gamma_T(\boldsymbol{\lambda})$, we have
\begin{equation}\label{opttreatment}
MEU_\alpha [Y (\boldsymbol{\lambda}^*)]\geq MEU_\alpha [Y (\boldsymbol{\lambda})]\hspace{10mm} \forall\boldsymbol{\lambda}\in\Lambda.
\end{equation}
\end{definition}

To perform our analyses, it is useful to differentiate between the policy under which the data has been generated (hereafter the ``behavior policy") and the policy that we would like to evaluate and recommend (hereafter the ``evaluation policy"). The behavior policy, denoted by $\boldsymbol{\lambda}^b\triangleq(\lambda_t^b)_{t\in\mathscr T}$, is a vector of time-dependent mappings $\lambda_t^b: \mathscr H^u_t\to\Delta_\mathscr A$, whereas the evaluation policy, denoted by $\boldsymbol{\lambda}^e\triangleq(\lambda_t^e)_{t\in\mathscr T}$, is a vector of time-dependent mappings $\lambda_t^e: \mathscr H^o_t\to\Delta_\mathscr A$. An important difference between the evaluation and the behavior policies relates to a condition known as {\em sequential ignorability}\footnote{See also the {\em sequential backdoor} criterion \citep{PearlRobins1995}.} \citep[see, e.g.,][]{Robins1986, Robins1997, Murphy2001, Murphy2003, Robins2004}, which we define next.
\begin{definition}[\textbf{Sequential Ignorability}]
For any policy $\boldsymbol{\lambda}\triangleq(\lambda_t)_{t\in\mathscr T}$, let $\mathbf{H}^{o,m}_t (\boldsymbol{\lambda})\triangleq(O^m_1, A^m_1, O^m_2, A^m_2,\cdots, O^m_t)$ denote the observable history up to time $t\in\mathscr T$, generated under $\boldsymbol{\lambda}$ and model $m\in\mathscr M$. We say that $\boldsymbol{\lambda}$ satisfies sequential ignorability under model $m\in\mathscr M$, if for all $t\in\mathscr T$, the action generated by $\lambda_t$ is independent of $G^m_{t}, O^m_{t+1}, G^m_{t+1}, O^m_{t+2}, \cdots, G^m_{T}$ conditional on $\mathbf{H}^{o,m}_t (\boldsymbol{\lambda})$.
\end{definition}

Both by definition and naturally, any evaluation policy $\boldsymbol{\lambda}^e\triangleq(\lambda_t^e)_{t\in\mathscr T}$ (where $\lambda_t^e: \mathscr H^o_t\to\Delta_\mathscr A$) satisfies sequential ignorability under any model $m\in\mathscr M$. In contrast, a behavior policy $\boldsymbol{\lambda}^b\triangleq(\lambda_t^b)_{t\in\mathscr T}$ (where $\lambda_t^b: \mathscr H^u_t\to\Delta_\mathscr A$) may or may not satisfy this condition, since it might depend on unobservable confounders (variables in $(S_t)_{t\in\mathscr T}$ that affect both the gain and the actions selected by $\boldsymbol{\lambda}^b$). In fully randomized experiments (e.g., Micro-Randomized Trials), the behavior policy may satisfy sequential ignorability. However, when the data is observational, it is often impossible to test whether the behavior policy satisfies this assumption, and in addition, it is highly likely that this assumption does not hold.

\subsection{Analyzing ADTRs via Generalized Sequential Importance Sampling (GSIS)}
We now show that, under some conditions, an optimal policy for an ADTR can be found using a generalized version of sequential importance sampling, which we term {\em Generalized Sequential Importance Sampling (GSIS)}. While allowing for model ambiguity, GSIS assigns weights under each model and sequentially adjusts the trajectory probabilities that occur under a given evaluation policy compared to those observed in the data set. Of note, we use GSIS in this section to study ADTRs that do not satisfy any Markovian (a.k.a. memoryless) property regarding the dynamics of the underlying variables. In the next section, we show how the analyses of ADTRs can be simplified when such dynamics satisfy a Markovian structure.

To present GSIS, we first suppress the dependencies on the underlying model by assuming the model is fixed. Consider an evaluation policy $\boldsymbol{\lambda}^e$, and let $\mathbf{H}^o_t (\boldsymbol{\lambda}^e)$ be the history that will be observed under $\boldsymbol{\lambda}^e$ up to time $t$. Also, denote by $\lambda_t^e(A_t|\mathbf{H}^o_t (\boldsymbol{\lambda}^e))$ the probability that action $A_t$ is chosen under $\boldsymbol{\lambda}^e$ when the observed history is $\mathbf{H}^o_t (\boldsymbol{\lambda}^e)$. Furthermore, while the behavior policy is not known (e.g., due to its potential dependency on unobserved variables), we can observe the {\em marginalized} probabilities of action selection under the behavior policy, which we denote by $\lambda_t^b(A_t|\mathbf{H}^o_t (\boldsymbol{\lambda}^b))$. These allow us to define the importance sampling weights
\begin{equation}\label{eq:W}
w_t (\boldsymbol{\lambda}^e)\triangleq \frac{\lambda_t^e(A_t|\mathbf{H}^o_t (\boldsymbol{\lambda}^e))}{\lambda_t^b(A_t|\mathbf{H}^o_t (\boldsymbol{\lambda}^b))} \ \ \ \ \forall t\in\mathscr T.
\end{equation}

Proposition \ref{prop:SIS} establishes that, under some conditions, the optimal policy for an ADTR governed by a set of models $\mathscr M$ can be found via GSIS. The proof follows by showing that GSIS provides a {\em distributionally robust} way of estimating the outcome variable of interest under the evaluation policy (see the appendix for further details). This result, in turn, is built on understanding how the impact of an evaluation policy can first be analyzed for any given (a) sequence of actions, and (b) model $m\in\mathscr M$ under which the data might be generated (Lemma \ref{lemma:actionseq} below).

\begin{lemma}[Evaluation Using Fixed Sequences of Actions]\label{lemma:actionseq}
Suppose that an evaluation policy $\boldsymbol{\lambda}^e\triangleq(\lambda_t^e)_{t\in\mathscr T}$ satisfies sequential ignorability under a given model $m\in\mathscr M$, and let the outcome of interest be $Y(\boldsymbol{\lambda}^e)\triangleq \Gamma_T (\boldsymbol{\lambda}^e)$. Defining $\boldsymbol{\tau}_T\triangleq (a_t)_{t\in\mathscr T}$ and $\boldsymbol{\tau}_{t-1}\triangleq (a_t)_{t\in\mathscr T_{\leq t-1}}$, we have:
\begin{equation}\label{prop:ec:SISADTR3}
\mathbb E^{f_m}\big[Y(\boldsymbol{\lambda}^e)\big]= \sum_{\boldsymbol{\tau}_T} \mathbb E^m \big[Y(\boldsymbol{\tau}_T)\prod_{t\in\mathscr T} \lambda_t^e (a_t| \mathbf{H}^o_t (\boldsymbol{\tau}_{t-1}))\big],
\end{equation}
where $Y(\boldsymbol{\tau}_T)$ and $\mathbf{H}^o_t (\boldsymbol{\tau}_{t-1})$ denote $Y(\boldsymbol{\lambda}^e)$ and $\mathbf{H}^o_t (\boldsymbol{\lambda}^e)$ when the actions taken are given by $\boldsymbol{\tau}_T$ and $\boldsymbol{\tau}_{t-1}$, respectively.
\end{lemma}

As mentioned earlier, any evaluation policy satisfies sequential ignorability both naturally and by definition. Thus, the condition in Lemma \ref{lemma:actionseq} is not restrictive. Notably, however, this lemma allows us to establish an $MEU_\alpha$--unbiased estimator of $\Gamma_T(\boldsymbol{\lambda}^e)$ in Proposition \ref{prop:SIS}, where the notion of $MEU_\alpha$--unbiased estimation is defined below.

\begin{definition}[\textbf{$MEU_\alpha$--Unbiasedness}]\label{def:unbias}
An estimator $\hat Y$ of an outcome variable of interest $Y$ is said to be $MEU_\alpha$--unbiased if, and only if, $MEU_\alpha[\hat Y]=MEU_\alpha[Y]$ for any $\alpha\in\mathscr I$.
\end{definition}

To establish an $MEU_\alpha$--unbiased estimator of $\Gamma_T(\boldsymbol{\lambda}^e)$, we also need to make sure that the evaluation and the behavior policies sufficiently {\em overlap}. Specifically, we need to ensure that these policies overlap almost surely (defined below).

\begin{definition}[\textbf{Almost Sure Overlap}]
We say that the evaluation and the behavior policy almost surely overlap if, and only if, ${\lambda_t^b(a_t|\mathbf{H}^o_t (\boldsymbol{\lambda}^b))}>0$ whenever ${\lambda_t^e(a_t|\mathbf{H}^o_t (\boldsymbol{\lambda}^e))}>0$ a.s. over $\mathbf{H}^o_t(\boldsymbol{\lambda}^b)$ and $\mathbf{H}^o_t(\boldsymbol{\lambda}^e)$ for all $t\in\mathscr T$ and $a\in\mathscr A$.
\end{definition}

Intuitively, the evaluation and the behavior policy need to overlap to ensure that trajectories obtained under the behavior policy are to some extent informative about the trajectories under the evaluation policy. When the evaluation and the behavior policy almost surely overlap, the importance sampling weights defined in \eqref{eq:W} are well-defined for all $t\in\mathscr T$ (except perhaps on histories that might happen with probability zero).
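Before stating the result formally, we note that when the ambiguity set $\mathscr M$ is finite and the marginalized behavior propensities implied by each model are available, the resulting estimator can be computed directly. The following is a minimal illustrative sketch (with hypothetical inputs; it is not part of the formal development):
\begin{verbatim}
import numpy as np

def gsis_alpha_meu(g, lam_e, lam_b_by_model, alpha, beta=0.95):
    """alpha-MEU of the GSIS estimate of Gamma_T(lambda^e).

    g:              (n, T) immediate gains G_t for n observed trajectories
    lam_e:          (n, T) evaluation propensities of the observed actions
    lam_b_by_model: (M, n, T) marginalized behavior propensities, per model
    """
    n, T = g.shape
    gamma = g @ (beta ** np.arange(T))        # Gamma_T for each trajectory
    estimates = []
    for lam_b in lam_b_by_model:              # one estimate per model m
        w = np.prod(lam_e / lam_b, axis=1)    # prod_t w_t(lambda^e)
        estimates.append(np.mean(gamma * w))  # sample analog under model m
    return alpha * min(estimates) + (1 - alpha) * max(estimates)
\end{verbatim}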
\begin{proposition}[Generalized Sequential Importance Sampling (GSIS)]\label{prop:SIS} Suppose that the evaluation and behavior policies (a)~satisfy sequential ignorability under all models $m\in\mathscr M$, and (b)~almost surely overlap. Then, for any $\alpha\in\mathscr I$, we have \begin{equation}\label{prop:SISADTR} MEU_\alpha\big[\Gamma_T(\boldsymbol{\lambda}^e)\big]=MEU_\alpha\big[\Gamma_T(\boldsymbol{\lambda}^b) \prod_{t\in\mathscr T} w_t(\boldsymbol{\lambda}^e)\big], \end{equation} and hence, $\hat \Gamma_T(\boldsymbol{\lambda}^e)\triangleq \Gamma_T(\boldsymbol{\lambda}^b) \prod_{t\in\mathscr T} w_t(\boldsymbol{\lambda}^e)$ is an $MEU_\alpha$--unbiased estimator of $\Gamma_T(\boldsymbol{\lambda}^e)$. \end{proposition} Of note, Proposition \ref{prop:SIS} also provides a partial way of characterizing the optimal evaluation policy, since it yields an estimate of $MEU_\alpha\big[\Gamma_T(\boldsymbol{\lambda}^e)\big]$ under any given evaluation policy. That is, using this proposition and optimizing $MEU_\alpha\big[\Gamma_T(\boldsymbol{\lambda}^e)\big]$ over a given set of policies (which in practice might be restricted to those satisfying desired attributes such as fairness, interpretability, etc.) can shed light on the optimal evaluation policy. However, Proposition \ref{prop:SIS} provides only a partial way of characterizing the optimal evaluation policy, because analyzing ADTRs often requires considering a behavior policy that might not satisfy sequential ignorability (at least under some models in $\mathscr M$). Therefore, we next study scenarios in which the behavior policy does not fully satisfy sequential ignorability, but satisfies it {\em to some extent}. This entails limiting the impact of unobserved confounders (which bias the probability of observing certain trajectories in the data compared to what would have been observed if the unobservables were available) on the behavior policy under each model. In limiting the impact of unobserved confounders on the behavior policy, we are mainly motivated by extending the analyses of confounding in causal inference \citep[see, e.g.,][]{Rosenbaum2002} from a traditional setting in which $|\mathscr T|=1$, the treatment variable is binary $|\mathscr A|=2$, and there is no model ambiguity $|\mathscr M|=1$, to ADTRs in which these restrictions are all relaxed. Two notable challenges in doing so are: (1) since future actions depend on the history, a confounding decision/treatment in any period can make future decisions confounding as well; (2) since the trajectory probabilities depend on the underlying model, the impact of unobserved confounding depends on the underlying model. We next introduce the notion of {\em bounded unobservable confoundedness}, which we define using the likelihood ratios of treatment propensities (functions $\ell (\cdot)$ in the following definition). This, in turn, allows us to provide a version of GSIS under bounded unobservable confoundedness (Proposition \ref{prop:buc}). \begin{definition}[\textbf{Bounded Unobservable Confounding (BUC)}]\label{def:boundedconf} We say that the behavior policy satisfies Bounded Unobservable Confounding (BUC) under a model $m\in\mathscr M$ if, and only if, there exist constants $\eta_t^m\in[1,\infty)$ such that \begin{equation} (\eta^m_t)^{-1}\leq\frac{\ell (a_t,a'_t, \mathbf{H}_t^{o,m}, \mathbf{S}^m_t=\mathbf{s})}{\ell (a_t,a'_t, \mathbf{H}_t^{o,m}, \mathbf{S}^m_t=\mathbf{s}')}\leq \eta^m_t \end{equation} a.s.
over observable history $\mathbf{H}_t^{o,m}$, for all $t\in\mathscr T$, $a_t,a'_t\in \mathscr A$, $\mathbf{s}, \mathbf{s}'\in\mathscr S^{(T)}$, where $\mathbf{S}^m_t\triangleq(S^m_{t'})_{t'\in\mathscr T_{\leq t}}$ and $$ \ell (a_t,a'_t, \mathbf{H}_t^{o,m}, \mathbf{S}^m_t=\mathbf{s})\triangleq \frac{\lambda^b_t (a_t|\mathbf{H}_t^{o,m}, \mathbf{S}^m_t=\mathbf{s})}{\lambda^b_t (a'_t|\mathbf{H}_t^{o,m},\mathbf{S}^m_t=\mathbf{s})}.$$ \end{definition} The above definition bounds the impact of the vector of the unobservable confounder variables, $\mathbf{S}^m_t$, in each period. In Lemma \ref{lemma:ec:boundedconf} (Appendix B) we show that this definition results in $$(\eta^m_t)^{-1}\leq\frac{\lambda^b_t (a_t|\mathbf{H}_t^{u,m})}{\lambda^b_t (a_t|\mathbf{H}_t^{o,m})}\leq \eta^m_t\ \ \ \ \ a.s.$$ over $\mathbf{H}_t^{o,m}$ and $\mathbf{H}_t^{u,m}=(\mathbf{H}_t^{o,m}, \mathbf{S}^m_t)$ for all $t\in\mathscr T$ and $a_t\in\mathscr A$. Thus, benefiting from the observed history (as opposed to the unobserved one) and making use of the marginalized propensities $\lambda^b_t (a_t|\mathbf{H}_t^{o,m})$ as an estimate of the true treatment propensities $\lambda^b_t (a_t|\mathbf{H}_t^{u,m})$ will not be unboundedly misleading. The results provided in the following proposition are analogous to {\em design sensitivity} analyses \citep[see, e.g.,][]{Rosenbaum2010} in static (i.e., $T=1$) settings, where the idea is to examine how much the propensity odds would need to vary for the gained causal understanding to become invalid \citep[see, also, ][]{Kallus2020Conf, Kallus2021}. \begin{proposition}[GSIS under Bounded Unobservable Confounding]\label{prop:buc} Suppose the behavior policy satisfies BUC under all models $m\in\mathscr M$. If the evaluation policy satisfies sequential ignorability under all models $m\in\mathscr M$, and it overlaps with the behavior policy almost surely, then: \begin{itemize}[leftmargin=1cm,align=left] \item[(i)] Under each model $m\in\mathscr M$ we have: $$\mathbb E^m\big[\Gamma_T(\boldsymbol{\lambda}^b) \prod_{t\in\mathscr T} \underline w_t(\boldsymbol{\lambda}^e)\big] \leq \mathbb E^m\big[\Gamma_T(\boldsymbol{\lambda}^e)\big]\leq \mathbb E^m\big[\Gamma_T(\boldsymbol{\lambda}^b) \prod_{t\in\mathscr T} \overline w_t(\boldsymbol{\lambda}^e)\big],$$ where $$\underline w_t(\boldsymbol{\lambda}^e)\triangleq w_t(\boldsymbol{\lambda}^e)\, \Big(({\eta^m_t})^{-1}\, \hbox{\rm 1\kern-.35em 1}_{\{\Gamma_T(\boldsymbol{\lambda}^b) >0\}}+ {\eta^m_t}\, \hbox{\rm 1\kern-.35em 1}_{\{\Gamma_T(\boldsymbol{\lambda}^b) <0\}}\Big),$$ and $$\overline w_t(\boldsymbol{\lambda}^e)\triangleq w_t(\boldsymbol{\lambda}^e)\, \Big(({\eta^m_t})^{-1}\, \hbox{\rm 1\kern-.35em 1}_{\{\Gamma_T(\boldsymbol{\lambda}^b) <0\}}+ {\eta^m_t}\, \hbox{\rm 1\kern-.35em 1}_{\{\Gamma_T(\boldsymbol{\lambda}^b)>0\}}\Big).$$ \item[(ii)] For any $\alpha\in\mathscr I$, there exists $\tilde\alpha\in\mathscr I$ such that $MEU_\alpha\big[\Gamma_T(\boldsymbol{\lambda}^e)\big]= f(\tilde\alpha)$, where $f (\tilde\alpha)\triangleq \tilde\alpha\, MEU_\alpha\big[\Gamma_T(\boldsymbol{\lambda}^b) \prod_{t\in\mathscr T} \underline w_t(\boldsymbol{\lambda}^e)\big]+ (1-\tilde\alpha) MEU_\alpha\big[\Gamma_T(\boldsymbol{\lambda}^b) \prod_{t\in\mathscr T} \overline w_t(\boldsymbol{\lambda}^e)\big].$ \end{itemize} \end{proposition} Similar to Proposition \ref{prop:SIS}, part (ii) of Proposition \ref{prop:buc} provides a way of finding the optimal evaluation policy, since it characterizes the causal impact of any such policy.
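As a minimal illustration of part (i) of Proposition \ref{prop:buc}, the following Python sketch computes the per-trajectory lower and upper weighted gains; the inputs (the realized gain under the behavior policy, the weights $w_t(\boldsymbol{\lambda}^e)$, and the BUC constants $\eta^m_t$) are assumed to be given, and the function name is ours:
\begin{verbatim}
import numpy as np

def buc_bounds(gain, weights, eta):
    # gain: realized Gamma_T under the behavior policy for one trajectory
    # weights[t]: w_t(lambda^e); eta[t]: BUC constants eta_t^m >= 1
    w, eta = np.asarray(weights, float), np.asarray(eta, float)
    if gain > 0:
        lower, upper = gain * np.prod(w / eta), gain * np.prod(w * eta)
    elif gain < 0:
        lower, upper = gain * np.prod(w * eta), gain * np.prod(w / eta)
    else:
        lower = upper = 0.0  # both indicator terms vanish when the gain is zero
    return lower, upper  # average over trajectories to estimate the bounds
\end{verbatim}
Averaging these per-trajectory quantities over the observed data gives empirical versions of the two expectations bracketing $\mathbb E^m\big[\Gamma_T(\boldsymbol{\lambda}^e)\big]$.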
Whereas Proposition \ref{prop:SIS} requires the behavior policy to satisfy sequential ignorability---an unrealistic assumption in most applications---Proposition \ref{prop:buc} only requires the unobserved variables to have a bounded impact. Importantly, however, Proposition \ref{prop:SIS} directly provides an $MEU_\alpha$--unbiased estimator, but Proposition \ref{prop:buc} does so subject to a tuning parameter $\tilde\alpha$. Specifically, in part (ii) of Proposition \ref{prop:buc}, the function $f$ can be computed using only observed data. This, in turn, resolves the issue that the outcome of interest under the evaluation policy as well as the time-varying confounders needed to estimate it are unobservable. But to use Proposition \ref{prop:buc} part (ii), one needs to tune the parameter $\tilde\alpha$. Since $f$ is a decreasing function and $\tilde\alpha\in\mathscr I$, tuning $\tilde\alpha$ can be done in a structured way. For example, in practice, one is often interested in evaluating policies that are known to be better than the behavior policy. Thus, we have $f (\tilde\alpha)\geq MEU_\alpha\big[\Gamma_T(\boldsymbol{\lambda}^b)\big]$, implying that one can start tuning $\tilde\alpha$ using the threshold value $\tilde\alpha^*\triangleq \min\bigg\{f^{(-1)} \Big(MEU_\alpha\big[\Gamma_T(\boldsymbol{\lambda}^b)\big]\Big), 1\bigg\}$ and only consider values of $\tilde\alpha$ that are in $[0,\tilde\alpha^*]$. More importantly, it should be noted that the parameters $(\eta_t^m)_{t\in\mathscr T,m\in\mathscr M}$ are design sensitivity parameters. Specifically, for any $\epsilon>0$, they can be chosen so that $f(0)-f(\tilde\alpha^*)<\epsilon$. Since $f(0)-f(\tilde\alpha^*)\geq 0$, this allows one to use any $\tilde\alpha$ in $[0,\tilde\alpha^*]$ and obtain an {\em approximately} $MEU_\alpha$--unbiased estimator for $\Gamma_T(\boldsymbol{\lambda}^e)$ with a guaranteed approximation error of $\epsilon$. Finally, we note that one can extend Propositions \ref{prop:SIS} and \ref{prop:buc} to provide doubly robust estimators\footnote{For related studies on doubly robust estimators, we refer to \cite{Bang2015, Jiang2015, Thomas2016, Kallus2020, Athey2021}, and the references therein.} to account for the fact that, under each given model $m\in\mathscr M$, the variance of an importance sampling based estimator can be high. Such an extension is, however, of limited use in our work, because we are (a) directly allowing for a cloud of models, and (b) using $MEU_\alpha$ of the outcome variable as opposed to its expected value (the criterion used in the studies related to doubly robust estimation). Instead, we next develop two RL methods based on our results, and establish that they have suitable asymptotic behavior, including consistency and asymptotic normality. We also test their performance directly using both a clinical data set and simulation experiments, and find that our proposed learning methods provide strong robustness to model ambiguity (see, e.g., Section~\ref{sec:robsutness}). \section{Analyzing ADTRs via APOMDPs}\label{sec:APOMDP} In this section, we show that a tractable way of analyzing ADTRs is via APOMDPs.
Specifically, analyzing ADTRs via APOMDPs enables (a) considering unobserved variables as latent time-varying states while allowing for model ambiguity, and (b) developing efficient RL methods.\footnote{For other approaches in modeling confounders as hidden states see, e.g., \cite{Bennett2021, Xu2020}, and the references therein.} An APOMDP can be represented via the Directed Acyclic Graph (DAG) depicted in Figure \ref{fig:apomdpascm}. The ambiguous mechanisms in this figure represent causal relationships that cannot be quantified from the data alone. The main assumption needed to represent an ADTR via an APOMDP is that the dynamics of the variables are Markovian. In various applications, it is often possible to transform data so that this assumption holds \citep[see, e.g.,][]{Xu2020}. Specifically, while the observed history $\mathbf{H}^o_t$ grows over time, we can assume that there are summary functions $\nu_t:\mathscr H_t^o\to\Delta_\mathscr S$ such that $\boldsymbol{\pi}_t\triangleq\nu_t(\mathbf{h}_t^o)$ (a belief distribution over the latent states) is a sufficient statistic.\footnote{For POMDPs and APOMDPs, it is known that the belief distribution over latent states is a sufficient statistic \citep[see, e.g.,][and the references therein]{Saghafian2018, Boloori2020,Saghafian20194}.} Using the belief distribution $\boldsymbol{\pi}_t$, we can work with transformed policies: we can consider $\boldsymbol{\mu}^e\triangleq\big(\mu^e_t(\boldsymbol{\pi}_t)\big)_{t\in\mathscr T}$ and $\boldsymbol{\mu}^b\triangleq\big(\mu^b_t(\boldsymbol{\pi}_t)\big)_{t\in\mathscr T}$ as the evaluation and behavior policies, respectively, where $\mu^e_t, \mu^b_t: \Delta_\mathscr S\to\Delta_\mathscr A$. We denote the probability that an action $a_t$ is applied at time $t$ (when the belief distribution is $\boldsymbol{\pi}_t$) under these transformed evaluation and behavior policies by $\mu^e_t(a_t|\boldsymbol{\pi}_t)$ and $\mu^b_t(a_t|\boldsymbol{\pi}_t)$, respectively. In what follows, we first define APOMDPs and then develop two RL algorithms that enable finding the optimal policy by efficiently learning the causal impact of any given evaluation policy. \begin{figure}[t] \begin{center} \includegraphics[scale=0.65]{APOMDPSCM.pdf}\vspace{5mm} \caption{\scriptsize DAG representation of APOMDPs. {\em Circles:} observable variables; {\em Rectangles:} unobservable variables; {\em Solid arrows:} unambiguous causal mechanisms; {\em Dashed arrows:} ambiguous causal mechanisms.}\label{fig:apomdpascm} \end{center}\vspace{-4mm} \end{figure} As defined in \cite{Saghafian2018}, a time-homogeneous APOMDP is an extension of classical POMDPs, and can be defined by the tuple ($\alpha$, $\beta$, $\mathscr S$, $\mathscr O$, $\mathscr A$, $\mathscr G$, $\mathscr P$, $\mathscr Q$). The notation used in the first part of this tuple is as introduced earlier. $\mathscr P$ and $\mathscr Q$ are the sets of possible transition probability matrices with respect to (latent) states and observations, respectively \citep{Saghafian2018}. These sets define the ambiguous causal mechanisms depicted in Figure \ref{fig:apomdpascm}. To simplify the analyses, we can index members of the set $\mathscr P\times\mathscr Q$ using $\mathscr M$ so that each $m\in\mathscr M$ represents a specific (unambiguous) POMDP model.
In particular, associated with each $m\in\mathscr M$ is a set of the form $P_m \times Q_m$ with $P_m\in\mathscr P$ and $Q_m\in\mathscr Q$ denoting the sets of state and observation transition probabilities under model $m$, respectively \citep{Saghafian2018}. In this setting, (a) $P_m\triangleq\{P_m^{a}: a\in \mathscr A\}$, where for each $a\in \mathscr A$, $P_m^a\triangleq [p^a_{ij}(m)]_{i,j\in\mathscr S}$ is an $|\mathscr S|\times |\mathscr S|$ matrix with $p^a_{ij}(m)\triangleq Pr\{j|i,a,m\}$ denoting the probability that the (latent) state moves to $j$ from $i$ under action $a$ and model $m$, and (b) $Q_m\triangleq\{Q_m^a: a\in \mathscr A\}$, where for each $a\in \mathscr A$, $Q_m^a\triangleq [q^a_{jo}(m)]_{j\in\mathscr S, o\in\mathscr O}$ is an $|\mathscr S|\times |\mathscr O|$ matrix with $q^a_{jo}(m)\triangleq Pr\{o|j,a,m\}$ denoting the probability of observing $o$ under action $a$ and model $m$ when the (latent) state is $j$ \citep{Saghafian2018}. If $\mathscr M$ were a singleton with its only member being $m$, the optimal gain and policy for any $t\in\mathscr T$ and $\boldsymbol{\pi} \in \Delta_\mathscr S$ could be obtained by a traditional POMDP Bellman equation (along with the terminal condition $V_0^m(\boldsymbol{\pi})\triangleq 0$): \begin{equation}\label{traditional} V^m_t(\boldsymbol{\pi})=\max_{a\in \mathscr A}\Big\{\boldsymbol{\pi}'\mathbf{g}^a+\beta\sum_{o\in\mathscr O} Pr\{o|\boldsymbol{\pi},a, m\}V^m_{t-1}(T(\boldsymbol{\pi}, a, o, m))\Big\}, \end{equation} where $V^m_t(\boldsymbol{\pi})$ denotes the value function under model $m$ when the belief distribution is $\boldsymbol{\pi}$ and there are $t$ periods to go, `` $'$ " represents the transpose operator, $Pr\{o|\boldsymbol{\pi},a, m\}=\sum_i\sum_j \pi_i p_{ij}^a (m) q_{jo}^a (m)$, and the belief updating operator $T:\, \Delta_\mathscr S\times \mathscr A\times\mathscr O\times\mathscr M\to\Delta_\mathscr S$ is defined by Bayes' rule (in the matrix form): \begin{equation}\label{beliefupd} T(\boldsymbol{\pi}, a, o, m)=\frac{\big(\boldsymbol{\pi}'P^a_m Q^a_m(o)\big)'}{Pr\{o|\boldsymbol{\pi}, a, m\}}, \end{equation} with $Q^a_m(o)\triangleq\text{diag} (q^a_{1o}(m), q^a_{2o}(m),\ldots, q^a_{no}(m))$ denoting the diagonal matrix made of the $o$th column of $Q^a_m$ \citep{Saghafian2018}. Unlike in POMDPs, in APOMDPs $\mathscr M$ is not a singleton. However, it is shown in \cite{Saghafian2018} that the APOMDP value function, a model-independent function which we denote by $V_t(\boldsymbol{\pi})$, can still be obtained using dynamic programming. Furthermore, the underlying Bellman operator in the APOMDP is a contraction mapping with modulus $\beta$ on a complete metric space (under some mild conditions), which in turn allows analyzing the APOMDP value function in infinite-horizon settings as the limit of its finite-horizon version. More importantly, \cite{Saghafian2018} establishes some structural properties for the value function of the APOMDP (e.g., piecewise linearity and continuity in $\boldsymbol{\pi}$). In the next section, we make use of these structural properties to develop effective and efficient RL approaches (termed Augmented V-Learning). We start our analyses by first developing suitable algorithms for learning the value function in POMDPs (i.e., when $|\mathscr M|=1$), and then show how they can be extended to learn the APOMDP value function.
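Before proceeding, we note that the belief update in \eqref{beliefupd} is straightforward to implement. A minimal Python sketch follows (vector and matrix shapes follow the notation above; the function name is ours and is hypothetical):
\begin{verbatim}
import numpy as np

def belief_update(pi, P_a, Q_a, o):
    # pi: current belief over latent states; P_a: |S| x |S| matrix P^a_m;
    # Q_a: |S| x |O| matrix Q^a_m; o: index of the new observation.
    unnormalized = (pi @ P_a) * Q_a[:, o]   # pi' P^a_m Q^a_m(o), elementwise
    pr_o = unnormalized.sum()               # Pr{o | pi, a, m}
    return unnormalized / pr_o              # T(pi, a, o, m) via Bayes' rule
\end{verbatim}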
\section{Augmented V-Learning for POMDPs and APOMDPs} \subsection{Augmented V-Learning for POMDPs}\label{sec:AVPOMDP} To develop our results, we require that the behavior policy, $\boldsymbol{\mu}^b$, satisfies {\em positivity} defined below. \begin{definition}[Positivity] We say that a policy $\boldsymbol{\mu}\triangleq (\mu_t)_{t\in\mathscr T}$ satisfies positivity if, and only if, there exists a constant $c_0>0$ such that $\mu_t (a_t| \boldsymbol{\pi}_t)\geq c_0$ for all $t\in\mathscr T$, $\boldsymbol{\pi}_t\in\Delta_\mathscr S$, and $a_t\in\mathscr A$. \end{definition} Positivity implies that all actions have a positive chance of being selected (i.e., of appearing in the observed data) regardless of the belief. The behavior policy, $\boldsymbol{\mu}^b$, automatically satisfies positivity when the data is collected based on a randomized trial. When using observational data this assumption is sensible, because inference involving treatment patterns (using action $a_t$ when the belief is $\boldsymbol{\pi}_t$) that cannot occur in the observational study requires further knowledge and assumptions \citep{Murphy2001}. If the behavior policy satisfies positivity, we can establish the following result (see also Lemma 4.1 of \cite{Murphy2001} and Lemma 2.1 of \cite{Luckett2020} for related results in settings with fully observable states). \begin{proposition}[Weight-Adjusted Bellman Equation]\label{prop:POMDPest} Suppose $|\mathscr M|=1$ and denote the only member of $\mathscr M$ by $m$. If $\boldsymbol{\mu}^b$ satisfies positivity and sequential ignorability, then for any policy $\boldsymbol{\mu}^e$, the finite-horizon value function satisfies the weight-adjusted Bellman equation \begin{equation}\label{eq:POMDPest1} V_{T-t+1}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi}_t)= \mathbb E^m \bigg[\frac{\mu_t^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu_t^b(A_t|\boldsymbol{\Pi}^m_t)}\Big[G_t+ \beta\, V_{T-t}^{m,\boldsymbol{\mu}^e} (T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))\Big]\Big| \boldsymbol{\Pi}^m_t=\boldsymbol{\pi}_t \bigg], \end{equation} for all $t\in\mathscr T$ and $\boldsymbol{\pi}_t\in\Delta_\mathscr S$, where $\boldsymbol{\pi}_t$ is considered as a realization (of a model-dependent random variable denoted by $\boldsymbol{\Pi}^m_t$) and $V_{0}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi})\triangleq 0$. Therefore, for any function $\phi$ defined on $\Delta_\mathscr S$, and for all $t\in\mathscr T$, we have: \begin{equation} \label{eq:POMDPest2} \mathbb E^m \bigg[ \frac{\mu_t^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu_t^b(A_t|\boldsymbol{\Pi}^m_t)}\Big[G_t+ \beta\, V_{T-t}^{m,\boldsymbol{\mu}^e} (T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))- V_{T-t+1}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\Pi}^m_t)\Big] \phi (\boldsymbol{\Pi}^m_t)\bigg]=0. \end{equation} \end{proposition} The importance of Proposition \ref{prop:POMDPest} (which builds on the importance sampling results of the previous section) is that it allows us to empirically estimate the value function under any evaluation policy, and hence, learn the optimal policy.
Specifically, using the data, we can make use of the sample-average version of \eqref{eq:POMDPest2}: \begin{equation} \label{eq:POMDPest3} \mathbb E^{\mathbb P} \Bigg[ \sum_{t\in\mathscr T}\bigg[ \frac{\mu_t^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu_t^b(A_t|\boldsymbol{\Pi}^m_t)}\Big[G_t+ \beta\, V_{T-t}^{m,\boldsymbol{\mu}^e} (T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))- V_{T-t+1}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\Pi}^m_t)\Big] \phi (\boldsymbol{\Pi}^m_t)\bigg]\Bigg]=0, \end{equation} where $\mathbb E^{\mathbb P}$ denotes the average with respect to the empirical probability measure.\footnote{For a random variable $X$ with $n$ observed values denoted by $x_1, x_2,\cdots, x_n$, $\mathbb E^{\mathbb P}[X]\triangleq n^{-1} \sum_{i=1}^n x_i$. Similarly, for a function $f$, $\mathbb E^{\mathbb P}[f(X)]\triangleq n^{-1} \sum_{i=1}^n f(x_i)$.} It is important to note that while we use the sample average in \eqref{eq:POMDPest3}, the result still depends on the assumed $m$, because while the sequence $\{(A_t, O_t)\}_{t\in\mathscr T}$ is observable to us, to form the sequence $\{\boldsymbol{\Pi}^m_t\}_{t\in\mathscr T}$, we need to have an assumed model. That is, due to the existence of unobserved variables, the empirical measure alone is {\em insufficient} for our goal. \begin{remark} [\textbf{Efficient Approximation}] In establishing the properties of our approaches (e.g., consistency and asymptotic normality) we only require an {\em approximate} solution to \eqref{eq:POMDPest3}. Thus, the method by which this approximate solution is obtained is not particularly restrictive. Indeed, there are many ways to obtain an approximate solution to \eqref{eq:POMDPest3}. In what follows, however, we provide a data-efficient way of estimating the optimal policy and optimal value function using \eqref{eq:POMDPest3}. We do so by taking advantage of important structural properties of the optimal value function of POMDPs and APOMDPs. Specifically, the optimal value function of POMDPs is known to be piecewise linear and convex in $\boldsymbol{\pi}$ under some mild conditions \citep{Sondik_Finite_1973}. \cite{Saghafian2018} shows that in general the convexity does not hold in APOMDPs, and some additional conditions are needed (see Proposition 2 of \cite{Saghafian2018}). To be consistent across both POMDP and APOMDP settings, we only assume {\em piecewise linearity} and {\em continuity}, but do not impose any assumption on convexity. This, in turn, helps us in another way: while piecewise linear and continuous functions can be efficiently learned from data, learning a function that is both piecewise linear and convex (i.e., is the {\em point-wise maximum} of a set of linear functions) is much harder \citep[see, e.g.,][and the references therein]{Magnani2009}. \end{remark} Let $\mathscr V$ denote the set of real-valued piecewise linear and continuous bounded functions defined on $\Delta_\mathscr S$, and assume $V_t^{m,\boldsymbol{\mu}^e}\in\mathscr V$.
To learn $V_t^{m,\boldsymbol{\mu}^e}\in\mathscr V$ using \eqref{eq:POMDPest3}, we consider the parametric version of the value function: $V_t^{m,\boldsymbol{\mu}^e}(\boldsymbol{\pi};\boldsymbol{\psi}_t)\triangleq \big(\mathbf{b} (\boldsymbol{\pi})\big)'\, \boldsymbol{\psi}_t$, where $\mathbf{b} (\boldsymbol{\pi})\triangleq\big(\mathbf{b}_1 (\boldsymbol{\pi}), \mathbf{b}_2 (\boldsymbol{\pi}),\cdots, \mathbf{b}_{d_t} (\boldsymbol{\pi})\big)'$ is a predefined {\em basis function} that allows us to ensure that the learned function is in $\mathscr V$, and $\boldsymbol{\psi}_t \in \boldsymbol{\Psi}_t\subseteq \mathbb R^{d_t}$ is the parameter.\footnote{Allowing the dimensionality of the parameter space, $d_t$, to depend on $t$ can enable us to increase flexibility as $t$ grows (e.g., by introducing more knots). The special case where $d_t$ does not depend on $t$ is still useful in some settings, including those where the goal is to learn the long-run impact of a policy (see, e.g., Algorithms \ref{alg:DAV} and \ref{alg:SAV} in the next sections).} This also enables us to set $\phi(\boldsymbol{\pi})\triangleq \mathbf{b} (\boldsymbol{\pi})$ in \eqref{eq:POMDPest3}, since $\mathbf{b} (\boldsymbol{\pi})$ can be thought of as the gradient of $V_t^{m,\boldsymbol{\mu}^e}(\boldsymbol{\pi};\boldsymbol{\psi}_t)$ with respect to its parameter, which only depends on $\boldsymbol{\pi}$ (and not the parameter) and is defined almost everywhere. Furthermore, since $\boldsymbol{\psi}_{t}$ can be high-dimensional in some applications (especially when $t$ is large), we estimate it using a regularized approach (to avoid overfitting) as follows. Starting with $V_0^m(\boldsymbol{\pi})=0$ and moving backwards iteratively, given an estimate of the $(T-t)$-periods-to-go value function ($\hat V_{T-t}^{m,\boldsymbol{\mu}^e}$), we define \begin{equation} \label{eq:POMDPest4} \varphi^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi}_t)\triangleq \mathbb E^{\mathbb P} \bigg[ \frac{\mu_t^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu_t^b(A_t|\boldsymbol{\Pi}^m_t)}\Big[G_t+ \beta\, \hat V_{T-t}^{m,\boldsymbol{\mu}^e} (T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))- V_{T-t+1}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\Pi}^m_t; \boldsymbol{\psi}_t)\Big] \mathbf{b}(\boldsymbol{\Pi}^m_t)\bigg]. \end{equation} We then obtain the estimate \begin{equation} \label{eq:POMDPest5} \hat \boldsymbol{\psi}_t^{\boldsymbol{\mu}^e}=\arg\!\min_{\boldsymbol{\psi}_t\in\boldsymbol{\Psi}_t} \bigg\{\big(\varphi^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi}_t)\big)'\, \boldsymbol{\Omega}\, \varphi^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi}_t)+ \theta_t \mathcal P (\boldsymbol{\psi}_t)\bigg\}, \end{equation} where $\boldsymbol{\Omega}$ is an arbitrary positive definite matrix, $\mathcal P (\cdot)$ is a penalty function, and $\theta_t$ is a tuning parameter.\footnote{In our case study, simulation experiments, and theoretical results, we make use of the squared Euclidean norm as the penalty function, and hence, assume $\mathcal P (\boldsymbol{\psi}_t)= \boldsymbol{\psi}_t'\boldsymbol{\psi}_t$.} Consequently, we plug $\hat \boldsymbol{\psi}_t^{\boldsymbol{\mu}^e}$ into $V_{T-t+1}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi}_t; \boldsymbol{\psi}_t)$ and thereby obtain an estimate for the value function $V_{T-t+1}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi}_t)$, and move to the next period (backward). This procedure, under a given model $m\in\mathscr M$, yields an estimator for the gain under $\boldsymbol{\mu}^e$.
That is, $\hat \Gamma_T^m (\boldsymbol{\mu}^e)\triangleq \int \hat V_{T}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$ can be used as an estimator for $\Gamma_T^m (\boldsymbol{\mu}^e)= \int V_{T}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$, where $dF(\boldsymbol{\pi})$ is a given distribution on (starting) belief values. Since we have an estimator for the gain under any policy $\boldsymbol{\mu}^e$, we can obtain $\hat\boldsymbol{\mu}{^{e*}}\triangleq\arg\!\max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma_T^m (\boldsymbol{\mu}^e)$ as an estimate for the optimal policy under model $m$, where $\Upsilon$ is a given set of policies.\footnote{We use the ``$\max$" operator instead of ``$\sup$," because in most real-world applications, $\Upsilon$ is first identified by a set of domain experts and is such that the maximum is attained. We later make this assumption more explicit (see, e.g., Condition (C4) in Section \ref{sec:assymptotics}). Furthermore, in various practical applications, $\Upsilon$ is often restricted to the set of policies that satisfy specific attributes such as fairness or interpretability.} Finally, an estimate of the gain under the optimal policy is $\hat \Gamma_T^m (\hat\boldsymbol{\mu}{^{e*}})$. In an infinite-horizon setting, the procedure above simplifies. This is because in homogeneous POMDPs (and APOMDPs) the value function with $t$ periods to go converges to a stationary value function as $t\to\infty$ \citep[see, e.g., Proposition 1 of][]{Saghafian2018}. Therefore, in \eqref{eq:POMDPest3} we can replace both $V^{m,\boldsymbol{\mu}^e}_{T-t} (\cdot)$ and $V^{m,\boldsymbol{\mu}^e}_{T-t+1} (\cdot)$ with the same function. This removes the need for recursive calculations and allows us to follow a one-shot data-efficient method. We discuss this further in the next sections, and also study the asymptotic behavior of our proposed approach. \subsection{Augmented V-Learning for APOMDPs} Motivated by the results in the previous section, we now extend our approach to APOMDPs, where the condition $|\mathscr M|=1$ does not hold. We propose two approaches termed {\em Direct Augmented V-Learning} ($\texttt{DAV-Learning}$) and {\em Safe Augmented V-Learning} ($\texttt{SAV-Learning}$). As we will see, in $\texttt{DAV-Learning}$, we {\em directly} extend the approach presented in the previous section for POMDPs by first obtaining a value function separately for each POMDP model in $\mathscr M$. These values are then combined at the end of the procedure to provide an estimate of the value function for the APOMDP. In $\texttt{SAV-Learning}$, however, we make use of a {\em safe} estimation approach upfront that takes into account ambiguity and removes the need to obtain a value function separately for each POMDP model in $\mathscr M$. \subsubsection{Direct Augmented V-Learning ($\texttt{DAV-Learning}$).}\label{sec:DAV} Recall that for each evaluation policy $\boldsymbol{\mu}^e$ and each given $m\in\mathscr M$, we can use the approach proposed for POMDPs in Section \ref{sec:AVPOMDP} to obtain an estimate for the value function $V_{T}^{m,\boldsymbol{\mu}^e} (\cdot)$, which we denote by $\hat V_{T}^{m,\boldsymbol{\mu}^e} (\cdot)$.
Thus, we can first obtain an estimate for the APOMDP value function: \begin{equation} \hat V_{T}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi}) = MEU_\alpha \big[\hat V_{T}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi})\big] \triangleq \alpha\,\inf_{m\in\mathscr M} \hat V_{T}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi})+ (1-\alpha)\,\sup_{m\in\mathscr M} \hat V_{T}^{m,\boldsymbol{\mu}^e}(\boldsymbol{\pi}). \end{equation} Next, to estimate the optimal policy, we note that for any policy $\boldsymbol{\mu}^e$, the estimator of the gain is $\hat \Gamma_T ({\boldsymbol{\mu}^e})= \int \hat V_{T}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$, where $dF(\boldsymbol{\pi})$ is a given distribution on (starting) belief values. This means that we can obtain an estimate of the optimal policy as $\hat \boldsymbol{\mu}{^{e*}}\triangleq\arg\!\max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma_T ({\boldsymbol{\mu}^e})$. Finally, the estimated optimal gain is $\hat \Gamma_T (\hat \boldsymbol{\mu}{^{e*}})$. This $\texttt{DAV-Learning}$ approach for APOMDPs in the infinite-horizon case is presented in Algorithm \ref{alg:DAV}. In presenting this algorithm, as is often the case, we assume that the data only includes a finite number of periods for each subject, but the goal is to estimate the long-run performance of policies \citep[see, e.g.,][]{Luckett2020, Xu2020}. We also use subscript $n$ to highlight the dependence of our estimators on the number of trajectories in the data set, which in turn allows us to investigate the behavior of our proposed learning algorithm as $n\to\infty$ (see Section \ref{sec:assymptotics}). Our estimation equations for the infinite-horizon gain are \begin{equation} \label{eq:SAV4} \varphi_n^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})\triangleq \mathbb E^{\mathbb P} \Bigg[\sum_{t\in\mathscr T}\bigg[ \frac{\mu^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu^b(A_t|\boldsymbol{\Pi}^m_t)}\Big[G_t+ \beta\, V_\infty^{m,\boldsymbol{\mu}^e} (T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))- V_\infty^{m,\boldsymbol{\mu}^e} (\boldsymbol{\Pi}^m_t)\Big] \mathbf{b} (\boldsymbol{\Pi}^m_t)\bigg]\Bigg] \end{equation} and \begin{equation} \label{eq:SAV5} \hat \boldsymbol{\psi}_n^{m,\boldsymbol{\mu}^e}=\arg\!\min_{\boldsymbol{\psi}\in\boldsymbol{\Psi}} \bigg\{\big(\varphi_n^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})\big)'\, \boldsymbol{\Omega}\, \varphi_n^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})+ \theta_n \mathcal P (\boldsymbol{\psi})\bigg\}, \end{equation} where $\boldsymbol{\Psi}\subseteq \mathbb R^{d}$. Similar to before, we make use of the piecewise linearity and continuity of the value function (i.e., the fact that $V^{m,\boldsymbol{\mu}^e}_{\infty}\in \mathscr V$ for all $m\in\mathscr M$). This allows us to use predefined basis functions to ensure that the learned function remains in $\mathscr V$ when we use the parametric form $ V^{m,\boldsymbol{\mu}^e}_{\infty} (\boldsymbol{\pi};\boldsymbol{\psi})\triangleq \big(\mathbf{b} (\boldsymbol{\pi})\big)'\, \boldsymbol{\psi}$. Using \eqref{eq:SAV5}, we then set $\hat V^{m,\boldsymbol{\mu}^e}_{\infty} (\boldsymbol{\pi})\triangleq V^{m,\boldsymbol{\mu}^e}_{\infty} (\boldsymbol{\pi}; \hat \boldsymbol{\psi}_n^{m,\boldsymbol{\mu}^e})$.
In addition, denoting the infinite-horizon gain under any policy $\boldsymbol{\mu}^e$ and $m\in\mathscr M$ by $\Gamma^m_{\infty} (\boldsymbol{\mu}^e)\triangleq \int V_{\infty}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$, we consider $\hat \Gamma^m_{\infty} (\boldsymbol{\mu}^e)\triangleq \int \hat V_{\infty}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$ as an estimator for $\Gamma^m_{\infty} (\boldsymbol{\mu}^e)$. With estimated values under each model $m$ in hand, we next define the estimated overall gain (a model-independent value) as $\hat \Gamma_{\infty} (\boldsymbol{\mu}^e)\triangleq \alpha \inf_{m\in\mathscr M} \hat \Gamma^m_{\infty} (\boldsymbol{\mu}^e) + (1-\alpha) \sup_{m\in\mathscr M} \hat \Gamma^m_{\infty} (\boldsymbol{\mu}^e)$, which provides an estimate of the overall gain $\Gamma_{\infty} (\boldsymbol{\mu}^e)\triangleq \alpha \inf_{m\in\mathscr M} \Gamma^m_{\infty} (\boldsymbol{\mu}^e) + (1-\alpha) \sup_{m\in\mathscr M} \Gamma^m_{\infty} (\boldsymbol{\mu}^e)$. Finally, the estimated optimal policy and its infinite-horizon value for the APOMDP are obtained as $\hat\boldsymbol{\mu}{^{e*}}\triangleq \arg\!\max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma_{\infty} (\boldsymbol{\mu}^e)$ and $\hat \Gamma_{\infty} (\hat\boldsymbol{\mu}^{e*})=\max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma_{\infty} (\boldsymbol{\mu}^e)$, respectively, where the latter provides an estimate for $\Gamma_{\infty} (\boldsymbol{\mu}^{e*})\triangleq \max_{\boldsymbol{\mu}^e\in\Upsilon} \Gamma_{\infty} (\boldsymbol{\mu}^{e})$. Similarly, under each model $m$, we denote the estimated optimal policy and its infinite-horizon value as $\hat\boldsymbol{\mu}{^{e*,m}}\triangleq \arg\!\max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma^m_{\infty} (\boldsymbol{\mu}^e)$, and $\hat \Gamma^m_{\infty} (\hat\boldsymbol{\mu}^{e*,m})=\max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma^m_{\infty} (\boldsymbol{\mu}^e)$, respectively, where the latter provides an estimate for $\Gamma^m_{\infty} (\boldsymbol{\mu}^{e*,m})\triangleq \max_{\boldsymbol{\mu}^e\in\Upsilon} \Gamma^m_{\infty} (\boldsymbol{\mu}^{e})$.
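To illustrate the final aggregation and selection steps of $\texttt{DAV-Learning}$, the following Python sketch combines per-model estimated gains via $MEU_\alpha$ and picks the best policy from a finite set $\Upsilon$; the container names are hypothetical, and the per-model gains are assumed to have been estimated already:
\begin{verbatim}
def meu_gain(model_gains, alpha):
    # model_gains: {m: estimated Gamma^m_infinity(mu^e)} over the cloud M
    # alpha in [0, 1]: the decision maker's ambiguity attitude
    vals = list(model_gains.values())
    return alpha * min(vals) + (1 - alpha) * max(vals)

def dav_select(gains_by_policy, alpha):
    # gains_by_policy: {mu^e: {m: estimated Gamma^m_infinity(mu^e)}}
    scores = {mu: meu_gain(g, alpha) for mu, g in gains_by_policy.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]   # estimated optimal policy and its gain
\end{verbatim}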
\begin{algorithm}[t]\label{alg:DAV} \scriptsize \DontPrintSemicolon \For{each observed trajectory and model $m\in\mathscr M$} { Initialize $\boldsymbol{\pi}^m_0$ using a random draw from $F(\boldsymbol{\pi})$; \; set $t=1$;\; \While{$t+1\in\mathscr T$} { $\boldsymbol{\pi}^m_{t+1}\leftarrow T(\boldsymbol{\pi}^m_t, a_t, o_t, m)$; } } \For{any given $\boldsymbol{\mu}^e\in\Upsilon$ and $m\in\mathscr M$} { $ \varphi_n^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})\leftarrow \mathbb E^{\mathbb P} \Bigg[\sum_{t\in\mathscr T}\bigg[ \frac{\mu^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu^b(A_t|\boldsymbol{\Pi}^m_t)}\Big[G_t+ \beta\, V_\infty^{m,\boldsymbol{\mu}^e} (T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))- V_\infty^{m,\boldsymbol{\mu}^e} (\boldsymbol{\Pi}^m_t)\Big] \mathbf{b} (\boldsymbol{\Pi}^m_t)\bigg]\Bigg]$;\; $\hat \boldsymbol{\psi}_n^{m,\boldsymbol{\mu}^e}\leftarrow\arg\!\min_{\boldsymbol{\psi}\in\boldsymbol{\Psi}} \bigg\{\big(\varphi_n^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})\big)'\, \boldsymbol{\Omega}\, \varphi_n^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})+ \theta_n \mathcal P (\boldsymbol{\psi})\bigg\}$;\; $\hat V^{m,\boldsymbol{\mu}^e}_{\infty}(\boldsymbol{\pi}) \leftarrow \big(\mathbf{b} (\boldsymbol{\pi})\big)'\, \hat \boldsymbol{\psi}_n^{m,\boldsymbol{\mu}^e}$;\; $\hat \Gamma^m_{\infty} (\boldsymbol{\mu}^e) \leftarrow\int \hat V_{\infty}^{m,\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$;\; } \For{any given $\boldsymbol{\mu}^e\in\Upsilon$} { $\hat \Gamma_{\infty} (\boldsymbol{\mu}^e) \leftarrow \alpha \inf_{m\in\mathscr M} \hat \Gamma^m_{\infty} (\boldsymbol{\mu}^e) + (1-\alpha) \sup_{m\in\mathscr M} \hat \Gamma^m_{\infty} (\boldsymbol{\mu}^e)$;\; } $\hat\boldsymbol{\mu}{^{e*}} \leftarrow \arg\!\max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma_{\infty} (\boldsymbol{\mu}^e)$;\; $\hat \Gamma_{\infty} (\hat\boldsymbol{\mu}^{e*} )\leftarrow \max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma_{\infty} (\boldsymbol{\mu}^e)$; \caption{$\texttt{DAV-Learning}$} \end{algorithm} \subsubsection{Safe Augmented V-Learning ($\texttt{SAV-Learning}$).} \label{sec:SAV} The $\texttt{DAV-Learning}$ algorithm presented in the previous section is a direct extension of the approach proposed for POMDPs (Section \ref{sec:AVPOMDP}) in which ``the curse of ambiguity" is overcome at the end. In contrast, in $\texttt{SAV-Learning}$, this curse is overcome upfront via a ``safe method" for estimating the underlying parameter $\boldsymbol{\psi}_t$, and hence, the value function. To develop the $\texttt{SAV-Learning}$ algorithm, similar to before, we first denote the APOMDP value function with $t$ periods to go under policy $\boldsymbol{\mu}^e$ (a model-independent function) by $V^{\boldsymbol{\mu}^e}_t$, assume that $V^{\boldsymbol{\mu}^e}_t\in\mathscr V$, and parameterize it via $V_t^{\boldsymbol{\mu}^e}(\boldsymbol{\pi};\boldsymbol{\psi}_t)\triangleq \big(\mathbf{b} (\boldsymbol{\pi})\big)'\, \boldsymbol{\psi}_t$.
We then estimate its parameter as \begin{equation} \label{eq:SAV2} \hat \boldsymbol{\psi}_t^{\boldsymbol{\mu}^e} \triangleq MEU_{\alpha}\big[\hat \boldsymbol{\psi}_t^{m,\boldsymbol{\mu}^e}\big] \triangleq \alpha \, \hat \boldsymbol{\psi}_t^{\underline m,\boldsymbol{\mu}^e}+ (1-\alpha)\, \hat\boldsymbol{\psi}_t^{\overline m,\boldsymbol{\mu}^e}, \end{equation} where $\alpha\in\mathscr I$ can be viewed as a tuning parameter, $\underline m\triangleq \arg\!\inf_{m\in\mathscr M} ||\hat\boldsymbol{\psi}_t^{m,\boldsymbol{\mu}^e}||$, $\overline m\triangleq \arg\!\sup_{m\in\mathscr M} ||\hat\boldsymbol{\psi}_t^{m,\boldsymbol{\mu}^e}||$,\footnote{We assume $\mathscr M$ is such that $\inf_{m\in\mathscr M} ||\boldsymbol{\psi}_t^{m,\boldsymbol{\mu}^e}||$ and $\sup_{m\in\mathscr M} ||\boldsymbol{\psi}_t^{m,\boldsymbol{\mu}^e}||$ are both finite, and $\underline m$ and $\overline m$ are both in $\mathscr M$.} and \begin{equation} \label{eq:SAV3} \hat \boldsymbol{\psi}_t^{m,\boldsymbol{\mu}^e}=\arg\!\min_{\boldsymbol{\psi}_t\in\boldsymbol{\Psi}_t} \bigg\{\big(\varphi^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi}_t)\big)'\, \boldsymbol{\Omega}\, \varphi^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi}_t)+ \theta_t \mathcal P (\boldsymbol{\psi}_t)\bigg\}, \end{equation} where $\varphi^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi}_t)$ is defined in \eqref{eq:POMDPest4}. Consequently, we plug $\hat \boldsymbol{\psi}_t^{\boldsymbol{\mu}^e}$ obtained in \eqref{eq:SAV2} into $V_{T-t+1}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi}_t; \boldsymbol{\psi}_t)$, which yields an estimate for the APOMDP value function $V_{T-t+1}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi}_t)$, and move to the next period (backwards) as before. This yields an estimated value function $\hat V_{T}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi})$. Denoting the gain under any policy $\boldsymbol{\mu}^e$ by $\Gamma_T (\boldsymbol{\mu}^e)\triangleq \int V_{T}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$, we use $\hat \Gamma_T (\boldsymbol{\mu}^e)\triangleq \int \hat V_{T}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$ as an estimator for $\Gamma_T (\boldsymbol{\mu}^e)$. Finally, optimization over $\boldsymbol{\mu}^e\in\Upsilon$ will provide the estimated optimal policy of the APOMDP under the $\texttt{SAV-Learning}$ approach: $\hat\boldsymbol{\mu}{^{e*}}\triangleq \arg\!\max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma_T (\boldsymbol{\mu}^e)= \arg\!\max_{\boldsymbol{\mu}^e\in\Upsilon} \int \hat V_{T}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$. The estimated optimal gain under this approach is $\hat \Gamma_T (\hat\boldsymbol{\mu}^{e*})=\max_{\boldsymbol{\mu}^e\in\Upsilon} \int \hat V_{T}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$, which provides an estimate for $\Gamma_T (\boldsymbol{\mu}^{e*})\triangleq \max_{\boldsymbol{\mu}^e\in\Upsilon} \int V_{T}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$. Similar to before, this procedure can also be used for the infinite-horizon case by noting that since both $V_{T-t}(\cdot)$ and $V_{T-t+1}(\cdot)$ become $V_\infty(\cdot)$, the calculations simplify. The $\texttt{SAV-Learning}$ approach for the infinite-horizon case is presented in Algorithm \ref{alg:SAV}. Besides their benefit in analyzing the long-run impact of different treatment regimes, both Algorithms \ref{alg:DAV} and \ref{alg:SAV} can also be used as {\em approximations} for learning policies that work well over a finite but long horizon.
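A minimal Python sketch of the safe combination step in \eqref{eq:SAV2} follows (the dictionary \texttt{psi\_by\_model} is a hypothetical container for the per-model estimates $\hat\boldsymbol{\psi}_t^{m,\boldsymbol{\mu}^e}$, stored as NumPy arrays):
\begin{verbatim}
import numpy as np

def sav_combine(psi_by_model, alpha):
    # psi_by_model: {m: estimated psi^{m, mu^e}_t} over the cloud M
    norms = {m: np.linalg.norm(p) for m, p in psi_by_model.items()}
    m_lo = min(norms, key=norms.get)   # underline m (smallest norm)
    m_hi = max(norms, key=norms.get)   # overline m (largest norm)
    return alpha * psi_by_model[m_lo] + (1 - alpha) * psi_by_model[m_hi]
\end{verbatim}
The returned vector is then plugged into the parametric form $\big(\mathbf{b}(\boldsymbol{\pi})\big)'\,\boldsymbol{\psi}$ to evaluate the estimated APOMDP value function.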
\begin{algorithm}[t]\label{alg:SAV} \scriptsize \DontPrintSemicolon \For{each observed trajectory and model $m\in\mathscr M$} { Initialize $\boldsymbol{\pi}^m_0$ using a random draw from $F(\boldsymbol{\pi})$; \; set $t=1$;\; \While{$t+1\in\mathscr T$} { $\boldsymbol{\pi}^m_{t+1}\leftarrow T(\boldsymbol{\pi}^m_t, a_t, o_t, m)$; } } \For{any given $\boldsymbol{\mu}^e\in\Upsilon$ and $m\in\mathscr M$} { $ \varphi_n^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})\leftarrow \mathbb E^{\mathbb P} \Bigg[\sum_{t\in\mathscr T}\bigg[ \frac{\mu^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu^b(A_t|\boldsymbol{\Pi}^m_t)}\Big[G_t+ \beta\, V_\infty^{m,\boldsymbol{\mu}^e} (T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))- V_\infty^{m,\boldsymbol{\mu}^e} (\boldsymbol{\Pi}^m_t)\Big] \mathbf{b} (\boldsymbol{\Pi}^m_t)\bigg]\Bigg]$;\; $\hat \boldsymbol{\psi}_n^{m,\boldsymbol{\mu}^e}\leftarrow\arg\!\min_{\boldsymbol{\psi}\in\boldsymbol{\Psi}} \bigg\{\big(\varphi_n^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})\big)'\, \boldsymbol{\Omega}\, \varphi_n^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})+ \theta_n \mathcal P (\boldsymbol{\psi})\bigg\}$;\; } \For{any given $\boldsymbol{\mu}^e\in\Upsilon$} { $\underline m \leftarrow \arg\!\inf_{m\in\mathscr M} ||\hat\boldsymbol{\psi}_n^{m,\boldsymbol{\mu}^e}||$; \; $\overline m \leftarrow \arg\!\sup_{m\in\mathscr M} ||\hat\boldsymbol{\psi}_n^{m,\boldsymbol{\mu}^e}||$; \; $\hat \boldsymbol{\psi}^{\boldsymbol{\mu}^e}_n \leftarrow \alpha \, \hat \boldsymbol{\psi}_n^{\underline m,\boldsymbol{\mu}^e}+ (1-\alpha)\, \hat\boldsymbol{\psi}_n^{\overline m,\boldsymbol{\mu}^e}$; \; $\hat V^{\boldsymbol{\mu}^e}_{\infty}(\boldsymbol{\pi}) \leftarrow \big(\mathbf{b} (\boldsymbol{\pi})\big)'\, \hat \boldsymbol{\psi}_n^{\boldsymbol{\mu}^e}$;\; $\hat \Gamma_{\infty} (\boldsymbol{\mu}^e) \leftarrow \int \hat V_{\infty}^{\boldsymbol{\mu}^e} (\boldsymbol{\pi})\, dF(\boldsymbol{\pi})$;\; } $\hat\boldsymbol{\mu}{^{e*}} \leftarrow \arg\!\max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma_{\infty} (\boldsymbol{\mu}^e)$;\; $\hat \Gamma_{\infty} (\hat\boldsymbol{\mu}^{e*} )\leftarrow \max_{\boldsymbol{\mu}^e\in\Upsilon} \hat \Gamma_{\infty} (\boldsymbol{\mu}^e)$; \caption{$\texttt{SAV-Learning}$} \end{algorithm} \section{Performance Analyses: Theoretical Results}\label{sec:assymptotics} We now establish some theoretical results for the performance of our proposed approaches. Specifically, we demonstrate the asymptotic properties of the estimators under our main proposed algorithm, $\texttt{DAV-Learning}$ (Algorithm \ref{alg:DAV}). With some minor modifications, one can then also establish similar results for the estimators under the second proposed approach, $\texttt{SAV-Learning}$ (Algorithm~\ref{alg:SAV}).\footnote{For general results related to the asymptotic properties of V-Learning algorithms when all variables are observable and there is no model ambiguity, we refer interested readers to \cite{Luckett2020}.} The main results of this section are as follows. Under some conditions discussed below, we first establish weak consistency and asymptotic normality of the estimators under any policy $\boldsymbol{\mu}^e\in\Upsilon$ (Theorem~\ref{theo:asymptotics:fixed}). We then move to the estimators related to the optimal policy, and establish weak consistency and asymptotic normality of both the estimated optimal policy and its estimated value (Theorem~\ref{theo:asymptotics:opt}). To establish our results, we make use of arguments in {\em empirical processes} (specifically for stationary processes as opposed to i.i.d.
ones; see, e.g., \cite{Dedecker2002, Kosorok2008}), and think of each realization of the underlying stochastic process as a function in $\ell^\infty(\Upsilon)$ (i.e., the set of real-valued bounded functions indexed by $\boldsymbol{\mu}^e\in\Upsilon$). We assume $\boldsymbol{\Omega}$ in \eqref{eq:SAV5} is an arbitrary positive-definite matrix, $\mathcal P (\cdot)$ is the squared norm penalty function, and $\theta_n$ is a tuning parameter satisfying $\theta_n=o_p(n^{-1/2})$. We also assume that $\mathbb E^m\big[||\mathbf{b}(\boldsymbol{\Pi}_t)||^2\big]$ and $\mathbb E^m \big[G_t^2\big]$ are finite for all $m\in\mathscr M$ and $t\in\mathscr T$. Some other technical conditions are needed, mainly because of two broad sets of challenges in our setting which make establishing asymptotic results more involved: (1) the underlying process is not i.i.d. over time, and (2) there is model ambiguity ($|\mathscr M|\neq 1$). Specifically, we need the following ``regularity" conditions on the parameter space, trajectories space, policy space, and models space: \begin{itemize}[leftmargin=1cm,align=left] \item[(C1)] For every $\boldsymbol{\mu}^e\in\Upsilon$ and $m\in\mathscr M$ there exists a unique solution to $\varphi^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})=0$ denoted by $\boldsymbol{\psi}_\diamond^{m,\boldsymbol{\mu}^e}\in\boldsymbol{\Psi}\subseteq \mathbb R^d$, where $\sup_{\boldsymbol{\mu}^e\in\Upsilon}||\boldsymbol{\psi}_\diamond^{m,\boldsymbol{\mu}^e}||<\infty$, $\boldsymbol{\psi}_\diamond^{m,\boldsymbol{\mu}^e}$ is an interior point of $\boldsymbol{\Psi}$, and $\boldsymbol{\Psi}$ is a compact subset of $\mathbb R^d$. \item[(C2)] There exists a $2<\rho<\infty$ such that for all $m\in\mathscr M$: \begin{itemize}[leftmargin=1.5cm] \item[(C2a)] The class of policies ($\Upsilon$) is either finite, or its bracketing integral satisfies $J_{[]}(\infty,\Upsilon, L_\rho(P^m))<\infty$, where $P^m$ is the marginal stationary distribution of the sequence $\{(\boldsymbol{\Pi}^m_t, A_t)\}_{t\geq 1}$.\footnote{For the definition of the bracketing integral, $J_{[]}(\infty,\Upsilon, L_\rho(P^m))$, see, e.g., \cite{Kosorok2008}.} \item[(C2b)] The sequence $\{(\boldsymbol{\Pi}^m_t, A_t)\}_{t\geq 1}$ is an absolutely regular stationary process with its $\beta$-mixing coefficients $\zeta^m (t)$ satisfying $\sum_{t=1}^{\infty} t^{2/(\rho-2)} \zeta^m (t)<\infty$.\footnote{For the definition of an absolutely regular stationary process and its $\beta$-mixing coefficients, see, e.g., \cite{Dedecker2002, Kosorok2008}, and the references therein.} \end{itemize} \noindent\item[(C3)] There exists a constant $c_1>0$ such that for all $m\in\mathscr M$, $t\in\mathscr T$, $\boldsymbol{\mu}^e\in\Upsilon$, and $\mathbf{c}\in\mathbb R^d$: \begin{equation} \mathbf{c}'\,\mathbb E^{m} \Bigg[\frac{\mu^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu^b(A_t|\boldsymbol{\Pi}^m_t)}\, \mathbf{b}(\boldsymbol{\Pi}_t^m)\, \Big(\mathbf{b}(\boldsymbol{\Pi}_t^m)-\beta\, \mathbf{b}(T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))\Big)'\Bigg] \mathbf{c}\geq c_1 ||\mathbf{c}||^2. \end{equation} \noindent\item[(C4)] $\boldsymbol{\mu}^{e*}$ is a unique and well-separated maximizer of $\Gamma_\infty(\boldsymbol{\mu}^{e})$ and $\boldsymbol{\mu}^{e*}$ is in the interior of $\Upsilon$.
\noindent\item[(C5)] For every $\boldsymbol{\mu}^e\in\Upsilon$: $|\inf_{m\in\mathscr M} \Gamma^m_{\infty} (\boldsymbol{\mu}^e)|<\infty$, $|\sup_{m\in\mathscr M} \Gamma^m_{\infty} (\boldsymbol{\mu}^e)|<\infty$, and $\mathscr M$ contains both $\arg\!\inf_{m\in\mathscr M} \Gamma^m_{\infty} (\boldsymbol{\mu}^e)$ and $\arg\!\sup_{m\in\mathscr M} \Gamma^m_{\infty} (\boldsymbol{\mu}^e).$ \end{itemize} Assumptions related to these conditions are relatively common in the Z-estimation and M-estimation theories \citep[see, e.g.,][]{Kosorok2008}. Some of these conditions are also assumed to hold in the {\em Generalized Method of Moments} (GMM) \citep[for asymptotic properties of GMM, see, e.g.,][]{Hansen1982}. These conditions hold both in our case study of NODAT patients (Section \ref{sec:case}) and in our simulation experiments (Section \ref{sec:synthetic}). (C1) is a regularity condition on the parameter space, and ensures that the solutions obtained by solving $\varphi^{m,\boldsymbol{\mu}^e} (\boldsymbol{\psi})=0$ are ``well-behaved." (C2a) is a regularity condition on the policy space, and requires that the complexity of the set of policies under consideration (measured by an appropriate {\em entropy-based} metric) be bounded. This condition clearly allows working with any finite set of policies, but also holds for many infinite sets of policies \citep[see, e.g., the parametric class of policies in][]{Luckett2020}. (C2b) is a regularity condition on the space of trajectories and allows viewing their formation as a suitable stationary process. The $\beta$-mixing coefficients $\zeta^m (t)$ quantify the dependency between values of the process that are $t$ steps apart, and are zero when there is no such dependency. (C3) ensures that the matrix $\mathbf{C}^m(\boldsymbol{\mu}^e)$ defined in Theorem \ref{theo:asymptotics:fixed} below is positive-definite, and hence, invertible. One can empirically check whether (C3) holds by creating certain matrices using data and testing whether they are positive-definite. (C4) is needed to establish that the sequence of estimated optimal policies converges to the true optimal policy, which is a stronger result than the gains of these policies converging to each other. (C5) is a regularity condition on the space of models, $\mathscr M$, which holds in most real-world applications, because any set of models can be represented/approximated with a finite set (with any required level of accuracy). We first establish the asymptotic behavior of our estimators under any given policy $\boldsymbol{\mu}^e\in\Upsilon$ by only requiring (C1)-(C3). The proof is based on some additional results provided in Appendix B (see Lemmas \ref{ec: lemma:asymptotics:Donsker} and \ref{ec: lemma:asymptotics:11.24}), which establish Donsker properties and asymptotic normality in $\ell^\infty (\Upsilon)$ for the underlying absolutely regular stationary process in our setting. \begin{theorem}[Asymptotic Behavior: Fixed Policy and its Value]\label{theo:asymptotics:fixed} Suppose (C1)-(C3) hold and the behavior policy satisfies positivity. Then under $\texttt{DAV-Learning}$ (Algorithm \ref{alg:DAV}), for any $\boldsymbol{\mu}^e\in\Upsilon$ and $m\in\mathscr M$, we have: \begin{itemize}[leftmargin=1cm,align=left] \item[(i)] $\hat \boldsymbol{\psi}_n^{m,\boldsymbol{\mu}^e}\overset{p}{\to} \boldsymbol{\psi}_\diamond^{m,\boldsymbol{\mu}^e}$.
\item[(ii)] $\sqrt{n} \,\big[\hat \boldsymbol{\psi}_n^{m,\boldsymbol{\mu}^e}- \boldsymbol{\psi}_\diamond^{m,\boldsymbol{\mu}^e}\big]\overset{d}{\to} \mathbb{G}(\boldsymbol{\mu})$ in $\ell^\infty (\Upsilon)$, where $\mathbb{G}(\boldsymbol{\mu})$ is a zero-mean and tight Gaussian process indexed by $\boldsymbol{\mu}\in\Upsilon$ with the covariance function given by \begin{equation} \mathbb E\Big[\mathbb{G}(\boldsymbol{\mu})\mathbb{G}(\tilde\boldsymbol{\mu})\Big]= \Big(\mathbf{C}^m(\boldsymbol{\mu}^e)\Big)^{-1} \, \tilde\mathbf{C}^m(\boldsymbol{\mu}^e, \tilde\boldsymbol{\mu}^e)\, \Big(\big(\mathbf{C}^m(\boldsymbol{\mu}^e)\big)^{-1}\Big)' \ \ \ \ \ \forall\boldsymbol{\mu},\tilde{\boldsymbol{\mu}}\in\Upsilon, \end{equation} where \begin{equation} \mathbf{C}^m(\boldsymbol{\mu}^e)\triangleq \mathbb E^{m} \Bigg[\frac{\mu^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu^b(A_t|\boldsymbol{\Pi}^m_t)}\, \mathbf{b}(\boldsymbol{\Pi}_t^m)\, \Big(\mathbf{b}(\boldsymbol{\Pi}_t^m)-\beta\, \mathbf{b}(T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))\Big)'\Bigg], \end{equation} \begin{equation} \tilde\mathbf{C}^m(\boldsymbol{\mu}^e, \tilde\boldsymbol{\mu}^e)\triangleq \mathbb E^{m} \Bigg[\frac{\mu^e(A_t|\boldsymbol{\Pi}^m_t)\tilde\mu^e(A_t|\boldsymbol{\Pi}^m_t)}{\mu^b(A_t|\boldsymbol{\Pi}^m_t)\,\mu^b(A_t|\boldsymbol{\Pi}^m_t)}\, \boldsymbol{\vartheta}(\boldsymbol{\Pi}^m_t,\boldsymbol{\psi}_\diamond^{\boldsymbol{\mu}^e})\, \boldsymbol{\vartheta}(\boldsymbol{\Pi}^m_t,\boldsymbol{\psi}_\diamond^{\tilde\boldsymbol{\mu}^e})\,\mathbf{b}(\boldsymbol{\Pi}_t^m)\, \Big(\mathbf{b}(\boldsymbol{\Pi}_t^m)\Big)' \Bigg], \end{equation} and \begin{equation} \boldsymbol{\vartheta}(\boldsymbol{\Pi}^m_t,\boldsymbol{\psi}_\diamond^{\boldsymbol{\mu}^e})\triangleq G_t+ \Big[\beta\, \mathbf{b}(T(\boldsymbol{\Pi}^m_t, A_t, O_t, m))-\mathbf{b}(\boldsymbol{\Pi}_t^m)\Big]\boldsymbol{\psi}_\diamond^{\boldsymbol{\mu}^e}. \end{equation} \item[(iii)] $\hat \Gamma^m_\infty (\boldsymbol{\mu}^e) \overset{p}{\to} \Gamma^m_\infty (\boldsymbol{\mu}^e)$. \item[(iv)] $\hat \Gamma_\infty (\boldsymbol{\mu}^e) \overset{p}{\to} \Gamma_\infty (\boldsymbol{\mu}^e)$, provided (C5) also holds. \end{itemize} \end{theorem} We next establish the asymptotic properties of the optimal policy and the gain under it. The proof of the following theorem is based on an additional result provided in Appendix B (see Lemma \ref{ec: lemma:asymptotics:M-Estimation}), which in turn relies on results from the $M$-estimation theory. \begin{theorem}[Asymptotic Behavior: Optimal Policy and its Value]\label{theo:asymptotics:opt} Suppose (C1)-(C5) hold and the behavior policy satisfies positivity. Then, considering a metric space $(\Upsilon, d_{\Upsilon})$, under $\texttt{DAV-Learning}$ (Algorithm \ref{alg:DAV}) we have: \begin{itemize}[leftmargin=1cm,align=left] \item[(i)] $d_{\Upsilon}(\hat\boldsymbol{\mu}^{e*,m},\boldsymbol{\mu}^{e*,m})\overset{p}{\to} 0$ for all $m\in\mathscr M$. \item[(ii)] $d_{\Upsilon}(\hat\boldsymbol{\mu}^{e*},\boldsymbol{\mu}^{e*})\overset{p}{\to} 0$. \item[(iii)] $\hat \Gamma^m_{\infty}(\hat \boldsymbol{\mu}^{e*,m}) \overset{p}{\to} \Gamma^m_{\infty}(\boldsymbol{\mu}^{e*,m})$. \item[(iv)] $\hat \Gamma_{\infty}(\hat \boldsymbol{\mu}^{e*}) \overset{p}{\to} \Gamma_{\infty}(\boldsymbol{\mu}^{e*})$. \end{itemize} \end{theorem} \section{Performance Analyses: Numerical Results} To gain further insights into the performance of our proposed algorithms, we now perform two sets of numerical experiments.
The first is a case study of a medical decision-making problem faced by physicians at our partner hospital, and involves using a clinical data set of patients with a kidney transplant operation. In the second set, we make use of synthetic data in which we simulate patient trajectories under different models while controlling the true data generating model. \subsection{Case Study: New Onset Diabetes After Transplantation (NODAT)}\label{sec:case} In this section, we apply our proposed algorithms ($\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$) on a clinical data set that contains over 63,000 data points pertaining to 407 patients who had a kidney transplant operation during a seven-year period at our partner hospital. Details about the data set can be found in the authors' previous publications \citep{Boloori2015, Boloori2020, Munshi2020, Munshi2021}. Patients who undergo transplantation often face a significant risk of organ rejection. To mitigate this risk, physicians typically use an intensive dose of an immunosuppressive drug (e.g., tacrolimus). Immunosuppressive drugs, however, have a well-established effect known as the diabetogenic effect, and thus, can elevate the risk of {\em New Onset Diabetes After Transplantation (NODAT)}. NODAT refers to the incidence of diabetes in a patient with no history of diabetes prior to transplantation \citep[see, e.g., ][ and the references therein]{Chakkera2009, Boloori2015, Boloori2020}. To control the risk of NODAT, physicians have to decide whether or not to put the patient on insulin. Table \ref{Table:Observations} describes the observed patient covariates (observations) and their levels. As the table shows, some of these observations are time-varying. Furthermore, most of them are dichotomized into high and low levels. However, the medical tests used to measure the blood glucose (FPG and HbA1c) and the lowest concentration of tacrolimus in the patient's body---a quantity known as {\em trough level} or $C_0$---have three levels. These levels are defined based on both the medical literature and the practice at our partner hospital. Tables \ref{Table:States} and \ref{Table:Actions} show the patients' latent states and physicians' actions/prescriptions during each visit post-transplant, respectively. Latent states described in Table \ref{Table:States} are summary variables that describe the main condition of the patient in terms of decision-making related to use of an immunosuppressive drug (e.g., tacrolimus) and insulin therapy (i.e., the actions in Table~\ref{Table:Actions}). These patient summary variables are, however, hidden to physicians, since physicians can only rely on medical tests, which have a wide range of false-positive and false-negative errors. In particular, blood glucose levels are measured by two medical tests {\em Fasting Plasma Glucose} (FPG) and {\em Hemoglobin A1c} (HbA1c), which are subject to false-positive and false-negative errors. Similarly, the concentration of immunosuppressive drugs is measured through tests such as {\em Abbott Architect} and {\em Magnetic Immunoassay}, which are error-prone. \noindent\textbf{Data Pre-processing Steps.} Our data set includes information related to patients' follow-up visits during months 1, 4, and 12 post-transplantation. However, for the goals of this study, we make use of the same data preprocessing steps as those in \citep{Boloori2020}.
In particular, we use imputation to replace missing values \citep[see also][]{Munshi2021} and also make use of cubic spline interpolation to create a test bed with the clinical history of patients for months 1 to 12 after transplant. That is, for the purpose of this study, we consider monthly visits that occur for a year post-transplant. Thus, we let $T\triangleq 12$ and $\mathscr T\triangleq\{1,2,\cdots, 12\}$. The imputed data includes the 13 variables listed in Table \ref{Table:Observations} for each of the 407 patients and every month during a year of follow-up post-transplant (a total of $13\times 407\times 12=63,492$ data points).

{ \renewcommand{\arraystretch}{1.2}
\begin{table}[t]
\caption{\baselineskip=10pt Observed Covariates (Observations)}\label{Table:Observations}
\begin{center}
\scriptsize
\resizebox{\textwidth}{!}{\begin{tabular}{clccccc}
\hline
{\bf Var. No.} & {\bf Risk Factor (Abbr.)} & {\bf Unit} & {\bf Low Level} & {\bf Mid Level} & {\bf High Level} & {\bf Time-Varying} \\
\hline
1 & Glucose test$^\dag$ (FPG, HbA1c) & mg/dL, \% & Healthy & Pre-Diabetic & Diabetic & Yes \\
2 & Trough level test$^\ddag$ ($C_0$) & mg/dL & $[4,8)$ & $[8,10)$ & $[10,14]$ & Yes \\
3 & Age & Years & $<$50 & --- & $\geq$50 & No \\
4 & Gender & --- & Female & --- & Male & No \\
5 & Race & --- & White & --- & non-White & No \\
6 & Diabetes history (Diab Hist) & --- & No & --- & Yes & No \\
7 & Body mass index (BMI) & kg/m$^2$ & $<$30 (non-obese) & --- & $\geq$30 (obese) & Yes \\
8 & Blood pressure (BP) & --- & Normal$^\sharp$ & --- & Hypertension & Yes \\
9 & Total cholesterol (Chol) & mg/dL & $<$200 & --- & $\geq$200 & Yes \\
10 & High-density lipoprotein (HDL) & mg/dL & $\geq$40 & --- & $<$40 & Yes\\
11 & Low-density lipoprotein (LDL) & mg/dL & $<$130 & --- & $\geq$130 & Yes \\
12 & Triglyceride (TG) & mg/dL & $<$150 & --- & $\geq$150 & Yes \\
13 & Uric acid (UA) & mg/dL & $<$7.3 & --- & $\geq$7.3 & Yes \\
\hline
\multicolumn{6}{l}{$^\dag$\normalfont A patient with FPG$\geq$126 ($100\leq$FPG$<126$) mg/dL or HbA1c$\geq$6.5\% ($5.7\leq$HbA1c$<$6.5\%) is labeled as diabetic (pre-diabetic),}\\
\multicolumn{6}{l}{and a patient with FPG$<$100 mg/dL or HbA1c$<$5.7\% is labeled as healthy \citep[see, e.g.,][]{ada2012}.}\\
\multicolumn{6}{l}{$^\ddag$\normalfont $C_0 \in [4,8)$, $[8,10)$, $[10,14]$ mg/dL are labeled as ``low,'' ``medium,'' and ``high,'' respectively \citep[see, e.g.,][]{Boloori2020}.}\\
\multicolumn{6}{l}{$^\sharp$\normalfont Normal Blood Pressure (BP) is defined as systolic (diastolic) BP less than 120 (80) mmHg \citep[see, e.g.,][]{Whelton2018}.}\\
\multicolumn{6}{l}{\normalfont Note: All variables with three levels are coded as 1, 2, 3 (low, mid, high). All variables with two levels are coded as 1, 2 (low, high).}\\
\end{tabular}}
\end{center}
\vspace{-10pt}
\end{table} }

\noindent\textbf{Behavior Policy.} We estimate the behavior policy based on the actions we observe in our data. These actions are mainly based on the clinical protocols followed at our partner hospital. A detailed summary of the main immunosuppression protocol can be found in \citep{Munshi2021}, which includes induction therapy with either rabbit anti-thymocyte immunoglobulin or basiliximab, as well as a tapering course of glucocorticoids.
However, here our focus is on the use of tacrolimus, and we observe that patients are often put on a high (i.e., aggressive) dose of tacrolimus during the first months post-transplant, and in later months, depending on the observations made about the patient, they might be transferred to a low (i.e., non-aggressive) dose. This is consistent with the fact that patients in most medical practices are consistently kept on high levels of tacrolimus in early stages post-transplant \citep[see, e.g., ][]{Ghisdal2012, Boloori2020}. Furthermore, with respect to the use of insulin, patients are primarily put on insulin when their HbA1c and FPG tests indicate that they are pre-diabetic or diabetic (see definitions of pre-diabetic and diabetic in Table \ref{Table:Observations}). Using the observed actions in our data set as well as the estimated belief vectors $\{\boldsymbol{\pi}_t^m\}_{t\in\mathscr T}$ for each patient (for further details, see the ``Other Details'' paragraph below), we next estimate $\mu^b(A_t|\boldsymbol{\Pi}^m_t)$ by training a multi-class multiple logistic regression classifier. This classifier is endowed with an $\ell_2$-norm penalty, which is tuned to ensure that each action is selected with an estimated probability of $0.05$ or higher across all observations \citep[see, e.g., ][]{Murphy2016}.

{ \renewcommand{\arraystretch}{1.2}
\begin{table}[t]
\caption{\baselineskip=10pt Latent Health States}\label{Table:States}
\begin{center}
\scriptsize
\begin{tabular}{ccc}
\hline
\multirow{2}{*}{{\bf State}} & \multicolumn{1}{c}{\bf Transplant Condition} & \multirow{2}{*}{{\bf Diabetes Condition}} \\
& \multicolumn{1}{c}{\bf (Tacrolimus $C_0$)} & \\
\hline
$1$ & Low & \multirow{3}{*}{Diabetes (type II)} \\
$2$ & Medium & \\
$3$ & High & \\
\hline
$4$ & Low & \multirow{3}{*}{Pre-diabetes} \\
$5$ & Medium & \\
$6$ & High & \\
\hline
$7$ & Low & \multirow{3}{*}{Healthy} \\
$8$ & Medium & \\
$9$ & High & \\
\hline
\end{tabular}
\end{center}
\end{table} }

{ \renewcommand{\arraystretch}{1.2}
\begin{table}[b]
\caption{\baselineskip=10pt Actions}\label{Table:Actions}
\begin{center}
\scriptsize
\begin{tabular}{ccc}
\hline
\multirow{2}{*}{{\bf Action}} & \multicolumn{1}{c}{\bf Prescription} & \multicolumn{1}{c}{\bf Prescription} \\
& \multicolumn{1}{c}{\bf (Tacrolimus dose)} & \multicolumn{1}{c}{\bf (Insulin use)} \\
\hline
$1$ & Low (Non-Aggressive) & \multirow{2}{*}{No} \\
$2$ & High (Aggressive) & \\
\hline
$3$ & Low (Non-Aggressive) & \multirow{2}{*}{Yes} \\
$4$ & High (Aggressive) & \\
\hline
& & \\ \\ & & \\ \\
\end{tabular}
\end{center}
\end{table} }

\noindent\textbf{Immediate Gain Variable.} To calculate the immediate gains, we use a similar approach to our previous work \citep[see, e.g.,][]{Boloori2020}. In particular, we make use of {\em Quality of Life (QoL) scores}, which take values in $[0,1]$. This allows us to differentiate between the Quality of Life of being in a diabetic, pre-diabetic, or healthy state and also of having different concentrations of the immunosuppressive drug in the body, which are in turn associated with differing risks of organ rejection. Table \ref{Table:Gain} shows the yearly-based QoL scores associated with each state, which are divided by 12 to represent the fact that patients' visits are monthly.\footnote{The QoL values shown in Table \ref{Table:Gain} are average, approximate values based on various reports in the medical literature \citep[see, e.g., the extended appendix of][and the references therein]{Boloori2020}.
In addition to immediate gains, our framework allows including lump-sum gains (i.e., gains at the end of the horizon to reflect the Quality of Life associated with the remaining years). For the purposes of this study, however, we simply set $V_0(\boldsymbol{\pi})\triangleq 0$.}

{ \renewcommand{\arraystretch}{1.2}
\begin{table}[t]
\caption{\baselineskip=10pt Immediate Gain Values}\label{Table:Gain}
\begin{center}
\scriptsize
\begin{tabular}{cccc}
\hline
\multirow{2}{*}{{\bf State}} & \multicolumn{1}{c}{\bf Transplant Condition} & \multirow{2}{*}{{\bf Diabetes Condition}} & \multirow{2}{*}{{\bf Immediate Gain Value$^\dag$}} \\
& \multicolumn{1}{c}{\bf (Tacrolimus $C_0$)} & \\
\hline
$1$ & Low & \multirow{3}{*}{Diabetes (type II)} & $0.68/12$\\
$2$ & Medium & & $0.72/12$\\
$3$ & High & & $0.76/12$\\
\hline
$4$ & Low & \multirow{3}{*}{Pre-diabetes} & $0.82/12$ \\
$5$ & Medium & & $0.87/12$\\
$6$ & High & & $0.89/12$\\
\hline
$7$ & Low & \multirow{3}{*}{Healthy} & $0.90/12$\\
$8$ & Medium & & $0.92/12$\\
$9$ & High & & $0.95/12$\\
\hline
\multicolumn{4}{l}{$^\dag$\normalfont Immediate gains are average values approximated based on QoL scores reported in other studies and include combined}\\
\multicolumn{4}{l}{disutility of (a) being in a diabetic state, and (b) having high risk of organ rejection. Yearly-based values are divided by}\\
\multicolumn{4}{l}{12 to represent monthly measures.}\\
\end{tabular}
\end{center}
\end{table} }

\noindent\textbf{Other Details.} The belief state space in our setting, $\Delta_\mathscr S$, is an $8$-simplex, since there are 9 latent states (Table \ref{Table:States}). The vector of basis functions $\mathbf{b}(\boldsymbol{\pi})$ maps this $8$-simplex to $\mathbb R^{13}$, which allows us to include enough cut points (while making sure that the value function is piecewise linear and continuous). Thus, both the belief space and the parameter space in our setting are continuous and relatively high-dimensional. To perform our analyses, we use a discount factor of $\beta=0.95$. We also tune a penalty parameter $\theta_t=\theta$. To create the set of models $\mathscr M$, we make use of the algorithm in Table 3 of our earlier work \citep{Boloori2020}. Specifically, first the Baum–Welch algorithm is used to obtain point estimates for the state transition and observation probability matrices. Next, an entropy ball is constructed (using the Kullback–Leibler divergence criterion) around these point estimates. For tractability, we set $|\mathscr M|=4$ in this case study. However, our framework is general and can be used for any number of estimated models. In Section \ref{sec:synthetic}, for example, we change our assumption on the number of models and consider $|\mathscr M|=10$ different models. Our framework is also not restricted to any specific way of estimating the underlying models. For example, in Section \ref{sec:synthetic}, we make use of a different way of constructing the set $\mathscr M$. Finally, we consider the distribution $F(\boldsymbol{\pi})$ to be uniform. That is, we use a uniform prior belief at time zero, and implement the Bayesian belief updating operator (see Eq. \ref{beliefupd}) to create a sequence of belief vectors $\{\boldsymbol{\pi}_t^m\}_{t\in\mathscr T}$ for each patient under each model $m\in\mathscr M$ (see, e.g., steps 1-5 in Algorithms \ref{alg:DAV} and \ref{alg:SAV}).
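To make these steps concrete, here is a minimal Python sketch of a Bayesian belief update of the kind carried out in steps 1-5 of Algorithms \ref{alg:DAV} and \ref{alg:SAV}, followed by a behavior-policy fit in the spirit of the ``Behavior Policy'' paragraph above. It is our own illustration: the array shapes, variable names, and the tuning constant \texttt{C} are assumptions, not objects defined in the paper.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def belief_update(pi, a, o, P, Z):
    # One Bayesian update pi -> T(pi, a, o, m) for a model m with
    # transition tensor P[a, s, s'] = Pr(s'|s, a) and observation
    # tensor Z[a, s', o] = Pr(o|s', a); pi is a length-S belief vector.
    unnorm = Z[a, :, o] * (pi @ P[a])
    return unnorm / unnorm.sum()

# Behavior-policy estimation on (belief, action) pairs: a multinomial
# logistic regression with an l2 penalty. C = 1.0 is a made-up tuning
# value; the paper instead tunes the penalty so that every action has
# an estimated probability of at least 0.05.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
# beliefs: (n, 9) array of belief vectors; actions: (n,) array in {1,...,4}
# clf.fit(beliefs, actions)
# mu_b = clf.predict_proba(beliefs)   # estimated mu^b(a | pi)
\end{verbatim}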
\noindent\textbf{Results.} The performance of the three treatment regimes ($\texttt{DAV-Learning}$, $\texttt{SAV-Learning}$, and observed) is compared in Table \ref{Table:Results1}. Averages and standard deviations in this table are calculated using Monte Carlo replications.\footnote{The number of these replications is chosen so that the confidence intervals are tight enough, while maintaining reasonable computational times.} As can be seen from the results in Table \ref{Table:Results1}, $\texttt{DAV-Learning}$ outperforms $\texttt{SAV-Learning}$ in terms of the mean performance for most values of the pessimism level, $\alpha$. As both Table \ref{Table:Results1} and Figure \ref{fig:improvementcase} show, however, both the $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ approaches significantly outperform the observed regime. In particular, as Figure \ref{fig:improvementcase} shows, the improvements over the observed regime when using $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ are in the ranges $(10\%, 42\%)$ and $(10\%, 32\%)$, respectively, depending on the value of $\alpha$. Of note, these ranges also imply that the mean performance of the $\texttt{SAV-Learning}$ regime is much more robust to the value of $\alpha$ than that of $\texttt{DAV-Learning}$. This is due to the fact that $\texttt{SAV-Learning}$ uses a ``safe estimation'' of the underlying parameter of the value function (see, e.g., step 12 of Algorithm \ref{alg:SAV}). This allows $\texttt{SAV-Learning}$ to guard against ambiguity up-front (i.e., in parameter estimation), in contrast to $\texttt{DAV-Learning}$, which combines policy values at the end. Thus, a decision-maker who uses $\texttt{SAV-Learning}$ does not need to be that concerned about the value of $\alpha$ s/he uses (or to try to tune it). Finally, as can be seen from both Table \ref{Table:Results1} and Figure \ref{fig:improvementcase}, the performance of the $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ regimes degrades as the pessimism level $\alpha$ increases. This is fully expected, since as we move from a maximax view to a maximin one, $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ tend to put more weight on the worst-case scenario, and hence, perform more {\em conservatively}. More conservativeness, however, does not necessarily mean more {\em robustness} to model ambiguity. We further investigate this issue in Section \ref{sec:robustness}, and generate important insights into the values of $\alpha$ that can provide the highest level of robustness to model ambiguity.

{ \renewcommand{\arraystretch}{1.2}
\begin{table}[t]
\caption{\baselineskip=10pt Estimated Total Discounted Gain Under Observed and Proposed Regimes (Case Study with $\beta=0.95$)}\label{Table:Results1}
\begin{center}
\scriptsize
\begin{tabular}{cccc}
\hline
{\bf Pessimism Level ($\alpha$)} & {\bf Observed Regime$^\dag$} & {\bf $\texttt{DAV-Learning}$$^\dag$} & {\bf $\texttt{SAV-Learning}$$^\dag$} \\
\hline
0.00 & 1.472 (1.455, 1.489) & {\bf 2.085} (2.061, 2.108) & 1.949 (1.770, 2.128) \\
0.25 & 1.468 (1.456, 1.480) & {\bf 1.939} (1.920, 1.958) & 1.888 (1.566, 2.210) \\
0.50 & 1.464 (1.457, 1.471) & {\bf 1.794} (1.779, 1.808) & 1.786 (1.534, 2.039) \\
0.75 & 1.460 (1.458, 1.462) & 1.648 (1.638, 1.658) & {\bf 1.682} (1.658, 1.706) \\
1.00 & 1.455 (1.452, 1.458) & {\bf 1.609} (1.560, 1.657) & 1.606 (1.585, 1.627) \\
\hline
\multicolumn{4}{l}{$^\dag$\normalfont Values in parentheses represent $95\%$ confidence intervals.
Values in bold font represent the best performance.}\\
\multicolumn{4}{l}{\normalfont For all values, only the first three decimal places are shown.}\\
\end{tabular}
\end{center}
\vspace{-10pt}
\end{table} }

\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{ImprovementCaseStudy.pdf}\vspace{2mm}
\caption{\scriptsize Percentage improvement over the observed regime (case study with $\beta=0.95$). Gray areas represent error bands with the curve at the center of each error band representing the mean value.}\label{fig:improvementcase}
\end{center}\vspace{-4mm}
\end{figure}

\subsection{Synthetic Data Analyses}\label{sec:synthetic}
We now use similar assumptions to those described in the case study, but instead of using actual patient trajectories, we simulate random trajectories for 100 patients with 10 follow-up periods, and use $|\mathscr M|=10$ different models. These yield randomly generated belief data of the form $(\boldsymbol{\pi}^m_t)_{t\in\mathscr T}$ under each $m \in\mathscr M$. We keep the other assumptions (e.g., the action space, the number of hidden states, the parameter space, basis functions, etc.) the same as those in the previous section. We assume patient trajectories are such that for each $m\in\mathscr M$ the belief vector $(\boldsymbol{\pi}^m_t)_{t\in\mathscr T}$ is generated via a Dirichlet distribution with the vector of parameters $(p^m_i)_{i\in\{1,2,\cdots,9\}}$. All of these models are misspecified: for each model, we randomly draw each $p^m_i$ from a Uniform$(0,1)$ distribution, whereas the true model is such that all $p_i$ values are equal to $0.5$. Furthermore, we specify the behavior policy as follows. For actions $a=1,2,3$, we set
\begin{equation}\label{behv1}
\mu^b(A=a|\boldsymbol{\Pi}=\boldsymbol{\pi})=\frac{\exp (\boldsymbol{\pi}' \, \boldsymbol{\varrho}_a)}{1+\sum_{\tilde a=1}^3 \exp (\boldsymbol{\pi}' \, \boldsymbol{\varrho}_{\tilde a})},
\end{equation}
and for action $a=4$ we set
\begin{equation}\label{behv2}
\mu^b(A=a|\boldsymbol{\Pi}=\boldsymbol{\pi})=\frac{1}{1+\sum_{\tilde a=1}^3 \exp (\boldsymbol{\pi}' \, \boldsymbol{\varrho}_{\tilde a})},
\end{equation}
where $\boldsymbol{\varrho}_1$, $\boldsymbol{\varrho}_2$, and $\boldsymbol{\varrho}_3$ are 9-dimensional predefined vectors. To perform our analyses, we choose each $\boldsymbol{\varrho}_a$ ($a=1,2,3$) as a vector with all elements equal to $0.1$, except the $a$-th element, which is set to $-1$.

Table \ref{Table:Results2} and Figure \ref{fig:improvementsynthetic} present our results using the same immediate gain values as those in the case study (see Table \ref{Table:Gain}). Similar to the case study, we observe that both the $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ approaches outperform the observed regime. The percentage improvement of $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ over the observed regime lies in the ranges $(1\%, 37\%)$ and $(1\%, 8\%)$, respectively, depending on the value of $\alpha$. In addition, similar to our observation in the case study, $\texttt{DAV-Learning}$ outperforms $\texttt{SAV-Learning}$ for most values of the pessimism level, $\alpha$. Another similar observation is that the performance of $\texttt{SAV-Learning}$ is much more robust to the value of $\alpha$ compared to $\texttt{DAV-Learning}$. Hence, a decision-maker who uses $\texttt{SAV-Learning}$ has the advantage that s/he does not need to be that concerned with the value of $\alpha$ that s/he uses (or with tuning it).
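As an aside, the following Python sketch mirrors the synthetic data-generating process described at the start of this subsection (our own illustration; the variable names and seed are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
S, A, T_periods, M = 9, 4, 10, 10

# Dirichlet parameters: one misspecified vector per model, drawn
# Uniform(0,1); the true model uses p_i = 0.5 for every i.
p = rng.uniform(size=(M, S))
p_true = np.full(S, 0.5)

# rho_a (a = 1, 2, 3): all entries 0.1 except the a-th entry, set to -1.
rho = np.full((3, S), 0.1)
for a in range(3):
    rho[a, a] = -1.0

def behavior_policy(pi):
    # Eqs. (behv1)-(behv2): actions 1-3 receive exp(pi'rho_a)/denominator,
    # and action 4 receives 1/denominator.
    scores = np.exp(rho @ pi)
    denom = 1.0 + scores.sum()
    return np.append(scores / denom, 1.0 / denom)

def simulate(m):
    # Belief trajectory under model m and actions drawn from the
    # behavior policy (action labels shifted to 1..4).
    beliefs = rng.dirichlet(p[m], size=T_periods)
    actions = 1 + np.array([rng.choice(A, p=behavior_policy(b)) for b in beliefs])
    return beliefs, actions
\end{verbatim}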
In the next section, we further investigate the robustness of our proposed approaches to model ambiguity, and generate insights into the best value of $\alpha$ that a decision-maker can use to achieve the highest level of robustness.

{ \renewcommand{\arraystretch}{1.2}
\begin{table}[t]
\caption{\baselineskip=10pt Estimated Total Discounted Gain Under Observed and Proposed Regimes (Synthetic Data Analyses with $\beta=0.95$)}\label{Table:Results2}
\begin{center}
\scriptsize
\begin{tabular}{cccc}
\hline
{\bf Pessimism Level ($\alpha$)} & {\bf Observed Regime$^\dag$} & {\bf $\texttt{DAV-Learning}$$^\dag$} & {\bf $\texttt{SAV-Learning}$$^\dag$} \\
\hline
0.00 & 1.441 (1.440, 1.442) & {\bf 1.973} (1.969, 1.977) & 1.442 (1.441, 1.442) \\
0.25 & 1.415 (1.415, 1.416) & {\bf 1.815} (1.811, 1.818) & 1.434 (1.433, 1.434) \\
0.50 & 1.389 (1.389, 1.390) & {\bf 1.656} (1.654, 1.659) & 1.428 (1.428, 1.429) \\
0.75 & 1.364 (1.364, 1.364) & {\bf 1.498} (1.496, 1.499) & 1.434 (1.434, 1.434) \\
1.00 & 1.338 (1.338, 1.339) & 1.348 (1.348, 1.348) & {\bf 1.444} (1.444, 1.444) \\
\hline
\multicolumn{4}{l}{$^\dag$\normalfont Values in parentheses represent $95\%$ confidence intervals. Values in bold font represent the best performance.}\\
\multicolumn{4}{l}{\normalfont For all values, only the first three decimal places are shown.}\\
\end{tabular}
\end{center}
\vspace{-10pt}
\end{table} }

\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{ImprovementSynthetic.pdf}\vspace{2mm}
\caption{\scriptsize Percentage improvement over the observed regime (synthetic data analyses with $\beta=0.95$). Error bands for both approaches, and especially for the $\texttt{SAV-Learning}$ approach, are very tight (hence, not depicted).}\label{fig:improvementsynthetic}
\end{center}\vspace{-4mm}
\end{figure}

\subsection{Robustness to Model Ambiguity}\label{sec:robustness}
We now compare our proposed approaches in terms of their percentage {\em gain loss} (a.k.a. {\em regret}). That is, we first consider an {\em oracle} who knows both the true data generating model and the optimal policy under it, and then compare the performance of a decision-maker who is blind to the true data generating model (i.e., is facing model ambiguity) but uses either $\texttt{DAV-Learning}$ or $\texttt{SAV-Learning}$. How much robustness to model ambiguity do the proposed $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ approaches provide? What is the maximum gain loss of these approaches? For what value of $\alpha$ is the gain loss minimized? Importantly, in order to minimize the gain loss, should the decision-maker use an extreme value of $\alpha$ (e.g., $\alpha=0,1$) or a mid-level value (e.g., $\alpha=0.5$)? And does the answer depend on which of the two learning approaches is used?

\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{RobustnessSynthetic.pdf}\vspace{2mm}
\caption{\scriptsize Percentage Gain Loss of $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ (Synthetic Data Analyses with $\beta=0.95$). Minimum loss is obtained for a mid-level value of the pessimism level ($\alpha=0.25$).}\label{fig:robustness}
\end{center}\vspace{-4mm}
\end{figure}

To answer these questions, we make use of a similar setup to the one discussed in Section \ref{sec:synthetic}. The results are shown in Figure \ref{fig:robustness}, which depicts the percentage gain loss of $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ compared to the imaginary oracle.
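For concreteness, the percentage gain loss plotted in Figure \ref{fig:robustness} can be read as (this formula is our interpretation of the metric, not an equation taken from the text)
\begin{equation*}
\text{gain loss}(\alpha)\,=\,100\times \frac{\Gamma^{\text{oracle}}_{\infty}-\Gamma_{\infty}\big(\hat{\boldsymbol{\mu}}^{e*};\alpha\big)}{\Gamma^{\text{oracle}}_{\infty}}\,\%,
\end{equation*}
where $\Gamma^{\text{oracle}}_{\infty}$ is the gain of the oracle's optimal policy under the true data generating model.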
From this figure, we make three main observations: (1) Gain loss has a U-shaped curve as $\alpha$ varies. Importantly, the minimum loss for both $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ is obtained at a mid-level value of $\alpha$ (approximately $\alpha=0.25$), which implies that using the extreme cases of $\alpha=0.0$ (a maximax view) or $\alpha=1.0$ (a maximin view) does not provide the highest level of robustness to model ambiguity. That is, neither the maximax view nor the maximin view is {\em robustness-maximizing}. (2) The gain loss under $\texttt{SAV-Learning}$ is much more robust to changes in the value of $\alpha$ compared to $\texttt{DAV-Learning}$, which is consistent with our observations in Sections \ref{sec:case} and \ref{sec:synthetic} that the performance of the $\texttt{SAV-Learning}$ regime is in general more robust to the value of $\alpha$. (3) Both $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$ are able to strongly shield against model ambiguity, regardless of the value of $\alpha$ used. Specifically, the gain loss under these approaches (compared to the imaginary oracle) is very low (below $0.6\%$). This implies that a decision-maker who is facing model ambiguity can use these approaches and obtain policies that have similar performance to the very best policy that could be used if the true data generating model were known (i.e., if there were no ambiguity regarding the underlying causal model).

\section{Conclusion}\label{sec:conclude}
We propose a mathematical framework as well as learning algorithms for finding an effective dynamic treatment regime under model ambiguity. Incorporating model ambiguity a priori in the analyses not only provides robustness to inevitable misspecifications (e.g., caused by hidden confounders with unknown dynamics and/or impact on the observed variables), but more broadly can bridge the gap between two philosophical views of causal inference: model-based and model-free. Our work also tries to close the gap between RL techniques and dynamic causal inference methods. Specifically, as is common, we view the problem of finding an effective treatment regime as an ``off-policy'' RL problem. However, unlike the existing work, we allow the learning to occur across a ``cloud'' of potential data generating models. This is specifically useful when data is observational, the behavior policy is unknown, and the existence of time-varying unmeasured confounders (which are themselves affected by previous actions) makes the task of learning the causal impact of an evaluation policy challenging. Unlike the available RL techniques or the methods related to causal inference in dynamic settings, our work also allows for a two-way personalization: the obtained treatment policies are not only personalized based on the subject's variables (e.g., a patient's covariates), but also based on the ambiguity attitude and preferences of the decision-maker (e.g., the physician). Given the importance of this two-way personalization in a variety of applications (e.g., medical decision-making or public policy), we hope that future research can develop further data-driven methods to learn policies that are personalized in both ways. We also hope that future research can test and implement our proposed learning algorithms ($\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$) in a variety of other applications. In this study, we investigate the performance of these learning algorithms in three ways.
First, we analytically establish their asymptotic behavior, including (weak) consistency and asymptotic normality. Second, we examine them in a case study using clinical data related to NODAT patients. Third, we make use of simulation experiments (synthetic data), in which we control the true data generating model and compare the performance of our proposed methods with that of an imaginary oracle who knows both the true data generating model and the optimal policy under that model. All these investigations reveal promising results. However, further research is needed to more broadly investigate the performance of our proposed methods in other applications and domains. Finally, future research can also examine the interpretability of the policies that are obtained via $\texttt{DAV-Learning}$ and $\texttt{SAV-Learning}$, and propose adjustments (if needed) to ensure that they can be effectively used in practice.
{ "redpajama_set_name": "RedPajamaArXiv" }
9,000
Q: Why does the STL implement std::greater as a functor? I was looking at std::greater the other day and was very confused as to why they implemented it as a functor. Were they just showing off their C++ skills, or was there an actual design decision that led to that?
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,689
Q: Adding an event listener to a canvas? I've seen similar posts to this, and watched a few tutorials, but I can't get my canvas to have an event listener. Here's what I've got:
<canvas id="ctx" width="599" height="575" style="border:1px solid #000000;"></canvas>
<script>
var ctx = document.getElementById("ctx").getContext("2d");
ctx.addEventListener('mousedown', onDown, false);
function onDown(event) {
    console.log("click");
};
ctx.font = "normal 20pt Pixelate";
ctx.fillStyle = "gray";
ctx.fillRect(0, 0, 700, 700);
</script>
So how do I add this? I have a link to an external js script after this that needs a reference to a click to change a variable, but even putting the event listener on there, which is located after the declaration of the canvas, still can't establish it. The browser says ctx.addEventListener is not a function.
A: The event listener should be applied to the HTMLCanvasElement (the actual DOM element) rather than to the RenderingContext. For example, this would get your sample to work:
var canvas = document.getElementById("ctx");
var ctx = canvas.getContext("2d");
canvas.addEventListener('mousedown', onDown, false);
Sources:
- https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API
- https://developer.mozilla.org/en-US/docs/Web/API/EventTarget
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,304
Q: How to configure "Directories" when using a Symfony project in PhpStorm I use PhpStorm to work on a Symfony project. In the File > Settings > Project … > Directories configuration, I defined the vendor/ directory as a Resource root in order to have auto-completion and as an Excluded folder because I want to ignore vendors when performing a search in my project's code. But my problem is that vendors are still shown in search results. Here is my current configuration: Here is what I'm trying to avoid: results from vendor/ are shown: Here is the PHP configuration: I can restrict search by selecting Scope = Custom but sometimes I forget to change this. I'm looking for some settings that I can use in my different Symfony2/3 projects. How should I mark the vendor/ directory in order to allow PhpStorm to use it as a resource root and ignore it when performing a search? And what is the correct configuration for the default directories structure of a Symfony2 project? Here are the default directories after a Symfony 2.8 installation with composer create-project symfony/framework-standard-edition symfony-2.8 "~2.8":
app/
├ config
├ cache
├ logs
└ Resources
src/
└ AppBundle/
vendor/
web/
Here is how I marked the directories at this moment:
.idea [excluded]
app/
├ config
├ cache [excluded]
├ logs [excluded]
└ Resources
src/ [source]
└ AppBundle/
  └ Tests/ [test source folders]
vendor/ [excluded]
web/
Note: I installed the Symfony plugin for PhpStorm; I don't know if this changes the IDE behaviour.
A: The vendor folder is not a resource root. A resource root is a folder where resources such as images and scripts will be served from by the web server. In your case the only folder that should be marked as a resource root is probably the web folder, but ironically, it is almost the only one you haven't selected as a resource root. Marking web as the resource root means that the absolute URLs /css/foo.css and /images/foo.jpg could be valid resources served by the web server; you probably want to remove all other folders from resource roots. It is correct to exclude the vendor folder because it is not part of your first-party project code. In order for code completion to work for third-party code you must add the vendor folder as an external library. This can be done by navigating to Languages & Frameworks > PHP in the options and specifying the vendor folder as an include path.
A: Another option, which is easier than manually excluding vendor and then including it again in the PHP settings, is to tell PhpStorm about composer.json and composer.phar in the Composer settings, as shown in this question.
A: After having used the advice from Quolonel Questions's answer, here is a summary of my configuration for Symfony2 (see Symfony3 at the end of this answer): For auto-completion, use the vendor/ directory in Include path: In order to avoid irrelevant results when searching in the project, the following directories have to be ignored:
.idea [excluded]
app/
├ cache [excluded]
└ logs [excluded]
vendor/ [excluded]
Here is my full configuration:
.idea [excluded]
app/
├ cache [excluded]
└ logs [excluded]
src/ [source]
└ AppBundle/
  └ */Tests/ [test source folders]
vendor/ [excluded]
web/ [resources root]
Test Source Folders are optional; if they are defined, they will appear in the toolbar:
With the default configuration for Symfony3, the directories are slightly different:
.idea [excluded]
src/ [source]
tests/ [test source folders]
var/
├ cache [excluded]
└ logs [excluded]
vendor/ [excluded]
web/ [resources root]
Update: after updating my dependencies with composer update, PhpStorm performs searches in the vendor/ directory, even if these directories are ignored. The solution is to remove all the vendor/* directories from Include path and keep only the vendor/ directory, as on the first screenshot. I'll have to test if marking all the vendor/* directories as ignored can work and avoids repeating this after each time composer update is used.
A: I use PhpStorm 10 as my primary IDE for Symfony2. You don't need to install any Symfony plugins, because PhpStorm supports Symfony2 by default.
- You should mark your public_html directory as a Resource Root, or whatever you have that is going to be public
- Sources - your app/ directory
- If you don't want vendor/ in search, that's what I exclude also: you press on vendor and the "Excluded" button on the top. You also want to exclude the tmp/ and app/cache/ directories
- As you already know, you can define a scope and search there.
When you exclude a directory, it also helps performance, since PhpStorm is not indexing and watching files there, something you don't want anyway. As for the directory structure of Symfony2, it's pretty flexible; I use my own. Here is the Symfony 2.8 directory structure from the docs. Excluded folders for me are:
- app/DoctrineMigrations
- app/cache
- app/logs/
- tmp/
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,115
The concept of the fixer-upper is well known in residential real estate, but a local law firm is employing the same concept in the commercial office space realm. With its expansion last month into the historic Eden-Talbott House at 1336 N. Delaware St., the local environmental law firm Plews Shadley Racher & Braun now owns and occupies three historic homes and a 1950s-era office building in the same block. The firm's 55 employees are distributed between those four buildings and a nearby fifth building where it leases 1,000 square feet.

The piecemeal headquarters, which totals 30,000 square feet and was assembled over the last 21 years, puts the firm in rare company. It's almost unheard of for an organization of its size to conduct business out of five separate buildings. And in today's market, office users are more likely to rent space than own it. "It's definitely a tenant's market," said Jon Owens, a senior vice president with the local office of Colliers Turley Martin Tucker. With office vacancy rates topping 19 percent, landlords are cutting attractive deals. A few years ago, when vacancies were relatively low and landlords were calling the shots, a wave of law firms turned to the market for buildings they could buy and occupy. "That's rare today," Owens said, but it still happens. Decision-makers in a firm don't always follow the crowd.

That's certainly true at Plews, where the partners prefer to own rather than lease and are proud of the firm's support of historic preservation, said Jeffrey Featherstun, a partner in the firm. "We like to think we've improved the neighborhood," Featherstun said. The firm was founded with three partners in 1988, the same year it bought the Italian Renaissance-style house at 1346 N. Delaware known as the William B. Wheelock House. Wheelock, an executive of the former L.S. Ayres & Co. department store, built the house in 1912. By 1994, Plews needed more space and bought the 1892 Alvin S. Lockard House across the street. By early 2000 it had purchased a small office building in the same block, and at the end of last year the firm bought the Eden-Talbott House. The 1871 house had a variety of owners before Historic Landmarks Foundation of Indiana bought it in 1979 to spur redevelopment of the Old Northside. Plews bought the house from the National Federation of Music Clubs and hired Marten Construction Management to renovate the nearly 8,000-square-foot structure.

Featherstun said the firm's 17 partners are distributed about evenly between the five buildings, two of which are on the east side of Delaware. The other three are across the street. Employees have gotten used to the unconventional office arrangement. "More business is transacted electronically on bad-weather days," said Featherstun, but "we keep umbrellas by the door." The flip side is that on good-weather days, employees have a good excuse to get some fresh air, he said. Featherstun, who's been with Plews since 1992, said the firm reexamines its real estate needs every time it runs out of space and regularly hears from real estate brokers interested in getting the firm under one roof. But clients, guests and employees enjoy the unique set-up, he said, and the partners haven't been persuaded to give it up.
{ "redpajama_set_name": "RedPajamaC4" }
7,505
Q: AssemblyTitle is not shown in the Windows 7 task bar when WinForms Application is running Here is a strange problem on a Windows 7 64-bit machine which I am not able to understand. I have a WinForms application. Its AssemblyInfo.cs contains the following:
[assembly: AssemblyTitle("Test Application")]
[assembly: AssemblyDescription("Test Application")]
#if DEBUG
[assembly: AssemblyConfiguration("Debug")]
#else
[assembly: AssemblyConfiguration("Release")]
#endif
[assembly: CLSCompliant(true)]
[assembly: AssemblyInformationalVersion("1.0.000")]
When I right click on the .exe file and see the Properties of the file, I can see the correct File Description. BUT, when I run the application, I don't see the correct name (from the AssemblyTitle) in the Windows 7 taskbar's jump-list. What I get there is the namespace in which my application's Main method is. To add to the surprise, it sometimes shows correctly on some other Windows 7 machines. Has anybody encountered such a problem? Do we need to set some attribute other than AssemblyTitle for .NET 4.0 WinForms applications running on Windows 7?
A: You have a shortcut to your application with this exact name somewhere on your desktop, in the Start menu, or pinned to your taskbar. Windows actually searches for shortcuts corresponding to the application, and will get the icon and name from there.
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,078
Brrrrrrr . . . we drove north to Concord, New Hampshire today and as soon as we were out of Boston, we started seeing some piles of snow along the road. I know I heard that there was snow in the Northeast over the weekend, but not until we got to Concord and found the ground still covered with a few inches of that white, fluffy stuff did the reality really hit me. The last time it snowed in Concord for Halloween was over a hundred years ago! I sure hope this is not a precursor of a colder than normal winter. But despite the snow today, the sun was shining and it felt warm while we were working in the storage unit. It is usually freezing when we are there, so we stayed a little longer than usual and really tried to go through things. We have a much better handle on what is there and we came away with winter coats and clothes for the cold time. This evening we went to a Concord Yacht Club meeting. It was great fun to see old friends, the dinner was fabulous, and we truly enjoyed the speaker, Mark Klinker, a Concord cardiologist who decided to spend six months as the physician at the South Pole. He has always had a fascination with that part of the world and his photographs depicting day-to-day life in that VERY chilly part of the world were fascinating. After the yacht club meeting, we came back to Alan and Helaine Kanegsberg's home. It was great catching up on what has been happening in their lives. All in all, it was a very successful trip. We just hope we don't take any of this snowy weather back to the Cape with us tomorrow.

We just got home from Heather's and we left here twelve hours ago. It has been a long day. We have definitely decided to make the trip to New Hampshire tomorrow and we heard back from our friends Alan and Helaine Kanegsberg telling us we are welcome to stay with them tomorrow night. So all is set. It seems we are always making a last-minute decision to go to Concord and the Kanegsbergs are always welcoming. We truly appreciate their flexibility and willingness to put up with us. Thank you, Alan and Helaine. And we are so looking forward to seeing all our friends at the Concord Yacht Club meeting tomorrow night. The last time we saw them was in January of 2008. When we came home in 2009 it was summer time and there are no yacht club meetings. We spent the entire morning sorting through things we have stored in Heather and Jed's basement and packed those things that will go to our storage unit in New Hampshire in the car. We are taking all of our paper charts (eleven chart tubes worth), summer clothes, spare boat parts, and loads of boating books. We still have way too much stuff in Heather and Jed's basement, but we are planning to put things on Craig's List to try and sell them. If they don't sell and we can't give the stuff away, we'll take it to the dump. In the meantime, we also thank Heather and Jed for putting up with our stuff for the past six years, and now that we are home, we know it crowds an already crowded basement. Heather and Jed, we are so appreciative. Thank you. We know we have got to deal with the whole storage issue over the next year. The storage unit is not cheap and we have definitely paid more for it over the last six years than the things in it are worth. But it is truly hard to part with some things that are never going to fit on a boat. Our friends Heather and Jon who just sailed south did a very smart thing that I wish we had done.
They bought a small piece of land in Vermont near where Heather's parents live and built a garage on the property. They store all of their things there and the money they have invested is not lost. We have basically thrown away the $9,000 we have paid for the storage unit over six years, so we definitely have to do something different soon. I spent the afternoon with Sam and Jonah at Heather's while Mark returned to the boat to meet Randy, the refrigeration repairman. It appears that our refrigeration system has a slow Freon leak, but Randy can't find it. He charged it and put some sort of dye in the system. When the charge runs down, he will return to charge it again and hopefully the dye will help him find the leak. He is still waiting for parts to repair our reverse-cycle heater/air conditioner. He hopes to get that done before we move to Fiddler's Cove late next week. So the boat repairs continue. With Jed gone, we stayed at Heather's for dinner and then helped with after-dinner play and bath time for the boys. We are so delighted to be here to help out. It helps erase the guilt of not being around for the past six years. But it's more than that. We just enjoy being around the grandchildren. I just wish Justin and Jo lived closer so we could spend more time with Ziggy. But we'll just have to make the best of our two weeks with him at Christmas.

The ospreys have all flown south for the winter. And daily we see flocks of geese headed that direction. Bob Morris, on his boat Apogee, left Eel Pond today headed for Bermuda. A couple of weeks ago, Steve and Irene of Star left the pond for their winter home in Nevis in the Caribbean. And our friends Jon and Heather who came through Buzzards Bay last week emailed today saying they had to divert because of the weather and go through the Long Island Sound. They will head down the East River tomorrow and on to Norfolk. As the weather gets cooler and cooler, heading south sure is tempting. Lots of friends are headed south, but we're staying put. Right now we don't have anything to shield us from the cold other than polar fleece vests and windbreakers. We do have our foul weather gear and we have been using that, but getting to our storage unit in New Hampshire is something we need to do sooner rather than later. Since Jed is out of town for the next eight days, we thought we should stay close by to help Heather with the boys. But she said today that she can take Thursday afternoon off work to be with Sam and Jonah, so we might take advantage and head for New Hampshire. Mark doesn't have to be at work on Friday until noon, so we could actually go to the Concord Yacht Club meeting on Thursday evening and head home on Friday morning with winter coats and sweaters. It will help tremendously to have weather-appropriate clothing!

To pee, or not to pee . . . paraphrasing Hamlet, "that is the question" that Mark has been asking since June. Today he went to Mass General to have a prostatic stent put in the urethra. This was a temporary experimental procedure to see if expanding the urethra would allow him to use his abdominal muscles to pee without using a catheter. Neither Mark nor the doctor was very hopeful, but it worked! So for the first time since June, Mark is fairly optimistic that he might not actually have to continue to self-catheterize every time he goes to the bathroom. After they do the laser procedure to widen the urethra on December 9, he should be able to pee normally as he did all day today with the stent in.
The name of the prostatic stent is The Spanner. Most descriptive. Otherwise, today was all about Halloween. At 11:15 am, Sam and Jonah's classes from the Woods Hole Cooperative Day Care paraded down the main street in Woods Hole. The two-year-olds were adorable dressed in their ladybug, Nemo, fairy princess, and lion costumes. The four-year-olds paraded as superheroes, dragons, skeletons, Count Dracula, a duck, and of course, more princesses. The teachers were dressed as the characters from the Wizard of Oz. It was all very cute. Tonight we went to Heather and Jed's and gave out candy while a Dinosaur Heather, Wizard Jed, Superman Sam, and Ladybug Jonah went out Trick or Treating. The dino and wizard costumes were made by Mark and me over 25 years ago. They have certainly been well used over the years. Jed leaves tomorrow for eight days in Europe. He starts out at a conference in France and ends up in England before returning home next week. While Jed is gone, I'm sure Mark and I will be spending more time at Heather's in the evenings helping with the kids. On Friday another Xantrax inverter/charger should arrive and Mark will spend much of his weekend doing the installation. When he got to Boston this morning, he called the Xantrax service center in Massachusetts to see if he could bring in the inverter/charger for them to check. They told him to take it back to West Marine and have Xantrax send a new one. Evidently this particular unit is so new that they have no repair parts yet. So let's hope the second one doesn't fail as it appears we have no local support.

This is a test and I'm not sure I'm going to pass. We have just had one thing after another go wrong and this morning I couldn't believe it when both the refrigerator and the brand-new inverter/charger were NOT working. We think the refrigerator has a slow leak somewhere and needs to be recharged with Freon, but we have absolutely no idea what went wrong with the inverter/charger. Mark goes to Boston to the doctor tomorrow and will take the new Xantrax to the closest repair facility a little north of Boston. Then we will just have to wait and see what they say. Mark worked at West Marine today and I spent my day being frustrated by one thing and then another. I had planned to bake a pumpkin pie today. I bought the things needed on Wednesday, but not until I went to get the can of pumpkin out today did I realize that all the groceries I had bought that day were in the bag that was stolen. I went to the little market here in Woods Hole and bought what was available. By making some changes in the recipe, I did end up with a pumpkin pie. Heather, Jed, and the boys were in Woods Hole for a Halloween storytelling evening at the community center. Afterwards they walked across the street to Windbird for dinner. Jonah was dressed in his ladybug costume and Sam arrived as Superman. They were both "super" cute. Tomorrow they will wear the costumes to school and there will be a parade through town just before noon. You know Oma will be out there taking photos of Ladybug, Superman, and all their friends.

When we got home from the Captain Kidd this evening, we got on the computer and continued searching for the cheapest way to travel from here to New Mexico to visit with Justin, Jo, and Ziggy.
We talked with them last night and decided that a Christmas visit would work for all. So we searched all day and then we searched all evening trying to find fares that we could afford. We actually ended up deciding to travel by bus instead of flying. That gives us the option of leaving Woods Hole on a bus and not having to worry about where to leave our car in Boston. And when we return, we can hop on a bus in Boston and be home in less than two hours. On our way out to New Mexico, the timing is such that we will be able to get off the bus in Albuquerque and then ride the New Mexico Road Runner train from Albuquerque to Santa Fe where Justin and Jo will pick us up for the last part of the journey. If we flew out, we would have to spend the night in a hotel in Boston in order to catch the 6 am flight out and we would have to spend the night in a hotel in Albuquerque waiting for the flight home. The bus will cost us just a little more than the flight for one of us. It will be slow, but we have convinced ourselves that it will be like being on passage. We'll take our headlamps and at night we'll use them to read, alternating with trying to sleep. This little venture will certainly give us a chance to see what the bus system is like in this country. It is a slow way to travel, but we definitely have more time than money, and it is certain to be an interesting trip.

We got the last of our computers and parts back from the computer shop today. The Sony Vaio went in because of a keyboard problem and it still has a keyboard problem. They installed a new one but found that it was really the connection of the keyboard to the motherboard. Instead of spending $300 for a new motherboard and $100 for installation, the computer shop's recommendation is to buy an external keyboard that plugs into one of the USB ports or just abandon this computer and buy a new one. The good news of the day was that they were able to obtain all the data from my old external hard drive that crashed. That doesn't give me back the photo changes and writing of the past four months, but at least I have everything prior to July of this year. After checking with the drive recovery business the shop here recommended about my new external hard drive, we have decided that the quoted cost, a minimum of $700 and up to $2,800, to get the data off that drive is just not worth it. I have already started the rewriting and am going to start reorganizing the Year 1 photos today.

We spent the early part of the evening at the Captain Kidd having drinks with Victoria and Chad, the young couple we met last night on the dock. Both of them have traveled extensively and it was fun to share stories. They spent their honeymoon in Savusavu in Fiji and we both agreed that the people of Fiji just might be the friendliest in the world. They are very interested in hearing more about our travels, so we hope to get together with them at least one more time before we leave Woods Hole to go to our winter home at Fiddler's Cove.

I don't like it when the outside temperature is lower than my refrigerator temp, but that is the case tonight and it will continue to be the case through the next few months. Ouch! Once we get to our storage unit in New Hampshire and get appropriate clothing for the cold months, I'm sure we'll feel a bit better about this, but after spending most of the past six years in the tropics, the onset of winter is especially tough. Right now the winds are howling and the rain is pouring outside, but at least it is warm here inside Windbird.
{ "redpajama_set_name": "RedPajamaC4" }
3,023
//
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, v2.0-b52-fcs
// See <a href="http://java.sun.com/xml/jaxb">http://java.sun.com/xml/jaxb</a>
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2012.10.05 at 01:53:24 오후 KST
//

package com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlID;
import javax.xml.bind.annotation.XmlType;
import javax.xml.bind.annotation.adapters.CollapsedStringAdapter;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

/**
 * <p>Java class for loggingType complex type.
 *
 * <p>The following schema fragment specifies the expected content contained within this class.
 *
 * <pre>
 * &lt;complexType name="loggingType">
 *   &lt;complexContent>
 *     &lt;restriction base="{http://www.w3.org/2001/XMLSchema}anyType">
 *       &lt;sequence>
 *         &lt;element name="log-filename" type="{http://java.sun.com/xml/ns/j2ee}string" minOccurs="0"/>
 *         &lt;element name="logging-enabled" type="{http://www.bea.com/ns/weblogic/90}true-falseType" minOccurs="0"/>
 *         &lt;element name="rotation-type" type="{http://java.sun.com/xml/ns/j2ee}string" minOccurs="0"/>
 *         &lt;element name="number-of-files-limited" type="{http://www.bea.com/ns/weblogic/90}true-falseType" minOccurs="0"/>
 *         &lt;element name="file-count" type="{http://java.sun.com/xml/ns/j2ee}xsdPositiveIntegerType" minOccurs="0"/>
 *         &lt;element name="file-size-limit" type="{http://java.sun.com/xml/ns/j2ee}xsdPositiveIntegerType" minOccurs="0"/>
 *         &lt;element name="rotate-log-on-startup" type="{http://www.bea.com/ns/weblogic/90}true-falseType" minOccurs="0"/>
 *         &lt;element name="log-file-rotation-dir" type="{http://java.sun.com/xml/ns/j2ee}string" minOccurs="0"/>
 *         &lt;element name="rotation-time" type="{http://java.sun.com/xml/ns/j2ee}string" minOccurs="0"/>
 *         &lt;element name="file-time-span" type="{http://java.sun.com/xml/ns/j2ee}xsdPositiveIntegerType" minOccurs="0"/>
 *         &lt;element name="date-format-pattern" type="{http://java.sun.com/xml/ns/j2ee}string" minOccurs="0"/>
 *       &lt;/sequence>
 *       &lt;attribute name="id" type="{http://www.w3.org/2001/XMLSchema}ID" />
 *     &lt;/restriction>
 *   &lt;/complexContent>
 * &lt;/complexType>
 * </pre>
 */
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "loggingType", propOrder = {
    "logFilename",
    "loggingEnabled",
    "rotationType",
    "numberOfFilesLimited",
    "fileCount",
    "fileSizeLimit",
    "rotateLogOnStartup",
    "logFileRotationDir",
    "rotationTime",
    "fileTimeSpan",
    "dateFormatPattern"
})
public class LoggingType {

    @XmlElement(name = "log-filename", namespace = "http://www.bea.com/ns/weblogic/90")
    protected com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String logFilename;
    @XmlElement(name = "logging-enabled", namespace = "http://www.bea.com/ns/weblogic/90")
    protected TrueFalseType loggingEnabled;
    @XmlElement(name = "rotation-type", namespace = "http://www.bea.com/ns/weblogic/90")
    protected com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String rotationType;
    @XmlElement(name = "number-of-files-limited", namespace = "http://www.bea.com/ns/weblogic/90")
    protected TrueFalseType numberOfFilesLimited;
    @XmlElement(name = "file-count", namespace = "http://www.bea.com/ns/weblogic/90")
    protected XsdPositiveIntegerType fileCount;
    @XmlElement(name = "file-size-limit", namespace = "http://www.bea.com/ns/weblogic/90")
    protected XsdPositiveIntegerType fileSizeLimit;
    @XmlElement(name = "rotate-log-on-startup", namespace = "http://www.bea.com/ns/weblogic/90")
    protected TrueFalseType rotateLogOnStartup;
    @XmlElement(name = "log-file-rotation-dir", namespace = "http://www.bea.com/ns/weblogic/90")
    protected com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String logFileRotationDir;
    @XmlElement(name = "rotation-time", namespace = "http://www.bea.com/ns/weblogic/90")
    protected com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String rotationTime;
    @XmlElement(name = "file-time-span", namespace = "http://www.bea.com/ns/weblogic/90")
    protected XsdPositiveIntegerType fileTimeSpan;
    @XmlElement(name = "date-format-pattern", namespace = "http://www.bea.com/ns/weblogic/90")
    protected com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String dateFormatPattern;
    @XmlAttribute
    @XmlJavaTypeAdapter(CollapsedStringAdapter.class)
    @XmlID
    protected java.lang.String id;

    /**
     * Gets the value of the logFilename property.
     *
     * @return
     *     possible object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String getLogFilename() {
        return logFilename;
    }

    /**
     * Sets the value of the logFilename property.
     *
     * @param value
     *     allowed object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public void setLogFilename(com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String value) {
        this.logFilename = value;
    }

    /**
     * Gets the value of the loggingEnabled property.
     *
     * @return
     *     possible object is
     *     {@link TrueFalseType }
     */
    public TrueFalseType getLoggingEnabled() {
        return loggingEnabled;
    }

    /**
     * Sets the value of the loggingEnabled property.
     *
     * @param value
     *     allowed object is
     *     {@link TrueFalseType }
     */
    public void setLoggingEnabled(TrueFalseType value) {
        this.loggingEnabled = value;
    }

    /**
     * Gets the value of the rotationType property.
     *
     * @return
     *     possible object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String getRotationType() {
        return rotationType;
    }

    /**
     * Sets the value of the rotationType property.
     *
     * @param value
     *     allowed object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public void setRotationType(com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String value) {
        this.rotationType = value;
    }

    /**
     * Gets the value of the numberOfFilesLimited property.
     *
     * @return
     *     possible object is
     *     {@link TrueFalseType }
     */
    public TrueFalseType getNumberOfFilesLimited() {
        return numberOfFilesLimited;
    }

    /**
     * Sets the value of the numberOfFilesLimited property.
     *
     * @param value
     *     allowed object is
     *     {@link TrueFalseType }
     */
    public void setNumberOfFilesLimited(TrueFalseType value) {
        this.numberOfFilesLimited = value;
    }

    /**
     * Gets the value of the fileCount property.
     *
     * @return
     *     possible object is
     *     {@link XsdPositiveIntegerType }
     */
    public XsdPositiveIntegerType getFileCount() {
        return fileCount;
    }

    /**
     * Sets the value of the fileCount property.
     *
     * @param value
     *     allowed object is
     *     {@link XsdPositiveIntegerType }
     */
    public void setFileCount(XsdPositiveIntegerType value) {
        this.fileCount = value;
    }

    /**
     * Gets the value of the fileSizeLimit property.
     *
     * @return
     *     possible object is
     *     {@link XsdPositiveIntegerType }
     */
    public XsdPositiveIntegerType getFileSizeLimit() {
        return fileSizeLimit;
    }

    /**
     * Sets the value of the fileSizeLimit property.
     *
     * @param value
     *     allowed object is
     *     {@link XsdPositiveIntegerType }
     */
    public void setFileSizeLimit(XsdPositiveIntegerType value) {
        this.fileSizeLimit = value;
    }

    /**
     * Gets the value of the rotateLogOnStartup property.
     *
     * @return
     *     possible object is
     *     {@link TrueFalseType }
     */
    public TrueFalseType getRotateLogOnStartup() {
        return rotateLogOnStartup;
    }

    /**
     * Sets the value of the rotateLogOnStartup property.
     *
     * @param value
     *     allowed object is
     *     {@link TrueFalseType }
     */
    public void setRotateLogOnStartup(TrueFalseType value) {
        this.rotateLogOnStartup = value;
    }

    /**
     * Gets the value of the logFileRotationDir property.
     *
     * @return
     *     possible object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String getLogFileRotationDir() {
        return logFileRotationDir;
    }

    /**
     * Sets the value of the logFileRotationDir property.
     *
     * @param value
     *     allowed object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public void setLogFileRotationDir(com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String value) {
        this.logFileRotationDir = value;
    }

    /**
     * Gets the value of the rotationTime property.
     *
     * @return
     *     possible object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String getRotationTime() {
        return rotationTime;
    }

    /**
     * Sets the value of the rotationTime property.
     *
     * @param value
     *     allowed object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public void setRotationTime(com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String value) {
        this.rotationTime = value;
    }

    /**
     * Gets the value of the fileTimeSpan property.
     *
     * @return
     *     possible object is
     *     {@link XsdPositiveIntegerType }
     */
    public XsdPositiveIntegerType getFileTimeSpan() {
        return fileTimeSpan;
    }

    /**
     * Sets the value of the fileTimeSpan property.
     *
     * @param value
     *     allowed object is
     *     {@link XsdPositiveIntegerType }
     */
    public void setFileTimeSpan(XsdPositiveIntegerType value) {
        this.fileTimeSpan = value;
    }

    /**
     * Gets the value of the dateFormatPattern property.
     *
     * @return
     *     possible object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String getDateFormatPattern() {
        return dateFormatPattern;
    }

    /**
     * Sets the value of the dateFormatPattern property.
     *
     * @param value
     *     allowed object is
     *     {@link com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String }
     */
    public void setDateFormatPattern(com.athena.chameleon.engine.entity.xml.application.weblogic.v9_0.String value) {
        this.dateFormatPattern = value;
    }

    /**
     * Gets the value of the id property.
     *
     * @return
     *     possible object is
     *     {@link java.lang.String }
     */
    public java.lang.String getId() {
        return id;
    }

    /**
     * Sets the value of the id property.
     *
     * @param value
     *     allowed object is
     *     {@link java.lang.String }
     */
    public void setId(java.lang.String value) {
        this.id = value;
    }

}
{ "redpajama_set_name": "RedPajamaGithub" }
3,284
#ifndef MMAL_PARAMETERS_CLOCK_H
#define MMAL_PARAMETERS_CLOCK_H

#include "mmal_parameters_common.h"

/*************************************************
 * ALWAYS ADD NEW ENUMS AT THE END OF THIS LIST! *
 *************************************************/

/** Clock-specific MMAL parameter IDs.
 * @ingroup MMAL_PARAMETER_IDS
 */
enum {
   MMAL_PARAMETER_CLOCK_REFERENCE           /**< Takes a MMAL_PARAMETER_BOOLEAN_T */
         = MMAL_PARAMETER_GROUP_CLOCK,
   MMAL_PARAMETER_CLOCK_ACTIVE,             /**< Takes a MMAL_PARAMETER_BOOLEAN_T */
   MMAL_PARAMETER_CLOCK_SCALE,              /**< Takes a MMAL_PARAMETER_RATIONAL_T */
   MMAL_PARAMETER_CLOCK_TIME,               /**< Takes a MMAL_PARAMETER_INT64_T */
   MMAL_PARAMETER_CLOCK_TIME_OFFSET,        /**< Takes a MMAL_PARAMETER_INT64_T */
   MMAL_PARAMETER_CLOCK_UPDATE_THRESHOLD,   /**< Takes a MMAL_PARAMETER_CLOCK_UPDATE_THRESHOLD_T */
   MMAL_PARAMETER_CLOCK_DISCONT_THRESHOLD,  /**< Takes a MMAL_PARAMETER_CLOCK_DISCONT_THRESHOLD_T */
};

/** Media-time update thresholds */
typedef struct MMAL_PARAMETER_CLOCK_UPDATE_THRESHOLD_T
{
   MMAL_PARAMETER_HEADER_T hdr;

   /** Time differences below this threshold are ignored (microseconds) */
   int64_t threshold_lower;

   /** Time differences above this threshold reset media time (microseconds) */
   int64_t threshold_upper;
} MMAL_PARAMETER_CLOCK_UPDATE_THRESHOLD_T;

/** Media-time discontinuity settings */
typedef struct MMAL_PARAMETER_CLOCK_DISCONT_THRESHOLD_T
{
   MMAL_PARAMETER_HEADER_T hdr;

   /** Threshold after which backward jumps in media-time are treated as a
    * discontinuity (microseconds) */
   int64_t threshold;

   /** Duration in microseconds for which a discontinuity applies (wall-time) */
   int64_t duration;
} MMAL_PARAMETER_CLOCK_DISCONT_THRESHOLD_T;

#endif /* MMAL_PARAMETERS_CLOCK_H */
{ "redpajama_set_name": "RedPajamaGithub" }
4,715
{"url":"http:\/\/www.maa.org\/programs\/faculty-and-departments\/classroom-capsules-and-notes\/a-bug-problem","text":"# A Bug Problem\n\nby Aaron Melman (Santa Clara University)\n\nCollege Mathematics Journal\nMay, 2006\n\nSubject classification(s): Differentiation | Single Variable Calculus | Calculus\nApplicable Course(s): 3.4 Non-mainstream Calc I | 3.3 Mainstream Calculus III, IV | 3.1 Mainstream Calculus I\n\nA bug is on the inside of a container that has the shape of a paraboloid $$y=x^2$$ revolved about the $$y$$-axis. If a liquid is poured into the container at a constant rate, how fast does the bug have to crawl to stay dry?\n\nA pdf copy of the article can be viewed by clicking below. Since the copy is a faithful reproduction of the actual journal pages, the article may not begin at the top of the first page.","date":"2013-12-08 15:55:19","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2448689192533493, \"perplexity\": 1249.0577348349218}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2013-48\/segments\/1386163066152\/warc\/CC-MAIN-20131204131746-00094-ip-10-33-133-15.ec2.internal.warc.gz\"}"}
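A sketch of the intended calculation (our reconstruction; the archived capsule above only states the problem): if the water stands at height $h$, the filled volume of the paraboloid of revolution $y=x^2$ is $V=\int_0^h \pi x^2\,dy=\pi h^2/2$, so a constant inflow rate $Q=dV/dt$ gives $dh/dt=Q/(\pi h)$. The bug crawls along the meridian $x=\sqrt{y}$, whose arc length obeys $ds/dy=\sqrt{1+1/(4y)}$, so staying exactly at the waterline requires the speed $ds/dt=\sqrt{1+1/(4h)}\,Q/(\pi h)$, which diverges as $h\to 0$ and tends to zero as $h\to\infty$.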
NYBC Powered by Brita to Celebrate its 10th Year in 2017

Tags: Arizona, California, Florida, Illinois, Iowa, Louisiana, Michigan, Missouri, National Youth Baseball Championships, Nevada, New Mexico, New York, NYBC, Qualifiers, Texas

The National Youth Baseball Championships powered by Brita will celebrate its 10th anniversary in 2017 as travel baseball teams from around the country meet on July 24-30 at Baseball Heaven in Yaphank, N.Y.

Steel Sports is teaming up with top tournament organizations and venues on 18 qualifiers spread around the United States from January-June. All World Baseball, Baseball Heaven, Game Day USA, The Baseball Legends, Travel Sports Baseball and Xtreme Diamond Sports will operate qualifiers nationwide. With the inclusion of first-class venues in Louisiana's Cypress Mounds Baseball Complex, Iowa's All-Star Ballpark Heaven and Missouri's Ballparks of America on the qualifier schedule, the NYBC will feature events in 12 states, the most in its 10-year history.

"Steel Sports is honored to carry the National Youth Baseball Championships into its 10th year," Steel Sports CEO David Shapiro said. "We are committed to creating a major league experience on and off the field when it comes to the competition, venue, national television, social media coverage, educational opportunities and interaction with MLB alumni. With the help of outstanding partners like Brita, the NYBC will host more youth players and teams than ever before in 2017 and continue to shine as one of the premier youth events in the country."

CBS Sports Network will nationally televise 12 games during NYBC's championship week, and MLB.com will live stream all 12 broadcasts online. The week will also feature opening ceremonies, a televised 12U All-Star Game, appearances from Major League Baseball celebrities and a festival to commemorate the 10th anniversary.

"Having traveled to youth baseball events all over the country, I can say the National Youth Baseball Championships is unlike anything I have experienced," former Major League Baseball player and manager Bobby Valentine said. "The NYBC is everything youth baseball should be about, a positive and fun experience that offers much more than high level competition on the field. I've been fortunate to be a part of the NYBC the last two years, and it's something I want to continue being a part of for as long as I can."

NYBC qualifying begins on Martin Luther King Jr. weekend (Jan. 14-16) with events in Arizona, California and Florida. Additional qualifiers will be held later in the year in Illinois, Iowa, Louisiana, Michigan, Missouri, Nevada, New Mexico, New York and Texas.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,407
Brandon Cox (born October 31, 1983) is a former American football quarterback who played collegiately for Auburn University. As Auburn's starting quarterback from 2005 to 2007 he guided the Tigers to a 29–9 record and was a member of the winningest senior class in Auburn history, winning 50 games during their time on the Plains.

Cox attended Hewitt-Trussville High School, the same school as Jay Barker, a former quarterback for rival Alabama. He was diagnosed with myasthenia gravis in his 10th grade year of high school, but fought the disease and continued to play football.

Cox, a left-hander, was recruited to Auburn in 2003 but redshirted his freshman year. After serving one season as backup, Cox stumbled to begin the 2005 season before leading the Tigers to a 9–3 finish. He returned his junior year in 2006 to lead Auburn to an 11–2 finish, including a victory over Nebraska in the 2007 Cotton Bowl Classic.

Cox began the 2007 season as the Tigers' starter for the third consecutive year. Prior to the season, he was one of 35 quarterbacks named to the 2007 Manning Award Watch List. For much of the 2007 season, Cox struggled to find consistency behind an offensive line starting three freshmen, and he was briefly benched in favor of true freshman quarterback Kodi Burns during the Mississippi State game. Cox rebounded from being benched to lead Auburn to victories over undefeated Florida, Arkansas and Alabama. In winning the 2007 Iron Bowl over Alabama, 17–10, the team set a school record with six consecutive wins over its rival. Cox became only the second Auburn quarterback to go 3–0 against Alabama, his predecessor Jason Campbell being the only other quarterback to record this feat.

Cox's last win came in the 2007 Chick-fil-A Bowl, when he led Auburn past Clemson, 23–20 (OT). Cox completed a career-high 25 passes, but it was Burns who ended the game with a touchdown run in overtime. Cox finished the regular season of his senior year with a 117.58 passer rating. As of the 2007 Iron Bowl, he had 6,748 career passing yards, a 59.12% completion percentage (525/888), and 42 touchdowns against 31 interceptions, for a career NCAA passer rating of 131.58.

After leaving Auburn with a business administration degree, Cox became an account manager for Ready Mix USA. He later worked in the construction industry, serving as a commercial leasing associate for Daniel Corporation and later as Director of Business Development for Hoar Construction.
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,492
\section{\label{Sec:Intro}Introduction}
After more than a century of research, the evolutionary basis of sex and recombination remains enigmatic \citep{Otto2009}. In view of the complex evolutionary conditions faced by natural populations, the search for a single answer to the question of why sexual reproduction has evolved and is maintained in the vast majority of eukaryotic species may well be futile. Nevertheless, theoretical population genetics has identified several simple, paradigmatic scenarios in which the conditions for an evolutionary advantage of sex can be identified in quantitative terms, and which are therefore open (at least in principle) to experimental verification. One such scenario was proposed in the context of the adaptation of a population in a constant environment encoded by a fitness landscape, which assigns fitness values to all possible genotypes. In this setting, the relative advantage of sexual reproduction with respect to, say, the speed of adaptation or the mean fitness level at mutation-selection balance, depends crucially on the epistatic interactions between different loci \citep{deVisser2007,Kouyos2007}. In its simplest form, epistasis is associated with the curvature of the fitness surface, that is, the deviation of the combined fitness effect of multiple mutations (all of them either deleterious or beneficial) from that predicted under the assumption of independent mutations which contribute multiplicatively or additively to fitness. It is well established that recombination speeds up adaptation in such a fitness landscape if the epistatic curvature is negative, in the sense that the fitness of multiple mutants is lower than expected for independent mutations \citep{Kondrashov1988}.
\begin{figure}
\begin{center}
\mbox{ \includegraphics[width=0.7\textwidth]{2peak.eps} }
\end{center}
\caption{\label{Fig:2peak} Schematic of a two-allele, two-locus fitness landscape with reciprocal sign epistasis and unequal peak heights. Fitness is represented by the height above the plane in which the four genotypes (labeled 0, 1, 2, 3) reside.}
\end{figure}
Unfortunately, experimental evidence indicates that negative epistasis is not sufficiently widespread to provide a general explanation for the evolution of recombination \citep{deVisser1997,Elena1997,Bonhoeffer2004}. Instead, empirical studies have highlighted the importance of a more complex form of epistasis, termed \textit{sign epistasis}, in which not only the magnitude but also the sign of the fitness effect of a given mutation depends on the presence of mutations at other loci \citep{Weinreich2005}. A simple example of sign epistasis is shown in Fig.~\ref{Fig:2peak}, which depicts a haploid two-locus fitness landscape. In this case the sign epistasis between the two loci is reciprocal, which leads to the appearance of two fitness peaks separated by a fitness valley \citep{Poelwijk2007}. In the two-locus setting sign epistasis can be viewed as an extreme case of positive epistasis, and it may be expected to lead to a disadvantage of recombination. In a recent simulation study of the effect of recombination on empirically motivated five-locus fitness landscapes displaying sign epistasis \citep{deVisser2009}, we found that recombination can hamper adaptation on such a landscape in a rather dramatic way: Instead of moving to the global fitness optimum, populations get trapped at local optima from which (in the limit of infinite population size) they never escape.
Mathematically, these numerical calculations suggest the appearance of multiple stable stationary solutions of the deterministic, infinite population dynamics above a threshold value $r_c$ of the recombination rate. The simplest situation where this phenomenon can occur is the haploid two-locus landscape shown in Fig.~\ref{Fig:2peak}, where it implies a bistability with two stationary solutions localized near each of the fitness peaks labeled by 0 and 3. Indications for the occurrence of bistability in this system can be found in several earlier studies of the two-locus problem. \citet{CK1965} derived a condition for the initial increase of the high peak mutant in a population dominated by the low peak genotype. \citet{Eshel1970} established a sufficient condition for the low peak population to remain trapped for all times when mutations are unidirectional ($0 \to 1,2 \to 3$), which was subsequently refined by \citet{Karlin1971}. \citet{Feldman1971} and \citet{Rutschman1994} obtained conditions for the existence of multiple stationary solutions in the absence of mutations. The case of symmetric fitness peaks was considered as a model for compensatory mutations in RNA secondary structure by \citet{Stephan1996} and \citet{H1998}, who also addressed the role of genetic drift. Recent studies have considered the effect of recombination on the dynamics of peak escape in the asymmetric two-locus landscape for finite as well as infinite populations \citep{Weinreich2005a,Jain2010}. The corresponding diploid problem has also been studied extensively \citep{Kimura1956,Bodmer1967}. \citet{B1989,Buerger2000} established conditions for bistability in a class of quantitative genetics models of stabilizing selection that are formally equivalent to the diploid version of our problem with symmetric fitness peaks. Finally, bistability induced by recombination has been observed in multilocus models in the context of quasispecies theory \citep{Boerlijst1996,Jacobi2006} and evolutionary computation \citep{Wright2003}. However, a comprehensive analysis of the paradigmatic haploid two-locus system with reversible mutations, reciprocal sign epistasis and fitness peaks of unequal height does not seem to be available, and the present paper aims to fill this gap. In the next section we introduce the evolutionary dynamics used in this work. We then show that finding the stationary solutions of the model amounts to analyzing the zeros of a fourth order polynomial, and devote the bulk of the paper to extracting useful information about the critical recombination rate, the genotype frequency distribution, and the mean fitness in various parameter regimes. We conclude with a summary of our results and a discussion of open problems. \section{\label{Sec:Model}Model} We consider a haploid, diallelic two-locus system. The alleles at a locus are denoted by 0 and 1, and the four genotypes are labeled according to $00 \to 0, \; 01 \to 1, \; 10 \to 2, \; 11 \to 3$. Each genotype $i$ is endowed with a fitness $w_i$, which defines the fitness landscape. The population evolves in discrete, non-overlapping generations under the influence of selection, mutation, and recombination. Since we are interested in the emergence of bistability, in the sense of the existence of multiple stationary frequency distributions, the limit of infinite population size is assumed. Thus, the frequency of each genotype evolves deterministically. The frequency changes according to the following order: selection, mutation, then recombination. 
Let $f_i(\tau)$ denote the frequency of genotype $i$ at generation $\tau$. The frequency of the genotype at generation $\tau+1$ after selection is proportional to the product of its frequency at generation $\tau$ and its fitness $w_i$. After selection, mutations can change the frequency of genotypes. We assume that the mutation probability per generation and locus, $\mu$, depends neither on the location of the locus in the sequence nor on the alleles, and mutations are assumed to be independent of each other. Mathematically speaking, the mutation from sequence $j$ to sequence $i$ then occurs with probability
\begin{equation}
U_{ij} = (1-\mu)^{2 - d(i,j)} \mu^{d(i,j)},
\end{equation}
where $d(i,j)$ is the Hamming distance between two sequences $i$ and $j$, i.e. the number of loci at which the two sequences differ. After selection and mutation, the frequency distribution will be
\begin{equation}
\tilde f_i(\tau+1) = \sum_{j} U_{ij} \frac{w_{j}}{\bar w\left (\tau \right )} f_{j}\left (\tau\right ),
\label{frequency_pool1}
\end{equation}
where $\bar w(\tau) \equiv \sum_i w_i f_i(\tau)$ is the mean fitness at generation $\tau$. After selection and mutation, recombination follows. First, two sequences, say $j$ and $k$, are selected with probability $\tilde f_{j} \tilde f_{k}$. With probability $1-r$, no recombination happens. With probability $r$, however, the two sequences recombine in a suitable way and generate a recombinant, which may be identical to or different from $j$ and $k$ depending on the recombination scheme and the properties of the two initial sequences. One may interpret the case $r<1$ as a model for organisms which can reproduce sexually as well as asexually. We will use the one-point crossover scheme, which is summarized as
\begin{equation}
\begin{array}{l} \chi_1 \chi_2 \\ \chi_1'\chi_2' \end{array} \rightarrow \left \{ \begin{array}{lll} \chi_1&\chi_2'&\quad \text{with prob. }\displaystyle r/2,\\ \chi_1'&\chi_2&\quad \text{with prob. }\displaystyle r/2,\\ \chi_1&\chi_2&\quad \text{with prob. }\displaystyle (1-r)/2,\\ \chi_1'&\chi_2'&\quad \text{with prob. }\displaystyle (1-r)/2, \end{array} \right .
\label{Eq:single_crossover}
\end{equation}
where $\chi_1$ ($\chi_2$) is the allelic type at the $1^\mathrm{st}$ ($2^\mathrm{nd}$) locus, $j = \chi_1 \chi_2$, $k = \chi_1'\chi_2'$ are parental genotypes, and the four types on the right hand side are the possible resulting recombinants with their respective probabilities. If $R_{i|jk}$ denotes the probability that the resulting genotype is $i$ when types $j$ and $k$ (not necessarily different) attempt to recombine, the frequency of type $i$ at generation $\tau+1$ becomes $f_i' = \sum_{jk} R_{i|jk} \tilde f_j \tilde f_k$.
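As a concrete illustration of this notation (immediate from Eq.~(\ref{Eq:single_crossover}), though not spelled out in the derivation): if the parents are $j=0=00$ and $k=3=11$, one-point crossover produces the recombinants $01$ and $10$ with probability $r/2$ each and returns one of the parental types with probability $(1-r)/2$ each, so that $R_{1|03}=R_{2|03}=r/2$ and $R_{0|03}=R_{3|03}=(1-r)/2$.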
After selection, mutation, and recombination, the frequency of each sequence in the next generation becomes
\begin{eqnarray}
\nonumber \bar w(\tau) f_0' = f_0 w_0 (1-\mu)^2 + (f_1 w_1 + f_2 w_2) \mu (1-\mu) + f_3 w_3 \mu^2 - r(1-2\mu)^2 \frac{\delta(\tau)}{\bar w(\tau)},\\
\bar w(\tau) f_1' = f_1 w_1 (1-\mu)^2 + (f_0 w_0 + f_3 w_3) \mu (1-\mu) + f_2 w_2 \mu^2 + r(1-2\mu)^2 \frac{\delta(\tau)}{\bar w(\tau)}, \nonumber\\
\bar w(\tau) f_2' = f_2 w_2 (1-\mu)^2 + (f_0 w_0 + f_3 w_3) \mu (1-\mu) + f_1 w_1 \mu^2 + r(1-2\mu)^2 \frac{\delta(\tau)}{\bar w(\tau)}, \nonumber\\
\bar w(\tau) f_3' = f_3 w_3 (1-\mu)^2 + (f_1 w_1 + f_2 w_2) \mu (1-\mu) + f_0 w_0 \mu^2 - r(1-2\mu)^2 \frac{\delta(\tau)}{\bar w(\tau)},
\label{Eq:two_locus_eq}
\end{eqnarray}
where $\delta(\tau) = f_0(\tau) f_3(\tau) w_0 w_3 - f_1(\tau) f_2(\tau) w_1 w_2$, and the $f_i$'s and $f_i'$'s denote the frequencies at generations $\tau$ and $\tau +1$, respectively. Note that the case of free recombination frequently considered in the literature corresponds to $r=\frac{1}{2}$ in Eq.~(\ref{Eq:two_locus_eq}). By rearranging (\ref{Eq:two_locus_eq}), we get
\begin{eqnarray}
\frac{\bar w(\tau)}{ 1 - 2 \mu }\left ( f'_0- f'_3\right ) &=& f_0w_0 - f_3w_3 , \label{Eq:pf1} \\
\frac{\bar w(\tau)}{ 1 - 2 \mu }\left ( f'_1- f'_2\right ) &=& f_1w_1 - f_2w_2 ,\\
f'_0 f'_3 - f'_1 f'_2 &=&\frac{(1-r)(1-2\mu)^2}{ \bar w(\tau)^2}\delta(\tau). \label{Eq:LD}
\end{eqnarray}
A nontrivial conclusion one can draw from Eq.~(\ref{Eq:LD}) is that linkage equilibrium, $f'_0 f'_3 = f'_1 f'_2$, is attained within a single generation if $r=1$, that is, when the one-point crossover is applied with probability 1. The stationary distribution is calculated by setting $f'_i=f_i$ in the above three equations, which gives
\begin{eqnarray}
\frac{f_0}{f_3} &=& \frac{\bar w -(1-2\mu) w_3}{ \bar w - (1-2\mu) w_0}\equiv A,\label{Eq:03}\\
\frac{f_1}{f_2} &=& \frac{\bar w -(1-2\mu) w_2}{ \bar w - (1-2\mu) w_1},\label{Eq:12}\\
\frac{f_0 f_3}{f_1 f_2} &=& \frac{ \bar w^2 - (1-r)(1-2\mu)^2 w_1 w_2}{ \bar w^2 - (1-r)(1-2\mu)^2 w_0 w_3}\equiv B, \label{Eq:link}
\end{eqnarray}
where $\bar w$ is the mean fitness at stationarity. With the two additional conditions
\begin{equation}
\label{normalization1}
f_0 + f_1 + f_2 + f_3 = 1
\end{equation}
and
\begin{equation}
\label{normalization2}
\bar w = w_0 f_0 + w_1 f_1 + w_2 f_2 + w_3 f_3
\end{equation}
the steady state solution is fully determined. Without loss of generality, $w_3$ is taken to be the largest fitness (with possible degeneracy). For simplicity, we set $w_1=w_2$, which by (\ref{Eq:12}) implies that $f_1 = f_2 \equiv f$. Since the dynamics is invariant under multiplication of all fitnesses by the same factor, we can choose
\begin{equation}
\label{fitnesses}
w_3=1, w_0=1-t, w_1=w_2=1-t-s
\end{equation}
with $0<t<1$ and $-t<s<1-t$. For the sake of brevity, however, we will sometimes use $w_0$ and $w_1$ rather than $s$ and $t$ in what follows. The behavior for unequal valley fitnesses ($w_1 \neq w_2$) will be discussed in Sect.~\ref{Sec:asymmetric}. Note that the problem studied by \citet{H1998} corresponds to the case with $t=0$ and $s>0$. In this paper, we will assume that $t>0$ and $s>0$, that is, the fitness landscape has a unique global optimum and reciprocal sign epistasis (Fig.~\ref{Fig:2peak}). Unlike the fitness landscape with symmetric peaks, it is difficult to find the solution exactly (see also Appendix~\ref{Sec:Sympeak}).
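Although the stationarity conditions will be analyzed analytically below, the fixed points of Eq.~(\ref{Eq:two_locus_eq}) are also easy to probe by direct iteration. The following minimal sketch (ours, not part of the original analysis; the parameter values are purely illustrative and taken from Fig.~\ref{Fig:hx}) exploits the fact that the mutation kernel is the twofold tensor product of the single-locus mutation matrix, and that the recombination correction in Eq.~(\ref{Eq:two_locus_eq}) equals $r$ times the linkage disequilibrium computed after selection and mutation:
\begin{verbatim}
import numpy as np

# Illustrative parameters; with s=0.5, t=0.4, mu=0.01 the text quotes
# r_c ~ 0.965, so r=0.97 should admit both a HFS and a LFS.
s, t, mu, r = 0.5, 0.4, 0.01, 0.97
w = np.array([1 - t, 1 - t - s, 1 - t - s, 1.0])   # w_0, w_1 = w_2, w_3

M = np.array([[1 - mu, mu], [mu, 1 - mu]])         # single-locus mutation
U = np.kron(M, M)                                  # U_ij = (1-mu)^(2-d) mu^d

def generation(f):
    """One generation: selection, mutation, then one-point crossover."""
    p = w * f / np.dot(w, f)                       # selection
    g = U @ p                                      # mutation
    D = g[0] * g[3] - g[1] * g[2]                  # linkage disequilibrium
    return g + r * D * np.array([-1.0, 1.0, 1.0, -1.0])

for f in (np.array([0.97, 0.01, 0.01, 0.01]),      # start near the low peak
          np.array([0.01, 0.01, 0.01, 0.97])):     # start near the high peak
    for _ in range(200000):
        f = generation(f)
    print(f, np.dot(w, f))   # distinct fixed points signal bistability
\end{verbatim}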
We will investigate the approximate solutions by assuming that some of the parameters are very small compared to others.
\section{\label{Sec:bi}Bistability}
\subsection{\label{Sec:gen}General behavior of solutions}
Since the frequency of each genotype is strictly positive in the steady state, Eq.~(\ref{Eq:03}) excludes the mean fitness $\bar w$ from being in the range between $(1-2\mu) w_0$ and $(1-2\mu) w_3$. For obvious reasons, we will refer to a solution with $\bar w > (1-2\mu) w_3$ as a high-fitness solution (HFS) and a solution with $\bar w<(1-2\mu) w_0$ as a low-fitness solution (LFS). As an immediate consequence of Eq.~(\ref{Eq:03}), we see that a HFS (LFS) implies $f_3>f_0$ ($f_3<f_0$). The largest fitness being $w_3$, the existence of a HFS is expected regardless of the value of the recombination probability (we will see later that this is indeed the case). Hence we are mainly interested in the conditions which allow for a LFS. Later, we will see that if it exists, there are actually two LFS's, only one of which is locally stable. Since we are interested in stable solutions, in what follows LFS refers exclusively to the locally stable solution. Intuitively, the HFS should be locally stable, so the emergence of a LFS naturally implies bistability. Necessary conditions for bistability are easy to find. First note that $\bar w$ is larger than $w_1$ by definition. Therefore, a LFS is possible only if $ w_1 < (1-2\mu) w_0$, or
\begin{equation}
\mu< \frac{s}{2(1-t)}.
\label{Eq:mu_cond}
\end{equation}
If we put $\mu =\frac{1}{2}$ in Eq.~(\ref{Eq:two_locus_eq}), every $f_i'$ becomes $\frac{1}{4}$ regardless of $r$ and the $f_i$'s. Since this is a unique equilibrium state for $\mu =\frac{1}{2}$, it could have been anticipated that bistability requires a restriction on $\mu$. A necessary condition on $r$ can be obtained from Eq.~(\ref{Eq:link}). While the numerator of the expression defining $B$ is always positive, the denominator would be negative for the LFS if $w_0 - (1-r) w_3 < 0$. Hence a necessary condition for bistability is
\begin{equation}
r > 1 - \frac{w_0}{w_3} = t,
\label{Eq:r_cond}
\end{equation}
which appears also in earlier works \citep{CK1965,Eshel1970,Karlin1971,Feldman1971,Rutschman1994}. Most of the calculations in this paper are devoted to refining the conditions for bistability. To this end, we will reduce the five equations (\ref{Eq:03},\ref{Eq:12},\ref{Eq:link},\ref{normalization1},\ref{normalization2}) to a single equation for $\bar w$. Equations (\ref{Eq:03}) and (\ref{Eq:link}) along with the normalization (\ref{normalization1}) yield the relations
\begin{equation}
f = \frac{\sqrt{A}}{2 \sqrt{A} + \sqrt{B} (1+A)},\;\; f_3 = \frac{\sqrt{B}}{2 \sqrt{A} + \sqrt{B} (1+A)},\;\; f_0 = A f_3,
\label{Eq:reduce}
\end{equation}
where we have used the fact that the $f_i$'s should be positive, and the definition (\ref{normalization2}) of the mean fitness $\bar w$ implies that
\begin{equation}
2 \left ( \bar w - w_1 \right ) = \sqrt{\frac{B}{A}} \left ( w_3 + w_0 A - (1+A)\bar w \right ).
\label{Eq:eqbarw}
\end{equation}
Since our main focus is on the LFS, it is convenient to define the auxiliary variable $x$ through
\begin{equation}
\label{xdef}
\bar w = (1-2\mu) (w_0-x),
\end{equation}
which implies that $x < -t$ and $x > 0$ for the HFS and the LFS, respectively. Note that $x$ is in one-to-one correspondence with the mean fitness, and equilibria will be found in terms of $x$.
We also note for future reference that with this reparametrization, $A$ and $B$ in Eqs.~(\ref{Eq:03}) and (\ref{Eq:link}) become
\begin{equation}
\label{AA}
A = 1+\frac{t}{x},\quad B = 1+ (1-r) \frac{ w_0 w_3 - w_1^2}{(w_0-x)^2 - (1-r) w_0 w_3}.
\end{equation}
Squaring both sides of Eq.~(\ref{Eq:eqbarw}) results in a quartic equation
\begin{equation}
\label{quartic}
h(x) \equiv h_0(x) + r h_1(x) = 0,
\end{equation}
where the polynomials $h_0(x)$ and $h_1(x)$ are \textit{independent of $r$} (see Appendix~\ref{Sec:Hx} and the Mathematica file in the online supplement for explicit expressions). We can draw some general conclusions concerning solutions of the quartic equation by evaluating $h(x)$ at selected points. Since $h(x)$ is negative at $x=0$, $x=-t$, and at the point $x=x_1$ defined by $(1-2\mu)(w_0 -x_1) = w_1$, and the coefficient of the fourth-order term is positive (see Appendix~\ref{Sec:Hx}), there always exist solutions in $x > x_1$ and $x<-t$ which correspond to $\bar w <w_1$ and $\bar w > (1-2\mu) w_3$, respectively. The spurious solution $\bar w<w_1$ arises because we squared both sides of Eq.~(\ref{Eq:eqbarw}). In the Mathematica file in the online supplement, we show that $h(x)$ is positive when $x = 1- t - 1/(1-2\mu)$, or $\bar w = w_3 = 1$. This proves, as anticipated, that the HFS with the mean fitness in the range $(1-2\mu) w_3 < \bar w< w_3$ is present for all values of $r$. Hence the condition for bistability is recast as the condition for $h(x)$ to have a positive solution which is smaller than $x_1$. Because $h(0)$ and $h(x_1)$ are negative, the existence of a solution in the range $0<x<x_1$, or equivalently $w_1 < \bar w < (1-2\mu) w_0$, always entails two solutions in the same region if we count the number of degenerate solutions (that is, two identical solutions) as 2. Let us assume that $h(x) = 0$ has two degenerate solutions at $x = x_c$ when $r=r_c$ (as we will see, $r_c$ is the critical recombination rate above which bistability exists). This means that $x_c$ is the simultaneous solution of the two equations $h(x_c) = h'(x_c)=0$, where the prime denotes the derivative with respect to the argument. Later, this simple relation will play a crucial role in finding $r_c$ as well as $x_c$. Now let us change $r$ infinitesimally from $r_c$ to $r_c + \varepsilon_r$, and let the solutions of $h(x) = 0$ for $r=r_c + \varepsilon_r$ take the form $x_c + \varepsilon_x$. Note that we are only interested in solutions with $\varepsilon_x \rightarrow 0$ as $\varepsilon_r\rightarrow 0$ in the complex $x$ plane. Since two other solutions exist outside of the range $0<x<x_1$ and both $h(0)$ and $h(x_1)$ are negative, $h(x)$ with $r=r_c$ has a local maximum at $x=x_c$, that is, $h''(x_c;r_c)<0$. Figure~\ref{Fig:hx} illustrates this situation.
\begin{figure}[t]
\begin{center}
\mbox{ \includegraphics[width=0.9\textwidth]{hx.eps} }
\end{center}
\caption{\label{Fig:hx} Behavior of the function $h(x)$ around the critical recombination probability. In this example, $s=0.5$, $t=0.4$, and $\mu=0.01$ are used, which yields $r_c \approx 0.965$. The curves meet at (three) points where $h_1(x)$ vanishes. The locations of the points $x=-t$ and $x=x_1$ are indicated by filled circles. Low fitness solutions correspond to zero crossings in $0<x<x_1$, and high fitness solutions to those with $x<-t$. The HFS's are indicated by an arrow, which happens to be close to $-t$. The zero crossings with $x>x_1$ are spurious. Inset: Close-up view of the boxed area.
For clarity, the vertical line at $x=0$ is also drawn. One of the solutions of $h_1(x)=0$ happens to be close to $x=0$. Note that for $r=r_c$, two solutions become identical (degenerate). For $r>r_c$, the LFS is indicated by an arrow.}
\end{figure}
Using Eq.~(\ref{quartic}) we get
\begin{equation}
h(x_c + \varepsilon_x ;r_c + \varepsilon_r) \approx -\frac{1}{2} |h''(x_c;r_c) | \varepsilon_x^2 + \varepsilon_r h_1 (x_c) =0.
\label{Eq:dr}
\end{equation}
Hence real solutions are possible if the second term is positive. By definition, $r_c h_1(x_c)= - h_0(x_c)$. For $r=0$, Eq.~(\ref{quartic}) reduces to $h_0(x) = 0$, and we may conclude from the condition Eq.~(\ref{Eq:r_cond}), which is violated for $r=0$, that this equation does not have a solution in the region $0<x<x_1$. Hence $h_0(x_c)<0$ because $h_0(0)<0$ and $h_0(x_1)<0$ (see Appendix~\ref{Sec:Hx}), which, in turn, implies that $h_1(x_c)>0$. To summarize, we have proved that if $x_c$ is the degenerate solution of $h(x)=0$ when $r=r_c$, there are two solutions in the region $0<x<x_1$ when $r > r_c$. One should note that $r_c$, if it exists, is unique, as otherwise a contradiction to Eq.~(\ref{Eq:dr}) would arise. To establish the existence of bistability for $r>r_c$, it remains to find $r_c$. Even though general solutions of quartic equations are available, it is difficult to extract useful information from them. Rather, we will look for approximate solutions by assuming that one of the three parameters $\mu$, $s$, and $t$ is much smaller than the other two. In fact, it follows from the condition (\ref{Eq:mu_cond}) that there cannot be any bistability when $s \ll \mu$ (unless $t \to 1$). Hence, in this paper, we will not pursue the case where $s$ is the smallest parameter. Before turning to the derivation of the approximate solutions, we further exploit the linear $r$-dependence of $h(x)$ in Eq.~(\ref{quartic}). It implies that the two equations $h(x_c;r_c) = h'(x_c;r_c) = 0$ with two unknowns can be reduced to a single equation for $x_c$ which does not involve $r_c$. To be specific, from $h_0(x_c) + r_c h_1(x_c)=h_0'(x_c) + r_c h_1'(x_c)=0$ we obtain
\begin{equation}
r_c = - \frac{ h_0 (x_c)}{h_1(x_c)} = - \frac{ h_0'(x_c)}{ h_1'(x_c)},
\label{Eq:rc_condition}
\end{equation}
or
\begin{equation}
H(x_c) \equiv h_0 (x_c) h_1'(x_c) - h_1(x_c) h_0'(x_c) =0.
\label{Eq:xc_condition}
\end{equation}
Thus, rather than finding $r_c$ and $x_c$ simultaneously, we will first find $x_c$ by solving Eq.~(\ref{Eq:xc_condition}), which, in turn, gives $r_c$ from Eq.~(\ref{Eq:rc_condition}). Equation (\ref{Eq:xc_condition}) will be analyzed below, after we have introduced one more useful concept.
\subsection{\label{Sec:muc}Critical mutation probability}
As evidenced by Eq.~(\ref{Eq:mu_cond}), a finite critical recombination probability $r_c$ can exist only if $\mu$ is sufficiently small. Mathematically, this implies that $r_c$ diverges as $\mu$ approaches a certain critical mutation probability $\mu_c$, such that bistability is not possible for $\mu > \mu_c$. Although $r$ cannot, strictly speaking, exceed unity, we will see later that this definition of $\mu_c$ will be of great use in finding an accurate expression for $r_c$. Setting $r_c=\infty$ in Eq.~(\ref{Eq:rc_condition}), we see that $\mu_c$ is the solution of the equations $h_1(x_c) = h_1'(x_c) = 0$.
Since $h_1(x)$ is a cubic function taking the form $h_1(x) = -C_3 x^3 - C_2 x^2 + C_1 x - C_0$, $x_c$ is also the solution of the equation
\begin{equation}
G(x) = x h_1'(x) - 3 h_1(x)= C_2 x^2 - 2 C_1 x + 3 C_0 =0.
\end{equation}
From $G(x)$ and $h_1'(x)$, we can construct two linear equations for $x$ such that
\begin{eqnarray}
G_1(x) = C_2 h_1'(x) + 3 C_3 G(x) = - 2 (3 C_1 C_3 + C_2^2) x + C_1 C_2 + 9 C_0 C_3=0,\\
G_2(x) = \frac{1}{x} ( C_1 G(x) - 3 C_0 h_1'(x)) = (C_1 C_2 + 9 C_0 C_3) x - 2 (C_1^2 - 3 C_0 C_2)=0,
\end{eqnarray}
where we have used the fact that $x_c \neq 0$. Hence, the value of $x_c$ for $r_c =\infty$ is given by
\begin{equation}
x_c^\infty = \frac{C_1 C_2 + 9 C_0 C_3}{2(C_2^2 + 3 C_1 C_3 )} = \frac{2(C_1^2 - 3 C_0 C_2)}{C_1 C_2 + 9 C_0 C_3}.
\label{Eq:muc_xc}
\end{equation}
Note that the $C_i$'s depend on $\mu$, $s$, and $t$, which means we have an equation for $\mu_c$ such that
\begin{equation}
(C_1 C_2 + 9 C_0 C_3)^2- 4 (C_1^2 - 3 C_0 C_2)(C_2^2 + 3 C_1 C_3) =0,
\label{Eq:muc}
\end{equation}
which, in turn, provides $x_c^\infty$ by inserting $\mu_c$ into Eq.~(\ref{Eq:muc_xc}). In fact, Eq.~(\ref{Eq:muc}) is equivalent to the vanishing of the discriminant of the cubic polynomial (see the Mathematica file in the online supplement). We now assume that $s,t\ll 1$, which also implies $\mu_c \ll 1$ by Eq.~(\ref{Eq:mu_cond}). Then to leading order the $C_i$'s become
\begin{equation}
C_3 = 2s + t,\; C_2 = (t+2\mu)(2s+t) - s^2,\; C_1 = t(s^2 - 2 \mu (2 s + t)),\; C_0 =\mu^2 t^2,
\end{equation}
and Eq.~(\ref{Eq:muc}) becomes (see the Mathematica file in the online supplement)
\begin{equation}
3 s^{10} \nu^2(1-t) \left ( 1+\nu-z(2+\nu)\right )^2 M(z) = 0,
\label{Eq:muc_eq}
\end{equation}
where $z = \mu/s$, $\nu = t/s$, and $M(z)$ is a cubic polynomial with
\begin{eqnarray}
M(z) = 32 ( \nu+2) z^3 - ( 13 \nu^2 + 48 \nu + 48) z^2 + 2 (2 \nu^3 + 7 \nu^2 + 9 \nu + 6) z - (1+\nu)^2.
\end{eqnarray}
Note that $z=(1+\nu)/(2+\nu)$ cannot be a solution because of Eq.~(\ref{Eq:mu_cond}). Hence the critical mutation probability is the solution of the equation $M(z) = 0$. As shown in the Mathematica file in the online supplement, $M(z)$ is an increasing function of $z$, which, along with $M(0)<0$, allows only one positive real solution of $M(z)=0$ unless $\nu=0$. One can find the exact solution in the Mathematica file in the online supplement, but it is too unwieldy to present here. We will just present the asymptotic behavior of the solution for later purposes (see the Mathematica file in the online supplement). When $t \ll s$ ($\nu \ll 1$),
\begin{equation}
\mu_c = \frac{s}{4} \left [ 1 - 3 \left ( \frac{ t}{4 s} \right )^{2/3} \right ] \textrm{ and } x_c^\infty = \left (\frac{s t^2}{16} \right )^{1/3},
\label{Eq:muc_smallt}
\end{equation}
and when $s \ll t$ ($\nu \gg 1$)
\begin{equation}
\mu_c \approx \frac{s^2}{4 t} \textrm{ and } x_c^\infty = \frac{s^2}{4 t}.
\label{Eq:muc_smalls}
\end{equation}
Although we derived the above two expressions from the exact solution (see the Mathematica file in the online supplement), it is not difficult to find the asymptotic behavior without resorting to the exact solution. When $\nu$ is of order unity, a numerical value is more useful. In the case $\nu=1$ ($t=s$) we get
\begin{equation}
\mu_c \approx 0.107 s \textrm{ and } x_c^\infty \approx 0.0616 s.
\label{Eq:muc_st}
\end{equation}
Below we will see how $\mu_c$ can be used to derive improved approximations for $r_c$.
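The quoted root is easy to verify numerically; a minimal sketch (ours, using only the coefficients of $M(z)$ given above):
\begin{verbatim}
import numpy as np

nu = 1.0   # nu = t/s; for nu = 1 the unique real root should give mu_c ~ 0.107 s
coeffs = [32 * (nu + 2),
          -(13 * nu**2 + 48 * nu + 48),
          2 * (2 * nu**3 + 7 * nu**2 + 9 * nu + 6),
          -(1 + nu)**2]
roots = np.roots(coeffs)                      # solve M(z) = 0
print(roots[np.abs(roots.imag) < 1e-9].real)  # -> [0.1068...]
\end{verbatim}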
\subsection{\label{Sec:Smallmu}Approximation for small mutation probability}
Now we will move on to finding the critical recombination probability. We begin with the investigation of the approximate solutions for small mutation probability ($\mu \ll s, t$). Let us assume that $x_c = x_0 + a_\mu \mu + O(\mu^2)$, which should be justified self-consistently. For later purposes, we introduce the parameters
\begin{equation}
\label{alphabeta}
\alpha = (1-t)(s+t)^2 - s^2, \;\;\; \beta = (1-t)(s+t)^2 + s^2.
\end{equation}
Note that $\alpha$ is positive if $s < \sqrt{1-t} + 1 -t$, which is automatically satisfied because $s$ is smaller than $1-t$ by definition. The leading behavior of $H(x_c)$ becomes
\begin{equation}
H(x_c) = x_0^2 (t+x_0)^2 ( a_2 x_0^2 + 2 a_1 x_0 + a_0) + O(\mu),
\end{equation}
where the $a_i$'s are parameters satisfying $a_1^2 - a_0 a_2 < 0$ (see the Mathematica file in the online supplement). Hence, the solutions for $x_0$ are 0, $-t$, and two complex numbers. Since $x_c$ must be positive, the only possible solution for $x_0$ is $x_0=0$. Accordingly, the actual leading behavior of $H(x_c)$ becomes (contributions of order $O(1)$ and $O(\mu)$ vanish)
\begin{equation}
H(x_c) = - \mu^2 s^2 t^2 \left ( (1-t)^2 \alpha - a_\mu^2 \beta \right ) + O(\mu^3) = 0,
\end{equation}
which gives
\begin{equation}
a_\mu = (1-t) \left (\frac{\alpha}{\beta}\right )^{1/2}.
\end{equation}
By putting $x_c=a_\mu \mu$ into Eq.~(\ref{Eq:rc_condition}) and keeping terms up to $O(\mu)$, we find
\begin{equation}
r_c = t + 2 \frac{1-t}{s^2} \left ( \alpha + \sqrt{\alpha \beta} \right ) \mu \equiv t + c_\mu \mu,
\label{Eq:rc_mu}
\end{equation}
which clearly satisfies the bound Eq.~(\ref{Eq:r_cond}), and shows that this bound becomes an equality when $\mu \to 0$. The approximation for $r_c$ can be significantly improved by matching the approximation for small $\mu$ with the behavior of $r_c$ when $\mu$ is close to $\mu_c$. Since $r_c$ becomes infinite at $\mu = \mu_c$, we write
\begin{equation}
r_c = t \frac{1 + \rho \mu}{1 - \mu/\mu_c}, \quad \textrm{ with }\quad \rho = 2 \frac{1-t}{s^2 t} \left ( \alpha + \sqrt{\alpha \beta} \right ) - \frac{1}{\mu_c},
\label{Eq:rc_match}
\end{equation}
where $\rho$ is determined such that the correct leading behavior is reproduced when $\mu$ is small. The specific form of the divergence at $\mu=\mu_c$ is motivated by the behavior in the case of symmetric fitness peaks (see Eq.~(\ref{Eq:rc0})).
\begin{figure}
\includegraphics[width=\textwidth]{rcmu.eps}
\caption{\label{Fig:rcmu} Comparison of the exact $r_c$ with the approximate solutions Eqs.~(\ref{Eq:rc_mu}) and (\ref{Eq:rc_match}) for two cases: (a) $s\ll t$ and (b) $t\ll s$. Eq.~(\ref{Eq:rc_match}) shows excellent agreement with the exact $r_c$ when $t$ is not too small compared to $s$. Eq.~(\ref{Eq:rc_mu}) is generally poor in predicting $r_c$.}
\end{figure}
In Fig.~\ref{Fig:rcmu}, the exact values of $r_c$ obtained from numerical calculations are compared with the two approximations Eqs.~(\ref{Eq:rc_mu}) and (\ref{Eq:rc_match}). One can find more such graphs in the Mathematica file in the online supplement. As expected, both expressions give a reliable prediction when $\mu$ is sufficiently small. However, as the exact value of $r_c$ becomes larger, Eq.~(\ref{Eq:rc_mu}) does not give an accurate estimate, which is not surprising. The more surprising observation is that Eq.~(\ref{Eq:rc_match}) gives an excellent prediction for $r_c$ in almost all ranges.
However, Eq.~(\ref{Eq:rc_match}) becomes a poor guide to $r_c$ as $t$ gets smaller, which suggests that the case with small $t$ should be treated separately. In the next subsection, we will study the two-locus model for small $t$.
\subsection{\label{Sec:Smallt}Approximation for small $t$}
Now let us move on to the case with small $t$ ($t\ll \mu, s$), which connects the present study to the problem with symmetric fitness peaks considered by~\citet{H1998}. As before, we will find $x_c$ from Eq.~(\ref{Eq:xc_condition}). As shown by~\citet{H1998} (see also Appendix~\ref{Sec:Sympeak}), $x_c$ should approach zero as $t\rightarrow 0$. This can be rigorously shown from Eq.~(\ref{Eq:xc_condition}). If we assume that $x_c$ is finite as $t\rightarrow 0$, the leading order of Eq.~(\ref{Eq:xc_condition}) becomes
\begin{equation}
2s^2(2-s)(1-2 \mu)\left [\left \{(1-2\mu) x - 2(\mu_{c0} - \mu)\right \}^2 + 4 (1-s) \mu_{c0}^2 \right ] x^4 =0,
\end{equation}
where
\begin{equation}
\label{Eq:muc0}
\mu_{c0} = \frac{s}{2(2-s)}
\end{equation}
is the critical mutation probability for the symmetric problem derived in Appendix~\ref{Sec:Sympeak}. Since the terms in the square brackets are strictly positive, the only real solution is $x=0$. It might seem natural to assume that $x_c = c_1 t+O(t^2)$ as in Sec.~\ref{Sec:Smallmu}. However, this turns out to be wrong. To avoid being misled by incorrect intuition, let us expand $H(x_c)$ only with the assumption that $x_c$ is small, i.e., we do not specify how small $x_c$ is compared to $t$. Then, to leading order, the equation $H(x_c)=0$ becomes
\begin{equation}
2 x_c^4 + 4 x_c^3 t - 2 a_t x_c t^2 - a_t t^3=0,
\label{Eq:t_app}
\end{equation}
where
\begin{equation}
a_t = \frac{(s-2\mu)^2 \mu^2}{2 s (1-2\mu)(2 \mu^2 + \mu_{c0} (s-4\mu))}.
\end{equation}
Note that terms of order $x_c^5$ and $x_c^6$ are neglected compared to $x_c^4$. Actually, there is a term of order $x_c^2$ in $H(x_c)$, but its coefficient is $O(t^2)$, so it is neglected compared to $x_c t^2$. Let us assume that $x_c \sim t^\zeta$. If $\zeta \ge 1$, the solution of Eq.~(\ref{Eq:t_app}) is $x_c = - t/2$, which does not lie in the range $0<x_c < x_1$. If $\zeta < 1$, the leading behavior of Eq.~(\ref{Eq:t_app}) should be $x_c^4 - a_t t^2 x_c = 0$, which gives $x_c \approx (a_t t^2)^{1/3}$. A more accurate estimate of $x_c$ is derived in the Mathematica file in the online supplement, which reads
\begin{equation}
x_c \approx (a_t t^2)^{1/3}-\frac{t}{2}.
\label{Eq:t_xc}
\end{equation}
Note that the power $\frac{2}{3}$ was already observed when we calculated $\mu_c$ for $t\ll s$ in Eq.~(\ref{Eq:muc_smallt}). From Eq.~(\ref{Eq:rc_condition}) along with Eq.~(\ref{Eq:t_xc}) we get
\begin{eqnarray}
r_c \approx r_{c0} \left ( 1 + \frac{3\mu_{c0} (2 \mu^2 + \mu_{c0} (s-4 \mu))}{2 s \mu^2 (\mu_{c0}-\mu)} \left ( a_t t^2 \right )^{1/3} - \frac{2 \mu_{c0}^2 (1+s)}{s^2(\mu_{c0}-\mu)} t\right ),
\label{Eq:rc_t}
\end{eqnarray}
where $r_{c0}$ is the critical recombination probability for $t=0$ given by (see Appendix~\ref{Sec:Sympeak} for derivation)
\begin{equation}
r_{c0} = \frac{2 \mu^2}{(1-2\mu)(\mu_{c0} - \mu)}.
\label{Eq:rc0}
\end{equation}
We will use the same technique as in Sec.~\ref{Sec:Smallmu} to improve the quality of the approximation for $r_c$.
Since $r_c$ diverges at $\mu = \mu_c$ rather than at $\mu_{c0}$ (note that $\mu_{c0} > \mu_c$), we can connect the behavior for small $r_c$ with that for larger $r_c$ by writing
\begin{equation}
r_c = \frac{2 \mu^2}{(1-2\mu)(\mu_c - \mu)} ( 1 + \tilde \rho_0 t^{2/3} + \tilde \rho_1 t),
\label{Eq:rc_t_match}
\end{equation}
where the coefficients $\tilde \rho_0$ and $\tilde \rho_1$ are determined by requiring that the leading behavior of $r_c$ in Eq.~(\ref{Eq:rc_t_match}) is the same as that in Eq.~(\ref{Eq:rc_t}). This yields
\begin{eqnarray}
\tilde \rho_0 = \frac{3 \mu_{c0}}{2 s(\mu_{c0}-\mu)} \left \{ \frac{2 \mu^2 + \mu_{c0} (s- 4\mu)}{\mu^2} a_t^{1/3} -(1-s)(2\mu_{c0})^{1/3} \right \}, \quad \tilde \rho_1 =0,
\end{eqnarray}
where we have used a more accurate expression for $\mu_c$ than Eq.~(\ref{Eq:muc_smallt}), derived in the Mathematica file in the online supplement, which reads
\begin{equation}
\mu_c = \mu_{c0} - \frac{3(1-s)}{4(2-s)} (2 \mu_{c0} t^2)^{1/3} + \frac{2\mu_{c0}^2(1+s)}{s^2} t.
\end{equation}
For completeness, we present the corresponding $x_c^\infty$ which reads
\begin{equation}
x_c^\infty = \left ( \frac{\mu_{c0} t^2}{4} \right )^{1/3} - \frac{t}{2}.
\end{equation}
\begin{figure}
\includegraphics[width=\textwidth]{rct.eps}
\caption{\label{Fig:rct} Comparison of the exact $r_c$ with the approximate solutions Eqs.~(\ref{Eq:rc_t}) and (\ref{Eq:rc_t_match}) for $s=0.04$ and (a) $t=10^{-4}$, (b) $t=0.002$. For comparison, we also draw the critical recombination probability when $t=0$ ($r_{c0}$). For $t=10^{-4}$, both approximations are in good agreement with the exact solutions. As $t$ increases, Eq.~(\ref{Eq:rc_t}) starts to deviate from the exact solution, but the improved approximation still has predictive power. }
\end{figure}
Figure~\ref{Fig:rct} compares the exact values with the approximations Eqs.~(\ref{Eq:rc_t}) and (\ref{Eq:rc_t_match}). $s$ and $t$ in Fig.~\ref{Fig:rct}a are the same as those in Fig.~\ref{Fig:rcmu}b. For sufficiently small $t$, both approximations show good agreement. As anticipated, Eq.~(\ref{Eq:rc_t}) becomes worse as $t$ increases, even though Eq.~(\ref{Eq:rc_t_match}) is still in good agreement with the exact solution. Needless to say, Eq.~(\ref{Eq:rc_t_match}) fails when $t$ is not much smaller than $s$. Although Fig.~\ref{Fig:rct} seems to suggest that the agreement is good even for very small $\mu$, this is an artifact of $r_c$ itself being very small there. In this regime, one should use the approximation developed in Sec.~\ref{Sec:Smallmu}. To summarize our findings up to now, we have provided approximate expressions for $r_c$ which are valid in the specified regimes. Taken together, these expressions cover essentially the full range of biologically relevant parameters.
\subsection{\label{Sec:Fre}Frequency distributions}
So far we have investigated the critical recombination and mutation probabilities. To complete the analysis, we need to determine the frequency distribution at stationarity. For the LFS, this can be readily done using Eq.~(\ref{Eq:dr}). From Eq.~(\ref{xdef}) we see that the solution with the smaller $x>0$ will confer the larger mean fitness. Let $x_s$ ($x_l$) be the smaller (larger) positive solution. Equation~(\ref{AA}) shows that $B(x_s) < B(x_l)$ and $A(x_s) > A(x_l)$. Note that we are treating $A$ and $B$ as functions of $x$. If we rewrite $f_3$ as $1/(2 \sqrt{A/B} + 1 + A)$, it is clear that $f_3(x_s)$ must be smaller than $f_3(x_l)$.
Introducing $\Delta f_3 = f_3 (x_l) - f_3(x_s)$ and $\Delta f_0 = f_0(x_l) - f_0(x_s)$, the mean fitness difference can be written as
\begin{equation}
\Delta \bar w \equiv \bar w(x_l) - \bar w(x_s) = (w_0 - w_1)\Delta f_0 + (w_3 - w_1)\Delta f_3 .
\end{equation}
Since $\Delta \bar w$ is negative and $\Delta f_3$ is positive, $\Delta f_0$ must be negative. That is, the solution with the smaller $x>0$ should confer the larger value of $f_0$. Below we will argue that the solution with the larger $f_0$ is stable. Hence we limit ourselves to the study of the solution with the smaller $x$. We also present approximate expressions for the HFS. As before, we have to conduct separate analyses depending on which parameter is the smallest. For these analyses we will use the expansions Eqs.~(\ref{Eq:rc_mu}) and (\ref{Eq:rc_t}) for $r_c$ rather than the improved approximations Eqs.~(\ref{Eq:rc_match}) and (\ref{Eq:rc_t_match}), which are not suited for a systematic perturbative solution. We begin with the case when $\mu \ll s,t$. The functions $h''(x_c,r_c)$ and $h_1(x_c)$ in (\ref{Eq:dr}) then become
\begin{equation}
- h''(x_c,r_c) \approx 2 t \beta,\; h_1(x_c) \approx a_\mu s^2 t \mu,
\end{equation}
which gives
\begin{equation}
\varepsilon_x \approx - x_c \left (\frac{s^2 \varepsilon_r }{ (1-t)\sqrt{\alpha\beta} \mu}\right)^{1/2} \equiv - x_c \epsilon,
\end{equation}
where $x_c = a_\mu \mu$ and the definition of $\epsilon$ is clear. If we set $x = x_c+ \varepsilon_x$ and $r = r_c + \varepsilon_r = t+ c_\mu \mu + \varepsilon_r$ from (\ref{Eq:rc_mu}), we get
\begin{equation}
A = 1 + \frac{t}{x} \approx \frac{t}{x_c(1 - \epsilon)},\quad B = \frac{\alpha}{t \left (\varepsilon_r - 2 \varepsilon_x + (c_\mu - 2 a_\mu) \mu\right )},
\end{equation}
which are large. Note that $A$ becomes negative when $\epsilon> 1$, which means that the regime of validity of the above approximation is quite narrow. To leading order, the population frequencies are then obtained from (\ref{Eq:reduce}) as
\begin{eqnarray}
f \approx \frac{1}{ \sqrt{A B}} \approx \sqrt{\alpha}\frac{1-t}{s}\mu \left (1 + \sqrt{\frac{\alpha}{\beta} } ( 1- \epsilon)\right ) ,\; f_3 \approx \frac{1}{A} \approx \frac{1-t}{t} \sqrt{\frac{\alpha}{\beta}}(1 - \epsilon)\mu,
\label{Eq:small_mu_f0}
\end{eqnarray}
and, by normalization, $f_0 = 1 - f_3 - 2 f$. The second (unstable) solution is obtained by setting $\epsilon \mapsto - \epsilon$. To find the HFS, we shift the variable $x$ to $y = -(x+t)$ and look for a solution of $g(y) \equiv h(-t -y)=0$. As shown in the Mathematica file in the online supplement, the HFS is located at $y = 0$ ($x = -t$) for $\mu \rightarrow 0$. This is a consequence of the fact that when $\mu=0$, the stationary fitness of the HFS is $\bar w = w_3$. If we now set $y = \sum_{n\ge 1} y_n \mu^n$ and expand $g(y)$ as a power series in $\mu$, the equation $g(y)=0$ gives (see the Mathematica file in the online supplement)
\begin{equation}
y_1 = 0,\; y_2 = \frac{r(1-s-t)^2 + (s+t)(2 - s - t)}{(s+t)^2(r(1-t)+t)} t,\; y_3 = 2 y_2 \frac{ t(s+t)^2 (1-r) + r (2 s + t)}{(r (1-t) + t)(s+t)^2}.
\label{Eq:y2}
\end{equation}
Up to $O(\mu^2)$, the genotype frequencies for the HFS become
\begin{equation}
f \approx \frac{\mu}{s+t}(1-y_2 \mu),\quad f_0 \approx A \approx \frac{y_2}{t} \mu^2,
\label{Eq:small_mu_f3}
\end{equation}
and $f_3 = 1 - 2 f - f_0$. One can see qualitative differences between Eqs.~(\ref{Eq:small_mu_f3}) and (\ref{Eq:small_mu_f0}). First, the frequency of the less populated fitness peak genotype is different in the two cases.
In Eq.~(\ref{Eq:small_mu_f0}), $f_3 = O(\mu)$, but in Eq.~(\ref{Eq:small_mu_f3}), $f_0 = O(\mu^2)$. However, this tendency cannot persist when $r$ is large. For example, if $r=1$, Eq.~(\ref{Eq:link}) suggests that $f_3$ should be $O(\mu^2)$ provided $f$ is still $O(\mu)$. Hence, this qualitative difference only occurs when $r$ is close to $r_c$. Second, the leading behavior of the frequency $f$ of valley genotypes does not depend on $r$ in Eq.~(\ref{Eq:small_mu_f3}), which is not true in Eq.~(\ref{Eq:small_mu_f0}) because of the dependence on $\epsilon$. To analyze the stability of the solutions, we linearize Eq.~(\ref{Eq:two_locus_eq}) at the steady state frequency. For the stability analysis, we assume that $f_1(\tau) = f_2(\tau)$ for all $\tau$, which is true if they are equal initially. The linearization then yields a square matrix with rank 2, whose largest eigenvalue (in absolute value) determines the stability. For the HFS, the eigenvalues up to $O(\mu^0)$ are $1-s-t$ and $(1-t)(1-r)$ which are smaller than 1. Hence, the HFS is always stable. At $r=r_c$, the largest eigenvalue for the LFS is expected to be 1. Since we are restricted to an approximation up to $O(\mu)$, all we can show is that the largest eigenvalue of the LFS is $1+O(\mu^2)$ at $r=r_c$. In the Online Supplement, we show that the largest eigenvalue for $s=t$ becomes $1+O(\mu^2)$ and the smaller one is $(1-s-t)/(1-t)$. When treated numerically, it is easy to see that the stable solution indeed corresponds to the smaller $x$ (details not shown). The next step we will take is to find the frequency distribution of the LFS in the case that $t$ is much smaller than $s$ and $\mu$. From Eqs.~(\ref{Eq:t_xc}) and (\ref{Eq:rc_t}), we get (up to leading order) \begin{eqnarray} - h''(x_c,r_c) = \frac{6 s (2 \mu^2 + \mu_{c0} (s-4\mu))}{\mu_{c0} -\mu} (a_t t^2)^{1/3},\nonumber\\ h_1(x_c) = 2s(2-s)(1-2\mu)(\mu_{c0}-\mu) (a_t t^2)^{2/3}, \end{eqnarray} thus from Eq.(\ref{Eq:dr}) \begin{equation} \varepsilon_x =- (\mu_{c0}-\mu)(a_t t^2)^{1/3} \left ( \frac{2 (2-s)(1-2\mu) }{3(2 \mu^2 + \mu_{c0} (s-4\mu))} \frac{\varepsilon_r}{(a_t t^2)^{1/3}} \right)^{1/2}\equiv - x_c \eta, \end{equation} where we have kept $x_c = (a_t t^2)^{1/3}$ up to leading order from Eq.~(\ref{Eq:t_xc}) and $\eta$ has an obvious meaning. Accordingly, $A$ and $B$ become \begin{eqnarray} A &\approx& 1 + A_t,\\ \sqrt{B}&\approx& \frac{s-2\mu}{2 \mu} \left ( 1 + B_t x_c + B_r \varepsilon_r + B_x \varepsilon_x \right ), \end{eqnarray} where \begin{eqnarray} A_t = \frac{t}{x_c} \left ( 1 - \eta\right )^{-1},\quad B_t = \frac{8 \mu^2 - 4 s \mu (1+\mu) - s^2(1-4\mu)}{8 a_t (s^2 + 8 \mu^2 -4 s \mu (1+\mu))},\nonumber \\ B_r = - \frac{2 s^2 \mu^2}{\mu_{c0} (s-2\mu)^2 r_{c0}^2} ,\quad B_x = \frac{s(1-2\mu)(s-4\mu)(\mu_{c0} - \mu)}{2 (s-2\mu)^2 \mu^2}. \end{eqnarray} The above approximation is valid only when $\eta\ll 1$ ($\varepsilon_r \ll t^{2/3}$). Note that unlike the previous case, $A$ is close to $1$ ($A_t \sim t^{1/3}$). Hence the frequency distribution for the LFS becomes \begin{eqnarray} f \approx \frac{\mu}{s} \left ( 1 - 2 \left (\frac{1}{2}-\frac{ \mu}{s} \right )\left ( B_t x_c + B_r \varepsilon_r +B_x\varepsilon_x + \frac{A_t^2}{8}\right )\right ) \label{Eq:f1t} ,\\ \label{Eq:f3t} f_3 \approx \left (\frac{1}{2} - \frac{\mu}{s} \right ) \left (1 - \frac{A_t}{2} \right ) ,\\ f_0 \approx \left (\frac{1}{2} - \frac{\mu}{s} \right ) \left (1 + \frac{A_t}{2} \right ), \label{Eq:f0t} \end{eqnarray} where we have kept the leading order of each term. 
\subsection{\label{Sec:Landau}Landau theory}

In this subsection, we develop an approximation that is valid when $r$ is close to $r_c$ and the asymmetry of the fitness landscape is small, in the sense that $t$ is smaller than all other parameters. This approximation is inspired by the Landau theory of phase transitions in physics, and it will allow us to represent both the LFS and the HFS in a simple, compact form.

We start from the observation that, according to Eqs.~(\ref{Eq:f1t}), (\ref{Eq:f3t}), and (\ref{Eq:f0t}), the valley genotype frequency $f \approx \mu/s$ in the regime of interest, with the peak frequencies $f_3$ and $f_0$ symmetrically placed around $1/2 - \mu/s \approx 1/2 - f$. Moreover, the difference $f_0 - f_3 \approx A_t \sim t^{1/3}$ becomes small for $t \to 0$. This motivates the parametrization
\begin{equation}
f_0 = \left ( \frac{1}{2} - f \right ) \left ( 1 - u \right ), \quad
f_3 = \left ( \frac{1}{2} - f \right ) \left ( 1 + u \right ),
\label{Eq:defu}
\end{equation}
which defines the new variable $u$. Inserting this into Eq.~(\ref{Eq:pf1}) with $f'_i = f_i$ we obtain
\begin{equation}
\bar w = (1-2\mu) \left ( 1 + \left ( 1 - u \right ) \frac{t}{2 u} \right ).
\label{Eq:barwlandau}
\end{equation}
On the other hand, from the definition (\ref{normalization2}) of $\bar w$, we find the relation between $f$ and $u$
\begin{equation}
f = - \frac{t}{2 u} \frac{(1-u)(1+u-2\mu)}{2 s + t(1+ u)} + \frac{2\mu}{2 s + t (1+u)}.
\label{Eq:fu}
\end{equation}
Up to now, everything is exact. Note that when $u\ll 1$ and $t\ll u$, the leading behavior of $f$ in Eq.~(\ref{Eq:fu}) is $\mu/s$, which is consistent with the LFS frequency distribution in Eq.~(\ref{Eq:f1t}). Moreover, as the mean fitness of the HFS for small $t$ is not expected to deviate much from $1-2\mu$, the HFS also requires that $t \ll u$. So for all solutions, the leading behavior of $f$ is $\mu/s$. This is rather different from the case when $\mu$ is the smallest parameter. Keeping the leading terms under the assumption that $t \ll \mu \ll s \ll 1$ and $t\ll u \ll 1$, from Eqs.~(\ref{Eq:LD}) and (\ref{Eq:fu}) we obtain the equation
\begin{equation}
t - ( r_0 - r )u - r u^3 =0
\label{Eq:Landau}
\end{equation}
for $u$, where $r_0 = 8\mu^2/s$. If we interpret $r$ as the (inverse) temperature, $t$ as the external magnetic field, and $u$ as the magnetization, this has precisely the form of the Landau equation for the para- to ferromagnetic phase transition \citep{Plischke2006}.

The general solution of Eq.~(\ref{Eq:Landau}) can be written in a compact form. Let
\begin{equation}
\label{Eq:Delta}
\Delta = \left(\frac{t}{2 r} \right)^2 - \left( \frac{r-r_{0}}{3 r} \right)^3
\end{equation}
denote the discriminant of Eq.~(\ref{Eq:Landau}). When $\Delta>0$, there is only one real solution, which reads
\begin{equation}
u_\mathrm{HFS} = \left ( \frac{t}{2 r} + \sqrt{\Delta}\right)^{1/3}+ \left ( \frac{t}{2 r} - \sqrt{\Delta}\right)^{1/3}.
\label{Eq:plusD}
\end{equation}
For $r$ sufficiently far below $r_0$, in the sense that $r_0 - r \gg (t^2 r)^{1/3}$, this reduces to
\begin{equation}
\label{Eq:u_HFS_below}
u_\mathrm{HFS} \approx \frac{t}{r_0 - r},
\end{equation}
which is the solution of Eq.~(\ref{Eq:Landau}) with the cubic term omitted. When $\Delta<0$, there are three real solutions,
\begin{eqnarray}
u_\mathrm{HFS} = 2 \left ( \frac{r-r_{0}}{3 r} \right )^{1/2} \cos\frac{\theta}{3},\quad u = -2 \left ( \frac{r-r_{0}}{3 r} \right )^{1/2} \sin\left ( \frac{\pi}{6}\mp \frac{\theta}{3}\right ),
\label{Eq:minusD}
\end{eqnarray}
where $\tan\theta = 2 r \sqrt{|\Delta|}/t$ with $0\le \theta \le \pi/2$. The stable LFS corresponds to the smallest value of $u$, which yields the larger $f_0$ among the two solutions with negative $u$ (see the discussion at the beginning of Sect.~\ref{Sec:Fre}),
\begin{equation}
\label{Eq:Landau_LFS}
u_\mathrm{LFS} = - 2\left ( \frac{r-r_{0}}{3 r} \right )^{1/2} \sin\left ( \frac{\pi}{6} + \frac{\theta}{3}\right ).
\end{equation}
One can easily see that for $t\rightarrow 0$ ($\theta\rightarrow \pi/2$) the solutions (\ref{Eq:minusD}) and (\ref{Eq:Landau_LFS}) approach the symmetric peak solutions
\begin{equation}
\label{Eq:symmetric_peaks}
u_\mathrm{HFS} = \sqrt{1 - r_0/r}, \;\;\; u_\mathrm{LFS} = - \sqrt{1 - r_0/r}.
\end{equation}
The critical recombination probability can be found from $\Delta=0$, which gives
\begin{equation}
r_c = 8 \frac{\mu^2}{s} \left ( 1 + \frac{3}{4} \left ( \frac{ st}{2 \mu^2} \right )^{2/3} + O(t^{4/3}) \right ).
\end{equation}
Note that this agrees with Eq.~(\ref{Eq:rc_t}) only up to order $t^{2/3}$.
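Both branches and the location of the threshold $\Delta = 0$ are straightforward to verify numerically. A minimal sketch (Python with NumPy and SciPy); the parameter values are those of Fig.~\ref{Fig:Landau} and serve purely as an illustration:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

mu, s, t = 1e-3, 1e-2, 1e-6
r0 = 8 * mu**2 / s

# All roots of the Landau cubic t - (r0 - r) u - r u^3 = 0 at given r.
def u_roots(r):
    return np.roots([-r, 0.0, -(r0 - r), t])

r = 1.2e-3                                   # here Delta < 0: three real roots
Delta = (t / (2 * r))**2 - ((r - r0) / (3 * r))**3
theta = np.arctan2(2 * r * np.sqrt(-Delta), t)
a = 2 * np.sqrt((r - r0) / (3 * r))
u_hfs = a * np.cos(theta / 3)                # largest root
u_lfs = -a * np.sin(np.pi / 6 + theta / 3)   # smallest root (stable LFS)
print(np.sort(u_roots(r).real), u_lfs, u_hfs)

# Critical recombination rate from Delta = 0 vs. the series for r_c.
disc = lambda rr: (t / (2 * rr))**2 - ((rr - r0) / (3 * rr))**3
rc_num = brentq(disc, r0 * (1 + 1e-9), 1.0)
rc_series = r0 * (1 + 0.75 * (s * t / (2 * mu**2))**(2.0 / 3.0))
print(rc_num, rc_series)
\end{verbatim}
The three real roots returned by \texttt{np.roots} should match $u_\mathrm{LFS}$, the unstable middle solution, and $u_\mathrm{HFS}$, and the numerical root of $\Delta = 0$ should agree with the series for $r_c$ to the stated order.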
\begin{figure}
\begin{center}
\mbox{ \includegraphics[width=0.8\textwidth]{landau.eps} }
\end{center}
\caption{\label{Fig:Landau} Plots of $u = (f_3 - f_0)/(1-2f)$ obtained from the exact numerical solutions (symbols) and from Eq.~(\ref{Eq:Landau}) (full curves) as a function of $r$ for $t=10^{-6}$, $s=10^{-2}$, and $\mu = 10^{-3}$. For these parameters $r_{0} = 8 \times 10^{-4}$ and $r_{c0} \approx 1.325 \times 10^{-3}$. The filled squares are stable solutions and the open circles are unstable solutions. The dashed line shows the symmetric peak solutions (\ref{Eq:symmetric_peaks}) for $r > r_{c0}$. The approximate solution is seen to be valid well beyond the regime where $\eta<1$. }
\end{figure}

Although the LFS expressions (\ref{Eq:f1t}), (\ref{Eq:f3t}), and (\ref{Eq:f0t}) are valid only when $\varepsilon_r \ll t^{2/3}$, the approximate solutions Eqs.~(\ref{Eq:plusD}) and (\ref{Eq:minusD}) turn out to be in good agreement with the exact solutions, provided $r_0$ is replaced by the exact critical recombination rate $r_{c0}$ for the symmetric peak problem [see Eq.~(\ref{Eq:rc0})]. In Fig.~\ref{Fig:Landau}, we compare the exact values of $u$ with the approximate solutions for $t=10^{-6}$, $s=10^{-2}$, and $\mu = 10^{-3}$. For these parameters, $\eta$ becomes larger than 1 already when $\varepsilon_r \approx 3\times 10^{-5}$.

\subsection{\label{Sec:fitness}Behavior of the mean fitness}

We are now prepared to discuss the dependence of the mean population fitness $\bar w$ on the recombination rate. Since the mean fitness is linearly related to the auxiliary variable $x$ through (\ref{xdef}), this amounts to examining how the solutions of (\ref{quartic}) vary with $r$. Solving (\ref{quartic}) for $r$ we obtain
\begin{equation}
r(x) = - \frac{h_0(x)}{h_1(x)}
\end{equation}
and $r'(x) = H(x) /h_1(x)^2$, where $H(x)$ was defined in (\ref{Eq:xc_condition}). Since $H(x)$ has a unique root $x_c$ in the regime of interest, we conclude that $r(x)$ displays a minimum at $x=x_c$. Recalling that the stable LFS corresponds to the smaller of the two solutions of (\ref{quartic}) with $x > 0$, it follows that the mean fitness of the stable (unstable) LFS increases (decreases) with $r$.
In addition, the fitness of the HFS must be a monotonic function of $r$. The behavior of all three solutions is illustrated in Fig.~\ref{Fig:fitness}, which shows that the fitness of the HFS decreases with $r$.

\begin{figure}[t]
\includegraphics[width=\textwidth]{mfex.eps}
\caption{\label{Fig:fitness}Behavior of the mean fitness as a function of $r$ for (a) the HFS on a linear scale and (b) the LFS on a semi-logarithmic scale. The parameters are $t=s=0.1$ and $\mu = 0.001$. For clarity, we show the difference between $\bar w$ and its limiting value for large $r$, which is $w_3(1-2 \mu)$ for the HFS and $w_0(1-2\mu)$ for the LFS. (a) Mean fitness decreases with $r$. (b) Mean fitness increases (decreases) with $r$ for the stable (unstable) LFS. Note that this panel displays the positive quantity $w_0 - \bar w/(1 - 2 \mu)$. }
\end{figure}

These results can also be deduced from the approximate solutions given in Sects.~\ref{Sec:Fre} and \ref{Sec:Landau}. In particular, taking the derivative of (\ref{Eq:barwlandau}) with respect to $u$ gives
\begin{equation}
\frac{\partial \bar w}{\partial u} = - (1-2\mu) \frac{t}{2 u^2} < 0 .
\end{equation}
Since $u$ is an increasing (decreasing) function of $r$ for the HFS (LFS), it follows that the fitness decreases with $r$ in the former case but increases in the latter. For $r \to \infty$ the solutions (\ref{Eq:minusD},\ref{Eq:Landau_LFS}) approach $u_\mathrm{HFS} \to 1$ and $u_\mathrm{LFS} \to -1$, respectively, with the corresponding limiting fitness values $\bar w_\mathrm{HFS} \to w_3(1-2 \mu)$ and $\bar w_\mathrm{LFS} \to w_0(1-2 \mu)$. As can be anticipated from Eq.~(\ref{Eq:03}) (see also the discussion in Sect.~\ref{Sec:gen}), the limit is approached from above for the HFS but from below for the LFS. Note that in the symmetric case ($t = 0$) the fitness is $\bar w = w_3 (1-2 \mu) = w_0 (1-2\mu)$, independent of $r$ for $r > r_c$ (see Appendix~\ref{Sec:Sympeak}).

\subsection{\label{Sec:asymmetric}Asymmetric valley fitnesses}

In this final subsection, we briefly consider how the results would be affected if $w_1 =1-t-s_1$ and $w_2 = 1 - t - s_2$ with $s_1 \neq s_2$ (without loss of generality, we can set $s_1< s_2$). We first show that bistability requires reciprocal sign epistasis, i.e., both $s_1$ and $s_2$ have to be positive. To see this, suppose that $s_1 < 0$ and $s_2 > 0$, such that the ordering of the fitness values is $w_2 < w_0 < w_1 < w_3$. Then positivity of Eq.~(\ref{Eq:12}) requires $\bar w > (1- 2 \mu) w_1$ or $\bar w < (1 - 2 \mu) w_2 < w_2$. The latter possibility is ruled out because the mean fitness cannot be lower than the fitness of the least fit genotype, and the former inequality contradicts the condition $\bar w < (1 - 2 \mu) w_0$ imposed on the LFS by the positivity of Eq.~(\ref{Eq:03}). We conclude that only the HFS with mean fitness in the range $(1 - 2 \mu) w_3 < \bar w < w_3$ can exist.

To extract some information about the case $s_2 > s_1 > 0$, we introduce the variable $u$ in a similar way to Eq.~(\ref{Eq:defu}), such that
\begin{equation}
\label{Eq:u_gen}
f_0 = ( 1 - f_1 - f_2) \frac{1-u}{2},\quad f_3 = (1 - f_1 - f_2) \frac{1+u}{2},
\end{equation}
which again yields Eq.~(\ref{Eq:barwlandau}) for $\bar w$. This parametrization, together with Eq.~(\ref{Eq:12}), gives
\begin{equation}
f_1 = f_2 + \frac{2 u (s_2 - s_1)}{t (1+u) + 2 u s_1} f_2.
\end{equation}
Thus, if $s_2 - s_1 \ll t, s_1, s_2$, setting $f_1 = f_2 = f$ is a good approximation and the results presented above remain valid.
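As a quick numerical illustration of this statement, the ratio $f_1/f_2$ implied by the last equation can be tabulated directly (the parameter values below are arbitrary):
\begin{verbatim}
# Ratio f1/f2 = 1 + 2 u (s2 - s1) / (t (1 + u) + 2 u s1) from the relation
# above; the deviation from 1 is linear in s2 - s1.
def ratio(u, s1, s2, t):
    return 1 + 2 * u * (s2 - s1) / (t * (1 + u) + 2 * u * s1)

for ds in (1e-4, 1e-3, 1e-2):
    print(ds, ratio(0.5, 0.1, 0.1 + ds, 0.1))  # equals 1 + 4 ds here
\end{verbatim}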
When $s_2 - s_1$ is comparable to the other parameters, the calculation is much more complicated. We will not treat this case in any detail, but we can provide necessary conditions for bistability. First, the necessary condition for $r$ given in Eq.~(\ref{Eq:r_cond}) is still valid, because Eq.~(\ref{Eq:r_cond}) is obtained only from the denominator of Eq.~(\ref{Eq:link}). A necessary condition on $\mu$ similar to Eq.~(\ref{Eq:mu_cond}) is also available from the requirement that $w_2 < \bar w < (1-2 \mu) w_0$, but the result is not very useful because it is independent of the magnitude (or sign) of $s_1$. To get a refined condition, we need some further analysis. From the definition of $\bar w$, we find (see also the Mathematica file in the online supplement)
\begin{eqnarray}
\label{Eq:f1f2}
f_2+f_1 =\frac{\left(t \left(2 \mu (1-u)+u^2-1\right)+4 \mu u\right) (u (s_1+s_2)+t (u+1))}{u \left(s_1 \left(4 s_2 u+t (u+1)^2\right)+t (u+1)^2 (s_2+t)\right)},
\end{eqnarray}
which should be smaller than $1$ for $-1 < u <1$. Now assume that a critical value $r_c$ exists such that bistability appears for $r_c < r \le 1$. For $r=1$, Eq.~(\ref{Eq:link}) shows that $f_0 f_3 = f_1 f_2$ for any equilibrium solution. On the other hand, we expect that for small $\mu$ the frequencies of both valley genotypes are small, $f_1 f_2 \ll 1$. Computing $f_0 f_3$ using Eq.~(\ref{Eq:u_gen}), this implies that $u$ for the LFS should be very close to $-1$ when $r=1$. Expanding Eq.~(\ref{Eq:f1f2}) around $u = -1$ gives
\begin{equation}
f_1+f_2 = \mu \frac{(s_1 + s_2)(1-t)}{s_1 s_2} + \frac{t}{2 s_1 s_2} (s_1 + s_2 - \mu (2+ s_1 + s_2 - 2 t)) ( 1 + u) + O((1+u)^2).
\end{equation}
For an LFS to be possible, the leading-order term should be smaller than unity, which gives
\begin{equation}
\mu < \frac{s_1 s_2}{(s_1+s_2) (1-t)} = \frac{s_H}{2(1-t)},
\label{Eq:diffw1w2}
\end{equation}
where $s_H$ is the harmonic mean of $s_1$ and $s_2$. Note that Eq.~(\ref{Eq:diffw1w2}) reduces to Eq.~(\ref{Eq:mu_cond}) when $s_1=s_2$. Hence, if $0 < s_1\ll s_2$, $\mu$ must be smaller than approximately $s_1/(1-t)$ for multiple solutions to exist.
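The bound (\ref{Eq:diffw1w2}) is simple enough to evaluate directly; the following snippet (with arbitrary illustrative values) shows both the symmetric limit and the strongly asymmetric limit $s_1 \ll s_2$:
\begin{verbatim}
# Upper bound on mu for bistability, Eq. (diffw1w2): mu < s_H / (2 (1 - t)).
def mu_max(s1, s2, t):
    s_h = 2 * s1 * s2 / (s1 + s2)   # harmonic mean of s1 and s2
    return s_h / (2 * (1 - t))

print(mu_max(0.1, 0.1, 0.1))     # symmetric case: s/(2(1-t)) ~ 0.0556
print(mu_max(0.001, 0.1, 0.1))   # s1 << s2: close to s1/(1-t) ~ 0.0011
\end{verbatim}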
\section{\label{Sec:Dis}Discussion}

In this work we have presented a detailed analysis of a deterministic, haploid two-locus model with a fitness landscape displaying reciprocal sign epistasis. We have established the conditions for the occurrence of bistability, and derived accurate approximations for the critical recombination rate $r_c$ at which bistability sets in. For $r < r_c$ there is a single equilibrium solution in which the fittest genotype is most populated. For $r > r_c$ we find two stable equilibrium solutions, one of which is concentrated on the fittest genotype (the HFS) and a second one concentrated on the lower fitness peak (the LFS). For $\mu \to 0$ these two solutions become point measures, in the sense that $f_3 \to 1$ and $f_0 \to 1$, respectively, but for any finite mutation probability all genotypes are present at nonzero frequency.

We briefly summarize the most important quantitative results presented in this paper. The expressions (\ref{Eq:rc_mu}) and (\ref{Eq:rc_t}) for $r_c$ are based on a systematic expansion in terms of the mutation probability $\mu$ and the peak asymmetry $t$, respectively, while the interpolation formulae Eq.~(\ref{Eq:rc_match}) and Eq.~(\ref{Eq:rc_t_match}) provide numerically accurate values of $r_c$ over a wide range of parameters. In particular, our results show that the lower bound (\ref{Eq:r_cond}) on $r_c$ becomes an equality for $\mu \to 0$, which is consistent with earlier results obtained either directly at $\mu = 0$ \citep{Feldman1971,Rutschman1994} or using a unidirectional mutation scheme \citep{Eshel1970,Karlin1971}. Clearly, the limiting behavior for $\mu \to 0$ should not depend on the mutation scheme employed. Approximate results for the stationary frequency distributions are found in Eqs.~(\ref{Eq:small_mu_f0}), (\ref{Eq:small_mu_f3}) and Eqs.~(\ref{Eq:f1t}), (\ref{Eq:f3t}), (\ref{Eq:f0t}). Of particular interest are the simple formulae derived from the cubic equation (\ref{Eq:Landau}), which are remarkably accurate close to the bistability threshold and for small $t$.

The consequences of our results for the question of a possible evolutionary advantage of recombination are twofold. Dynamically, the onset of bistability implies that recombination strongly suppresses the escape of large populations from suboptimal fitness peaks. In the deterministic infinite population limit considered here, the escape time diverges at $r = r_c$ \citep{Jain2010}, whereas in finite populations one expects a rapid (exponential) increase of the escape time for $r > r_c$ \citep{Stephan1996,H1998,Weinreich2005a}. In a multipeaked fitness landscape, recombination can therefore dramatically slow down adaptation \citep{deVisser2009}. On the other hand, we have seen in Sect.~\ref{Sec:fitness} that the equilibrium mean fitness may increase or decrease with the recombination rate when $r > r_c$, depending on which of the two equilibria is considered. In a fitness landscape with more than two peaks one anticipates an even richer structure of stationary solutions, with a correspondingly complex dependence on the recombination rate.

It would be of considerable interest to extend the present study to finite populations. For the case of symmetric peaks ($t=0$) this problem has been addressed by \citet{H1998} in the framework of a diffusion approximation. A key step in his analysis was the reduction to a one-dimensional problem by fixing the frequency of the valley genotypes at its stationary value $f = \mu/s$. However, we have seen above that when $s$ and $t$ are large in comparison to $\mu$, $f$ cannot be treated as a constant, and therefore the reduction to a one-dimensional diffusion equation is generally not possible. Some progress could be made in the regime where the (effectively one-dimensional) Landau equation (\ref{Eq:Landau}) applies, and we intend to pursue this approach in the future.

\section*{Acknowledgments}
We wish to thank Alexander Altland, Reinhard B\"urger, Arjan de Visser, Paul Higgs and Kavita Jain for useful discussions. Support by the Deutsche Forschungsgemeinschaft within SFB 680 \textit{Molecular Basis of Evolutionary Innovations} is gratefully acknowledged. In addition, J.K. acknowledges support by the NSF under grant PHY05-51164 during a visit to the KITP, Santa Barbara, where this work was begun, and S.-C.P. acknowledges support by the Catholic University of Korea, Research Fund, 2010.
{ "redpajama_set_name": "RedPajamaArXiv" }
2,167
{"url":"https:\/\/9to5science.com\/count-the-the-number-of-elements-in-a-set-exactly-divisible-by-2-out-of-3-numbers","text":"Count the the number of elements in a set, exactly divisible by 2 out of 3 numbers\n\n2,143\n\nSolution 1\n\nFor a $10$-year old child, (or a substantially older mathematician), a useful way to begin is by experimenting. The problem is about undelining integers. So write down a fairly long initial string $1,2,3,4,5,\\dots$ of natural numbers, and start underlining.\n\nAfter a while, possibly with guidance, it may be discovered that the numbers $1$ to $12$ have $3$ \"doubles,\" and that the underlining pattern starts all over again at $13$. Every full group of $12$ contributes $3$ doubles. So about $1\/4$ of our $2006$ numbers should be doubles.\n\nMore precisely, the last full group of $12$ ends at $2004$, and neither $2005$ nor $2006$ is a double. So the number of doubles is one-quarter of $2004$.\n\nSolution 2\n\nFirst, in order for a number to be underlined twice, it must be even (since it must be divisible by $2$ or $4$). There are are $1003$ such numbers. Every number in this list is even. For a number to be underlined twice, it is either divisible by $2$ and $4$ or $2$ and $3$.\n\nThe numbers in our list are $\\{2(1), 2(2), ..., 2(1003)\\}$. In order for a number to be divisible by $2$ and $4$, it must be $2(n)$, where $n \\in \\{1, ... , 1003\\}$ is even. Exactly two thirds of those numbers will additionally not be divisible by $3$. How many of those are there?\n\nIn order for a number to be divisible by $2$ and $3$, it must be of the form $2(n)$, where $n \\in \\{1, ... , 1003\\}$ is a multiple of $3$. Exactly one third of numbers in $\\{1,...,1003\\}$ are multiples of $3$. Additionally, $n$ must be odd (else 2n is divisible by $4$ as well). How many odd multiples of $3$ are in $\\{1, ..., 10003\\}$?\n\nShare:\n2,143\nAuthor by\n\ngd047\n\nUpdated on June 23, 2022\n\n\u2022 gd047 about 21 hours\n\nI need a hint to solve the following problem, in a way that a 10yr old child can understand.\n\nOn a blackboard, all whole numbers from 1 to 2006 were written. John underlined all numbers divisible by 2, Adam underlined all numbers divisible by 3 and Peter underlined all numbers divisible by 4. How many numbers were underlined exactly twice?\n\n\u2022 JohnPhteven over 9 years\n@experimentX but also the even numbers divisble by 3 right (such as 6).\n\u2022 Santosh Linkha over 9 years\n@ZafarS sorry, yes you are right!!\n\u2022 anonymous over 9 years\nBut the question wants those which were underlined exactly twice. If a number is divisible by 2, 3, and 4, it doesn't meet this criteria (12, for example) because it will be underlined 3 times. You need to discard those numbers.\n\u2022 JohnPhteven over 9 years\nOh, I read over that part. So that just means every even number divisible by 3 but not by 4.\n\u2022 gd047 over 9 years\nSo, how many of them are underlined exactly twice?\n\u2022 JohnPhteven over 9 years\n@gd047 That is for you to figure out. You asked for a hint, not for the answer. If you're having trouble just follow my steps (or anonymous' steps in the other answer) and you'll get there..\n\u2022 gd047 over 9 years\nDivisible by 4 are 501. All even numbers divisible by 3 are 334. Divisible by both 4 and 3 are 167. But the answer in not 501+334-167\n\u2022 JohnPhteven over 9 years\n@gd047 167 are all numbers divisible by 2 and 3 but not by 4. 334 are all numbers divisible by 2 and 4 but not by 3. 334+167=501. I'm assuming this is for your child? 
If you need further explanation just comment again.\n\u2022 gd047 over 9 years\nNice! Just correct the numbers of \"doubles\" in the group of 12 to 3.\n\u2022 Andr\u00e9 Nicolas over 9 years\n@gd047: Thanks for the correction! The serious point I wanted to make is that we must \"get our hands dirty.\" The only kind of problem for which we don't need to do that is a problem close to one whose solution we have seen before.","date":"2022-06-26 17:51:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7032144069671631, \"perplexity\": 400.00607538775654}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656103271763.15\/warc\/CC-MAIN-20220626161834-20220626191834-00559.warc.gz\"}"}
null
null
Now when the ice melted, were there any cars, factories, or air pollution like CO2 being released in the atmosphere during that time? The answer is no! So how did that ice melt? The answer is through the natural cycles of the earth. I think what the earth could be currently going through is a transition through a cycle change when it comes to warming of the earth. The current rate of the earth warming, even if you determine that the earth is warming up, is not solely due to the emission of CO2. Now, don't get me wrong, we need to reduce pollution as much as possible. But you can't conclude that the CO2 emission has a direct impact with the rate of the earth warming up. The natural cycle of the earth has more of a dominate effect to the warming of the earth. The next question is why the increase in violent weather storms? The answer is the increase in violent storms is from white people's constant mistreatment of African-Americans and black people world wide. The violent tornadoes, earthquakes, and hurricanes have a direct relationship on white people's mistreatment of black people period. Let's not get it twisted. These violent storms will continue and will intensify as long as whites continue to oppress blacks through these deceitful actions like crushing potential black families from forming through these interracial moves like marrying the best and brightest of our black women and men. As well as using Hispanics as pawns in their highly sophisticated genocide plan to totally and literally replace us with Hispanics. Let me follow up on this detrimental interracial situation. Any Black person who marries a whiteman or woman need to turn in their Black Card. Your perception of black people is warped once you make that move. Let's look at it deeper. What white "Americans" collectively did to us in slavery was never reversed mentally. How in the hell can you marry a cracker, or anyone outside the race when we have not mentally got out of the condition that we are still under? When you marry outside the race you give up half your soul to that person. So now you spend half of your time going to the cracker's cottage sipping tea with Aunt Martha. You cut half of your soul off from your people. You're not qualified to have any say on helping solving Black people's problem. Turn in your damn Black Card. That's why the late great Dr. Bobby E. Wright said, "No black person should be in any leadership position if they marry outside the race". It makes plenty of sense. So John Legend you need to shut the hell up! We don't want to know what your warped ass opinion is on the R Kelly (The King of Baby making music) situation. I'm divorced now looking for a Black fertile wartime queen. However, while I was married to a Black Sista, I inherited and had to deal with or help resolve any drama on her side of the family. So you had to help or deal with the Pookie's in the family, instead of drinking a beer with Uncle Billy Joe Bob. We as African-Americans desperately need a Black Independent Political Party design solely for the advancement of Black couples and the Black family. My company, Elements 4 Nature is currently going through some negative headwinds but we will prevail. However, Elements 4 Nature did sell products last month at our favorite event, The Nation of Islam Saviour's Day 2020. It was a wonderful event as usual ending with a strong speech by the Honorable Minister Louis Farrakhan. 
Elements 4 Nature in the next couple of months will be coming out with a new Carver Egyptian Blue shirt and a few more of Dr. George Washington Carver and Dr. Austin Curtis products. So stay tune and Black Love Rules! The Nation of Islam: Saving Black America! Posted by Jon Adkins at 19:17 | Monday, November 5. 2018 | Comments (14) | Trackbacks (0) By: Jon Adkins I attended the Nation of Islam (NOI) Saviours' Day 2018 this year for the first time. Those of you who are not familiar with the NOI annual Saviours' Day, it's a celebration of Black Love, Black family unity, spiritual growth and awareness through various informative seminars held during the 4-day event. The NOI is a Black Islamic religious organization started by Master Fraud Muhammad and grew by enormous proportions under the leadership of the Honorable Elijah Muhammad. The Honorable Elijah Muhammad' in my opinion, was the greatest Black leader of modern times. The Honorable Elijah Muhammad raised and taught three powerful leaders: Malcolm X, Dr. Khalid Muhammad and the NOI's current leader, the Honorable Minister Louis Farrakhan. Minister Farrakhan is doing a wonderful job in continuing the work of his teacher Elijah Muhammad. For the record, I've always admired and respected Minister Farrakhan and the NOI for their dedicated work done in our Black communities all across the United States. After attending this years' Saviours' Day 2018, I have even much more respect for Minister Farrakhan and the sisterhood and brotherhood that makeup the NOI. I have to conclude after interacting with the NOI sisterhood and brotherhood during this wonderful 4-day event, Minister Farrakhan deserves enormous credit for re-building and maintaining what I call the largest collection of intelligent Black men and women in any of the established Black organizations in the United States. I was really impressed and moved by the real Black Love that was shared by knowledgeable sistas and brothers within the NOI. The height of the event was of course the speech delivered by Minister Louis Farrakhan on Sunday. Farrakhan gave a wonderful speech delivered in the packed house at Wintrust Area in Chicago. It just amazes me that the white racist media continues to pick sound bites out of Minister Farrakhan's speeches and misrepresents this good man every time he speaks. Fake news did not start with President Trump, it started with America's white racist media telling lies and misrepresenting the truth against strong conscious African-American leader of the past and present. Currently the white racist media is telling lies and misrepresenting the truth against Minister Farrakhan. The Woman's March had the nerve to write a statement denouncing parts of Farrakhan's speech. I was there for the whole speech and there was nothing in Farrakhan's speech that singled out any particular group or person unjustly. The Woman's March organization is pushing this Intersectional movement or concept which will never work to effectively address the needs and wants of African-Americans in this county. Unlike the Intersectional movement, the NOI is a significantly important organization critical to the well being and the survival of the African-American community. The most significant need in our community today is to put emphasis on Black Love, re-building and creating strong Black families and working towards the eradication of mental slavery. Dr. Bobby Wright emphasized that we needed a Black social Theory which is the same as a Black Agenda. 
The NOI is the only major established Black organization focused on implementing a Black Agenda. The NOI is working 24/7 towards the cause of reversing the slave mind which exist in majority of African-American minds today. The slave mind that currently exist in African-Americans today was created by white "Americans" during close to 300 years of physical slavery. White "Americans" put an enormous amount of wicked time and evil energy or actions into transitioning the Black African mind into a slave mind by murdering,lynching,raping, and separating our families. Breaking the cohesion of the traditional Black African family, where we lost our original names, language, culture, religion and God concept. We were forced to learn and live a completely different life. Today, we as African-Americans collectively are still living under that foreign white European culture and have not advanced because of it. The slave mind that currently exists within majority of African-Americans today was never reversed by white "Americans" after the Emancipation Proclamation was signed in 1863. Therefore, white "Americans" are responsible for that slave mind that currently exists in majority of African-Americans today. The slave mind that White"Americans" created is currently past on or down from one generation of African-Americans to the next. Therefore, White "Americans" are responsible for the collective messed up living conditions and chaotic social structure that we as African-Americans live under each and everyday. Again, the majority of the current generation of African-Americans operate daily with that slave plantation mind. Some African-Americans operate with a larger slave mind than others. Those who operate with a larger slave mind are usually the ones who marry outside the race. Especially the ones who marry the slave master sons and daughters. African-Americans who operate with a larger slave mind have a higher level of self-hatred according to Dr. Bobby E. Wright, the greatest (GOAT) Black psychiatrist of modern times. That's why the work of the NOI is so important to the survival of African-Americans in this country. Again, Minister Farrakhan and the NOI are the only established Black organization working 24/7 towards promoting Black Love/Self Love and working to eradicate mental slavery among our people. I must bring up a speech delivered by one of the greatest Black Soldiers and orators of modern times, Dr. Khalid Muhammad. In his speech given at the Black Psychology Conference years ago, entitled "A Check Up from The Neck Up". Dr. Khalid Muhammad said, and I paraphrase, "We as Black African people were stripped of our names, language, country, religion and culture identity. Stripping Black people of their cultural identity during hundreds of years of physical slavery by White"Americans" was like knocking us upside our heads thus suffering amnesia". Brother Dr. Khalid said, that we collectively represent 30-50 million amnesia victims in America that needs a "Check up from the Neck up". When your knocked upside the head and get amnesia, you lose all your frame of reference. When you lose all your frame of reference, you can't differentiate or tell your friends from your enemies. When you have amnesia, you don't hesitate to hug or marry your enemy. When you have amnesia, you reject your true friends or the ones who truly love you. Like Dr. Khalid said, we as African-Americans represent 30-50 million amnesia victims that need a "Check up from the Neck Up"! 
I'm tired of hearing that color or your race does not matter. I'm tired of hearing it, especially from my confused and unconscious brothers and sistas. Let me try to explain that this simple kindergarten philosophy is not true. What makes Black people or African-Americans different scientifically from White people or Euro-Americans is the chemical called Melanin. Now, we as Black people can try to escape reality by not recognizing the Melanin that exists in our bodies. It's the same as not acknowledging the wonderful blue color sky and recognizing the beautiful natural colors that nature presents. Melanin is a chemical or molecule that is found not only in the skin and brain of Black people but throughout the body of our people. Go read Carol Barnes book called "Melanin: The Chemical Key to Black Greatness". Carol Barnes found during his research that Melanin dictates a certain rhythmic foundation that Black people live with each and everyday. Melanin is manifested through the highest form of communication which is music. Rhythm and dance in Black music is a clear reflection or verification of the melanin molecules working within our bodies. White people cannot dance to Black music or Soul music with a natural rhythmic motion like Black people because of the low levels of melanin found in their bodies. However, some White people have done a good job of practicing and copying those rhythmic moves either in a form of singing or dancing. These fake ass so-called White Artists like Eminem, Jon B, Justin Timberlake and Robin Thicke are making money, stealing and copying Black Artists natural art form. Our unconscious collective slave mind stupidly accepts these fake ass cracker artists as real artists. When there is nothing authentic about what they are producing. Dancing to a Black rhythm sound exposes the difference between Black and White people. Remember the American Bandstand? That was a comedy show seeing White people trying to dance to a Black rhythmic music beat. You can take these comparisons or rhythmic differences and apply them to relationships. Back in the day, a sista would literally stop dancing with you if you were briefly out of step to the natural rhythmic beat of the song. Sistas and brothers need to apply that unwritten standard of dancing to relationships, particularly with non-rhythmic white people. Why would you want to dance with a non-rhythmic person? Dancing is an expression of two people connecting souls in rhythm to that universal Black African musical beat. That's why Black music use to be called Soul music. The Soul music that Black people made in the past was at a peck of a collective consciousness within the African-American communities during the 1960's and 1970's. That's when the most creative and deep expressions from Black music of love between the Blackman and Black woman was created. Songs like"This must be Heaven", "Heaven Must be Like This", "Stairway to Heaven", etc. I can go on and on about the abundant creative list of beautiful songs of expression of deep love between the Blackman and Black woman. This music of the 1960's and 1970's produced by a higher level of conscious brothers and sistas that touched on the Kemetic Love that was once consistently expressed between the Blackman and Black woman back in ancient Egypt where the Black woman was viewed or defined as Heaven. The Blackman and Black woman are naturally made for each other. 
The Blackman and Black woman are the only two people who qualify to be in unison with the Black African rhythmic musical beat. Only the Black woman can activate the natural king nature in the Blackman, not anyone else. The Blackman is the only one that can activate the natural queen nature in the Black woman, not anyone else. So if the Black woman marries a white man or any other man outside her race, her natural queen nature or her "Black Girl Magic" remains dormant. Again, there is a distinct physical difference between Black people and White people. However, my issue with White "Americans" is not because of the clear scientific physical difference between Black and White people, but a distinct difference in our paths, journeys and experiences. African-Americans and White "Americans" could both be green in skin color. Let's for one minute say that this is true to prove my point to the Kanye West and Candace Owens thinking brothers and sistas of the world. Even if we accept that both groups were green in skin color for one minute, the difference between both groups comes down to our experiences, paths and journeys. Again, for the sake of proving my point, let's call African-Americans, Green People#1 and White "Americans", Green People #2. Again, as the good Dr. Khalid Muhammad stated, we as African-Americans represent 30-50 million amnesia victims. This point is important to emphasize over and over again. Green People#2 mistreated Green People #1 badly for hundreds of years through the most brutal system of physical slavery. During physical slavery, Green People#2 forcefully took Green People#1 names, language, culture, religion, self love and self pride away from them. Also during slavery, White "Americans" (Green People#2) transformed the independent Black African mind into a slave mind or an amnesia victim (Green People#1). When physical slavery ended in 1863, White "Americans" (Green People#1) never worked to reverse the slave mind or amnesia that they created in the collective mind of African-Americans (Green People#1). That amnesia or slave mind is preventing us (Green People#1) collectively to rise as a people. Green People#2 is responsible for cleaning up the mess that they made to Green People#1 current social structure, family structure and poor economic living conditions. Past generations of Green People#2 put laws in place so a slave could not learn to read or write for hundreds of years. Based on that fact, Green People#2 owes the current generation of Green People#1 free college education. Past generations of Green People#1 passed their slave knowledge down to the current generations of African-Americans (Green People#1). Past generations of Green People#2 passed their knowledge, resources, and money to the current generations of White "Americans" (Green People#2). The current generation of Green People#2 controls all resources (FBI, CIA, NSA, IRS), media and the money flow in the United States. Even though, the current generation of Green People#2 controls all the resources, there has been no desire to truly help Green People#1. With the exception of President Kennedy's positive actions (Affirmative Action) over 50 years ago, Green People#2 have not helped or taken responsibility of what they did to Green People#1. Green People#2 owes Green People#1 reparations based on the injustices that were done to Green People#1. This nation became rich and powerful on the blood, sweat and tears from slave labor done by Green People#1. 
Instead of helping Green People#1, Green People#2 rather put their compassion and energy in establishing Sanctuary cities and protecting illegal aliens and using tax payers money to fund DACA students. Green People#2 should be ashamed of themselves. The recent implementation of the DACA program is preventing thousands of African-American students or Green People#1 from entering 4-year colleges. Green People#2 systematically destroyed the quota system (Affirmative Action) that forced corporations to hire African-Americans or Green People#1. Yet recently, the white Governor (Green People#2) of California pass a law to force corporations to include a woman on the board of every corporation that establishes a business in California. Of coarse most of these women to be put on these corporate boards will be White (Green People#2). They definitely won't hire a Black woman who is married to a Black man raising Black children (Green People#1). Hold It! Wait a minute. The white woman is part of Green People#2! The white "American" woman for hundreds of years was and still is responsible for raising little Johnny to become Big racist Johnny. White "American" women cannot separate themselves from the evils and wrong doings that Green People#2 did against Green People#1. She's not innocent of the collective crime that Green People#2 forced on Green People#1! The last people that Green People#2 wants to help is Green People#1. That's Green People#2 Modus Operandi. Green People#1 grew up with the following saying or expression that proves true: "If you're white your alright, If you're yellow your mellow, if you're brown stick around but if you're black, get back". Green People#1 are living proof to this reality. With the exception of the American Indians, Green People#2 does not owe any other ethnic group anything but they owe Green People#1 everything (Reparations)! Again, the only established Black organization working 24/7 to reverse the slave mind or to eliminate the collective amnesia in Green People#1 is the Nation of Islam lead by Minister Louis Farrakhan. In addition, Green People#2 are using all of their resources (FBI, CIA, NSA, IRS), media, and money flow to unjustly attack Minister Farrakhan and his organization, the NOI. NOI is doing the work of reversing the collective slave mind in Green People#1 that Green People#2 created in the first place. What is the word or words to describe the historical characteristics and actions of Green People#2 treatment of Green People#1? The word or words to describe Green People#2 historical evil actions towards Green People#1 is Satan, The Devil or Lucifer. There are no better words to describe the historical evil actions of Green People#2 towards Green People#1. If you don't agree with me, disprove it! Actions speak louder than words. Like Malcolm X once said, White "Americans" have a whole lot of pretty sounding words that they use to manipulate and control people. Positive and concert actions collectively by Green People#2 (White "Americans) can only erase my perception of Green People#2 as being devils. However, there are exceptions like President Kennedy and maybe Lyndon B. Johnson who wrote Executive Order #11246 to enforce Affirmative Action. Go and read the transcripts of President Lyndon B. Johnson speech at Howard University. He expressed the right things that Green People#2 need to do and feel towards Green People#1. I hope this Green People analogy will open the eyes of African-Americans across this country. 
We as African-Americans need a Black Agenda and not blindly follow the Democratic Party, the Republican Party or some insane Intersectional movement. We should only support political policies that effect us from either party and simultaneously support a Black Agenda or organizations like the NOI who pushes a Black Agenda. I'm tired of hearing and seeing on TV these self-righteous hypocritical White "Americans" as news anchors shedding tears for these illegal aliens who are illegally entering this country. They are crying crocodile tears for these families being temporarily separated from their children at the border. When you, White "Americans" (Green People#2), collectively broke our Black Families up and sold our children to the highest bidder during hundreds of years of slavery. I don't see White "Americans" today concerned or trying to help strengthen or working to rebuild African-American families that they seriously weaken during hundreds of years of family separations during slavery. White "Americans" had ample opportunities since the Emancipation Proclamation was signed in 1863 to help Black families rebuild on what they destroyed. Now today you are crying crocodile tears for these illegal aliens children? The current generation of White "Americans" (Green People#2) continues to stomp, split, and defecate on African-Americans at the same time passionately help, assist and protect illegal aliens by forming Sanctuary cities, towns and states and accepting DACA students over African-Americans students in these majority white universities. No, really you don't care about the well being of illegal aliens; it's only only a fake symbol of compassion for them. You're only using them to destroy us (Green People#1). I am thoroughly convinced that White "Americans" (Green People#2), mainly Democrats are participating in a high sophisticated plan of genocide by using and manipulating Hispanics or illegal aliens to eliminate and replace African-Americans. Currently, we are witnessing a civil war between the white American left and the right. The current civil war of today has not evolved into collectively using riffles yet; it's only on the battlefield of ideas and information manipulation. The anti-Semitic white man who shot those innocent elderly Jews in the synagogue recently in Pittsburgh or the white racist man who shot those innocent Black people in the church in South Carolina a few years ago are not part of this civil war. However, the bomber who sent all those bombs to Democratic politicians and supporters is part of the civil war. As well as the Bernie Sanders supporter who shot those Republicans on the baseball field. The White "American" collective left is obsessed in stopping President Trump's immigration plan because it delays their total genocidal plan towards African-Americans (Green People#1). African-Americans should be the main ones supporting Trump's immigration plan. That plan is to deport 16 million plus illegal aliens, build that damn wall, and cancel the DACA program, The DACA program single-handedly is eliminating tens of thousands of Black college students being accepted to a 4-year college each year of it's existence. White administrators at these white colleges and universities are literally accepting Hispanics or DACA students in place of Black college students. The proof of this can be seen just by looking at the sudden over population of Black students attending historically Black colleges. 
At Tuskegee University in the Fall Semester (2017) last year, Tuskegee had to send some students who had already been accepted home because they did not have enough space to house them. Can you believe it? In addition, they had to use the Kellogg Center Hotel to house the overflow of students. To date, they are still using the campus hotel to house these students. It's not Tuskegee's administrators fault, it's the fault of these racist white administrators at these white colleges and universities who are now accepting these DACA students instead of African-American (Green People#1) students. Again, that's basically institutional racism or White "Americans" (Green People#2) modus operandi, marching to implement the following expression: "If you're white your alright, if you're yellow your mellow, if you're brown stick around, but if you're black get back". These white college administrators are not rejecting white students to accept these DACA students but are rejecting African-American students only. This is part of their genocidal plan to wipe out African-Americans in conjunction with their inter-racial plan of crushing the traditional Black family (Blackman plus Black woman produce black babies). Howard University now only accepts Black college students with a high school grade point average at 3.5 or higher. That's also a result of being overcrowded at Historically Black colleges. I am an advocate of Black students attending Black colleges, however, it's going to be hard now for the 2.0-2.5 GPA Black high school student to attend any 4-year college. We as African-Americans must realize that we are not obligated to fight for any other minority group but our own. Collectively, African-Americans (Green People#1) are in the worst and disadvantage situation than any other group by far in this country. Even the illegal aliens who just arrived yesterday with nothing but their clothes on their backs still have their native language. To quote Dr. Khalid Muhammad again, we as African-Americans in this country represent 30-50 million amnesia victims that need a checkup from the neck up! Self preservation is the first law of nature. White "American" politicians know that Black politicians and/or the Black collective will not go against any policies affecting any other minority group. White "American" politicians along with the white racist media are trying to silently wipe African-Americans out with this silent method of destruction. Just like they silently replaced mostly all the Black major league baseball players with Hispanics and no Black politician said anything about it. No Black politician is saying anything about these racist white administrators who are accepting DACA student over Black college students. No Black politician said anything when the Civil Right Commission issued their report earlier this year on how illegal aliens were a threat to Black employment, specifically Black male employment. Green People#2 plan was put in place 30 years ago. Illegal aliens have been living in the land of milk and honey for 30 years now. Recently, I went to Prince Georges Plaza Mall which is located in the Maryland DC area. I use to go to that Mall all the time but have not been for a couple of years. The Prince Georges Plaza Mall use to be populated by 99% Black people. Now, just after a few years, it's populated with 95% Hispanics. I was shocked at what I saw when I recently went there. Soon, this nation that we live in will be like the current ethnic makeup of major league baseball. 
White "American" employers are on a mission to replace us with Hispanics. Just look at the ethnic makeup of road construction crew, the lawn service makeup and now the mass media makeup. We are being replaced overnight in all professional areas. Not only do they need to cancel DACA, they need to deport 16 million plus illegal aliens and build that damn wall. I was fined recently by the Maryland State IRS for over $1,000.00 unjustly because they determined that I owed Maryland State taxes for 2011. They took the amount out of my Federal tax refund check. I was a Washington DC resident in 2011. I sent them all the proof that I lived in Washington DC during that time. To date, they still have not sent my refund back to me. They (Green People#2) are using their resources to bully me (Green People#1). Green People#2 continues to use their resources for evil purposes against me and our people (Green People#1). I am convinced with the assistance of the IRS that an illegal alien used my social security number and that's why the IRS garnished my federal refund check to pay for the taxes of an illegal alien. The millions of illegal aliens who are working in this country are using social security numbers given to them by the IRS. That's an identity fraud crime! I bet anyone, the IRS gave millions of illegal aliens the social security numbers of Black brothers and sisters who are currently in jail. Oh, remember the 3 strikes your out law that President Clinton implemented where if a young brother stole a piece of 5 cent bubble gum three times, he would be locked up? Now, I know why they did it! It's part of this high sophisticated plan of genocide. Self preservation is the first law of nature. Therefore, African-Americans collectively should support President Trump's immigration plan. I know the intent of Trump's immigration plan was not intended to help protect African-Americans. The intent of Trump's immigration plan is to preserve the white European culture or Western civilization. Trump is not bashful in admitting saying that he wants to preserve Western civilization. It's no secret that white people or the founding fathers built this country to establish another white European Western civilization separate from the British Empire. When Colin Kaepernick took a knee during the national anthem to protest police brutality against Black people in this country, he insulted most white "Americans". I applauded Colin Kaepernick for what he did and what he continues to do. However, his stance of appealing to the moral consciousness of white people, is a slave stance. Currently, there are basically two schools of thought or thinking within the Black or African-American intellectual community since the 1960's. The two schools of thought are Liberation and Integration/Assimilation. Kaepernick's position is an integration/assimilation stance. Kaepernick's reason for kneeling was to appeal for compassion and respect to white people for better accommodations on the modern slave plantation. The white media is trying to portray Kaepernick position or stance as some revolutionary stance or leader. That he's not. To sum up the flag situation in the language of the grass roots of our people is, "That's They Flag". Malcolm X stood up for the US flag or national anthem. If you don't believe me just go back and check the historical audio files when Malcolm addressed the criticism/question of why he and the NOI was standing up for the national anthem. Malcolm said, and I paraphrase, "The white man is a formidable enemy. 
I will stand up and respect the whiteman's flag until we as Black people build a Black nation of our own within this white nation. Then, I will only salute the Black flag and not the whiteman flag." Malcolm X stance was a Liberation/revolutionary stance. The only current living revolutionary sport figure is Jim Brown. He always consistently supported the grass roots of our people and is not afraid to speak his mind. A true warrior. Lebron James is also setting a good example and standard on how it should be done by marrying a beautiful Black woman, raising a beautiful Black family and starting a much needed community school. Again, President Kennedy was the only president to do anything significant to help African-Americans when he created and signed Executive Order #10925 which initiated Affirmative Action. I know President Trump's intention of his immigration plan is not to specifically help African-Americans. However, if Trump's immigration plan accidentally helps African-Americans, then I'm all for it. The main stream media is painting Trump and his supporters as your traditional white supremacist (kkk, neo-Nazi etc.). Well, I've never seen a traditional white supremacist consistently brag to his supporters in just about every one of his speeches on how low the unemployment rate among African-Americans fell since he became president. That just doesn't add up. By the way President Trump, you need to get the Black unemployment rate down bellow the white unemployment rate, to record a significant achievement for African-Americans. We are still the highest unemployment rate around 6%. All white "Americans"(Green People#2) to some degree are white nationalist. Don't try to play that game with us. This nations foundation was constructed on Western principles. I don't have a problem with President Trump or any other white person who is trying to preserve the Western principles and values of this country. My problem with white "Americans" (Green People#2) is that you won't take responsibility of what severe damages you did to African-Americans (Green People#1) since we've been in this country. You owe us reparations! We made this country rich and powerful based on our slave labor. White "Americans" had ample opportunities to change the poor conditions which they created within African-American communities since 1863. White "Americans" (Green People#2) showed their true nature by systematically destroying the Affirmative Action program that was initiated by President John F. Kennedy. Instead of using your resources to help African-Americans recover from the trauma of slavery, White "Americans" (Green People#2) historically use their resources (FBI, CIA, NSA, IRS) to crush positive black based organizations like The Black Panther Party, Marcus Garvey U.N.I.A. organization, and currently the NOI. Not to mention the actions of J. Edgar Hoover and the FBI COINTELPRPO program. The FBI recently followed me when I attended and sold products at the NOI Saviours' Day 2018. Let me elaborate on my experiences while attending the NOI Saviours' Day 2018. I decided to attend and register for the NOI Saviours' Day late. Therefore, I was unable to make hotel reservations in the recommended designated hotels for the NOI event. So I made a reservation in a hotel that was a good distance from the event. 
On Sunday morning, the day that Minister Farrakhan was scheduled to speak, I was waiting for the hotel elevator to go down to eat breakfast and I saw a brother coming out of his room that happened to be next to mine dressed up in a suit and bow tie. He was walking towards the elevator as he got close I said to him, "As-salamu alaykum" and he responded with "Wa-Alaikum-Salaam". Before I was able to strike a conversation with him, his roommate walked up behind him, looked at me and whispered in his ear, then they both walked back to their room. I thought that was kind of strange but did not think anything of it. I kept it moving and got my breakfast. By the way, those were the only people that I saw in this hotel who were associated with the NOI Saviours' Day conference. After Minister Farrakhan gave his typical powerful speech, I decided to eat dinner at the hotel located adjacent to where Farrakhan delivered his speech. Therefore, I did not get back to my hotel until later that night. Now I'm in my hotel room chilling, organizing to leave Monday morning, I hear the brothers talking next door. They were pretty loud because I heard them through the walls of my hotel room. I said to myself, why are these brothers still checked in at this hotel? You would think that they would have checked out the day of Farrakhan speech. Anyway, I had my iPad trying to get access to the internet. Behold, you know how before you get access to the web through the iPad, a number of different Wi-Fi choices pop up that's close or adjacent to you. Not only does the hotel Wi-Fi name pop up but other Wi-Fi names that are close to you pop up also. Well, guess what Wi-Fi name popped up as a choice on my iPAd? The FBI Surveillance WI-Fi! These brothers were FBI agents following me. I just shook my head. I had to go down to the hotel lobby to print out something for my company, Elements 4 Nature. I took my iPad with me to see if the FBI Surveillance Wi-Fi would pop up while in the hotel lobby. The answer was no, the FBI Surveillance Wi-Fi did not pop up while I was in the hotel lobby. The FBI Surveillance Wi-fi popped up again when I went back to my hotel room. So clearly these brothers were FBI agents. Everything made sense now on why they even made reservations at this hotel and why they were still checked in at this hotel. It also explained the strange brief encounter that I had with them at the hotel elevator. I have a friend that I grew up with who lives in Chicago now and we were planning on going out to the club that night. We decided not to go earlier that day. I bet you they had a cake baked for me at one of those clubs. Ironically Farrakhan mention in his speech that just because they dress up in a suit and bow tie don't mean that they are true members of the NOI, they could be FBI agents. So why is the FBI following me? Why is the FBI monitoring what I'm doing with my company, Elements 4 Nature? Why? I have not done any harm to anyone. As a matter of fact, my company is helping heal people by selling George Washington Carver Peanut Rubbing Oil for minor aches and pains. Elements 4 Nature goals are consistent with the spirit of Tuskegee University. Elements 4 Nature goals are consistent with Booker T. Washington and George Washington Carver goals. Elements 4 Nature is committed to and focused in helping to create jobs in African-American communities. So why is the FBI monitoring us? Oh, I know why now, they just want to continue to do harm to us. 
It's consistent with Green People #2's historical goals and objectives of using their resources not to help Green People #1, but to hurt and inflict pain to Green People #1. That's why devils are the appropriate word to describe Green People#2. Especially the people who are in control of the most powerful and high tech resources within these covert Government agencies like the FBI, CIA, NSA, and IRS. They have access to the most powerful tools to inflict harm to the powerless common man, especially the Blackman on a daily basis. My perspective on President Trump is that he's a builder; he wants to make America great again. He wants to make America the #1 world economical power again. Today China is the #1 world economical power. At one time, America was the leader in all areas. However, America became rich and powerful solely through slave labor. My Black African slave ancestors (Green People#1) made America the richest country in the world. White "Americans" owe my people, African-Americans reparations! If the Democrats and the main stream media are successful in getting President Trump out of office, you will see the civil war transition from basically an ideological battle to white "Americans" killing each other in the streets. If white people start killing each other, this civil war is not our fight. This civil war will not be an African-American fight. I know I will be sitting on the sidelines eating my popcorn and watching white people kill each other. It is well documented that the FBI, CIA, and the NSA plotted against President Trump even before he took office. I would not be surprised if the current dramatic downturn in the stock market in recent weeks was orchestrated intentionally by top money mangers to make President Trump look bad. They have done everything to try to get this man out of office. If my assumption turns out to be a reality, I bet the super rich white people got their money out of the market weeks before the recent planned downfall. Again, if Democrats and the main stream media are successful in getting Trump out of office, it will be the end to democracy for white people in this country. The United States as we know it will cease to exist. For the record, African-Americans never had democracy in this country. I want this democracy to survive because America owes African-Americans so much as far as reparations are concerned. Every American who lives in this country regardless of their race or ethnicity, should be taxed a reparation tax to rebuild the traditional African-American family(Black man+ Black woman=Black child) and African-American communities. Even though your not part of Green People#2, you still benefit from living in this country and enjoying America's riches which Black slave labor produced. White "Americans" who are crying crocodile tears for children who were temporally separated at the broader, now must pay for close to 300 years of Black families separation during physical slavery. Our condition as African-Americans or Green People#1 in this country has not significantly changed. We still live our day to day lives collectively as mental slaves controlled by the white "American" slave master. The best way I can describe our condition is by the following analogy: A lion in the jungle living in his natural habitat is captured by white hunters. These white hunters sell the lion to the circus. The lion is kept chained in a cage. Through the use of food, sex and violence, the lion over a period of time is trained, tamed and taught to perform tricks. 
Once white people in the circus felt the lion was thoroughly trained, they unchained him and let him out of his cage to perform multiple tricks in the circus. Years passed by and one day the lion realized he was not in his natural habitat and not doing what God put him on this earth to do so the lion starts killing all the white people in the circus. One white man in the circus shot the lion and put him back in the cage to be re-trained. When President Lincoln signed the Emancipation Proclamation to "free" us, they just let a thoroughly trained African-American out of the cage to perform in white people's circus. We were thoroughly trained to please white people or to perform in their circus. The biggest clowns performing in their circus are the thoroughly trained African-Americans who marries or fornicates with white people or anyone outside the Black race. White "Americans" attack, en-prison, and kill those African-Americans who refuse or resist participation in their circus as clowns. White "Americans" consistently attack Minister Farrakhan and the strong sisterhood and brotherhood of the Nation of Islam who are not currently participants in their circus. Most African-Americans or clowns who are visible in the media and other professions with good or high paying circus jobs are married to white people or people outside of their race. These are the people who are the biggest clowns who are elevated to high positions from being so-called leaders to entertainers in the white people's or Green People's #2 circus. Those of you who claim to be a black business yet are afraid to do business with Elements 4 Nature because of pressure that you are receiving from some Hispanic organization, LGBT organization, FBI, CIA, NSA, IRS or some white political organization that claims to be for African-Americans, then your not a true Black Business anyway. Elements 4 Nature is a supporter of all true black based businesses and black organizations like the NOI. We have to support our own because that's all we got. Our only path to survival is to support each other. A 100% Black Agenda that implements Black Love is our path to survival and we must reject interracial relationships. The Intersectionality movement is not the solution for our problems. Those Black advocates who are participating in that movement are wasting their time. In addition, we must stop buying products from businesses who use interracial relationships in their advertisements. The over saturation of these interracial advertisements is a clear indication of the collective destruction that white "Americans" or Green People #2 want to continue to impose on the traditional Black Family (Black man + Black woman = Black Child). Black wealth can only be built through Black Love relationships. You don't need to join an organization to buy products from true black businesses or to implement Black Love. Again, the only black established organization dedicated to rebuilding the Black African-American family is the Nation of Islam lead by Minister Louis Farrakhan. Every Africa-American should support a Black Agenda, Black Love, Black Businesses and Minister Farrakhan and the NOI because Black Love Matters! "Fear of the Lord (Part 2)" Posted by Jon Adkins at 11:02 | Thursday, May 4. 2017 | Comments (5) | Trackbacks (0) Fear the Lord (Part 2) DACA vs Affirmative Action I bring this comparison of DACA vs Affirmative Action up to expose the hypocrisy of white "Americans" particularly the so called white "liberals". 
Affirmative Action was an Executive Order #10925, written and push by President John F. Kennedy in 1961. By the way, President Kennedy needs to get enormous credit for creating the first Executive Order specifically to help African-Americans. That particular executive order was not enforced by white "Americans". Therefore, Lyndon B. Johnson wrote another Executive Order, #11246 to enforce Affirmative Action. Basically, the enforcement of Affirmative Action did not reach it's peak until the late 1970's and mid 1980's. Jessie Jackson had a lot to do with protesting and pushing white politicians to enforce Affirmative Action. Just when African-Americans started reaping the benefits of Affirmative Action, the white "Americans" of different religious and political backgrounds collectively and systematically destroyed Affirmative Action programs and policies. I was just hired out of college during the implementation of Affirmative Action in the public and private sectors of cooperate America. I witnessed the collective systematic destruction of Affirmation Action in the workplace. The quota system was developed and white managers would collectively hire the least qualified African-American on purpose. Then would not give that African-American worker proper training once they hired him. Then these white mangers would expose the least qualified African-American worker to the world to show that the quota system of the implementation of Affirmative Action did not work. In the field of education, you had the white "American" Jew, Allan Bakke who started the demise of the implementation of Affirmative Action in educational universities across this country. Mr. Bakke file suite against the University of California because they accepted an African-American student over him in medical school. The Supreme Court rules in favor of Bakke in 1978. My question to white "Liberals", where was the 9th Circuit Court of Appeals to strike this law suit down by Bakke before it got to the Supreme Court? Where were the sanctuary colleges to fight the Bakke Supreme Court decision? Where were the sanctuary colleges for Black or African-American students? DACA funds with tax payers money, 800,000 illegal aliens free college tuition and acceptances all across this country. Have you heard of any legal cases where a white person was denied acceptance to college because of a DACA student? Since you have sanctuary colleges protecting illegal aliens you probably will not hear of such cases. The university structure have not changed in terms of the amount of students that they accept at these universities. So who are these new DACA students replacing whiles being accepted in these universities? The answer is they are not replacing white students, they are replacing black or African-American students. Colleges and universities only can accept a certain number of students each year due to limited dorm space and the classroom size. I recently went on a college tour at an Historically Black College University (HBCU) since my son plans to enter college this Fall. The administrator mentioned that their Freshman class enrollment spiked significantly this Fall. The school did not know the reason for the spike but I bet you it is because of the significantly less number of African-Americans being accepted at these white colleges because of DACA. I'm am a big advocate of black students attending HBCU's on the under graduate level. However, the implementation of DACA reduces the choices of African-American students as a whole. 
Where are the white "liberal" advocate groups or protesters for African-American students? They don't exist! DACA continues to have an significant negative impact on African-American students and no one is fighting for us not even the so called black political leadership. It is really sad that they are silent on this issue. President Trump said he was going to do away with DACA and he has yet to keep his promise. With our life preservers on in a sea of confusion, white "Americans" continue to collectively work towards our destruction. The white "American" collective, especially the white so called "liberals" use the fact that they supported Obama or voted for him to hide their racist actions. Newsflash, Obama did not fight for the rights of African-Americans! He fought for the rights of everyone else but his own people. The white "American" collective knew Obama was a product of a mixed race couple. Therefore they had high confidence that he would not do anything significant for African-Americans while in office. Children who are a product of a mixed race couple (Black man/Woman mixes with a white man/woman or any other race) identify with the white side of their family and reject the black side of the family. I know because all of my kids participated in sports as early as 4 years old. My kids had teammates who were mixed race. I saw how these mixed race kids interacted with black kids. These mixed race kids think they are better than black kids. They acted that way towards black kids. They look down on black kids. As early as 4 years old and up through high school. These mixed race kids apparently embrace their white side of the family and reject their black side of the family. This is primarily a result of institutional racism and white supremacy. These mixed kids minds are shaped from day one. Even if the parents of these mixed kids are teaching them the right things like everyone is the same and everybody is equal, they cannot compete with institutional racism. These parents cannot compete with the 24/7 news media and schools which are structured on institutional racism and white supremacy. Teachers, administrators and coaches amplify white supremacy and institutional racism in these schools by treating mixed kids better than blacks kids who come from the traditional black family. White "Americans" are not accepting or acknowledging the significant part that they played in making African-Americans mental slaves. In addition, white "Americans" are actively working to maintain that slave mind that exists in African-Americans today. White "Americans" collectively not only criticize conscious African-American leadership who are actively working to eradicate mental slavery among African-Americans but they work to destroy them. White "Americans" collectively utilize the resources of the US government covert agencies like the FBI, NSA, and the CIA in order to try to destroy conscious African-American groups as well as businesses like Elements 4 Nature. Historically, we as African-Americans living in this country have not had any friends only enemies. President Kennedy was the only president who made a significant action towards helping us as a people by writing an executive order creating Affirmative Action programs which was later systematically destroyed by the white "American" collective. 
In order for our people in this country to survive the continued silent slaughter at the hands of White "Americans" through this high sophisticated tactics of genocide, we must embrace the recommendations and findings of our great Black conscious social scientists like Dr. Bobby E. Wright, Dr. Frances Welsing and Dr. Amos Wilson. We must embrace Dr. Bobby Wright's Black Social Theory. I hope Dr. Wright, rest his soul, don't mine me renaming his Black Social Theory and calling it the "Black Love Agenda". We as African-Americans must embrace self-love like no other time in history. White "Americans" collectively have been exploiting the slave mind of our people for many years. The interracial relationship campaign has been embraced and implemented by large white companies who are actively participating in a nation wide ad campaign promoting interracial relationships. We must counter their national interracial ad campaign with a national "Black Love Agenda" campaign. The "Black Love Agenda" campaign will help save us from being systematically annihilated by institutional racism and white supremacy. Black Love Rules! This "Black Love Agenda" campaign is about self-love and loving yourself and its not about hating others. This "Black Love Agenda" needs to be pushed by every Blackman and woman for the survival of our people. Regardless of your religion or your political affiliation, we as African-Americans need to unite on this powerful concept of Black Love. You don't need to join any organization to implement this concept of Black Love, just do it. The accumulation of Black Wealth is directly dependent on the implementation of Black Love or the Black Love Agenda. The accumulation of Black Wealth is wiped out through interracial marriages. Black Love must exist in order for Black People to accumulate Black Wealth. Statistics show the enormous gap in life expectancy between Black people and White people. With interracial marriages, in most cases, the white person will live longer than the Black person. Therefore, all the wealth that the interracial couple accumulates will go to the survivor which will more likely be a white person or someone from another race. None of that wealth will be spread around to his or her Black brothers/sisters, aunts/uncles, nieces/nephews and cousins. The "Black Love Agenda" also needs to be spread to black based businesses like Elements 4 Nature. We sell cosmetic products created by the greatest scientist of our time, Dr. George Washington Carver. I want to create the largest black based company in the world. I envision Elements 4 Nature creating lots of jobs for our people. The Carver products that we sell blows away all other competing products. We at Elements 4 Nature not only want our people to buy Carver products but we want all ethnic groups to experience using these wonderful products that Carver created. We will do business with anyone, however, we are not a boot licking, knee bending, foot shuffling, head scratching company. Elements 4 Nature is no slave company. We speak and express the truth without fear at Elements 4 Nature. We take the truth, justice and righteous road. This road or path that we take is not easy but beneficial. In the Bible Luke 9 verse 24, Jesus states the following: " For if you want to save your own life, you will lose it, but if you lose your life for my sake, you will save it". That means, if you choose the easy path of being silent, submissive and not fighting against injustice you will lose your life, you will die. 
However, if you seek the harder path, the path of speaking the truth without fear and fighting injustice like Jesus, you will live. Walking in the path of righteousness like Jesus did, like Prophet Muhammad did and like our other Black African ancestors did when they created and followed Maat(42 Declarations of Innocence) long before Judaism, Christianity and Islam came into existence. Living a righteous life is the core and strength of our Black African culture. There is no other love that is stronger and more intense than the love of a righteous Blackman loving a righteous Black woman. Black Love is at it's strongest when it's base or foundation is structured on truth, righteous thought and actions. Righteous "Black Love" is the highest form of any love. Our ancient Black African ancestors practiced the righteous way of life religiously. The foundation of Black Love is a pure(virgin) Black African woman marries a pure(virgin) Blackman. That way of life was practiced in our Black African culture from day one in ancient times and though out all of Africa. The greatest love affair ever was the union of the Black God-King Rameses II and the Black Goddess-Queen Nefertari II. Their relationship was based on the ancient concept of purity in the institution of marriage. The North African Church is where Christianity was born. The North African Church taught the Black African version of Christianity for a few hundred years until the White European Romans invaded Egypt and changed it to the Roman Catholic Church. Thus changing Christianity from the Black based interpretation to a white based interpretation. That's why all of the ancient statues and paintings of Jesus are Black. These Black ancient statues and paintings of Jesus were the first images of Jesus in history. These ancient Black statues and paintings were created by the North African Church. That's why the Pope of Rome today worships these ancient statues and paintings of the Black Madonna and Child that was initially created by the North African Church. These ancient statues and paintings are located in Rome today. The first representation of Jesus being White was not until 1507 AD, when Michelangelo painted the first White Jesus in history. It is clear that the foundation of Black African culture and the foundation of White European culture are completely opposite in many many ways. For instance, just take a look at the differences from a religion perspective. Our Black African culture going back to ancient times, always taught that man was sacred and had the potential in him/her to be an extension of God. Our ancient Black African ancestors taught that the body, soul and spirit is a sacred temple or a Godly being. On the other hand, the White European version of Christianity teaches that man was born with a sin. Further verification that our Black African culture historically viewed man as scared and has a potential of becoming an extension of God, comes out through the ancient writings of our Black African ancestors in the creation of Maat. All 42 Declarations of Innocence of Maat starts off with "I have not". For instance, #11 of Maat states the following: "I have not committed fornication". All 42 Declarations of Innocence starts off with "I have not committed whatever etc...". 
The ancient writings of Maat verifies that we as a Black African people historically defined the body, soul and spirit as a sacred entity of a Godly being and that my future actions did not pollute my sacred being or scared foundation with a negative action. Again our ancient Black African ancestors lived by Maat religiously. So much so our ancient Black African ancestors separated Egypt geographically by the 42 Declarations of Innocence or the 42 major laws of Maat. The land mass of Egypt was separated by 42 states or provinces with each state emphasizing one of the 42 major laws of Maat. For instance, #20 of the Declaration of Innocence states: "I have not had intercourse with a married woman" will be the theme of two states since #20 and #21 of the 42 major laws of Maat states the same thing. The European culture stole that concept from our ancient Black African ancestors like they did with everything else. For instance, the United States is separated by 50 states. Each of the 50 states in the US have a theme; Virginia's state theme is, "Virginia is for Lovers". Note: This is the final part of this E4N Blog. I started writing this blog entitle "Fear of the Lord" before President Trump's inauguration and I am finishing up the conclusion of this blog tomorrow June 25, 2017. I apologize that it took so long to finish. Vengeance is the Lord is a popular verse in the Bible. About 2 weeks ago, I was in an accident that totaled my car. I thought this accident was really a hit job on me. It happen on Rhode Island Avenue in Washington DC. I was sitting at a traffic light when I looked into my rear view mirror and saw a big speedy white truck approaching me. The truck accelerated before slamming into my car with such a force that my car hit the other car in front of me. I got out of the car and asked the white man who was driving the trunk, what were you thinking? He apologized and said his foot slipped off the brakes and automatically hit the accelerator. Yeah right! Who literally put the hit on me, I don't know! I Thank the Lord that I came out of that alive! There were not any noticeable injuries. However, at times I feel something may not be normal in my lower back. I will get myself checked out by the doctor to make sure there were no structural damage to my back. Vengeance is the Lord! The actual biblical verse is found in Romans 12:19, which states: "Dearly beloved, avenge not yourselves, but rather give place unto wrath: for it is written, Vengeance is mine; I will repay, saith the Lord." According to Dr. Ben, the greatest historian of out time, our ancient Black African ancestors along the Nile Valley believed and implemented the "After Life" concept. Where each individual had their chance to develop their own vision or story of the "After Life". It was based on the spiritual perspective or spiritual vision on what happens after you die of a physical death. Naturally everyone will die a physical death. The last I checked, death is undefeated. Well, I know what my "After Life" vision or story will be. Once I make the transition to my ancestors, I will be 100% in union with the Lord. I will be 100% in union with the forces of the Earth, with the forces of the Water, with the forces of the Air and with the forces of Fire. When I make the transition to my ancestors, I will be 100% in union with the Lord. Vengeance is the Lord! Since vengeance is the Lord, at that point, I am going after to destroy every wicked white person who did harm to me while I was on earth. 
I am going after to destroy every wicked white person who plotted and planned my destruction on earth. Since vengeance is the Lord, I am going after to destroy every wicked white person and those who assisted them in the destruction of me and my family while on earth. I am going after every wicked white person who specifically participated in the plotting and planning of the destruction of my kids or seeds. Since these wicked white people went after my kids or seeds, I will go after to destroy their kids or seeds. Vengeance is the Lord! That's my "After Life" vision or plan. Vengeance is the Lord! My "After Life" vision or plan does not keep me from praying and fighting for the destruction of my enemies who are currently working against me and my family on earth. Back to this life, to this point, I care less whether White"Americans" decide finally to truly help us out of the mess that they put African-Americans in. White "Americans" have demonstrated that they don't care about us. Historically, instead of White "Americans" accepting, embracing, and expanding Affirmative Action, they worked to destroy Affirmative Action. Yet they have passionately embraced illegal aliens and immigrants living in this country. For the past 30 years White "Americans" have collectively helped illegal aliens become more economically empowered in this country. Over the past 30 years White "Americans" secretly helped illegal aliens to organize in establishing control of the lawn service businesses, the janitorial services businesses and helped them to be employed with most of the road and building construction jobs in this country. Just do an eye test of any road construction site in the United States and you will see very few African-Americans employed at these construction sites. Mostly illegal aliens who can't speak English are working at these construction sites. This could not have happen without White "Americans" collectively working, organizing and helping illegal aliens over the last 30 years to make this a reality. Don't get me wrong, I'm not blaming illegal aliens for this reality, I'm blaming White "Americans" who created this reality. Illegal aliens are only naturally taking advantage of these opportunities that were created by White "Americans" at the expense of African Americans. White "Americans" did not create this reality because they love illegal aliens, it's because they hate African-Americans so much. It's no secret that majority of White "Americans" collectively planned a coup against President Trump and his administration ever since he won the election. The only reason they are against Trump is because he wants to build a wall, deport 13-15 million illegal aliens and promise to do away with DACA. All three of these actions will significantly help African-Americans. President Trump has been wishy washy in doing away with DACA. Yet DACA is the most single threat in reducing opportunities of young African-Americans students being accepted into colleges and also reducing the chances of young African-Americans obtaining high paying professional jobs in this country. For the record, however, I'm against President Trump's policy of Stop and Frisk! I am not excited about his infrastructure plan either. African-Americans are not currently in position or in the employment pipeline of taking advantage of getting these road construction jobs. 
That's why white democrats are so excited about Trump's infrastructure plan because most illegal aliens are in a better position to be hired for these construction jobs than African-Americans. It's a shame! I am thoroughly convinced now than ever before on witnessing the implementation of White "Americans" genocidal plan to promote and support illegal aliens with the ultimate goal to crush African-American progress. Again, that's why most White "Americans" are so upset with President Trump. Trump's immigration plan is hated by most White "Americans". We as African-Americans need to be politically engaged and focused only on the policies that benefit us! Like Dr. Khalid Muhammad once said, that these political solutions are only temporary solutions to our problems. We need to be politically engaged but at the same time we need to spend more time and energy focused on improving our situation as a people. Liberation and independence is what we as African-Americans are seeking. We are fighting for our lives against White Supremacy and Institutional Racism. White Supremacy and Institutional Racism were designed specifically to crush and destroy Black people period. In order to effectively fight White Supremacy, we as African-Americans need to focus on self-love, Black Love and improving self. We need to work to eradicate mental slavery! We need to abandon this slave mind that gets us in the mode to worry about the well-being of every other minority group and at the same time seriously neglect our own causes. This is not logical. Historically, it has never worked where other minority groups supported our causes with the same energy and commitment that we supported their causes. No, no never again. I've seen that scenario play out too many times on the corporate plantation. African-Americans first should be our only policy. We have given more to this country and suffered more in this country than any minority group and received significantly less than any other ethnic group. Again, we need to eradicate mental slavery as our top priority, form strong Black families and build Black Wealth based on Black Love! Self-Love and improving self is necessary towards the survival of our people. We must reject interracial relationships in any shape and form in order to build true Black Wealth. I'm trying to build Black Wealth through my company, Elements 4 Nature. I'm looking for that cookie type Sista to help me towards that goal to build an Empire. Black Love is what we need to emphasize on to combat White Supremacy and Institutional Racism. We need our Sistas and Brothers to have a ride or die attitude. We don't need a weak white minded Sista or brother who wants to marry or have an intimate relationship with every other race of people other than their own. We don't need any Sista or brother who worships and fears the devil. No devil worshipers please! Fear the Lord and not the devil. This current national interracial Ad campaign is designed to destroy the traditional African-American family. The implementation of Black Love or the Black Love Agenda must be religiously implemented in our communities. The national implementation of Black Love or the Black Love Agenda will effectively help our people to fight White Supremacy and Institutional Racism. By implementing the Black Love Agenda and rejecting interracial relationships, we will start preserving our best minds to focus on solving our own problems instead of our best minds being stolen through interracial relationships. 
By implementing the Black Love Agenda, we will start protecting our Black Athletes and Black Entertainers riches and resources from being stolen through interracial relationships. By protecting our Black Athletes and Black Entertainers riches and resources, we will begin to build true Black Wealth. By implementing the Black Love Agenda, we will build a stronger and a more powerful traditional African-American family. Black Love Matters! So what are we as a people going to do? Are we going to ride with Black Love or die perpetuating White Supremacy through interracial relationships? Are we going to ride with Maat, speaking the truth without fear for our people and walking in the path of righteousness or die worshiping the devil? Are we going to ride standing up for Justice by fearing the Lord or die in silence fearing the devil? Let's ride for liberation and freedom for our people and fight White Supremacy through Black love. Black Love Matters! "Fear The Lord (Part 1)" Posted by Jon Adkins at 19:18 | Monday, January 2. 2017 | Comments (6) | Trackbacks (0) By:Jon Adkins Life is short! I recently lost my big brother a few years ago. I never thought he would be gone so early in his life. He died suddenly and mysteriously. You never know what can happen to your love ones around you. That's why it's so important to live life to the fullest. "Eat, Drink, and be Merry for tomorrow we shall die", is a famous quote from the great ancient Black African Egyptian Architect and multi-genius, Imhotep. Again, life is short and that's why it's important to speak and express the truth while living on this earth regardless of the consequences or outcome while speaking the truth. Fear the Lord! Yes, "Fear the Lord" is a powerful truth. "Fear the Lord", is one of the most common saying from the powerful biblical text, the Bible. The Bible is a very powerful and truthful book. Most of the scriptures in the Bible are misinterpreted by many. Fear the Lord, does not mean to be afraid of the Lord. How can you be afraid of your friend? Fear the Lord means fearing in what will happen if you don't follow the Lord's principles or God's laws. There are certain laws of God to be followed or you will be hurt or punished. Let me briefly take this out of the religious context for a minute and give you some examples to prove my point. There are certain common laws that people need to abide by to keep from getting hurt. For instance, don't place your hand on a really hot stove or your hand will get burned! Don't walk across a busy street on a green light where cars are speeding or you will get killed or seriously injured. There are also positive common laws that you can do to be rewarded. Let's go back to the religious context now, fear no one but God is the same thing as fearing the Lord, which means you should fear the consequences of not following God's laws. You should fear what will happen if you lie, steal, or hurt someone or go against any of God's established moral laws. That's what fearing the Lord really means, what are the consequences of doing wrong or going against God's moral laws. Our ancient Black African ancestors of Egypt interpreted and recorded history's first record of God's moral laws, the 42 Declarations of Innocence. The "42 Declarations of Innocence" or the 42 Negative Confessions was created and religiously followed by our Black African ancestors of the Nile Valley culture in ancient Egypt. 
The 42 Declarations of Innocence or the 42 major laws states as follows: 42 Declarations of Innocence/42 Negative Confessions 1. I have not done wrong to people 2. I have not committed robbery with violence 3. I have not stolen 4. I have done no murder;I have done no harm 5. I have not done evil 6. I have not begun the day by demanding more work than is due me 7. I have not stolen God's property 8. I have not spoken any lies 9. I have not stolen food 10. I have not caused anyone pain 11. I have not committed fornication 12. I have not caused anyone hunger 13. I have not caused anyone to weep 14. I have not killed anyone 15. I have not commanded anyone to kill 16. I have not caused anyone to suffer 17. I have not discussed secrets 18. I have not set my lips in motion (against any man) 19. I have not been angry and wrathful except for a just cause 20. I have not had intercourse with a married woman 21. I have not had intercourse with a married woman(twice) 22. I have not committed acts of impurity or sodomy 23. I have not terrorized anyone 24. I have not transgressed 25. I have not been hot-tempered 26. I have not been deaf to words of truth 27. I have not eaten my heart(done anything to my regret) 28. I have not been violent 29. I have not caused strife 30. I have not been impatient 31. I have not eavesdropped 32. I have not been talkative 33. I have not done evil 34. I have not cursed or opposed the King 35. I have never fouled the water 36. I have not raised my voice 37. I have not cursed or opposed God 38. I have not exalted myself 39. I have not stolen(destroyed) divine offerings 40. I have not stolen the offerings of the departed ones 41. I have not taken bread from a child or blasphemed the God of my city 42. I have not killed sacred cattle For the record, according to Dr. Yosef Ben-Jochannan, the greatest historian and Egyptologist of our time, said that the ten commandments documented in the Bible were taken from the 42 Declarations of Innocence that our ancient Black African ancestors wrote. Dr. Ben also said, just to give you a historical perspective, when the Romans invaded ancient Egypt (the Black African Empire) they converted the North African Church which was the birthplace of Christianity into the Roman Catholic Church. They converted the Black based christian religion implemented by the North African Church into a white based Christian religion that's currently practiced in North America and throughout the world today. Christianity was originally founded by two Black men, one named Pantheous and the other named Botheous. I just had to mention that. Let's get back to the topic at hand. Again, when you "Fear the Lord", you are obeying the principles of God or the laws of God. By following the laws of God, you automatically are worshiping God. I view God as the heavenly holy father and mother of creation, God of our Black African ancestors. When you fear man and not God, you automatically worship the devil. When you fear man you follow the instructions of that man or woman. Silence is endorsing whatever wrongs you are fearful in speaking up about. Therefore, I am committed in speaking the truth regardless of the consequences. Self-preservation is the first law of nature. We must speak out and defend our people regardless of the consequences. The negative consequences are much higher when you decide not to speak the truth due to fear. Our theme on the E4N Blog Site is No Fear: Speak Out! 
I say all of this to say the following: The United States government is structured on the foundation of white supremacy or institutional racism. I must not be silent during this crucial time in history. During a time where a large group of white "Americans" are collectively practicing a high sophisticated form of genocide on our people (African-Americans) in this country. I will prove my point in this E4N Blog through logical deductions. Most African-Americans will not agree with me due to the slave mentality that dominates majority of our people as a result of close to two hundred and fifty years of physical slavery at the hands of white "Americans". White "Americans" systematically created the slave mind that handicaps majority of African-Americans in this country still even in 2017. Even the first Black president suffers from this disease called mental slavery. There are many different plots and plans in which white "Americans" in these covert government agencies participated in the past in trying to destroy African-Americans. However, until the recent plot of using illegal aliens as a shield to crush African-Americans progress, I never seen so much participation by a large group of the white "American" collective in their open compassionate support of illegal aliens. Especially the compassionate support that a large group of whites "Americans" are currently displaying in trying to keep president elect Trump from cancelling the executive order called DACA that president Obama signed on June 15, 2012. Where was the compassionate support from the white "American" collective when president Kennedy signed the executive order that initiated Affirmative Action in this country? Let me answer that question, no where to be found. As a matter of fact, the white "American" collective destroyed Affirmative Action. There is only one reason why an unprecedented large number of white "Americans" and the establish media, Democrats as well as some Republicans are against the president elect Donald Trump and that is his stance on illegal aliens. Illegal aliens are having a significant negative impact on lives of African-American people in the workforce. The signing of DACA exacerbated that impact on the lives of African-American people. I will discuss the negative impact of DACA on the African-American community in details later in this E4N Blog. However, on our sister site, Alkebulan Reference Center (www.ancient-knowledge-breakthrough.net/state-of-emergency.php) in the State of Emergency Series on August 24, 2012, I wrote "Hispanics are Pushing African-Americans out of the work force". Please read that when you get a chance. Hopefully, after reading what I have to say in this current E4N Blog, most of you will agree with me. I really did not want to discuss this issue on our Elements 4 Nature Blog site. However, I can't be silent on this issue. This issue is negatively impacting too many of our people. Again, the foundation of America is rooted in white supremacy or institutional racism as it relates to African-American people. I am not going to spend time writing about institutional racism. There are numerous books written about institution racism. Just go read Neely Fuller's book, "The United Independent Compensatory Code" a book or a guide for the victims of white supremacy in knowing how to counter white supremacy that's practice everyday in this country. 
However, to summarize institutional racism or white supremacy all you need to do is to go back with the saying that most African-Americans grew up saying and that is the following: "If your white, your alright, if your yellow, your mellow, if your brown, stick around but if your Black, get back". A few other phases or names that reflect institutional racism in this country is angel food cake is white, devil's food cake is black. In addition, it's alright to tell a "white lie". These are just a few examples. We as African-American historically suffered and currently suffers the most by far more than any other ethnic group living in this country. Let me first briefly highlight African-Americans history overall because in order to clearly understand today's events you need to know what happened yesterday. The following summary of events occurred in chronological order: 1. The Black African named Imhotep designed the first pyramid in ancient Egypt and was the first multi-genius in history. Imhotep was a physician as well as an architect. 2. Black ancient African Egyptian civilization was the first civilization in history that used a high level of math and science and discovered the first moral code of conduct,the 42 Declarations of Innocence called Maat. 3. White Europeans conquered ancient Egypt and enslaved Black Africans. 4. White Europeans conquered and enslaved all of Africa except Ethiopia. 5. Whites enslaved, murdered and forced millions of Black Africans out of Africa to America. 6. Millions of Black Africans lives were lost in the transportation to America in what is called the Middle Passage or the trans-Atlantic Slave Trade. 7. Black Africans in America lived under physical slavery for over 250 years and currently live under mental slavery. Let's stop right there and go deeper on what happened to us as African-Americans while being oppressed as physical slaves. Basically, we were stripped of our names, language and culture or our way of life. During the over 250 years in physical bandage, we were not allowed to be taught to read. A slave could be killed or have their hand chopped off, foot chopped off, eye plucked out or lynched if he/she was caught reading or even learning how to read. We were traumatized during this process and forced by white "Americans" to live the life styles of white people. Psychologically, we are still afraid that our feet, hands, fingers, legs, arms will get chopped off or lynched, if we picked up a book. Today, we still live with that psychological trauma that we will get punished if we attempt to read. In a study conducted by the research team at New York's Mount Sinai Hospital led by Rachel Yehuda, shows that when someone is traumatized that trauma is reflected in ones DNA and is past to ones offspring from one generation to the next. In theguardian article dated August 25, 2015 it states the following: "changes stemming from the trauma suffered by Holocaust survivors are capable of being passed on to their children, the clearest sign yet that one person's life experience can affect subsequent generations. This research proves that the Jews passed their DNA of fear and trauma from what they experience in concentration camps of the Holocaust to their children." It's not a surprise that today a large percentage of African-Americans don't have a desire to read. That's a result of what we went through while enslaved in this country which is the psychological effects of slavery. 
White people were responsible for creating that slave mind that currently exist of every African-American some more than others. White "Americans" today are responsible for the low reading proficiency rates among African-Americans. It should be the responsibility of white Americans today to take the lead to reverse that slave mind that exist in the minds of African-American today. The following analogy sums up what we as African-American people experienced while living in physical bondage as slaves: A lion is pursued in its natural habitat by its captures. Once the lion is caught, he is chained up and locked up in a cell. The lion is considered by its captors to be "wild". Over a period of time, the lion is trained to do tricks. The more the lion is conditioned and trained, the more the lion can be let loose somewhat to perform tricks in the circus. However, when the lion decides he wants to go back to his natural habitat and tries to break loose, he is shot dead. Just like the lion, we were captured and taken away from our natural habitat or our natural way of life. Just like the lion, we were captured, chained and caged on the plantation. For over 250 years, we were trained to love everything white and hate everything black. We were trained to hate ourselves by white "Americans". We were trained to please and be servants of white people. Like the lion, once we were well trained we were "let free" to perform tricks for white "Americans". Currently in 2017, we as African-Americans are performing in the circus, doing tricks for white people. Once we become conscious and decide or try to free ourselves from this slave mentality within the circus, we are shot down in this country like our freedom fighters or our ancestors of the past. President John F. Kennedy signed the Executive Order #10925 in 1961 mandating Affirmative Action for us as African-Americans. The first president to specifically write an executive order for African-Americans. Since President Kennedy Executive Order #10925 was not enforced, President Lyndon B. Johnson signed another executive order on September 24, 1965 to enforce Affirmative Action. In his speech at Howard University in 1965, Lyndon Johnson asserting that civil rights laws alone are not enough to remedy discrimination said the following: "In far too many ways American Negros have been another nation: deprived of freedom, crippled by hated, the doors of opportunity closed to hope.... But freedom is not enough. You do not wipe away the scars of centuries by saying: now, you are free to go where you want, do as you desire, and choose the leaders you please. You do not take a man who for years has been hobbled by chains, liberate him, bring him to the starting line of a race,saying, 'you are free to compete with all the others,' and still justly believe you have been completely fair... This is the next and more profound stage of the battle for civil rights... To this end equal opportunity is essential, but not enough, not enough". President Obama the first Black President for eight years in office never signed an executive order specifically for African-Americans. That's a shame. That's what a slave does is to serve and please everyone else but himself. African-Americans on the plantation think he did a great job for us. He's been a show president for us. All show and no action. President Obama did not feel the need to sign an executive order for African-Americans. Just look at the condition of our people and history of fighting and struggling this country. 
News flash, we continue to struggle! The traditional African-American family is under siege in this country. However, on June 15, 2012, President Obama signed an executive order called, DACA, for illegal aliens. DACA covers about 800,000 illegal aliens. DACA covers free college education and work permits for the children whose parents committed a crime to enter the United States. Can you believe this? Isn't it kind of strange that white "Americans" did not make a fuss over the DACA executive order like they did Affirmative Action? As a matter of fact, they are so compassionate for the rights of illegal aliens by setting up sanctuary colleges and sanctuary cities. Why? It supports my theory of the high sophisticated genocidal plan of using illegal aliens as a buffer to destroy us as a people. White "Americans" hated and destroyed Affirmative Action. I still can't believe my tax money is funding people who are not citizens to go to college? These illegal aliens are attending colleges that cost sixty-thousand dollars ($60,000)/year in tuition. United States have been the land of milk and honey for illegal aliens the past thirty years. I'm sorry but self-preservation is the first law of nature. Our people must come first! Our people historically have been discriminated against worst than any people in history. Even President Kennedy and President Johnson felt the need to write executive orders specifically for African-American people during a time when it was not popular. President Obama acting like a slave decided to write an executive order for non-American citizens. Hell, we built this country and died for this country. Where is our executive order Mr. President Obama? Historically, the only significant progress that we as African-Americans made in this country were through the executive orders signed by President Kennedy and President Johnson. Why did President Obama decide to conceive and sign the DACA executive order for illegal aliens and not an executive order for us as African-Americans? What I really don't understand is the reaction from our people on the plantation who are walking around praising this man like he did great things for us during his eight year term as president. They praise this man like he walked on water. Why is that? Maybe it's because President Obama seduced all of the Black national syndication radio shows. These Black radio stations host minds were blown because this was the first time they had full access to the president of the United States. These Black radio host drank the Obama's Kool-Aid for eight years so much so that they were convinced that Obama's doo-doo did not stink. No white president would have sign an executive order like DACA. No wonder why white people voted for this man for two terms. He's right, he would have won a third term as president on the plantation. President-elect Trump said that he would do away with the DACA executive order. Don't punk out Trump and go through with your promise. As a matter of fact, in one of his campaign speeches he said, to paraphrase, "Dreamers, I don't agree with DACA, what about African-Americans?" What about African-Americans dreams? "America First" was President-elect Trump theme of his campaign. Here is my open letter to president-elect Trump. 
Since white "Americans" did not fuss over the DACA executive order liked they fussed and systematically destroyed Affirmative Action for African-Americans, white "Americans" should not fuss over the following executive order that I want the president-elect Trump to write specifically for African-Americans: 1. All African-Americans who graduate from high school should get free college education (100% college tuition paid for) to the college of their choice. 2. All African-Americans who graduate from college primarily from historical Black colleges are guaranteed a job in the public/private sector or can choose to continue their education. 3. Assist all African-American children who fathers and mothers are incarcerated to graduate from high school. President elect-Trump, I hope you or your people will read this and White "Americans" better not say one negative word about this executive order for African-Americans if it becomes a reality. I also want Lindsey Graham to embrace it with the same enthusiasm that he is showing for saving DACA or the 800,000 illegal aliens. White "Americans" should embrace it with compassion the same way they are being compassionate about the over 13 million illegal aliens that currently live in this country. If they don't, they will show their true nature. It would also verify the fact that there is a high sophisticated genocidal plan being implemented by a large number of white "Americans" in this country. It's really simple, and just to do an executive order for African-Americans for education because this country enslaved us and denied us the ability to read or write for over 250 years. The devil is the master of deceit. Most of our people are still walking around in 2017 with a mental illness called mental slavery. Most of our people are easily manipulated by the devil. The devil presents things that seem to be good for us to do or participate in when it really will produce a bad result. For instance, we as African-Americans don't need to support, fight, protest for the rights of illegal aliens, refugees or other minorities in the country. We as African-Americans need to dedicate 100% of our time in improving the conditions of our people within our communities all across this country. The eradication of mental slavery among other things need to be a top priority. The eradication of mental slavery need to be worked on (24/7) around the clock. Again, America is structured on institution racism or white supremacy. White mangers in the public and private sectors of this country hires Hispanics, Asians, Arabs, and Indians from India first before they even think about hiring African-Americans in these jobs. Diversity today is defined by incorporating other minorities in the work place without the participation of African-Americans. Believe me, I work about 25 years as a Systems Engineer in the public and private sectors in this country. My last system design as an engineer was called the HazCollect system. I worked for the National Weather Service(NWS) but the design was for the Department of Homeland Security(DHS). Without getting into the technical details, my design significantly sped up all non-weather messages being disseminated to the public like Child Abduction warnings, Evacuation warnings, Earthquake warnings, Volcanoes warnings, Law Enforcement warnings, etc. My system also created the first national message for the president to send to the nation. My system was recently credited in catching the NY bomber. 
They caught the ISIS influenced NY bomber in New Jersey after sending a Law Enforcement regional warning message. I was listening to the radio when the Governor was describing in their press conference how great this system was that caught the NY bomber. It was music to my ears hearing how they were explaining in their press conference how great and instrumental my system was in catching the NY Bomber. But did I get a call from NWS informing me that my system caught the NY bomber and congratulating me on a job well done? Hell No! Nor was I expecting a call from NWS. That's just not the characteristic of an agency structured on institutional racism. By the way, that's not the only solid system that I designed for NWS while working there that should have warrant the employee of the year and the bronze metal awards. Thanks to Tuskegee University's School of Engineering for training me well. I'm saying all of this as a message to African-Americans as Tupac put it, "They Don't Give A Fu*k About Us". They spit, stomp, and defecate on us to hire, Hispanics, Asians, Arabs and Indians from India. It is clear to me that White "Americans" for the past 30 years has been using Hispanics (largest minority group) as a shield to crush and destroy the traditional African-American family. It's a high sophisticated form of genocide. Again, the devil is the master of deceit. Let me ask a question, how many African-Americans are against President Trump's policy of deporting 13-14 illegal aliens? How many African-Americans are against President Trump's policy of building a wall? Let me guess, it's a safe bet to estimate that at least 90% of African-Americans don't like President Trump's policy on illegal aliens. Again, the devil is the master of deceit. The devil sets up situations where you will participate or part take in your own demise or destruction. Self-preservation is the first law of nature. We have to focus on saving our lives because the traditional African-American family is under siege. Again, the eradication of mental slavery should be our primary focus and not fighting for other groups. No other people on this earth have suffered more than African-Americans period. Elements 4 Nature sells George Washington Carver products usually once a year at Hampton's University Homecoming. This past year, Elements 4 Nature was selling Carver products next to a brother who was selling "T" shirts. This brother was excited to know that there were Carver products being sold today on the marketplace. He came up to me and told me that he graduated from Tuskegee University in 2004. So, I asked him what degree did he graduate with? He said, Electrical Engineering. I said, what! So did I. He said what! You did too. I said, yes brother. So we happily gave each other some dap. I asked him who did he work for? He sadly said that he was unable to get a job in the field of engineering once he graduated. I said what? I was shocked. That initial happiness suddenly turned into sadness. I felt the brother's pain. He asked me, who did I work for? So, I started telling the brother my work experiences while working in the public and private sections. I was telling him about the HazCollect system that I designed. He was excited to hear what I had to say and was latching onto every word that came out of my mouth. He was so excited to hear what I did as an engineer, it was like he wanted to get my autograph. It was sad to see. He said man, I wish I had the opportunity to work as an Electrical Engineer. 
It was sad to see that this brother was so heart broken. Hearing this brother and feeling this brother's pain had a significant impact on me. This is one reason why I am so passionate in wanting President Trump to elimination the DACA program. This is also why I am so mad with President Obama for writing the DACA executive order in the first place. Again, DACA not only pays $60,000/year of our tax money for kids who are illegal aliens to go to college for free, but to issue job permits or guaranteed jobs once they graduate from college. Tell me is that fair to the brother who worked hard though all the obstacles to get to college? The answer is no, its not fair. Keep in mind, as physical slaves in this country we as African-Americans were denied for 250 years the ability to learn to read or write while being enslaved. Now, after paying and studying hard to get a college degree in a demanding field like engineering, this brother can't get a job? Please, that's institutional racism for you. Institutional racism or white supremacy was designed to crush and destroy specifically black people. Other minority groups were not and are not the real targets of institutional racism or white supremacy. Every other minority group benefited from the blood that we as African-Americans have shredded in this country. Again, white managers are using other minority groups specifically Hispanics to crush and destroy us by implementing a high sophisticated plan of genocide. White managers are hiring every other minority group but African-Americans and calling it diversity. Every African-American graduating from a historically Black college with degrees in engineering, chemistry, physics, mathematics, computer sciences or any high level science degrees should get guaranteed jobs period. Actually, any African-American graduating from any college in this country with degrees in engineering, chemistry, physics, mathematics, computer sciences or any high level science degrees should get guaranteed jobs. Racist ass Google, Microsoft, Amazon, Facebook, Apple etc. are not hiring African-American scientists. They rather sue president Trump to increase the H-1B visas so they can hire more white European scientists or other scientists from other minority groups. These big racist ass high tech companies are not hiring these African-American scientists. If you want to have a protest, organized one against these large tech companies who refuse to hire African-American scientists. Instead of us as African-Americans focusing on implementing a Black agenda, we get sucked into these protests and fights for immigrants, refugees and other special interest groups by white people. Forget all of these special interest groups organized and funded by white people. We need to focus 100% on the eradication of mental slavery among our own people and save the traditional African-American family from being exterminated. The majority of white people are participating in this high sophisticated plan of genocide on African-Americans and are using Hispanics as a shield to crush the growth and advancement of African-Americans. For instance, in the Washington DC area where I live, there is a Home Depot located in DC on Rhode Island Ave. That particular Home Depot has lots of illegal aliens camped out in the parking lot. These illegal aliens wait for customers going in and out of the store to ask them to do some work for a price. 
However, you do not see any illegal aliens camped out at Lowe's down in Lexington Park, Maryland where I conduct some business also. Mostly white people live in Lexington Park area. This has been going on for about 15 years since the new DC Home Depot was built. Now I understand why it was very hard for me to find a black plumber, electrician, and a handyman. This high sophisticated system of genocide has put most of the black businesses out of business. I talked to my black plumber before the elections and he said that he was voting for Trump because what was happening to most black businesses in his industry. Blatant isolated police killings happen in this country to often. Where one brother or sister dies unjustifiably so by a white or sometimes a black police officer. Mainly these are isolated killings. However, the mass killings are planned and plotted. Whenever anyone plans to kill a large number of people then you are dealings with a high sophisticated plan of deceit. Again, the devil is the master of deceit. The surprise reaction of a large number of White "Americans" and the main stream media in showing their new discovery of compassion for illegal aliens is unbelievable. By enthusiastically defending illegal aliens and immigrants in general and the violent reaction to President's Trump plan to depot illegal aliens is all the proof that I need to verify that this high sophisticated plan of genocide is being implemented to the fullest. The traditional black family in this country is under siege. You may ask, what is the traditional black family or African-American family? The traditional black family consist of a black man as the father and a black woman as the mother raising black children or offspring. Again, in order to try to get away with killing a large number of people, it must be rooted in deceit. The other deceitful plan that is being implemented by a large group of White-Americans designed in destroying the traditional African-American family is inter-racial relationships. Interracial Marriages and Relationships Strengthens White Supremacy and Institutional Racism CNN has been doing a lot of stories lately about white supremacist. Let me add to the definition of what a white supremacist is. A white supremacist is one who marries or is currently having sexual intercourse with a black woman or man. This added definition is based on research findings by the greatest social scientist of our time, psychiatrist, Dr. Bobby E. Wright. In addition to Dr. Wright's research, Dr. Frances Cress Welsing and Dr. Amos Wilson research findings drew the same conclusions. These were great African-American scholars who dedicated their lives in studying the African-American mind or soul. Our people need to realize that as African-Americans, interracial relationships strengthens white supremacy and institutional racism. White supremacy and institutional racism have been the very cancers that we have been protesting and fighting against in this country. We as African-Americans need not try to reinvent the wheel of knowledge. These are the three greatest psychiatrists scholars of our time. These three great scientists have done the work, research and studies and all have concluded that inter-racial relationships are suicidal and detrimental to the survival of the traditional African-American family. I'm tired of hearing another one of my cousins marrying another white woman or a women from another race or ethnic group. It's depressing! 
I'm hoping none of my grown children don't fall into that trap. Most of our people think it's a good thing. That's far from the truth. They think that by marrying into a white family you will be exposed to resources or connections that would be beneficial. Not true! Case in point, one of my cousins who married a white woman is an attorney. A few years back when my daughter was a senior in high school, she got accepted to attend college at Northwestern and Notre Dame Universities. My daughter was excited to be accepted at both schools, however, she wanted to attend Northwestern. It was hard for any student to get accepted in Northwestern University's School of Engineering. My daughter graduated with high honors (4.5 GPA). Earlier in the process of applying to colleges, I assisted my daughter in applying for these minority based college scholarships. Most of the minority based scholarships had financial household income limitations on them. In order to qualify for most of these scholarships your combine income household level had to be near poverty levels. This is institutional racism that targets the accumulation of wealth established in the traditional African-American based families. In addition, this favors more Hispanic families who just moved to the United States recently. Northwestern University cost $60,000/year to attend. I could not afford to send my daughter to Northwestern University without being sucked out of all or most of our family's accumulated wealth. In addition, it would have threaten not having enough college tuition money left to send my son to college this coming year. That's why DACA bothers me so much. Again, President Obama signs an executive order to fund up to 800,000 illegal aliens to attend $60,000/year colleges with my tax money and I can't even afford to send my daughter to the college of her choice. That's absolutely wrong and unfair. Especially what we historically went through as African-Americans living in this country. Financial aide assistance at Northwestern University was structured the same way as all of the other scholarships, they took your total household income into consideration before giving you any money. My cousin attorney who married the white women at the time was based in Chicago. So I e-mailed my cousin and told him that my daughter may be attending either Notre Dame or Northwestern Universities. My cousin was excited to hear that my daughter might be attending school in Chicago. I asked my cousin in my next e-mail did he know anyone who had connections to either school because she needed financial assistance in order to attend either school. He enthusiastically responded yes! He told me his father-in-law was one of the senior members of the administration staff at Northwestern University who was on the Board of Trustees. He had a high influential position at the University. My cousin seem to be very optimistic about her chances of getting financial assistance. My daughter was excited and was checking with me everyday on the status. A few weeks passed by and I called my cousin to get a status. My cousin responded in an emotional way that his father-in-law did not do, or as he put it, did not want to do a damn thing for my daughter. I respect my cousin for telling me the truth about his father-in-law. So there you have it! For those confused brothers and sisters who think by marrying into a white family will exposed to you more resources, not true in this case. Just like Dr. Bobby E. Wright and Dr. 
Frances Cress Welsing have stated, black women and men are being chosen by whites and lured into these interracial relationships. The strategy or goal by "White-Americans" is to block blacks from forming powerful black couples to build towards having a strong traditional African-American family. There are generally three types of African-Americans that whites target to be lured into these interracial relationships: 1. Whites target African-Americans who have the brightest of the brightest minds among our people. These particular black people are mostly found in colleges and universities, specifically the Ivy League schools. Ivy League schools produce more interracial marriages and relationships than any other type of college or university. Believe me, I know because I married a sister who graduated from Princeton University. I went to a few of her reunions and that's all you saw were interracial marriages and couples at these reunion events. By the way, my wife or I should say my ex-wife went to Princeton during the same time that Michelle Obama attended Princeton. My wife left me recently because she was not fully down with the Black Agenda. Even with the negative action that my wife took in leaving me, I will never be with anyone else but another Black woman period. 2. Whites target African-Americans athletes and talented people who may be projected to make a lot of money after graduating from college. 3. Whites target African-Americans who come from a large family base. These are blacks who have a potential to make a lot of babies. The goal of all these tactics is to prevent strong black families or a strong traditional African-American family from forming. These are similar tactics that were used by the white slave masters on us while we were enslaved on the physical plantation. However, the slave master only reacted once strong black families were formed on the plantation. The slave master broke up those strong black families and made them weaker by selling them to other slave plantation owners. Not only our survival is in jeopardy but our unique cultural creativity is being diminished through inter-racial relationships and our cultural creativity is being endangered of completely being destroyed. Dr. Bobby E. Wright stated in one of his speeches, "Sanction (socially ostracize) completely and refuse to support those Blacks who marry and cohabit with Whites and members of other races". Which means African-Americans should not have inter-racial relationships with Asians (Chinese, Japanese, Koreans), Arabs, Hispanics, Puerto Ricans, Indians from India, Mexicans and Europeans (Whites). The most deadly, however, out of all interracial relationships that an African-American can get involved with is a European or White person. Our people don't realize the magnitude of what's happening to our traditional African-American families. We as Black people don't realize that when Black relationships are in jeopardy, the survival of the traditional Black family and Black culture are also in jeopardy. Our whole existence is in jeopardy. The creativity that we produce within our culture is in jeopardy. For instance, "Black Love" music is a direct product of the love experiences shared between the Black man and Black woman. You would not have the beautiful passionate lyrical love songs/music, if it were not for the past love relationships shared between the Black man and woman. 
Get rid of Black love relationships, you will eliminate the deep and passionate lyrical love songs/music that we all grew up on. If you take a closer look at today's music, you can see the connection between the current state of Black male/female relationships and the current state of Black love songs. Black love songs and the people who sing them have significantly been reduced from past years solely because of the weak state of Black male/female relationships. It's no secret why the 1960's produced so many strong and powerful "Black Love" songs. During the sixties and early seventies, the collective Black consciousness was at the highest level since we have lived in this country. Those "Black Love" songs written in the sixties and early seventies without question was due to the higher collective conscious levels or a stronger love experiences between the Black man and Black woman. From the deep pulsating music of old school groups like the O'Jays, Isley Brothers, Chi-lites, Delfonics, Stylistics, Blue Magic and the Temptations to the solo performers like Stevie Wonder, Smokey Robinson, Otis Redding, Gladys Knight, Roberta Flack, and Aretha Franklin, these artists and groups produced strong and powerful "Black Love" songs. These old school "Black Love" songs were directly related to the unity state between the Black man and Black woman. There were numerous "Black Love" songs in the seventies that equated "Black Love" or the Black woman to heaven. One song stated just that and it was called, "This Must Be Heaven" by Brainstorm. Ohio Players song, "Heaven Must Be Like This", was another example of the high level of collective consciousness that we as African-Americans were expressing through our music. Equating the Black woman or "Black Love" to heaven is an ancient Black African or Egyptian concept that our ancestors lived by back in the day. "Black Love" songs can be used as a measurement of the unity between the Black man and Black woman and a measurement of our collective consciousness. Music is the highest form of communication because it unifies or synchronize the Body, Soul, and Spirit in a harmonic way. Obviously, the love that was shared between the Black man and Black woman during the sixties and seventies was much stronger than it is today due to the significantly less number of "Black Love" songs being produced in today's music. We as African-Americans need to re-focus our efforts on re-establishing and strengthening "Black Love" for our survival and to preserve and strengthen our Black Culture. The music that we produce is just one area of creativity within the Black Culture that's being diminished through inter-racial relationships and other genocidal tactics imposed by whites who operate within the system of white supremacy. The impact of interracial relationships and other genocidal tactics are having a significant impact in diminishing Black Cultural creativity in every field of study. Let me try to break this issue of interracial relationships within the African-American community down to its lowest common denominator. Let's image that white supremacy is reversed to a society where black supremacy is practiced. Which means in this case, black supremacy is defined the same way that white supremacy is practiced and implemented today in America. Now under the system of black supremacy, the angel's food cake is black or chocolate and the devil's food cake is white or Vanilla. 
Under the system of black supremacy, young whites say the following growing up: "If your black, your ahead of the pack, if your brown, stick around, if your yellow, your mellow but if your white, your a grave-site. Under the system of black supremacy, Black people are given credit for discoveries and accomplishments of things that they did not do. In addition, under the system of black supremacy, all the positive things that black people do are significantly amplified and all of the negative things are significantly suppressed. Under the system of black supremacy, the negative things that white people do are significantly amplified and the positive that white people do are suppressed. From a historical standpoint, keep in mind that under the system of black supremacy, white people were enslaved in this country by black people. So now we have a system of black supremacy being implemented and maintained by black people in America. Under the system of black supremacy, whites are forced to worship a black religion, a Black image of God, speak a different language (Swahili), live and support a black based culture. Under the system of black supremacy, it would be alright for black people to marry and have sexual relationships with white people because when a white person marries a black person under the system of black supremacy, she/he submits to the black way of life. A white person who marries or have sexual relations with a black person under the system of black supremacy endorses black supremacy. White people who marry or have sexual relations with black people under the system of black supremacy are the least intelligent ones among their people and are the ones who have the least cultural identity among white people even through they are college degree professionals working as doctors, engineers and lawyers. Interracial relationships are not a threat to the continuation and expansion of the system of black supremacy. As a matter of fact, inter-racial relations strengthens the system of black supremacy. Children or offspring of interracial relationships that grow under the system of black supremacy identify with the black side of their family and they reject the white side of their family. I'm sure white people would reject interracial relationships under the system of black supremacy. Therefore, today every African-American should reject inter-racial relationships under the current system of white supremacy. I believe even Dr. Martin Luther King Jr. would not agree on promoting and encouraging African-Americans to participate in interracial relationships under the system of white supremacy. Dr. King believed in equality and not the amplification of white supremacy! White supremacy is fine as long it's not used to suppress black people or any other ethnic group. Black supremacy is fine as long as it's not used to suppress white people or any other ethnic group. Every culture has a right to set up a structure that supports and promotes their own culture. Throughout history, every culture's image of God was a reflection of their own image. Our ancient ancestors, the Black Africans of Ancient Egypt had a Black God that promoted our black culture. White Europeans adopted Christianity and change the Black Jesus to a White Jesus to reflect and promote European or white culture. This only means that there is no such thing as a universal consciousness. The reality is that you only have cultural consciousness. Again, let's not re-invent the wheel of knowledge, Dr. Frances Welsing and Dr. Bobby E. 
Wright emphasized the importance of rejecting inter-racial relationships within the African-American community. They dedicated their lives in studying the Black African mind and we as African-Americans should follow their recommendations to the fullest. The late great Dr. Bobby Wright, who was one of the greatest analytical thinkers and social scientist of our time, stressed the importance of a Black Social Theory or a Black Agenda. Dr. Bobby Wright defined certain social values or rules as the basis or foundation for The Black Social Theory or Black Agenda. Those special social rules defined by Dr. Bobby E. Wright as the foundation of a Black Social Theory are highlighted as follows: We as African-Americans: 1. Must place the interest of our race above all other interests. 2. Sanction(socially ostracize) and punish those Blacks who operate against our interests. 3. Sanction(socially ostracize) completely and refuse to support those Blacks who marry and cohabit with Whites or members of other races. 4. Make a conscious effort to fully accept(instead of rejecting) and embrace our people who have a higher concentration of melanin or dark-skinned Blacks when adopting Black children or looking for a mate. 5. No Black person can be considered a leader or hold leadership position who marries outside the race. 6. Stop using the word Black in negative terms (nigger, Tom, etc.) when addressing each other. 7. Consciously expose our children to strong positive Black images. Black professionals need to spend time volunteering their services to independent Black schools. 8. Have Black Artists to show strong Black images. The campaign to push and promote to increase inter-racial relationships by whites among African-Americans has migrated to large companies who now use TV advertisements to promote inter-racial relationships. These large companies who run these TV Ads promoting inter-racial relationships among African-Americans are strengthening white supremacy and institutional racism. Since our people are currently in the the mode to protest and boycott, then boycott these large companies who are perpetuating strengthening white supremacy and institutional racism by running these inter-racial TV ads. Let me be clear here, I am all about self-preservation, self-love or "Black Love" and supporting and implementing strategies from our renowned scholars that will help save us as a people from being exterminated by the system of white supremacy. I don't hate anyone! I don't hate anyone's race, religion or sexual orientation. However, whenever black people place emphasis on self love or "Black Love", whites people and others claim that we are hating. Let me make this crystal clear, we as African-Americans in this country don't hate anyone including most whites who have and continue to be enemies of black people in this country. It's not in our culture to hate. Now let's not get confused, however, you don't have to hate your enemy to acknowledge who your enemy is. Does God hate the devil? I say No. God is all about love. However, God is completely opposite of what the devil represents. I do believe however, to have no hatred in your heart or any fear of your enemy is a high goal to achieve. Dr. King had it right for all but one exception, he should have emphasized self preservation and self love first because self preservation is the first law of nature. Dr. King fell short of that goal. The ultimate goal to achieve is love for self or "Black Love". When I was a student at Tuskegee, one of Dr. 
King's close followers (can't remember his name), spoke to the Tuskegee student body. I will never forget the speech that this man delivered. This man, who was walking on crutches, questioned the strategy of Dr. King because he said in his emotional speech as tears poured out of his eyes, that every young child that was locked up in these demonstrations or protests were sexual abused by the white prison guards. Thousands of young children lives were destroyed in these protests according to him. According to this brother, everyone within the movement knew about this but was hush, hush and kept it a secret. Shocked and in disbelief, I and most of the Tuskegee student body had tears running down our faces after his speech. We need to focus on loving ourselves more, "Black Love" is what we need the most. Mathew 5:5 states, "Blessed are the meek, for they shall inherit the earth", is a very powerful verse from the Bible. God especially protects those who are not afraid to speak the truth and do not have any hatred in their hearts. Another strong and potent biblical verse states in John 8:32, "If you continue in My word, you are truly My disciples. Then you will know the truth, and the truth will set you free." Collectively we as African-Americans know that we are not free. Does that mean collectively, we don't know the truth? I do know that you must speak the truth in order to receive the absolute truth that's inside or within your heart. I strongly feel that these biblical verses will "come to life" and work for you more so if there is no fear and no hatred in your heart. In order to receive or to be in tune with the truth within one's self or one's heart, you must not be afraid to speak the truth in the presence of your enemy. Creativity within happens at the maximum level when no fear and hatred exists within your heart. The DNA within every African-American woman and man is the same DNA of our ancient Black African Egyptian ancestors who built the Great Pyramids of Giza and who established the first advanced civilization that excelled in science and mathematics. Again, in order to receive the truth or access the truth that's inside of you which is ancient knowledge or the same knowledge that our wise ancient Black ancestors of the Nile Valley utilized in the past, you must not be afraid to speak the truth. Speaking the truth as well as implementing "Black Love" or strong self love significantly strengthens that spiritual connection to our ancient Black African Egyptian ancestors to access more ancient knowledge. George Washington Carver had a strong spiritual connection to our ancient Black ancestors of the Nile Valley. Carver proved it by re-discovering and reproducing the Egyptian Blue color that was only produced by our ancient Black Egyptian ancestors. The Egyptian Blue color was not just another color but it had a spiritual characteristic associated with the color. For instance, if you shine red light on the Egyptian Blue color, it will emit infrared light. Now emitting infrared light or an infrared light signal is the same signal used in remote control technology of today. The ancient Egyptians used the Egyptian Blue color in most of their sacred tombs. Carver was the only one to reproduced that sacred Egyptian Blue color. I was proud to be one of the brothers who made a contribution in raising this revolutionary information about Carver recreating the Egyptian Blue color. This information about Carver recreating the Egyptian Blue color was suppressed by the white establishment. 
Go download my presentation given at Tuskegee's Agricultural Conference in 2014 that's entitled "George Washington Carver & The Ancient Egyptians Connection". In that presentation, I explain exactly how the knowledge on Carver recreating the sacred ancient color called Egyptian Blue was raised by me and others. Like Dr. King once said, "The truth crushed to the ground, shall rise again" was proven in this case. Again, the scriptures of the Bible are very powerful if you trust in the Lord while having no hatred in your heart. Everyone of us is living on this earth for a reason. I feel it's my duty to speak and disseminate the truth as I see it. Sojourner Truth, once said, "I feel safe in the midst of my enemies, for the truth is all powerful and will prevail." Minister Louis Farrakhan is the only Black Leader who have consistently spoke the truth without fear of our enemy. We need to follow Minister Farrakhan's lead. Writing this article is just a small contribution towards helping to uplift the collective consciousness of Black people living in this country. We as African-Americans need to push or implement this Black Agenda or Black Social Theory that Dr. Bobby E. Wright talked about while he was on this earth. This Black Agenda is really our Black survival. We need to embrace it! It emphasizes "Black Love" and "Self Love" which we desperately need to stress over and over again. This Black Agenda is consistent with the philosophy of the Honorable Elijah Muhammad, Marcus Garvey, Booker T. Washington, Dr. Chancellor Williams, Dr. Yosef Ben-Jochannan, Malcolm X, Dr. Frances Cress Welsing, Dr. John Henrik Clarke, Dr. Khalid Muhammad, Dr. Amos Wilson and many more. These are our great renowned scholars and social scientists who emphasized "Black Love" and "Self Determination". African-Americans are in an active war with white supremacy, whether you acknowledge it or not. We are in a daily 24/7 struggle to survive under this psychological war. A psychological warfare that we as African-Americans collectively are the least fit to fight. We lost our language, religion, names, and country at the hands of white "Americans". We are holding on with a life-preserver in a sea where other cultures have boats, ships and vessels to navigate. Therefore, it's my duty to mention the majority of white "Americans" who are showing passionate support of illegal aliens and immigrants by ensuring that they are safe in boats, ships and vessels and at the same time leave Black people in life-preservers to be killed by sharks. Leave us in the life-preservers that they forced us in when we were on the physical slave plantation. Its my duty to speak the truth on what former president Obama did not do for Black people the eight years that he was in office. It's my duty to speak the truth on what the Civil Rights commissioner recently said in their report about the negative effects on jobs that illegal aliens and immigrants are having on Black men being employed in this country. I'm going to speak the truth without fear on every negative thing that effects African-Americans in this country. White "Americans" are the most deceitful and hypocritical people on the planet as it relates to the historical relationship with African-Americans. Again, let me briefly highlight their collective negative actions against Black people in this country. We as African-Americans were not only enslaved physically for over 250 years but mentally also which was more devastating. 
While in physical captivity on the slave plantation for 250 years, our language, names, religion and our cultural way of life was replaced with white names, language, religion, and the white cultural way of life. White "Americans" made African-Americans mental slaves. The Emancipation Proclamation which was passed in 1863 never freed us from the mental slavery that White "Americans" produced in us while in physical captivity for 250 years on the US plantations. The proof of their wicked intentions of never wanting to reverse the slave mind within African-Americans is a result from their pass and present actions. The natural flow of Knowledge is to passed knowledge from one generation to the next generation. Through the process of White "Americans" making us mental slaves, we lost the ability to pass our cultural knowledge from one generation to the next. Today, operating with that dominate slave mind, we as African-Americans pass slave knowledge down to our future generation. White "Americans" knowledge was never interrupted, therefore they continue to pass knowledge down to the future generations of white children. White "Americans" continue to pass the slave master knowledge to the future generation of slave masters. White "Americans" had an opportunity to work to reverse the slave mind that they produced in African-Americans during the writing of the Emancipation Proclamation but they refused to and they never had any intentions to do so. I still don't understand today, how and why a Black man or Black woman can marry or have any romantic relationship with any of the current generation of white people? Let me answer that question, its the slave mind that makes these unintelligent decisions. The current generation of White "Americans" are just as bad if not worst than what their ancestors did to our people on the slave plantations in the past. Again, White "Americans" had no intentions of reversing the slave mind that currently exists in African-Americans today. The proof of that fact is by the reaction of White "Americans" to the conscious efforts of our people who consciously realized that this collective slave mind that exists among our people was a significant problem towards the progression of our people. Black conscious leaders like Marcus Garvey, the Honorable Elijah Muhammad, Malcolm X, Carter G. Woodson and others work exhaustively to eradicate mental slavery among African-Americans. White "Americans" used their resources of this country like the FBI, CIA, and the NSA to quash that effort. This country deported Marcus Garvey and crushed his movement. This country infiltrated the Honorable Elijah Muhammad organization, The Nation of Islam. The Nation of Islam(NOI) had multiple businesses throughout the United States. The NOI had land and farms to grow food for our people. The FBI poisoned their farms and infiltrated their business. The CIA assassinated Malcolm X and tried to blame it on the NOI. Malcolm X even said before he was killed and I quote, "This is bigger than the Muslims". The FBI also infiltrated and destroyed the Black Panther Party. All these conscious individuals and groups had a common goal and that was to eradicate mental slavery from the minds of our people, Black people living in this country. The current generation of White "Americans" work in these covert organizations to maintain that slave mind within African-Americans that was produced by their white slave master ancestors. We see that in their treatment of my company Elements 4 Nature(E4N) today. 
The current generations of White "Americans" is working 24/7 to maintain that slave mind within our people by working to destroy black based companies like E4N. The current generation of White "Americans" maintain the FBI COINTELPRO program that J. Edgar Hoover created to destroy Black organizations, Black Pride and Black Love movements. We at E4N know that these covert white agents intimidate companies who supply us with materials to make our products. We know that these covert agents intimidate and influence businesses, who want to carry the Carver products in their stores, not to do business with our company. I'm just saying, the current generation of White "Americans" is doing the same thing that their white slave masters did on these plantations in the past. So, how can you marry or have sexual relationships with anyone from the current generation of white people, Black man and woman? The answer is slave mentality is the only reason. Therefore, what I just mentioned is more than enough proof of white "Americans" intentions to keep African-Americans with the slave mind that they produced while on the plantation for 250 years. What makes White "Americans" actions more hypocritical is their passionate support of illegal aliens and migrates. White "Americans" passionate support of sanctuary cities and states for people who are not even US citizens is another proof of their hypocrisy towards African-Americans. Why is that? Why do they leave us in a sea with only a life preserver on to be eaten up by sharks and at the same time are passionately trying to rescue everyone else? To Be Continued in Fear the Lord (Part 2) (Page 1 of 2, totaling 9 entries) » next page Copyright ©2008-2020 Elements 4 Nature LLC.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,808
The National Assembly of Lebanon (Arabic: مجلس النواب, Majlis an-Nuwwab; French: Assemblée Nationale Libanaise) is the unicameral parliament of Lebanon. The National Assembly consists of 128 members, elected for a term of four years. Adult Lebanese citizens have the right to vote and cast their ballot for the candidate of their choice in their electoral district. All seats are reserved for the larger religious minorities of Lebanon. Until 1989 the National Assembly had 99 seats, with Christians entitled to the most seats (54); Muslims, although they had probably made up the majority of the population since the 1960s, were entitled to 45 seats. At the end of the Lebanese Civil War, the Taif Agreement was concluded in 1989, after which the number of seats in the National Assembly was raised to 128, with Christians and Muslims each receiving 64 seats in parliament.

Powers: The parliament elects the president and must approve the cabinet. The president must be elected by a two-thirds majority. Although the prime minister is appointed by the president, he and the other ministers must win the confidence of parliament. Other important powers of the National Assembly include approving legislation and expenditures.

Speaker: The speaker of the National Assembly is elected for a term of four years (and may be re-elected). He must be a Shiite. The speaker is part of the so-called triumvirate (president, prime minister, speaker of parliament) who in practice hold state power. The speaker of the National Assembly has extensive powers and can veto any law. The current speaker of the National Assembly has been Nabih Berri since 1992.
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,104
{"url":"http:\/\/forums.slipstick.com\/threads\/95582-shortcut-for-attaching-the-same-file-repeatedly\/","text":"# Shortcut for attaching the same file repeatedly\n\n#### RJL\n\nNew Member\nI frequently attach one specific file to outgoing messages (i.e., it is the same file every time). I am looking for a short cut for doing this. I have already put the attach-file command on the quick access bar. Unfortunately, I do not want to put the text into the body of the message because it contains images, so Quick Steps doesn't help me.\n\nIdeally, I'd like to click an icon or use a keyboard shortcut to attach this file. I am using Outlook 2013 with Windows 10.\n\n#### Diane Poremsky\n\nSenior Member\nCan you use a macro? i have some here - although they are a bit more than you need but could trim the middle out to make it perfect to open a message with the file attached.\nUse a Macro to Attach Files to New Messages\n\nBut.. you have to think to run the macro and it doesn't help if you reply with attachment (although a macro could do that too) It might be better to have a macro that adds the attachment to the message you are composing when you run it - like this simple macro - run it any time you are composing a message to add the attachment.\n\nCode:\nPublic Sub AddAttachment()\nDim objItem As Object\nSet objItem = Application.ActiveInspector.CurrentItem\n\nstrAtt = \"D:\\path\\to\\file.docx\"\n\nSet objItem = Nothing\n\nEnd Sub\n\n#### RJL\n\nNew Member\nA macro would be fine. However, I have not used macros before, though I have tried to create a macro to solve this problem by creating a macro by recording a sequence of steps. It didn't work.\n\nWhat does the macro below do? Is there a place you can point me to that would show me how to install the macro?\n\nActually, in some cases I will attach the file when replying to messages as well as when composing a new message.\n\nCan you use a macro? i have some here - although they are a bit more than you need but could trim the middle out to make it perfect to open a message with the file attached.\nUse a Macro to Attach Files to New Messages\n\nBut.. you have to think to run the macro and it doesn't help if you reply with attachment (although a macro could do that too) It might be better to have a macro that adds the attachment to the message you are composing when you run it - like this simple macro - run it any time you are composing a message to add the attachment.\n\nCode:\nPublic Sub AddAttachment()\nDim objItem As Object\nSet objItem = Application.ActiveInspector.CurrentItem\n\nstrAtt = \"D:\\path\\to\\file.docx\"\n\nSet objItem = Nothing\n\nEnd Sub\n\n#### Diane Poremsky\n\nSenior Member\nA macro would be fine. However, I have not used macros before, though I have tried to create a macro to solve this problem by creating a macro by recording a sequence of steps. It didn't work.\n\nWhat does the macro below do? Is there a place you can point me to that would show me how to install the macro?\n\nActually, in some cases I will attach the file when replying to messages as well as when composing a new message.\n\nThe macro I posted will work on any compose message (new or replies) - but you need to click a button to run it. It's not automatic.\n\nThis shows how to use macros How to use Outlook's VBA Editor\n\nSent from my iPad using Tapatalk\n\n#### RJL\n\nNew Member\nThe macro works! However, I'm having trouble finding the file that needs to be digitally signed. 
What is the filename?\n\n#### Diane Poremsky\n\nSenior Member\nif you are digitally signing the macro, you will sign the entire macro project through the vb editor, not sign a specific file.\n\n#### RJL\n\nNew Member\nThanks for the quick response. I mis-read the instructions on the VBA editor, re-checked and figured out my mistake.\nThanks for all your help. This did the trick.","date":"2019-06-25 15:30:19","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6009166240692139, \"perplexity\": 1317.380602843163}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-26\/segments\/1560627999853.94\/warc\/CC-MAIN-20190625152739-20190625174739-00510.warc.gz\"}"}
Q: How to incorporate user input into lookahead and lookbehind assertions in regex

How do I incorporate user input together with a lookahead/lookbehind assertion in regex to obtain the context of the word?

    import re

    user_term = input('Enter a term: ')
    word = 'Hello, this is an autogenerated message. Do not reply'
    res_bef = re.search(r'(\w+-?,?.?\s){3}(?=autogenerated)', word)
    print(res_bef.group(0))

Currently, I'm manually changing this part of the code, (?=autogenerated), to get the terms I want, but I want the code to be more flexible and take any user input.

A: You can format the user input in (the braces are doubled so that {{3}} survives str.format as a literal {3}):

    import re

    user_term = input('Enter a term: ')
    word = 'Hello, this is an autogenerated message. Do not reply'
    res_bef = re.search(r'(\w+-?,?.?\s){{3}}(?={user})'.format(user=user_term), word)
    print(res_bef.group(0))
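A brief editorial addition to the answer above (not part of the original thread): if the user's term can contain regex metacharacters (for example "C++" or "reply?"), it is safer to escape it before formatting it into the pattern. re.escape is the standard-library helper for this:

    import re

    user_term = input('Enter a term: ')
    word = 'Hello, this is an autogenerated message. Do not reply'
    # re.escape neutralizes any metacharacters in the user's input so the
    # lookahead matches the term literally.
    pattern = r'(\w+-?,?.?\s){{3}}(?={user})'.format(user=re.escape(user_term))
    res_bef = re.search(pattern, word)
    if res_bef:  # re.search returns None when the term is absent
        print(res_bef.group(0))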
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,030
Eruvin 62

October 10, 2020 | כ״ב בתשרי תשפ״א

What are the laws regarding carrying in a courtyard where a Jew and a non-Jew are living? Does it matter if it is one Jew or two Jews living there? On what principles are these laws based? The gemara discusses the importance of the law to not teach a halacha if there is a more senior rabbi there – he should be the one to answer the question.

גמ׳ יתיב אביי בר אבין ורב חיננא בר אבין ויתיב אביי גבייהו ויתבי וקאמרי בשלמא רבי מאיר קסבר דירת גוי שמה דירה ולא שנא חד ולא שנא תרי

GEMARA: Abaye bar Avin and Rav Ḥinana bar Avin were sitting, and Abaye was sitting beside them, and they sat and said: Granted, the opinion of Rabbi Meir, the author of the unattributed mishna, is clear, as he holds that the residence of a gentile is considered a significant residence. In other words, the gentile living in the courtyard is considered a resident who has a share in the courtyard. Since he cannot join in an eiruv with the Jew, he renders it prohibited for the Jew to carry from his house to the courtyard or from the courtyard to his house. Consequently, the case of one Jew living in the courtyard is no different from the case of two Jews living there. In both cases, the gentile renders it prohibited for carrying.

אלא רבי אליעזר בן יעקב מאי קסבר אי קסבר דירת גוי שמה דירה אפילו חד נמי ניתסר ואי לא שמה דירה אפילו תרי נמי לא ניתסר

But Rabbi Eliezer ben Ya'akov, what does he hold? If you say he holds that the residence of a gentile is considered a significant residence, he should prohibit carrying even when there is only one Jew living in the courtyard. And if it is not considered a significant residence, he should not prohibit carrying even when there are two Jews living there.
אמר להו אביי וסבר רבי מאיר דירת גוי שמה דירה והתניא חצירו של נכרי הרי הוא כדיר של בהמה Abaye said to them: Your basic premise is based on a faulty assumption. Does Rabbi Meir actually hold that the residence of a gentile is considered a significant residence? Wasn't it taught in the Tosefta: The courtyard of a gentile is like the pen of an animal, i.e., just as an animal pen does not render it prohibited to carry in a courtyard, so too, the gentile's residence in itself does not impose restrictions on a Jew. אלא דכולי עלמא דירת גוי לא שמה דירה והכא בגזירה שמא ילמד ממעשיו קא מיפלגי Rather, this explanation must be rejected, and the dispute in the mishna should be understood differently: Everyone agrees that the residence of gentile is not considered a significant residence, and here they disagree about a decree that was issued lest the Jew learn from the gentile's ways. The disagreement is with regard to whether this decree is applicable only when there are two Jews living in the courtyard, or even when there is only one Jew living there. רבי אליעזר בן יעקב סבר כיון דגוי חשוד אשפיכות דמים תרי דשכיחי דדיירי גזרו בהו חד לא שכיח לא גזרו ביה רבנן The disagreement should be understood as follows: Rabbi Eliezer ben Ya'akov holds that since a gentile is suspected of bloodshed, it is unusual for a single Jew to share a courtyard with a gentile. However, it is not unusual for two or more Jews to do so, as they will protect each other. Therefore, in the case of two Jews, who commonly live together with a gentile in the same courtyard, the Sages issued a decree to the effect that the gentile renders it prohibited for them to carry. This would cause great inconvenience to Jews living with gentiles and would thereby motivate the Jews to distance themselves from gentiles. In this manner, the Sages sought to prevent the Jews from learning from the gentiles' ways. However, in the case of one Jew, for whom it is not common to live together with a gentile in the same courtyard, the Sages did not issue a decree that the gentile renders it prohibited for him to carry, as the Sages do not issue decrees for uncommon situations. ורבי מאיר סבר זמנין דמקרי ודייר ואמרו רבנן אין עירוב מועיל במקום גוי ואין ביטול רשות מועיל במקום גוי עד שישכיר וגוי לא מוגר On the other hand, Rabbi Meir holds that sometimes it happens that a single Jew lives together with a gentile in the same courtyard, and hence it is appropriate to issue the decree in such a case as well. Therefore, the Sages said: An eiruv is not effective in a place where a gentile is living, nor is the renunciation of rights to a courtyard in favor of the other residents effective in a place where a gentile is living. Therefore, carrying is prohibited in a courtyard in which a gentile resides, unless the gentile rents out his property to one of the Jews for the purpose of an eiruv regardless of the number of Jews living there. And as a gentile would not be willing to rent out his property for this purpose, the living conditions will become too strained, prompting the Jew to move. מאי טעמא אילימא משום דסבר דלמא אתי לאחזוקי ברשותו הניחא למאן דאמר שכירות בריאה בעינן The Gemara poses a question: What is the reason that a gentile will not rent out his property for the purpose of an eiruv? 
If you say it is because the gentile thinks that perhaps they will later come to take possession of his property based on this rental, this works out well according to the one who said that we require a full-fledged rental, i.e., that rental for the purpose of an eiruv must be proper and valid according to all the halakhot of renting. אלא למאן דאמר שכירות רעועה בעינן מאי איכא למימר דאתמר רב חסדא אמר שכירות בריאה ורב ששת אמר שכירות רעועה However, according to the one who said that we require only a flawed, symbolic rental, i.e., all that is needed is a token gesture that has the appearance of renting, what is there to say? The gentile would understand that it is not a real rental, and therefore he would not be wary of renting out his residence. As it was stated that the amora'im disputed this issue as follows: Rav Ḥisda said that we require a full-fledged rental, and Rav Sheshet said: A flawed, symbolic rental is sufficient. מאי רעועה מאי בריאה אילימא בריאה בפרוטה רעועה פחות משוה פרוטה מי איכא למאן דאמר מגוי בפחות משוה פרוטה לא והא שלח רבי יצחק ברבי יעקב בר גיורי משמיה דרבי יוחנן הוו יודעין ששוכרין מן הגוי אפילו בפחות משוה פרוטה Having mentioned this dispute, the Gemara now clarifies its particulars: What is a flawed rental, and what is a full-fledged one? If you say that a full-fledged rental refers to a case where one gives another person a peruta as rent, whereas in a flawed rental he provides him with less than the value of a peruta, this poses a difficulty. Is there anyone who said that renting from a gentile for less than the value of a peruta is not valid? Didn't Rabbi Yitzḥak, son of Rabbi Ya'akov bar Giyorei, send in the name of Rabbi Yoḥanan: You should know that one may rent from a gentile even for less than the value of a peruta? ואמר רבי חייא בר אבא אמר רבי יוחנן בן נח נהרג על פחות משוה פרוטה ולא ניתן להשבון And Rabbi Ḥiyya bar Abba said that Rabbi Yoḥanan said: A Noahide, i.e., a gentile who stole is executed for his crime, according to the laws applying to Noahides, even if he stole less than the value of a peruta. A Noahide is particular about his property and unwilling to waive his rights to it, even if it is of minimal value; therefore, the prohibition against stealing applies to items of any value whatsoever. And in the case of Noahides, the stolen item is not returnable, as the possibility of rectification by returning a stolen object was granted only to Jews. The principle that less than the value of a peruta is not considered money applies to Jews alone. With regard to gentiles, it has monetary value, and therefore one may rent from a gentile with this amount. אלא בריאה במוהרקי ואבורגני רעועה בלא מוהרקי ואבורגני הניחא למאן דאמר שכירות בריאה בעינן Rather, the distinction between a full-fledged rental and a flawed rental should be explained as follows: A full-fledged rental refers to one that is confirmed by legal documents [moharkei] and guaranteed by officials [aburganei]; and a flawed rental means one that is not confirmed by legal documents and guaranteed by officials, an agreement that is unenforceable in court. Based on this explanation, the Gemara reiterates what was stated earlier with regard to the gentile's concern about renting: This works out well according to the one who said that we require a full-fledged rental, as it is clear why the gentile would refuse to rent out his property. 
אלא למאן דאמר שכירות רעועה בעינן מאי איכא למימר אפילו הכי חשיש גוי לכשפים ולא מוגר But according to the one who said that we require only a flawed rental, what is there to say in this regard? Why shouldn't the gentile want to rent out his residence? The Gemara answers: Even so, the gentile is concerned about witchcraft, i.e., that the procedure is used to cast a spell on him, and therefore he does not rent out his residence. גופא חצירו של גוי הרי הוא כדיר של בהמה ומותר להכניס ולהוציא מן חצר לבתים ומן בתים לחצר The Gemara examines the ruling in the Tosefta cited in the previous discussion. Returning to the matter itself: The courtyard of a gentile is like the pen of an animal, and it is permitted to carry in and carry out from the courtyard to the houses and from the houses to the courtyard, as the halakhot of eiruvin do not apply to the residences of gentiles. ואם יש שם ישראל אחד אוסר דברי רבי מאיר But if there is one Jew living there in the same courtyard as the gentile, the gentile renders it prohibited for the Jew to carry from his house to the courtyard or vice versa. The Jew may carry there only if he rents the gentile's property for the duration of Shabbat. This is the statement of Rabbi Meir. רבי אליעזר בן יעקב אומר לעולם אינו אוסר עד שיהו שני ישראלים אוסרים זה על זה Rabbi Eliezer ben Ya'akov says: Actually, the gentile does not render it prohibited for the Jew to carry unless there are two Jews living in the same courtyard who themselves would prohibit one another from carrying if there were no eiruv, and the presence of the gentile renders the eiruv ineffective. אמר מר חצירו של גוי הרי הוא כדיר של בהמה והא אנן תנן הדר עם הנכרי בחצר הרי זה אוסר עליו The Gemara proceeds to analyze the Tosefta: The Master said above: The courtyard of a gentile is like the pen of an animal, which implies that the residence of a gentile is not considered a significant residence. But didn't we learn otherwise in the mishna: One who resides with a gentile in the same courtyard this person prohibits him from carrying? This implies that a gentile's residence is in fact of significance. לא קשיא הא דאיתיה הא דליתיה The Gemara answers: That is not difficult. This halakha in the mishna is referring to a situation where the gentile is present, and therefore carrying is prohibited, whereas that halakha in the Tosefta refers to a situation where he is not present, and therefore carrying is permitted. ומאי קסבר אי קסבר דירה בלא בעלים שמה דירה אפילו גוי נמי ניתסר ואי קסבר דירה בלא בעלים לא שמה דירה אפילו ישראל נמי לא ניתסר The Gemara poses a question: What does Rabbi Meir hold? If he holds that a residence without its owners is still considered a residence, and it is prohibited to carry in the courtyard even when the owner is away, then even a gentile in absentia should likewise render it prohibited for carrying. And if he holds that a residence without its owners is not considered a residence, then even a Jew who is away should also not render it prohibited for carrying. לעולם קסבר דירה בלא בעלים לא שמה דירה וישראל דכי איתיה אסר כי ליתיה גזרו ביה רבנן The Gemara answers: Actually, he holds that a residence without its owners is not considered a residence, but nevertheless, he draws a distinction between a Jew and a gentile. In the case of a Jew, who renders it prohibited to carry for those who dwell in the same courtyard when he is present in his residence, the Sages decreed with regard to him that even when he is not present, his residence renders it prohibited for them to carry as though he were present. 
גוי דכי איתיה גזירה שמא ילמד ממעשיו כי איתיה אסר כי ליתיה לא אסר However, with regard to a gentile, who even when he is present does not fundamentally render it prohibited to carry, but only due to a rabbinic decree that was issued lest the Jew learn from the gentile's ways, no further decree was necessary. Thus, when he is present, the gentile renders it prohibited to carry; but when he is not present, he does not render it prohibited to carry. וכי ליתיה לא אסר והתנן המניח את ביתו והלך לו לשבות בעיר אחרת אחד נכרי ואחד ישראל אוסר דברי רבי מאיר The Gemara asks: And when the gentile is not present, does he really not render it prohibited for carrying? Didn't we learn elsewhere in a mishna: With regard to one who left his house without establishing an eiruv and went to spend Shabbat in a different town, whether he was a gentile or a Jew, he renders it prohibited for the other residents of his courtyard to carry objects from their houses to the courtyard and vice versa. This is the statement of Rabbi Meir. This indicates that according to Rabbi Meir, a gentile renders it prohibited to carry in the courtyard even if he is not present. התם דאתי ביומיה The Gemara answers: There, it is referring to a situation where the person who left his house without establishing an eiruv intends to return on that same day, on Shabbat. Since upon his return he will render it prohibited for others to carry in the courtyard, the decree is applied even before he returns home. However, if he left his house intending to return after the conclusion of Shabbat, he does not render it prohibited to carry, in absentia. אמר רב יהודה אמר שמואל הלכה כרבי אליעזר בן יעקב ורב הונא אמר מנהג כרבי אליעזר בן יעקב ורבי יוחנן אמר נהגו העם כרבי אליעזר בן יעקב Rav Yehuda said that Shmuel said: The halakha in this dispute is in accordance with the opinion of Rabbi Eliezer ben Ya'akov. And Rav Huna said: This is not an established halakha to be issued publicly; rather, the custom is in accordance with the opinion of Rabbi Eliezer ben Ya'akov, i.e., a Sage would rule according to his opinion for those who come to ask. And Rabbi Yoḥanan said: The people are accustomed to conduct themselves in accordance with the opinion of Rabbi Eliezer ben Ya'akov. Accordingly, a Sage would not issue such a ruling even to those who inquire, but if someone acts leniently in accordance with his opinion, he would not object. אמר ליה אביי לרב יוסף קיימא לן משנת רבי אליעזר בן יעקב קב ונקי ואמר רב יהודה אמר שמואל הלכה כרבי אליעזר בן יעקב Abaye said to Rav Yosef, his teacher: We maintain that the teaching of Rabbi Eliezer ben Ya'akov measures a kav, but is clean, meaning that it is small in quantity but clear and complete, and that the halakha is in accordance with his opinion in all instances. Moreover, with regard to our issue, Rav Yehuda said that Shmuel said: The halakha is in accordance with the opinion of Rabbi Eliezer ben Ya'akov, and therefore there is no doubt about the matter. מהו לאורויי במקום רבו However, what is the halakha with regard to whether a disciple may issue a ruling according to the opinion of Rabbi Eliezer ben Ya'akov in his teacher's place of jurisdiction, i.e., in a place where he is the recognized authority? Although it is usually prohibited to do so, perhaps such an evident and well-known principle such as this does not fall into the category of rulings that a disciple may not issue in his teacher's territory. 
אמר ליה אפילו ביעתא בכותחא בעו מיניה מרב חסדא כל שני דרב הונא ולא אורי

Rav Yosef said to Abaye: Even when Rav Ḥisda was asked about the permissibility of cooking an egg in kutaḥ, a dairy dish, throughout the years of Rav Huna's life, he refused to issue a ruling. Rav Ḥisda was a disciple of Rav Huna, and a disciple may not issue a ruling in his teacher's place of jurisdiction about even the simplest of matters.

אמר ליה רבי יעקב בר אבא לאביי כגון מגלת תענית דכתיבא ומנחא מהו לאורויי באתריה דרביה אמר ליה הכי אמר רב יוסף אפילו ביעתא בכותחא בעו מיניה מרב חסדא כל שני דרב הונא ולא אורי

Rabbi Ya'akov bar Abba said to Abaye: With regard to matters such as those detailed in Megillat Ta'anit, which is written and laid on the shelf for all to access and offers a list of the days on which fasting is prohibited, what is the halakha concerning whether or not a disciple may rule about these matters in his teacher's place of jurisdiction? Abaye said to him: Rav Yosef said as follows: Even when Rav Ḥisda was asked about the permissibility of cooking an egg in kutaḥ throughout the years of Rav Huna's life, he refused to issue a ruling.

רב חסדא אורי בכפרי בשני דרב הונא

The Gemara relates that Rav Ḥisda nonetheless issued halakhic rulings in the town of Kafri during the years of Rav Huna's life, as he was not actually in his teacher's place.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,044
{"url":"https:\/\/www.vedantu.com\/question-answer\/area-of-the-triangle-with-vertices-aleft-class-11-maths-jee-main-62f4cfc6f5a5cd4cc89a48bb","text":"Questions & Answers\n\n# What is the area of the triangle with vertices $A\\left( z \\right)$, $B\\left( {iz} \\right)$, and $C\\left( {z + iz} \\right)$?A. $\\dfrac{1}{2}{\\left| {z + iz} \\right|^2}$B. 1C. $\\dfrac{1}{2}$D. $\\dfrac{1}{2}{\\left| z \\right|^2}$\n\nAnswer\nVerified\n10.2k+ views\nHint: The angle between vertex $A\\left( z \\right)$ and $B\\left( {iz} \\right)$ is ${90^ \\circ }$. So, the base of the triangle is $\\left| z \\right|$ and the height of the triangle is $\\left| {iz} \\right|$. By using the area formula of a triangle, we can calculate the area of the $\\Delta ABC$.\n\nFormula Used:\nThe angle between two complex numbers ${z_1} = {x_1} + i{y_1}$ and ${z_2} = {x_2} + i{y_2}$ is $\\theta = {\\tan ^{ - 1}}\\left( {\\dfrac{{\\dfrac{{{y_1}}}{{{x_1}}} - \\dfrac{{{y_2}}}{{{x_2}}}}}{{1 + \\dfrac{{{y_1}}}{{{x_1}}} \\cdot \\dfrac{{{y_2}}}{{{x_2}}}}}} \\right)$.\nThe modulus of complex number $z = x + iy$ is $\\left| z \\right| = \\sqrt {{x^2} + {y^2}}$\nArea of a triangle$= \\dfrac{1}{2} \\times {\\rm{base}} \\times {\\rm{height}}$\n\nComplete step by step solution:\nGiven that the vertices of the triangle are $A\\left( z \\right)$, $B\\left( {iz} \\right)$, and $C\\left( {z + iz} \\right)$.\n\nImage: Triangle ABC\nNow putting $z = x + iy$ in vertices\nThe vertices are $A\\left( {x + iy} \\right)$, $B\\left( {i\\left[ {x + iy} \\right]} \\right)$, and $C\\left( {x + iy + i\\left[ {x + iy} \\right]} \\right)$ or $A\\left( {x + iy} \\right)$, $B\\left( { - y + ix} \\right)$, and $C\\left( {x - y + iy + ix} \\right)$. $\\because i^{2}=-1$\nNow applying the formula $\\theta = {\\tan ^{ - 1}}\\left( {\\dfrac{{\\dfrac{{{y_1}}}{{{x_1}}} - \\dfrac{{{y_2}}}{{{x_2}}}}}{{1 + \\dfrac{{{y_1}}}{{{x_1}}} \\cdot \\dfrac{{{y_2}}}{{{x_2}}}}}} \\right)$ to calculate the angle between $A\\left( {x + iy} \\right)$ and $B\\left( { - y + ix} \\right)$.\nThe angle between $OA$ and $OB$ is\n$\\theta = {\\tan ^{ - 1}}\\left( {\\dfrac{{\\dfrac{y}{x} - \\left( { - \\dfrac{x}{y}} \\right)}}{{1 + \\dfrac{y}{x} \\cdot \\left( { - \\dfrac{x}{y}} \\right)}}} \\right)$\n$\\Rightarrow \\theta= {\\tan ^{ - 1}}\\left( {\\dfrac{{\\dfrac{y}{x} - \\left( { - \\dfrac{x}{y}} \\right)}}{{1 - 1}}} \\right)$\n$\\Rightarrow \\theta= {90^ \\circ }$ [ Since $\\tan {90^ \\circ }$ is undefined]\nSo, the base and height of the triangle are $\\left| {OA} \\right|$ and $\\left| {OB} \\right|$.\nApply the modulus formula to calculate $\\left| {OA} \\right|$.\n$\\left| {OA} \\right| = \\left| z \\right| = \\sqrt {{x^2} + {y^2}}$\nApply the modulus formula to calculate $\\left| {OB} \\right|$.\n$\\left| {OB} \\right| = \\left| {iz} \\right| = \\sqrt {{{\\left( { - y} \\right)}^2} + {x^2}} = \\sqrt {{x^2} + {y^2}} = \\left| z \\right|$\nApply the area formula to calculate the area of the triangle\nThe area of the triangle is\n$= \\dfrac{1}{2} \\times \\left| {OA} \\right| \\times \\left| {OB} \\right|$\n$= \\dfrac{1}{2} \\times \\left| z \\right| \\times \\left| z \\right|$\n$= \\dfrac{1}{2}{\\left| z \\right|^2}$\n\nOption \u2018D\u2019 is correct\n\nNote: If the angle between two complex numbers is $90^{\\circ}$, then these two complex numbers are the legs of a triangle. 
So half of the product of the magnitude of the complex numbers is the area of the triangle that is made by these two complex numbers.","date":"2022-09-24 16:54:22","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9453057050704956, \"perplexity\": 128.01159302742565}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-40\/segments\/1664030331677.90\/warc\/CC-MAIN-20220924151538-20220924181538-00678.warc.gz\"}"}
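A quick numeric sanity check of the result (an editorial addition in Python, not part of the original solution): for any complex $z$, the triangle with vertices $z$, $iz$, and $z + iz$ has area $\dfrac{1}{2}{\left| z \right|^2}$.

    # Area of a triangle from its three vertices via the cross product of
    # two edge vectors (the shoelace formula for a single triangle).
    def triangle_area(a: complex, b: complex, c: complex) -> float:
        ab, ac = b - a, c - a
        return abs(ab.real * ac.imag - ab.imag * ac.real) / 2

    z = 3 + 4j  # an arbitrary test value with abs(z)**2 == 25
    print(triangle_area(z, 1j * z, z + 1j * z))  # prints 12.5
    print(abs(z) ** 2 / 2)                       # also prints 12.5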
#include <memory>
#include <string>

class Nodect;          // assumed to be defined elsewhere
class ListDataSource;  // assumed to be defined elsewhere

// Abstract list-adapter interface; the members are pure virtual here so the
// snippet stands alone (an assumption about the intended design).
class AdapterList {
public:
    virtual ~AdapterList() = default;
    virtual int getSize() = 0;
    virtual const std::string& getIdentifier(int index) = 0;
    virtual void setElement(int position, const Nodect& row) = 0;
    virtual std::unique_ptr<ListDataSource> getData() = 0;
};
{ "redpajama_set_name": "RedPajamaGithub" }
7,798
<?xml version='1.0' encoding='utf-8' ?> <!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [ <!ENTITY % BOOK_ENTITIES SYSTEM "cloudstack.ent"> %BOOK_ENTITIES; ]> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <section id="api-throttling"> <title>Limiting the Rate of API Requests</title> <para>You can limit the rate at which API requests can be placed for each account. This is useful to avoid malicious attacks on the Management Server, prevent performance degradation, and provide fairness to all accounts.</para> <para>If the number of API calls exceeds the threshold, an error message is returned for any additional API calls. The caller will have to retry these API calls at another time.</para> <section id="api-throttling-configure"> <title>Configuring the API Request Rate</title> <para>To control the API request rate, use the following global configuration settings:</para> <itemizedlist> <listitem><para>api.throttling.enabled - Enable/Disable API throttling. By default, this setting is false, so API throttling is not enabled.</para></listitem> <listitem><para>api.throttling.interval (in seconds) - Time interval during which the number of API requests is to be counted. When the interval has passed, the API count is reset to 0.</para></listitem> <listitem><para>api.throttling.max - Maximum number of APIs that can be placed within the api.throttling.interval period.</para></listitem> <listitem><para>api.throttling.cachesize - Cache size for storing API counters. Use a value higher than the total number of accounts managed by the cloud. One cache entry is needed for each account, to store the running API total for that account. </para></listitem> </itemizedlist> </section> <section id="api-throttling-limitations"> <title>Limitations on API Throttling</title> <para>The following limitations exist in the current implementation of this feature.</para> <note><para>Even with these limitations, &PRODUCT; is still able to effectively use API throttling to avoid malicious attacks causing denial of service.</para></note> <para/> <itemizedlist> <listitem><para>In a deployment with multiple Management Servers, the cache is not synchronized across them. In this case, &PRODUCT; might not be able to ensure that only the exact desired number of API requests are allowed. In the worst case, the number of API calls that might be allowed is (number of Management Servers) * (api.throttling.max). </para></listitem> <listitem><para>The API commands resetApiLimit and getApiLimit are limited to the Management Server where the API is invoked. </para></listitem> </itemizedlist> </section> </section>
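The four settings above interact in a simple fixed-window scheme: each account gets a counter that resets once the interval elapses, and calls beyond the maximum are rejected until the next window. The following Python sketch illustrates those semantics only; it is not CloudStack's implementation, and the class name, method names, and numeric defaults (other than throttling being disabled by default, which the text states) are invented for the example.

    import time

    class ApiThrottle:
        """Illustrative fixed-window, per-account API counter."""

        def __init__(self, enabled=False, interval=1, max_calls=25, cachesize=50000):
            # Mirrors api.throttling.enabled / .interval / .max / .cachesize.
            self.enabled = enabled
            self.interval = interval    # seconds per counting window
            self.max_calls = max_calls  # allowed calls per window
            self.cachesize = cachesize  # one cache entry per account
            self.counters = {}          # account -> (window_start, count)

        def allow(self, account):
            if not self.enabled:
                return True
            now = time.monotonic()
            start, count = self.counters.get(account, (now, 0))
            if now - start >= self.interval:
                start, count = now, 0   # interval has passed: reset the count
            if count >= self.max_calls:
                return False            # caller must retry at another time
            if account not in self.counters and len(self.counters) >= self.cachesize:
                self.counters.pop(next(iter(self.counters)))  # crude eviction
            self.counters[account] = (start, count + 1)
            return True

Because each Management Server would hold its own independent counter cache, the worst case noted in the limitations — (number of Management Servers) * (api.throttling.max) calls being allowed — falls out of this picture directly.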
{ "redpajama_set_name": "RedPajamaGithub" }
789
Meet Kelley and Nicole Fries of Calabasas Film Festival
By: VoyageLA Staff

Today we'd like to introduce you to Kelley Fries and Nicole Fries.

So, before we jump into specific questions about the business, why don't you give us some details about you and your story.

One of our favorite memories growing up was going to Blockbuster with our parents every Saturday morning, eyeing all the new releases on display and deciding on that night's family movie. We would constantly fight, as one of us liked rom-coms and the other liked action. But now we can honestly say some of our favorite movies were from each other's picks. Those debates sparked our love of movies and a passion to create a space for other film-lovers to enjoy and celebrate the movies!

We started The Calabasas Film Festival in 2014. With the rise of the international awareness of our city, we felt the community was ready for a film and arts festival to celebrate its unique growth. We firmly believe in the importance of keeping film art alive and providing actors, directors, producers, and writers a platform to share and celebrate their stories. With the help of the City of Calabasas, support from the community and all of our other generous sponsors, we have, in just five short years, become a signature event for the entire region and have seen explosive growth year over year.

We're always bombarded by how great it is to pursue your passion, etc. – but we've spoken with enough people to know that it's not always easy. Overall, would you say things have been easy for you?

Our inaugural year went off without a problem, but our second year was quite the opposite. Everything that could go wrong… did! A pipe burst, leaving us without toilets in our main theater venue, and a projector malfunctioned, forcing the audience to sit through 5 minutes of darkness as the technician rushed to fix the problem. We also had half our volunteers not show up; it was certainly a baptism of fire. Thankfully all these problems were lessons learned and prepared us for the successful years to follow.

So let's switch gears a bit and go into the Calabasas Film Festival story. Tell us more about the business.

The Calabasas Film Festival is a non-profit organization established to support the awareness and exposure of the cinematic arts in our city and surrounding communities. Each year we continue to push the limits of excellence for all who attend, by showcasing a mix of award-winning features, documentaries, and shorts. In addition to our screenings, we specialize in bringing together the artists and the community to talk about the films, which we have found are best enjoyed along with exquisite wines and cuisine. CFF is also dedicated to honoring the future of our industry by empowering young filmmakers with a platform for outreach through student screenings and events. We gift the winners over $20,000 in prizes, including film editing, equipment rentals, animation software, various educational products, scholarships and youth film camp training. We are proud to be founded and run by women. Not every day do you get to work alongside your best friend and sister and enjoy every moment.

Has luck played a meaningful role in your life and business?

We've seen both the upsides and downsides of luck. Luck was thrown our way the first year with the first movie that we contracted to show.
A retiring distribution executive, whom we met by chance at the racetrack, gifted us the US Premiere of the highly successful movie THE EQUALIZER, helping us fill several theaters. In year two, as you've read, we were plagued with bouts of bad luck, but we survived. Since then, we've seen both good times and bad, but have found that the harder you work, the luckier you tend to be.

Website: www.calabasasfilmfestival.com
Instagram: @Calabasasfilmfest
Facebook: @CalabasasFilmFestival

source: http://voyagela.com/interview/meet-kelley-fries-nicole-fries-calabasas-film-festival/
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,891
module.exports = {
  tableName: 'driverTableCustomPK',
  identity: 'drivercustom',
  connection: 'associations',
  primaryKey: 'number',
  fetchRecordsOnUpdate: true,
  fetchRecordsOnDestroy: false,
  fetchRecordsOnCreate: true,
  fetchRecordsOnCreateEach: true,
  attributes: {
    number: {
      type: 'number',
      required: true,
      autoMigrations: { columnType: 'integer', unique: true }
    },
    name: {
      type: 'string',
      autoMigrations: { columnType: 'varchar' }
    },
    taxis: {
      collection: 'taxicustom',
      via: 'drivers',
      dominant: true
    },
    // Timestamps
    updatedAt: {
      type: 'number',
      autoUpdatedAt: true,
      autoMigrations: { columnType: 'bigint' }
    },
    createdAt: {
      type: 'number',
      autoCreatedAt: true,
      autoMigrations: { columnType: 'bigint' }
    }
  }
};
{ "redpajama_set_name": "RedPajamaGithub" }
7,851
using Android.App;
using Android.Content;
using Android.OS;

namespace AndroidClientChromeCustomTabs
{
    [Activity(Label = "CallbackInterceptorActivity")]
    [IntentFilter(
        new[] { Intent.ActionView },
        Categories = new[] { Intent.CategoryDefault, Intent.CategoryBrowsable },
        DataScheme = "io.identitymodel.native",
        DataHost = "callback")]
    public class CallbackInterceptorActivity : Activity
    {
        protected override void OnCreate(Bundle savedInstanceState)
        {
            base.OnCreate(savedInstanceState);
            Finish();

            // get URI, send with mediator
            AndroidClientChromeCustomTabsApplication.Mediator.Send(Intent.DataString);

            StartActivity(typeof(MainActivity));
        }
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
5,190
<div class="inside_page home"> <section class="title"><h1>{{ home.title }}</h1></section> <div class="clear"></div> </div>
{ "redpajama_set_name": "RedPajamaGithub" }
3,912
Q: Slightly off-centered buttons

I need these buttons to be horizontally centered within their container... as you can see, they are floating to the left. That is because I have a float: left; attribute on them. However, when I remove the float: left; attribute and apply text-align: center; to the container, this is what happens... Almost centered, but not quite. What's the deal? Thanks! :)

The container's CSS:

#navbar {
    background: #303030;
    width: 100%;
    box-sizing: border-box;
    text-align: center;
}

Each button's CSS:

a.button {
    width: 16.3%;
    background: #4d4d4d;
    height: 50px;
    display: inline-block;
    text-decoration: none;
    color: #fff;
    text-align: center;
    max-width: 250px;
    padding-top: 12px;
    box-sizing: border-box;
    font-size: 145%;
    transition-property: background, z-index, font-size, color;
    transition-duration: 0.35s;
}

The relevant HTML:

<div id="navbar">
    <a href="https://www.answers.legal" class="button"><i class="fa fa-home"></i> HOME</a>
    <a href="https://www.answers.legal/questions" class="button"><i class="fa fa-question"></i> Q&A</a>
    <a href="https://www.answers.legal/forums" class="button"><i class="fa fa-comment"></i> FORUMS</a>
    <a href="https://www.answers.legal/contact" class="button"><i class="fa fa-phone"></i> CONTACT</a>
    <a href="https://www.answers.legal/support" class="button"><i class="fa fa-life-ring"></i> SUPPORT</a>
    <a href="https://www.answers.legal/about" class="button"><i class="fa fa-info-circle"></i> ABOUT</a>
</div>

A: The misalignment is probably due to the space characters between the links. This is a common problem when using display: inline-block;. You can remove it with a couple of methods.

Method 1: Set a negative letter-spacing on the container and reset it on the buttons. The exact value depends on font-family and font-size, so try a couple of different values.

#navbar {
    letter-spacing: -4px;
}
a.button {
    letter-spacing: normal;
}

Method 2: Remove any space/linebreaks between the links. This will of course make the code pretty unreadable.

<a href="https://www.answers.legal" class="button"><i class="fa fa-home"></i> HOME</a><a href="https://www.answers.legal/questions" class="button"><i class="fa fa-question"></i> Q&A</a><a href="https://www.answers.legal/forums" class="button"><i class="fa fa-comment"></i> FORUMS</a><a href="https://www.answers.legal/contact" class="button"><i class="fa fa-phone"></i> CONTACT</a><a href="https://www.answers.legal/support" class="button"><i class="fa fa-life-ring"></i> SUPPORT</a><a href="https://www.answers.legal/about" class="button"><i class="fa fa-info-circle"></i> ABOUT</a>

Method 3: Omit the closing </a> for all links but the last. Browsers recover by implicitly closing an <a> when the next one starts, although strictly speaking the </a> end tag is not optional in the HTML spec.

<a href="https://www.answers.legal" class="button"><i class="fa fa-home"></i> HOME
<a href="https://www.answers.legal/questions" class="button"><i class="fa fa-question"></i> Q&A
<a href="https://www.answers.legal/forums" class="button"><i class="fa fa-comment"></i> FORUMS
<a href="https://www.answers.legal/contact" class="button"><i class="fa fa-phone"></i> CONTACT
<a href="https://www.answers.legal/support" class="button"><i class="fa fa-life-ring"></i> SUPPORT
<a href="https://www.answers.legal/about" class="button"><i class="fa fa-info-circle"></i> ABOUT</a>
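Method 4: Set font-size: 0; on the container so the whitespace between the inline-block links collapses, and restore an explicit size on the buttons. Note that the original font-size: 145% would now compute against the container's 0, so an absolute unit is needed:

#navbar {
    font-size: 0;
}
a.button {
    font-size: 23px; /* was 145% of the 16px default; a percentage would resolve to 0 here */
}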
{ "redpajama_set_name": "RedPajamaStackExchange" }
733
from sympy.core import Expr, S, C, Mul, sympify
from sympy.polys import quo, roots
from sympy.simplify import powsimp


class Product(Expr):
    """Represents unevaluated product. """

    def __new__(cls, term, *symbols, **assumptions):
        term = sympify(term)

        if term.is_Number:
            if term is S.NaN:
                return S.NaN
            elif term is S.Infinity:
                return S.NaN
            elif term is S.NegativeInfinity:
                return S.NaN
            elif term is S.Zero:
                return S.Zero
            elif term is S.One:
                return S.One

        if len(symbols) == 1:
            symbol = symbols[0]

            if isinstance(symbol, C.Equality):
                k = symbol.lhs
                a = symbol.rhs.start
                n = symbol.rhs.end
            elif isinstance(symbol, (tuple, list)):
                k, a, n = symbol
            else:
                raise ValueError("Invalid arguments")

            k, a, n = map(sympify, (k, a, n))

            if isinstance(a, C.Number) and isinstance(n, C.Number):
                return Mul(*[term.subs(k, i) for i in xrange(int(a), int(n)+1)])
        else:
            raise NotImplementedError

        obj = Expr.__new__(cls, **assumptions)
        obj._args = (term, k, a, n)

        return obj

    @property
    def term(self):
        return self._args[0]

    @property
    def index(self):
        return self._args[1]

    @property
    def lower(self):
        return self._args[2]

    @property
    def upper(self):
        return self._args[3]

    def doit(self, **hints):
        term = self.term
        lower = self.lower
        upper = self.upper

        if hints.get('deep', True):
            term = term.doit(**hints)
            lower = lower.doit(**hints)
            upper = upper.doit(**hints)

        prod = self._eval_product(lower, upper, term)

        if prod is not None:
            return powsimp(prod)
        else:
            return self

    def _eval_product(self, a, n, term):
        from sympy import sum, Sum

        k = self.index

        if not term.has(k):
            return term**(n-a+1)
        elif term.is_polynomial(k):
            poly = term.as_poly(k)

            A = B = Q = S.One
            C_ = poly.LC()

            all_roots = roots(poly, multiple=True)

            for r in all_roots:
                A *= C.RisingFactorial(a-r, n-a+1)
                Q *= n - r

            if len(all_roots) < poly.degree():
                B = Product(quo(poly, Q.as_poly(k)), (k, a, n))

            return poly.LC()**(n-a+1) * A * B
        elif term.is_Add:
            p, q = term.as_numer_denom()

            p = self._eval_product(a, n, p)
            q = self._eval_product(a, n, q)

            return p / q
        elif term.is_Mul:
            exclude, include = [], []

            for t in term.args:
                p = self._eval_product(a, n, t)

                if p is not None:
                    exclude.append(p)
                else:
                    include.append(t)

            if not exclude:
                return None
            else:
                A, B = Mul(*exclude), Mul(*include)
                return A * Product(B, (k, a, n))
        elif term.is_Pow:
            if not term.base.has(k):
                s = sum(term.exp, (k, a, n))

                if not isinstance(s, Sum):
                    return term.base**s
            elif not term.exp.has(k):
                p = self._eval_product(a, n, term.base)

                if p is not None:
                    return p**term.exp


def product(*args, **kwargs):
    prod = Product(*args, **kwargs)

    if isinstance(prod, Product):
        return prod.doit(deep=False)
    else:
        return prod
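A short usage sketch for this module (assuming the Python 2-era SymPy this file comes from, note the xrange call above; the expected results follow from the branches of _eval_product):

from sympy import Symbol

k = Symbol('k')
n = Symbol('n', integer=True)

# Numeric bounds are expanded eagerly in Product.__new__:
print(product(k**2, (k, 1, 4)))  # -> 576, i.e. 1 * 4 * 9 * 16

# A symbolic bound exercises the polynomial/RisingFactorial branch:
print(product(k, (k, 1, n)))     # -> RisingFactorial(1, n), i.e. n!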
{ "redpajama_set_name": "RedPajamaGithub" }
8,805
\section{Introduction}

A key achievement of statistical mechanics in the last half of the 20th century is the description of phase transitions and critical phenomena, the universal behavior associated with second-order phase transitions, which led to the development of a universal theory of critical phase transitions.\cite{kadanoff1967,griffiths1970} When a system is brought to a critical phase transition, many of its properties exhibit singular behavior.\cite{nishimori2011} In materials with magnetic phase transitions, the order parameter is the (sublattice) magnetization, and for antiferromagnets this spontaneous magnetization disappears at the N{\'e}el temperature, $T_{\rm N}$. While spins do not spontaneously (anti)align on the macroscopic scale at temperatures above $T_{\rm N}$, the fluctuating spins remain correlated over a length scale $\xi$ (the correlation length), which grows as $T_{\rm N}$ is approached.

The degree of singularity or divergence of physical quantities near the critical point is described by critical exponents.\cite{wilson1983} The system can be described by the {\em reduced temperature}, $t = (T - T_{\rm c})/T_{\rm c}$, with $T_{\rm c}$ the critical temperature, \textit{i.e.}, $T_{\rm N}$ for antiferromagnets, and the order parameter follows power laws in $t$. For example, the antiferromagnetic order parameter, $M_{\rm AF}$, obeys $M_{\rm AF} \propto (-t)^\beta$ in the close vicinity of $T_{\rm N}$, where $\beta$ is the {\em critical exponent} related to the magnetization. Above the critical temperature, the correlation length of the magnetic order follows a similar behavior, $\xi \propto t^{-\nu}$, where $\nu$ is the critical exponent of the correlation length. Similarly, the correlation length of the disordered domains in the ordered phase also follows a power law. Both correlation lengths diverge at the phase transition, indicating that both the ordered and disordered phases percolate at $T_{\rm N}$, which is a key attribute of critical behavior.

Neutron scattering is well suited to study critical magnetic behavior, because it provides direct access to the values of the order parameter and the correlation length. For example, antiferromagnetic order leads to additional Bragg reflections in a neutron diffractogram with intensity $I\propto M_{\rm{AF}}^2=(-t)^{2\beta}$, and the magnetic correlation length can be deduced from the width of the reflection. Recent examples of critical magnetic scattering experiments include studies of the magnetic structure of \ce{MnBi2Te4}\cite{yan2019} and of the magnetic phase transition of an artificial square ice system.\cite{sendetskyi2019}

While the theory of critical phenomena is, strictly speaking, only valid for infinite systems, micrometer-sized systems are sufficiently large to behave like an infinite system for all practical purposes. However, it is unclear to what degree nanoscale systems can be described by this theory, as the finite size of nanoparticles naturally prevents the correlation length from diverging.
Nanoparticles have attracted increasing attention in the last decade for a large variety of both medical and industrial applications.\cite{brigger2012,stark2015} These applications include, but are not limited to, batteries,\cite{sun2012,sun2012b,zhang2014,chen2019,wang2019} capacitors,\cite{zheng2014,bhattacharya2018} catalysis\cite{liao2014,zhao2015,jiang2016,kim2017,velegraki2018,park2018,zheng2018,park2019} and gas sensing.\cite{wang2018}

In this work, we investigate the critical behavior in nanoparticles and study how the description of critical phenomena must be adjusted for phase transitions in the nanoscale regime. We use CoO as our model system, as CoO is a structurally simple Ising system and a relatively well-studied material in nanoparticle form.\cite{flipse1999,zhang2002,ghosh2005,ye2006,he2015,sun2012,sun2012b,zhang2014,chen2019,wang2019,zheng2014,bhattacharya2018,liao2014,zhao2015,jiang2016,kim2017,velegraki2018,park2018,zheng2018,park2019,wang2018} Moreover, despite the frustration inherent to its face-centered cubic structure, bulk CoO has a second-order antiferromagnetic phase transition near room temperature (with a critical temperature $T_{\rm N} \approx 289$~K).\cite{rechtin1971b} As $T_{\rm N}$ is close to room temperature, nanoparticles of CoO can be studied in the temperature region near $T_{\rm N}$ without the samples being destroyed by heating. In the antiferromagnetic phase, the magnetic structure is given by alternating planes of ferromagnetically aligned spins that stack antiferromagnetically along the (1 1 1) direction, similar to MnO, FeO and NiO.\cite{roth1958} Moreover, critical magnetic neutron scattering experiments on bulk single crystals of CoO were performed about half a century ago.\cite{mcreynolds1959,rechtin1971}

By means of neutron scattering, we show that, in contrast to micrometer-sized CoO, the theory of critical phenomena breaks down for CoO nanoparticles. Furthermore, we qualitatively support our experimental observation of this nanocritical behavior by Monte Carlo simulations. Our findings provide an additional branch to the theory of critical phenomena that is important to the understanding of magnetic phase transitions in nanosized or confined systems.

\section{Experimental}

The CoO nanoparticles were prepared by a method similar to one previously reported.\cite{frandsen2004} Here, \ce{(CH3COO)2Co} $\cdot$ \ce{4H2O} was suspended in ethanol, baked at 100 $^{\circ}$C and subsequently annealed at a temperature between 325 $^{\circ}$C and 425 $^{\circ}$C under constant argon flow to remove acetic acid and water. By increasing the temperature and annealing time, the particles were allowed to grow together, resulting in larger particles. Three CoO nanoparticle samples were obtained with nominal diameters of 20 nm, 30 nm, and 40 nm. The samples were characterized by X-ray diffraction on a Rigaku rotating anode using Cu-K$\alpha$ radiation with a wavelength $\lambda = 1.54$~\AA. As shown in the Supplemental Material (SM),\cite{supplementary} the crystalline sizes of the nanoparticles were determined to be 21.3(8) nm, 29.3(4) nm and 41.7(5) nm, respectively. For comparison, a sample of micrometer-sized CoO particles was commercially purchased.

Neutron diffraction was carried out at the Paul Scherrer Institute (CH),\cite{PSI} using the \mbox{RITA-2} cold-neutron spectrometer\cite{bahl2004} in two-axis mode with a wavelength of 4.7~\AA, taking advantage of the large area of the position-sensitive detector.
Additional diffraction data were taken at the cold-neutron powder diffractometer DMC at the Paul Scherrer Institute, using a wavelength of 4.2~\AA{} and the full detector bank covering $80^\circ$ of scattering angles. We fitted the magnetic peaks using a Voigt function, where the Lorentzian half width at half maximum ($\Gamma$) is the broadening caused by the finite size of correlated domains, and the Gaussian width is the resolution of the instrument, determined by fitting the data obtained on the micrometer-sized particles at 10 K. We also allowed for a sloping background in the fitting (a code sketch of this fitting step is given below). Unfortunately, the 20 nm data appeared to be of insufficient quality to perform proper fitting and were therefore excluded from further analysis in this work (see Fig. S2 in the SM for an example of the insufficient data quality\cite{supplementary}). We also note that the work of Ghosh \textit{et al.} demonstrated that very small CoO nanoparticles ($<$ 16 nm) exhibit different magnetic behavior (ferromagnetic interactions) than their larger counterparts, putting a lower limit on the size of antiferromagnetic CoO nanoparticles.\cite{ghosh2005}

\section{Results and discussion}

In our neutron experiment, we observed the critical magnetic scattering around the (1/2 1/2 1/2) peak. Typical results for the development of this critical scattering in the 30 nm sample at four temperatures above $T_{\rm N}$ are shown in Fig.~\ref{Fig:criticalmethod}. The 40 nm sample shows very similar behavior, as will become apparent below. It was not possible to measure the critical scattering at temperatures below $T_{\rm N}$, as the very strong signal from the magnetic Bragg peak overshadows the signal of the critical scattering at lower temperatures. This is in contrast to the work by Rechtin and Averbach, who could easily separate the magnetic Bragg-scattering component in their study on a single crystal with sharp collimation.\cite{rechtin1971} The fact that we use nanoparticle samples in our study significantly enhances the complexity of the data analysis. However, we were able to investigate the temperature dependence of the magnetic correlation length, and its critical exponent $\nu$, at the phase transition as approached from the high-temperature side.

\begin{figure}[b]
\includegraphics[width=0.45\textwidth]{figurer/Machteld/30nm_critical.pdf}
\caption{Development of critical scattering near the antiferromagnetic phase transition in 30 nm CoO particles, at \textbf{a)} 290~K, \textbf{b)} 295~K, \textbf{c)} 300~K and \textbf{d)} 303~K. The measured neutron diffraction intensities as a function of momentum transfer are depicted in blue (the error bars represent one standard deviation), the fit to the data is given by the continuous blue line (Voigt) and the red dashed line represents the fit to the long-range ordered signal measured at 10 K, as given in Fig. S3 in the SM,\cite{supplementary} but re-scaled to match the intensity of each peak.}
\label{Fig:criticalmethod}
\end{figure}

The magnetic correlation length, $\xi$, is inversely proportional to the width of the magnetic diffraction peak: $\xi = 1/\Gamma$, where $\Gamma$ is the half width at half maximum (HWHM). As shown in Fig.~\ref{Fig:criticalmethod}, the width of the critical scattering increases with temperature. This means that the size of the magnetic domains, measured as the magnetic correlation length, is largest at $T_{\rm N}$ and decreases with increasing temperature, in agreement with the theory of critical phenomena as established for infinitely large systems.
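A minimal sketch of the peak fitting and $\xi$ extraction described above (assuming SciPy's voigt_profile is available; function and variable names are ours, and the resolution width would in practice be taken from the 10 K bulk fit rather than the placeholder used here):

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

SIGMA_RES = 0.004  # Gaussian resolution width in 1/Angstrom (assumed value, from the 10 K bulk fit)

def peak(q, amp, q0, gamma, bg0, bg1):
    # Voigt line shape (Gaussian part fixed to the instrument resolution)
    # on a sloping background.
    return amp * voigt_profile(q - q0, SIGMA_RES, gamma) + bg0 + bg1 * q

def correlation_length(q, counts, err):
    # q, counts, err: measured momentum transfers, intensities and errors.
    p0 = [counts.max(), q[np.argmax(counts)], 0.01, counts.min(), 0.0]
    popt, _ = curve_fit(peak, q, counts, p0=p0, sigma=err, absolute_sigma=True)
    gamma = popt[2]        # Lorentzian HWHM from finite-size broadening
    return 1.0 / gamma     # xi = 1 / Gamma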
However, we observe that the critical scattering measured very close to $T_{\rm N}$ (see Fig.~\ref{Fig:criticalmethod}\textbf{a}) is slightly broader than the Bragg scattering measured at 10 K. This low-temperature data corresponds to the completely ordered magnetic structure, and the width of the peak is given by the resolution of the instrument and the finite size of the magnetic structure of the nanoparticles. Thus, as expected, the magnetic correlation length appears not to diverge at $T_{\rm N}$ for CoO nanoparticles, as it does for infinite systems.

In order to analyze the critical scattering data as a function of temperature, it is crucial to precisely determine the critical temperature for each sample, as $T_{\rm N}$ might depend on the particle size. For single-crystal or micrometer-sized samples, $T_{\rm N}$ is usually determined by following the sharp magnetic Bragg peak below $T_{\rm N}$, fitting the peak intensity in the approximate range $0.1 < -t < 0.01$, with $t = (T - T_{\rm N})/T_{\rm N}$, where we expect the peak intensity to scale as $\propto (-t)^{2\beta}$. However, for the nanoparticles this procedure proved unreliable due to the relatively higher background levels and the finite-size broadening of the Bragg peak, giving unacceptable uncertainties in $T_{\rm N}$ of the order of 3-4 degrees. Instead, we found that for nanoparticles a much better method was to determine $T_{\rm N}$ from the high-temperature data. Here, we expect a power-law behavior of the form $\xi \propto t^{-\nu}$, and $T_{\rm N}$ was defined as the value for which the high-temperature data best follows this equation, using a simple $\chi^2$ fit (a sketch of this scan is given below).

For the CoO bulk sample, \textit{i.e.}, the commercially bought micrometer-sized CoO, we performed this power-law analysis of the correlation length, resulting in $T_{\rm N} = 286.2(4)$ K and a critical exponent of $\nu = 1.6(4)$. As our obtained value of $\nu$ is higher than the 0.63 expected for a 3D Ising system,\cite{Guida1998} we argue that the CoO system appears to be more complex and cannot be described by such a simple model. In fact, previous work has shown that the critical exponent $\beta$ (see SM\cite{supplementary}) also does not follow the simple 3D Ising model, as it is severely affected by a tetragonal lattice contraction below $T_{\rm N}$.\cite{Rechtin1970} We note, however, that to the best of our knowledge, we are the first to report experimental values of $\nu$ for CoO, and further assessment of the origin of this variation in critical exponents is beyond the scope of this work. The value of $T_{\rm N}$ is in agreement with the value found by fitting the peak intensity as explained above. Note, however, that this deviates a bit from the expected value of $T_{\rm N} \approx 289$~K.\cite{rechtin1971b} The reason for this deviation is that the cryostats are calibrated for low temperatures and not for high temperatures, leading to a small offset in the apparent temperature near room temperature. In addition, the samples were not all measured in the same cryostat, so the offset might differ between samples. However, this does not affect relative temperature differences between measurements on the same sample.

For the 30 nm and 40 nm samples, $T_{\rm N}$ was determined by fitting the highest-temperature data to the same model as described for the bulk data, using the least-squares method.
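A sketch of this procedure (a brute-force $\chi^2$ scan over candidate N\'eel temperatures; names, starting values and the candidate grid are ours):

import numpy as np
from scipy.optimize import curve_fit

def power_law(t, c, nu):
    return c * t**(-nu)

def scan_tn(T, xi, xi_err, tn_candidates):
    # For each candidate T_N, fit xi ~ t^(-nu) to the high-temperature points
    # and keep the candidate with the smallest chi^2.
    best = (np.inf, None, None)
    for tn in tn_candidates:
        t = (T - tn) / tn
        mask = t > 0                       # only points above the candidate T_N
        popt, _ = curve_fit(power_law, t[mask], xi[mask],
                            p0=[10.0, 0.6], sigma=xi_err[mask])
        chi2 = np.sum(((xi[mask] - power_law(t[mask], *popt)) / xi_err[mask]) ** 2)
        if chi2 < best[0]:
            best = (chi2, tn, popt[1])
    return best                            # (chi2, T_N, nu)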
We argue that this method is accurate because, in the limit of the smallest domain sizes, the size of the particle itself does not matter, and all data should therefore lie on the same line (\textit{i.e.}, the black line shown in Fig.~\ref{Fig:correlationreducedt}). This method yielded $T_{\rm N} = 289.8(5)$ K and $T_{\rm N} = 290.2(4)$ K for the 30 nm and 40 nm samples, respectively. Additionally, these N\'eel temperatures were used to determine the critical exponent, $\beta$, of the magnetic order parameter, as shown in Fig. S4 in the SM.\cite{supplementary}

Fig.~\ref{Fig:correlationreducedt} shows the measured correlation length as a function of reduced temperature for the CoO nanoparticles in comparison to the bulk data. The bulk data clearly show a divergence of the correlation length near $T_{\rm N}$, as expected from the universal theory of critical phase transitions.\cite{kadanoff1967,griffiths1970} However, two different regions are apparent for the nanoparticle data: 1) the region above $t = 0.03$, where it follows the bulk data, and 2) a converging correlation length below reduced temperatures of $t = 0.01$, corresponding to temperatures between $T_{\rm N}$ and $\approx 292$~K. Thus, as expected, no divergence of the correlation length near $T_{\rm N}$ is observed for the nanoparticle samples. In fact, the value of the converged correlation length depends on the size of the nanoparticles: 55(2)~\AA{} and 103(2)~\AA{} for the 30 nm and 40 nm particles, respectively. These converged values correspond to roughly 2/3 of those of the long-range ordered state measured at 10 K (77(2)~\AA{} and 149(2)~\AA{} for the 30 nm and 40 nm particles, respectively).

\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{figurer/Machteld/critical.pdf}
\caption{Correlation length of short-range order for 30 and 40 nm CoO particles as a function of reduced temperature, $t = (T - T_{\rm N})/T_{\rm N}$. The black line corresponds to the power-law fit to the bulk CoO data. The solid blue and red lines correspond to the converged values of the correlation lengths close to $T_{\rm N}$ for the 30 and 40 nm particles, respectively. The dashed lines denote the correlation length of the corresponding long-range ordered state, measured at 10 K.}
\label{Fig:correlationreducedt}
\end{figure}

Note that the correlation length of the magnetic domains determined as $\xi=1/\Gamma$ is not the same entity as the diameter ($D$) of the ordered magnetic regions in the nanoparticles at low temperatures. The latter can be calculated using the Scherrer equation, $D=\pi K/\Gamma$, where $K$ is a dimensionless shape factor. Using Scherrer's value of $K=0.94$,\cite{Scherrer1918} we find $D=21.3(5)$ nm and $D=44.0(5)$ nm at 10 K for the 30 and 40 nm particles, respectively. These values indicate that there are magnetic dead layers on the surface of the 30 nm particles, as seen in other nanoparticles,\cite{Curiale2009} while the 40 nm particles are fully ordered.

We now focus on the correlation length of the magnetic domains near $T_{\rm N}$. It is logical that no infinitely large correlation lengths can be observed in finite systems; the size of the nanoparticles already provides an upper limit. However, for both sizes of nanoparticles, the converged value of the correlation length only corresponds to about 2/3 of that of the long-range ordered state as measured at 10 K, which is the longest correlation length observed in the particles. This means that no true long-range order exists in the nanoparticles near the phase transition.
We therefore conclude that magnetically ordered and disordered domains coexist over a region of a few degrees around $T_{\rm N}$, in what appears to be a semistable equilibrium.

To investigate the behavior of magnetic nanoparticles near $T_{\rm N}$ in more detail, we carried out classical Monte Carlo simulations using a simple nearest-neighbor Ising model on a cubic lattice with a lattice constant equal to the Co-Co distance in CoO, $a = 4.2615/\sqrt{2}$~\AA{}. To capture the physics of spherical, monodisperse nanoparticles, we used open boundary conditions and included only spins within a sphere of diameter $D$. We used the Metropolis-Hastings algorithm,\cite{metropolis1953,hastings1970} where spin flips that reduce the energy were always kept, and spin flips that increase the energy were kept with a probability of $\exp[{-\Delta E/T_s}]$, where $\Delta E$ is the change in energy and $T_s$ is the simulated temperature of the system. The phase transition was found at $T_s\approx 4.5$. The simulations were carried out on the Quantum Wolf Cluster at the Laboratory for Quantum Magnetism, EPFL, Switzerland. More details regarding the simulations are given in the SM.\cite{supplementary}

\begin{figure}[t]
\includegraphics[width=0.23\textwidth]{figurer/Machteld/fit_examples_T_131.pdf}
\includegraphics[width=0.23\textwidth]{figurer/Machteld/fit_examples_T_193.pdf}
\caption{Simulated scattering \textbf{a)} below ($t=-0.01$) and \textbf{b)} above ($t=0.01$) the magnetic phase transition in a magnetic nanoparticle with a diameter of 36 nm. The fit to the data is shown by the continuous blue line and the red dashed line represents the fit to the long-range ordered signal at base temperature, but re-scaled to match the intensity of each peak.}
\label{Fig:montecarlo_fits}
\end{figure}

\begin{figure}[b]
\includegraphics[width=0.45\textwidth]{figurer/Machteld/correlation_length.pdf}
\caption{Correlation length of short-range order for simulated magnetic nanoparticles of different sizes as a function of reduced temperature. The black line corresponds to the expected bulk behavior with $\nu=0.63$. The solid colored lines correspond to the converged values of the correlation lengths close to $T_{\rm N}$ for the nanoparticles of different sizes. The dashed lines denote the correlation length of the corresponding long-range ordered state, obtained at $T = 0$.}
\label{Fig:montecarlo_xi}
\end{figure}

For consistency, we analyzed the simulated data using the same approach as for the experimental data. At the lowest temperatures, the intensity is well approximated by a Gaussian, which captures the finite size of the particles. At higher temperatures, the signal broadens and can be described by a Voigt function, in which the Lorentzian part accounts for the additional broadening. Examples of the simulated signal of a 36 nm particle below and above the phase transition are shown in Fig.~\ref{Fig:montecarlo_fits}.

Fig.~\ref{Fig:montecarlo_xi} shows the correlation length for three nanoparticle diameters as a function of reduced temperature. As for the experimental data, we calculated the correlation length using $\xi=1/\Gamma$, where $\Gamma$ is the HWHM of the signal. There is a clear qualitative agreement between our experimental data and simulations.
The simulations show two different regions, one where the correlation length follows the expected bulk behavior and one where it converges to a constant value, indicating a lack of divergence of the correlation length near the critical temperature in the nanoscale regime. Moreover, this value of the converged correlation length depends on the size of the nanoparticles and only reaches a fraction of that of the long-range ordered state obtained at $T = 0$. This is in close agreement with the experimental data. Note that a quantitative comparison between the experimental and simulated data is compromised by the simplified simple cubic lattice used in the simulations, which does not include the magnetic frustration present in the CoO lattice. For example, the critical exponent of the simulated bulk behavior ($\nu = 0.63(3)$, corresponding to a 3D Ising system\cite{Guida1998}) deviates from that of the experimental data ($\nu = 1.6(4)$), as discussed above.

\section{Conclusions}

In conclusion, we used neutron scattering to measure the critical magnetic scattering near the antiferromagnetic phase transition in CoO at the nanoscale. Our results show that nanoparticles of CoO exhibit a different critical scattering behavior at temperatures close to $T_{\rm N}$, as compared to their bulk counterpart. In contrast to the divergence in correlation length observed for larger systems, a converged value of the correlation length close to the phase transition is observed at the nanoscale. Notably, the converged value is significantly smaller than that of the saturated state observed at low temperatures. Moreover, the size of the maximum correlation length depends on the size of the nanoparticles. Our Monte Carlo simulations support our findings of such a converged correlation length near the phase transition. We hereby show that the theory of critical phenomena, developed for macroscopic systems displaying continuous phase transitions, requires modifications when applied to a nanoscale system, in which geometrical constraints on the correlation need to be taken into account. We emphasize that while our study deals with magnetic nanoparticles, such modifications would be required for any nanoscale system.

\section*{Acknowledgments}

We thank N.H. Andersen, B. Lebech, P.-A. Lindg\aa rd, J. Juul, and H. Bruus for stimulating discussions. Special thanks go to S.~M\o rup for participating in the initial phases of this project. We thank H.M. R\o nnow for providing access to the Quantum Wolf computer cluster at the Laboratory for Quantum Magnetism, EPFL, Lausanne. MEK was supported by MSCA-IF Horizon 2020, grant number 838926. HJ acknowledges support from the Carlsberg Foundation. This work was supported by the Danish Technical Research Council through the Nanomagnetism framework program, and the Danish Natural Science Research Council through DANSCATT. The work is based on experiments performed at SINQ, Paul Scherrer Institute, Switzerland. MEK and JOB contributed equally to this work.
\section{Monte Carlo simulations}

The Monte Carlo simulations of the CoO nanoparticles use the nearest-neighbour Ising model with spins polarized along the $z$-axis in the absence of external magnetic fields:
\begin{equation}
H = -J\sum_{\langle i,j\rangle}s_is_j,
\end{equation}
where $J$ is the coupling strength, $s_i$ ($s_j$) is the spin value in units of $\hbar/2$ at the $i$'th ($j$'th) lattice position, and the brackets indicate that the sum runs over nearest neighbours. Using a simple cubic lattice, each lattice position has six nearest neighbours, except for those at the surface. Approximately spherical, monodisperse nanoparticles were simulated by only including lattice positions within a sphere with a diameter equal to the particle size. The lattice constant was set equal to the Co-Co distance in CoO, $a = 4.2615/\sqrt{2}$~\AA{}.

We used the Metropolis-Hastings algorithm in our simulations.\cite{metropolis1953,hastings1970} In each step, a random lattice point is chosen, and the change in energy upon a spin flip is calculated. If the flip lowers the energy of the system, it is always performed; if not, the flip is only carried out with a probability of $\exp[{-\Delta E/T_s}]$, where $\Delta E$ is the change in energy and $T_s$ is the simulated temperature of the system (a code sketch of this update scheme is given at the end of this section). For simplicity, we set $J = 1$. The Mersenne Twister algorithm was used to generate random numbers.\cite{matsumoto1998}

The transition temperature of the system was found to be $T_{\rm N}\approx 4.5$. We therefore initiated each simulation at $T_s=6$, well above $T_{\rm N}$. In this paramagnetic state, the spins were randomly set to either up or down. The temperature was then gradually lowered to $T_{\rm s} = 0$, and a simulation was carried out at each desired temperature, using the final spin configuration of the previous temperature as the starting point. We found that the number of attempted spin flips, $N$, required for the simulations to converge scales as $N \propto l^4$, where $l$ is the number of spins along the diameter of the particle. To ensure convergence, we set $N = 10 \, l^4$ in all simulations. The simulation temperatures were chosen relative to the bulk transition temperature, $T_{\rm N} \approx 4.5$, in three different intervals: a linear distribution of sixteen temperatures between $T_{\rm s} = 6$ and $T_{\rm N}$, 185 temperatures distributed according to a power law between $T_{\rm N}$ and $T_{\rm s} = 3$, and a linear distribution of five temperatures between $T_{\rm s} = 3$ and $T_{\rm s} = 0$.

The neutron scattering intensity was calculated using:
\begin{equation}
I = \sum_{i,j} s_i s_j \exp\left[i\mathbf{Q}\cdot\left(\mathbf{r}_i-\mathbf{r}_j\right)\right],
\end{equation}
where $\mathbf{Q}$ is the scattering vector and $\mathbf{r}_i$ and $\mathbf{r}_j$ are the positions of the $i$'th and $j$'th spin, respectively. The calculation can be simplified by noting that, since each pair of sites is counted twice, the exponential can be reduced to a cosine, summing only once over each lattice position. In this work, we compare the scattering intensity from the simulations to powder-averaged experimental neutron data. Performing the same powder average on spherically symmetric nanoparticles, only $\mathbf{Q}$ along one direction contributes to the measured signal.
Thus, by setting $\mathbf{Q} = (Q,0,0)$, the scattering intensity simplifies to:
\begin{equation}
I(Q) = \sum_i^{l^3} s_i \sum_k^{l} P_k \cos\left[Q\left(x_i-x_k\right)\right],
\end{equation}
where $i$ runs over all spins and $k$ runs over all planes of spins orthogonal to $\mathbf{Q}$. In the $k$'th plane, the total spin is $P_k$. This greatly reduces the evaluation of the scattering intensity from a sum over $l^6$ combinations to $l^4$ (this calculation is also sketched at the end of this section). Finally, due to the symmetric setup of the simulations, only positive values of $Q$ were calculated. While this means that the peak in intensity occurs at $Q = 0$, an $x$-axis offset of around 1.28~\AA$^{-1}$ was introduced in Figure 3 of the main text to match the peak position of the simulations with that obtained for the neutron data.

All simulations were performed on the Quantum Wolf Cluster located at the Laboratory for Quantum Magnetism of \'Ecole Polytechnique F\'ed\'erale de Lausanne. The number of individually simulated system sizes was chosen to optimize the usage of the cluster, with the total number of simulations for each particle size tabulated in Table~\ref{tab:particleSizeSimNumber}.

\begin{table}[h!]
\centering
\begin{tabular}{c|c||c|c}
\ Particle size $l$ \ & \ Number of simulations \ & \ Particle size $l$ \ & \ Number of simulations \ \\\hline
19 & 311 & 71 & 310\\
21 & 201 & 73 & 198\\
23 & 203 & 75 & 201\\
25 & 203 & 77 & 198\\
27 & 139 & 79 & 203\\
29 & 200 & 81 & 200\\
31 & 201 & 83 & 200\\
33 & 201 & 85 & 98\\
35 & 204 & 87 & 107\\
37 & 201 & 89 & 100\\
39 & 201 & 91 & 132\\
41 & 201 & 93 & 132\\
43 & 200 & 95 & 99\\
45 & 200 & 97 & 101\\
47 & 198 & 99 & 98\\
49 & 201 & 101 & 100\\
51 & 200 & 103 & 100\\
53 & 200 & 105 & 104\\
55 & 198 & 107 & 50\\
57 & 194 & 109 & 50\\
59 & 201 & 111 & 50\\
61 & 196 & 113 & 49\\
63 & 306 & 115 & 50\\
65 & 336 & 117 & 49\\
67 & 237 & 119 & 50\\
69 & 198 & 121 & 50\\\hline
\end{tabular}
\caption{Number of simulations performed for each particle size.}
\label{tab:particleSizeSimNumber}
\end{table}

At zero temperature we find no evidence of any dead layer.
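A minimal sketch of the Metropolis update scheme described above (our own illustration, not the production code used for the cluster runs; helper names and array layout are assumptions):

import numpy as np

rng = np.random.default_rng()

def make_particle(l):
    # Random +-1 spins on an l^3 cubic lattice, masked to a sphere of diameter l;
    # sites outside the sphere carry spin 0 (open boundaries).
    x = np.arange(l) - (l - 1) / 2.0
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
    inside = X**2 + Y**2 + Z**2 <= (l / 2.0)**2
    return rng.choice(np.array([-1, 1]), size=(l, l, l)) * inside, inside

def metropolis_sweep(spins, inside, T_s):
    # One sweep of l^3 attempted single-spin flips at simulated temperature T_s (J = 1).
    l = spins.shape[0]
    for _ in range(l**3):
        i, j, k = rng.integers(0, l, size=3)
        if not inside[i, j, k]:
            continue
        nn = 0
        for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            a, b, c = i + di, j + dj, k + dk
            if 0 <= a < l and 0 <= b < l and 0 <= c < l:
                nn += spins[a, b, c]
        dE = 2.0 * spins[i, j, k] * nn  # cost of flipping s -> -s under H = -J sum s_i s_j
        if dE <= 0 or (T_s > 0 and rng.random() < np.exp(-dE / T_s)):
            spins[i, j, k] *= -1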
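And a sketch of the reduced intensity calculation for $\mathbf{Q} = (Q,0,0)$ (the complex-modulus form below is algebraically identical to the cosine double sum over planes):

A_LAT = 4.2615 / np.sqrt(2)  # lattice constant in Angstrom (the Co-Co distance)

def intensity(spins, q_values):
    # Only the plane sums P_k matter for Q along one axis.
    l = spins.shape[0]
    P = spins.sum(axis=(1, 2))          # total spin in each plane orthogonal to Q
    x = A_LAT * np.arange(l)            # plane positions
    I = np.empty(len(q_values))
    for m, q in enumerate(q_values):
        re = np.sum(P * np.cos(q * x))
        im = np.sum(P * np.sin(q * x))
        I[m] = re**2 + im**2            # |sum_k P_k exp(i Q x_k)|^2
    return I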
{ "redpajama_set_name": "RedPajamaArXiv" }
5,505
<?php
namespace Chamilo\Application\Weblcms\Admin\Extension\Platform\Ajax;

use Chamilo\Libraries\Architecture\AjaxManager;

/**
 * @package Chamilo\Application\Weblcms\Admin\Extension\Platform\Ajax
 * @author Hans De Bisschop <hans.de.bisschop@ehb.be>
 * @author Magali Gillard <magali.gillard@ehb.be>
 * @author Eduard Vossen <eduard.vossen@ehb.be>
 */
abstract class Manager extends AjaxManager
{
    const ACTION_COURSE_CATEGORY_FEED = 'CourseCategoryFeed';
    const ACTION_COURSE_FEED = 'CourseFeed';
}
{ "redpajama_set_name": "RedPajamaGithub" }
1,623
An Interview with Say Anything: Admit It!!!
Morgan Magid

After over a decade of Say Anything, Max Bemis still does things his own way. The band surprised the music world with their most recent release, I Don't Think It Is, dropping around midnight on February 4 without warning. The Beyoncé-esque move is not the only influence Bemis has pulled from today's pop and hip-hop stars. Bemis is no stranger to forging his own path and allowing his uniquely styled creativity to shine through. The band's previous album, Hebrews (which has no guitars on it), continued a method of collaboration rarely seen in punk or rock music. Like many of today's rap and hip-hop artists, Bemis runs Say Anything almost like a collective, with a variety of players contributing different parts on nearly each and every song. Bemis' brother-in-law, Darren King of Mutemath, co-produced the record, which features guest appearances from members of The Hotelier, Little Big League, and Tiny Moving Parts. Say Anything's strain of brash honesty may not appeal to everyone, but after 15-plus years of creative output, including multiple bands like Say Anything, Two Tongues and Perma, and comic book series, it's clear that Bemis is comfortable and more than capable of holding his fate in his own hands. We recently caught up with Bemis to discuss I Don't Think It Is, the decision to release the album at midnight, hip-hop and more. The transcription is below:

What was the process of picking collaborators like for I Don't Think It Is?

It was really natural. Darren being the main collaborator became the standard for the entire record. I think he's the most important collaborator and that really just came about with us saying to each other, "Hey, it would be fun to make music," and we were making music together and then we said, "Hey, this could be Say Anything and let's just start." We just started making songs and eventually it just turned into, "This is definitely an album." We called the label and were like, "We're making an album," and they were down with that so we kept going. And then the other collaborators were kind of similar to Hebrews and Defense Of The Genre. We just started calling up people to contribute and most of them are my friends or people I know through other people so it was easy.

With Darren being a full production and composition partner, what was it like not having full control of the reins this time?

It was really liberating actually. He was wonderful. I didn't feel as much pressure and it was a whole new fun dynamic added to the band. On a day-to-day basis just seeing how someone else's imagination could mold some of the things that I'd introduced or working off someone else's imagination. I mean I had a little bit of that when Coby [Linder] was in the band. But Darren is a multi-instrumentalist whereas Coby was just a great drummer. So it was more of a back and forth dynamic.

At what point did you decide to have the surprise release?

I think just as it went along we heard the urgent quality of the songs and how it was somewhat different from our older material… It was sort of a meta-weird thing where I couldn't see the album going through that sort of mundane pre-release, the teasing and the bum-bum-bah. And just part of me being sick of that in general you know? It was a mixture of the quality of the music itself and me just hating that (laughs).

Was there any backlash from the decision from people in higher-up positions?

No, no, they got it. Our management and the label got it.
Obviously there's consequences to it, which is that we'll probably sell a good amount less records but at the same time, selling CDs in this age doesn't mean much… So the fact that it was up on Spotify and that's monetized and YouTube is monetized, in a way that's kind of where my band sort of makes its most money, and vinyl and stuff like that, which was not affected by [the decision].

When did you first start to realize the influence hip-hop has on you and how you make records?

Well firstly, Darren's style pulls a lot from [that]. We both grew up on hip-hop and electronic influences so automatically with Darren it's going to have that soulful feeling which in turn leads to hip-hop. And then… a lot of the recording came from this experience we had working with Kanye West briefly and when we got to hang out with him and see how he does stuff a little bit. We're like, "Why can't we make an indie rock emo record that utilizes some of the ways someone like him would approach a record?"

Why do you think more people haven't adopted your style of having you as this one centralized figure and then working with a collective approach?

Well I think sometimes it can come out sounding a little bit like Limp Bizkit—not that they're horrible—but I think to make something that's tasteful also infused with hip-hop and rock is a line that is hard to walk. But for me a lot of people have done it in a great way like Beck, but it's hard to think of. And also emo is such a white, male stereotype. Like you know people tend to associate it with straight white guys who don't really know what soul music is or stuff like that. So I think that's why people wouldn't associate that normally with people who appreciate hip-hop. But I think that's a falsehood, I think there's a lot of diversity in the scene plus a lot of diverse influence.

Do you think being open to genres outside of the umbrella of rock and punk puts you at an advantage over other musicians in the emo/punk genre?

I wouldn't say it's an advantage because there's other people who are not so keyed into that and they're doing just as cool stuff and even cooler stuff. But for me, yes, it's an advantage for me over me because the less I restrain myself, the less bored I feel. And the more and more I allow my influences to color my work, no matter what they are, I think the cooler Say Anything is going to sound. So there's only so much of the same crap you can do.

On "Jiminy" you say, "Destroy our first LP if you know what's good for me." As someone who's clearly extremely creative, how do you feel about the chatter for you to make "Is A Real Boy…" part two and the nostalgia surrounding that record?

I think it's normal, I mean it obviously annoys me but I don't really put myself in a position to be exposed to it. I'm aware that it exists on like message boards and amongst people who don't go to our shows but used to, but rarely am I put in contact with people like that. So my world is essentially shows where, who's going to pay for a show if they just want to hear three songs? Not many people. Twitter, where not many people are going to take the time to follow me and hear what I have to say if I'm only a guy that they like three songs from 10 years ago. And then the people that surround me—my friends, co-workers, people in business and people in media who generally are really respectful to me. So I think every artist of any note has that album or that era in their existence where people just can't let it go and that's cool, that just means that it's significant.
And there are times that I want people to burn that record, but at the same time I really don't. I was just writing in that song about how I don't want people to be stuck on it, but that doesn't mean I'm not aware of how important it is to people and it doesn't negate their emotional affection for it. Just being the type of writer that I am, I am going to write about that feeling that I have once in a while where I'm like, "Shut the fuck up" (laughs). But again I rarely have to face that because I'm generally surrounded by people who have been aware of everything I've done since I was 19.

Say Anything will be performing at the Theatre Of Living Arts in Philadelphia on May 12, Webster Hall in New York City on May 13, and at Starland Ballroom in Sayreville, NJ on May 15. For more information, go to sayanythingmusic.com.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
315
Q: Reinforcement Learning and the Markov Decision Process

In Sutton/Barto's book Introduction to Reinforcement Learning, they introduce the idea of the MDP and its advantages to Reinforcement Learning early on. My questions are:

1. We still CAN do RL on a system that is non-MDP, right?
2. What advantages can there be if a task can be described by an MDP vs a non-MDP?
3. How will I know if a task can be described by an MDP? Is there a statistical test to know this?

I would like to hear your insights and intuition on this. Thanks

A: The concepts of states, actions, state transitions and rewards all seem kind of necessary; I don't see how Reinforcement Learning can really make sense without those (though there is some room for simplification, e.g. an RL problem with only a single state could be viewed as a Multi-Armed Bandit problem). So, when you say "non-MDP" in your question, I'll assume that to mean that the Markov assumption is violated. This basically means that the current state does not provide sufficient information for determining an optimal policy / history is important / the entire sequence of all past states and actions is relevant / there is partial observability. Under that assumption:

1. We still CAN do RL on a system that is non-MDP, right?

Yes, but you will, for example, lose the theoretical guarantees that certain algorithms converge to optimal solutions. In practice you'll probably want to take extra steps in your algorithms to improve performance. For example, you may want to add some memory to your algorithm by replacing your state observations with a concatenation of the last $n$ state observations, instead of just a single state observation (a minimal sketch of this is given at the end of this answer).

2. What advantages can there be if a task can be described by an MDP vs a non-MDP?

The advantage of knowing that your task can be described as an MDP is that you can afford to use simpler algorithms / have theoretical proofs telling you that your solution will eventually converge to an optimal solution. The advantage of a non-MDP is... well, you can describe problems which can otherwise not be described because they don't fit in the MDP framework (for example an environment with partial observability, as is the case if you have a robot with a first-person view from its camera, as opposed to a top-down view of the entire environment).

3. How will I know if a task can be described by an MDP? Is there a statistical test to know this?

Not really. It will probably often be obvious if it mostly fits the general framework (can you define States, Actions, Rewards? Do you have discrete time-steps which correspond to decision-points for your agent?). Sometimes, though, this may not be obvious; there's not always an obvious direct translation, but it's still possible if you're clever about your formulation. On top of that, the Markov property is very important for the problem to truly be an MDP. Does a single observation of the state tell you everything you need to know? If yes, the Markov property is satisfied. Otherwise, it's violated (for example if you can't see everything that's relevant due to camera angle, or if you can't see the velocity/direction in which an object is moving because you're only observing a single frame).
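A minimal sketch of the observation-stacking idea from point 1, in plain Python with NumPy. The Gym-style `env` in the usage comment is a hypothetical stand-in rather than a specific library API, and observations are assumed to be 1-D NumPy arrays:

from collections import deque

import numpy as np


class ObservationStacker:
    """Approximates a Markov state for a non-Markov observation stream
    by concatenating the last n observations."""

    def __init__(self, n):
        self.n = n
        self.frames = deque(maxlen=n)  # the oldest frame is dropped automatically

    def reset(self, first_obs):
        # Pad the history with copies of the first observation so that the
        # stacked state has a fixed shape from the very first time-step.
        self.frames.clear()
        for _ in range(self.n):
            self.frames.append(first_obs)
        return np.concatenate(list(self.frames))

    def step(self, obs):
        self.frames.append(obs)
        return np.concatenate(list(self.frames))


# Hypothetical usage with a Gym-style environment:
#   stacker = ObservationStacker(n=4)
#   state = stacker.reset(env.reset())
#   obs, reward, done, info = env.step(action)
#   state = stacker.step(obs)

The stacked `state`, rather than the raw observation, is then what gets fed to the learning algorithm; whether $n$ frames are enough to restore the Markov property is problem-dependent.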
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,453
package systems.rcd.fwk.core.format.json.impl;

import java.io.IOException;
import java.util.Iterator;

import systems.rcd.fwk.core.format.json.RcdJsonService;
import systems.rcd.fwk.core.format.json.data.RcdJsonArray;
import systems.rcd.fwk.core.format.json.data.RcdJsonBoolean;
import systems.rcd.fwk.core.format.json.data.RcdJsonNumber;
import systems.rcd.fwk.core.format.json.data.RcdJsonObject;
import systems.rcd.fwk.core.format.json.data.RcdJsonString;
import systems.rcd.fwk.core.format.json.data.RcdJsonValue;
import systems.rcd.fwk.core.format.json.impl.data.RcdSimpleJsonArray;
import systems.rcd.fwk.core.format.json.impl.data.RcdSimpleJsonBoolean;
import systems.rcd.fwk.core.format.json.impl.data.RcdSimpleJsonNumber;
import systems.rcd.fwk.core.format.json.impl.data.RcdSimpleJsonObject;
import systems.rcd.fwk.core.format.json.impl.data.RcdSimpleJsonString;

public class RcdSimpleJsonService
    implements RcdJsonService
{
    private static final String NULL_VALUE = "null";

    @Override
    public RcdJsonBoolean instCreateJsonValue( final Boolean value )
    {
        return new RcdSimpleJsonBoolean( value );
    }

    @Override
    public RcdJsonNumber instCreateJsonValue( final Number value )
    {
        return new RcdSimpleJsonNumber( value );
    }

    @Override
    public RcdJsonString instCreateJsonValue( final String value )
    {
        return new RcdSimpleJsonString( value );
    }

    @Override
    public RcdJsonArray instCreateJsonArray()
    {
        return new RcdSimpleJsonArray();
    }

    @Override
    public RcdJsonObject instCreateJsonObject()
    {
        return new RcdSimpleJsonObject();
    }

    @Override
    public void instToString( final RcdJsonValue value, final Appendable output )
        throws IOException
    {
        if ( value == null )
        {
            output.append( NULL_VALUE );
            return;
        }
        switch ( value.getType() )
        {
            case BOOLEAN:
                final Boolean booleanValue = ( (RcdJsonBoolean) value ).getValue();
                output.append( booleanValue == null ? NULL_VALUE : booleanValue.toString() );
                break;
            case NUMBER:
                final Number numberValue = ( (RcdJsonNumber) value ).getValue();
                output.append( numberValue == null ? NULL_VALUE : numberValue.toString() );
                break;
            case STRING:
                final String stringValue = ( (RcdJsonString) value ).getValue();
                if ( stringValue == null )
                {
                    output.append( NULL_VALUE );
                }
                else
                {
                    output.append( "\"" ).
                        append( escapeString( stringValue ) ).
                        append( "\"" );
                }
                break;
            case OBJECT:
                final RcdJsonObject jsonObject = (RcdJsonObject) value;
                output.append( "{" );
                for ( final Iterator<String> iterator = jsonObject.getKeys().iterator(); iterator.hasNext(); )
                {
                    final String key = iterator.next();
                    // Keys are escaped as well: they may contain quotes or control characters.
                    output.append( "\"" ).append( escapeString( key ) ).append( "\"" ).append( ":" );
                    instToString( jsonObject.get( key ), output );
                    if ( iterator.hasNext() )
                    {
                        output.append( ',' );
                    }
                }
                output.append( "}" );
                break;
            case ARRAY:
                final RcdJsonArray jsonArray = (RcdJsonArray) value;
                output.append( "[" );
                for ( final Iterator<RcdJsonValue> iterator = jsonArray.iterator(); iterator.hasNext(); )
                {
                    final RcdJsonValue jsonValue = iterator.next();
                    instToString( jsonValue, output );
                    if ( iterator.hasNext() )
                    {
                        output.append( "," ); //TODO Add pretty print configuration to this service
                    }
                }
                output.append( "]" );
                break;
        }
    }

    private String escapeString( final String string )
    {
        // Backslashes must be escaped first; otherwise the backslashes introduced
        // by the other replacements would themselves get escaped again. Literal
        // String.replace is used instead of regex-based replaceAll, which avoids
        // regex pitfalls in both the patterns and the replacement strings.
        return string.replace( "\\", "\\\\" ).
            replace( "\"", "\\\"" ).
            replace( "\b", "\\b" ).
            replace( "\f", "\\f" ).
            replace( "\n", "\\n" ).
            replace( "\r", "\\r" ).
            replace( "\t", "\\t" ).
            replace( "/", "\\/" );
    }
}
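/*
 * Hypothetical usage sketch (not part of the original source file). It relies
 * only on the methods defined above; the input string is arbitrary. Note that
 * instToString declares IOException, so a caller must handle or propagate it.
 *
 *     final RcdSimpleJsonService service = new RcdSimpleJsonService();
 *     final StringBuilder output = new StringBuilder();
 *     service.instToString( service.instCreateJsonValue( "a \"quoted\" string" ), output );
 *     // output.toString() now contains: "a \"quoted\" string"
 */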
{ "redpajama_set_name": "RedPajamaGithub" }
3,411
## Almost Blue

## Tony O'Neill

### Praise for Tony O'Neill

'O'Neill could be our generation's Jim Thompson.' (James Frey, author of A Million Little Pieces)

'His evocation of the haunted landscapes of Los Angeles... resounds with the gnarled grace of a vintage Tom Waits adage: This stuff will probably kill you – let's do another line.' (The Guardian, reviewing Digging The Vein)

'Fans of Irvine Welsh and Warren Ellis are sure to love this dark, disturbing journey.' (Metro, reviewing Down And Out On Murder Mile)

'If the trajectory from On the Road to Trainspotting is a pharmaceutical decline and fall, then we want to finish with something totally abased. Digging the Vein fits neatly with our confessional times: there's no dignity, there's no road, only the death drive.' (Toby Litt, Esquire)

'Tony O'Neill works his LA people the way Dutch Leonard had his hand down the pants of every degenerate in his great Detroit novels.' (Barry Gifford, author of Wild at Heart)

'Tony O'Neill writes about the Hollywood I know as well as any writer alive. His characters are scorchingly real. His dialogue is note-perfect.' (Dan Fante, author of 86'd)

'The author once again plumbs the depths of his dope days with this inspired comedy of errors.' (Kirkus, reviewing Sick City)

'Like the bastard child of Dashiell Hammett and Evelyn Waugh.' (Rumpus)

Ebook version published in 2013 by Galley Beggar Press Ltd
The Book Hive, 53 London Street, Norwich, NR2 1HL

Typeset by Galley Beggar Press Ltd

All rights reserved © Tony O'Neill, 2013

The right of Tony O'Neill to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act, 1988

This ebook is licensed for your personal enjoyment only, so please don't re-sell it or give it away to other people. We want to be able to pay our writers! If you are reading this book and did not purchase it, please visit www.galleybeggar.co.uk and buy your own edition, or send a donation to make up for the money we and our author would otherwise lose. Thank you for understanding that we are a small publisher dependent on each copy we sell for our survival – and most of all, thank you for respecting the hard work of our author and ensuring we are able to reward him for his labours. And don't forget to keep visiting our site to see what else is happening in the Singles Club! Thank you

I am sat in the apartment with William, Susan and Genesis. Susan sucks on a Marlboro and says "That was a close one" while Genesis touches up her lip gloss in a compact mirror, legs crossed, flicking a strand of dirty blonde hair from her sweat-glistening brow, hand trembling slightly as always from the crystal meth she has been smoking. She um-hmmm's in agreement, barely taking her attention from her lips, the compact, the gloss. "I don't feel so good," William offers before staring off into space again. I am shivering slightly, not sure why I am so cold and saying nothing for now. I wonder if it would be rude to sneak off and fix a shot so close to the incident. I absently ponder if there is an established protocol in these situations. A _CRASH!_ interrupts my thoughts and the glasses on the old oak table we are sat around jump and tumble, sending Coke splashing in all directions. I look around to see Susan looking startled, Genesis with one eye on the floor, the other still checking out her reflection and William no longer in his seat. With a groan he gets up from the floor.
His right cheek and eye are an ugly red colour from where he caught the corner of the table as he blacked out and slid out of his chair. "You fucker! What'ya do that for?" he moans, raising a hand to his injured eye and staring at me accusingly with the other. I turn my palms up and say "What?" and he spits "You hit me in the fucking face!" Suddenly Genesis laughs loudly and inappropriately at William's pratfall. Susan shoots her a dirty look then in something approximating a comforting voice says, "You fell off your chair and hit the table sweetie; you blacked out." "Oh, sorry..." "That's OK." "I busted my face up good. It hurts." "Yeah, you did." "Fuck. Michelle is gonna kill me!" I laugh nervously. "Michelle's gonna kill _me_." "I just fell off the chair, huh? The last thing I remember is..." He drifts off. "Just tell her what happened. It'll be cool," offers Genesis. William thinks about it for a second and says, "No fucking way, she'd freak out... Uh, look... I'd better get home..." Driving William home I pull into a McDonalds drive-thru and convince him to eat something. "It'll make ya feel better" I say, so he reluctantly orders a cheeseburger that he chews at morosely while we head to his place on Normandie Avenue. As we pass Western he starts to vomit violently, aiming it out of the window but splashing the inside as well as the outside of the door. I suppose I can't be too mad with him considering the circumstances. As we pull up outside of his building I offer an apologetic, "Here we are" rousing him from his stupor. He has drifted off, drool and sick hanging off his chin, half-eaten burger still in his hand. William opens the puke-splattered door and staggers out onto the sidewalk. "Don't fall asleep when you get in," I tell him. "Drink some coffee, stay on your feet for the next couple of hours. Whatever you do, don't sleep. Sleep would be really bad right now." He just nods and turns to head towards his apartment. "Hey" I yell after him. "I'm _real_ sorry, you know?" He half raises his hand in some kind of response as he walks unsteadily to the apartment. I pull away. I am heading towards Fairfax when the oldies station I am tuned into plays Frankie Valli and the Four Seasons' _Big Girls Don't Cry._ It's Friday night and the Santa Ana winds are blowing in from the desert. I'm not really sure why but I start to get the sudden urge to sob. I try to swallow the feeling down inside of myself but it sticks in my throat like wadded cotton wool and the need to vent becomes so overwhelming that it feels like a physical pain in my chest. I need to get high. Higher. As high as I possibly can. I need to cook down all of the drugs in the world into an evil dark brown goo and shoot it straight into my heart and maybe then – maybe – I will feel like a human being again. *** "He's not breathing! Jesus Susan he's fucking DEAD! I FUCKING KILLEDHIMHESDEADIFUCKINGKILLEDHIM!!!" On the other end Susan is screaming, too. Her voice sounds like an angry wasp trapped in the phone. "JUST CALM DOWN... shit, Genesis! Close the _door_ the whole fucking _office_ can hear this... OK listen, do you know CPR?" I am standing over William. He looks dead. Surely he can't come back after this? His face is an unhealthy purple; I have been slapping him around ever since he took his shot and started turning blue. After the third blow he simply slid off of the couch and onto the floor, limp. "No, I don't know CPR you stupid cunt! How the fuck would I know CPR?" "Fuck! Just... 
pinch his nose and breathe into his mouth! Inflate his lungs. Then pump his chest... Fuck, is it five times? Ten maybe?"

"Fuck me! Just guess, you stupid junkie bitch!"

"OK... five times, then repeat. Listen I'm hanging up! We're leaving right now! Genesis, we're going—"

"Hurry the fuck up!"

And then the line goes dead. Dead. Oh Jesus. Susan works at least half an hour's drive from here. If I don't do something soon we're either gonna have to call the cops or dispose of William's body and run. I start to vomit. I didn't know fear could make you vomit. It is hot and it burns and it cascades onto my shoes and the floor and some of it splashes onto William. It came so suddenly that I didn't even have time to direct it away from him. Great. I've killed him and puked on his body. I imagine the headline DEGENERATE DRUG FIEND KILLS FRIEND, BEATS HIM, VOMITS ON CORPSE. Suddenly I see a flicker of movement from William. Maybe it is just a twitching nerve. The death twitch. But it happened right after the puke splashed on his face. Maybe it was related. Maybe there's still some life in him. Inspired, I run around the house and find a vase. There are dead flowers in it. The water has long since evaporated. Susan got it when we moved in here and like everything else it was ignored as we just sat around and shot drugs. The flowers withered and died, just like her cat. Poor Hemingway. We heard his meows for days. Maybe weeks. Stupefied by the heroin, we ignored the whining. The passage of time became blurred. I found him dead under the sink one morning. I think he starved. Maybe he died of thirst. To avoid one of Susan's periodic psychological breakdowns I simply threw the cat's body over the balcony to the rocks of the Hollywood hills below us for the coyotes to eat. She never once inquired as to where her pet of seven years had vanished to and I never brought it up. I fill up the vase with water and walk over to William. "You'd better wake up, cocksucker," I tell him and – my hands shaking with fear, adrenaline – I dump the water all over him. "Please wake up!"

Like Lazarus, William shakes and twitches. He gasps and coughs and starts to heave. I drop the vase and it shatters with a crash that sounds like all of the hearts in the world exploding at once.

"Oh Jesus!" William gasps, "Where am I?"

"You're alive. You're fucking alive!" I can hardly believe it myself.

"Yeah."

"I brought you back to life!"

William shoots me a dirty look. I try and think of the right thing to say.

"I need a shot," I tell him, "I need one really fucking bad."

Then I hear a key scraping in the lock and the door bursts open. "OH JESUS!" Susan cries as she staggers in. Her work clothes are disheveled and she is barefoot, high heels in hand. "He's alive? Oh thank Christ!"

"I told you" Genesis drawls, strolling in after her. She is eating a taco. I stare at her in outraged disbelief.

"You stopped to pick up FOOD?"

"Just drive-thru..." Susan mutters guiltily. "You know how fast they are at Del Taco."

"Shit, calm down!" Genesis flops down into the nearest seat and pulls a glass pipe and butane lighter from her purse. "I was fuckin' starving, man. Anyway, he's alright ain't he?"

Genesis looks like a meth-whore, even in her cheap attempt at office attire. Susan – the head of finances at a chain of local Laundromats – got Genesis the job as her assistant in an attempt to get more money coming into the house. Genesis arrived here a few months ago and started mooching drugs.
She expertly wove herself into the fabric of our lives, taking over the spare room. I had to admire her cunning. She had Susan wrapped around her little finger within days of showing up. In an attempt to keep me sweet, Genesis lets me fuck her whenever I like, but with the amount of drugs I consume I cannot manage it very often. Bringing Genesis into the office was an act of madness on Susan's part. I have no doubt that Susan and Genesis will both be fired soon. Somebody will catch Genesis smoking crystal meth in the bathroom. A snooping cleaner will open Susan's desk and find her works and her drugs. Or the owners will realize that Susan has been creaming money off the top and hiding it with fancy account keeping. It is only a matter of time before it all falls apart and I fear what will happen to all of us when that happens.

"Oh Jesus fucking Christ, what am I doing here?" William moans. I wonder the same thing as I help him to his feet.

***

William is here to write songs with me. The band is stalling badly and I am blocked. The songs I have for him to work on are pitiful, piss-poor imitations of better songs by better bands. My creativity is gone. My days are totally focused on the scrabble to maintain the flow of drugs that enters the apartment. I have no time to create. I have no drive to create. The misery inside of me is so great, so oppressive that the only way to deal with it is through constant sedation. I only agreed to this because I am embarrassed to admit to my erstwhile best friend in Los Angeles that he was right about everything. I have indeed become the hopeless fuck up he warned me I would become if I started shooting dope and didn't break off my relationship with Susan. "She may be smart and she may be able to pull in money right now," he told me, "but she's fucking crazy. That girl doesn't want a boyfriend. She wants someone to die with her."

And he was right. I suppose I wanted to see what dying felt like. But now I was stuck, neither alive nor dead. I kept this appointment to hang out and work on new songs in an attempt to prove something to William that I already knew was a preposterous lie. To prove that I was OK.

"I'm gonna get high before we start," I tell him, "You don't mind do you?"

"Nope." Then he says, "Do you have a little spare?"

We're in a band. We share drugs. William smokes heroin on occasion but is basically a good-natured cokehead. I suppose he wants to connect with me a little before we start and sees dope as the only way to do it.

"Sure. I've only got a little, though. You mind shooting it instead?"

"I don't do that—"

"Come on, man! I only have a tiny bit 'til Susan gets home... You won't even _feel_ it if you smoke it. Don't worry. I'll make it a tiny shot. You'll get high, nothing more. I'll be careful."

"Michelle would kill me if she knew I'd used needles. She didn't even want me coming over here."

Now, that hurt. _Fuck Michelle_, I silently fume.

"Look, what's the worst that can happen? I've done this a million times. You're doing it ONCE. You're my friend. Nothing bad will happen."

"Well, I suppose..."

"Trust me," I say as I start to prepare the shots and tell him to wrap my belt good and tight around his arm.

We hope you have enjoyed reading this Galley Beggar Single. You can find our other £1 singles in the Galley Beggar Store.
Tony O'Neill has also written the following novels:

Digging The Vein (Contemporary Press 2006)
Down And Out On Murder Mile (Harper Perennial 2008)
Sick City (Harper Perennial 2010)
Black Neon (Walde and Graf 2012)

He has also co-written Neon Angel, a memoir of Runaways singer Cherie Currie, and Hero Of The Underground, the incredible story of American football player turned drug-hoover, Jason Peter.

Tony rocks.
{ "redpajama_set_name": "RedPajamaBook" }
5,601
Naytia glabrata is a species of sea snail, a marine gastropod mollusc in the family Nassariidae, the Nassa mud snails or dog whelks.

Description
The shell size varies between 7 mm and 10 mm.

Distribution
This species is distributed in the Atlantic Ocean off Senegal, Gabon and Angola.

References
Bernard, P.A. (Ed.) (1984). Coquillages du Gabon [Shells of Gabon]. Libreville, Gabon: Pierre A. Bernard. 140 pp., 75 plates.
Cernohorsky, W.O. (1984). Systematics of the family Nassariidae (Mollusca: Gastropoda). Bulletin of the Auckland Institute and Museum 14: 1–356.
Adam, W. & Knudsen, J. (1984). Révision des Nassariidae (Mollusca: Gastropoda Prosobranchia) de l'Afrique occidentale. Bulletin de l'Institut Royal des Sciences Naturelles de Belgique 55(9): 1–95, 5 pl.
Gofas, S.; Afonso, J.P.; Brandão, M. (Eds.) (s.a.). Conchas e Moluscos de Angola = Coquillages et Mollusques d'Angola [Shells and molluscs of Angola]. Angola: Universidade Agostinho / Elf Aquitaine Angola. 140 pp.

External links
Sowerby, G.B., II (1842). Monograph of the genus Strombus. In: G.B. Sowerby II (ed.), Thesaurus conchyliorum, or monographs of genera of shells. Vol. 1(1): 25–39, pls 6–10. London, privately published.
Galindo, L.A.; Puillandre, N.; Utge, J.; Lozouet, P.; Bouchet, P. (2016). The phylogeny and systematics of the Nassariidae revisited (Gastropoda, Buccinoidea). Molecular Phylogenetics and Evolution 99: 337–353.

Nassariidae
Gastropods described in 1842
{ "redpajama_set_name": "RedPajamaWikipedia" }
335
different! After all, when they sit down to take exams, those who have absorbed nothing at all will be exposed. We do it because we are motivated and envision what a perfect custom writing service should look like. It is overall quality, not the number of submitted and processed orders, that we primarily focus on. We maintain services with strict anonymity and under no circumstances disclose customers' private data. Stanford, UCLA, Berkeley, NYU, Columbia, the University of Houston, and other institutions from these states are known for their competitive systems. When it comes to subjects, students most commonly struggle with projects for Business, English language, and Management courses. But ultimately, students who use essay-writing services are cheating no one more than themselves. Some may simply be short on time and juggling competing commitments. The essay-writing industry is a source of interesting statistical data. This is what we are doing at our company every single day: providing you with lifetime memories. There is no such thing as an unsolvable academic issue; there is only a lack of will to conquer it! The writer will follow the guidelines you input in the box below. So I gave them a call. Since academic writing is becoming one of the most prominent aspects of the educational system, the constant development of the custom-writing industry is clearly justified. By this logic, a student who pays a fair market price for it has earned whatever grade it brings. Writing is a vital skill that is applied in many areas of life, especially for those who are entering the workforce, whether they are doing so as an employee or a business owner. Online essay writers at your service! But when students outsource their essays to third-party services, they are devaluing the very degree programs they pursue. It is typical to hear clients say: "Write my paper for me," to which we respond: "Have no worries, our assignment will bring you an A!" Our determination to deliver research paper writing services of an unprecedented quality is unique. Be yourself, tell them what you feel and how you feel, share your values with the readers in the form of an anecdote. Eventually, two news articles were selected that showed a variety of opinions and topics of.. Making Sense: A Real-World Rhetorical Reader. D'Agata, John (Editor), The Lost Origins of the Essay. Each argument of an argumentative essay should be supported with sufficient evidence, relevant to the point. During the Age of Enlightenment, essays were..
{ "redpajama_set_name": "RedPajamaC4" }
1,741
layout: post date: '2016-01-25' title: "Watters - Wtoo Wtoo Maids Dress 788 Sleeveless Floor-Length Empire" category: Watters - Wtoo tags: [Watters - Wtoo,Wtoo,Empire,Sweetheart,Floor-Length,Sleeveless] --- ### Watters - Wtoo Wtoo Maids Dress 788 Just **$199.99** ### Sleeveless Floor-Length Empire <table><tr><td>BRANDS</td><td>Wtoo</td></tr><tr><td>Silhouette</td><td>Empire</td></tr><tr><td>Neckline</td><td>Sweetheart</td></tr><tr><td>Hemline/Train</td><td>Floor-Length</td></tr><tr><td>Sleeve</td><td>Sleeveless</td></tr></table> <a href="https://www.readybrides.com/en/watters-wtoo/14747-watters-dress-788.html"><img src="//static.msromantic.com/33642/watters-dress-788.jpg" alt="Wtoo Maids Dress 788" style="width:100%;" /></a> <!-- break --> Buy it: [https://www.readybrides.com/en/watters-wtoo/14747-watters-dress-788.html](https://www.readybrides.com/en/watters-wtoo/14747-watters-dress-788.html)
{ "redpajama_set_name": "RedPajamaGithub" }
7,600
Doctor Who: The Monthly Adventures, formerly titled the Main Range, is a series of full-cast audio dramas based on the British science fiction television programme Doctor Who, produced by Nicholas Briggs and Big Finish Productions and starring one of the original actors to play The Doctor on television in the classic era of the programme. The main audio series currently features the Fifth, Sixth and Seventh Doctors, and has since developed the pattern of thirteen releases per year, one every month with two in September or December. In May 2020, Big Finish announced that the Main Range would conclude with its 275th release in March 2021, to be replaced with regular releases of each Doctor in their own boxsets throughout the year from January 2022. With 275 releases over 22 years, in 2021 the series achieved the Guinness World Record for the longest-running science fiction audio play series.

Big Finish Productions began producing audio dramas featuring the Fifth, Sixth and Seventh Doctors, starting with The Sirens of Time in July 1999. This continued through to 2000, and from 2001 to 2007 the main range also included releases featuring the Eighth Doctor with his companions Charley Pollard and C'rizz, but these ended due to the simultaneously running Eighth Doctor Adventures, which ran from 2006 to 2011 and featured companion Lucie Miller. From 2008 to late 2011, only one Eighth Doctor release was produced for the main range: The Company of Friends, featuring companions from media other than the audio plays, as well as the historical figure Mary Shelley. The Eighth Doctor returned to the main range in a trilogy of adventures with Mary Shelley in October 2011.

Cast

Notable guests
Nicholas Courtney and Jon Culshaw as Brigadier Lethbridge-Stewart
Lalla Ward as Romana
Louise Jameson as Leela
John Leeson as K9
Frazer Hines as Jamie McCrimmon
Katy Manning as Jo Grant and Iris Wildthyme
Richard Franklin as Mike Yates
Peter Purves as Steven Taylor
Maureen O'Brien as Vicki
Ian McNeice as Winston Churchill
Robert Jezek as Frobisher
Lisa Bowerman as Bernice Summerfield
Miles Richardson as Irving Braxiatel
Anna Hope as DI Menzies
John Pickard as Thomas Brewster
Julie Cox as Mary Shelley
Maggie O'Neill as Lysandra Aristedes
Nicola Walker as Liv Chenka
Amy Pemberton as Sally Morgan
Christian Edwards as Will Arrowsmith
George Watkins as Marc
Geoffrey Beevers, Alex Macqueen, and James Dreyfus as The Master
Don Warrington as Rassilon
Ian Collier as Omega
Terry Molloy as Davros
Nabil Shaban as Sil
Siobhan Redmond as The Rani
Graeme Garden and Rufus Hound as The Monk
Mark Bonnar as The Eleven
Nicholas Briggs as the Daleks, Cybermen, and Ice Warriors

Releases (1999–2021)

Continuation
In May 2020, Big Finish announced that the Main Range would conclude with its 275th release in March 2021, to be replaced with regular releases of each Doctor in their own boxsets throughout the year from January 2022. The new boxsets for each Doctor were announced in May 2021. With the exception of the Second Doctor, Big Finish had already produced boxset ranges for each Doctor. The First, Third, Fourth and Eighth Doctor Adventures ranges enjoyed regular releases by the time the Monthly Adventures ended, whereas the Fifth, Sixth and Seventh Doctor Adventures ranges had only occasional releases prior to these series being relaunched.

Big Finish Productions
Doctor Who spin-offs
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,671
Skybird is a single by Neil Diamond, taken from his album Jonathan Livingston Seagull. The album itself sold well, but the singles Be and Skybird sold only moderately.

Skybird carries the subtitle "The lesson". The song draws a distinction between Skybird (birds that do nothing but fly endlessly), Songbird (songbirds) and Nightbird (night birds).

The B-side was Lonely looking sky, from the same album.

Charts
The song was a success in the United States (number 75 in the chart), Australia (number 74), Belgium (number 25) and the Netherlands. In the United Kingdom it did not chart.

Nederlandse Top 40
Nederlandse Single Top 100
BRT Top 30

1974 singles
Neil Diamond songs
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,427
Brave Review

As hard as I try not to, I just can't stop committing the cardinal sin of comparing every new Pixar film to the ones that came before. If you can somehow manage to avoid doing that, you'll really enjoy Brave; it's a technical masterpiece, the voice acting is top-notch, and it has some of the strongest humor of any Pixar film to date. It feels more Disney than Disney-Pixar, but Brave still has plenty for kids of all ages to enjoy. Right from the surprising opening sequence, it's clear that Pixar set out to craft their own version of the Disney princess tale. The main character Merida, voiced by the (actually Scottish!) actress Kelly Macdonald, is a headstrong young woman who doesn't want to go along with the arranged marriage that her mother has forced upon her. She's only happy when she's riding her horse through the woods and using her incredible archery skills to shoot things. Early in the film she makes a rash decision involving a witch, her mother and a giant bear, and a well-balanced mix of comedy, drama and action ensues for a very quick 100 minutes. The storytellers at Pixar have proven time and again that they can create characters who appeal to young kids while building sophisticated, nuanced narratives around them that older moviegoers can appreciate. Brave doesn't accomplish this in the same way as WALL-E or Up (which dealt with heavy themes like taking responsibility for our planet and overcoming the loss of a loved one, respectively), but some hints of deeper meaning are hidden underneath the otherwise by-the-numbers plot. The theme of choosing one's own destiny is the most prominent, as Merida breaks from tradition and relentlessly forges her own path. Mother-daughter relationships are also put under the microscope and, having grown up with a sister who had very similar arguments on an almost daily basis, it was easy to relate. A lot of families don't pay enough attention to simply listening, which ends up being the root of most of Merida's problems. Although moments of false suspense and predictability are plentiful, the story does take a few unexpected turns that I won't spoil here. Surprisingly, there is no Prince Charming in Pixar's vision of the classic princess tale. There are plenty of male characters in Brave, most of them for comic relief, but the story is mostly about two strong-willed women who struggle to see each other's points of view until they have no other choice but to accept them. The animation is predictably astounding. Merida's shock of red hair is reason enough to marvel at Pixar's technical prowess, not to mention the characters' facial expressions and body language. Whether it's Scotland, outer space or a little kid's bedroom, Pixar can somehow make their computer-generated locations look better than their real-world counterparts. I can't imagine the next film (a prequel to Monsters, Inc.) looking any better, but I have no doubts that it will. The soundtrack, a mix of classic Celtic tunes and more intense orchestral pieces, works really well. I did wish that there were fewer vocals throughout and more instrumental pieces, as such a distinct music style can easily stand on its own. If you've seen any trailers for Brave or read any promotional material you've probably already figured out most of the film. But if we were to stop watching Pixar films because they're too predictable, too silly, too kid-oriented or not as good as Toy Story we would have stopped watching them after…well, Toy Story.
It's obvious that just as much care and effort went into Brave as any Pixar film, and even though its plot isn't as memorable or thought-provoking as some of the studio's very best works, it's still very much worth seeing. Merida's mother might have said, during one of her daughter's more obnoxious moments, "It's not what you say but how you say it." The story of Brave has been told before, but the way that Pixar tells it makes that familiar ground worth retreading.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
131
Lihong Liu is a Chinese art historian who teaches at the University of Rochester in the United States. She studied medieval Buddhist and Daoist arts at Peking University, China, and Chinese paintings of the Ming and Qing periods (1368–1911) at the Institute of Fine Arts, New York University. After she obtained her PhD in the history of art and archaeology at New York University, she held residential postdoctoral fellowships at the Getty Research Institute in Los Angeles and at the Center for Advanced Study in the Visual Arts in Washington DC. Liu's recent research projects have variously involved attempts to develop an ecological approach to art history, the transcultural study of material medium, and studies of the art of simulation and automation. She also has a longstanding interest in the arts and material culture of the Silk Routes, including issues related to the transmission of Buddhism as well as the interchange between China and the Islamic world.

This book project deals with a fundamental notion and practice of Chinese landscape painting, shijing (literally, "the real scene"). Shijing painting became especially popular during the mid-Ming period (ca. 1450–1550) when artists' quest for the "real" (shi) in their paintings coincided with people's quest for the "scenic" (jing) in their living surroundings. This scenic realism, I argue, emerged as an eco-pictorial mode that activates the mutual evocations between painting and a sense of place, and between art and the everyday cosmos.
{ "redpajama_set_name": "RedPajamaC4" }
9,195
from twisted.internet import reactor, protocol

from gadget.settings import load_secondary_settings, get_setting, have_required_settings
from gadget.protocols import parse_hostname
from gadget.messages import send_global

class Echoer(protocol.DatagramProtocol):
    """Listens for datagrams, and sends them as messages."""

    def datagramReceived(self, data, addr):
        # The sender's (host, port) address is ignored; the raw datagram
        # payload is simply re-broadcast as a message. (The original tuple
        # unpacking in the signature was Python 2-only syntax.)
        send_global(data)

def build_protocol():
    # Default contents for the "echoer" secondary settings.
    defaultSettings = '''#BIND_ADDRESS = ""
'''

    load_secondary_settings("echoer", defaultSettings)

    if have_required_settings("ECHOER_BIND_ADDRESS"):
        host, port = parse_hostname(get_setting("ECHOER_BIND_ADDRESS"))
        echoer = Echoer()

        reactor.listenUDP(port, echoer, interface=host)

        return echoer
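# Hypothetical usage sketch (the module path below is an assumption, as is the
# exact settings-file syntax; it mirrors the defaultSettings template above):
#
#   # in the "echoer" secondary settings file:
#   #   BIND_ADDRESS = "0.0.0.0:7777"
#
#   from gadget.echoer import build_protocol
#   from twisted.internet import reactor
#
#   build_protocol()  # binds the UDP port when ECHOER_BIND_ADDRESS is set
#   reactor.run()     # each datagram received is then re-sent via send_global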
{ "redpajama_set_name": "RedPajamaGithub" }
567
\section{Introduction}
\label{sec:introduction}
\ \indent
General Relativity (GR) is based on A. Einstein's minimalistic assumption that all degrees of freedom of the gravitational field can be encoded in a single tensorial field: the metric field $g$. This historically attracted immediate criticism, most notably from E. Cartan, who strongly advocated that metrical and affine structures are two logically distinct concepts. From Cartan's point of view, we should minimize \textit{ad hoc} assumptions about the spacetime manifold. Thus, the degrees of freedom of the gravitational field should be described, in generality, by $g$ and an affine connection $\Gamma$ as independent fundamental fields.

Today, more than 100 years later, many so-called alternatives to GR have been developed. Most of them fall under the umbrella of metrical (GR, $f\left(R\right)$, \textit{etc.}), scalar-metrical (Brans-Dicke, Horndeski, \textit{etc.}), vector-metrical (Will-Nordtvedt, Hellings-Nordtvedt, \textit{etc.}), bimetrical (Rosen, Rastall, \textit{etc.}), affine (Teleparallel, Symmetric Teleparallel, \textit{etc.}), metric-affine (Einstein-Cartan (EC), metric-affine $f(R)$, \textit{etc.}) or gauge-theoretical (Einstein-Cartan-Sciama-Kibble (ECSK), affine gauge theory, \textit{etc.}) theories, each of them, of course, claiming its own set of fundamental variables. In teleparallelism \cite{aldrovandi2013,maluf2013}, for instance, there exists a single fundamental field: a metric-compatible flat $\Gamma$. In this formulation, gravity is exclusively a manifestation of non-vanishing torsion. In symmetric teleparallelism \cite{ferraris1982a,nester1999}, it is a torsion-free flat $\Gamma$, and gravity is exclusively a manifestation of non-vanishing non-metricity. In EC theory \cite{hehl1974,hehl1976a,trautman2006}, metric-affine $f(R)$ \cite{sotiriou2007,sotiriou2009,sotiriou2010,olmo2011}, generic metric-affine \cite{hehl1976b}, \textit{etc.}, $g$ and $\Gamma$ are the two independent fundamental fields (Cartan's philosophy). In these, gravity is a manifestation of non-vanishing curvature, torsion, and/or non-metricity of $\Gamma$.

In the gauge-theoretical branch \cite{sardanashvily1983,hehl1995,gronwald1995,gronwald1997}, $g$ and $\Gamma$ are not fundamental entities, but effective fields, constructed as composite objects from more elementary ones. This approach mainly consists of gauge theories for external symmetries -- usually the general affine group $Aff\left(n\right)$, one of its subgroups, or their supersymmetric extensions -- which, in general, have a soldering form $e$ and a gauge connection $A$ as the set of fundamental fields \cite{hehl1995}. Historically, the Lorentz group $SO\left(1,3\right)$ was the first (among the external ones) to be gauged, giving rise to the ECSK theory \cite{utiyama1956,kibble1961,sciama1962,sciama1964}. The generators of $\mathfrak{so}\left(1,3\right)$ are necessarily anti-symmetric, which makes a Lorentz connection metric-compatible. In ECSK theory, gravity is exclusively a manifestation of curvature and/or torsion. Recently, the gauging of $Spin(4)$ has also been shown to produce gravitation \cite{weldon2001,lippoldt2014,emmrich2022}. In spin base invariant models, the fundamental fields are the set of Dirac matrices $\gamma$ (in place of $e$), satisfying the Clifford algebra $\left\{ \gamma, \gamma \right\} = 2g$, and a $Spin(4)$ gauge connection $\hat A$. Analogously to ECSK theory, the generators of $\mathfrak{spin}(4)$ are anti-symmetric, which makes $\hat A$ also metric-compatible.
In spin base invariant models, gravity is a manifestation of non-vanishing curvature and/or torsion of $\hat A$. As one can see, the ``Einstein \textit{versus} Cartan debate'' or, more generally stated, the ``which are the correct variables and symmetries that fundamentally describe the physical degrees of freedom of the gravitational field'' debate, is pretty much alive \cite{fay2007,capozziello2011,nojiri2011,clifton2012,berti2015}. And, despite the very recent advances in observational physics -- with emphasis on very long baseline interferometry and multi-messenger astronomy --, a concrete answer seems unlikely in the near future \cite{broderick2014,mizuno2018,sunny2019,eht2019,yagi2016,baker2017,ferreira2022,cantata2021}.

In this light, a classification of all these distinct descriptions is of utmost importance. Classically, GR, Teleparallel and Symmetric Teleparallel are known to be equivalent among themselves \cite{jimenez2019}. As we review in more detail in Section \ref{sec:gravities}, EC theory, in the presence of matter carrying vanishing hypermomentum currents, is also equivalent to GR \cite{hehl1974}. Analogously, metric-affine $f(R)$ in vacuum is equivalent to GR \cite{sotiriou2007}. This last result may strike one as quite surprising, given that the only case in which metrical $f(R)$ is equivalent to GR is when $f(R)=R$ \cite{olmo2007}. Indeed, if $\phi \equiv f'(R)$ is an invertible field transformation\footnote{The prime $'$ indicates the derivative of the function with respect to its argument.}, then metrical $f(R)$ is equivalent to the scalar-metrical $\omega=0$ Brans-Dicke theory, with potential $V(\phi) \equiv R(\phi)\phi-f(R(\phi))$ \cite{teyssandier1983}. Under the same field transformation, metric-affine $f(R)$, in the presence of matter carrying vanishing hypermomentum currents, is equivalent to the $\omega=-3/2$ Brans-Dicke theory (with potential $V(\phi)$) \cite{sotiriou2009}. These $f(R)$ (in)equivalences smoothly hold in the limit $f(R)\rightarrow R$ \cite{olmo2007}.

Many gauge-theoretical models of gravity are classically equivalent to metrical, affine or metric-affine models. For instance, the gauge theory for spacetime translations ($\mathbb{R}^4$), first developed in \cite{hayashi1967}, was shown to be equivalent to teleparallelism in \cite{cho1976,hayashi1977,hayashi1979} -- see \cite{aldrovandi2013} for a historical account. The ECSK theory, on the other hand, was born as a gauge theory of gravity and is equivalent to EC theory. Spin base invariant gravity (with vanishing spin torsion) and ECSK theory (in the presence of matter with vanishing hypermomentum currents) are both equivalent to GR. Indeed, these equivalences are so widespread in the physics literature that they are commonly referred to as just different formalisms of the same underlying physical theory: the metrical or holonomic $(g,\Gamma)$ \textit{versus} the vielbein or non-holonomic $(e,A)$\footnote{Or $(\gamma,\hat A)$, in the case of spin base invariant gravity.}. In this work, we avoid phrasing holonomic $(g,\Gamma)$ \textit{versus} non-holonomic $(e,A)$ theories of gravity as just different formalisms. We do not take the aforementioned equivalences for granted, and we adopt the point of view that holonomic \textit{versus} non-holonomic theories of gravity are fundamentally distinct -- unless unequivocally proven otherwise.
The reason for this is three-fold: i) classical equivalences might not hold quantum mechanically; ii) to the author's knowledge, it is not known if they hold for general metric-affine dynamics; and iii) they actually fail on degenerate spacetimes \cite{kaul2016a,kaul2016b,kaul2019}. As one can imagine, point i) is tricky to answer, as we do not know how to properly formulate a quantum theory of gravity from a classical one. However, attempts have been made within a path integral approach. In \cite{lippoldt2014}, it was shown that the field transformation given by $\left\{\gamma,\gamma\right\}=2g$ introduces a trivial factor in the functional measure $\mathcal{D}g$ (of quantum GR), transformed from the functional measure $\mathcal{D}\gamma$ (of quantum spin base invariant gravity with vanishing spin torsion). In \cite{zanelli2003}, the quantum equivalence between ECSK theory in vacuum and GR is also established. It, however, ignores non-globally hyperbolic spacetimes and the particular ghost structure of each theory\footnote{The former is invariant under $Diff(4) \ltimes SO(1,3)$, while the latter is under $Diff(4)$.}.
Greek, lower-case and upper-case latin letters range from $0$ to $n-1$, unless stated otherwise as well. \section{Holonomic \textit{versus} non-holonomic frames}% \label{sec:holonomic_and_non-holonomic_frames} A frame is generally considered as an ordered set of linearly independent vectors spanning some vector space. Given such general definition, one can image the multitude of different frames one could potentially define over a point $x$ of a manifold $X$. Or, just as well, the multitude of fields of frames one could potentially define over a neighborhood $R\subseteq X$ of $x$. Such fields are what E.~Cartan first described in generality in \cite{weyl1938} as ``moving frames'' over $R$. The most commonly defined frame is the so-called coordinate or holonomic frame. Sometimes also called ``world'' or ``spacetime'' frame for reasons that will become clear in a moment. At $x$, it is defined as a particular choice of ordered basis for $T_x X$ -- the tangent space of $X$ at $x$. Such choice consists of derivations\footnote{Derivations at a point $x\in X$ are linear maps $D_x:C^{\infty}\left(X\right)\rightarrow \mathbb{R}$ acting on smooth functions on $X$ and satisfying $D_x \left(f\circ g\right)=D_xf g(x) + f(x)D_x g$.} of the kind \begin{equation} \label{eq:holonomic-basis} \partial_\mu|_x \left[f\right] \equiv \frac{\partial}{\partial x^\mu} \left(f\circ\phi^{-1}\right)|_{\phi(x)}\;, \end{equation} acting on smooth functions $f: R \rightarrow \mathbb{R}$ on $R$. Here, $\phi:R\rightarrow A\subseteq \mathbb{R}^n$ is a local chart giving an ordered set of $n$ Euclidean labels, $\phi(x)=\left\{x^0,\cdots,x^{n-1}\right\}$, to each point $x$ in the region $R$ -- $x^\mu$ represents each of such labels. Moving on, we can now write down a holonomic frame at $x$ as the ordered set $\left\{\partial_0|_x, \cdots, \partial_{n-1}|_x\right\}$. When the context is sufficiently clear, we might refer to it simply by its elements $\partial_\mu|_x$ and \textit{vice-versa}. Moreover, a field of holonomic frames over $R$ is a map uniquely assigning to each $x\in R$ a frame $\partial_\mu|_x$. Such field, henceforth denoted as $\partial_\mu \left(x\right)$, is what Cartan would call a tangent (or holonomic, in our language) moving frame. Another very commonly defined frame is the so-called holonomic co-frame. By its name, one can guess that it is defined using functionals acting on $T_x X$ and spanning $T_x^{*} X$ -- the co-tangent space of $X$ at $x$. The natural choice is $dx^\mu|_x$, implictly defined by the relation $dx^\mu|_x \left(\partial_\nu|_x\right)=\tensor{\delta}{^\mu_\nu}$. Let the ordered basis $\left\{dx^0|_x,\cdots,dx^{n-1}|_x\right\}$ of $T^*_x X$ be one such co-frame at $x$, henceforth denoted by $dx^\mu|_x$. A field of holonomic co-frames over $R$ is denoted by $dx^\mu(x)$ and we call it a holonomic moving co-frame over $R$. Holonomic frames explictly use the concept of a local chart in their definition. As a result, they -- and, consequentially, their co-frames -- behave in a very unique way. Let us consider another chart giving different Euclidean labels to the same region $R$ in $X$. For instance, $\phi':R \rightarrow A'\subseteq\mathbb{R}^n$ such that $\phi'(x)=\{x^{0'},\cdots,x^{\left(n-1\right)'}\}$. The composition map $\phi'\circ \phi^{-1}: A \rightarrow A'$, also known as a transition function on $R$, represents for $x$ the change of labels $x^\mu \mapsto x^{\mu'}$. 
If such a change happens to be bijective and smooth $\forall\; x\in R$, then holonomic frames -- and their co-frames -- all over this region will suffer the action of an $n\times n$ invertible matrix. Namely, for moving frames,
\begin{equation}
\label{eq:holonomic-frame-transf-rule}
\partial_{\nu'}(x) = \tensor{J}{^\mu_{\nu'}}\left(x\right) \partial_\mu(x) \quad ; \quad \tensor{J}{^\mu_{\nu'}}\left(x\right) \equiv \partial_{\nu'} x^\mu \;,
\end{equation}
and, for their co-frames,
\begin{equation}
\label{eq:holonomic-coframe-transf-rule}
dx^{\nu'}(x) = \tensor{J}{^{\nu'}_\mu}\left(x\right)dx^{\mu}(x) \quad ; \quad \tensor{J}{^{\nu'}_\mu}\left(x\right) \equiv \partial_\mu x^{\nu'} \;,
\end{equation}
where $\tensor{J}{^{\nu'}_\mu}\left(x\right)$ is the Jacobian matrix of $\phi'\circ {\phi}^{-1}|_{\phi(x)}$ and $\tensor{J}{^\mu_{\nu'}}(x)$ is its inverse.

In summary, holonomic frames and their co-frames are sensitive to changes of local charts in $X$. More precisely, if one changes charts in a region $R$, then each $T_x R$ -- and, consequently, each $T^*_x R$ -- over it will suffer a corresponding change of basis. If this change occurs in an invertible and at least $C^1$ manner, then the corresponding change in each $T_x R$ can be seen as an action of the general linear group $GL\left(n,\mathbb{R}\right)$. In Physics, this intimate relationship between holonomic frames and the base manifold -- usually spacetime -- is the reason why they are also called ``world'' or ``spacetime'' frames.

In contrast, general frames do not rely on $\phi$ for their definition. After all, they just really need to be an ordered basis of some vector space. Thus, unlike \eqref{eq:holonomic-basis}, most frames are not sensitive to changes of charts in $X$. This majority is referred to as non-holonomic frames, and they are naturally present in physical theories with or without gravity. For instance, in the study of a quantized Dirac field over a fixed spacetime 4-manifold, a relevant non-holonomic moving frame is defined by a map giving to each event an ordered orthonormal basis in $\mathbb{C}^4$. Physically, each of these frames can be interpreted as a Stern-Gerlach experimental apparatus, able to measure the spin orientation of the fundamental excitations of the Dirac field -- particle and anti-particle -- at a specific point in space and time.

In the case of pure relativistic theories of gravity, which is the focus of this work, non-holonomic moving frames are brought to light by the geometric equivalence principle \cite{sardanashvily1983}. Consider the moving frame $\tau_a(x)$ on $R\subseteq X$ that uniquely associates to each $x\in R$ the ordered basis $\tau_a|_x$ spanning the vector space $V_x$. We want $V_x$ to carry a linear action of $SO\left(1,n-1\right)$ -- the isometry group of the $n$-dimensional Minkowski space, $M$. In other words, $\tau_a(x)$ over $R$ transforms according to
\begin{equation}
\label{eq:lorentz_frames_transf}
{\tau}_{a'}(x) = \tensor{\Lambda}{^b_{a'}}(x)\tau_b(x) \;,
\end{equation}
where $\tensor{\Lambda}{^b_{a'}}(x)$ is a matrix representation of $SO(1,n-1)$. In principle, equation \eqref{eq:lorentz_frames_transf} is not the result of any local change of chart on $X$. Thus, it is non-holonomic in nature. It, however, can be interpreted as holonomic in $M$. More clearly, if we forget its $x\in X$ dependence for a moment, we have the right to interpret it as the result of a change of global charts in $M$ that preserves the globally defined Minkowski metric $\eta$ there.
And there, $\tau_a$ as well as $\tau_{a'}$ are global (and constant) holonomic moving frames, orthogonal with respect to (w.r.t.) $\eta$. This is exactly the kind of moving frame in which Special Relativity (SR) is formulated. From this point of view, the equivalence principle is fulfilled in a general geometry by defining a $\tau_a(x)$ on every possible $R\subseteq X$. This effectively covers all of $X$ with frames in which Lorentz invariants can be defined. Physically, they carry exactly the same meaning as in SR: a force-free clock and $n-1$ linearly independent rods at each point in spacetime. One can go ahead and quickly define the co-frame $\tau^a|_x$ of $\tau_a|_x$ as the functionals acting on $V_x$ and spanning $V_x^*$ such that $\tau^a|_x \left(\tau_b|_x\right)=\tensor{\delta}{^a_b}$. It transforms non-holonomically according to \begin{equation} \label{eq:lorentz_coframes_transf} {\tau}^{a'}(x) = \tensor{\Lambda}{^{a'}_b}(x)\tau^b(x) \;, \end{equation} where $\tensor{\Lambda}{^b_{a'}}(x)$ is the inverse matrix of $\tensor{\Lambda}{^{a'}_b}(x)$. We finally conclude this section with two remarks: i) the above discussion, of course, is not at all unfamiliar. Equations \eqref{eq:holonomic-frame-transf-rule} and \eqref{eq:holonomic-coframe-transf-rule} are nothing but the transformation rules of covariant and contravariant coordinate vectors, respectively, exhaustively discussed in many, if not all, introductory books on GR and differential geometry. We chose to repeat it here, albeit briefly and perhaps in a more modern fashion, in order to emphasize certain aspects of holonomic frames that we deem important to what is to come. Equations \eqref{eq:lorentz_frames_transf} and \eqref{eq:lorentz_coframes_transf} should also look familiar since they are quite similar to the transformation laws that the 1-form vielbein $e^a(x)\equiv \tensor{e}{^a_\mu}\left(x\right)dx^\mu$ and its inverse field obey. However, the relation between $\tau^a(x)$ and $e^a(x)$ is a bit more subtle than just an equality, which leads us to the second remark; ii) the structures just described are those of vector bundles over $X$ \cite{trautman1970}. We will go into more detail about this in Section \ref{sec:geometric_picture}. But, for the moment, we will blindly use the fact that the non-holonomic group $SO(1,n-1)$ can be enlarged to a non-holonomic $GL(n,\mathbb{R})$ without compromising any aspect of the discussion above, including the validity of the equivalence principle. This allows us to extend our discussion in Section \ref{sec:gravities} to very general theories of gravity -- not just GR or its torsional extensions. To emphasize this change, capital Latin letters represent non-holonomic $GL(n,\mathbb{R})$ indices while Greek letters are reserved for holonomic $GL(n,\mathbb{R})$ ones. \section{Gravity in holonomic \textit{versus} non-holonomic frames} \label{sec:gravities} In holonomic moving frames, a fundamental ingredient of the Einstein-Palatini approach is the understanding that a metric tensor field $g(x)$ and an affine connection $\nabla$ are \textit{a priori} two logically distinct concepts over a spacetime region $R\subseteq X$. Physically, $g(x)$ introduces how a set $\partial_\mu(x)$ of observers on $R$ can perform ``dot product'' measurements on pairs of vector fields. Indeed, $g\left[\partial_\mu,\partial_\nu\right] \left(x\right)\equiv g_{\mu\nu} \left(x\right)$ is a smooth function on $R$ associating a set of $n\left(n+1\right)/2$ real numbers to each $x\in R$.
Such numbers are interpreted as ``sizes'' ($\mu=\nu$) and ``angles'' ($\mu\neq \nu$) between $\partial_\mu$ and $\partial_\nu$\footnote{From now on, whenever the context is sufficiently clear, the $x$ dependence of fields will be omitted.}. On the other hand, $\nabla$ introduces a recipe for how vector fields can be differentiated along others. Indeed, $dx^\alpha\left(\nabla \left[\partial_\mu,\partial_\beta\right]\right)\equiv\tensor{\Gamma}{^\alpha_{\beta\mu}}$ is another smooth function on $R$ associating, in general, a set of $n^3$ real numbers to each $x\in R$. Such numbers are interpreted as the infinitesimal variation of $\partial_\beta$ along $\partial_\mu$ as measured by $dx^\alpha$ at each $x$. It is, however, true that there is a canonical way to collapse affine concepts into metrical ones. For instance, to impose that the only way $\partial_\beta$ can vary along $\partial_\mu$ is by an infinitesimal ``rotation'', \textit{i.e.}, a specific first-order change in $g_{\beta\mu}$. Such a change is encoded in what are known as the Christoffel symbols. A detailed analysis of the irreducible decomposition of $\tensor{\Gamma}{^\alpha_{\beta\mu}}$ reveals that this is equivalent to forbidding $\partial_\beta$ from picking up any infinitesimal shear, dilatation and/or displacement variations along $\partial_\mu$. Or, equivalently, that $\tensor{\Gamma}{^\alpha_{\beta\mu}}$ satisfies the following constraints: \begin{subequations} \label{eq:vanishing_torsion_nonmetricity} \begin{align} \tensor{T}{^\alpha_{\beta\mu}} &\equiv \tensor{\Gamma}{^\alpha_{\beta\mu}}-\tensor{\Gamma}{^\alpha_{\mu\beta}} = 0\label{eq:vanishing_torsion} \;, \\ \tensor{Q}{_{\alpha\beta\mu}} &\equiv \partial_\alpha g_{\beta\mu} -\tensor{\Gamma}{^\nu_{\beta\alpha}}g_{\nu\mu}-\tensor{\Gamma}{^\nu_{\mu\alpha}}g_{\beta\nu} = 0 \label{eq:vanishing_nonmetricity}\;, \end{align} \end{subequations} where $\tensor{T}{^\alpha_{\beta\mu}}$ and $Q_{\alpha\beta\mu}$ are the so-called torsion and non-metricity tensor fields, respectively. As one can see, this is indeed a very particular and constrained geometry, first formulated by Bernhard Riemann in 1854\footnote{This work was never published by Riemann himself. He first laid out the foundational aspects of what is now called a Riemannian $n$-manifold in a lecture, \textit{On the hypotheses that lie at the foundations of geometry}, at G\"{o}ttingen University, as part of the habilitation process for him to become a \textit{Privatdozent} (lecturer).}. On the other hand, other geometries such as Weitzenb\"ock ($R=0, \; T\neq 0, \; Q=0$), Weyl ($R=0, \; T=0, \; Q\neq 0$), Riemann-Cartan ($R\neq 0, T\neq 0$, $Q=0$) or metric-affine ($R\neq 0, T\neq 0$, $Q\neq 0$) are, at this moment, still as compatible with observational data as the Riemannian hypothesis \cite{hehl1974,hehl1976a,trautman2006,hehl2012,hehl1995,gronwald1995,gronwald1997,jimenez2019,golovnev2022,iosifidis2020,cantata2021,ferreira2022}. Thus, in favor of generality and advocating the philosophy of pursuing the minimum number of \textit{ad hoc} assumptions about spacetime, we will consider $g_{\mu\nu}$ and $\tensor{\Gamma}{^\alpha_{\beta\mu}}$ as completely independent fields, each carrying part of the classical degrees of freedom of gravity (Cartan's philosophy) -- unless, of course, the dynamics, via the field equations, states otherwise.
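As a quick sanity check on the constraints \eqref{eq:vanishing_torsion_nonmetricity} -- a minimal \texttt{sympy} sketch of ours, with the round $2$-sphere purely as a sample metric -- one can build the Christoffel symbols from $g_{\mu\nu}$ alone and confirm that they satisfy both \eqref{eq:vanishing_torsion} and \eqref{eq:vanishing_nonmetricity}: \begin{verbatim}
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
n = 2
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])  # round 2-sphere metric g_{mu nu}
ginv = g.inv()

# Christoffel symbols Gamma^a_{b m}, built from g alone
Gamma = [[[sum(ginv[a, c]*(sp.diff(g[c, m], x[b]) + sp.diff(g[c, b], x[m])
               - sp.diff(g[b, m], x[c])) for c in range(n))/2
           for m in range(n)] for b in range(n)] for a in range(n)]

for a in range(n):
    for b in range(n):
        for m in range(n):
            T = Gamma[a][b][m] - Gamma[a][m][b]          # torsion
            Q = (sp.diff(g[b, m], x[a])                   # non-metricity
                 - sum(Gamma[c][b][a]*g[c, m] for c in range(n))
                 - sum(Gamma[c][m][a]*g[b, c] for c in range(n)))
            assert sp.simplify(T) == 0 and sp.simplify(Q) == 0
\end{verbatim} Any other metric would do; the point is that the Christoffel choice automatically collapses the affine degrees of freedom onto the metrical ones.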
Following the above reasoning, the curvature tensor field \begin{equation} \label{eq:curvaturetensor} \tensor{R}{^\alpha_{\beta\mu\nu}} = \partial_\mu\tensor{\Gamma}{^\alpha_{\beta\nu}}-\partial_\nu\tensor{\Gamma}{^\alpha_{\beta\mu}}+\tensor{\Gamma}{^\alpha_{\rho\mu}}\tensor{\Gamma}{^\rho_{\beta\nu}}-\tensor{\Gamma}{^\alpha_{\rho\nu}}\tensor{\Gamma}{^\rho_{\beta\mu}} \; \end{equation} should be considered as a function of $\tensor{\Gamma}{^\alpha_{\beta\mu}}$ and its derivatives alone -- not of $g_{\mu\nu}$ and its derivatives. This tensor differs from the Riemann curvature since, again, we are not assuming any particular relation between $\tensor{\Gamma}{^\alpha_{\beta\mu}}$ and the Christoffel symbols. The dynamics of $n$-dimensional EC theory is defined by the $n$-dimensional Einstein-Hilbert-Palatini (EHP) action, \begin{equation}\label{eq:holonomic-EP-action} S_{\text{EHP}}\left[g_{\mu\nu},\;\tensor{\Gamma}{^\alpha_{\beta\mu}}\right]=\int_{X} d^nx \sqrt{-g}\tensor{R}{^\alpha_{\mu\alpha\nu}}g^{\mu\nu}\;, \end{equation} where $g_{\mu\nu}$ corresponds to a Lorentzian metric, while $g$ is its determinant and $g^{\mu\nu}$ is its inverse. The field equations obtained from the functional variation w.r.t. $g_{\mu\nu}$ and $\tensor{\Gamma}{^\alpha_{\beta\mu}}$ are, respectively, \begin{subequations} \label{eq:holonomic-field-eqs} \begin{align} -\sqrt{-g}G^{\mu\nu} &= 0 \;, \label{eq:holonomic-einstein-like-field-eqs}\\ -\sqrt{-g}\left[\tensor{T}{^\mu_\alpha^\beta}-\tensor{Q}{_\alpha^{\beta\mu}} + \frac{1}{2} \left(g^{\beta\mu}\tensor{Q}{_{\alpha\nu}^\nu}+\tensor{\delta}{_\alpha^\mu}\tensor{Q}{^\beta_\nu^\nu}\right) \right] &= 0\;, \label{eq:holonomic-cartan-like-field-eqs} \end{align} \end{subequations} where $G_{\mu\nu} \equiv \tensor{R}{^\alpha_{\mu\alpha\nu}} - \frac{1}{2} \tensor{R}{^\alpha_{\beta\alpha\lambda}}g^{\beta\lambda}g_{\mu\nu}$ is the post-Riemannian Einstein tensor, which is asymmetric in $\mu\nu$. At first glance, the set of field equations \eqref{eq:holonomic-field-eqs} seems to deviate from $n$-dimensional GR. Nonetheless, their physical solutions are still exclusively Einstein $n$-manifolds. This is due to the fact that \eqref{eq:holonomic-EP-action} is explicitly invariant under local projective transformations \begin{subequations} \label{eq:r-symmetry} \begin{align} g_{\mu\nu} &\rightarrow g_{\mu\nu} \;, \\ \tensor{\Gamma}{^\alpha_{\beta\mu}} &\rightarrow \tensor{\Gamma}{^\alpha_{\beta\mu}}+\tensor{\delta}{^\alpha_\beta}U_\mu \;. \end{align} \end{subequations} This is the so-called $R^d$-symmetry, where $U_\mu$ is an arbitrary vector field \cite{dadhich2012}. One can show that choosing vanishing $\tensor{Q}{_{\alpha\beta\mu}}$ (and, consequently, $\tensor{T}{^\alpha_{\beta\mu}}$, via \eqref{eq:holonomic-cartan-like-field-eqs}) or vanishing $\tensor{T}{^\alpha_{\beta\mu}}$ (and, consequently, $\tensor{Q}{_{\alpha\beta\mu}}$, again, via \eqref{eq:holonomic-cartan-like-field-eqs}) equates to setting $U_\mu=0$. In other words, these choices are nothing but gauge choices for this projective symmetry. In this light, torsion and non-metricity are pure $R^d$-gauge quantities and the traditional EH action -- the action of GR -- is just an $R^d$-gauge fixed version of \eqref{eq:holonomic-EP-action}. This is true as long as we are decoupled from matter sources carrying non-vanishing hypermomentum currents. For instance, if spinorial matter is present, torsion couples to the spin density tensor and thus assumes an $R^d$-gauge invariant character.
This results in a space of solutions containing EC $n$-manifolds, and \eqref{eq:holonomic-EP-action} is an $R^d$-gauge unfixed version of EC theory, not GR. The $n$-dimensional ECSK theory, which is formulated in non-holonomic frames, has its dynamics defined by the so-called Vielbein-Einstein-Palatini (VEP) action, \begin{equation} \label{eq:vep-action} S_{\text{VEP}} \left[\tensor{e}{^A_\mu}, \tensor{A}{^A_{B\mu}}\right] = \int_X d^nx e\tensor{F}{^A_{B\mu\nu}}\tensor{e}{^\mu_A}\tensor{e}{^{B\nu}} \;, \end{equation} where $\tensor{e}{^A_\mu}$ is the vielbein field, $e$ is its determinant and $\tensor{e}{^\mu_A}$ is its inverse satisfying \begin{subequations} \begin{align} \tensor{e}{^A_\mu}\tensor{e}{^\mu_B} &= \tensor{\delta}{^A_B}\;, \label{eq:inversepullbackvielbein} \\ \tensor{e}{^A_\mu}\tensor{e}{^\nu_A} &= \tensor{\delta}{^\nu_\mu} \label{eq:inversevielbein}\;. \end{align} \end{subequations} Under a non-holonomic $GL\left(n,\mathbb{R}\right)$ action, they transform as \begin{subequations} \begin{align} \label{eq:active_transf_vielbein} \tensor{e}{^{A'}_\mu} &= \tensor{\Lambda}{^{A'}_{B}} \tensor{e}{^B_\mu} \;, \\ \tensor{e}{^\mu_{A'}} &= \tensor{\Lambda}{^{B}_{A'}} \tensor{e}{^\mu_B} \;, \end{align} \end{subequations} while under a holonomic one, they do as \begin{subequations} \begin{align} \label{eq:passive_transf_vielbein} \tensor{e}{^A_{\nu'}} &= \tensor{J}{^\mu_{\nu'}} \tensor{e}{^A_\mu} \;, \\ \tensor{e}{^{\nu'}_A} &= \tensor{J}{^{\nu'}_{\mu}} \tensor{e}{^\mu_A} \;. \end{align} \end{subequations} Furthermore, $\tensor{A}{^A_{B\mu}}$ is a $GL\left(n,\mathbb{R}\right)$ connection and its curvature, \begin{equation} \label{eq:nonholcurvature} \tensor{F}{^A_{B\mu\nu}}=\partial_\mu\tensor{A}{^A_{B\nu}}-\partial_\nu\tensor{A}{^A_{B\mu}}+\tensor{A}{^A_{C\mu}}\tensor{A}{^C_{B\nu}}-\tensor{A}{^A_{C\nu}}\tensor{A}{^C_{B\mu}} \;, \end{equation} is, unmistakably, a function of the connection $\tensor{A}{^A_{B\mu}}$ and its derivatives alone. It might be confusing to the reader that the above fields are being called non-holonomic when they clearly carry holonomic indices just as well. As we will clarify in Section \ref{sec:geometric_picture}, the reason is that these fields are local projections on $X$ of truly non-holonomic quantities living on internal bundles above it. Hence their dual personality. Due to that, $g_{\mu\nu}$ and its inverse are still present here in order to ``raise'' and ``lower'' holonomic indices. Nonetheless, we now also have an extra metric, $g_{AB}$, and its inverse $g^{AB}$. And, as any other metric, they do the job of ``raising'' and ``lowering'' indices -- in this case, non-holonomic ones, of course\footnote{The relation between these two metrics is given by equation \eqref{eq:metrics} and its geometrical interpretation is given in Section \ref{sec:geometric_picture}.}. The field equations obtained from the functional variation w.r.t.
$\tensor{e}{^A_\mu}$ and $\tensor{A}{^A_{B\mu}}$ are, respectively, \begin{subequations} \label{eq:nonholonomic-field-eqs} \begin{align} e\delta^{\mu\nu\lambda}_{\alpha\beta\gamma}\tensor{e}{^\alpha_A}\tensor{e}{^\beta_B}\tensor{e}{^\gamma_C}\tensor{F}{^{BC}_{\nu\lambda}} &= 0 \;, \label{eq:nonholonomic-einstein-like-field-eqs} \\ e \left(\delta^{\mu\nu\lambda}_{\alpha\beta\gamma}\tensor{e}{^\alpha_A}\tensor{e}{^{B\beta}}\tensor{e}{^\gamma_C}\tensor{T}{^C_{\nu\lambda}}-2\delta^{\mu\nu}_{\alpha\beta}\tensor{e}{^\alpha_A}\tensor{e}{^\beta_C}\tensor{Q}{_\nu^{BC}}\right) &= 0 \;, \label{eq:nonholonomic-cartan-like-field-eqs} \end{align} \end{subequations} where some useful definitions were employed\footnote{\setlength{\abovedisplayskip}{-6pt} \begin{align*} \delta^{\mu_1 \cdots \mu_p}_{\nu_1\cdots \nu_p} &\equiv \frac{1}{\left(n-p\right)!}\epsilon^{\mu_1 \cdots \mu_p \lambda_{p+1}\cdots\lambda_{n}}\epsilon_{\nu_1\cdots\nu_p\lambda_{p+1}\cdots\lambda_n}\;,\\ \tensor{T}{^A_{\mu\nu}} &\equiv \partial_\mu \tensor{e}{^A_\nu}- \partial_\nu \tensor{e}{^A_\mu} + \tensor{A}{^A_{B\mu}}\tensor{e}{^B_\nu}-\tensor{A}{^A_{B\nu}}\tensor{e}{^B_\mu}\;, \\ \tensor{Q}{_\mu^{AB}} &\equiv \tensor{A}{^{AB}_\mu}+\tensor{A}{^{BA}_\mu} \;. \end{align*}}. In particular, $\delta^{\mu_1\cdots \mu_p}_{\nu_1\cdots\nu_p}$ is the generalized Kronecker delta, $\epsilon_{\mu_1\cdots \mu_n}$ is the permutation symbol, $\tensor{T}{^A_{\mu\nu}}$ is the non-holonomic torsion and $\tensor{Q}{_\mu^{AB}}$ is the non-holonomic non-metricity associated to the connection $\tensor{A}{^A_{B\mu}}$. Notice how, in non-holonomic frames, it is the vanishing of non-metricity, rather than torsion, that is related to symmetries of the connection. Whenever $\tensor{Q}{_\mu^{AB}}=0$, then $\tensor{A}{^{AB}_\mu}=-\tensor{A}{^{BA}_\mu}$. From the symmetry group perspective, this is equivalent to a reduction of $GL(n,\mathbb{R})$ down to one of its orthogonal subgroups, in our case, $SO\left(1,n-1\right)$. This is, of course, the reason why we blindly did the opposite at the end of Section \ref{sec:holonomic_and_non-holonomic_frames}, in order to be in a framework that allows more general gravities. We will expand on that in Section \ref{sec:geometric_picture}. The VEP action \eqref{eq:vep-action} also enjoys invariance under local projective transformations of the connection, namely, \begin{subequations} \label{eq:nonholonomic-projective-transformation} \begin{align} \tensor{e}{^A_\mu} &\rightarrow \tensor{e}{^A_\mu} \;, \label{eq:projective-transf-vielbein} \\ \tensor{A}{^A_{B\mu}} &\rightarrow \tensor{A}{^A_{B\mu}}+\tensor{\delta}{^A_B}V_\mu \;, \label{eq:projective-transf-connection} \end{align} \end{subequations} where $V_\mu$ is an arbitrary vector field. The previous scenario then repeats itself in a non-holonomic way. $\tensor{Q}{_\mu^{AB}}$ is a pure $R^d$-gauge quantity proportional to $V_\mu$ \cite{dadhich2012}. In the $R^d$-gauge choice $V_\mu=0$, $\tensor{Q}{_\mu^{AB}}$ and, consequently, $\tensor{T}{^A_{\mu\nu}}$, via \eqref{eq:nonholonomic-cartan-like-field-eqs}, vanish. Again, physical solutions are exclusively Einstein $n$-manifolds as long as there are no couplings to matter sources carrying hypermomentum currents. If gravity is indeed coupled to spinorial matter, non-metricity will remain pure gauge but torsion will not. Coupled to the spin density current of matter, it will assume an $R^d$-gauge invariant role and, in such a scenario, the solutions will be Riemann-Cartan $n$-manifolds.
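This projective invariance can be verified in one line -- a short check of ours, using only the inverse vielbein relations. Since the shift in \eqref{eq:projective-transf-connection} is proportional to the identity, it drops out of the commutator terms of \eqref{eq:nonholcurvature}, leaving \begin{equation*} \tensor{F}{^A_{B\mu\nu}} \rightarrow \tensor{F}{^A_{B\mu\nu}} + \tensor{\delta}{^A_B}\left(\partial_\mu V_\nu - \partial_\nu V_\mu\right) \;. \end{equation*} Contracting as in \eqref{eq:vep-action} and using $\tensor{e}{^\mu_B}\tensor{e}{^{B\nu}}=g^{\mu\nu}$, the extra piece is $g^{\mu\nu}\left(\partial_\mu V_\nu - \partial_\nu V_\mu\right)$ -- the trace of an anti-symmetric tensor against a symmetric one -- which vanishes identically. Hence $S_{\text{VEP}}$ is insensitive to $V_\mu$.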
From all the facts stated above, it should not come as a surprise that ECSK theory is uniquely connected to EC theory in holonomic frames. The connection is achieved, at the level of field equations and action functionals, by the set of field transformations \begin{subequations} \label{eq:field-transformation} \begin{align} g_{\mu\nu} &= \tensor{e}{^A_\mu}\tensor{e}{^B_\nu}g_{AB} \;, \label{eq:metrics} \\ \tensor{\Gamma}{^\alpha_{\mu\nu}} &= \tensor{e}{^\alpha_A}\tensor{A}{^A_{B\mu}}\tensor{e}{^B_\nu}-\tensor{e}{^\alpha_A}\partial_\mu\tensor{e}{^A_\nu}\;. \label{eq:connections} \end{align} \end{subequations} Three points are important to emphasize about them: i) the Jacobian matrix of such transformations is clearly not trivial; ii) it is not even a square matrix; iii) these transformations assume that $\tensor{e}{^\alpha_A}$ exists. Point i) is of major importance in the study of this (in)equivalence within a path integral quantization of both theories, as it indicates the appearance of non-trivial insertions as one transforms the functional measure from $\mathcal{D}e\mathcal{D}A$ to $\mathcal{D}g\mathcal{D}\Gamma$. Point ii) reflects the one-to-many nature of the ``inverse'' transformations. Indeed, while a non-holonomic description has a single holonomic counterpart, a holonomic description has infinitely many non-holonomic versions. This is, of course, due to the gauge-theoretical nature of the latter description. All these non-holonomic versions of a single holonomic description are $GL\left(n,\mathbb{R}\right)$ gauge transformations of each other -- thus defining one single physical theory. Finally, point iii) can be relaxed at the expense of a fixed spacetime topology. As we will discuss in more detail in Section \ref{sec:discussions}, this generally breaks any otherwise established equivalence between the holonomic and non-holonomic descriptions. The above facts are well established and can be summarized in the following diagram: \[ \begin{tikzcd}[row sep=huge, column sep=huge] \text{\eqref{eq:holonomic-EP-action}} \arrow[d, Rightarrow, "\delta S = 0" description] \arrow[r, Leftarrow, "\eqref{eq:field-transformation}"] & \text{\eqref{eq:vep-action}} \arrow[d, Rightarrow, "\delta S =0" description] \\ \text{\eqref{eq:holonomic-field-eqs}} \arrow[r, Leftarrow, "\eqref{eq:field-transformation}"] & \text{\eqref{eq:nonholonomic-field-eqs}} \end{tikzcd} \] The key question we would like to address now is how general this known equivalence really is, assuming an invertible vielbein. Can it be extended to dynamics other than EC \textit{versus} ECSK? For instance, does it hold for generic metric-affine theories of gravity, where the Lagrangian is a generic function of the fields and their derivatives up to arbitrary, though finite, order? In the next section we answer this question at the classical level.
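Before doing so, point ii) can be made tangible with a short computer check -- ours, using an arbitrary sample zweibein in $n=2$, not a result from the references -- showing that \eqref{eq:metrics} returns one and the same $g_{\mu\nu}$ for an entire $SO(1,1)$ orbit of vielbeins: \begin{verbatim}
import sympy as sp

r, xi = sp.symbols('r xi', positive=True)
eta = sp.diag(-1, 1)                               # g_AB = eta_AB in n = 2
e = sp.diag(sp.sqrt(1 - 1/r), 1/sp.sqrt(1 - 1/r))  # sample zweibein e^A_mu
boost = sp.Matrix([[sp.cosh(xi), sp.sinh(xi)],     # Lambda^A'_B in SO(1,1)
                   [sp.sinh(xi), sp.cosh(xi)]])

g  = sp.simplify(e.T * eta * e)                    # eq. (metrics)
gb = sp.simplify((boost*e).T * eta * (boost*e))    # boosted zweibein
assert sp.simplify(g - gb) == sp.zeros(2, 2)       # same holonomic metric
\end{verbatim} Since $\Lambda^T\eta\Lambda=\eta$ holds pointwise, a local boost parameter $\xi(x)$ would do just as well.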
\section{The on-shell (in)equivalence} \label{sec:on-shell_equivalence} In order to answer the question raised above, it is convenient to abstract the situation to that of a field theory for the multi-field $\Phi^I$, which single-handedly represents an arbitrary (but finite) number of scalar, vector, 2-tensor, 3-tensor, $\cdots$, $q$-tensor fields\footnote{Here upper-case Latin letters do not simply range from $0$ to $n-1$, as they label the fields within the $\Phi$ multiplet.} \begin{equation} \Phi^I \in \left\{\phi^1,\cdots,\phi^{q_0},\phi_\mu^1, \cdots,\phi^{q_1}_\mu,\phi^1_{\mu\nu},\cdots, \phi^{q_2}_{\mu\nu}, \cdots, \phi^1_{\mu_1\cdots \mu_q},\cdots,\phi^{q_q}_{\mu_1\cdots \mu_q} \right\} \;. \end{equation} Its dynamics is encoded in the very general action functional containing up to the $k$-th order derivative of $\Phi^I$, \begin{equation} S\left[\Phi^I\right] = \int d^nx \mathcal{L} \left(\Phi^I, \partial_\mu \Phi^I , \cdots ,\partial_{\mu_1}\cdots\partial_{\mu_k} \Phi^I \right)\;. \end{equation} Hamilton's principle states that classical configurations of $\Phi^I$ are the extremals of such a functional and, thus, solutions to the partial differential equation \begin{equation} \label{eq:el-equations} \delta^{(k)}_{\Phi^I}\mathcal{L}=0 \;, \end{equation} known as the Euler-Lagrange equation. The notation employed for the Lagrange operator reads \begin{equation} \delta^{(k)}_{\Phi^I} \equiv \sum_{j=0}^{k} \sum_{\mu_1\leq\cdots\leq\mu_j} (-1)^j \partial_{\mu_1}\cdots\partial_{\mu_j} \left[\frac{\partial\phantom{\left(\partial_{\mu_1}\cdots\partial_{\mu_j} \Phi^I\right)}}{\partial \left(\partial_{\mu_1}\cdots\partial_{\mu_j} \Phi^I\right)}\right] \;, \end{equation} where the second sum is over the ordered set $\left\{ \left(\mu_1,\cdots,\mu_j\right) \; ; \; \mu_1 \leq \cdots \leq \mu_j \right\}$. Whenever this set is empty, which is the case for $j=0$, the operator $\partial_{\mu_1}\cdots\partial_{\mu_j}$ is the identity and we end up with just a $\partial/\partial \Phi^I$ contribution. It is a common exercise in field theory to consider the field transformations \begin{equation} \label{eq:standard-field-transf} \Phi'^J=\Phi'^J \left(\Phi^I\right) \;, \end{equation} whose Jacobian matrix \begin{equation} \tensor{J}{_I^J} \equiv \frac{\partial \Phi'^J}{\partial \Phi^I} \end{equation} is square and non-singular. As a result, the Euler-Lagrange field equations \eqref{eq:el-equations} transform covariantly, \begin{equation} \label{eq:el-equations-covariant-transf} \tensor{J}{_I^J}\delta^{(k)}_{\Phi'^J}\mathcal{L}'\left(\Phi'^J, \partial_\mu \Phi'^J, \cdots , \partial_{\mu_1}\cdots\partial_{\mu_k} \Phi'^J\right) = 0 \;. \end{equation} On the other hand, it is much less standard to consider field transformations of the form \begin{equation} \label{eq:non-standard-field-transf} \Phi'^J = \Phi'^J \left(\Phi^I, \partial_\mu \Phi^I\right) \;. \end{equation} Their non-vanishing dependence on the first-order derivatives of the fields, \begin{equation} \label{eq:Q} \tensor{K}{_I^{J\mu}}\equiv \frac{\partial \Phi'^J}{\partial \left(\partial_\mu \Phi^I\right)} \;, \end{equation} results in the Jacobian matrix \begin{equation} \label{eq:non-standard-jacobian} \mathbb{J}^\mu = \begin{bmatrix} \tensor{J}{_I^J}\tensor{\delta}{_0^\mu} \\ \tensor{K}{_I^{J\mu}} \end{bmatrix} \; \end{equation} having twice as many rows as columns. This is a telltale sign of a singular, non-invertible transformation.
More precisely, if one were to invert \eqref{eq:non-standard-field-transf}, one would quickly realize that the system of equations that needs to be solved is underdetermined, having infinitely many solutions. Thus, this is precisely the abstraction of the concrete case presented in Section \ref{sec:gravities}, equation \eqref{eq:field-transformation}. Under \eqref{eq:non-standard-field-transf}, the Euler-Lagrange field equations \eqref{eq:el-equations} transform according to \begin{equation} \label{eq:non-standard-field-eq-transf} \tensor{J}{_I^J}\delta^{(k)}_{\Phi'^J}\mathcal{L}'=\partial_\mu \left[\tensor{K}{_I^{J\mu}}\delta^{(k)}_{\Phi'^J}\mathcal{L}'\right] \;, \end{equation} which is clearly a non-covariant behavior. In order to regain some sense of covariance in \eqref{eq:non-standard-field-eq-transf}, its right-hand side has to vanish. In other words, $\tensor{K}{_I^{J\mu}}\delta^{\left(k\right)}_{\Phi'^J}\mathcal{L}'$ has to be a spacetime constant. Since $\tensor{K}{_I^{J\mu}}$ and $\delta^{(k)}_{\Phi'^J}\mathcal{L}'$ are independent quantities, this can only happen if: i) $\tensor{K}{_I^{J\mu}}$ vanishes for arbitrary $\delta^{(k)}_{\Phi'^I}\mathcal{L}'$; ii) $\delta^{(k)}_{\Phi'^I}\mathcal{L}'$ vanishes for arbitrary $\tensor{K}{_I^{J\mu}}$ or; iii) $\tensor{K}{_I^{J\mu}}$ and $\delta^{(k)}_{\Phi'^J}\mathcal{L}'$ are both constants. Case i) encompasses the canonical transformations \eqref{eq:standard-field-transf} but, more generally, singles out a special set of field transformations while the dynamics remains arbitrary. This is exactly the scenario we are interested in. In contrast, cases ii) and iii) imply very restricted dynamics, which we will not address in this work. Finally, let us apply our results to the concrete case of gravity again. Take $\Phi^I \in \left\{\tensor{e}{^A_\mu}, \tensor{A}{^A_{B\mu}}\right\}$, ${\Phi'}^I \in \left\{g_{\mu\nu}, \tensor{\Gamma}{^\alpha_{\beta\mu}} \right\}$ and the field transformation of the kind \eqref{eq:non-standard-field-transf} to be \eqref{eq:field-transformation}. This yields the Jacobian matrices \begin{equation} \label{eq:j-jacobian-for-gravity} \tensor{J}{_I^J} = \begin{bmatrix} \frac{\partial g_{\mu\nu}}{\partial \tensor{e}{^A_\gamma}} & \frac{\partial \tensor{\Gamma}{^\alpha_{\beta\mu}}}{\partial \tensor{e}{^A_\gamma}} \\ 0 & \frac{\partial \tensor{\Gamma}{^\alpha_{\beta\mu}}}{\partial \tensor{A}{^A_{B\gamma}}} \end{bmatrix} \;, \end{equation} and \begin{equation} \label{eq:k-jacobian-for-gravity} \tensor{K}{_I^{J\lambda}} = \begin{bmatrix} 0 & \frac{\partial \tensor{\Gamma}{^\alpha_{\beta\mu}}}{\partial \left(\partial_\lambda\tensor{e}{^A_\gamma}\right)} \\ 0 & 0 \end{bmatrix} \;.
\end{equation} Thus, the transformed Euler-Lagrange field equations \eqref{eq:non-standard-field-eq-transf} reduce to \begin{subequations} \label{eq:non-standard-field-eq-transf-for-gravity} \begin{align} \left(\frac{\partial g_{\mu\nu}}{\partial \tensor{e}{^A_\gamma}} \delta^{(k)}_{g_{\mu\nu}} + \frac{\partial \tensor{\Gamma}{^\alpha_{\beta\mu}}}{\partial \tensor{e}{^A_\gamma}} \delta^{(k)}_{\tensor{\Gamma}{^\alpha_{\beta\mu}}}\right) \mathcal{L}' &= \partial_\lambda \left(\frac{\partial \tensor{\Gamma}{^\alpha_{\beta\mu}}}{\partial \left(\partial_\lambda \tensor{e}{^A_\gamma}\right)} \delta^{(k)}_{\tensor{\Gamma}{^\alpha_{\beta\mu}}}\mathcal{L}' \right) \;, \label{eq:transformed-einstein-like-field-eq} \\ \frac{\partial \tensor{\Gamma}{^\alpha_{\beta\mu}}}{\partial \tensor{A}{^A_{B\gamma}}} \delta^{(k)}_{\tensor{\Gamma}{^\alpha_{\beta\mu}}} \mathcal{L}' &= 0 \;. \label{eq:transformed-cartan-like-field} \end{align} \end{subequations} Individually, the transformed Cartan-like field equation \eqref{eq:transformed-cartan-like-field} does behave covariantly, due to the vanishing of $\partial g_{\mu\nu} / \partial \left(\partial_\lambda \tensor{A}{^A_{B\gamma}}\right)$ and $\partial \tensor{\Gamma}{^\alpha_{\beta\mu}} / \partial \left(\partial_\lambda \tensor{A}{^A_{B\gamma}}\right)$, while the transformed Einstein-like field equation \eqref{eq:transformed-einstein-like-field-eq} does not, due to a similar but opposite reason: the non-vanishing of $\partial \tensor{\Gamma}{^\alpha_{\beta\mu}} / \partial \left( \partial_\lambda \tensor{e}{^A_\gamma}\right)$. Nonetheless, equations \eqref{eq:transformed-einstein-like-field-eq} and \eqref{eq:transformed-cartan-like-field} form a system and, as such, must be solved simultaneously. From \eqref{eq:transformed-cartan-like-field}, it is clear that $\delta^{(k)}_{\tensor{\Gamma}{^\alpha_{\beta\mu}}}\mathcal{L}'=0$, since $\partial \tensor{\Gamma}{^\alpha_{\beta\mu}} / \partial \tensor{A}{^A_{B\gamma}}$ is invertible for an invertible vielbein. Thus, $\delta^{(k)}_{\tensor{\Gamma}{^\alpha_{\beta\mu}}}\mathcal{L}'=0$ also holds in \eqref{eq:transformed-einstein-like-field-eq}, thereby killing the undesirable terms. In summary, the answer to the question raised in Section \ref{sec:gravities} is affirmative. If we start with \begin{subequations} \begin{align} \delta^{(k)}_{\tensor{e}{^A_\gamma}}\mathcal{L} &= 0 \;, \\ \delta^{(k)}_{\tensor{A}{^A_{B\gamma}}}\mathcal{L} &= 0 \;, \end{align} \end{subequations} for whatever chosen $\mathcal{L}$ and apply the field transformations \eqref{eq:field-transformation}, we end up with the covariantly transformed field equations \begin{subequations} \label{eq:non-standard-field-eq-transf-for-gravity-simplified} \begin{align} \frac{\partial g_{\mu\nu}}{\partial \tensor{e}{^A_\gamma}} \delta^{(k)}_{g_{\mu\nu}} \mathcal{L}' &= 0 \;, \\ \frac{\partial \tensor{\Gamma}{^\alpha_{\beta\mu}}}{\partial \tensor{A}{^A_{B\gamma}}} \delta^{(k)}_{\tensor{\Gamma}{^\alpha_{\beta\mu}}} \mathcal{L}' &= 0 \;. \end{align} \end{subequations} The conclusion drawn from this is that gravity formulated in holonomic and non-holonomic frames is on-shell equivalent in a way that is independent of the particular metric-affine dynamics chosen. This will remain true as long as the field transformations \eqref{eq:field-transformation} hold. Clearly, there is something special about these transformations that needs to be addressed.
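Before doing so, the transformation law \eqref{eq:non-standard-field-eq-transf} itself can be checked in a disposable toy model -- a sketch of ours, unrelated to gravity -- with two mechanical fields and one derivative-dependent redefinition: \begin{verbatim}
import sympy as sp

t = sp.symbols('t')
p1 = sp.Function('p1')(t)
p2 = sp.Function('p2')(t)

# derivative-dependent redefinition, mimicking the K-type dependence:
P1 = p1
P2 = p2 + p1.diff(t)

# L'(Phi') = (1/2)(dPhi1)^2 - (1/2)(Phi2)^2, written in the old variables
L = sp.Rational(1, 2)*P1.diff(t)**2 - sp.Rational(1, 2)*P2**2

def EL(L, q):
    # first-order (k = 1) Euler-Lagrange operator acting on L
    return sp.simplify(L.diff(q) - (L.diff(q.diff(t))).diff(t))

# Euler-Lagrange expressions in the primed variables, written by hand:
E1 = -P1.diff(t, 2)      # delta_{Phi1} L' = -Phi1''
E2 = -P2                 # delta_{Phi2} L' = -Phi2

# the p1 equation picks up the total-derivative term (J_1^1 = 1, K_1^2 = 1):
assert sp.simplify(EL(L, p1) - (E1 - E2.diff(t))) == 0
# the p2 equation transforms covariantly, since its K-block vanishes:
assert sp.simplify(EL(L, p2) - E2) == 0
\end{verbatim} The total-derivative term on the right-hand side is exactly what drops out once the primed field equations are imposed as a system, mirroring the gravitational case above.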
\section{The geometrical setup} \label{sec:geometric_picture} It is time to investigate the fiber bundle structures over $X$ which were implicitly used in both the holonomic and non-holonomic frame descriptions of gravity. Let us start with the tangent one. Consider the set $TX \equiv \bigsqcup_{x\in X} T_x X$ and the map $\pi_{TX}: TX \rightarrow X$. $TX$ is called the total space\footnote{Sometimes we might use the total space to refer to the whole bundle structure.} and inherits a $2n$-manifold structure from $X$; $\pi_{TX}$ is called the projection map and it is smooth and surjective. The typical fiber ${\pi_{TX}}^{-1}\left(x\right)$ of this bundle is isomorphic to $T_x X$ and, as such, carries a $GL\left(n,\mathbb{R}\right)$ representation reminiscent of smooth changes of coordinates, as argued in Section \ref{sec:holonomic_and_non-holonomic_frames}. This latter statement is the archetypal example of a categorical lift. Morphisms in the category of smooth manifolds induce morphisms in the category of smooth vector bundles via a functor $\mathcal{F}$ \cite{tu2011}. This can be neatly captured by the diagram \begin{equation} \label{dia:categorical_lift} \begin{tikzcd} \mathcal{F}X \arrow[r,"\mathcal{F}f"] \arrow[d,"\pi_{\mathcal{F}X}" left] & \mathcal{F}X' \arrow[d, "\pi_{\mathcal{F}X'}" ] \\ X \arrow[r, "f"] & X' \end{tikzcd} \tag{diagram 1} \;, \end{equation} such that $f\circ\pi_{\mathcal{F}X} = \pi_{\mathcal{F}X'}\circ \mathcal{F}f$. The bundle morphism $\mathcal{F}f$ is said to be the functorial lift of the morphism $f$. Bundles above $X$ constructed in this functorial way are said to be natural bundles \cite{kolar2000}. The tangent one is the natural bundle obtained by considering $X=X'$, $f$ as local automorphisms\footnote{Defined as maps from $X$ to $X$ which are necessarily diffeomorphisms only on a chart, \textit{i.e.}, $f$ is not necessarily a diffeomorphism but $f|_R: R\rightarrow f(R)$ is.} and $\mathcal{F}$ as the tangent (pushforward) map $T$. On a chart, this translates to transition functions $x^{\nu'}\left(x^\mu\right)$ at $x$ lifting to automorphisms on $T_xX$ represented by Jacobian matrices $\tensor{J}{^\mu_{\nu'}}$. Since the group of automorphisms of the typical fiber is isomorphic to the structure group of the bundle itself, the tangent bundle is constrained to have $GL\left(n,\mathbb{R}\right)$ as its structure group. In summary, one can say that $lAut\left(X\right)$ functorially lifts to $TX$ as $GL\left(n,\mathbb{R}\right)$. Other natural bundles on $X$ can be defined by only changing the typical fiber to another representation space of $GL\left(n,\mathbb{R}\right)$. For instance, the co-tangent bundle $T^*X$ is the one with typical fiber isomorphic to $T^*_xX$\footnote{Categorically, one just needs to consider $\mathcal{F}$ as the co-functor $T^*$ of $T$.} and, more generally, the $(r,s)$-tensor bundle $\mathcal{T}^s_r X$ is the one with typical fiber isomorphic to $T_xX^{\otimes^r}\otimes T^*_x X^{\otimes^s}$\footnote{Categorically, one needs to consider $\mathcal{F}$ as the tensor products of functors $\mathcal{T}^s_r\equiv \bigotimes^r T\otimes \bigotimes^s T^*$.}. A right inverse for $\pi_{TX}$ is the well-known concept of a tangent vector field on $X$. It is generally called a section of this bundle and defined as a map $\sigma_{TX}: R\subseteq X \rightarrow TX$ such that $\pi_{TX}\circ \sigma_{TX} = \mathds{1}_R$. There might be topological obstructions for the equality $R=X$ to hold for a nowhere vanishing $\sigma_{TX}$.
Indeed, most nowhere vanishing sections are local ($R\subset X$). Global ones ($R=X$) are only guaranteed to exist in trivial bundles, \textit{i.e.}, when $TX$ is globally diffeomorphic to $X\times \mathbb{R}^n$. The canonical example is given in the form of the hairy ball theorem: no nowhere vanishing tangent vector field globally exists on the $2$-sphere. Indeed, $TS^2$ is not diffeomorphic to $S^{2}\times \mathbb{R}^{2}$. The opposite is true for the $3$-sphere, $TS^{3}\cong S^{3}\times \mathbb{R}^{3}$, and that is one reason why it can accept an $SU(2)$ Lie group structure. Ultimately, such results are closely connected to the value of their Euler characteristic, $\chi \left(S^{2}\right) = 2$ and $\chi \left(S^{3}\right) = 0$. In such topologies, $\chi \left(X\right)$ acts as the obstruction for the existence of any nowhere vanishing global section in $TX$. It so happens that $\chi\left(S^n\right)$ vanishes if $n$ is odd and equals $2$ if $n$ is even. In the former case, $n\in\left\{1,3,7\right\}$ is special since these are the only ones with a trivial $TS^n$. Back to the general case, the space of all sections in $TX$, global or not, is denoted as $\Gamma \left(TX\right)$. One can now infer that the metric field $g(x)$ must be an element in the symmetric subspace $\Gamma (\raisebox{\depth}{\scalebox{1}[-1]{$\Lambda$}}^2X) \subset \Gamma \left(\mathcal{T}^2_0 X\right)$ and, depending on the topology of $X$, $\chi\left(X\right)$ might act as an obstruction for it to be a global section. Another very important natural bundle is the frame bundle $\pi: FX \rightarrow X$ of $TX$. The structure group and base space are the same, but the total space $FX$ is defined as the set of all tangent frames on $X$. In particular, the typical fiber $F_x X$ is diffeomorphic to $GL\left(n,\mathbb{R}\right)$ itself. This makes $FX$ $n\left(n+1\right)$-dimensional. Furthermore, $F_x X$ carries a smooth and free right action of $GL\left(n,\mathbb{R}\right)$. These facts make $\pi: FX \rightarrow X$ into a $GL\left(n,\mathbb{R}\right)$ principal bundle over $X$. This is the space where holonomic frames live. In particular, $\partial_\mu|_x$ is an element of $F_x X$ and the holonomic moving frame $\partial_\mu \left(x\right)$ is an element of $\Gamma\left(FX\right)$. Finally, the transformation laws \eqref{eq:holonomic-frame-transf-rule} and \eqref{eq:holonomic-coframe-transf-rule} are just a reflection of the naturalness of this bundle, \textit{i.e.}, that $f$ canonically lifts to it and to the co-frame bundle $F^*X$, respectively. $FX$ has such importance because all other natural vector bundles over $X$ can be derived from it via the associated vector bundle construction. Let $\rho: GL\left(n,\mathbb{R}\right) \rightarrow GL\left(\mathcal{V}\right)$ be a representation of $GL\left(n,\mathbb{R}\right)$ on a vector space $\mathcal{V}$. One can show that the product space $FX\times\mathcal{V}$, when quotiented by the equivalence relation $\left(u,v\right)\sim \left(ug,\rho\left(g^{-1}\right)v\right)$, where $u\in FX, \; v\in \mathcal{V}, \; g\in GL\left(n,\mathbb{R}\right)$, does form a vector bundle over $X$. This bundle, $\pi_{A\left(\mathcal{V}\right)}: A\left(\mathcal{V}\right)\rightarrow X$ where $A\left(\mathcal{V}\right)\equiv FX \times \mathcal{V}/\sim$, is said to be a vector bundle associated to $FX$. Clearly, there are as many associated vector bundles to $FX$ as there are representation spaces of $GL\left(n,\mathbb{R}\right)$.
In particular, $A\left(T_xX\right)$ is an associated vector bundle trivially isomorphic to $TX$, $A\left(T^*_x X\right)$ is to $T^*X$, and so on and so forth. In this sense, all natural vector bundles are derived from $FX$. Therefore, one could state that these tangent vector bundles are natural because $FX$ is natural; it is the naturalness of the latter that descends to the former via the associated vector bundle construction. If $X$ is paracompact, $FX$ can have its own tangent bundle decomposed into vertical and horizontal sub-bundles, $TFX=T_VFX\oplus T_HFX$ \cite[thm. 2.1]{kobayashi1}. While $T_VFX$ is uniquely defined as the kernel of $\pi_*$, its complement $T_HFX$ is not. Given a $\rho$-equivariant\footnote{A $\mathcal{V}$-valued form $\phi$ on $FX$ is $\rho$-equivariant if, for every $g\in GL\left(n,\mathbb{R}\right)$, $$R^*_g \left(\phi\right)=\rho\left(g^{-1}\right)\phi \;,$$ where $R^*_g$ is the pullback via the right action $R_g$ of $GL\left(n,\mathbb{R}\right)$ on $FX$.} $\mathfrak{gl}\left(n,\mathbb{R}\right)$-valued global section $\omega$ in $\Gamma\left(T_V^*FX\right)$, which is the identity on $\Gamma\left(T_VFX\right)$, the choice $\Gamma\left(T_HFX\right)= \mathrm{ker}\left(\omega\right)$ can be made. $\omega$ is known as a connection form on $FX$. This construction gives a recipe for differentiating $\rho$-equivariant $\mathcal{V}$-valued $k$-forms into $\rho$-equivariant $\mathcal{V}$-valued $\left(k+1\right)$-forms on $FX$. $\omega$ is itself an example of such forms. However, it is a vertical\footnote{Vertical means it annihilates sections in $\Gamma\left(T_HFX\right)$.} one, which means it differentiates itself in an unusual way, resulting in \begin{equation} \label{eq:curvatureform} \Omega\equiv d\omega+\frac{1}{2}\left[\omega,\omega\right] \;, \end{equation} where $d$ is the exterior derivative and $\left[\;,\;\right]$ is the graded Lie bracket. $\Omega$ is a $\rho$-equivariant $\mathfrak{gl}\left(n,\mathbb{R}\right)$-valued 2-form known as the curvature form of $\omega$. This process of differentiation abhors verticality. Indeed, $\Omega$ is horizontal\footnote{Horizontal means it annihilates sections in $\Gamma\left(T_VFX\right)$.} and so is the result of every differentiation via $\omega$. Thus, such a procedure is better understood as an operation on the space $\Gamma\left(\Lambda_{H,\rho}^*FX\right)$ of horizontal $\rho$-equivariant $\mathcal{V}$-valued forms on $FX$. Let $\rho_*|_{\mathds{1}}: \mathfrak{gl}\left(n,\mathbb{R}\right) \rightarrow \mathfrak{gl}\left(\mathcal{V}\right)$ be the pushforward map via $\rho$ at the identity element $\mathds{1}$ in $GL\left(n,\mathbb{R}\right)$. Then, $\omega$ indeed defines an endomorphism \begin{equation} \label{eq:ext.cov.der.} D=d+\rho_*|_{\mathds{1}}\left(\omega\right) \; \end{equation} on $\Gamma\left(\Lambda^*_{H,\rho} FX\right)$ that maps $\Gamma\left(\Lambda^k_{H,\rho} FX\right)$ into $\Gamma(\Lambda^{k+1}_{H,\rho} FX)$ while satisfying the graded Leibniz rule. This is an exterior covariant derivative on $FX$. The space $\Gamma\left(\Lambda^*_{H,\rho}FX\right)$ plays a pivotal role since an isomorphism exists between it and the space $\Gamma\left(A\left(\mathcal{V}\right)\otimes \Lambda^*X\right)$ of $\mathcal{V}$-valued forms on $X$ \cite[p.278]{tu2017}.
Using such a map, $D$ descends down from $FX$ to each $A\left(\mathcal{V}\right)$ as an operator $D_{A\left(\mathcal{V}\right)}$ that, instead, differentiates elements in $\Gamma\left(A\left(\mathcal{V}\right)\otimes \Lambda^kX\right)$ into elements in $\Gamma\left(A\left(\mathcal{V}\right)\otimes \Lambda^{k+1}X\right)$. One is able to guess by now that the $\nabla$ introduced in Section \ref{sec:gravities} is just $D_{TX}$ composed with the interior product $\rfloor$ in $\Gamma\left(\Lambda^*X\right)$, so that the result properly sends elements from $\Gamma\left(TX\otimes TX\right)$ to $\Gamma\left(TX\right)$, \begin{equation} \label{eq:cov.diff.} \nabla \equiv \;\rfloor D_{TX} \;. \end{equation} As a $\mathfrak{gl}\left(n,\mathbb{R}\right)$-valued element of $\Gamma\left(\Lambda^2_{H,\rho}FX\right)$, $\Omega$ necessarily descends to an element $R$ belonging to $\Gamma\left(\mathfrak{gl}\left(n,\mathbb{R}\right)\otimes\Lambda^2X\right)$, which is the familiar $\mathfrak{gl}\left(n,\mathbb{R}\right)$-valued curvature 2-form on $X$ -- the geometrical structure behind $\tensor{R}{^\alpha_{\beta\mu\nu}}$. On the other hand, $\omega$ cannot descend to $X$ via the associated bundle construction since it is a vertical form on $FX$. Nevertheless, one can always use a section $\sigma_{FX}: R \subseteq X \rightarrow FX$ to pull it down from $\Gamma\left(\Lambda^1_{V,\rho}FX\right)$ to $\Gamma\left(\mathfrak{gl}\left(n,\mathbb{R}\right)\otimes \Lambda^1 X\right)$, \begin{equation} \label{eq:local_connection} \Gamma \equiv {\sigma_{FX}}^* \omega \;, \end{equation} where ${\sigma_{FX}}^*$ is the pullback map. Now, $\Gamma$ is the $\mathfrak{gl}\left(n,\mathbb{R}\right)$-valued 1-form on $R$ which we would call a $GL\left(n,\mathbb{R}\right)$ gauge field in the traditional sense -- the geometrical structure behind the affine connection $\tensor{\Gamma}{^\alpha_{\beta\mu}}$. Gravity, however, should be described by a peculiar kind of gauge theory, in the sense that the fundamental fields capture the dynamics of the base space $X$ itself. The holonomic way accomplishes this by defining the theory directly on natural bundles. After all, these are the bundles having, by definition, functorial lifts of $lAut\left(X\right)$ and thus a direct connection with $X$. However, this is not the only way to do it. Consider that $X$ has a topology such that, given a manifold $P$ and a Lie group $G$, a non-trivial principal $G$-bundle $\pi': P \rightarrow X$ also exists over it. Moreover, that there exists a map $h: FX \rightarrow P$ such that $\pi=\pi'\circ h$. Again, this principal bundle morphism can be neatly captured by the commutative diagram \begin{equation} \label{dia:g-bundle_isomorphism} \begin{tikzcd} FX \arrow[r,"h"] \arrow[d,"\pi" left] & P \arrow[d, "\pi'" ] \\ X \arrow[r, "\mathds{1}_X"] & X \end{tikzcd} \tag{diagram 2} \;, \end{equation} where $\mathds{1}_X$ is the identity automorphism on $X$. $h$ is called vertical since it covers $\mathds{1}_X$. It is important to note that, at each fiber $\pi^{-1}(x)$, $h$ defines a homomorphism of Lie groups $h|_{\pi^{-1}}:GL\left(n,\mathbb{R}\right)\rightarrow G$. Whenever $h|_{\pi^{-1}}$ is the actual identity automorphism on $GL\left(n,\mathbb{R}\right)$, we call $h$ equivariant. When this is so, $P$, in \ref{dia:g-bundle_isomorphism}, is said to be soldered to $X$. Let us assume this is the case and that $\omega'$ is a connection on $P$.
Such a connection is also labelled as soldered since \begin{equation} \label{eq:soldered_connection} \omega = h^* \omega' \;, \end{equation} where $h^*$ is the pullback map via $h$. In particular, $FX$ is soldered to itself via vertical equivariant automorphisms. This corresponds to the case where $P=FX$. In such a scenario, equation \eqref{eq:soldered_connection} represents a gauge transformation on $FX$. Indeed, the set of all vertical equivariant automorphisms on $FX$, denoted as $\mathcal{G}\left(FX\right)$, is the set of all gauge transformations on $FX$ \cite{bleecker1981, rudolph2017}. Moving on, consider a representation $\rho': G\rightarrow GL\left(\mathcal{V}'\right)$ of $G$ on $\mathcal{V}'$. Exclusively on soldered $G$-bundles, there exists an element $\theta \in \Gamma \left(\Lambda^1_{H,\rho'}P\right)$ with $\mathrm{dim}\left(\mathcal{V}'\right)=n$. Let $D'$ be the exterior covariant derivative associated with $\omega'$; the so-called torsion form $\Theta \in \Gamma \left(\Lambda^2_{H,\rho'}P\right)$ can then be defined as \begin{equation} \label{eq:torsion_form} \Theta\equiv D'\theta \;. \end{equation} Via the associated vector bundle construction regarding $\rho'$, $\theta$ as well as $\Theta$ will descend to $A'\left(\mathcal{V}'\right)$, respectively, as an element $e\in \Gamma \left(A'\left(\mathcal{V}'\right)\otimes \Lambda^1X\right)$, which can be regarded as a vertical vector bundle isomorphism \begin{equation} \label{dia:v-bundle_isomorphism} \begin{tikzcd} TX \arrow[r,"e"] \arrow[d,"\pi_{TX}" left] & A'\left(\mathcal{V}'\right) \arrow[d, "\pi_{A'\left(\mathcal{V'}\right)}" ] \\ X \arrow[r, "\mathds{1}_X"] & X \end{tikzcd} \tag{diagram 3} \end{equation} and an element $T\in \Gamma \left(A'\left(\mathcal{V}'\right)\otimes \Lambda^2X\right)$, given by \begin{equation} \label{eq:torsion_2form} T=D_{A'\left(\mathcal{V}'\right)}e \;. \end{equation} The former is the well-known $\mathcal{V}'$-valued soldering 1-form -- the geometrical quantity behind $\tensor{e}{^A_\mu}$ -- while the latter is the $\mathcal{V}'$-valued torsion 2-form on $X$ -- the geometrical quantity behind $\tensor{T}{^A_{\mu\nu}}$. Both, of course, are absent from the traditional (unsoldered) gauge-theoretical framework of particle physics. As a consistency check, consider again the case $P=FX$. Moreover, consider $FX$ as trivially soldered, \textit{i.e.}, $\mathcal{G}\left(FX\right)$ contains only the trivial gauge transformation $\omega=\mathds{1}_{FX}^* \omega'$. To fulfill the condition $\mathrm{dim}\left(\mathcal{V'}\right)=n$ for $\theta$, let $\mathcal{V}'\simeq T_xX$. In such a case, $e$, of course, only corresponds to the vertical identity transformation $\mathds{1}_{TX}$ on $TX$. This tautology implies that $T$ reduces to \begin{align} T &= D_{TX}\mathds{1}_{TX} \;, \nonumber\\ &= D_{TX}\left(\partial_\alpha\otimes dx^\alpha\right) \;, \nonumber\\ &= \partial_\alpha\otimes \left(D_{TX}dx^\alpha\right) \;, \nonumber\\ &= \partial_\alpha\otimes dx^\beta \wedge \tensor{\Gamma}{^\alpha_\beta} \;, \end{align} where the definition $dx^\beta \wedge \tensor{\Gamma}{^\alpha_\beta} \equiv \rho_*|_{\mathds{1}}\left(\omega\right) dx^\alpha$, in which $\wedge$ is the wedge product, was used.
Then, \begin{align} \label{eq:torsion_2form_trivial_solder} T\left(\partial_\mu,\partial_\nu\right) &= \partial_\alpha \otimes dx^\beta\wedge\tensor{\Gamma}{^\alpha_\beta} \left(\partial_\mu,\partial_\nu\right) \;,\nonumber\\ &= \partial_\alpha \otimes \left(\tensor{\delta}{^\beta_\mu}\tensor{\Gamma}{^\alpha_{\beta\nu}} - \tensor{\delta}{^\beta_\nu}\tensor{\Gamma}{^\alpha_{\beta\mu}}\right) \;,\nonumber\\ &= \partial_\alpha \otimes \left(\tensor{\Gamma}{^\alpha_{\mu\nu}} - \tensor{\Gamma}{^\alpha_{\nu\mu}}\right) \;, \end{align} which is in agreement with the definition in \eqref{eq:vanishing_torsion}. Therefore, one can say that the torsion tensor collapses to the anti-symmetric sector of $\tensor{\Gamma}{^\alpha_{\mu\nu}}$ once $FX$ is trivially soldered to itself -- which is the case for holonomic theories of gravity. Finally, consider the case $P=FM$, where $FM$ is the frame bundle of the $n$-dimensional Minkowski space $M$. It is constructed over $M$ in much the same way $FX$ is constructed over $X$. Thus, $G$ is forced to equal $GL\left(n,\mathbb{R}\right)$. From the perspective of $X$, elements in $FM$, in its associated vector bundles $A'\left(\mathcal V'\right)$, as well as the $GL\left(n,\mathbb{R}\right)$ actions over them, are all non-holonomic in nature. This was first discussed in Section \ref{sec:holonomic_and_non-holonomic_frames}. In the language developed in this section, however, this means that these are not natural bundles over $X$ -- though they are natural over $M$. The non-holonomic frame $\tau_A|_x$, also introduced in Section \ref{sec:holonomic_and_non-holonomic_frames}, can be regarded as an element of $FM$ over $X$, while $\tau_A\left(x\right)$ is an element of $\Gamma\left(FM\right)$ over $X$. The non-holonomic $GL\left(n,\mathbb{R}\right)$ connection form $\omega'$ has local projection $A$ and associated curvature $F$ -- the geometrical quantities behind $\tensor{A}{^A_{B\mu}}$ and $\tensor{F}{^A_{B\mu\nu}}$, respectively -- which are the gauge-theoretical fields used in the ECSK theory of gravity and its generalizations, \textit{vide} \eqref{eq:nonholcurvature}\footnote{Although $A$ is a flat connection ($F=0$) if $FM$ is seen as a bundle over $M$, the same is not necessarily true if $FM$ is seen as a bundle over $X$.}. The vector space $V_x$ that $\tau_A|_x$ spans is a fiber of $A'\left(\mathcal{V}'\right)$ over $x$ such that $\mathrm{dim}\left(\mathcal{V}'\right)=n$. Indeed, $V\equiv \bigsqcup_x V_x$ is exactly the kind of vector bundle present in \ref{dia:v-bundle_isomorphism}. One should notice that $V_x$ is just $T_xM$ and, correspondingly, $V$ is just $TM$. In order to clarify how $e$ solders non-holonomic frames on $X$, consider $\tau^A\in \Gamma\left(T^*M\right)$ and the map $e^*:T^*M \rightarrow T^*X$ where \begin{equation} \label{eq:soldering_pullback} \left[e^*\left(\tau^A\right)\right]\left(\partial_\mu\right) = \tau^A\left[e\left(\partial_\mu\right)\right] \;. \end{equation} As already mentioned, the vielbein field $\tensor{e}{^A_\mu}$ is just the matrix representation of $e$ in the basis $\tau_A\otimes dx^\mu$, \textit{i.e.}, $\tensor{e}{^A_\mu}\equiv \tau^A\left[e\left(\partial_\mu\right)\right]$. Meanwhile, $\tensor{e}{^*_\mu^A}\equiv\left[e^*\left(\tau^A\right)\right]\left(\partial_\mu\right)$ is the matrix representation of $e^*$ in $dx^\mu\otimes\tau_A$. Notice that $e$ maps holonomic frames into non-holonomic ones while $e^*$ glues non-holonomic co-frames on $X$.
In practice, however, due to the contravariant nature of the pullback map, the roles of $\tensor{e}{^A_\mu}$ and $\tensor{e}{^*_\mu^A}$ get flipped. Indeed, \begin{align} \label{eq:holonomic-to-nonholonomic-frame} e \left(\partial_\mu\right) &= \tensor{e}{^A_\mu}\tau_A \;,\nonumber\\ &= \tensor{e}{^*_\mu^A}\tau_A \;, \end{align} while \begin{align} \label{eq:nonholonomic-to-holonomic-coframe} e^* \left(\tau^A\right) &= \tensor{e}{^*_\mu^A}dx^\mu \;,\nonumber\\ &= \tensor{e}{^A_\mu}dx^\mu \;. \end{align} Equation \eqref{eq:soldering_pullback}, stating that $\tensor{e}{^*_\mu^A}=\tensor{e}{^A_\mu}$, was used in both \eqref{eq:holonomic-to-nonholonomic-frame} and \eqref{eq:nonholonomic-to-holonomic-coframe}. In the literature, $e^*\left(\tau^A\right)$ is presented as the 1-form vielbein $e^A$ while $e\left(\partial_\mu\right)$ is mostly ignored. The former is the ``subtle'' relation between $e^A$ and $\tau^A$ mentioned at the end of Section \ref{sec:holonomic_and_non-holonomic_frames}. Furthermore, as long as $e$ is an isomorphism, inverses exist for it and its pullback. A similar analysis can then be done. Explicitly, we have $\tensor{e}{^\mu_A}\equiv dx^\mu\left[e^{-1}\left(\tau_A\right)\right]$ and $\tensor{e}{^*_A^\mu}\equiv \left[e^{*-1}\left(dx^\mu\right)\right]\left(\tau_A\right)$. Moreover, \begin{align} \label{eq:nonholonomic-to-holonomic-frame} e^{-1} \left(\tau_A\right) &= \tensor{e}{^\mu_A}\partial_\mu \;,\nonumber\\ &= \tensor{e}{^*_A^\mu}\partial_\mu \;, \end{align} while \begin{align} \label{eq:holonomic-to-nonholonomic-coframe} e^{*-1} \left(dx^\mu\right) &= \tensor{e}{^*_A^\mu}\tau^A \;,\nonumber\\ &= \tensor{e}{^\mu_A}\tau^A \;. \end{align} The analog of equation \eqref{eq:soldering_pullback} for $e^{-1}$ states that $\tensor{e}{^*_A^\mu}=\tensor{e}{^\mu_A}$ and was used in both \eqref{eq:nonholonomic-to-holonomic-frame} and \eqref{eq:holonomic-to-nonholonomic-coframe}. It is easy to show that the compositions $e^{*-1}\circ e^*$ and $e^{-1}\circ e$ are behind equations \eqref{eq:inversepullbackvielbein} and \eqref{eq:inversevielbein}, respectively. In the literature, however, it is commonplace to define $e^{-1}\left(\tau_A\right)$ as the 1-vector $e_A$ such that $e^A \left(e_B\right)=\tensor{\delta}{^A_B}$. This latter equation does not hold by itself, but is a consequence of \eqref{eq:inversepullbackvielbein} being true. The last geometrical structure we need to address is that of a metric. In the beginning of this section, we commented on how a metric tensor on $X$ is an element of $\Gamma (\raisebox{\depth}{\scalebox{1}[-1]{$\Lambda$}}^2X)$. Such a metric lives on a natural bundle and thus is holonomic in nature. An analogous definition can be made using any other vector bundle over $X$. For instance, a non-holonomic metric $g'$ on $X$ can be defined as a symmetric $(0,2)$-tensor on $A'\left(\mathcal{V}'\right)$, \textit{i.e.}, an element in $\Gamma (\raisebox{\depth}{\scalebox{1}[-1]{$\Lambda$}}^2A'\left(\mathcal{V}'\right))$. Also in the beginning of this section, we commented on how there might be topological obstructions for local sections to be smoothly glued together into a global one. On certain topologies, $\chi\left(X\right)$ plays that role. Luckily, if $X$ is paracompact, a partition of unity can be used to extend a local Riemannian metric into a global one on whatever vector bundle above $X$. Thus, global Riemannian metrics are always at our disposal. The downside of Riemannian metrics, however, is that they can always be chosen geodesically complete.
Thus, they cannot provide good classical models for cosmology and/or black hole physics. Unluckily, obstructions to extending a local Lorentzian metric into a global one are more common. Indeed, paracompactness is only enough for non-compact topologies. On compact ones, paracompactness needs to be supplemented with the condition $\chi\left(X\right)=0$. For instance, even-dimensional spheres famously do not accept a Lorentzian structure, most notably $S^2$ and $S^4$. The existence of metric structures on $X$ has interesting consequences for $FX$ and $P$, as exemplified in \cite[ex. 5.5]{kobayashi1}. Via the Gram-Schmidt process, $g$ and $g'$ allow us to define in each $F_x X$ and $P_x$, respectively, subsets $F^\mathcal{O}_x X \subset F_x X$ and $P^\mathcal{O}_x \subset P_x$ of orthogonal frames. The disjoint union over all $x$ allows us to define $F^\mathcal{O}X$ and $P^\mathcal{O}$, and one can show that these do have the structure of embedded principal sub-bundles within $FX$ and $P$, respectively, with structure group $O\left(p,q \right) \; ; \; p+q=n$ if the metrics $g$ and $g'$ have signature $\left(p,q\right)$. For instance, if Riemannian ($p=0$), then the structure subgroup is $O\left(n\right)$, while if Lorentzian ($p=1$), then the structure subgroup is $O\left(1,n-1\right)$. On orientable topologies, the structure groups of $FX$ and $P$ can also be reduced from $GL\left(n,\mathbb{R}\right)$ to the orientation-preserving $GL^+\left(n,\mathbb{R}\right)$ (positive determinant). Consequently, the structure subgroup is going to be $SO\left(p,q\right)$. The proof of existence on paracompact $X$ relies on the quotient bundles $FX/O\left(p,q\right)$ and $P/O\left(p,q\right)$ admitting global sections, \textit{i.e.}, being trivial \cite[prop. 5.6]{kobayashi1}. This is true whenever the coset space $GL\left(n,\mathbb{R}\right)/O\left(p,q\right)$ is contractible \cite[thm. 5.7]{kobayashi1}. It so happens that if $p=0$, this space is homeomorphic to $\mathbb{R}^{n\left(n+1\right)/2}$, while if $p=1$, it is homotopy equivalent to $\mathbb{R}P^{n-1}$ -- the $\left(n-1\right)$-dimensional real projective space. The former space is clearly contractible, while the latter is not. Indeed, for $n>2$, $\pi_1 \left(\mathbb{R}P^{n-1}\right) \simeq \mathbb{Z}_2$ and $\pi_k\left(\mathbb{R}P^{n-1}\right) \simeq \pi_k\left(S^{n-1}\right)$ for $k>1$. This is the bundle-theoretical reason why Riemannian structures always exist while Lorentzian ones do not. In any case, whenever a metric structure does exist, one can show that the embeddings $F^\mathcal{O}X \rightarrow FX$ and $P^\mathcal{O}\rightarrow P$ also do. Ultimately, this is the justification, omitted from Section \ref{sec:holonomic_and_non-holonomic_frames}, that allowed us to extend the transformation group of non-holonomic frames from $SO\left(1,n-1\right)$ to $GL\left(n,\mathbb{R}\right)$: we assumed the existence of $\eta$. From a physical standpoint, the converse interpretation is more interesting \cite[exe. 5.7]{kobayashi1}. Since the argument above works both ways, one can say that whenever the quantum structure of spacetime changes to that of a smooth manifold $X$ with appropriate topology, then a corresponding symmetry breaking $GL\left(n,\mathbb{R}\right) \rightarrow O\left(p,q\right)$ occurs in $FX$ and $P$. This gives a comprehensive scenario in which metric structures arise dynamically, as Higgs-Goldstone-type fields \cite{sardanashvily1983,nikolova1984,sardanashvily2016}.
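The pointwise content of this reduction is easy to exhibit. The sketch below -- ours, with randomly generated sample data and the Riemannian case $p=0$ for simplicity -- constructs an orthonormal co-frame from a metric at a point and displays the $O(n)$ ambiguity that constitutes the fiber of $F^\mathcal{O}X$: \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.normal(size=(n, n))
g = a @ a.T + n*np.eye(n)        # sample positive-definite metric g_{mu nu}

e = np.linalg.cholesky(g).T      # one orthonormal co-frame: e^T e = g
q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # a random O(n) element
e2 = q @ e                       # another orthonormal co-frame

assert np.allclose(e.T @ e, g) and np.allclose(e2.T @ e2, g)
R = e2 @ np.linalg.inv(e)        # the frame change relating them...
assert np.allclose(R.T @ R, np.eye(n))   # ...lies in O(n)
\end{verbatim} The Lorentzian case is analogous, with the Cholesky step replaced by a hyperbolic Gram-Schmidt and $O(n)$ by $O(1,n-1)$.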
Theories of induced gravity employing a similar mechanism have been extensively explored in \cite{macdowell1977a,stelle1979,gotzes1989,neeman1987,kirsch2005,leclerc2006,tresguerres2008,randono2010,sobreiro2011,mielke2011,sobreiro2017,sadovski2015}. It should be apparent that to have a bundle of orthogonal frames by no means equates to having an everywhere flat Minkowski structure $\eta$. Thus, to realize the equivalence principle we do need to plug $FM$ onto $X$. As already said, $FM$ is first conceived over the Minkowski space $M$. The latter is paracompact and non-compact, thus a global Lorentzian metric $g'$ definitely exists on it. Further, $M$ is homeomorphic to $\mathbb{R}^n$ and thus contractible. This means that any bundle over it is trivial, \textit{e.g.}, $FM$, $TM$ and all other bundles associated to them. Consequently, $\omega'$ can be chosen as the canonical flat connection \cite[\S 2.9]{kobayashi1}. By postulating that this connection is torsionless and metrical, we arrive at the Riemannian hypothesis, which states that $\omega'$ is derived from the metric $g'=\eta$. As discussed, the existence of $\eta$ guarantees the existence of the global Lorentz frame $\tau_a$ via the embedding $F^\mathcal{O} M \rightarrow FM$, in which $SO(1,n-1)$ is the structure subgroup. In $\tau_a$, $\eta$ assumes its well-known diagonal form $\eta\left(\tau_a,\tau_b\right)=\eta_{ab}\equiv \mathrm{diag}\left(-1,+1,\cdots,+1\right)$. By plugging $FM$ onto $X$ via the projection $\pi'$, we essentially localize on $X$ the global Minkowskian structures just mentioned. In summary, the equivalence principle is geometrically encoded in the following diagrams: \begin{equation} \label{dia:p-bundle_ep} \begin{tikzcd} GL\left(n,\mathbb{R}\right) \arrow[r,hook] & FX \arrow[r,"h"] \arrow[d, "\pi" left] & FM \arrow[r,"q"] \arrow[d, "\pi'"] & F^\mathcal{O}M \arrow[d, "\pi''"] & \arrow[l,hook'] SO(1,n-1) \\ & X \arrow[r,"{\mathds{1}_X}"] & X \arrow[r,"{\mathds{1}_X}"] & X & \end{tikzcd} \tag{diagram 4} \end{equation} or, equivalently, in terms of vector bundles, \begin{equation} \label{dia:v-bundle_ep} \begin{tikzcd} GL\left(n,\mathbb{R}\right) \arrow[r,hook] & TX \arrow[r,"e"] \arrow[d, "\pi_{TX}" left] & TM \arrow[r,"q'"] \arrow[d, "\pi_{TM}"] & T^\mathcal{O}M \arrow[d, "\pi_{T^\mathcal{O}M}"] & \arrow[l,hook'] SO(1,n-1) \\ & X \arrow[r,"{\mathds{1}_X}"] & X \arrow[r,"{\mathds{1}_X}"] & X & \end{tikzcd} \tag{diagram 5}\;, \end{equation} where $q$ and $q'$ are bundle reductions. Clearly, the existence of the bundle isomorphism $h$ or, equivalently, $e$, allows us to formulate gravity on bundles beyond natural ones. The issue of these different constructions being dynamically equivalent was exactly what we tackled in Section \ref{sec:on-shell_equivalence} -- at least classically, where these isomorphisms are (usually) assumed to exist. At this point, we are ready to state equations \eqref{eq:field-transformation} in a geometrical fashion. Indeed, they are just the pullback by $e$ of the non-holonomic metric $\eta$ and connection $A$ to the holonomic metric $g$ and connection $\Gamma$, respectively, \begin{subequations} \label{eq:e-pullback} \begin{align} g &= e^* \eta \;, \label{eq:e-pullback-of-nonholonomic-metric} \\ \Gamma &= e^* A \;, \label{eq:e-pullback-of-nonholonomic-connection} \end{align} \end{subequations} much in the spirit first presented in \cite{giachetti1980}.
Clearly, the equations in \eqref{eq:e-pullback} are a reincarnation of equation \eqref{eq:soldered_connection}, but on associated vector bundles and including the metric field. Again, we stress that $e$ is not an isomorphism from $TX$ to itself, but an isomorphism from $TX$ to $TM$. In the former case, $e$ could be seen as a functorial lift of $lAut\left(X\right)$, according to \ref{dia:categorical_lift}, and thus its matrix representation would be $\tensor{J}{^{\nu'}_\mu}(x)$. This is an important conceptual distinction that, in practice, only amounts to a substitution of $\tensor{J}{^{\nu'}_\mu}(x)$ by $\tensor{e}{^A_\mu}(x)$ in the transformation law for tensor and connection fields. Since the equations of motion should be covariant under the former transformations, they are also covariant under the latter. This is what underpins our result from Section \ref{sec:on-shell_equivalence}. An advantage of this geometrical construction, however, is that it makes very clear what the meaning of a non-invertible $\tensor{e}{^A_\mu}(x)$ is: such configurations imply topology-changing spacetimes (\textit{vide} Section \ref{sec:discussions}). Let $R$ be a region in which $\mathrm{det}\left[\tensor{e}{^A_\mu}(x)\right]=0 \; \forall \; x \in R$. Over such a non-invertible region, $e$, of course, cannot be an isomorphism and, thus, $TM$ cannot be realized as a bundle soldered to $TX$. In other words, at $R$ the equivalence principle, as defined above, fails. \section{Discussion}% \label{sec:discussions} In Section \ref{sec:gravities}, we reviewed the known fact that, in the absence of matter carrying non-vanishing hypermomentum currents, the holonomic formulation of gravity, starting with the functional $S_{\text{EHP}}\left[g,\Gamma\right]$, is on-shell equivalent to traditional GR. Indeed, the former has symmetry under the projective transformations \eqref{eq:r-symmetry}, which ensures that torsion and non-metricity remain pure gauge objects. The same is expected of the non-holonomic formulation of gravity starting with $S_{\text{VEP}}\left[e,A\right]$. Analogously, it enjoys symmetry under the projective transformations \eqref{eq:nonholonomic-projective-transformation}, which renders non-holonomic torsion and non-metricity non-propagating. The field transformations \eqref{eq:field-transformation} can be used to covariantly transform the $\left(e,A\right)$ field equations \eqref{eq:nonholonomic-field-eqs} into the $(g,\Gamma)$ ones, \eqref{eq:holonomic-field-eqs}. This establishes that both are equivalent to the Einstein equations. Once we consider more general functionals, $S\left[g,\Gamma\right]$, the equivalence to GR is, of course, lost \cite{borunda2008}. After all, projective invariance is generally absent. Nevertheless, the general question of equivalence between the holonomic $\left(g,\Gamma\right)$ \textit{versus} the non-holonomic $\left(e,A\right)$ formulations of gravity remains. In Section \ref{sec:on-shell_equivalence}, we addressed it and found that the answer is generally negative, at least if one considers very general field transformations, of the likes of \eqref{eq:non-standard-field-transf}. On the other hand, the aforementioned \eqref{eq:field-transformation} are special in the sense that they allow the on-shell equivalence to be established independently of any particular dynamics or spacetime dimension, \textit{i.e.}, independently of $S$.
To better understand \eqref{eq:field-transformation}, in Section \ref{sec:geometric_picture} we reformulated our fields in terms of global geometrical quantities living in bundles over $X$. We rewrote \eqref{eq:field-transformation} as \eqref{eq:e-pullback}, concluding that these transformations amount to the soldering of a Lorentz connection and a Minkowski metric to $X$. This is done via the vector bundle isomorphism $e: TX \rightarrow TM$, which ultimately encodes the equivalence principle, \textit{vide} \ref{dia:v-bundle_ep}. This made clear that \eqref{eq:field-transformation} are special because, functionally, they have the same form as gauge transformations. Thus, gauge covariant field equations will remain covariant under them. This is what underpins the result from Section \ref{sec:on-shell_equivalence}. After the discussion above, it should be clear that a violation of \eqref{eq:field-transformation} might lead to the unsoldering of the $SO\left(1,n-1\right)$ gauge-theoretical (non-holonomic) description of gravity from spacetime, decoupling internal from external degrees of freedom. This jeopardizes the equivalence principle, as defined here, and the equivalence between the holonomic and non-holonomic descriptions. For instance, consider singular metrics or, in non-holonomic language, non-invertible configurations of the vielbein field. This equates to spacetimes with degenerate regions, over which the vector bundle morphism $e$ is, at most, surjective or injective. Classically, these spacetimes have been historically ignored due to a number of theorems by Geroch \cite{geroch1967}: topology-changing spacetimes are causally misbehaved. However, later works by Tipler, Horowitz, Borde, Sorkin and others have shown that topology-changing spacetimes can be causal, albeit with degenerate regions of the aforementioned kind \cite{tipler1977,horowitz1991,borde1994,borde1999,heveling2022}. In \cite{kaul2016a,kaul2016b,kaul2019} it is shown that the classical equivalence between $S_{\text{EHP}}\left[g,\Gamma\right]$ and $S_{\text{VEP}}\left[e,A\right]$ breaks down in these degenerate regions, even in vacuum. In other words, \eqref{eq:field-transformation} fails and the gauge description unsolders from spacetime. Quantum mechanically, an earlier work by Tseytlin had already noticed this failure once $\mathrm{det}\left(\tensor{e}{^A_\mu}\right)=0$ configurations are allowed in the gravitational path integral \cite{tseytlin1982}. Of course, the same is expected for more general functionals of $g$ and $\Gamma$ \textit{versus} $e$ and $A$, classically and quantum mechanically. Topology change and acausality are much more digestible features in the quantum than in the classical realm. In spite of that, degenerate spacetimes have also been largely avoided quantum mechanically, since an invertible vielbein can be absorbed as the translational part of an $ISO\left(1,n-1\right)$ or $Aff\left(n\right)$ connection \cite{mielke1993,mielke2000}. A gauge-theoretical interpretation is, of course, much easier to deal with than the original gravitational theory. This was highly inspired by Ach\'ucarro, Townsend and Witten's result connecting $n=3$ EH to the Chern-Simons (CS) functional, in which the invertible dreibein is naturally absorbed as part of an $ISO(1,2)$, $SO(3,1)$ or $SO(2,2)$ connection \cite{townsend1986,witten1988c}, depending on the value of the cosmological constant ($\Lambda=0$, $\Lambda > 0$ or $\Lambda < 0$, respectively).
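To make the absorption of the dreibein explicit: schematically, for $\Lambda=-1/\ell^2<0$ and suppressing Lie-algebra indices (this is the standard decomposition; the normalization is conventional), the dreibein and the spin connection assemble into the pair of connections
\begin{equation*}
A^{\pm}_\mu = \omega_\mu \pm \frac{1}{\ell}\, e_\mu \;,
\end{equation*}
which together form an $SO(2,2)$ connection. The gauge transformations of $A^\pm$ then reproduce local Lorentz transformations and, on-shell, diffeomorphisms acting on $\left(e,\omega\right)$, but the correspondence only closes when $e$ is invertible.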
The main issue with this approach is that the gauge transformations of $e$ and $A$ combine into the gauge transformation of a pure $ISO\left(1,2\right)$, $SO(3,1)$ or $SO\left(2,2\right)$ connection if, and only if, the vielbein is invertible. Not only that, we are referring here only to infinitesimal gauge transformations smoothly connected to the identity. Only under these assumptions does the three-dimensional equivalence EH=CS hold. The price paid is that $n=3$ quantum GR then admits a finite and 1-loop exact topological quantum field theory (TQFT) description whose observables are exclusively knot invariants \cite{witten1989c,singer1991,sorella1992,piguet1995}. This became especially crippling after the discovery of the three-dimensional BTZ black hole and its thermodynamical properties for $\Lambda < 0$ \cite{zanelli1992,zanelli1993}. It is highly improbable that a TQFT description alone could account for the huge degeneracy of microstates such an anti de Sitter (AdS) black hole potentially has; see \cite{carlip1995a,carlip2005} for a review. The tension led Witten to reconsider the issue in \cite{witten2007}, conjecturing instead, on the asymptotic AdS boundary, a family of monster conformal field theories as holographic duals of $n=3$ quantum GR in the bulk for certain non-perturbative values of the coupling parameter $\sqrt{|\Lambda|}/G$, where $G$ is Newton's constant. Indeed, the smallest non-trivial representation of the monster group acts on a 196,883-dimensional complex vector space. Thus, it potentially provides enough room for the microscopic description of the BTZ black hole. In any case, the point is that on degenerate regions, again, equation \eqref{eq:field-transformation} fails and the CS description unsolders from spacetime. Finally, another way one could envision violating \eqref{eq:field-transformation} would be to postulate the existence of a non-vanishing tensor field \begin{equation} \label{eq:d-tensor} \tensor{D}{^A_{\mu\nu}} \equiv \partial_\mu \tensor{e}{^A_\nu} + \tensor{\Gamma}{^\alpha_{\mu\nu}}\tensor{e}{^A_\alpha}-\tensor{\omega}{^A_{B\mu}}\tensor{e}{^B_\nu}\;. \end{equation} Such a tensor is sometimes interpreted in the literature as an exterior covariant derivative, defined on the spliced bundle $TX\times TM$, applied to $\tensor{e}{^A_\nu}$. Theories with non-vanishing $\tensor{D}{^A_{\mu\nu}}$, however, actually live on these spliced bundles and are, therefore, concomitantly holonomic and non-holonomic. Thus, the question of equivalence becomes nonsensical. In any case, this scenario is worth mentioning since it presents a novel way to lift spacetime and gauge-space symmetries into a single geometrical arena, finding recent applications in 11-dimensional supergravity, higher-spin gravity and, ultimately, M-theory \cite{hull2007,engquist2008,nicolai2014a,nicolai2014b}. \section*{Acknowledgements} The author would like to thank R.~F. Sobreiro and J. Zanelli for the discussions during the development of this work. \printbibliography \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
4,077
Hazard analysis and critical control points (HACCP; German: Gefahrenanalyse und kritische Kontrollpunkte, also Gefahrenanalyse und kritische Lenkungspunkte) is a quality tool designed for the production and handling of food. It is clearly structured and oriented toward preventive measures. The concept serves to avoid food-related hazards that can lead to illness or injury of consumers.

History

The concept was developed in 1958, when the American corporation The Pillsbury Company was commissioned by the space agency NASA to produce astronaut food suitable for space flight that was supposed to be one hundred percent safe. Pillsbury applied the FMEA methodology, created by the US military in 1949 for technical applications, to the food industry and refined this preventive concept together with NASA. In 1971 it was published in the USA as the HACCP concept. In 1985 the US National Academy of Sciences recommended applying the concept; it was subsequently tested and developed further worldwide. The Codex Alimentarius, published by the Food and Agriculture Organization of the United Nations (FAO), has likewise recommended the application of the concept since 1993.

Principles

The principles of the quality tool hazard analysis and critical control points (HACCP) are as follows:

Conduct a hazard analysis. Establish plans for identifying food-related hazards and for countermeasures that can prevent these hazards. A hazard can be any physical, chemical or biological property that makes consuming the food dangerous for humans.

Identify the critical control points for food safety. A critical control point is a point, step or procedure in the overall food production process at which controls can be applied to prevent or eliminate a food-related hazard, or reduce it to an acceptable level.

Establish critical limits at the respective critical control points. A critical limit is the maximum or minimum value against which physical, chemical or biological hazards must be checked in order to avert or eliminate a hazard, or reduce it to an acceptable level.

Establish appropriate monitoring procedures at the critical control points. Monitoring, that is, continuous observation, is necessary to keep the process under control at each critical point. The monitoring procedure and its frequency should be recorded in the HACCP plan.

Establish corrective actions for the case of deviations. The steps to be taken when the defined limits are exceeded or undershot must be specified. The goal is that no food failing to meet the required limits enters the consumption cycle.

Establish verification procedures to check the effectiveness of the defined HACCP system. The verification procedures of the HACCP system serve to guarantee the goals of safe food production on a permanent basis. The complete HACCP plans, the records of the critical control points, the critical limits, the samples and the analyses can all be evaluated.

Establish documentation of the measures. The HACCP principles require all food production facilities to maintain appropriate archives and documentation of HACCP data. These include the data on the control points, the limit values, the verification and evaluation activities, and the procedure to follow in case of deviations.

Implementation in the EU and Germany

In German law, the HACCP concept was first anchored in the Food Hygiene Ordinance (Lebensmittelhygiene-Verordnung) of 1998. Regulation (EC) No 852/2004 of the European Community likewise mandates the application of the HACCP concept in all businesses engaged in the production, processing and distribution of food. On 1 January 2006, the EU hygiene package adopted in 2004 came into force. It stipulates that only food meeting the HACCP guidelines may be traded in, and imported into, the Union. Even before that, all businesses that produce food or handle food in any way had to have a HACCP concept. Since 2006 it must be available in a documented version. Large businesses with many hazards and high risk potential are required to keep detailed records; for small businesses, cleaning schedules, verification records or staff instructions suffice. When implementing the legal requirements, it is important to start with the introduction of Good Hygiene Practice (GHP). These preventive measures (for example a cleaning program, a training program, pest control, incoming goods inspection and raw materials policy) are published by many associations as guidelines for the various trades. The business builds on this basis, and the achieved success determines the company-specific residual risk. In line with the Codex requirements (see above), this risk must be determined separately for each business. From it, critical control points that must be managed may emerge. GHP alone is not yet a HACCP concept.

HACCP for frozen food

The first international HACCP regulation for frozen food was adopted in 1978. Since then, this regulation too has been regularly revised and improved. Special consideration is given to the fact that the requirements for a deep-freeze chain are even greater than those for a normal cold chain. To account for this complexity, the latest appendix of 1996 proposed, among other methods, the use of time-temperature indicators.

Certification

Independent and accredited certification bodies are responsible for certification. Regulation (EC) 852/2004 legally obliges "food business operators" to "put in place, implement and maintain a permanent procedure or procedures based on the HACCP principles". Within the European Community (EC), however, there is no obligation to have these systems certified.

Literature

Mayer, Jürgen: Modernes HACCP. Praktischer Anwendungsleitfaden. 1st edition 2019; JMC Verlag. ISBN 978-3-00-061977-9
Mayer, Jürgen: HACCP in der Gastronomie. Praxis-Handbuch. 1st edition 2022; JMC Verlag. ISBN 978-3-9813908-3-4
Markus Krauß, Levke Voß: Hygieneanforderungen an unverpackte Lebensmittel in Selbstbedienungstheken: Ein Beitrag zur Auslegung von § 4 Abs. 2 der Verordnung (EG) Nr. 852/2004, in: Zeitschrift für das gesamte Lebensmittelrecht, ZLR 4, 2010, p. 413
Hans-Jürgen Sinell, Heinz Meyer: Lebensmittelsicherheit. HACCP in der Praxis. ISBN 978-3-86022-290-4
Ulrike Arens-Azevedo, Heinz Joh: Mit HACCP sicher ans Ziel! Praxishilfe für die Umsetzung betriebseigener Hygienemaßnahmen und Kontrollen zur Qualitätssicherung in Gastronomie und Gemeinschaftsverpflegung. Mit Arbeitsblättern. ISBN 978-3-87515-000-1
{ "redpajama_set_name": "RedPajamaWikipedia" }
765
Q: Show a modal form onchange of a dropdownlist

So I have an ASP.NET MVC application in which users should be able to add content to a page. This content can be things like a location, photos, ... I have a dropdownlist in my View with all the different content options. When an option is selected, the page should show a modal form where they can enter details based on the type of content they selected. My modal form is constructed like this:

<div class="modal fade" id="locationModal" tabindex="-1" role="dialog" aria-labelledby="Login" aria-hidden="true">
  <div class="modal-dialog">
    <div class="modal-content">
      <div class="modal-header">
        <button type="button" class="close" data-dismiss="modal" aria-label="Close">
          <span aria-hidden="true">&times;</span>
        </button>
        <h4 class="modal-title">Map</h4>
      </div>
      <div class="modal-body">
        <!-- The form is placed inside the body of modal -->
        <form id="locationForm" method="post" class="form-horizontal">
          <div class="form-group">
            <label class="col-xs-3 control-label">Address</label>
            <div class="col-xs-5">
              <input type="text" class="form-control" name="address" />
            </div>
          </div>
          <div class="form-group">
            <div class="col-xs-5 col-xs-offset-3">
              <button type="submit" class="btn btn-primary">Add location</button>
            </div>
          </div>
        </form>
      </div>
    </div>
  </div>
</div>

It's easy to fire up the modal with a button with the data-toggle and data-target attributes, like this:

<p class="text-center">
  <button id="addLocation" class="btn btn-default" data-toggle="modal" data-target="#locationModal"></button>
</p>

But how do I get this form to be shown when selecting a value from the dropdown list? I have added a JS function to the OnChange of my dropdownlist select, but I don't know how to fire up the modal form. Thanks in advance.

A: You can show the modal from JavaScript in the dropdown's change event handler: call Bootstrap's .modal('show') on the modal element instead of relying on the data-toggle/data-target attributes. Also check this link: Bootstrap Modal doesn't show
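For illustration, here is a minimal jQuery sketch of that approach. The select element's id (contentTypes) and the option value (location) are assumed names for this example, not taken from the original post; it also assumes jQuery and the Bootstrap modal plugin are loaded on the page:

$(document).ready(function () {
    // Fire the modal whenever the dropdown selection changes.
    $('#contentTypes').on('change', function () {
        // Only the "location" content type uses this particular modal.
        if ($(this).val() === 'location') {
            // Programmatic equivalent of data-toggle="modal" data-target="#locationModal".
            $('#locationModal').modal('show');
        }
    });
});

Calling .modal('show') does exactly what the data attributes do declaratively, so the modal markup itself does not need to change.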
{ "redpajama_set_name": "RedPajamaStackExchange" }
668
{"url":"http:\/\/math.stackexchange.com\/questions\/817686\/a-doubt-on-indefinite-integral","text":"# A doubt on indefinite integral [closed]\n\nFind indefinite integral of $f(x) = 9x^3\\sqrt{x} - 2x^5 +e^{-2x} + 11x$\n\nHere is my attempt: $$\\int\\left(9x^3\\sqrt{x} - 2x^5 +e^{-2x} + 11x\\right)\\ dx\\\\ =9\\int x^{7\/2}\\ dx -2\\int x^5\\ dx + \\int \\frac{x}{e^2}\\ dx+11\\int x\\ dx$$ (9 x^(7\/2)-2 x^5+x\/e^2+11 x) dx\n\n= 9 \u222b x^(7\/2) dx-2 \u222b x^5 dx+(11+1\/e^2) \u222b x dx\n\n= 2 x^(9\/2)-2 \u222b x^5 dx+(11+1\/e^2) \u222b x dx\n\n= 2 x^(9\/2)-2 \u222b x^5 dx+1\/2 (11+1\/e^2) x^2\n\n= 2 x^(9\/2)-x^6\/3+1\/2 (11+1\/e^2) x^2+constant\n\n= (x^2 (e^2 (12 x^(5\/2)-2 x^4+33)+3))\/(6 e^2)+constant\n\nI have done this for ten times and this is what I got. Is it looking right? Any help please.\n\n-\n\n## closed as unclear what you're asking by heropup, Mike, John, J. W. Perry, user91500Jun 2 at 4:40\n\nPlease clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it\u2019s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question.If this question can be reworded to fit the rules in the help center, please edit the question.\n\nI tried to edit your post but as you have written it, your notation is ambiguous and therefore cannot be edited. As such, your question has been marked for closure due to being unclear. \u2013\u00a0 heropup Jun 2 at 3:33\n@chris, can you use MathJax to form your question? Is the function this: $f(x) = 9x^3\\sqrt(c)-2x^5+\\frac{x}{e^2}+11x$ or is it $f(x) = 9x^3\\sqrt(c)-2x^5+e^{-2x}+11x$ Also tag your question as \"homework\", which I feel it is. \u2013\u00a0 tpb261 Jun 2 at 3:41\nsorry the question is find the indefinite integral ((9x^3)(\u221ax)) - 2x^5 + e^-2x + 11x past exam question my answer i got is =(x^2(e^2(12x^(5\/2) -2x^4 +33)+3)) \/ 6e^2 \u2013\u00a0 user152431 Jun 2 at 3:53\n@chris I've edited a few lines of the equation so you can know how LaTeX works. Now you continue my edit to avoid ambiguity or mistake. \u2013\u00a0 Tunk-Fey Jun 2 at 4:47","date":"2014-11-26 20:33:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5319229364395142, \"perplexity\": 2726.7597127140034}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-49\/segments\/1416931007501.20\/warc\/CC-MAIN-20141125155647-00160-ip-10-235-23-156.ec2.internal.warc.gz\"}"}
{"url":"https:\/\/www.math.ucdavis.edu\/research\/seminars?talk_id=856","text":"# Mathematics Colloquia and Seminars\n\nIn joint work with J. Gubeladze we have explored the linear algebra'' properties of divisorial ideals over normal semigroup rings. In invariant theory these ideals appear as modules of semi-invariants of the actions of algebraic tori. The properties and invariants to be discussed are, among other things, the Cohen-Macaulayness and the number of generators. In particular we show that there exist, up to isomorphism, only finitely many Cohen-Macaulay (divisorial) ideals.","date":"2021-07-25 18:15:56","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7494627237319946, \"perplexity\": 731.3332657781971}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046151760.94\/warc\/CC-MAIN-20210725174608-20210725204608-00464.warc.gz\"}"}
Q: Convergence in $L_p$, Vitali's theorem, and convergence in measure

Working a measure theory question for practice from Bartle. Assume that

*$(X,\mathbb{X},\mu)$ is a finite measure space
*$f_{n}\rightarrow f$ in $L_{p}(X,\mathbb{X},\mu)$
*$\varphi$ is a real-valued continuous function on the real line s.t. there exists a positive number $K$ with $\vert\varphi(t)\vert<K\vert t\vert$ if $\vert t\vert>K$

Claim: $\varphi\circ f_{n}\rightarrow\varphi\circ f$ in $L_{p}(X,\mathbb{X},\mu)$

My approach thus far has been to use the Vitali convergence theorem. I haven't been able to show that $\varphi\circ f_{n}\rightarrow\varphi\circ f$ in measure without using the continuous mapping theorem. However, Bartle's book doesn't include this theorem, so either

*I'm not supposed to use the Vitali theorem, since we can't prove the convergence in measure without the continuous mapping theorem, or
*There's a way to prove convergence in measure without the continuous mapping theorem.

I'm a little lost with how to proceed.

A: Sorry, you are absolutely right. We don't need Lipschitz continuity. However, as you suggested, Vitali's theorem (or sometimes a generalized Lebesgue theorem) is applicable. The measure space $X$ is finite. Since $f_n\to f$ in $L_p$ we find that, for a subsequence, $f_n(x)\to f(x)$ for a.e. $x$. By continuity of $\varphi$ we find, for that subsequence,
$$g_n(x):=\varphi(f_n(x))\to \varphi(f(x))=:g(x).$$
We show that $g\in L^1(X)$; this implies $|g(x)|<\infty$ a.e. in $X$ (if not, the integral would necessarily be unbounded). The growth condition, together with continuity on $[-K,K]$, gives
$$|g(x)|=|\varphi(f(x))|\leq \max_{t\in[-K,K]}|\varphi(t)| + K|f(x)|.$$
Integration yields
$$\int_X |g(x)|\, d\mu \leq \int_X \left(C+K|f(x)|\right) d\mu \leq C_1\left(1+\|f\|_1\right)<\infty.$$
Furthermore, the last estimate yields uniform integrability, since $\|f_n\|<C_2$ uniformly in $n$. To get from the subsequence to the full sequence we use the following

$\mathbf{Lemma}$. Let $(X,d)$ be a metric space and let $x_n,x\in X$ be given. Then there holds: $x_n\to x\Leftrightarrow$ for every subsequence $n'$ of $n$ there is a further subsequence $n''$ of $n'$ such that $x_{n''}\to x$.

$\mathbf{Proof}$. The forward implication is obvious. Hence, for the other direction, assume that $x_n\not\to x$. This means that there is some $\varepsilon >0$ and a subsequence $n'$ such that $d(x_{n'},x)\geq \varepsilon$ for all $n'$. Due to our assumption there is a subsequence $n''$ of $n'$ such that $d(x_{n''},x)<\varepsilon$ for large $n''$. Contradiction.
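To spell out the uniform-integrability step for the exponent $p$ from the question (a sketch, assuming the bound $|\varphi(t)|\leq C+K|t|$ established above): for any measurable $E\subset X$,
$$\int_E |\varphi(f_n)|^p\, d\mu \;\leq\; 2^{p-1}\int_E \left(C^p + K^p|f_n|^p\right) d\mu \;=\; 2^{p-1}C^p\,\mu(E) + 2^{p-1}K^p\int_E |f_n|^p\, d\mu,$$
using $(a+b)^p\leq 2^{p-1}(a^p+b^p)$. Since $f_n\to f$ in $L_p$, the family $\{|f_n|^p\}$ is uniformly integrable, so the right-hand side is small uniformly in $n$ once $\mu(E)$ is small, and Vitali's convergence theorem then applies to $\{|\varphi(f_n)|^p\}$.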
{ "redpajama_set_name": "RedPajamaStackExchange" }
119
Microsoft Office Communications Server (better known as OCS or simply Skype for Business) is a real-time communications server for enterprises, providing the infrastructure for enterprise instant messaging, presence, file transfer, file sharing, ad hoc multiparty voice and video calls, structured conferencing (audio, video and web) and PSTN connectivity. These features are available within an organization, between organizations, and with external users on the public Internet or on standard phones, over the PSTN as well as via SIP trunking. In February 2009, Microsoft released the R2 version of the server, which runs only on 64-bit platforms and is more scalable. R2 brings new features such as Group Chat and the Attendant Console. Some OCS functions are supported in a virtualized environment.

Client software and devices

Microsoft Office Communicator 2007 ('MOC' or 'OC') and the LiveMeeting Console (LMC) 2007 are the main client applications released with OCS. OC 2007 is the client used for instant messaging, presence, voice, video calls and ad hoc conferencing. LMC is used for more structured meetings, conferencing and application sharing. It can run natively against OCS or against the hosted LiveMeeting service. Microsoft Attendant Console is a version of MOC oriented more toward receptionists or delegates and assistants. Microsoft Group Chat Client is a group chat client specifically for the Group Chat Server. Other client software and devices include: Office Communicator Mobile 2007 R2, a Windows Mobile edition of the Office Communicator 2007 client designed to work in a similar way; Office Communicator Web Access 2007, a web-based instant messaging and presence client, which also works in IE, Firefox and Opera; and Microsoft RoundTable, an audio and video device that provides a 360-degree view of the conference room and tracks the individual speakers.

Features

One of the basic uses of Office Communications Server is instant messaging and presence within a single organization. This includes support for presence information, file transfer and instant messaging, as well as voice and video communications. (The latter features are often not possible even within a single organization using public instant messaging clients, due to the effects of negotiating corporate firewalls and Network Address Translation.) OCS 2007 also supports remote users, both corporate users on the Internet (for example, mobile or home workers) and users at partner companies. OCS 2007 allows interoperability with other corporate instant messaging networks. Federation can be configured manually (where each partner manually configures the relevant edge servers) or based on the use of appropriate SRV records in DNS. Media are transferred using RTP/SRTP. The Live Meeting client uses PSOM to download meeting content. The Communicator client also uses HTTPS to connect to the web components server to download address books, expand distribution lists, etc.

By default, Office Communications Server encrypts all signaling and media traffic using SIP over TLS and SRTP. There is one exception to this: traffic between the Mediation Server and a media gateway is necessarily carried as SIP over TCP and RTP. However, if a hybrid gateway is leveraged, such as one from Microsoft's Open Interoperability program, then everything is encrypted at every point. IM is only one part of the OCS suite. The other major components are VoIP telephony and video conferencing through the Communicator desktop client. Remote access is possible using mobile and web clients.

History

When Microsoft Office Live Communications Server was released on December 29, 2003, it replaced the Exchange Instant Messenger Service, which had been included in Exchange 2000 but was removed from Exchange 2003. Holders of Exchange 2000 licenses that included Software Assurance were entitled to receive Live Communications Server as an upgrade, along with Exchange 2003. OCS R2 was announced at VoiceCon in Amsterdam in October 2008, just 364 days after the release of Office Communications Server 2007. This version offers major advantages over the original solution and firmly positions Microsoft as a major player in IP telephony and video (telepresence). The new management capabilities handle a large volume of incoming calls and quickly connect them to recipients with a single click. The new desktop sharing allows Windows, Macintosh and Linux users to collaborate while talking to each other using the enhanced audio conferencing features. The Group Chat feature allows organizations to set up searchable, topic-based chat rooms that persist over time, enabling geographically distributed teams to collaborate with one another. Interestingly, all of these features are covered by a single per-user license through Microsoft's Enterprise Client Access License (ECAL). This is dramatically different from traditional telephony. The improved on-premises audio conferencing capability puts enterprises in control of their audio conferencing infrastructure and saves money on audio conferencing costs compared to hosted bridges. The Single Number Reach feature allows businesses to log calls made by cell phone users for accounting purposes, while helping to ensure that the dialing rules that apply to calls made by users from their work phones also extend to their cell phone calls. One of the biggest advantages of having a software-based communications infrastructure is that enterprises can integrate communication capabilities into existing line-of-business applications and into communications and workflows to automate business processes, which saves money, saves time and improves customer service. Office Communications Server 2007 R2 provides an extensible communications platform that works with the organization's existing messaging and telephony infrastructure and can adapt to changing business needs.

This extensibility is one of the main reasons Gartner placed Microsoft at the top of its Unified Communications Magic Quadrant for 2007 and 2008.

External links

Microsoft Office Communications Server (official site)
LCS feature comparison
LCS Developer Portal
LCS Developer articles
LCS Support website and forum, by Meni Milstein - LCS MVP
Customer Case Study
Factors to consider when deploying Microsoft's Office Communications Server (OCS) 2007
Innovative Communications Alliance (Nortel – Microsoft)
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,579
\section{Introduction}
Optical composites (metamaterials and metasurfaces) emerge as flexible platforms for novel optical applications that include superimaging, planar lensing, and sensing \cite{hasman1,capassoMetasurface,shalaevMetasurface,metasurfaceYu,metasurfaceCapasso2,metasurfaceTsai,metasurfaceBrongersma,eleftheriadesMetasurface,PCBook, ShalaevBook, vpBook}. In these applications, the computationally expensive design and optimization of the structure of optical composites represents a crucial bottleneck. Machine learning (ML) has been instrumental in addressing some needs of the photonics community\cite{MLPhoto.1,MLPhoto.2,MLPhoto.3,MLFan,MLSuchowski,MLCai,MLLiu,MLLiu2,MLFan2,MLFainman,MLImagingGhosh,MMMLFan}, to the point that ML is sometimes predicted to overtake the scientific development process itself\cite{endOfTheory}. However, conventional ML algorithms often require large datasets to produce properly trained, generalizable models\cite{MLbook}. Therefore, ML deployment for optimizing complex composites is often slow and problematic. Several approaches that mitigate the required size of the training set, for example by training in a parameter sub-space that minimizes the uncertainty of the resulting model\cite{smartLearning,soljacic}, have recently been proposed. Attempts to build Physics-informed ML, incorporating analytical equations into the ML learning process itself, have shown promise for simple differential equations, as well as in the physics of fluids and in imaging\cite{PGintro,PGturbulence,PGMLJi}. Here, we present physics-informed ML for optical composites and illustrate the proposed formalism on the example of solving for the modes of a composite with a periodic permittivity profile, achieving fast and highly generalizable predictions with relatively small training datasets. We develop a class of ML models that map the spatial profile of the permittivity of the composite to the combination of the propagation constant and the parameters that determine the spatial behavior of the mode supported by the system (Fig.\ref{composite}). Specifically, the developed ML process predicts the properties of the highest-effective-index TM-polarized mode propagating in a multi-layer periodic composite whose unit cell contains 10 layers. Several sets of composites, some purely dielectric and some plasmonic, are used to assess the accuracy and generalizability of the resulting models. The resulting ML models fully utilize the benefits of the parallelism offered by Graphics-Processing-Unit (GPU) computing that are unavailable to iterative eigenvalue solvers\cite{linalg}. Note that although we present data for 1D composites, the mathematical formalism used to map Maxwell equations to an eigenvalue problem, rigorous coupled wave analysis (RCWA)\cite{RCWA,normalVector}, can be directly used for periodic and non-periodic 2D media\cite{non-periodic1,non-periodic2,non-periodic3}.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{figGeomVP.jpg}
\caption{(a) An example of the distribution of dielectric permittivity across one period of the multilayered composite and (b) spatial dependence of the highest-index propagating mode supported by this composite.
Solid arrows illustrate Physics- and ML-based approaches to solving Maxwell equations, with the dashed line illustrating the new Physics-informed ML described in this work.}
\label{composite}
\end{figure*}
\section{Exact solutions of Maxwell equations}
In the case of the periodic layered materials considered in this work (with $x$ being the direction of layer growth), solutions to Maxwell equations can be represented as a linear combination of modes with either transverse-electric (TE, $E_z\neq 0, H_z=0$) or transverse-magnetic (TM, $E_z=0, H_z \neq 0$) polarization, with the full Maxwell equations reducing to independent partial differential equations for the $E_z$ and $H_z$ components, respectively, and with the remaining ($x,y$) components of the fields expressed in terms of their $z$ counterparts. In particular, propagation of the TM-polarized waves is given by:
\begin{equation}
\label{eq:1}
\frac{\partial^2{H_z} }{\partial{y^2}}=-\epsilon(x)\left[\frac{\partial{}}{\partial{x}}\left(\frac{1}{\epsilon(x)}\frac{\partial}{\partial{x}}\right)+\left(\frac{\omega}{c}\right)^2 \right] H_z,
\end{equation}
with $\omega$ being the angular frequency and $c$ the speed of light in vacuum. RCWA\cite{RCWA,normalVector}, a semi-analytical method to analyze the mode structure of spatially periodic composites, takes explicit advantage of the periodicity and uses the Fourier expansion of the spatial profiles of both the permittivity and the electromagnetic fields:
\begin{eqnarray}\label{eq:2} \epsilon(x)&=&\sum_m{\varepsilon_m e^{-i q_m x}}\nonumber \\ H_z(x,y)&=&e^{ik_y y}\sum_m{h_m e^{i k_{xm} x}} \end{eqnarray}
to convert the differential Eq.(\ref{eq:1}) to an eigenvalue problem:
\begin{equation}\label{eq:3} \sum_j\hat{A}_{m j} h_j=k_y^2 h_m \end{equation}
with
\begin{equation}\label{eq:4} \hat{A}_{m j} =-\sum_s{\varepsilon_{s-j}k_{xm}\varepsilon_{m-s}^{-1}k_{xs}}+\left(\frac{\omega}{c}\right)^2\varepsilon_{m-j}. \end{equation}
In the equations above $q_m=\frac{2\pi m}{\Lambda}$ is a multiple of the reciprocal lattice constant, $\Lambda$ is the period of the composite, $k_{xm}=k_0+q_m$, $\varepsilon^{-1}_m$ represent the Fourier coefficients of $1/\epsilon(x)$, and the parameter $k_0$ plays the role of the $x$ component of the quasi-wavenumber of the mode. In all practical realizations, finite Fourier expansions (rather than Fourier series) have to be used. The complexity of the composite determines the number of terms in the Fourier expansions that are required for an adequate representation of the permittivity and electromagnetic fields, and in turn determines the size of the matrix $\hat{A}$. As the complexity increases, direct solution of Eq.\eqref{eq:3} becomes increasingly slow and resource-intensive, motivating the development of tools that avoid the direct solution of the eigenvalue problem, such as the ML-assisted mode analysis presented here. To comprehensively assess the performance of the ML-based models we generated three datasets, each representing the geometry of a particular composite and the (highest-$k_y$) mode propagating in this composite. In these studies, the size of the unit cell was fixed at $\Lambda=5\lambda_0$ (with $\lambda_0=2\pi c/\omega$ being the free-space wavelength), each period of the composite was assumed to contain 10 layers of identical thickness, the quasi-wavenumber of the mode was parameterized by the angular parameter $\theta$ via $k_0=\frac{\omega}{c}\sin\theta$, and the Fourier expansions contained the components corresponding to $m\in[-m_{\rm max},m_{\rm max}]$ with $m_{\rm max}=50$ in Eq.\eqref{eq:2}.
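For orientation, a quick count of the resulting problem size (this arithmetic is ours, consistent with the dataset description below): the truncated basis contains $2m_{\rm max}+1=101$ harmonics, so $\hat{A}$ in Eq.~\eqref{eq:3} is a $101\times 101$ complex matrix, while a single mode is described by
\begin{equation*}
2\left(2m_{\rm max}+1\right)+2=204
\end{equation*}
real numbers: the real and imaginary parts of the coefficients $h_m$, plus those of the eigenvalue-defining $k_y$.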
The first two (photonic) sets contained data for lossy ($0<\epsilon''<0.1$) low-index $(1<\epsilon'<4)$ and high-index $(1<\epsilon'<16)$ dielectric stacks, with randomly assigned permittivity for each sub-layer. The remaining set was similar to the high-index dielectric set, with 25\% of configurations containing plasmonic sub-layers with permittivity of $\epsilon=-100+25i$. Each set contains data for 2000 geometries with $\theta\in[20^\circ,40^\circ,60^\circ,80^\circ]$ for each geometry. Overall, each dataset contained 8000 combinations, mapping the configuration, parameterized by 21 real numbers $\{\theta,\epsilon',\epsilon''\}$, onto a set of 204 real numbers that represent the real and imaginary parts of $k_y$ and $h_m$ [see Appendix]. Fig.~\ref{nEff} illustrates the distribution of the propagation constants of the modes within each dataset. Note that the propagation constants of the modes in the dielectric composites are constrained by the largest refractive index within the set. In contrast, the plasmonic dataset contains modes with very high propagation constants that originate as a result of the interplay between different surface plasmon polaritons\cite{vpBook}.
\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{n_eff.png}
\caption{Distribution of the propagation constants of the modes within the three datasets used in this work; each dataset contains 8000 configurations.}
\label{nEff}
\end{figure}
\section{Black box- and Physics-informed ML}
Artificial Neural Networks (ANNs) emerge as robust and flexible tools capable of deducing the input$\rightarrow$output dependencies within the data\cite{MLbook}. The ANNs, inspired by biological neural networks, contain a set of linear coupling layers and a set of nonlinear activation layers stacked in between the coupling layers. The typical ANN feeds the input into its first layer; information then flows within the ANN as the output of one layer becomes the input for the subsequent layer; the output of the final layer represents the ML-based prediction. During the training stage, the coupling coefficients that define the information flow are adjusted to minimize the {\it loss function}, the deviation between the ANN-based prediction and the known exact results (ground truth). After training, the coupling coefficients are fixed and the ANN-based model is ready for deployment. In this work we use three different approaches to train the ANN-based models. First, the default physics-agnostic ``black box'' formalism is used. In this approach, mean-squared deviations between the components of the predicted and ground-truth sequences are used as the optimization criterion during the training process\cite{MLbook}. In the second, meaning-informed approach, the loss function is adjusted to explicitly utilize the fact that the output sequence contains the combination of the eigenvalue and the components of the eigenvector. Meaning-informed loss is used in the final approach as well. In addition, this Physics-informed approach uses the input sequence to generate the matrices $\hat{A}$ within the network during the training process and to enforce Eq.\eqref{eq:3} as an additional constraint during the training, aiming to produce explicitly physics-consistent results. Apart from the implementation of the loss function, the topology of the three ANNs is identical (see Appendix for more details).
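Although the implementation details are relegated to the Appendix, a residual of the following schematic form captures the idea of the physics term (a sketch; the specific normalization and weighting are our assumptions, not a statement of the actual implementation):
\begin{equation*}
\mathcal{L}_{\rm phys}=\frac{1}{2m_{\rm max}+1}\sum_m\Big|\sum_j \hat{A}_{mj}\, h_j - k_y^2\, h_m\Big|^2 ,
\end{equation*}
evaluated on the predicted pair $(k_y,\{h_m\})$, with $\hat{A}$ assembled from the input permittivity profile via Eq.~\eqref{eq:4}. Notably, such a term can be evaluated even when no ground-truth solution is available, which is what makes training on unlabeled configurations (see below) possible.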
Our implementation of the physics-informed training follows the recipe for dynamic adjustment of the loss\cite{VT} that has been shown to improve the convergence of models that use multiple competing objectives\cite{MLhard}. Note, however, that in contrast to Ref.\cite{VT}, the proposed formalism calculates the matrix $\hat{A}$ only during training, not during the prediction stage. Moreover, even during training, the model does not directly solve the eigenvalue problem but rather tests the validity of the ML predictions against Eq.\eqref{eq:3}. The single matrix multiplication that is required for such testing is much faster than the iterative algorithms that often underlie eigenvalue solvers\cite{linalg}. In addition to enforcing consistency with Maxwell equations, Physics-informed training enables the expansion of the training set by ``padding'' it with configurations of composites that the model may expect to see in future deployments. As we show below, even in the absence of full solutions, these configurations (termed {\it unlabeled data} below) can significantly improve the quality of the resulting ML model.
\begin{figure}[tbh]
\centering
\includegraphics[width=7cm]{figOverlapVP.jpg}
\caption{Illustration of the relationship between the numerical value of the overlap parameter $O$ and the quality of the prediction of the spatial profile of the mode.}
\label{overlap}
\end{figure}
A properly trained ML model should correctly predict the propagation constant of the mode as well as its spatial profile. We therefore use two (dimensionless) parameters to characterize the performance of the ML models: the normalized error in the prediction of the propagation constant, $\delta$, and the modal overlap across the unit cell, $O$ (see Appendix for derivation),
\begin{eqnarray}
\label{eq:5}
\delta&=&\left|\frac{{k_y}-k_y^{\rm gt}}{k_y^{\rm gt}}\right|, \nonumber \\
O&=&\frac{\left|{\sum}_m{h_m^*} h_m^{\rm gt}\right|}{\sqrt{\sum_m |h_m|^2}\sqrt{\sum_m |h_m^{\rm gt}|^2}}
\end{eqnarray}
In the expressions above, the quantities with the ``gt'' superscript represent the ground truth, the quantities with no superscript represent ML predictions, and ``*'' corresponds to complex conjugation. Figure \ref{overlap} illustrates the relationship between the value of the parameter $O$ and the agreement between the predicted and exact profiles of the magnetic field across the unit cell. It is seen that $O\gtrsim 0.8$ represents adequate prediction quality.
\section{Results and Discussion}
\begin{figure}
\centering
\includegraphics[width=8cm]{figBaselineCombinedAG.png}
\caption{The accuracy of predicting the propagation constant (a,c) and spatial profile (b,d) of modes from the low-index photonic dataset (a,b) by different ML models trained on $\sim$800 configurations/model, and (c,d) by models trained on a subset representing $\sim$200 configurations with a pre-selected value of the parameter $\theta$; additional configurations (without corresponding mode solutions) are used as unlabeled data for one of the physics-informed models.}
\label{diffModel}
\end{figure}
Figure \ref{diffModel} illustrates the typical performance of the different ML models that are trained on 10\% ($\sim$800 randomly selected configurations) of one of the datasets and tested on predicting the modes of the remaining configurations within the same dataset. Note that both meaning- and physics-informed models drastically outperform their black-box counterpart.
Further analysis (see Appendix) demonstrates that a significant expansion of the training subset can improve the performance of the BB model. The requirement to have a large training set represents the main limitation of conventional science-agnostic ML, the limitation that makes such models virtually impractical in applications where training sets are scarce (due to, for example, the significant time that it takes to solve Maxwell equations in composites with complex geometries). All models lose accuracy when the training set is further decreased by considering only $\sim$200 configurations, a subset of the initial training pool representing one pre-selected value of the parameter $\theta$ [see Eq.\eqref{eq:2}], Figure \ref{diffModel}(c,d). However, the performance of the Physics-informed model can be significantly improved by expanding its very limited training set with unlabeled configurations. Note that providing these extra configurations -- with no corresponding solutions -- brings the model performance almost in line with the baselines that have 4-times-larger training libraries of propagation constants and mode profiles. One of the main limitations of ML models is their limited ability to predict the properties of systems that significantly deviate from their training data. As seen in Fig.~\ref{nEff}, the datasets used in this work are designed to have significantly different distributions of optical modes. As a result, the models trained on a particular dataset perform best on predicting the propagation constants and mode profiles for configurations within the same dataset. The models trained on the plasmonic dataset also perform well on predicting the properties of the high-index photonic dataset (which essentially represents a subset of its plasmonic counterpart; see Appendix). However, the models trained on high-index plasmonic or photonic datasets perform poorly when they are deployed to analyze the modes of low-index configurations, and vice versa. This phenomenon is illustrated in Fig.~\ref{figPG}. Once again, the performance of the models based on physics-informed training can be significantly improved by expanding their original plasmonic training set with unlabeled low-index configurations. Note that such an improvement in model generalizability does not affect the model's performance on its original dataset. Overall, our analysis suggests that the unlabeled subset promotes physics-consistency, with the resulting models correctly predicting eigenvalue/eigenvector pairs of Eq.\eqref{eq:3}. However, in the absence of sufficient labeled data, the models often fail to predict the particular solution representing the largest-$k_y$ propagating mode within the spectrum, especially in composites where multiple modes with similar propagation constants are supported. Convergence of ML-based models to the proper modes can be improved either by the techniques introduced in Ref.\cite{VT} (such as introducing eigenvalue ``pull'' terms into the loss function) or by expanding the labeled training subset. Along with prediction accuracy, prediction speed represents another major factor in practical applications of mode analyzers, such as optimizing the geometry of the composite to achieve a particular field distribution, mode confinement, or propagation constant.
Here, pre-trained ML models are drastically faster than their direct Physics-solver counterparts: it takes $\sim 0.3$\,s to predict the properties of the high-$k_y$ modes in all 8000 elements of a given set on our desktop with an Intel Core i7-10700 processor and an NVidia GeForce RTX-3060 GPU, compared with the $\sim 80$\,s it takes to run the RCWA algorithm. In our tests the prediction speed was almost independent of the ML solver as well as of the complexity of the problem (see below); given previous research, we expect the prediction time to grow when the size of the dataset is significantly increased (to $\sim 10^5\ldots 10^6$ configurations). The time it takes to train the model depends not only on the size of the training set but also on the method used. ``Black box'' and Meaning-informed models train in virtually the same time ($\sim 200\ldots 250$\,s for 5000 epochs). The Physics-informed model trains substantially slower ($300\ldots 600$\,s for 5000 epochs, depending on the size of the training dataset).
\begin{figure}
\centering
\includegraphics[width=8cm]{figPG_AG.png}
\caption{Performance of ML models trained on the plasmonic dataset in predicting the modes of plasmonic (a,b) and low-index photonic (c,d) geometries; low-index photonic configurations (without corresponding mode solutions) are used to supplement the training of the {\it PI+unlabeled data} model; the last bin in panels (a,c) represents data with $\delta\ge 0.5$.}
\label{figPG}
\end{figure}
Therefore, when the number of configurations under study is small, it is beneficial to use direct solvers. Once the number of configurations reaches some critical value, ML-based tools increasingly outperform their physics-based counterparts in terms of the overall training+prediction time. For the 1D toy model of moderate complexity considered in this work, ML tools ``break even'' with brute-force solvers when the number of configurations reaches $\sim 2\times 10^4$. The advantage of ML tools grows as the complexity of the composite increases. For example, increasing the dimensionality of the eigenvector (by increasing the parameter $m_{\rm max}$) strongly affects the RCWA runtime. However, the time required to train black-box and meaning-informed models grows at a slower rate, while the time it takes the model to predict the solutions is virtually unaffected by these changes (see Fig.\ref{figTiming}). The break-even points for BB and MI models steadily decrease as $m_{\rm max}$ increases. For relatively simple systems, the PI models behave similarly to their MI and BB counterparts. As a result, we conclude that the training time of these models is dominated by the calculation of gradients and the adjustment of the learning parameters of the ANN. For more complex systems ($m_{\rm max}\gtrsim 100$), the dependence of the training time of the PI models on the complexity becomes similar to that of RCWA, reflecting the regime where the training process is dominated by the calculation of the Physics loss. In our studies, PI models with large training sets are also affected by hardware constraints: the model exhausts the available GPU memory when $m_{\rm max}\simeq 100$, explaining the rapid rise of the training time due to memory swapping. Overall, it is seen that the ML-based solutions provide a meaningful speedup for large-scale exploratory studies of the optics of composites, especially when a pre-trained model is used.
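As a consistency check on that break-even figure, a back-of-the-envelope estimate from the timings quoted above (ours, and neglecting the cost of generating the labeled training subset): with a per-configuration RCWA cost of $t_{\rm RCWA}\approx 80\,{\rm s}/8000 \approx 10$\,ms, a per-configuration prediction cost of $t_{\rm ML}\approx 0.3\,{\rm s}/8000 \approx 0.04$\,ms, and a training time of $T_{\rm train}\approx 200$\,s, the break-even point is
\begin{equation*}
N^* \approx \frac{T_{\rm train}}{t_{\rm RCWA}-t_{\rm ML}} \approx \frac{200\,{\rm s}}{10\,{\rm ms}} \approx 2\times 10^4 ,
\end{equation*}
in line with the quoted value.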
\begin{figure}
\centering
\includegraphics[width=8cm]{figTiming.jpg}
\caption{(a) Comparison of the time it takes to calculate the properties of 8000 modes with RCWA and to train ML algorithms on 10\% and 5\% of these configurations (it takes 0.3\,s to predict 8000 modes with all ML models in our studies). (b) The number of configurations at which the time it takes to run RCWA calculations breaks even with the time it takes to train the ML model on a subset of the configurations and use it to predict the mode properties of the remaining configurations; in both panels TS stands for the size of the training set.}
\label{figTiming}
\end{figure}
\section{Conclusions}
We developed an approach to introduce Physics-based constraints into ML algorithms for the analysis of optical composites and demonstrated that Physics-informed ML models provide better generalizability and better accuracy than their ``black box'' counterparts. The utility of the Physics-informed ML has been illustrated on the example of calculating the properties of the highest-index mode of periodic multilayered composites, where pre-trained ML models offer an orders-of-magnitude (0.3\,s vs 80\,s) speedup over conventional numerical solutions of Maxwell equations. The developed formalism is directly applicable to the calculation of modes in arbitrary periodic composites and can be extended to non-periodic and guiding structures, resolving crucial computational bottlenecks in the design and optimization of composite-based applications. More importantly, the ability to introduce Physics-based constraints into ML algorithms provides a pathway to merge the benefits of the powerful pattern-recognition-based learning that is inherent to ML with the benefits of the analytical scientific knowledge that has been accumulated within the physics community.
\subsubsection{Funding}
This research is supported by the NSF (Grants \#III-2026703, \#IIS-2026702, \#2026710).
\subsubsection{Data Availability Statement}
The codes used in the project, along with the data underlying the results, are available in Ref.\cite{githubPGMLvp} and may be obtained from the authors upon reasonable request.
\end{document}
Q: How to rate movies on Apple TV

I just rented a movie on Apple TV. How do I give it a star rating, i.e. 1–5 stars? When you select a movie on Apple TV, it shows a Rotten Tomatoes rating and a star rating. In the past I stumbled across the star-rating selector UI by accident; now I can't find it. I have a third-gen Apple TV running 6.0.2.

A: Currently (March 2017) you have to log in to iTunes to rate and leave a review. I hope they add this feature to Apple TV soon.

A: I just watched Gladiator for probably the 5th time and, with a tear in my eye, wanted to give it a 5-star rating. I have rated movies on Apple TV in the past but cannot find how to do it anymore. I can't find it on the rental screen, in the "more" menu, or in the menu while the movie is playing. I've also checked whether it is only available for purchased movies, which is not the case. I am left to believe the feature has been removed. I hope I am wrong. However, I also scoured the internet for an image of someone rating a movie on Apple TV and could not find one, which leads me to believe I am crazy. Maybe we're both crazy. Or maybe it's just gone because no one cared about it.

A: Figured it out. In iTunes, go to the Rented tab, right-click on the movie's artwork and select Love or Dislike. There is also an option to go to the movie in the iTunes Store; click on that link and you can choose how many stars to give and even write a review. SOLVED.
For my Dad. You will always be my best pal. Thanks for teaching me life's most important lesson: how to live life to the full.

INTRODUCTION

THE LARDER
Fermented Vegetables, Fruits and Herbs
Meat, Fish and Game Preservation
Pickles and Jams
Dairy, Butters and Oils
Powders, Salts and Crisps
Stocks, Sauces and Seasonings

FOR THE TABLE
Bases and Blends, Chef's Cocktails and Home Brews
Snacks
Garden
Sea
Land
Sweet

ACKNOWLEDGEMENTS

INTRODUCTION

MY JOURNEY

My wife's nan, Nanna May, trusted people and generally warmed to them if they were what she called a 'good eater'. She ran the kitchen of a café in north Dublin and every now and again she would get visits from a local villain who was often in jail for one thing or another. Whenever he was out, he would always make time to pop in for a lasagne, a pot of tea and a large slice of cake. He was a 'good eater', and she genuinely thought of him as a 'lovely fella' because he ate well and finished what he ordered. I love how food and cooking for someone can have this effect on people: Nanna May, to my knowledge, was certainly anything but a heavy 'gangster godmother' type, yet these two complete opposites had a bond built by the simple process of cooking for someone.

Way back when, I was what Nanna May would have called a 'bad eater'. Until I was about 14, my relationship with food was pretty horrific. I used to love a disgusting Granby beef burger that had been cremated under a grill and slapped into a horrid white processed bun. I also ate terrible-quality frozen pizzas, alphabet spaghetti and all shapes and sizes of frozen chicken-dipper-type things. It wasn't that my parents didn't expose me to good food. Quite the opposite. But for some reason this was the sort of crap that I insisted on eating. I point-blank refused to try what they were having at home, or in all the amazing restaurants around the world that I was lucky enough to be taken to. I would have been far happier with a McDonald's. I still don't know why I was like that and it's my biggest fear that my son Ziggy will follow in my bizarre footsteps. It's much easier to tell your mum and dad that you won't try something that they have lovingly prepared for you, but when you're a guest in a mate's home, well that's different.

Growing up, I spent a lot of time in my best friend Paul's house. His parents, Tom and Deirdre, were amazing cooks. It was the type of house where when you walked into the downstairs toilet you might get quite a fright, finding a brace of freshly shot birds hanging off the showerhead. There were regularly lobsters, crabs and Dublin Bay prawns that had been gifted to Tom in the local pub, and Deirdre made her own pizzas on a Friday. I pretended to have dietary problems to avoid eating certain things (how ironic, as that happens at least once every night in the restaurant... karma, I hear you muttering!). Slowly, too embarrassed to say no, I was exposed to all the amazing things I had been missing out on. Looking back, I now realise how much the McNerney kitchen in Clarinda Park shaped and influenced me. Paul had always wanted to be a cook; I didn't know it then but I was to follow him along many of his own chosen paths.

My father was a musician and my mother a choreographer. Both had very long, successful careers in the business and I had immense pressure to follow in their footsteps. Initially I tried music and dropped out, then dance. My father wasn't too happy about that so it didn't last long either. Then it was acting. Yep, packed that in too. I stuck at nothing!
I tried all sports too – I bought the best gear – and packed it in two weeks later. Then there was school... Well, I'm sure you're seeing a pattern. Having convinced my parents to put me into an expensive school in the centre of Dublin for my final year before college, I had reached the final flunk! I decided university wasn't for me and had a mate put me forward for an apprenticeship as an electrician. I came home to tell my parents of yet another failure, but at least this time I had a plan. My parents couldn't understand why I would want to become an electrician – as my father said, 'you can't even change a lightbulb', but he suggested cooking. I had been cooking at home and now at 17 I had a thirst for knowledge plus, finally, a decent appetite. So, taking some advice, I decided to avoid college and get straight into it. I applied for a job in one of the busiest and highest-profile spots in Dublin. I have no regrets about passing up the chance to go to college as I felt I had some catching up to do and had wasted enough of my time and my parents' money over the years. This was my chance to show everyone that I could do something and would stick with it no matter what – hopefully with some success.

I have now been cooking professionally for 18 years and I have managed to cook in some of the best and toughest kitchens in Europe, yet I can honestly say that my first experience at that restaurant in Dublin was the most unnecessarily brutal and it has scarred me for life.

Keen as mustard with a brand-new set of knives, I set foot into the kitchen, full of energy and excitement. The hustle and bustle of cooks from different lands running around frantically trying to get ready for the lunch service instantly drew me into this new world. This was the place for me. To say I was fresh is an understatement. Within my first few weeks I had several cuts and, as my body adjusted to the 18-hour days, I would literally fall into bed at night fully clothed, only to wake up panicking about the job. I loved how much I was learning and tried to write down everything as the weeks flew by into months.

I made mistakes along the way, as you would expect, but what I had not expected was the bullying. There was a senior team of about five guys from Head Chef down to Chef de Partie. They were the pirates running the ship and seemed quite close. It started with silly pranks, like asking me to run to the basement walk-in fridge to count the produce. Upon my return they would sneer and make degrading remarks. Then there was the aggression when I had hot soup thrown at me and was forced to clean up the mess. For no apparent reason, they all seemed to dislike me. Whenever I asked a question, I was either threatened or screamed at. The quiet evenings were the worst as they had more time to be cruel.

One evening they were all standing around a bucket of garlic, peeling it, and I walked up to try to take part. I reached in to grab some garlic, at which they all stopped chatting and glared at me. The horrific, intimidating silence was broken by the Sous Chef calling me a 'fucking queer'. I walked away while they were all laughing and struggled to hold back the tears. This continued for months. I dealt with it by telling the stories to my pals and turning it into humour.

We were permanently understaffed because many of the new cooks who took the jobs would not last the day. I stuck with it and began to move up the ranks, to managing my own section.
On a Friday lunchtime with 250 covers on the books, there was a backlog of checks that we weren't keeping up with. One of the cooks screamed at me, threw me four duck breasts and told me to get them cooking. I was nervous as I didn't know how to do this, so I threw a couple of pans on full blast heat and added two huge ladles of oil. The oil instantly started to smoke. I suddenly remembered seeing duck breasts going into a warm pan with no oil, skin side down, and went back to question the chef. He screamed at me to 'just cook the fucking things', so I panicked and threw one breast into the hot oil. A flame spurted up as I threw in the second breast. The hot oil splashed up to my neck, chin and face, covering a third of my face. I was rushed to hospital.

I was out of the kitchen for two weeks and paid for just one. The doctor told me I should take two months off to recover. Upon my return to the kitchen the men were standing around my section reading my recipe book and making jokes. I grabbed my book back and one of them barked, 'Have you got the recipe for the duck?' They all cracked up laughing. That evening one of the waitresses was asking me about my burns when the Head Chef shouted down: 'Get the fuck away from him. He has AIDS. Just look at his fucking face.' Embarrassed, I went back to my station.

I took advice from my brother who was a manager at a restaurant in Dublin called QV2, run by a wonderful gent, Count John McCormack, and they offered me a job in the kitchen. I thanked them but I wanted to prove that I could cope and do my job before I left it.

That day came when I was finishing a week where I had worked incredibly hard and had successfully smashed every service running a two-man section solo. I had cleared an area where I could lay out plates, to prepare for the slamming we were expecting. The chef grabbed me and said: 'What the fuck are you doing, you retard? Fuck off over there and strain the stocks.' That was it! I'd had enough. I threw down my apron and told him to go fuck himself and strain it himself. I went to change and gather my things, and completely broke down.

As I write this I am fighting back the tears. There is no reason for this kind of behaviour. It didn't make me a better cook. They couldn't staff the kitchen and the team all broke apart – after a few months the Head Chef walked out. If you asked me to name them I honestly couldn't; I don't remember. It's as though I've blocked them out. Thankfully this is not commonplace across the industry now, but it does happen and not only in kitchens.

From then on my career improved. I worked in better kitchens and travelled the world, and eventually started setting up my own restaurants. The businesses are run with passion and hard work but without fear. There will never be any bullying in my kitchens. I ensure there is a warm and loving atmosphere where everyone is as important as the next and everyone has a say. Ideas and creativity are embraced. The people I am blessed to work with are not my 'staff', they are my family and they know that.

FLYING THE NEST

Aged 19, my two pals, Paul and Ed, had landed jobs in London. Not messing about, Paul was going to Le Gavroche and Ed to Chez Nico. I didn't want to be left behind so I decided to follow. I took their advice and bought a copy of the London Michelin guide and a tube map. For two days I knocked on the back door of every kitchen that had a star or stars.
I had a letter with me pleading for a position (addressed to the Chef de Cuisine, whose name I had quickly checked on the menu at the front of the restaurant). London impressed me but scared me and the tube confused me. I dropped in everywhere green as green could be. I was completely out of my depth and not too hopeful of landing a job.

The last place I called into was the Oak Room Marco Pierre White. It was in Le Méridien Hotel on Piccadilly. I couldn't figure out where the back kitchen entrance was so I entered nervously through the front doors: the place just took my breath away. Robert Reid was running the kitchen and he took the time to come and speak to me. I told him of my lack of experience and that I hadn't been to college but that I had decided to move to London and get my ass kicked there instead. He laughed and told me of the great times he had had at college and not to rush, but if I was serious I could start in a month.

That was it! The three green amigos were off to London, all starting on the same day in the three top spots in the UK. It wasn't easy. I made mistake after mistake and worked far slower than anyone else in the kitchen. I was there six days a week, getting home at 1am and back in for 8am, but every day got a little easier. Sure, I was shouted at during service and called a few names that still make me laugh, but the longer I was there, the more I became part of something, part of a team. And I developed skills that I would never forget. Robert was a fair man and became almost a father figure to me in a city that can be a very cold place at times. I spent a year and a half there, meanwhile dreaming of an easier time I'd had with my amazing friends and family in Dublin.

At 21 I returned home and took a role that I was not really ready for – as a Sous Chef at a place that I loved and had worked at before, just a stone's throw from where I grew up. It was called Brasserie Na Mara, overlooking Scotsman's Bay. I loved it there but found myself reproducing dishes I had learned to cook at the Oak Room and doing them badly. So I decided it was time for a change again.

I set off for Asia on what was supposed to be a three-week holiday but turned into a six-month stint. I became a complete hippy and told myself that I would never return to London, or anything like London, and never work in a Michelin-starred restaurant again. Life was too short and there was too much to see.

So where should I go next? Well, whenever I was lost I would always talk to Paul. At the time, he was in Naples, working in a beautiful restaurant with a Michelin star called Taverna del Capitano, right on the water in Marina del Cantone, a small fishing village. That sounded nice but I didn't want to work anywhere that had a star. I loved Italian cooking, as that's what I had started off cooking, so I set off for Naples with the romantic idea that I would be learning to make pasta with a nonna in a trattoria. I dreamed of becoming part of an Italian family, spending all my free time on a boat somewhere along the Amalfi Coast or lost in a vineyard! That might have happened but for the fact that the trattorias of that area are family-run, so unless you're local or a family member, you're pretty far down the line, and understandably so.

The only option I had was to apply to a two-star place called Don Alfonso 1890 – I had a friend named Fernando who was working there and he very kindly put in a good word for me. So I landed myself in a bloody two-star working six days a week again!
The good news was that it was a family-run business, and had been since 1890. The charming and hard-working Iaccarino family who ran the restaurant had a farm overlooking Capri. It was beautiful in every way. Alfonso came from the farm every morning in a white Hiace van filled with the most wonderful produce that we got to cook with. It was my first exposure to true cooking with the seasons, when something was in such abundance and at its best and had to be put to use. It was a revelation to me. Whatever couldn't be used was preserved and kept for a season less generous. It was natural in every way.

Without realising it at the time, this made the biggest impression on how I cook today. I learned how to hold back and let the produce speak for itself. I learned for the first time what it really meant to be seasonal, not just as a slogan but to live by it.

And guess what? I had the Michelin bug again. I threw out those silly hippy pants that I wore at a full-moon party somewhere and found I had a thirst for knowledge at that level. I dreamed of Paris, having listened to the stories of Robert, who was working at Joël Robuchon, and thought that would be my next adventure. But alas, things don't always work out the way you plan them.

In our business, chefs at this level are stupidly committed to the craft, taking holidays but not resting, and actually working for FREE at some highly regarded spot with glitzy accolades. In Italy I met Craig who was on such a trip. He had taken time off to travel to Don Alfonso on a break from Raymond Blanc's Le Manoir aux Quat'Saisons. He didn't speak Italian so I looked after him. Thinking about my next move, as the season was coming to a close, I wanted to know all about Le Manoir. I was intrigued, and decided that the least I could do was check it out on a similar exchange basis.

Le Manoir is a special place. I never went to catering college but I consider getting a job at Le Manoir to be the equivalent of gaining a scholarship at Oxford. The pay wasn't amazing – our industry has a bad reputation for pay and hours, and in some cases it's probably true. It is not an easy career path. But at Le Manoir I felt I was gaining knowledge in one of the most successful 'cooking schools' in the world and actually being paid for it, while many young men and women more commonly take on huge loans to attend university with no guarantee of an income. How ironic.

Well, my two-week stage turned into a minimum two-year commitment. My then girlfriend (now wife), Sarah, wasn't too happy but fully supported my decision and followed me to Oxfordshire a few months later. The two years turned into four, working with Raymond, Gary, Benoit and the amazing team. It was a disciplined place to work and the training was incredible. The kitchen had a very clear structure of who was who in the ranks, from Commis 1/2/3, Demi, CDP (Chef de Partie), Senior CDP, Chef Tournant, Junior Sous, Senior Sous, Head Chef, Executive Chef and, finally, Raymond the Chef Patron. In the pastry section, the talented and flamboyant Benoit ran a very tight ship indeed, producing breads and some of the finest pastry in the country every day. The point I'm making is that the kitchen at Le Manoir was a huge operation with a very steep career ladder to climb.
But although it was one of the most competitive environments I have ever worked in, I built some of the most sincere bonds with many of the talented individuals I got to work with and remain close to many of them to this day, following their successes all over the world. It was a very important building block for me.

I took my first head chef role in opening The Diamond Club at Arsenal's Emirates Stadium representing Raymond Blanc. This was an incredible opportunity as I was still being mentored by RB and Gary. To all my pals at home, this, ironically, was like winning the lotto and landing the luckiest job in the world. I thought it odd that when I had worked in two- and three-star places, they never gave a damn, but now this was huge. You see, I had – and still have – zero interest in football.

On one occasion, we were doing recipe testing in the private dining room kitchen in Le Manoir during the day. I still had ingredients left from some of the testing dishes that were a success, so Gary asked me to prepare them and send them out as appetisers for regulars and VIPs. This went on throughout the service until Johnson came running down, screaming, 'Robin, I need two appetisers VIP Thierry Henry'. I innocently said, 'Eh? What the hell is that? I've not cooked it before', not having a clue that this was actually a person and one of the most successful and famous footballers in the world, at the time captaining Arsenal.

I spent two years running between Le Manoir and Arsenal, and it gave me a taste of London that I hadn't had previously. I was very young when I was first working there but now I was a little older and had a little more money in my pocket, and Sarah by my side. So I decided to look for a permanent role in London.

I landed a job in the City running a beautiful restaurant in The Royal Exchange building, for a forward-thinking and growing company called D&D. This was just before the financial crash. When that happened I thought it was really bad luck that I had landed a job in the City where restaurants thrive on customers with expense accounts, which were suddenly now all scrapped. I had to learn how to be frugal, to use more affordable ingredients that I had never worked with before and still make them delicious. These were challenging times but the relationships formed and skills gained became instrumental in opening my own place.

Fast forward four successful years and I had an obsessive desire to do my own thing. However, I didn't know what that was yet. During my training at Le Manoir, I had met Matt Orlando, currently of Amass in Copenhagen, who was doing an extreme cooking sabbatical. He was from San Diego and had saved up enough cash to spend a year travelling and staging across Europe in some of the best places in the world. The idea of doing something similar never left me.

I got a call from a friend who was looking to build a team to cook for a head of state from the Middle East during a visit to the UK. The salary I could earn in two months was the same as 18 months' work. Well, it was a no-brainer. The crazy, bizarre two months were exhausting but I then had a healthy bank balance. That gave me the opportunity to follow in Matt's footsteps. I took six months off, and staged and ate in some of the finest places in Europe and Scandinavia. I visited suppliers all over the British Isles and basically studied everywhere I visited.
As I was mostly travelling alone I took to writing reports on what inspired me in the design, menu ideas and wine lists, as well as the cultures in different restaurants. What inspired me most was what I called modern bistros. They were cool, laid-back dining rooms with lots of raw material, banging out great playlists. They were full of energy and creativity. Menus changed frequently with what seemed like a fearless confidence and disregard for the big guides. These places were cooking better than some three-stars I visited. They were perfectly imperfect.

I spent all my savings in those six months and I gained a few kilos. But I ended up with a sharp, clear vision of what I wanted to do. That was to take a back-to-basics approach to my cooking and to learn ancient techniques like charcuterie, baking and preserving. I wanted to get as close as I could to being on a farm and by the coast, by working directly with fishermen and farmers and buying direct.

It was at this point we came across The Dairy in Clapham. With no savings, we borrowed, begged and stole to get the business up and running, taking it from a shitty late-night bar with a terrible reputation to the restaurant it is today. The old building was originally a house and it was what I can only describe as a hoarder's paradise. Every room, from the basement to the sheds out the back and the four rooms above, was full, floor to ceiling, with absolute crap. Tractor wheels, empty sweet tins, smelly old curtains, hundreds of pillows, disgusting objects of all varieties and shapes, all worth absolutely nothing. It took us a month just to clear it.

We had £80k to get the restaurant open. Our business partner, Matt, had friends in the building trade and he called in huge favours to get things on serious mates' rates for us. Dean, Richie and Eoghann, who were dear friends and were going to be in the kitchen with me, started work six weeks before opening – not recipe-testing, which would be the norm, but sanding, painting, scrubbing and constantly cleaning up after the pretty horrifically bad-habited builders. Getting the business to a fit opening state was traumatic. We all lost a worrying amount of weight, and I almost came to blows with the head builder who threatened to walk out unless he got more funds. Negotiations followed and we reached an agreement to get the work done.

We built a herb garden on the roof and inherited some beehives from Dean's uncle. With no money left in the bank we were forced to open the doors without any form of soft opening.

Early on I had had the idea for the menu format of snacks/garden/sea/land/sweets, and I wrote the first menu. But that was as far as I got. I turned to Dean, Richie and Eoghann and explained that this was what I hoped to do. I talked them through the style and then I questioned them about how we should do it. They worked manically trying to figure out what I was trying to achieve while I struggled to get the build together. Together we pulled it off. And together is how we have approached every menu and every decision to this day.

The builders left at 5.30pm on opening day. With the smell of paint, glue and cement hanging in the air, we opened the doors of The Dairy on St Patrick's Day in 2013. We had a banging playlist, a killer menu, my amazing wife Sarah and Damiano (of Tutto Wines fame, who created our first wine list) front of house, us four cooks and an amazing KP (kitchen porter) named Depeche in the kitchen, and that was it.
The cooks ran the food, and Sarah and Damiano worked the room. Fast forward a few years and a couple more restaurants later, and we realised our dream with a charcuterie room, a farm, a bread programme and a cellar filled with ferments, vinegars and miso, chutneys and jams. We are as close as you can get to being a farmhouse kitchen by the sea in almost central London, in ol' leafy Clapham Common.

**Robin Gill,** 2018

THE LARDER

I am obsessed with forgotten traditions and the way we used to cook. My time spent in Italy, where the menu was scripted by the seasons and the produce harvested from Alfonso's farm, shook me and awakened my thirst – so much so that as I write this, we are in the process of opening our own Italian restaurant, Sorella. My years on the Amalfi Coast taught me the importance of working with the best of ingredients at their peak and preserving the excess.

My restaurants are in an urban setting but my approach to cooking is an extreme version of this philosophy. We have urban gardens above the restaurants; we house beehives; our cellar is full of vinegars, miso, kimchi, charcuterie, kombucha, cordials, jams and chutneys. Every inch of our old house is put to use with culinary experiments bubbling away. We have achieved great things from a central London location and I want to share my techniques to prove that a more traditional way of cooking is perfectly achievable in any home. The rewards and possibilities are endless. Vegetable fermentation, jam-making, pickling, curing and smoking meat and fish are but a few of the techniques that I want to share. You don't need a countryside location to stock a healthy larder, and this can be your secret weapon in creating some inspiring dishes.

FERMENTED VEGETABLES, FRUITS AND HERBS

If you are new to this method of food preservation, I think vegetable fermentation is where you should start. It is the safest and simplest way to build up your confidence to explore the large universe of fermentation. I love the fact that you are working with something alive, and that no two batches will turn out the same. And the end product can be enjoyed fairly quickly.

It was John Lancaster, one of our regulars, who gave us a book called _The Art of Fermentation_, by Sandor Katz, because he knew we were keen to learn about fermenting and to experiment. We basically went on a fermentation rampage after that, then put our efforts out of reach and out of mind. A month later John popped in for lunch and suddenly we remembered the box of tricks we had put above the freezer. When we pulled it out and opened the jars, the smells and flavours were like nothing we'd had before. We knocked up a quick menu incorporating everything we had fermented and served it to John, hoping we wouldn't kill him (we didn't and he's still a regular). Little did he know what an impact that book would have on our approach to cooking. Now all of our cooks have a copy and use it as their go-to bible. Thank you, John.

The basic premise for vegetable and fruit fermentation is to tightly pack the chosen fruit or vegetable under a liquid to create an environment where oxygen-dependent mould and organisms cannot grow while encouraging and allowing acidifying bacteria to grow instead. Apart from this basic principle the approach can be varied – the fermentation of fruit and vegetables has a long history, and methods and approaches have changed and developed over the years. There really is no right and wrong so long as the basic principle is observed.
In this section, I share how we approach vegetable fermentation in the kitchen at The Dairy, but that is not to say that this is the only way to do it. We have changed our own approach over the years by experimenting with different liquids, amounts of salt, methods, storage conditions and timings. Because of this flexibility, vegetable fermentation is a good place to start if you are interested in experimenting with fermenting, in the conditions that you have available and with what works for your personal taste. The addition of spices and seasonings in these recipes is only a suggestion. Once you are comfortable with the basic principle of fermentation, you can really add whatever flavours you fancy. Ferments are pretty simple to create but can add an extra layer of flavour and balance to most dishes.

WHEY

Whey is the liquid that separates from the solid curd as milk curdles. It is therefore a by-product of cheese-making, as well as of the production of yoghurts and other curds. The acidity of whey means that it can speed up the fermentation process. In cheese-making, the whey is usually discarded but savvy chefs are bringing it into their kitchens and reaping the benefits. We use whey in our ferments – we have built up a strong relationship with our cheese suppliers and they are able to easily source whey for us from cheese producers. We find it to be a useful ingredient but it is not essential when fermenting at home. Whey is growing in popularity but is still not readily available as a retail product. If you are lucky enough to live near a decent cheese shop or, even better, a cheese producer, then ask them for some and use it in your ferments. As consumers, the bigger the demand we make for something, the more likely it is to come into general circulation.

SALT AND BRINES

One ingredient that we do rely on in our approach to fermenting is salt. It is salt that draws the naturally occurring water out of vegetables – in some recipes, such as fermented cabbage, all that is needed is cabbage, salt and elbow grease. Salt also creates an environment that restricts the growth of some bacteria, giving lactic acid bacteria (the bacteria required here) a chance to grow. In addition salt is a natural preservative, so it is perfect for use with vegetables to prolong their shelf life.

For recipes where the ingredient is finely chopped, such as in the cabbage ferment recipe, dry-salting is appropriate. For others where the ingredient is left whole or chopped into larger pieces, a brine is needed. A brine is a mixture of salt and water. The salt is added to the water and brought just to the boil to dissolve the salt, then allowed to cool before use. We make brines of different strengths based on the amount of salt that is added. This is expressed as a percentage in relation to the amount of water. So, for example, a 2% brine means that the weight of salt added is 2% of the weight of the water. In other words, for a litre of water (which weighs 1kg) you would need to add 20g of salt. The same arithmetic applies whenever a recipe asks you to calculate salt from the weight of a jar's contents: if the packed jar's contents weigh 1.5kg, 2% salt is 30g.

JARS AND EQUIPMENT

When fermenting vegetables or fruit, they need to be packed tightly into the chosen vessel as this will help eliminate oxygen. We use jars with a rubber seal such as Kilner jars at the restaurant. The ingredient is packed tightly into the jar and then any gaps are filled with liquid, which may be liquid that has naturally emerged through salting, or whey, brine or other liquid. The reason the jars have a rubber seal is so that any CO2 can escape.
If you are using airtight jars, it's a good idea to open them regularly during the fermentation process to allow CO2 to escape and thus avoid cracked jars! For vegetable ferments, we find the 2-litre Kilner jars with rubber seals the most useful. The 500ml jars come in handy for smaller ingredients such as herbs. Having said that, you may find some of the recipes a little large for a home kitchen, so do scale them down as you wish.

For some ferments, it may be necessary to weigh down the ingredients in the jar so they stay submerged under the liquid. Lots of different things could be used, such as a large cabbage leaf, a plastic lid that fits snugly in the jar or a sealed bag of water (we use a vacuum pack bag filled with water).

One thing to note: it is important to sterilise jars and equipment and to wear clean rubber or plastic gloves when handling the ingredients. This is to avoid the introduction of any unwanted bacteria. It is especially important during the fermentation process when you are tasting the ferment to see if it is ready. Use a sterilised spoon here (simply dip the spoon into boiling water) rather than dipping dirty cutlery or hands into the jar.

TIMING AND STORAGE

There is no right or wrong way to store ferments. Some people ferment in warm, bright locations while others insist that a cool, dark environment is best. Temperature will, of course, affect the time the fermentation takes. We store our jars at a slightly warm temperature, in a kitchen environment for example, but away from direct sunlight, as we find this works best for us.

One question that constantly comes up is how long it will take. This really is a 'how long is a piece of string?' scenario. Firstly, so many factors affect the fermentation process: the vegetable or fruit itself, the temperature, the amount of salt present, and whether you are using water as the main liquid or a more acidic substance such as whey. Secondly, personal taste also comes into play – some people prefer a lighter ferment with only a slightly acidic taste while others like a really strong and acidic punch. The best advice that I can give is to taste your ferment along the way (with a sterilised spoon). Once the vegetable or fruit breaks down slightly and there is a slightly acidic flavour, then it really is up to you when you want to use it and how much more you wish it to ferment. Fermented vegetables should taste sour, not salty. The fermentation times in these recipes are only a guideline and they fall in line with the conditions available at The Dairy.

One useful tip when it comes to fermenting is to set reminders on your phone to check the jars. When you have a few ferments on the go, it is easy to forget what needs to be checked when.

When the ferment is to your taste and ready to use, it can be moved to the fridge to slow the fermentation process right down. There it can be stored unopened for up to 3 months. Once the jar has been opened, it is recommended that the ferment is used within 1 month. This applies to most of our fermented vegetables and fruits unless otherwise specified in a recipe.

FERMENTED APPLES

makes about 900g

10 Granny Smith apples, quartered
1 litre whey (or 500ml water mixed with 500ml fresh apple juice)

Pack the apples into a sterilised 2-litre Kilner jar. Cover with whey or the water/juice mixture and seal. Leave to ferment at a warm room temperature for 4 days; keep away from direct sunlight.

When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.
FERMENTED APPLE JUICE

makes about 780ml

10 Granny Smith apples, quartered
1 litre whey (or a mixture of 500ml fresh apple juice and 500ml filtered water)

Place the apples in a sterilised 2-litre Kilner jar, cover with the whey (or apple juice and water) and seal. Leave in a warm area of your kitchen, away from direct sunlight, for 4 days.

Remove the apples from the jar, draining off the whey. (The whey can be used again two or three times, then discarded.) Juice the apples in an electric juicer/press. The juice can be kept in the fridge in a sealed bottle for up to a week or it can be frozen.

FERMENTED ARTICHOKE

The malt barley can be purchased online from home brewing/beer brewing companies. If it is not easy to source, it can be omitted.

makes about 1.5kg

50g malt barley (optional)
1.5kg Jerusalem artichokes, washed and cut in half
500ml wheat beer
fine table salt

If using the malt barley, lightly toast it in a dry pan for 1–2 minutes until slightly darkened. Place a sterilised 2-litre Kilner jar on a kitchen scale and turn the scales back to zero. Pack the artichokes and barley (if using) into the jar and cover with the beer. Top up with filtered water if needed so that the artichokes are fully submerged. Based on the weight of the contents of the jar, calculate 2% salt. Add this, then seal the jar. Leave to ferment at a warm room temperature for about 2 weeks; keep away from direct sunlight.

When ready, the artichokes will be slightly softened and sour. Store in the fridge for up to 3 months. Once opened, use within 1 month.

POTATO FERMENT

makes about 2kg

2kg new/young potatoes
250ml whey (optional)
fine table salt

If the potatoes are small leave them whole; cut larger potatoes in half or into uniform pieces. Simmer the potatoes in whey or water until they are cooked and can be easily pierced with a knife. Drain and allow to cool.

Set a 2-litre Kilner jar on a set of scales and return the scales to zero. Pack the potatoes into the jar and pour in the whey (if using). Top up with water to cover. Based on the weight of the contents of the jar, calculate 2% salt. Add this to the jar. Seal the jar and leave to ferment at a warm room temperature for 14 days; keep away from direct sunlight.

When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

NETTLE FERMENT

makes about 1kg

1kg nettles, leaves picked and washed
330ml beer with a strong hoppy flavour
fine table salt

Place a 1-litre Kilner jar on a set of scales and return the scales to zero. Add the nettles and beer to the jar and top up with water to cover. Based on the weight of the contents of the jar, calculate 2% salt. Add this to the jar. Seal the jar and leave to ferment at a warm room temperature for 10 days; keep away from direct sunlight.

When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

FERMENTED SORREL

makes about 500g

500g sorrel
1.8 litres 2% brine (see here)

Pack the sorrel into a sterilised 2-litre Kilner jar and cover with the 2% brine. Seal the jar and leave at a warm room temperature to ferment for 5 days; keep away from direct sunlight.

When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

FERMENTED BARLEY

makes about 300g

200g pearl barley
600ml filtered water

Place the barley in a suitable-sized sterilised container and pour the water over the grains. Stir with a clean sterilised spoon, then cover the container with muslin.
Keep in a warm part of the kitchen and allow to ferment for 4 days. The barley is now ready to use. It can be stored, in the liquid, in an airtight container in the fridge for up to 5 days.

FERMENTED BEETROOT

makes 2kg

3kg raw beetroot, peeled and quartered
whey (optional)
fine table salt

Juice a third of the beetroot. Place a 2-litre Kilner jar on a kitchen scale and return the scales to zero. Add the remaining quartered beetroot to the jar and pour in the beetroot juice. Top up with a mixture of whey and water or just water. Calculate 2% of the weight of the contents of the jar and add this amount of salt to the jar. Seal the jar. Leave to ferment at a warm room temperature for about 3 weeks; keep away from direct sunlight.

Once ready, the beetroot will be slightly softened and sour. The sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

FERMENTED CAVOLO NERO STALKS

makes 800g–1kg

cavolo nero stalks from approx. 5kg cavolo nero
about 1.5 litres 3% brine (see here)

Pack the cavolo nero stalks into a sterilised 2-litre Kilner jar and cover with the 3% brine so that they are completely submerged. Seal the jar and leave at a warm room temperature to ferment for 3–4 weeks; keep away from direct sunlight.

The stalks are ready when they have softened slightly and taste sour. The sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

FERMENTED DULSE

makes 500g

500g fresh dulse, washed really well
whey (optional)
fine table salt

Place a 500ml Kilner jar on a set of scales and return the scales to zero. Add the dulse to the jar and cover with a mixture of equal parts whey and water or just water. Calculate 2% of the weight of the contents of the jar and add this amount of salt to the jar. Seal the jar and leave the dulse to ferment at a warm room temperature for at least 1 month; keep away from direct sunlight.

The ferment is ready once the dulse has taken on strong sour and savoury notes similar to anchovies and Parmesan. When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

KALE FERMENT

makes about 1kg

1kg kale (leaves only), washed well
330ml beer with strong hoppy flavours
fine table salt

Set a 1-litre Kilner jar on a set of scales and return the scales to zero. Pack the kale into the jar and pour in the beer. Top up with water to cover the leaves. Based on the weight of the contents of the jar, calculate 2% salt. Add this to the jar. Seal the jar and leave to ferment at a warm room temperature for 10 days; keep away from direct sunlight.

When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

JANUARY KING FERMENT

makes 2kg

2 tablespoons yellow mustard seeds
1 teaspoon cumin seeds
2kg January King cabbage, quartered
3 garlic cloves, sliced
10 black peppercorns
whey (optional)
fine table salt

Toast the mustard and cumin seeds in a small dry pan until they smell aromatic. Set a 2-litre Kilner jar on a kitchen scale and return the scales to zero. Pack the cabbage quarters, garlic, mustard seeds, cumin seeds and peppercorns into the jar and cover with a mixture of equal parts whey and water or just water. Based on the weight of the contents of the jar, calculate 2% salt. Add this to the jar. Seal the jar and leave to ferment at a warm room temperature for 10–14 days; keep away from direct sunlight. The ferment is ready when the cabbage has taken on a sour flavour.
When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

CABBAGE FERMENT

makes 900g–1kg

1 white cabbage
fine table salt
black peppercorns
caraway seeds

Remove any tough or discoloured outer leaves from the cabbage. Cut it vertically in half, through the core, then slice thinly. Set an empty bowl on a kitchen scale and turn the scales back to zero. Put the cabbage into the bowl. Based on the weight of the cabbage, calculate 2% salt, 0.35% black peppercorns and 0.5% caraway seeds. Add these seasonings to the cabbage and stir the mixture vigorously with your hands. Set aside for 30 minutes until lots of liquid has been released, occasionally stirring with your hands.

Pack the cabbage tightly into a 2-litre Kilner jar and cover with the liquid that was released. Seal the jar and leave to ferment at a warm room temperature for 5 days; keep away from direct sunlight.

The ferment is ready once the cabbage has broken down slightly in texture but still retains a bite, similar to a cooked texture. It should taste sour, not salty. The sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

SALSIFY FERMENT

makes about 1.5kg

1.5kg salsify, peeled
beer
fine table salt

Place a 2-litre Kilner jar on a set of scales and return the scales to zero. Add the salsify to the jar and top up with enough beer to cover. Calculate 2% of the weight of the contents of the jar and add this amount of salt to the jar. Seal the jar and leave the salsify to ferment at a warm room temperature for about 14 days; keep away from direct sunlight.

The ferment is ready once the salsify has broken down slightly in texture but still retains a bite, similar to a cooked texture. It should taste sour, not salty. When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

SLOE FERMENT

makes about 1kg

1kg sloes
30g fine table salt

Mix the sloes with the salt in a large bowl. Set aside for 1 hour so they release liquid. Decant the contents of the bowl into a 1-litre Kilner jar. Weigh down the sloes so that they are submerged under the liquid, then seal the jar. Leave to ferment at a warm room temperature for 10 days; keep away from direct sunlight. The sloes will completely soften and break down.

Push the sloe pulp through a drum sieve; discard any stones. Mix the pulp with the liquid that was released from the sloes. Decant the mixture into a sterilised jar and seal. Store in the fridge for up to 3 months. Once opened, use within 1 month.

SWISS CHARD FERMENT

makes about 2kg

2kg Swiss chard, washed
3 garlic cloves, sliced
15 black peppercorns
whey (optional)
fine table salt

Roughly chop the chard stalks; keep the leaves whole. Place a 2-litre Kilner jar on a set of scales and return the scales to zero. Add the chard, garlic and peppercorns to the jar and cover with a mixture of equal parts whey and water or just water. Calculate 2% of the weight of the contents of the jar and add this amount of salt to the jar. Seal the jar and leave to ferment at a warm room temperature for 10–14 days; keep away from direct sunlight. The ferment is ready once the chard has taken on a sour flavour.

When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.
KIMCHI

makes about 1.8kg

2 Chinese cabbages (we use wong bok), thinly sliced
½ carrot, peeled and grated
2 garlic cloves, sliced
1 spring onion, sliced
15g root ginger, grated
fine table salt
caster sugar
fish sauce
light soy sauce
shrimp paste
Korean chilli powder

Set a large mixing bowl on a set of scales and return the scales to zero. Mix together the cabbages, carrot, garlic, spring onion and ginger in the bowl. Based on the weight of the contents of the bowl, calculate 2% salt. Add this to the bowl. Stir the mixture vigorously with your hands. Set aside for about 1 hour, occasionally mixing vigorously as before, until lots of liquid has been released.

Based on the weight of the mixture in the bowl, weigh out into a separate bowl 4% caster sugar, 2% fish sauce, 2% light soy sauce, 2% shrimp paste and 3% Korean chilli powder. Mix these ingredients together to form a paste, then mix the paste into the cabbage mixture.

Pack the mixture tightly into a 2-litre Kilner jar, with all the liquid that was released. Seal the jar and leave to ferment at a warm room temperature for about 3 weeks; keep away from direct sunlight.

When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

VEGAN KIMCHI

This is a spicy kimchi! If you can't handle the heat, reduce the amount of chillies.

makes 900g

10 green chillies
2 sweetheart cabbages (if unavailable, other cabbages can be used)
2 cinnamon sticks
2 star anise
20g Szechuan peppercorns
fine table salt

Char the chillies on a hot barbecue or ridged grill pan until completely blackened on all sides. Cut the cabbages into wedges.

Place an empty 2-litre Kilner jar on a set of scales and return the scales to zero. Pack the cabbage into the jar with the cinnamon sticks, star anise, peppercorns and charred chillies. Cover with water. Based on the weight of the contents of the jar, calculate 3% salt. Add this to the jar. Seal the jar. Leave to ferment at a warm room temperature for about 3 weeks; keep away from direct sunlight.

When ready, the sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

FENNEL KIMCHI

makes about 1.8kg

40g fennel seeds
40g black peppercorns
10g pink peppercorns
30g cumin seeds
6 bulbs of fennel
about 300ml fresh apple juice
about 300ml whey or filtered water
fine table salt
4 garlic cloves, sliced
12 star anise

Toast the fennel seeds, peppercorns and cumin seeds in a dry pan, then grind in a mortar and pestle or spice grinder to a powder.

Trim the fennel bulbs and remove the outer layer. Using an electric juicer, juice these outer layers and the trimmings along with one whole fennel bulb (alternatively you can blend the fennel in a blender or food processor until as liquid as possible, then strain through a fine sieve into a bowl). Measure the fennel juice and mix it with an equal amount of apple juice and an equal amount of whey or water.

Set an empty 2-litre Kilner jar on a set of kitchen scales and turn it back to zero. Cut the remaining 5 fennel bulbs into 1cm slices and pack into the jar. Add the juice and whey/water mixture and top up with water to cover. Based on the weight of the contents of the jar, calculate 2% salt. Add this along with the garlic, star anise and ground spices. Seal the jar and set aside at room temperature, away from direct sunlight. The kimchi will take 2–3 weeks to ferment (it can be left to ferment for up to 2 months; the flavour will only improve). Give the jar a shake each day for the first few days.
The kimchi is ready when the fennel has softened slightly but retains a little bite and has a sour taste. Store in the fridge for 3 months. Once opened, use within a month.

PRESERVED AMALFI LEMONS

makes 12

12 Amalfi lemons
coarse sea salt
fresh lemon juice (if required)

Cut each lemon into quarters lengthways but leave a couple of centimetres uncut at one end so that the quarters remain attached. Pack the gaps between the quarters with salt. Pack the lemons tightly into a sterilised 2-litre Kilner jar. Fill any gaps in the jar with more salt until the lemons are completely covered. Seal the jar.

Leave at room temperature, out of direct sunlight – a garage/cellar would be ideal but a store cupboard will suffice. Juice will start to be released after 1–2 days of the preserving process, which will dissolve the salt. If the lemons become exposed, top up the jar with fresh lemon juice. Preserving time is dependent on many factors including temperature, but it will take at least 3 weeks. Turn the jar occasionally during the process to disperse the salt and juice.

The lemons are ready once the peel has softened. After this, they can be stored in the fridge for up to 9 months.

SOUR ONIONS

makes about 1.2–1.4kg

10–12 Roscoff onions, quartered
whey or filtered water
fine table salt

Place an empty sterilised 2-litre Kilner jar on a set of kitchen scales and turn the scales back to zero. Pack the onions into the jar and cover with whey or water so that the onions are completely submerged. Based on the weight of the contents of the jar, calculate 2% salt and add it. Seal the jar and leave at warm room temperature to ferment for about 1 month; keep out of direct sunlight.

The onions will be ready when they have started to break down a little and are sharp and acidic in taste. The sealed jar can be stored in the fridge for up to 3 months. Once opened, use within 1 month.

MEAT, FISH AND GAME PRESERVATION

I consider being a chef no different to being a carpenter. We both have tools and we both build and create. The only difference is that my creations are eaten moments after completion. Chefs are craftsmen, and there are many different types of craft within the cooking world. One of these is ancient and yet still so relevant: the art of curing meats. You need butchery skills as well as patience and an understanding of how and why the process works.

At the Basque restaurant Asador Etxebarri I had one of the most beautiful, eye-opening experiences of my career – just by having dinner there. Everything was primitive; everything was cooked over coal and wood, with a delicate touch using ingredients taken from the surrounding Basque hills. It all blew me away with its honesty. But the one thing I will never forget is Victor's chorizo. It was soft and slightly smoky, with a subtle kick of chilli and smoked paprika, and an intense acidity reminiscent of lemons from Amalfi. We have all tried the mass-produced 'chorizo' in supermarkets worldwide so I did not expect this to have such an impact on me. The difference was unbelievable. From that moment I made it my mission to learn the craft of meat preservation.

Over the past three years at The Dairy we have been experimenting and improving. We are blessed with a cellar with naturally perfect conditions, which means we need to work less hard for a better product, but we are still learning every time we make a new batch. You should not be afraid to try it yourself. The process of curing meat is simple although it needs time, patience and love.
I can honestly say it is one of the most rewarding and satisfying crafts to learn. The moment you cut into your creation, your gleaming pride will be difficult to hide!

You need to take great care with meat and fish preservation and follow instructions to the letter, because it is a little more fragile than the fermentation and preserving of fruit and vegetables. There is a slightly higher risk of it not working out due to the nature of meat and fish. I am in no way trying to put you off but it is worth doing some research in advance. There are many useful books and online resources on the topic. Below is a basic introduction to our approach at The Dairy, which outlines some exact conditions that need to be adhered to.

In vegetable preservation, the goal is often to remove any oxygen present so that certain types of moulds and organisms cannot grow. Similarly, in meat and fish preservation, moisture is removed to deprive these organisms of the water that they need to survive. We use a mixture of methods to remove the moisture: dry-curing, salting and smoking.

DRY-CURING/MAKING CHARCUTERIE AT HOME

There are some useful tips and pieces of equipment to consider if you fancy giving dry-curing a try in your kitchen.

1. For some recipes, you will be required to mince or grind the meat. For occasional use, the mincer attachment on a stand mixer will be perfectly acceptable.

2. One of the key points to remember for dry-curing is the environment that needs to be created for the hanging period. These three conditions are key:

•a temperature of 15–18°C
•60–70 per cent humidity
•air circulation

There are no small domestic appliances on the market currently for dry-curing meat so you would need to create this environment in another way. There are many creative tutorials and videos online showing multiple ideas for this. For example, a drying chamber could be created in an old mini fridge or wine fridge set to a high temperature of somewhere between 15 and 18°C (the fridge would need to have a fan to keep the air circulating). To create the correct humidity, you could simply place a pan of heavily salted water in the bottom of the fridge. There are, of course, plenty of other creative ways to create this environment. So long as all three conditions are adhered to, dry-curing at home is achievable.

3. Regarding the storage of charcuterie, over time any exposed surfaces will dry out and become less pleasant to eat. So once it is ready, wrap the charcuterie in greaseproof paper, then cover well in clingfilm and keep in the fridge. If wrapped well, most dry-cured meat can be kept for up to 5 months, but use your discretion when it comes to shelf life. Look out for telltale signs: if at any stage it has dried out too much and is an unpleasant texture as a result, or if there is a rancid or bitter scent or flavour, it is best not to keep it.

4. As some of our recipes make a large quantity, it is always a nice idea to share the end result with family and friends. You'll prove popular, I promise!

SMOKING

Some of our recipes require smoking, which is achievable in a home environment. Cold-smoking takes place at a temperature of 32°C or lower, which means that the meat or fish is smoked but not cooked. Therefore, there cannot be a heat source underneath it. The heat required for the smoke needs to be separate and away from the meat or fish, which must be enclosed in a chamber that can then be filled with smoke.
If the meat or fish only needs to be cold-smoked for a few minutes, this can be achieved quite simply: spread woodchips in a flat tray such as a deep roasting tray and place a flat steamer rack over the chips. Warm the tray over a medium heat until the chips start to smoke. Remove from the heat and place the fish or meat to be smoked on the steaming rack. Completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the fish or meat and leave to smoke for the required time. If you want to cold-smoke for a longer time, you need to create a smoking chamber. There are plenty of creative ways to do this – again, there are some informative tutorials online – but the basic premise is that the heat source is separate from the chamber and the smoke is fed between the two with a tube. Other recipes require hot-smoking, which takes place at a temperature of up to 93°C, so that the meat is both smoked and gently cooked at the same time. Reasonably priced stovetop and outdoor smokers are available from many online sources. SALTING Salting is often used in the preservation of fish but can also apply to meat, especially offal: it too works to draw out moisture through osmosis, while also acting as a preservative itself. The ingredient is packed in salt, or a mixture made mostly of salt, and allowed to cure. After the curing, the ingredient can be smoked to dry it out even further, or it can be stored in oil. FENNEL SALAMI makes about 2kg 3kg boneless pork shoulder, trimmed and diced 300g hog casing 1ST STAGE CURE 15g pink curing salt 150g Maldon sea salt flakes 30g dextrose 2ND STAGE CURE ½ bulb of garlic, cloves peeled 50ml olive oil 300ml white wine 30g fennel seeds 5g dried chilli flakes To make the 1st stage cure, mix the salts and dextrose. Toss the meat with this mixture in a freezer bag and seal. Leave to cure in the fridge for 48 hours. For the 2nd stage cure, cook the garlic in the olive oil until soft. Boil the wine in a pan until reduced by half; cool. Put the garlic, olive oil and wine into a blender or food processor and blend until smooth. Toast the fennel seeds in a dry pan until fragrant, then grind finely to a powder. Stir the ground fennel seeds and chilli into the garlic and oil mix. Drain any excess liquid from the pork, then pass the meat through a mincer attachment. Mix the 2nd stage cure paste through the mince. Tie off one end of the hog casing. Pipe the meat into the hog casing to make sausages approximately 300g in weight and tie off the ends. Weigh the sausages. Hang them in a suitable place – at 15–18°C with 60–70% humidity and a good airflow (see here) – until they lose 30% of their original weight. This usually takes about 3 weeks. Once ready, wrap the sausages in clingfilm or place in a dry airtight container. Store in the fridge and slice as needed. (See here for guidance on storage.) GOOSE HAM makes 2 (cuts into about 60 slices) 2 boneless goose breasts (skin on) Maldon sea salt flakes caster sugar pink curing salt 10 black peppercorns 2 star anise a pinch of dried chilli flakes 2 garlic cloves, crushed 3 sprigs of thyme, leaves picked Weigh the goose breasts (together), then calculate the seasonings: you want 2.25% sea salt, 1.5% caster sugar and 0.5% pink curing salt. Toast the peppercorns and star anise in a dry pan until fragrant, then crush quite finely. Mix with the sea salt, sugar, curing salt, chilli flakes, garlic and thyme leaves.
Distribute the seasonings evenly over the goose breasts and rub in on both sides. Place the breasts in freezer bags and seal well. Leave to cure in the fridge for 7 days, turning the bags over every 2 days. Remove the goose breasts from the bags, rinse and pat dry. Wrap them in muslin and tie the ends. Weigh each breast. Hang in a suitable place – at 15–18°C and with 60–70% humidity and a good airflow (see here) – until the meat loses 30% of its original weight. This usually takes about 3 weeks. Once ready, wrap the goose hams in clingfilm or place in a dry airtight container. Store in the fridge and slice as needed. (See here for guidance on storage.) BRESAOLA makes about 1.3kg 1.2kg coarse sea salt 800g demerara sugar 1 x 2kg silverside beef joint, sinew removed red wine Mix together the salt and sugar. Pack the beef in the salt mixture in an airtight container so that the meat is completely surrounded by salt and sugar. Cover the container tightly and leave in the fridge for 6 days, turning the beef each day. Remove the beef from the sugar and salt mixture, rinse and place in a clean, snug-fitting airtight container. Cover with red wine and seal. Leave in the fridge for 3 days. Remove the beef from the wine and weigh it. Wrap it in muslin and hang it in a suitable place – 15–18°C with 60–70% humidity and a good airflow (see here) – until it loses 30% of its original weight. This will take 10–14 days. Once ready, keep the beef, well wrapped, in the fridge and cut thin slices as required. It can be stored in the fridge for up to 5 months. (See here for guidance on storage.) COPPA makes about 1.5kg (cuts into 500 slices) rock salt 1 pork coppa joint (comprising neck, shoulder and loin) 15 black peppercorns 15g fennel seeds 5g smoked paprika Rub lots of salt evenly all over the coppa, then place it in a freezer bag and seal tightly. Place on a rimmed tray and set another heavy tray over the coppa with some weights on top. Place in the fridge and leave to cure, allowing 1 day per kilo weight of meat. Turn the coppa every day to help distribute the salt. Remove the coppa from the bag and rinse off the salt in cold water. Pat dry. Toast the peppercorns and fennel seeds in a dry pan until they smell fragrant. Crush these coarsely and mix with the paprika. Dust the spices evenly over the meat. Wrap the coppa in muslin and tie it with butcher's twine to form a nice round shape. Weigh it, then hang in a suitable place to dry – 15–18°C, with 60–70% humidity and a good airflow (see here) – until the meat loses 30% of its original weight. This usually takes 3–4 weeks. Once ready, wrap the coppa in clingfilm or place in a dry airtight container. Store in the fridge for up to 5 months and slice as needed. (See here for guidance on storage.) LOMO makes about 1.3kg 1 x 2kg boned rack of pork Maldon sea salt white wine espelette pepper Remove the skin and most of the fat from the pork, leaving just a thin cap of fat over the meat. Weigh the trimmed meat. Roll it in coarse sea salt so it is completely covered, then place it in a sealable bag and seal. Place a weight on top of the bag and leave it in the fridge for 24 hours per kilo. Rinse off the salt and place the pork on a plate. Put it back into the fridge, uncovered, and leave for 24 hours. Weigh the meat again. Brush all over with white wine, then roll in espelette pepper to coat. Wrap the meat in muslin and hang it in a suitable place – at 15–18°C with 60–70% humidity and a good airflow (see here) – until it loses 30% of its original weight. 
Keep, well wrapped, in the fridge and cut into thin slices as required. The lomo can be stored in the fridge for up to 5 months. (See here for guidance on storage.) NDUJA makes about 2.5kg 2kg skinless fresh pork belly, cut into large dice 250g hot paprika 150g smoked sweet paprika 50g Maldon sea salt 5g dextrose 5g pink curing salt 10g Bactoferm (or other live starter culture) 2 tablespoons distilled water 300g hog casing, soaked in tepid water for at least 20 minutes and rinsed Partially freeze the pork belly, then pass it through a fine mincer twice. Put it into the bowl of a stand mixer and add the hot and sweet paprika, salt, dextrose and curing salt. Mix on a medium speed for 1 minute. Dissolve the starter culture in the distilled water, add it to the meat and mix until well distributed. Tie off one end of the hog casing. Stuff the mixture into the casing and tie off into 12cm sausages. Hang the sausages for 24 hours at room temperature. Cold-smoke the sausages for 4 hours (see here). Hang the sausages in a suitable place – at 15–18°C with 60–70% humidity and a good airflow (see here) – for 1 week. After this, the nduja can be kept, well wrapped, in the fridge for up to a month and used as required. PANCETTA makes about 1.3kg 1 skinless fresh pork belly (about 2kg) Maldon sea salt white wine cracked black pepper dried thyme Weigh the meat. Roll it in coarse sea salt so it is completely covered, then place it in a sealable bag and seal. Place a weight on top of the bag and leave it in the fridge for 24 hours per kilo. Rinse off the salt and place the pork on a plate. Put it back into the fridge, uncovered, and leave for 24 hours. Weigh the meat again. Brush all over with white wine, then roll it in black pepper and thyme to coat. Wrap the meat in muslin and hang it in a suitable place – at 15–18°C with 60–70% humidity and a good airflow (see here) – until it loses 30% of its original weight. Keep, well wrapped, in the fridge and cut into thin slices as required. The pancetta can be stored in the fridge for up to 5 months. (See here for guidance on storage.) SMOKED BONE MARROW The amount of bone marrow you will get from marrow bones will be about 10 per cent of the weight of the bones. If you need 50g bone marrow, for example, you need to start with about 500g marrow bones. yield will depend on how many marrow bones you begin with (see introduction) marrow bones (ask your butcher to split the bones down the middle) 7% brine (see here) applewood chips Maldon sea salt and freshly ground black pepper Soak the bones in cold water for 5 hours to soften the marrow before scooping out. Place the marrow in the 7% brine and leave for 5 hours in the fridge. Remove from the brine. Preheat the oven to 120°C fan/140°C/Gas Mark 1. Take a tray with a steam insert (such as a deep roasting tray that will hold a flat steaming rack) and spread applewood chips over the bottom of the tray. Warm it over a medium heat until the chips start to smoke. Remove from the heat. Place the marrow on a heatproof tray in the steam insert and completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the marrow. Leave to smoke for 10 minutes. Uncover the smoked marrow and place it on its heatproof tray in the oven. Roast the smoked bone marrow for 10 minutes. Transfer to a blender or food processor and blend until smooth. Season with salt and pepper. The bone marrow can be used straight away, or kept in the fridge for 1–2 days or frozen. 
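Several of the recipes above turn on two simple sums: the hanging target (the meat is ready once it has lost 30% of its starting weight) and percentage brines such as the 7% brine used for the bone marrow. For anyone who wants to double-check the numbers before committing a piece of meat, here is a minimal sketch in Python. The function names are my own, not the book's, and 'x% brine' is read as x grams of salt per 100ml of water – the ratio implied by the Bottarga recipe later in this chapter (450g salt to 1.5 litres of water for a 30% brine).

# A rough helper, not part of any recipe: the repeated arithmetic
# from this chapter in one place. All names here are illustrative.

def hanging_target_weight(start_weight_g, loss_fraction=0.30):
    # A hung cure (salami, coppa, lomo, pancetta...) is ready once it
    # has lost 30% of the weight recorded just before hanging.
    return start_weight_g * (1 - loss_fraction)

def brine_salt_g(water_ml, percent):
    # Assumes 'x% brine' means x g salt per 100 ml water, as in the
    # Bottarga recipe (450 g salt : 1500 ml water = 30%).
    return water_ml * percent / 100

def marrow_yield_g(bone_weight_g):
    # Marrow comes out at roughly 10% of the weight of the bones.
    return bone_weight_g * 0.10

print(hanging_target_weight(2000))  # a 2 kg pancetta is ready at 1400 g
print(brine_salt_g(1000, 7))        # 7% brine: 70 g salt per litre
print(marrow_yield_g(500))          # 500 g bones -> about 50 g marrow

In practice this just means weighing and labelling everything before it goes into the chamber, so the finish line is a number rather than a guess.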
WILD GARLIC AND SUNFLOWER SALAMI makes about 2kg 3kg boneless pork shoulder, trimmed and diced 300g hog casing (for casing the sausages) 1ST STAGE CURE 15g pink curing salt 150g fine table salt 30g dextrose 2ND STAGE CURE 600g wild garlic leaves 120g sunflower seeds 300ml white wine To make the 1st stage cure, mix the salts and dextrose. Toss the meat with this mixture in a freezer bag and seal. Leave to cure in the fridge for 48 hours. Meanwhile, for the 2nd stage cure, dry the wild garlic leaves in a dehydrator, or oven at its lowest setting, until completely dry – this will take 2–3 hours. Once dried, blend to a powder. Gently simmer the sunflower seeds in the white wine until the liquid has evaporated and the seeds have softened; allow to cool. Drain off any excess liquid from the pork. Pass the meat through a mincer attachment. Fold the wild garlic powder and sunflower seeds through the minced meat. Tie off one end of the hog casing. Pipe the meat into the casing to make sausages approximately 300g in weight and tie off the ends. Weigh the sausages. Hang them in a suitable place – at 15–18°C with 60–70% humidity and a good airflow (see here) – until they lose 30% of their original weight. This usually takes about 3 weeks. Once ready, tightly wrap the salami in clingfilm or place in a dry airtight container. Store in the fridge and slice as needed. (See here for guidance on storage.) BOTTARGA Bottarga is dried cod's roe that is used to season dishes. While it can be purchased from fine Italian delicatessens, this can work out to be quite expensive. We decided to make our own at the restaurant as cod's roe is so often thrown away. Bottarga is an amazing way to use this often discarded ingredient. It is possible to make bottarga at home but you will need to be able to create the correct conditions for the hanging process. Some notes on how to create these conditions can be found here. If you are unable to make your own bottarga, or to source it, then anything with a deep savoury note can be used instead to season dishes – bonito flakes, for example, would be a suitable alternative. makes 30–40% of the weight of the cod's roe you start with 1.5 litres water 450g rock salt 500g–1kg cod's roe Bring the water and salt to the boil and simmer until the salt has dissolved into the water. Pour into a bowl and allow to cool completely (this is a 30% brine). Submerge the roe in the brine, place in the fridge and leave for 2 hours. Remove the roe from the brine and gently place it on a wire rack set over a tray near the fan in your fridge. Leave for 7 days to dry out. Carefully wrap the roe in muslin and hang in a suitable drying chamber – 15–18°C with 60–70% humidity and a good airflow (see here) – for 2–3 weeks. Once ready, the bottarga can be kept in an airtight container in the fridge for up to 6 months and shaved off bit by bit for use. (See here for guidance on storage.) CURED SALMON The salmon can be cured in this manner and used as is, or it can then be smoked to add extra depth to the flavour. makes about 1.5kg 40g fennel seeds 40g black peppercorns 40g juniper berries 800g fine table salt 160g caster sugar 160g demerara sugar 280g soft brown sugar zest of 8 lemons 16 sheets of dried nori (3g each), cut into small pieces with scissors 1 side of salmon, pin-boned and skinned applewood chips, for smoking Lightly toast the fennel seeds, peppercorns and juniper berries in a dry pan until they smell fragrant. Crush them lightly in a pestle and mortar. 
Combine the crushed spices with all the other ingredients, apart from the salmon and applewood chips, in a bowl and mix thoroughly. Lay a double layer of clingfilm, roughly four times the width of the salmon, across a worktop. Spread half of the spice cure evenly over the clingfilm, following the outline of the fish. Place the salmon on this and scatter the remaining cure over the top. Wrap the clingfilm around the fish, sealing in the cure. Set in a suitable-sized tray and leave in the fridge for 4 days, turning the fish over every 24 hours. Unwrap the fish, rinse under cold water and pat dry. This cured salmon can be kept in the fridge, wrapped well in clingfilm, for up to 3 weeks. To lightly smoke the salmon for dishes such as Loch Duart Salmon, spread some applewood chips over the bottom of a large, deep roasting tray. Warm the tray over a medium heat until the chips start to smoke, then remove the roasting tray from the heat. Place the fish on a heatproof tray set on a flat steamer rack and set the rack directly over the smoking chips. Completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the fish, then leave to smoke lightly for 7 minutes. To fully smoke the salmon, it would need to be placed in a smoking chamber for 7 hours. Once smoked, store the salmon, wrapped well in clingfilm, in the fridge. CURED SARDINES These will keep in the fridge in a sealed jar, covered in olive oil, for up to a year, so it is worth making a big batch starting with at least 1kg of sardines. They can be used in a very similar way to anchovies in the seasoning of dishes. makes about 700g 20g fennel seeds 150g parsley (leaves and stalks) zest of 2 lemons 3 garlic cloves (peeled) 200g coarse salt 1kg sardines, heads removed and gutted olive oil (for storage) Toast the fennel seeds in a dry pan until fragrant. Tip into a blender or food processor and add the parsley, lemon zest and garlic cloves. Blend together to make a coarse paste. Add the salt and blend again. Spread some of the paste over the bottom of a clean container, then layer up the sardines and the remaining paste so that the fish are completely covered. Cover with an airtight lid and leave to cure in the fridge for 5 days. Remove the sardines from the paste, rinse well and pat dry. Fillet the sardines. Pack them into a jar, cover with olive oil and seal. Store in the fridge. When you remove sardines from the jar, ensure that the rest remain covered with olive oil. SMOKED COD'S ROE makes about 700g 1 large, very fresh cod's roe fine table salt applewood chips, for smoking Cut the roe away from the membrane. Place in a sieve and rinse under cold running water for 30 minutes. Drain and weigh the roe. Calculate 2% of this weight: this is the amount of salt to add. Season the roe with the salt. Take a flat tray with a steam insert (such as a deep roasting tray that will hold a flat steaming rack) and spread the applewood chips over the bottom of the tray. Warm it over a medium heat until the chips start to smoke. Meanwhile, place a tray of ice cubes on the steam insert and put the roe on a tray over the ice. Remove the tray of smoking chips from the heat and set the steam insert over it. Completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the roe. Leave to lightly smoke for 10 minutes. Decant the smoked roe into an ice-cold sterilised jar and seal. It can be kept in the fridge for 3–5 days.
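The same percentage-of-weight logic drives the seasonings throughout this chapter: 2% salt for the Sour Onions and the Smoked Cod's Roe above, and 2.25% sea salt, 1.5% caster sugar and 0.5% pink curing salt for the Goose Ham. As a worked example – again a sketch of my own rather than anything from the recipes, with made-up keyword names – the sum generalises like this:

# Illustrative only: grams of each seasoning as a percentage of the
# weighed ingredient.

def cure_amounts(weight_g, **percents):
    return {name: round(weight_g * pct / 100, 1)
            for name, pct in percents.items()}

# Goose Ham, for two breasts weighing 1200 g together:
print(cure_amounts(1200, sea_salt=2.25, caster_sugar=1.5, curing_salt=0.5))
# -> {'sea_salt': 27.0, 'caster_sugar': 18.0, 'curing_salt': 6.0}

# Smoked Cod's Roe, for a 700 g trimmed roe:
print(cure_amounts(700, fine_salt=2))
# -> {'fine_salt': 14.0}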
SMOKED MACKEREL makes 6 fillets 20g fennel seeds 150g flat-leaf parsley (stalks and leaves) zest of 2 lemons 3 garlic cloves (peeled) 200g coarse sea salt 3 medium mackerel applewood chips, for smoking Toast the fennel seeds in a small dry pan until they smell fragrant. Blend the fennel seeds with the parsley, lemon zest and garlic in a food processor to make a paste. Add the salt and blend again. Gut the mackerel and remove the gills. Rinse well to remove any blood. Pack the cavities with some of the salt mixture, then completely cover the fish with the salt mixture on a tray. Leave in the fridge for 6 hours. Rinse away the salt mixture and pat the fish dry. Return to the clean tray and leave to dry out, uncovered, in the fridge overnight. You now need to cold-smoke the mackerel, which can be done in a smoking chamber for 3 hours or using the quick cold-smoke method. For this, take a flat tray with a steam insert (such as a deep roasting tray that will hold a flat steaming rack) and spread the applewood chips over the bottom of the tray. Warm it over a medium heat until the chips start to smoke. Meanwhile, place a tray of ice cubes on the steam insert and put the mackerel on a tray over the ice. Remove the tray of smoking woodchips from the heat and set the steam insert over it. Completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the fish. Return the tray to a low heat and leave to smoke for about 1 hour – keep an eye on the fish to be sure it is smoking and not cooking, and adjust the heat under the tray if necessary. Replace the ice and replenish the wood chips as required during the smoking. Fillet and pin-bone the fish. Char the skin side of the fillets with a blowtorch, on a barbecue or under a hot grill. The cured mackerel fillets can be stored in an airtight container in the fridge for a couple of weeks. PICKLES AND JAMS Pickles and jams are an easy way to preserve fruits and vegetables through the use of acid or sugar, or a mixture of the two. There is a simple joy in this kind of preservation. Certain times of the year are more bountiful than others and preservation of this kind means that we can capture fruits and vegetables in their prime to enjoy again in another form during the colder months. There is something very uplifting about opening larder cupboards during the darker days to be greeted by rows of vibrant jars. PICKLES When it comes to pickling, there are endless methods, from simple cold pickles – where the raw ingredient is just submerged in vinegar – to gastriques, where shallots are gently sweated in wine and then vinegar. In this section, I have included a mixture of methods. The one that we rely on the most uses a standard 1:1:1 pickling liquor made with equal parts of water, caster sugar and vinegar. These are heated together so that the sugar dissolves, then brought to the boil and poured over whatever is being pickled. The type of vinegar used varies, depending on the flavour wanted and the pickled ingredient itself. Most pickles can be stored in a cool, dark place for up to a year and then, once opened, in the fridge for 3 months, unless otherwise specified in the individual recipes. You can use any jars that you have for pickles, but it is best practice to sterilise them. For the restaurant we make our pickles in big batches although the recipes here can be scaled down as required. There is no need to be too prescriptive – if you have a glut of an ingredient then get pickling. 
It doesn't matter how large or small a batch you make, the principle is the same. JAMS Jams rely on sugar as the preservative. Again, any sterilised jars that you have can be used. Some fruits do not contain enough natural pectin to ensure the jam sets. In this case, we use jam sugar, which contains pectin. BEER-PICKLED ONIONS makes 1kg 600ml strong hoppy beer 330ml cider vinegar 270g honey 2 sprigs of thyme 3 bay leaves 1 tablespoon Maldon sea salt 36 black peppercorns, coarsely crushed 1kg cipollini onions or other small onions (peeled) Put all the ingredients, except the onions, into a suitable-sized pot and bring to the boil. Add the onions and simmer until they are just tender. Remove from the heat and allow to cool at room temperature. Store in sealed sterilised jars in a cool, dark place for a year. Once opened, keep in the fridge and use within 3 months. CELERIAC PICKLE makes about 250g ½ celeriac 150ml white wine vinegar 300ml water 10g fine table salt 10 juniper berries, crushed 8 black peppercorns, crushed Peel the celeriac and slice it on a mandoline as thinly as possible. Put the slices in a bowl. Bring the vinegar, water, salt and spices to the boil, then pour the hot liquor over the celeriac. Cover with clingfilm or a lid and leave to steam for 5 minutes. Decant into a sterilised 500ml jar and seal. This pickle can be kept in the fridge and used over a 3-week period. CARROT AND CARAWAY PICKLE makes about 1kg 40g caraway seeds 1kg mixed heritage carrots 200ml cider vinegar 200ml water 200g caster sugar Toast the caraway seeds in a dry pan until they smell aromatic. Set aside. Peel the carrots, then slice into thin rounds on a mandoline. Combine the vinegar, water, sugar and caraway seeds in a suitable-sized pot. Bring to the boil, then add the carrot slices and remove from the heat immediately. Decant into a sterilised 2-litre jar and seal. The pickle can be stored for 1 year in a cool, dark place. Once opened, keep in the fridge for up to 3 months. NASTURTIUM CAPERS makes 250g 250g nasturtium buds coarse sea salt 250ml water 200ml white wine vinegar 50g caster sugar Pack the nasturtium buds in coarse salt in a jar so that they are completely covered. Seal the jar and leave it in a cool, dark place for 1 month. Remove the buds from the jar and rinse well in a sieve. Pour the water and vinegar into a suitable-sized pan and add the sugar. Bring to the boil, stirring to dissolve the sugar. Add the buds and immediately remove from the heat. Decant into sterilised jars and seal. Store in a cool, dark place for up to a year. Once opened, keep in the fridge and use within 3 months. LOVAGE SEED PICKLE makes about 150g 150g lovage seeds 50ml cider vinegar 50g caster sugar 50ml water Put the seeds in a sterilised 200ml heatproof jar. Combine the cider vinegar, sugar and water in a pot and bring to the boil. Remove from the heat and pour the boiling liquid over the seeds. Seal the jar and allow to pickle for 5 days. The sealed jar can be stored in a cool, dark place for up to a year. Once opened, keep in the fridge and use within 3 months. WILD GARLIC CAPERS makes about 250g 250g picked wild garlic buds coarse sea salt 500ml water 100g caster sugar 400ml white wine vinegar Pack the wild garlic buds into jars with the salt so that they are completely covered. Seal the jars and leave them in a cool, dark place for 1 month. Remove the buds from the salt and rinse well. In a suitable-sized pan, bring the water, sugar and vinegar to the boil. 
Add the buds and remove from the heat immediately. Decant into sterilised jars and seal. The garlic capers can be stored in a cool, dark place for up to a year. Once opened, keep in the fridge and use within 3 months. WILD GARLIC PICKLE yield will depend on how many stalks you begin with wild garlic stalks, finely diced pickling liquor: equal parts water, white wine vinegar and caster sugar Place the wild garlic in a sterilised jar. Combine the ingredients for the pickling liquor in a pan – you need enough liquid to cover the garlic – and bring to the boil, stirring to dissolve the sugar. Pour the boiling liquor over the wild garlic and seal the jar. The pickle can be stored in a cool, dark place for up to a year. Once opened, keep in the fridge and use within 3 months. PICKLED WAKAME makes about 100g 35ml rice wine vinegar 35ml water 35g caster sugar 50g dried wakame Put the vinegar, water and sugar in a pan and bring to the boil, stirring to dissolve the sugar. Pour this boiling pickling liquor over the wakame and allow to cool. Decant into a sterilised jar and seal. Store in a cool, dark place for 3 months. Once opened, keep in the fridge and use within a month. PICKLED RADISHES makes 1.5kg 300ml water 300ml white wine vinegar 300g caster sugar 1.5kg radishes Combine the water, vinegar and sugar in a pan and bring to the boil, stirring to dissolve the sugar. Pour this boiling pickling liquor over the radishes in a bowl. Allow to cool, then decant into sterilised jars and seal. The radishes are ready to use straight away or can be stored in the fridge for up to 2 months. PICKLED ELDERBERRIES makes about 400g 400g elderberries (picked off the stem) 100ml Cabernet Sauvignon vinegar 100g caster sugar 100ml water Put the elderberries into a 500ml sterilised heatproof jar. Combine the vinegar, sugar and water in a pot and bring to the boil. Remove from the heat and pour the boiling liquid over the berries. Seal the jar and leave in a cool place to pickle for 5 days. The sealed jar can be stored in a cool, dark place for up to a year. Once opened, keep in the fridge and use within 3 months. PICKLED WHITE PEACHES makes about 800g 500ml water 500ml white wine vinegar 300g caster sugar 16 coriander seeds 16 black peppercorns 2 cinnamon sticks 4 star anise 2 cloves a pinch of ground mace 10 ripe white peaches, peeled Combine all the ingredients, except the peaches, in a pan. Bring to the boil, then simmer for 5 minutes. Put the peaches into two sterilised 2-litre heatproof jars. Pour in the hot spiced liquid and seal the jars. Leave to pickle for at least 7 days before use. The sealed jars can be stored in a cool, dark place for up to a year. Once opened, keep in the fridge and use within 3 months. ROCK SAMPHIRE PICKLE makes 600g 600g rock samphire 170ml rice wine vinegar 170g caster sugar 170ml water Pack the rock samphire into a sterilised 1-litre heatproof jar. Combine the vinegar, sugar and water in a pot and bring to the boil. Remove from the heat and pour the boiling liquid over the rock samphire. Seal the jar and leave in a cool place to pickle for 5 days. The sealed jar can be stored in a cool, dark place for up to a year. Once opened, keep in the fridge and use within 3 months. RED WINE SHALLOT GASTRIQUE makes about 1kg 1kg shallots, finely diced 500ml red wine 300ml red wine vinegar Maldon sea salt and freshly ground black pepper Put the shallots into a pan with the red wine. Bring to a simmer and reduce until all the liquid has evaporated. 
Add the red wine vinegar, bring to a simmer and reduce until the liquid has evaporated. Season with a pinch each of salt and pepper. Allow to cool. Decant into a sterilised jar and seal. Store in a cool, dark place for up to 1 year. Once opened, keep in the fridge and use within 3 months. SHALLOT VINEGAR makes 500ml 500g banana shallots, finely diced 500ml red wine vinegar Put the shallots in a sterilised jar and cover with the red wine vinegar. Seal and leave in the fridge to pickle for at least 3 days before using. The vinegar can be kept for up to 3 months in the fridge. WHITE WINE SHALLOT GASTRIQUE makes about 1kg 1kg shallots, finely diced 500ml white wine 300ml white wine vinegar Maldon sea salt and freshly ground black pepper Put the shallots and wine in a pan, bring to a simmer and reduce until all the liquid has evaporated. Add the vinegar, bring back to a simmer and reduce until the liquid has evaporated. Season with a pinch each of salt and pepper. Allow to cool. Decant into a sterilised jar and seal. Store in a cool, dark place for up to a year. Once opened, keep in the fridge and use within 3 months. ARTICHOKE PICCALILLI This is an amazing variation on a traditional piccalilli. I love it with all types of cold meats such as ham, game terrines and any type of pie. It works particularly well with the Rabbit Feast. I have deliberately given quantities for a large batch as the piccalilli really is so versatile and also makes a lovely present. makes about 2kg FOR THE PICCALILLI 1.5kg Jerusalem artichokes, scrubbed clean 4 white-skin onions 600g red peppers, stalks and seeds removed 60g fennel seeds 10g celery seeds 50g black peppercorns 20g cumin seeds 2 litres cider vinegar 75g plain flour 100g English mustard powder 400g caster sugar 30g ground turmeric a pinch of saffron threads FOR THE BRINE 500g fine table salt 4.5 litres water Dice the artichokes, onions and peppers so they are roughly in uniform-sized pieces. For the brine, put the salt and water in a pan and bring to the boil, then simmer until the salt has completely dissolved. Remove from the heat, pour into a bowl and cool. Add the vegetables to the brine, cover the bowl and leave in the fridge for 24 hours. Toast the fennel seeds, celery seeds, peppercorns and cumin seeds in a dry frying pan until aromatic. Tip into a mortar and crush coarsely with the pestle. Put 200ml of the cider vinegar, the flour and mustard powder in a small bowl and whisk into a paste. Pour the remaining vinegar into a large pan and add the sugar, crushed spices, turmeric and saffron. Bring to the boil, then simmer over a medium to high heat. Gradually stir in the flour-mustard paste and continue cooking, stirring, for about 12 minutes or until the liquid has thickened. Drain the vegetables from the brine and pack into sterilised Kilner jars. Pour the hot pickling liquor over the vegetables, leaving a 2cm gap at the top. Seal the jars. The piccalilli can be stored for 6 months in a cool, dark place (leave it for a minimum of 2 weeks before eating). Once opened, keep in the fridge for up to 3 months. ONION TREACLE makes about 100ml 10 large white onions, quartered 1 sachet (8g) pectin powder 50g caster sugar Maldon sea salt Place the onions in a large pressure cooker, or a large pot, and fill halfway up with water. Seal the pressure cooker, or cover the pot with foil and then a lid so that no steam can escape. If using a pressure cooker, steam the onions for about 3 hours. 
If using a pot, steam over a very gentle heat for 6–7 hours until the onions have completely softened but have not coloured. Pour the onions into a sieve set over a bowl and place a heavy weight on the onions so that all the liquid will be drained from them and pass through the sieve. Discard the onions. Measure the strained liquid, then pour it into a pan. Boil to reduce by 80%. Add the pectin and sugar and simmer for about 5 minutes, whisking constantly, until the mixture reaches 107°C. Season lightly with salt to taste. Allow to cool before pouring into a sterilised container. Seal, then store in the fridge for up to 2 months. APRICOT AND LEMON THYME JAM makes 6 x 228ml jars 1kg fresh apricots 50ml water 50ml fresh lemon juice 600g jam sugar 100g unsalted butter, cut into cubes 100g honey 3 sprigs of lemon thyme, leaves picked 1 teaspoon Maldon sea salt Before you begin making the jam, put three or four small plates in the freezer. Cut the apricots in half and remove the stones, then cut each half into quarters. Place the apricots and water in a large pot and cook over a medium heat for 10 minutes to soften. Stir in the lemon juice and sugar and bring the mixture up to 104°C. Reduce the heat and allow to simmer, stirring now and again, for a further 20 minutes or until the jam has reached soft setting point – use the wrinkle test to check. To do this, take the pan off the heat and carefully spoon a little jam on to one of the cold plates. Let it stand for a minute, then push the blob of jam with your finger. If the surface of the jam wrinkles then it has reached setting point; if it is still quite liquid, then put the pan back on the heat and boil the jam for another couple of minutes before testing again, using different plates from the freezer. Meanwhile, make a brown butter by melting and heating the butter cubes in a pan over a high heat until the butter starts to foam and brown and gives off a nutty aroma. Once this occurs, remove from the heat immediately and cool quickly by setting the base of the pan in cold water, to stop the butter from burning. Put the honey in another pan and cook over a medium heat to a dark caramel colour. Remove from the heat and stir in the brown butter. Add to the apricot jam while still warm. Stir through the lemon thyme leaves and salt. Ladle the warm jam into sterilised jars and seal. The jam can be stored in a cool, dark place for up to 6 months. Once opened, keep in the fridge and use within 6 weeks. BLOOD ORANGE MARMALADE makes about 2.5kg 3.5kg blood oranges 150ml fresh lemon juice 1.2kg demerara sugar 1.2kg dark muscovado sugar Peel two-thirds of the oranges and julienne the peel. Using an electric juicer/press, juice all the oranges in two batches – first the oranges that have been peeled and second those that haven't. Keep the fruit pulp and the juice from each batch separate. Pour the fruit pulp from the second batch (unpeeled oranges) into a pan and cover with water. Bring to the boil, then put on the lid and simmer for 1 hour. Strain the liquid into a jug or bowl; discard the fruit pulp. Mix all the blood orange juice (from both batches) with the lemon juice and measure the mixture. Pour into a pan and add enough of the reserved strained liquid from the boiled fruit pulp to make the total liquid up to 3 litres. Add the sugars and the julienned zest. Wrap the reserved fruit pulp from the peeled oranges in a piece of muslin, tie the ends together to make a bag and add to the pan. Slowly bring the mixture up to 110°C, stirring constantly. 
Allow to cool slightly before removing the muslin bag; squeeze any juice from the bag into the marmalade. Decant the marmalade into hot sterilised jars and seal. The marmalade can be stored in a cool, dark place for up to a year. Once opened, keep in the fridge and use within a month. FORCED RHUBARB, HIBISCUS AND GINGER JAM makes 6 x 228ml jars 1.5kg forced rhubarb, cut into 1.5cm pieces zest and juice of 2 lemons 5g root ginger, grated 5g dried hibiscus flowers 150ml water 500g jam sugar Put the rhubarb, lemon zest and juice, ginger, hibiscus flowers and water in a pan over a medium heat and cook for about 10 minutes or until the fruit softens. Stir in the sugar, then simmer, stirring regularly, for about 45 minutes or until the jam has a thick consistency – use the trail test to check for setting. To do this, take a spoonful of the jam and allow it to fall back into the rest; it should fall slowly, forming a trail that will hold its shape on the surface of the jam in the pan for a minute or so. Remove from the heat and allow to cool for 5 minutes before ladling the warm jam into sterilised jars; seal. The jam can be stored in a cool, dark place for up to 6 months. Once opened, keep in the fridge and use within 6 weeks. SOUR TOMATO JAM makes about 1.5kg 10 shallots, thinly sliced 5 garlic cloves, sliced 2.5kg overripe tomatoes (with their vines), roughly chopped 5 bay leaves 150ml olive oil 1 litre tomato juice fresh horseradish, finely grated whey or extra tomato juice fine table salt and cracked black pepper Gently sweat the shallots, garlic, tomato vines and bay leaves in the olive oil until the shallots have softened. Add the tomatoes and cook until all excess liquid has evaporated. In a separate pan, reduce the tomato juice down to a purée consistency. Mix together the reduced tomato juice and the tomato mixture and allow to cool. Discard the tomato vines and bay leaves. Weigh the tomato mixture and calculate 1.5% salt, 1% fresh horseradish and 25% whey or extra tomato juice. Stir these into the mixture with pepper to taste. Decant into sterilised jars and seal. Leave to ferment at a warm room temperature for 5 days; keep away from direct sunlight. When ready, the sealed jars can be stored in the fridge for up to 3 months. Once opened, use within 1 month. WILD BLACKBERRY AND LEMON VERBENA JAM makes 6–8 x 228ml jars 1.5kg wild blackberries 400ml water 1.5kg jam sugar 2 tablespoons fresh lemon juice 50g picked lemon verbena leaves Before you begin making the jam, put three or four small plates in the freezer. Put the berries into a large pan with the water, set on a medium heat and cook for 10 minutes to soften the fruit. Stir in the sugar and lemon juice, turn up the heat and cook rapidly for 15 minutes or until at setting point – use the wrinkle test to check the consistency. To do this, take the pan off the heat and carefully spoon a little jam on to one of the cold plates. Let it stand for a minute, then push the blob of jam with your finger. If the surface of the jam wrinkles then it has reached setting point; if it is still quite liquid, put the pan back on the heat and boil the jam for another couple of minutes before testing again, using different plates from the freezer as necessary. Cut the lemon verbena leaves into small pieces and stir through the jam. Allow to cool for 5 minutes, then ladle the warm jam into sterilised jars and seal. The jam can be stored in a cool, dark place for up to 6 months. Once opened, keep in the fridge and use within 6 weeks.
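Two ratios do most of the work in this section: the standard 1:1:1 pickling liquor scales to any batch by splitting the total three ways (the Pickled Radishes, for instance, use 300ml water, 300g sugar and 300ml vinegar), and the Sour Tomato Jam's additions are percentages of the weight of the cooked mixture. Here is a small sketch of both sums – my own illustration, not the book's, treating the liquor's 'parts' as interchangeable grams and millilitres, as the recipes themselves do:

# Illustrative helpers for the two ratios used above; not from the book.

def pickling_liquor(total):
    # 1:1:1 liquor - equal parts water, caster sugar and vinegar.
    part = total / 3
    return {"water_ml": part, "caster_sugar_g": part, "vinegar_ml": part}

def sour_tomato_additions(mixture_weight_g):
    # Sour Tomato Jam: 1.5% salt, 1% horseradish, 25% whey (or extra
    # tomato juice), all by weight of the cooled tomato mixture.
    return {"salt_g": mixture_weight_g * 0.015,
            "horseradish_g": mixture_weight_g * 0.01,
            "whey_g": mixture_weight_g * 0.25}

print(pickling_liquor(900))        # 300 ml water, 300 g sugar, 300 ml vinegar
print(sour_tomato_additions(1200)) # 18 g salt, 12 g horseradish, 300 g whey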
DAIRY, BUTTERS AND OILS Butter has become quite the obsession since our opening. Our bone marrow butter has become a cult classic, Dean's chicken skin butter legendary and Simon's sour smoked whiskey butter a revelation. I find it fascinating how certain fats and oils take to new flavours and lift dishes to the next level. For example, the aroma from dulse butter when it hits a pan, finishing a roast piece of monkfish, is quite something. The intense flavour ember oil gives to a tartare or yoghurt is mind-boggling! I think it's worth always having a number of oils in the cupboard as well as some flavoured butters in the fridge. Then you can effortlessly enhance a simple dish cooked at home. BONITO BUTTER For this recipe, we use the skin from smoked eel as we don't want anything in the kitchen to go to waste. However, you can use the skin of any smoked fish such as mackerel. Alternatively, the butter can be made without the fish skin, in which case increase the bonito flakes to 10g. makes about 500g 500g unsalted butter, cut into small cubes 80g skin from smoked fish 5g bonito flakes Melt the butter in a pan over a high heat and cook until the butter starts to foam and takes on a really golden colour. Reduce the heat, add the fish skin and sweat for 1–2 minutes. Remove the fish skin, then transfer the butter to a blender or food processor and add the bonito flakes. Blend together. Allow to cool. Store in an airtight container in the fridge for 1 month, or freeze for 3 months. CHICKEN AND SAVORY BUTTER makes about 600g 500g Cultured Cream 10g herb savory, leaves picked 100g chicken or duck fat ½ garlic clove, crushed a small bunch of thyme zest of ½ lemon 40g fine table salt Whisk the cultured cream until it separates into butter and buttermilk. Strain the buttermilk into a bowl and reserve the butter in the sieve. Blend the savory into the buttermilk really well in a blender or food processor, then strain through a fine sieve. Put the chicken or duck fat, garlic, thyme and lemon zest in a pot and heat to 90°C. Remove from the heat and allow to infuse for 20 minutes, then strain the mixture into the bowl of a stand mixer fitted with the paddle attachment and allow to cool. Add the butter from the cultured cream. Whip the two fats together. Add the buttermilk and salt and whip until combined. Store, wrapped in clingfilm, in the fridge for up to a month or freeze for longer storage. CULTURED BUTTER makes 750g 1 litre double cream 200ml buttermilk about 5g fine table salt Pour the cream into a pan and heat to 35°C. Stir in the buttermilk. Pour into a container and cover with a piece of muslin secured with an elastic band or string. Leave in a warm, dry area of your kitchen to ferment for 24 hours. Season the mix with the salt and refrigerate. Once chilled, transfer the mixture to a stand mixer fitted with a whisk attachment. Whisk on a medium-high speed for about 15 minutes. It is ready when the liquid (buttermilk) separates from the solid butter. Strain the buttermilk from the butter through muslin into an airtight container. Pack the butter together into a block and wrap in greaseproof paper or put into another container. Store both in the fridge for up to 3 months. NORI BUTTER makes about 250g 2 sheets of dried nori (3g each) 5g dried wakame flakes 1 black peppercorn 250g unsalted butter, cut into small cubes Preheat the oven to 150°C fan/170°C/Gas Mark 3–4. Toast the nori and wakame with the peppercorn on a baking tray in the oven for 10 minutes. 
Melt the butter in a pan over a high heat and cook until it starts to foam and takes on a really golden colour. Immediately remove from the heat and cool quickly to stop the butter cooking further. Blend the toasted ingredients with the butter in a blender or food processor. Allow to cool, then store in an airtight container in the fridge for a month, or freeze for 3 months. SMOKED BONE MARROW BUTTER makes 275g 250g Smoked Butter, at room temperature 25g Smoked Bone Marrow Maldon sea salt Put the smoked butter in a stand mixer fitted with a paddle attachment. Turn the mixer to a high speed, add the bone marrow and mix to incorporate. Add salt to taste. Store in an airtight container in the fridge for up to 1 week. Remove from the fridge 10 minutes before serving. SMOKED BUTTER makes 250g 200g applewood chips 250g unsalted butter, diced and frozen Take a flat tray with a steam insert (such as a deep roasting tray that will hold a flat steaming rack) and spread the applewood chips over the bottom of the tray. Warm it over a medium heat until the chips start to smoke. Remove from the heat. Place the frozen butter on a tray on the steam insert/steaming rack and set this over the smoking chips. Completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the butter. Leave to lightly smoke for 10 minutes. Store in an airtight container in the fridge for up to 6 weeks. WHISKEY CULTURED BUTTER makes 750g about 4 teaspoons whiskey 750g Cultured Butter (see opposite), at room temperature Fold the whiskey into the butter until fully combined. Taste and add more whiskey if you like. Store wrapped in greaseproof paper or in an airtight container in the fridge for up to 3 months. FRESH CURD Be sure all the ingredients are cold before starting. makes about 800ml 1.25 litres whole milk 60ml double cream 25ml buttermilk 1 teaspoon fine table salt 5g vegetable rennet Put all the ingredients in a bowl and whisk together to combine. Pour into a pan and set over a gentle heat. Bring the mixture to 36°C, then remove from the heat and allow to cool to room temperature. If you are looking for a loose, light curd then this is now ready to use. If you want a thicker, more stable curd, line a large sieve with a piece of muslin and set it over a deep bowl, then pour the mixture into the sieve. Gather up the edges of the cloth and secure. Leave in the fridge overnight. The next day, the thicker curd will be left in the cloth and some whey will have passed into the bowl (this whey can be reserved and used for ferments). The curd can be seasoned, smoked or flavoured with herbs and spices as desired. Both the loose and thick curd can be stored in an airtight container in the fridge for up to 3 days. CULTURED CREAM makes about 600ml 500ml double cream 100ml buttermilk Place the cream in a pan and heat to 35°C. Stir in the buttermilk. Pour the mixture into a sterilised container (a plastic container is fine) and cover with muslin secured with an elastic band or string – the cream needs to breathe, hence the cloth covering. Leave in a warm, dry place to culture for about 4 days. The mixture will thicken and become sour. Once ready, the cultured cream can be stored in the fridge in an airtight container for up to 2 weeks. KEFIR makes about 2.4 litres 2 litres whole milk (unpasteurised if possible) 100g kefir grains 400ml double cream Gently warm the milk to approximately 30°C, then add the kefir grains (they are dormant and the heat will activate them). Stir in the cream. 
Decant into a sterilised plastic, glass or crockery container and cover with muslin secured with a rubber band or string. Leave in a warm spot in the kitchen to culture until the mixture becomes quite thick and the aroma is pleasant and acidic. This will usually take about 24 hours. After culturing is complete, strain the grains out of the finished kefir (the grains can be used again to make more batches of kefir). The finished kefir will keep in the fridge in a sealed container for 2–3 weeks. BEN'S BEESWAX CREAM This recipe was developed by Ben, who thought of the idea one day while maintaining the hives on the rooftop of The Dairy. He was blowtorching the wooden frames of the hives, to clean them, and was taken by the incredible aroma that was released. This is a delicious accompaniment to the Hibiscus Doughnuts or to spoon over a dessert just as you would cream. makes about 500g 75g comb from honey, broken up to release the honey 375ml UHT double cream 60g honey 100g egg yolks Using a blowtorch, burn the exterior of the comb on all sides for about 2 minutes or until you start to smell the honey caramelising. Put the comb in a suitable-sized pan with the UHT cream. Slowly, while stirring with a wooden spoon, bring the mixture to the boil. As soon as it starts to boil, remove from the heat and allow to infuse for 10 minutes. Place the pan back on the heat and bring just to the boil again. Remove from the heat and pass the cream through a fine sieve into a deep bowl, pushing hard on the comb to release as much of the honey as possible through the sieve. Leave to stand and the mix will produce a skin. Skim this off the top. Allow a skin to form again three further times and remove it each time. (Doing this will prevent the beeswax cream from cracking during baking.) In a small pan, caramelise the honey to a light amber colour. Allow to cool until it is just warm, then mix it with the egg yolks. Whisk the egg yolk mixture with the cream mixture until combined – try to avoid whisking in any air bubbles. If bubbles do appear, bang the bowl on a hard surface to deflate them. Preheat the oven to 95°C fan/115°C/Gas Mark ¼. Pour the mixture into a baking dish. Bake for 1 hour. Allow to cool at room temperature, then keep in the fridge until required. CRAB OIL makes about 350ml 800g–1kg crab shells, cleaned 200ml vegetable oil 50g fennel seeds 50g coriander seeds a pinch of dried chilli flakes 2 bay leaves 200ml rapeseed oil zest of 2 lemons 20g dried wakame Smash the crab shells into small pieces using a mallet or hammer. Put the shells and vegetable oil into a medium-sized pan over a high heat. Cook for 3–5 minutes, scraping the bottom of the pan, until the shells are nicely toasted. Add the spices and bay leaves for the last 2 minutes of cooking so they get toasted too. Lower the heat and add the rapeseed oil. Bring up to just below a simmer (85°C), then cook for 45 minutes. Remove from the heat and add the lemon zest and wakame. Cover the top of the pan with clingfilm and leave to infuse for 1 hour. Strain the oil and decant into jars. Keep in the fridge – the oil can be used straight away – for up to a week, or in the freezer for up to 3 months. 
HERB OIL makes about 200ml a bunch of flat-leaf parsley, leaves picked and chopped 3–4 sprigs of tarragon, leaves picked and chopped ½ bunch of chervil, leaves picked and chopped a bunch of chives, chopped a bunch of spring onion tops (the green part), chopped 200ml extra virgin olive oil Maldon sea salt Blend the herbs and onion tops with the oil in a blender or food processor. Pour the mixture into a pan and warm over a high heat, stirring constantly. Season with salt. Pour the mixture back into the blender or food processor and blend for 1 minute. Strain the oil through a piece of muslin into a tray set over ice so it cools quickly. The oil can be kept in an airtight container in the fridge for a week or in the freezer for 3 months. GARLIC OIL makes 1 litre 1 litre extra virgin olive oil 5 garlic cloves, finely sliced Put the oil and garlic into a pot and heat to 70°C. Remove from the heat and allow to infuse for 1 hour at a warm room temperature. Strain the oil, then decant into an airtight jar or bottle. Keep in the fridge for up to 1 month. KOMBU OIL makes 150ml 100g dried kombu 150ml grapeseed oil Preheat the oven to 110°C fan/130°C/Gas Mark ½–1. Toast the kombu in a small tray in the oven for 1 hour. Blend the toasted kombu with the oil in a blender or food processor, then strain through a fine sieve. The oil can be stored in a sealed jar in the fridge for up to 3 months. EMBER OIL Please prepare this recipe with care because dealing with burning hot embers can be dangerous. makes 1 litre 1 litre vegetable oil 1 ember of white-hot charcoal from a barbecue or fire Pour the oil into a large pot. Wearing heavy barbecue gloves and using appropriate tongs, gently place the burning hot ember into the oil. Cover with a lid and leave to infuse as it cools. Once cool, strain through a fine sieve. Store in an airtight container or jar in a cool, dark place. NORI OIL makes about 350ml 350ml grapeseed oil 5 sheets of dried nori (3g each), cut into small pieces with scissors a bunch of parsley, leaves picked a bunch of chervil, leaves picked a bunch of tarragon, leaves picked a bunch of dill, leaves picked About 2 hours before required, put the oil into the freezer to chill. Preheat the oven to 180°C fan/200°C/Gas Mark 6. Toast the nori on a baking tray in the oven for 5 minutes. Blanch the herb leaves in boiling water for 2 minutes; refresh in iced water and drain well. Chop the herbs roughly. Put the oil and toasted nori into a blender or food processor and blend for 2 minutes. Add the herbs and blend again for 2 minutes. Strain the mixture through a fine sieve into a bowl set over ice (keeping the oil cool helps to retain the bright green colour). Store the nori oil in a sealed container in the fridge for 1 month or in the freezer for 3 months. SICHUAN OIL This is not for the faint-hearted! It is best to make it outdoors or in a very well-ventilated area, and please prepare with extreme care because dealing with oil at this temperature can be dangerous. makes about 500ml 500ml vegetable oil 1 cinnamon stick 4 star anise 200g Sichuan pepper 100g dried chilli flakes 2 garlic cloves, finely sliced Pour the oil into a pan and add the cinnamon stick, star anise and Sichuan pepper. Heat to 220°C. Carefully (wearing protective gloves) pour the hot oil over the chilli flakes and garlic in a heatproof bowl. Leave to cool to room temperature. The oil is ready to be used straight away. Decant into an airtight container or jar and store in the fridge for up to a month. 
LOBSTER OIL makes about 350ml 800g–1kg lobster shells, cleaned 200ml vegetable oil 50g fennel seeds 50g coriander seeds a pinch of dried chilli flakes 2 bay leaves 200ml rapeseed oil zest of 2 lemons 20g dried wakame Smash the lobster shells into small pieces using a mallet or hammer. Put the shells and vegetable oil into a medium-sized pan over a high heat. Cook for 3–5 minutes, scraping the bottom of the pan, until the shells are nicely toasted. Add the spices and bay leaves for the last 2 minutes of cooking so they get toasted too. Lower the heat and add the rapeseed oil. Bring to just below a simmer (85°C) and cook for 45 minutes. Remove from the heat and add the lemon zest and wakame. Cover the top of the pot with clingfilm and leave to infuse for 1 hour. Strain and decant into jars. Keep in the fridge – the oil can be used straight away or stored for up to a week – or in the freezer for up to 3 months. POWDERS, SALTS AND CRISPS Everyone should have a dehydrator – they are cheap as chips on eBay. Dehydrating, or drying something out completely, can add a serious depth of flavour to all kinds of food, from fruits and vegetables to fish. Try slicing a scallop thinly and dehydrating it overnight at around 65°C: the natural sugars caramelise and you end up with a super-intense, sweet scallop crisp that could be crumbled over a ceviche to intensify the flavour. That is why we dehydrate things – it intensifies flavours. We add dried mushrooms to a game broth and the earthy flavours go off the charts. Poach a quince, then stick it in a dehydrator for 6 hours and see what you think! You will not be disappointed. Drying also means you avoid waste. I used to throw away lemons in the fridge that were not being used up, but now I peel them and dehydrate the peel. Once I have built up a large batch I blend the peel with some salt and fennel seeds to make the most amazing aromatic salt to season lamb, chicken or fish. When we prepare wild mushrooms, we keep all the trimmings and dry them. Then, when we have a good batch it is blended into a powder. This is a premium product generated from WASTE! Have a look at how much it will cost you to buy a pack of mushroom powder in your local supermarket! The price should be encouragement enough to try making your own at home. SEAWEED CRACKERS makes 300–400g 900ml water 190g tapioca pearls 15g Nori Powder 5g Maldon sea salt vegetable oil for deep-frying Pour the water into a pan and bring to the boil. Add the tapioca and cook over a gentle heat, stirring constantly, until thick and translucent. Stir in the nori powder and salt. Pour on to a baking tray lined with greaseproof paper or a silicone mat and spread out into a thin layer. Leave to dry out at a warm room temperature (or in your oven at the lowest setting or in a dehydrator). Once completely dry, break into crackers. If not frying straight away, the crackers can be kept in an airtight container. Just before serving, heat oil in a deep pan or deep-fat fryer to 200°C and deep-fry the crackers until puffed and golden. Drain on kitchen paper. LEVAIN CRISPS This is a great way to use up any excess levain/sourdough starter. Ideally use a really sour levain here. makes about 250g 250g levain/sourdough starter ½ teaspoon fine table salt Preheat the oven to 250°C fan/its highest setting. Line a baking tray with a silicone mat, or greaseproof paper brushed with oil, and place in the oven to heat up. 
Put the levain and salt in a blender or food processor and blend with enough water to create a double cream consistency. Spread a really thin layer of the mixture on the hot lined tray and bake until it just turns golden. Remove from the oven and allow to cool slightly before breaking into crisps. Repeat with the remaining mixture. The crisps can be kept in an airtight container for up to 1 week. PUFFED BARLEY makes about 200g 200g pearl barley vegetable oil for deep-frying Maldon sea salt Put the barley in a pot and cover with water. Bring to the boil, then simmer until completely overcooked and soft – this will take about 40 minutes. Drain the barley in a sieve and rinse to remove excess starch. Spread out the barley grains on a large tray in one layer. Dry out completely in a dehydrator, or in the oven at its lowest setting (this will take about 6 hours). If not frying straight away, the dried grains can be stored in an airtight container at room temperature. Just before serving, heat oil in a deep pan or deep-fat fryer to 250°C and deep-fry the barley grains for 30 seconds or until puffed. Drain on kitchen paper and sprinkle with a pinch of salt. FRIED BREAD makes about 300g 300g yesterday's sourdough 50ml olive oil 1 garlic clove, crushed zest of 1 lemon 3 sprigs of lemon thyme, leaves picked Maldon sea salt Chop up the bread, then dry out completely in a dehydrator, or in the oven at its lowest setting, for 3–4 hours. Blend the dried bread in a food processor to a breadcrumb consistency. Set a wide-bottomed pan on a medium heat and add the olive oil followed by the breadcrumbs. Toast, using a whisk to stir the crumbs, for about 10 minutes or until golden brown. Add the crushed garlic, lemon zest and thyme with a pinch of salt and mix in. Tip the mixture on to a flat tray lined with kitchen paper to cool. Store in an airtight container for up to 5 days. NORI SALT makes about 110g 2 sheets of dried nori (3g each) 100g Maldon sea salt 5g dried wakame Preheat the oven to 160°C fan/180°C/Gas Mark 4. Place the sheets of nori on a baking tray and toast in the oven for 5 minutes. Blend the toasted nori with the salt and wakame in a blender or food processor until finely ground. Store in an airtight container in your store cupboard. SPICED SALT makes 120g 10g caraway seeds 10g black peppercorns 100g fine table salt Toast the caraway seeds and peppercorns in a small dry frying pan until aromatic. Crush coarsely in a mortar and pestle. Blend the spices through the salt in a blender or food processor. Store in an airtight container at room temperature. BLACKBERRY LEAF POWDER makes about 100g 2 shopping bags of unsprayed blackberry leaves Rinse the leaves well in a colander under cold running water, then shake or spin dry. In a dehydrator, or your oven set to the lowest heat, dry out the leaves completely – this will take 6–8 hours. Allow to cool, then blend to a fine powder in a blender or food processor. Store in an airtight container in your store cupboard. NORI POWDER yield depends on how many nori sheets you begin with sheets of dried nori (use a minimum of 4 sheets as fewer will not turn in the blender) Preheat the oven to 160°C fan/180°C/Gas Mark 4. Place the sheets of nori on a baking tray and toast in the oven for 5 minutes. Allow to cool completely. Cut the sheets into small pieces. Put into a blender or food processor (make sure the bowl is really dry) and blend to a fine powder. Store in an airtight container in your store cupboard. 
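The seasoned salts above are fixed-ratio blends, so a batch of any size just needs the proportions held constant. A quick scaling sketch (mine, not the book's; the ingredient names are just labels):

# Illustrative scaling helper for fixed-ratio blends such as the
# Nori Salt and Spiced Salt.

def scale_blend(base, target_total_g):
    total = sum(base.values())
    return {name: round(g * target_total_g / total, 1)
            for name, g in base.items()}

# Nori Salt as written: 2 sheets of nori (6 g), 100 g salt, 5 g wakame.
print(scale_blend({"nori": 6, "maldon_salt": 100, "wakame": 5}, 500))

# Spiced Salt: 10 g caraway, 10 g peppercorns, 100 g fine salt,
# scaled down to a 60 g batch.
print(scale_blend({"caraway": 10, "peppercorns": 10, "fine_salt": 100}, 60))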
MUSHROOM POWDER

makes about 100g

500g chestnut mushrooms, finely sliced

Spread the mushroom slices on a large baking tray in a single layer. Place in a dehydrator, or the oven set at about 70°C fan/90°C, and leave to dry out completely. This should take about 8 hours. Once completely dry, tip the mushrooms into a high-speed blender and blend to a fine powder (if your blender is not high-speed, blend as fine as possible, then pass through a sieve to remove any lumps). Store in an airtight container in your store cupboard.

STOCKS, SAUCES AND SEASONINGS

BROWN CHICKEN STOCK

makes about 3 litres

4kg chicken wings
1 pig's trotter, split
5 litres water
1 bulb of garlic, cut in half (horizontally)
3 banana shallots, cut in half

Preheat the oven to 220°C fan/240°C/Gas Mark 9. Roast the wings and trotter in large roasting trays for 35 minutes. Tip the wings and trotter into a large stock pot. Deglaze the roasting trays with some of the water by bringing to the boil, stirring and scraping. Add to the stock pot with the remaining water. Bring to the boil and skim. Reduce to a gentle simmer and cook for 3 hours, skimming regularly. Add the garlic and shallots. Simmer for a further 1½ hours. Strain the stock into a large container. Set the sieve containing the chicken wings and trotter over a bowl and place a weight on top. Leave for 1 hour to extract all the juices. Add these to the strained stock. Strain through a fine sieve. The stock can be stored in the fridge for a couple of days but can also be frozen in portions to be used at a later date.

LAMB STOCK

makes about 2.5 litres

2kg lamb bones, chopped
a drizzle of white wine
vegetable oil
500g lamb trimmings (not too fatty if possible), diced
3 litres Brown Chicken Stock (see above)

Preheat the oven to 220°C fan/240°C/Gas Mark 9. Roast the lamb bones in two large roasting trays for 20–30 minutes or until golden. Remove the bones and set aside. Skim off any fat from the roasting juices left in the trays, then deglaze with the white wine by bringing to the boil, stirring and scraping. Save these deglazed roasting juices. Heat about 1cm of vegetable oil in a wide-based pan over a high heat. Add the lamb trimmings, in batches, and brown to a dark golden colour. Place the browned lamb trimmings, the roasted bones and reserved roasting juices in a large pot and add the brown chicken stock. Bring to the boil and skim, then simmer gently for 5 hours, skimming as required. Remove from the heat and allow to sit for 2 hours before straining this lamb stock through a fine sieve. If not needed straight away, the stock can be stored in the fridge for a couple of days or frozen.

LAMB SAUCE

makes about 500ml

2kg lamb bones, chopped
1 bottle of white wine
vegetable oil
1.25kg lamb trimmings (not too fatty if possible), diced
3 litres Brown Chicken Stock
60ml fresh lemon juice
zest of ½ lemon
a sprig of rosemary
2 garlic cloves, crushed

Preheat the oven to 220°C fan/240°C/Gas Mark 9. Roast the lamb bones in two large roasting trays for 25–30 minutes or until golden. Remove the bones and set aside. Skim off any fat from the roasting juices left in the trays, then deglaze with half of the white wine by bringing to the boil, stirring and scraping. Save these deglazed roasting juices. Heat about 1cm of vegetable oil in a wide-based pan over a high heat. Take the leanest 500g of the lamb trimmings and brown, in batches, to a dark golden colour. If the oil catches badly while you are browning the lamb, clean out the pan and use fresh oil for the next batch.
Place these browned lamb trimmings, the roasted bones and reserved roasting juices in a large pot and add the brown chicken stock. Bring to the boil and skim, then simmer gently for 5 hours, skimming as required. Remove from the heat and allow to sit for 2 hours before straining this lamb stock through a fine sieve.

Brown the remaining lamb trimmings in batches in a wide-based pan as before, starting with the fattiest trimmings and a little oil, then keep reusing the fat to brown the rest, ensuring the fat doesn't burn. Once all these trimmings are browned, drain off the majority of the fat before deglazing the pan with the lemon juice. Put all the trimmings back into the pan, add 1 litre of the lamb stock and bring to the boil. Skim the surface. Reduce the heat slightly and allow to reduce until the liquid is no longer covering the lamb trimmings. Add the remaining lamb stock and allow it to reduce once again by half. Boil the remaining wine to reduce by half. Add to the sauce. Remove from the heat and add the lemon zest, rosemary and garlic. Leave to infuse for 10 minutes before straining through a fine sieve. The sauce can be kept in the fridge for a couple of days or it can be frozen in portions to use at a later date.

VENISON SAUCE

makes about 200ml

1kg venison bones, cut into small pieces
1 litre Brown Chicken Stock
5 juniper berries
200g venison trimmings
50ml vegetable oil
50g unsalted butter
200ml Madeira
2 garlic cloves, sliced
2 sprigs of thyme
1 tablespoon Cabernet Sauvignon vinegar

Preheat the oven to 220°C fan/240°C/Gas Mark 9. Roast the venison bones in a roasting tray for 35 minutes. Meanwhile, bring the brown chicken stock to the boil and reduce by a quarter. Toast the juniper berries in a small dry pan until they smell aromatic, then lightly crush them; set aside. Add the roasted venison bones to the reduced stock. Deglaze the roasting tray with a little stock from the pot and pour back into the pot. Bring back to the boil, then simmer for 1½ hours. Leave to cool for 20 minutes with the bones in, then remove them and strain the stock.

While the stock is simmering, caramelise the venison trimmings in a very hot pan with the vegetable oil. Add the butter and continue to cook, scraping the bottom of the pan, until the meat has taken on a dark, roasted colour. Remove the meat from the pan (reserve it). Strain off the fat (reserve it), then deglaze the pan with the Madeira and reduce by half. Add the venison stock and two-thirds of the caramelised meat to the Madeira reduction. Bring to the boil and skim, then keep on a light rolling boil for 20 minutes. Strain into a clean pan and add the remaining caramelised venison trimmings. Reduce by a third to a glossy sauce. Add the juniper berries, garlic, thyme and vinegar. Stir in the reserved fat (from caramelising the venison trimmings) to taste. Allow to infuse for a couple of minutes, then strain the sauce through muslin and chill quickly over an ice bath so that the fats emulsify. If not using straight away, keep in the fridge and warm through before serving.

MISO

Miso is something that you might not necessarily choose to make at home but it is interesting to see the process behind it. If you do want to try making it, then really any carbohydrate can be mixed with the 'koji' (the fermentation culture) to create different flavours. We have experimented with everything from beans to bread.
During the process of making the koji and miso, it is important that the environment is very clean and the equipment sterilised so as not to introduce the wrong type of bacteria. In any recipes in this book where miso is used, it is perfectly acceptable to use your favourite shop-bought variety if you haven't made your own!

KOJI

makes about 670g

500g plump white rice
10g koji spores/starter (2% of dry rice weight)

Soak the rice in cold water overnight. Drain the rice and spread in a thin layer in a steamer tray. Place this tray over a roasting/baking tray of water. Cover with a lid or foil and gently steam the rice for 1½ hours – it should be quite dry and barely cooked. Set aside on the steamer tray. Put the koji spores in a small blender and blend to a dry powder. When the rice has cooled to about 35°C, sprinkle over the koji powder and mix well. Spread out the rice in a thin layer on the steamer tray. Set this over a tray of room-temperature water. Cover the top of the steamer tray with clingfilm and wrap around the bottom tray so that the rice is sealed in with the water. Pierce a few holes in the film. Leave in a warm, moist environment such as a kitchen for 5 days, stirring the rice occasionally. A white mould should appear on the rice, similar to the mould found on the outside of brie, and the rice will kind of break down. The rice and mould are the koji. Once it reaches this stage, it can be used straight away or stored in the fridge, tightly wrapped, for up to 4 days.

MISO

makes about 1kg

500g dried white beans (we use haricot beans), soaked in water overnight
500g Koji (see above)
fine table salt

Drain the beans and place in a pot. Cover with water and bring to the boil, then simmer the beans, skimming occasionally, until they are overcooked and starting to fall apart. Drain and cool down to a temperature of about 35°C. Mix in the koji. Weigh the mixture and calculate 7% of this – this is the amount of salt to add. Mash or mince the mixture to a rough paste. Spoon into a sterilised container, packing tightly to remove any air bubbles. Top with a layer of salt. Place a sterilised weight over the top of the mixture, then cover the container with muslin. Leave in a cool, ambient environment (a cellar is ideal) for at least 30 days or until the saltiness is taken over by sweet and umami flavours. Once ready, store in an airtight container in the fridge.

HERB MISO

This is a variation of our traditional miso. We add 500g of mixed herbs and they get pushed through the mincer with the bean and koji mixture. A mixture of chervil, parsley and tarragon is a good place to start but most herbs will work and it is a good way to use leftovers. The rest of the process remains the same.

ONION MISO

This is another variation on the original but this time 250g spring onion tops and 250g wild garlic leaves get minced into the mixture. The rest of the process remains the same.

BREAD MISO

This variation is a bit different. Tear 500g stale bread into chunks, then soak in water for about 2 hours so that the bread completely softens. Drain and squeeze out as much water as possible. Mix the bread with the koji, then weigh the mixture and add 5.5% salt. The rest of the process remains the same. Once ready, if the mixture is very sloppy, it can be very gently cooked in a pan to evaporate some of the liquid.
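To put rough numbers on the salting in these miso recipes (the batch weight here is just an assumed example, not a measured one): if your drained bean and koji mixture comes to 1.1kg, then 7% of 1,100g is 77g of salt; for the bread miso at 5.5%, the same weight would take about 60g. Always weigh your own mixture and work from that figure.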
ROAST GARLIC MISO PURÉE

makes about 650g

350g garlic cloves (peeled)
a drizzle of vegetable oil
demerara sugar
175g unsalted butter, cut into small cubes
150ml sherry vinegar
175g sweet white miso
175g malt extract

Preheat the oven to 180°C fan/200°C/Gas Mark 6. Toss the garlic cloves in the oil and coat them in demerara sugar. Wrap the cloves loosely in foil to create a parcel. Roast for 25 minutes. Open the parcel and return to the oven to roast for a further 5 minutes. Tip the garlic into a food processor and blend the cloves to a smooth purée. Put the butter into a pan set over a high heat and cook until the butter starts to foam, brown and take on a nutty aroma. Immediately remove from the heat and cool quickly to stop the butter from burning. Boil the vinegar in another pan until reduced to 75ml. Add the brown butter, vinegar, miso and malt extract to the garlic purée and blend until smooth. Cool. The purée can be stored in the fridge in an airtight container for up to 1 month.

DASHI

makes about 1 litre

25g dried kombu
1 litre distilled water, boiled and cooled (or use filtered water or still mineral water)
1 sheet of dried nori (about 3g)
15g bonito flakes
2 teaspoons white soy sauce
10 wild garlic leaves (if unavailable use 2 sliced garlic cloves)
Maldon sea salt

Add the kombu to the water in a pan and bring to a very gentle simmer (do not boil). Simmer for 1 hour. Strain the liquid through a fine sieve into a jug. Season with the nori, bonito flakes, soy sauce, wild garlic leaves and a pinch of salt. Allow to infuse for 5 minutes. Taste to check the seasoning and adjust as required: the dashi should be salty and savoury with umami. Strain the dashi through the fine sieve. Once cooled, it can be stored in a sealed container in the fridge for 2–3 days.

WHITE ONION DASHI

makes about 1.5 litres

1.5 litres water, boiled and cooled (or use filtered water or still mineral water)
20g dried kombu
1 medium white onion, sliced
10g bonito flakes
white soy sauce
Maldon sea salt

Combine the water, kombu, onion and bonito flakes in a pan. Bring to a very gentle simmer (do not boil), then cover with a lid and keep at a very low simmer for 1 hour. Strain and season with white soy sauce and salt to taste. Once cooled, the dashi can be stored in a sealed container in the fridge for 2–3 days.

ELDERFLOWER VINEGAR

The cider vinegar in this recipe is optional but will speed up the process.

makes about 2 litres

280g caster sugar
2 litres water
500g elderflower stems
1 tablespoon cider vinegar (optional)

Put the sugar and water in a pot, set over a medium heat and bring to the boil. Lower the heat and simmer gently for a few minutes until the sugar has completely dissolved. Allow to cool (this is a 14% sugar syrup). Pour the syrup into a sterilised container and add the elderflower stems. Cover the top with muslin. Keep at room temperature for about 2 weeks, agitating the liquid each day by stirring with a sterilised spoon or moving the container. After 2 weeks, some bubbles should have started to appear and the liquid should be sour. Add the cider vinegar if you are using it. Again, leave at room temperature, covered with cloth – it is a matter of taste how long you leave the vinegar to sour, so taste it regularly with a sterilised spoon. When it reaches your desired acidity, strain it and decant into sterilised narrow-necked bottles. Seal the tops of the bottles well (we use a wax seal). Allow the vinegar to mellow in the bottle in a cool, dark place for 6 months to a year before use.
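If you want to scale the elderflower vinegar, note that the '14% sugar syrup' measures the sugar against the water: 280g ÷ 2,000g = 14%. So, as an illustration, for 3 litres of water you would dissolve 420g of sugar and increase the elderflower stems in proportion.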
MAYONNAISE

makes about 300g

3 egg yolks
1 teaspoon Dijon mustard
1 teaspoon white wine vinegar
250ml rapeseed oil
50ml water (if needed)
fine table salt

Put the egg yolks, mustard and vinegar in a blender or food processor and blend together. Drizzle in the oil while blending to emulsify to a mayonnaise. Let it down with a little water if it gets too thick. Season with a pinch of salt to taste. The mayonnaise can be stored in the fridge, covered, for a couple of days.

SICHUAN MAYONNAISE

makes about 300g

3 egg yolks
1 teaspoon Chardonnay vinegar
250ml rapeseed oil
50ml water (if needed)
fine table salt
Sichuan Oil

Put the egg yolks and vinegar in a blender or food processor and blend together. Drizzle in the rapeseed oil while blending to emulsify to a mayonnaise. Let it down with a little water if it gets too thick. Season with a pinch of salt and Sichuan oil to taste. Store in an airtight container in the fridge for up to 2 days.

SMOKED COD'S ROE EMULSION

makes about 400g

150g cod's roe
7% brine (see here)
applewood chips for smoking
50g sourdough bread, crusts removed
whole milk
5g Dijon mustard
½ garlic clove (peeled)
250ml vegetable oil
50ml water (if needed)
fresh lemon juice
Maldon sea salt

Brine the roe in a 7% brine in the fridge for 3 hours. Remove from the brine. Take a flat tray with a steam insert (such as a deep roasting tray that will hold a flat steaming rack) and spread the applewood chips over the bottom of the tray. Warm it over a medium heat until the chips start to smoke. Remove from the heat. Place a tray of ice cubes on the steam insert and put the roe on a heatproof tray over the ice. Set over the smoking chips. Completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the roe. Leave to smoke for 2–3 minutes. Repeat the process, heating the woodchips and smoking the roe for another 2–3 minutes.

While the roe is being smoked, soak the sourdough in milk. Squeeze the liquid from the sourdough, then put the bread in a blender or food processor. Add the roe, mustard and garlic and blend until smooth. Drizzle in the vegetable oil while the blender/food processor is running until emulsified to a mayonnaise consistency. Let down with a little water if the emulsion is too thick. Season with lemon juice and salt to taste. The emulsion can be stored in the fridge, in an airtight container, for a couple of days.

FOR THE TABLE

I have created a healthy selection of larder recipes in the first part of this book, certainly more than you might find at the back of most cookbooks. For me, our larder is the backbone of our recipes; it is our secret weapon. In my opinion, the better you stock your larder, the easier it is to create interesting and exciting dishes without breaking a sweat, and as such, most if not all of the recipes in this second half of the book will refer to a recipe from the larder chapter.

I have broken this section into chapters that reflect what we do in our restaurants – snacks, garden, sea, land and sweet – and the recipes flow from spring and summer into autumn and winter. I hope you enjoy cooking the recipes. Don't worry if things don't always go to plan – some of our greatest dishes were created by accident!

'Mistakes are almost always of a sacred nature. Never try to correct them. On the contrary: rationalise them, understand them thoroughly. After that, it will be possible for you to sublimate them.'
– Salvador Dalí

BASES AND BLENDS, CHEF'S COCKTAILS AND HOME BREWS

BASES AND BLENDS

SALT SOLUTION

Some of our cocktails benefit from a spray of this salt solution over the top before serving. It adds a nice balance of flavour between sweet and savoury notes.

makes 200ml

100g Maldon sea salt
100ml water

Dissolve the salt in the water in a pan over a medium heat. Allow to cool. Decant into a spray bottle or atomiser and keep at room temperature.

BLACKBERRY SYRUP

makes about 700ml

250g blackberries
250g caster sugar
250ml water

Combine the blackberries, sugar and water in a pan and simmer gently until the fruit has completely softened and the sugar has dissolved. Strain through a fine sieve, pressing on the fruit in the sieve so that all the juice passes through. The syrup can be stored in a sealed jar or bottle in the fridge for 2 weeks.

BLACKBERRY SHRUB

makes 875ml

500ml Blackberry Syrup (see above)
375ml apple cider vinegar

Mix the syrup with the vinegar. Store in sealed sterilised jars in a cool, dark place for 2 months before use.

SUGAR SYRUP

makes about 500ml

250g caster sugar
250ml water

Dissolve the sugar in the water in a pan over a low to medium heat. Ensure that the sugar is fully dissolved. Allow to cool. The syrup can be stored in a sealed jar or bottle in the fridge for 6 months.

BROWN SUGAR SYRUP

makes about 500ml

250g demerara sugar
250ml water

Dissolve the sugar in the water in a pan over a low to medium heat. Ensure that the sugar is fully dissolved. Allow to cool. The syrup can be stored in a sealed jar or bottle in the fridge for 6 months.

DILL SYRUP

makes about 350ml

175g caster sugar
175ml water
10g dill

Dissolve the sugar in the water in a pan over a low to medium heat. Bring to the boil, then remove from the heat and add the dill. Clingfilm the top of the pan and leave the syrup to cool and become infused with the dill flavour for 3 hours. Strain through a fine sieve, pressing down on the dill in the sieve to ensure that all the flavour passes through. The syrup can be stored in a sealed jar or bottle in the fridge for 2 weeks.

SORREL SYRUP

makes about 200ml

200g sorrel
50ml Sugar Syrup
50ml fresh lemon juice
100ml cloudy apple juice

Blanch the sorrel in a pan of boiling water for 10 seconds. Drain and refresh in iced water. Put into a blender or food processor with all the other ingredients and blend together until as smooth as possible. Strain through a fine sieve, then pass through muslin. Store in a sealed jar or bottle in the fridge for a couple of days.

APPLE PURÉE

makes about 800g–1kg

10 Chantecler apples, or other sweet apples, quartered and cored
200ml Sugar Syrup

Cook the apple quarters on a barbecue, or hot ridged grill pan, until slightly charred for flavour. Tip them into a pan and add the sugar syrup. Simmer gently until the apples break down. Purée in a blender or food processor, then pass through a fine sieve. The purée can be stored in an airtight container in the fridge for up to 2 days, or frozen.

THYME SYRUP

makes about 500ml

250g caster sugar
250ml water
a bunch of thyme (about 15g)

Dissolve the sugar in the water in a pan over a low to medium heat. Bring to the boil, then remove from the heat and immediately add the thyme. Allow to cool to room temperature. Strain the syrup through a fine sieve, pressing on the thyme in the sieve to ensure that all the flavour passes through. The syrup can be stored in a sealed jar or bottle in the fridge for up to 2 weeks.
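A note on scaling, since these base syrups come up constantly in the cocktails that follow: they are all built on equal weights of sugar and water (250g to 250ml, or 175g to 175ml for the dill), so they can be halved or doubled freely, and the flavoured versions simply infuse an aromatic into that same one-to-one base.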
RHUBARB PURÉE

makes about 350g

300g rhubarb, cut into uniform-sized pieces
50g caster sugar
20g Ultratex
Maldon sea salt and freshly ground black pepper

Put the rhubarb and sugar into a pan, cover and cook over a gentle heat until the rhubarb is soft. Pour into a blender or food processor and blend to a smooth purée with the Ultratex. Season with a pinch of salt and black pepper to taste. Store the purée in an airtight container in the fridge for up to 2 days or freeze.

ELDERFLOWER CORDIAL

makes about 5 litres

300g fresh elderflowers
2 lemons
3.6 litres water
2kg caster sugar
1 teaspoon citric acid

Remove the elderflowers from the stalks, picking off all the leaves. Rinse the flowers gently. Peel the lemons and remove the pips. Put the water, sugar and citric acid in a large pot and bring to the boil. Once the sugar has dissolved, add the lemon flesh and elderflowers. Remove from the heat and leave to macerate for 2 hours. Strain the liquid, then pour into sterilised bottles and seal. Store in a cool, dark place for up to a year. Once opened, keep in the fridge and use within 3 months.

DEAN'S GREEN TEA KOMBUCHA

Dean's obsession with kombucha is quite infectious. As I walk round the restaurants I can see terrifying-looking jars of scoby with people's names on them, because many of the team want to have their own concoction on the go. One of our guys was suffering quite badly with some stomach pains that wouldn't go away. Doctors and antibiotics had no impact. He started his own kombucha and the problem disappeared. They are currently looking at launching their own brand together. Bravo and best of luck to them!

makes about 3 litres

3.5 litres filtered water
12g unbleached green teabags
300g caster sugar, plus extra to add at the end
1 kombucha scoby

Bring the water to the boil and boil for 10 minutes. Strain 3 litres of the water into a sterilised large, heatproof glass jar. Cool until the temperature reaches 68°C, then add the teabags. Leave to infuse for 40 minutes. Add the sugar and stir to mix well, then strain into a sterilised large, wide-mouthed glass jar. Add the kombucha scoby. Place a cloth over the top of the vessel and leave at room temperature for 3–5 days until the liquid reaches an acidity of pH 3.8 on a pH meter. Strain out the scoby (it can be kept and used again). Measure the liquid and add ½ tablespoon of caster sugar per 1 litre. Mix well. Pour into bottles that can hold pressure, such as beer bottles, leaving about a 4cm gap at the top. Seal the bottles and leave for at least 3 days at room temperature (15–20°C). Then store in the fridge.

FAT-WASHED WHISKEY

makes 750ml

250g unsalted butter, cut into small cubes
750ml Irish whiskey

Make a brown butter by melting and heating the butter cubes in a pan over a high heat until the butter starts to foam and brown and gives off a nutty aroma. Once this occurs, remove from the heat immediately and cool quickly by setting the base of the pan in cold water, to stop the butter from burning. While the butter is still warm, add the whiskey and stir. Clingfilm the top of the pan and allow to cool and set for 3 hours, then leave in the freezer overnight. The next day, lift off the layer of butter from the top and strain the whiskey through a fine sieve into a glass jar. Store the whiskey in the fridge. The butter can be kept (in the fridge) and used in the parfait for Old-Fashioned Ice Cream Sandwiches.
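To save the mental arithmetic at the kombucha bottling stage: at ½ tablespoon of sugar per litre, the roughly 3 litres this recipe yields takes about 1½ tablespoons in total. That second dose of sugar is what carbonates the sealed bottles – the remaining yeasts feed on it and produce gas – which is why you need pressure-rated bottles and the 4cm headspace.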
ROSEMARY CIDER BRANDY

makes 750ml

4 sprigs of rosemary
750ml apple cider brandy or Calvados

Add the rosemary sprigs to the brandy in a large jar, seal and leave to infuse for 2 days in a cool, dark place. Strain through a fine sieve, pressing down on the rosemary in the sieve to ensure all the flavour passes through. Store the brandy in a sealed jar or bottle at room temperature.

PEA GIN

makes 750ml

750ml gin
2 handfuls of fresh peas (in their pods)

Pour the gin over the peas in a large jar, seal and leave to infuse for 4 days in a cool, dark place. Strain through a fine sieve, pressing the peas in the sieve to ensure all the flavour passes through. Store the gin in a sealed jar or bottle in the fridge.

ROSEMARY GIN

makes 750ml

4 sprigs of rosemary
750ml gin

Add the rosemary sprigs to the gin in a large jar, seal and leave to infuse for 2 days in a cool, dark place. Strain through a fine sieve, pressing down on the rosemary in the sieve to ensure all the flavour passes through. Store the gin in a sealed jar or bottle at room temperature.

BLACKBERRY BRANDY

makes about 1 litre

400g blackberries
750ml brandy
150g caster sugar

Place the blackberries, brandy and sugar in a large sterilised jar. Seal and store in a cool, dark place for at least 2 months before use. During the first 2 weeks, give the jar a shake every 1–2 days. After this, give the jar a shake once a week.

BEETROOT GIN

makes 750ml

4 raw beetroots, peeled and diced
750ml gin

Add the diced beetroot to the gin in a large sterilised jar, seal and leave to infuse in a cool, dark place for 3–4 days. Strain through a fine sieve. Store the gin in a sealed jar or bottle in a cool, dark place.

RHUBARB LIQUEUR

makes about 900ml

300g rhubarb, chopped
200g caster sugar
1 vanilla pod
zest of ½ orange
750ml vodka

Put the rhubarb and sugar into a jar, seal and leave for 24 hours. Add the whole vanilla pod and orange zest, then pour in the vodka to cover everything. Seal the jar again and shake well. Leave in a cool, dark place for about 2 months before use; during the first few days, give the jar a shake each day to encourage the sugar to dissolve. After 2 months, strain the mixture and bottle the liqueur.

CHEF'S COCKTAILS

Mixologists and cooks think in the same way: it's all about the perfect balance of flavours. At The Dairy, the bar runs into the kitchen, breaking down the barrier between the two. The bar is always checking out what's on the menu and questioning what's in season. Chefs will interact and consult, and are rewarded with taste tests. Cocktails at The Dairy are a complete collaboration between the bar and the kitchen, working together on flavours and ingredients. A perfect example of this is the Kerry G Old-Fashioned and our Ice Cream Sandwiches. In the process of making the cocktail, we fat-wash the whiskey with butter, then we use the butter to make the parfait for the dessert.

DILL OR DIE

serves 1

1 thumb-sized piece of cucumber, diced
4 sprigs of dill
50ml Hendrick's gin
25ml fresh lemon juice
35ml Dill Syrup

Muddle the cucumber and 3 sprigs of dill together in a mixing glass, pressing and crushing lightly. Add the gin, lemon juice and syrup with some ice and shake. Fine-strain the mixture into a martini glass. Garnish with the remaining sprig of dill.

CIDER WITH ROSIE

serves 1

2 sprigs of rosemary
25ml Rosemary Cider Brandy
25ml Chase Marmalade vodka
15ml fresh lemon juice
25ml Sugar Syrup

Muddle one of the rosemary sprigs in a mixing glass, pressing lightly to release the aromatic oil from the herb.
Add the brandy, vodka, lemon juice and sugar syrup. Shake with ice, then fine-strain into a martini glass. Garnish with the remaining sprig of rosemary.

DAIRY-QUIRI

serves 1

35ml dark rum
15ml falernum
25ml fresh lime juice
40ml Apple Purée
a wedge of lime, to garnish

Shake all the ingredients together in a mixing glass with ice. Fine-strain into a martini glass. Garnish with a wedge of lime.

PEA AND MINT SOUR

serves 1

1 egg white
25ml fresh lemon juice
25ml Sugar Syrup
6 mint leaves, chopped
50ml Pea Gin
Salt Solution, to finish

Dry-shake all the ingredients together in a mixing glass. Add some ice and shake again. Fine-strain into a martini glass. Atomise with a spray of salt solution.

PANIC! AT THE PISCO

serves 1

15ml La Diablada pisco
35ml Belsazar white vermouth
15ml fresh lemon juice
25ml Sugar Syrup
25ml Rhubarb Purée
a strip of orange peel (pith removed), to garnish

Shake all the ingredients together with ice in a mixing glass. Fine-strain into a martini glass. Gently twist the orange peel over the glass to release the essential oils, then drop it into the drink.

SORREL BELLINI

serves 1

20ml Sorrel Syrup
20ml gin
5ml Elderflower Cordial
chilled Champagne or prosecco

Shake together the sorrel syrup, gin and elderflower cordial in a mixing glass. Fine-strain into a champagne flute. Top up with Champagne or prosecco.

APPLE AND FENNEL HENDRICK'S

serves 4

3 small Granny Smith apples, quartered
1 bulb of fennel (reserve the fennel fronds to garnish)
½ cucumber
juice of 1 lime
140ml Hendrick's gin (35ml per person)

Using an electric juicer/press, juice the apples, fennel and cucumber. Add the fresh lime juice straight away to keep the juice a bright, fresh green colour. Divide chunky ice cubes and gin among four glasses, add the juice and stir. Garnish with the fennel fronds.

BRING THAT BEET BACK

serves 1

50ml Beetroot Gin
20ml Blackberry Syrup
3 dashes of cocoa/chocolate bitters
cacao nibs, to garnish

Stir the gin, syrup and bitters together in a mixing glass with ice. Strain over fresh ice in a small rocks glass. Garnish with cacao nibs.

THYME FOR ANOTHER

serves 1

100ml tonic water
50ml Botanist gin
25ml Thyme Syrup
25ml cloudy apple juice

Simmer the tonic water until reduced by half; cool. Stir all the ingredients together in a glass with ice. Strain into a chilled martini glass and enjoy!

KERRY G OLD-FASHIONED

serves 1

50ml Fat-Washed Whiskey
20ml Brown Sugar Syrup
3 dashes of Angostura bitters
a strip of orange peel (pith removed), to garnish
Salt Solution, to finish

Stir the whiskey, syrup and bitters in a mixing glass with ice, then strain over fresh ice in a small rocks glass. Gently twist the orange peel over the glass to release the essential oils, then drop it into the drink. Atomise with a spray of salt solution.

ROSIE AND GIN

serves 1

2 sprigs of rosemary
35ml Rosemary Gin
10ml Campari
5ml gin
35ml grapefruit juice
10ml fresh lemon juice
15ml Sugar Syrup

Muddle one of the rosemary sprigs in a mixing glass, pressing lightly to release the aromatic oil from the herb. Add the rosemary gin, Campari, gin, juices, syrup and some ice and shake together. Fine-strain the mixture into a martini glass. Garnish with the remaining sprig of rosemary.
RAMBLE IN THE BRAMBLE

serves 1

25ml Blackberry Brandy
25ml apple brandy (Somerset Pomona) or Calvados
1 egg white
25ml fresh lemon juice
a wedge of lemon, to garnish
a blackberry from the brandy, to garnish

Dry-shake all the ingredients together in a mixing glass. Then add some ice and shake again. Strain on to fresh ice in a rocks glass. Garnish with a lemon wedge and one of the blackberries from the brandy.

VODKA AND COFFEE AFFOGATO

serves 7–8

VODKA AND MILK
100ml milk
100ml vodka
100g caster sugar

COFFEE GRANITA
300ml espresso coffee

Mix together the milk, vodka and sugar. Cover and leave at room temperature for 4 days. The milk will curdle slightly. Pass through a fine sieve and store in the freezer. For the granita, freeze the coffee until solid. Break it up with a fork to create a granita texture. To serve, put a spoonful of the vodka and milk mixture in each glass and top with a spoonful of coffee granita (approximately 40g of each mixture). Serve each glass with a spoon on the side.

THE DAIRY AMERICANO

serves 1

50ml Vergano Mauro Americano
25ml white vermouth
soda (to top up)
a strip of orange peel (pith removed), to garnish

Mix together the Vergano Americano and white vermouth. Pour over ice in a rocks glass. Top up with soda and stir gently. Gently twist the orange peel over the glass to release the essential oils, then drop into the drink.

HOME BREW BEER

Big shout out here to Will and Wesley, a classic example of teamwork and passion. These guys actually come in on their days off to experiment and develop beer recipes together. I'm sure they enjoy the tasting too! Someone's gotta do it!

When it comes to making your own beer, the possibilities are endless. There are many sources for research, and it is always a good idea to do some reading before investing in equipment. One of the key points to remember is to keep all equipment and areas sterilised throughout the process. To get started with beer-making, the following equipment is required: a very large pan, a temperature probe, a fermentation bucket with an airlock, a hydrometer, a bottling wand, a beer capper, bottles and caps. The recipes here are for large quantities but they can, of course, be scaled down to your requirements.

GINGER BEER

makes about 4 litres

GINGER BUG
500ml water
175g caster sugar
175g root ginger, finely grated

Pour the water into a suitable-sized, sterilised container. Add 25g of the caster sugar and 25g of the ginger. Cover the top of the container with muslin. Leave at room temperature for 7 days, feeding it each subsequent day with the same quantities of sugar and ginger.

GINGER BEER
5 litres water
1kg caster sugar, plus some extra to be added at the end
250g root ginger, finely chopped
juice of 2 limes
juice of ½ lemon
a Ginger Bug (see above)

Bring the water and sugar to the boil in a large pan, stirring to dissolve the sugar. Add the ginger and simmer for 10 minutes. Remove from the heat and allow to cool to about 40°C, then stir in the lime and lemon juices. Cool to room temperature before adding the ginger bug. Strain through a fine sieve into a fermentation bucket and seal. Leave at room temperature for about 3 weeks. During this time, test regularly with a hydrometer – the sugar level should be dropping. Once it levels off and stops dropping, the liquid is ready for the next step. Weigh the liquid and calculate 10% of this – this is the amount of sugar you will need to add (for example, 4kg of liquid will need 400g of sugar). Decant 700ml of the liquid into a pan and add the sugar.
Bring to the boil to dissolve the sugar, then cool before pouring back into the rest of the liquid. Using a bottling wand, decant into sterilised bottles, cap and seal. Allow to condition at room temperature for one week, then store in the fridge until required.

PUMPKIN BEER

makes about 23 litres

3 pumpkins (we used Delicia)
500g Maris Otter malt barley
25 litres water
3kg dry malt extract
40g Equinox/Ekuanot hops
1 sachet English ale yeast (7g)
1 cinnamon stick
2 cloves
a thumb-sized piece of root ginger (peeled)
granulated sugar

Preheat the oven to 160°C fan/180°C/Gas Mark 4. Peel two of the pumpkins and discard the seeds and fibres. Dice the flesh and spread on a large baking tray. Roast for 20 minutes or until soft. Allow to cool. Cut the remaining pumpkin in half and remove the seeds and fibres, then juice it (with the skin on) through an electric juicer. Set the juice aside.

Wrap the roasted pumpkin and the Maris Otter malt barley in a piece of muslin and tie the top. Add this parcel to 6 litres of the water in a large pan and allow to stew over a low heat, keeping the water at 72°C, for 30 minutes. Remove the parcel and give it a good squeeze over the pan, then discard. Add the remaining water to the stewing liquor and bring to the boil. Whisk in the malt extract while the liquid is boiling. Add 15g of the Equinox hops, then leave to boil vigorously for 1 hour. Cool to 80°C, then add the remaining Equinox hops and the raw pumpkin juice. Cool rapidly to 20°C in sterilised trays set over ice. Pour into a fermentation bucket, whisk to add in air, whisk in the yeast and seal the bucket. Leave to ferment at room temperature.

On day 4, remove a little of the liquid and pour into a sterilised pan. Add the cinnamon, cloves and ginger, and bring to the boil. Allow to cool before pouring the whole lot back into the fermentation bucket. On day 7, check the sugar levels with a hydrometer. Do the same on day 8. If the level is the same then the liquid is ready. If not, repeat the checking each day until the level stops dropping and is the same 2 days in a row.

Once ready, measure the liquid and calculate 4.5g of granulated sugar per litre (so about 104g for a full 23-litre batch). Decant a small amount (about a litre) into a pan and add the sugar to this. Bring to the boil just to dissolve the sugar, then remove from the heat and allow to cool before adding back to the rest of the liquid, stirring to mix. Using a bottling wand, decant into sterilised bottles, cap and seal. Allow to condition at room temperature for 3 weeks and then store in the fridge until required.

SNACKS

FERMENTED POTATO FLATBREAD
NDUJA AND CULTURED CREAM

Potato flatbreads feature in many cuisines. In my own Irish culture, there are a lot of recipes but none call for the potato to be fermented. In Ireland fermented potatoes were (and still are) used to distil a lethal drink called poitín, or poteen, which could range anywhere from 40 per cent to 90 per cent ABV (alcohol by volume)! In fact, the Irish word for a hangover is póit. We find that adding fermented potato gives this light flatbread an incredible sour flavour that is very welcome. We serve it with cultured cream and our own nduja, but it is a really versatile bread that can be served with just about anything, from hummus to a fried egg at breakfast.
serves 12

FERMENTED POTATO FLATBREAD
15g fresh yeast
225ml tepid water
125ml buttermilk
430g strong white flour
15g rye flour
a large pinch of fine table salt
250g Potato Ferment, lightly crushed with the back of a fork
a pinch of Maldon sea salt

Mix the yeast with a small amount of the water. Add this to the rest of the water and the buttermilk in the bowl of a stand mixer fitted with a paddle attachment and mix until smooth. Add the flours and table salt and mix/knead for about 5 minutes to make a smooth dough. (Alternatively, you can mix and knead the dough by hand.) Cover the dough with a towel or clingfilm and leave to rise in an ambient part of the kitchen (20–24°C) for 40 minutes. Fold the dough over on itself to knock out the air, then leave to rise for 40 minutes. Fold again to knock out the air, then leave to rise for another 40 minutes. Repeat the process so that the dough has four rises and four folds in total. Once the dough has had its final fold, chill it until it reaches 8°C (use a temperature probe to check).

Remove the dough from the fridge and roll it out on a lightly floured surface into a large rectangle about 3cm thick. Place on a large baking tray. Scatter the crushed potato ferment and Maldon salt over the top. Leave to prove in an ambient part of the kitchen until the dough rectangle has almost doubled in thickness.

Preheat the oven to 250°C fan/its highest setting. When the bread is ready to be baked, place a baking tray filled with water in the bottom of the oven, then slide the bread, on its baking tray, on to a higher shelf. Bake for 10 minutes. Remove the tray of water, lower the oven temperature to 180°C fan/200°C/Gas Mark 6 and bake the bread for a further 6 minutes or until it sounds hollow when tapped on the bottom. Cool on a wire rack.

ASSEMBLY
Cultured Cream
Nduja
picked marjoram leaves

Warm the potato flatbread in the oven or on a barbecue. Tear into portions. Serve with a ramekin of cultured cream and a ramekin of nduja topped with marjoram on the side.

BREAD COURSE

On a trip to Stockholm I visited Restaurant Frantzén. I spent a couple of days in the kitchen there and then had dinner. This consisted of twenty-plus courses, all of which were incredible, but the one thing that stood out and made me think was the bread. When you sit down at your spot for dinner (mine was at the counter peering into the kitchen), there is a little box in your place setting. Inside I found some bread dough that was proving. It was explained to me that I would have the bread later when it was ready, but they wanted to stress the process and its importance. About 45 minutes and eight or so courses later, they produced the bread fresh from the oven with a number of accompaniments. I took time to relax, break bread and think.

For many years in zillions of restaurants bread has been a gap-filler once you sit down – usually stale, unseasoned, bought from a mass producer. You get my point. Offering bread is one of the oldest and most sacred traditions in the world that has been bastardised for too long by too many. When we opened The Dairy it was our mission to create the best bread serving we could. We had a terrible gas lower-deck oven that enabled us to bake only one tray at a time, which caused fights over oven space. But we made it happen. Our bread is one of the things that keeps our customers coming back. We decided that it should not be offered the moment guests sit down but a few courses later, served in baskets made by my mum.
Our guests have to tear the warm crusty bread with their hands, which makes them stop for a minute to relax and enjoy that sacred and intimate tradition of breaking bread with each other.

TOMATO AND BUCKWHEAT PANCAKES

This is our version of a tomato tart, gluten-free and very light. We use second-grade beef tomatoes (which basically means overripe) to make the chutney. It may seem odd to barbecue the tomatoes for it, but this does add a real depth to the flavour profile. The chutney can be made in advance and you'll have quite a large amount. The recipe could be halved but it is worth making a big batch as it keeps well in the fridge and can be used to jazz up anything from oily fish like mackerel, smoked eel and sardines to roast lamb, chicken or quail. If you don't have access to a barbecue, you can forgo the barbecuing step and instead smoke the chutney at the end. Sometimes at the restaurant, having barbecued the tomatoes, we taste the chutney and are not happy with the intensity of the smokiness, so we smoke it as well.

serves 4–6

BBQ TOMATO CHUTNEY
80ml Chardonnay vinegar
10 beefsteak tomatoes
1 shallot, finely diced
50g chives, finely chopped
Onion Treacle
capers, drained
Maldon sea salt and freshly ground black pepper

Boil the vinegar in a small pan to reduce to about 20ml (4 teaspoons). Fire up a barbecue (or alternatively, smoke the chutney at the end; see below). When there is plenty of smoke, place the tomatoes on the grid over a low heat and cook until they start to break down and the skins start to blister. Transfer the tomatoes to a pan set over a low heat and cook until the liquid from the tomatoes has evaporated. Remove from the heat and stir in the shallot and chives. Season to taste with the vinegar, onion treacle, capers, salt and pepper.

If you want to smoke the chutney rather than barbecuing the tomatoes, spread the chutney in a thin layer on a heatproof tray and place this into a steel steaming tray. Cover with oven-safe clingfilm, sealing in the top and allowing gaps for the smoke to come through. Place some wood chips in a baking tray and heat over a medium heat until the chips start to smoke. Place the steaming tray over the smoking chips and smoke the chutney for 10 minutes.

BUCKWHEAT PANCAKES
95g buckwheat flour
1½ teaspoons baking powder
1 teaspoon bicarbonate of soda
a large pinch of fine table salt
190ml buttermilk
1 egg, separated
15g unsalted butter, melted, plus butter for frying
buckwheat groats, for sprinkling

Mix together the flour, baking powder, bicarbonate of soda and salt. In a separate bowl, mix together the buttermilk, egg yolk and melted butter. Whisk the egg white to stiff peaks. Whisk the buttermilk mixture into the flour mixture until smooth, then fold in the egg white. Melt a small knob of butter in a large frying pan set over a medium heat. Ladle about half of the batter into the pan to make a pancake about 1cm thick. When the pancake is cooked about three-quarters of the way up, sprinkle the top with some buckwheat groats. Allow to cook a little more, then turn the pancake over to finish cooking on the other side. Remove from the pan. Repeat with the remaining batter. Allow the pancakes to cool, then use a round cutter of your chosen size to cut out small discs.
ASSEMBLY
a selection of small seasonal tomatoes (we use Datterini when available), cut in half
a drizzle of olive oil
bronze fennel fronds
tarragon leaves
chervil leaves
basil leaves
hard goat's cheese (we use Tymsboro), frozen and grated
grated fresh horseradish
Maldon sea salt and freshly ground black pepper

Warm the small pancakes on a baking tray in a hot oven for 1–2 minutes or until they are just heated through. Season the fresh tomatoes with a drizzle of olive oil, salt and pepper. Top each pancake with a little of the tomato chutney, followed by the fresh tomatoes, herbs and goat's cheese, and finish with a little horseradish.

TRUFFLE BARON BIGOD
FIG AND WALNUT TOAST, ROOFTOP HONEY

This is a very special 'Dairy' recipe, one of only a few that we cannot ever dare to take off the menu (we did this once and it was met with tears and anger from our guests; my wife, aka 'Boss Lady', stepped in and it was put swiftly back on the menu, never to be tampered with again). We initially used a Brie de Meaux but as we became increasingly confident about British and Irish cheese we looked for a substitute. Our friends in Neal's Yard Dairy put us on to a lovely couple who produce Baron Bigod. In my opinion it is one of the greatest cheeses on the planet, not just in Europe. You don't need to make your own bread but we thought you might like to try.

serves 4–6

TRUFFLE BARON BIGOD
200g Baron Bigod cheese
80g mascarpone
10g fresh black truffle, grated
2–3 drops of truffle oil
Maldon sea salt and freshly ground black pepper

* This is best prepared 24 hours in advance to allow the cheese to take on the truffle flavour.

Cut the Baron Bigod in half horizontally. Season the mascarpone with the fresh truffle, truffle oil, salt and pepper. Spread this mixture across the bottom cut side of the Baron Bigod, then put the top half on to make a sandwich. Wrap in clingfilm and store in the fridge for at least 24 hours.

FIG AND WALNUT BREAD

makes 2 small or 1 large loaf

75g walnut halves
300g Campaillou bread flour or other strong white flour, plus extra for dusting
75g strong wholemeal flour
50g chestnut flour
10g fresh yeast
10g salt
30g honey
30g full-fat plain yoghurt
120ml semi-skimmed milk
50ml apple juice
75g dried figs, soaked in green tea for 30 minutes, then drained
75g golden sultanas

Lightly toast the walnuts in a dry pan, then crush them slightly. Put the flours, yeast and salt into a large stand mixer fitted with the paddle attachment and mix for 3–4 minutes. Add the honey, yoghurt, milk and apple juice and continue mixing/kneading on a low speed for 10–15 minutes, scraping the sides of the bowl if the mixture catches. The resulting dough should be sticky but retain its shape and have a slight bounce to the touch. Add the figs, sultanas and walnuts and mix for 3 minutes. (Alternatively, you can mix and knead the dough by hand.)

Using a dough scraper, tip the dough on to a floured surface. Divide in half if you want to make two loaves. Fold the dough and tuck under the ends to create a rectangular loaf. Dust with Campaillou flour and slash the top with a sharp knife. Place on a floured baking tray and leave in a warm place to prove until almost doubled in size.

Preheat the oven to 250°C fan/its highest setting. Bake the bread for 6 minutes. Transfer from the baking tray to the oven rack. Lower the oven temperature to 180°C fan/200°C/Gas Mark 6 and bake for a further 20 minutes or until the bread sounds hollow when tapped on the bottom. Cool on a wire rack.
ASSEMBLY
good-quality honey, for drizzling
10g fresh black truffle

Remove the cheese from the fridge about 1 hour before serving to allow it to soften. Preheat the grill. Slice the bread and toast the slices. Top each slice of toast with a slice of the cheese and melt under the grill for about 30 seconds. Drizzle with honey and grate over a little fresh truffle, then serve.

CHICKEN LIVER PARFAIT
APRICOT GEL AND TOASTED SOURDOUGH

This is one of the very first recipes I learned at Marco Pierre White's legendary restaurant, The Oak Room. During my time there the recipe had 50 per cent foie gras and a small fortune's worth of Périgord truffle grated in! If you can afford it, why not, eh? For years, we kept the foie gras in and Richie questioned the necessity of it. He claimed that if we made sure the chicken livers were the absolute best quality that could be found, the foie gras would not be missed. He was absolutely right. Not only was it just as delicious, if not more so, it was hugely cheaper and considerably less controversial. It meant that we then became a foie gras-free restaurant. This may not seem like a big deal but I spent ten years of my training in restaurants that spent thousands of pounds a month on the stuff! I was lucky enough to be able to work with the world's most expensive ingredients on a daily basis, but what I have now come to realise is that cost is not always an assurance of flavour. Here the humble chicken liver beats the big, fat, overfed goose liver. You'll see I still like the finer things in life though and have grated a heap of black truffle over the top. I couldn't help myself...

serves 10–12

CHICKEN LIVER PARFAIT
650g fresh chicken livers
pink curing salt
fine table salt
450g unsalted butter, softened
5 eggs (at room temperature), beaten

MARINADE
5 medium shallots, sliced
250ml Madeira
250ml port
100ml red wine
50ml Cognac
2–3 bay leaves

First make the marinade. Put all the ingredients in a pan and cook on a low heat until all the liquid has evaporated. Allow to cool, then keep in the fridge until required.

Set a large empty bowl on a set of kitchen scales and turn the scales back to zero. Combine the marinade and livers in the bowl. Based on the weight of the contents of the bowl, calculate 0.5% pink salt and 0.5% salt (about 3.5g of each). Add these and mix in. Tip the mixture into a freezer bag, seal tightly and refrigerate for 12 hours.

Place the butter in another freezer bag and seal. Warm both bags at the same time by placing them under hand-hot running water at about 40°C (make sure the bags are well sealed so that no water gets in). Once the butter has melted, the chicken livers should be at the correct temperature. Decant the chicken livers and marinade into a blender or food processor and blend until smooth. Add half the eggs and blend again. While blending, add the melted butter at a steady pace. Finally, blend in the remaining eggs. Pass the mixture through a fine sieve.

Preheat the oven to 80°C fan/100°C/Gas Mark low. Transfer the mixture to terrine moulds (or loaf tins). Cover each mould with a folded piece of foil and top with a lid (cover loaf tins with extra foil). Set the moulds in a bain marie, or roasting tray of hot water, and place in the oven to cook for about 35 minutes – the core temperature of the parfait needs to reach 68°C, so use a temperature probe to check this before removing the moulds from the oven. Allow to cool, then keep in the fridge until required.
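Spelling out the parfait seasoning maths: the 650g of livers plus the reduced marinade comes to roughly 700g in the bowl (an assumed figure – how far the marinade reduces will vary), and 0.5% of 700g is 3.5g, which is where 'about 3.5g of each' comes from. If your bowl weighs noticeably more or less, recalculate from the actual weight rather than defaulting to 3.5g.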
APRICOT GEL
300g fresh apricots, stones removed and roughly diced
35g caster sugar
20g Ultratex
sherry vinegar
freshly ground black pepper

Combine the apricots and sugar in a pan, cover and cook over a gentle heat until the fruit is soft. Transfer to a blender or food processor and blend to a smooth purée with the Ultratex. Season to taste with sherry vinegar and black pepper.

ASSEMBLY
fresh black truffle (optional)
fresh apricots, stones removed and sliced
2 slices of sourdough bread per person, toasted
Maldon sea salt and freshly ground black pepper

Whisk the parfait in a stand mixer fitted with a balloon whisk attachment, or using a hand-held mixer, until light and aerated. Taste and season with salt and pepper as required. Spread a little of the apricot gel in each bowl and top with a spoonful of the whipped chicken liver parfait. Grate over fresh black truffle, if using. Serve sliced apricots and toasted bread on the side.

BRAWN

To give utmost respect to Mary Holbrook's beautiful pigs, every bit of them must be put to good use. This is our version of a pig recipe that most cooks like to put their own stamp on. It's best to follow our technique but you can be creative with the spices, herbs and seasonings. I would always recommend serving brawn with something sharp and fresh to cut through the fat. That's the only rule.

serves 10–15 (depending on size of the pig's head)

1 pig's head, split down the centre
7% brine (see here)
1 litre white wine
1 white onion, cut in half
1 bunch of celery, cut in half (across)
1 carrot, cut in half
4 bay leaves
1 clove
1 bulb of garlic, cut in half (horizontally)
6 allspice berries
a handful of white peppercorns
2 handfuls of flat-leaf parsley leaves, chopped
50ml fresh lemon juice
wholegrain mustard
Dijon mustard
Maldon sea salt

Remove the brain from the pig's head. Remove the ears from the head. Clean the head and the ears well using an abrasive sponge. Burn the hairs off both using a kitchen blowtorch. Brine them in a 7% brine in the fridge, or a cool place, for 8 hours. Drain the head and ears from the brine.

Preheat the oven to 120°C fan/140°C/Gas Mark 1. Pour the wine into an ovenproof pot or flameproof casserole that the head and ears will fit into snugly. Bring to the boil and boil for 2 minutes to evaporate some of the alcohol. Slightly char the onion, celery and carrot on a barbecue, or on a hot ridged grill pan, for flavour, then add to the pot. Add the bay leaves, clove, garlic, allspice berries and peppercorns. Finally, add the pig's head and ears and top up with water to cover. Put a lid on the pot and place in the oven to cook for 5–6 hours or until the meat is falling off the bone.

Remove the head and ears from the stock. Strain the stock into another pan and boil to reduce it by two-thirds so that it is highly seasoned and strong in gelatine. Pick the meat off the head. Roughly chop half of the skin and fat from the head and mix with the meat. Slice the ears into thin strips and mix in. Season the meat mixture with the parsley, lemon juice and mustards to taste. Mix some of the reduced stock through the mixture so that it has a stew-like consistency. Taste and season with salt if required. Pour the mixture into a terrine mould lined with clingfilm. Cover the top with clingfilm and weigh down with a light weight, just to press the brawn gently into shape. Leave to set in the fridge overnight.
About 20 minutes before serving, turn the brawn out of the terrine mould and remove the clingfilm. Slice the brawn into 2–3cm pieces. Leave to come up to room temperature.

ASSEMBLY
breakfast radishes
radish tops
Pickled Radishes
capers, drained
miner's lettuce
Dijon mustard
toasted sourdough or fresh warm bread

Serve a slice of brawn on each plate with a side of radishes, radish tops, pickled radishes, capers and miner's lettuce. Serve a spoonful of Dijon mustard in a small ramekin at the side. This dish is delicious with some slices of toasted sourdough or fresh warm bread.

SPICED POLLOCK GOUGÈRES

This is a fine example of taking something that in most cases is thrown away and turning it into something outrageously elegant. Gougères can be a vessel to use up all sorts of things – we make a beautiful mousse from cheese trim and rinds to fill gougères, or to fill mushrooms. Here a filling based on cooked fish is piped into gougères, which are topped with a rich mornay sauce. Any type of cooked white fish could be substituted for the pollock. Served warm, the gougères are a welcoming mouthful to start off an evening. In the restaurant this is often the first snack. Our guests stop and take a bite, then settle in, feeling comfortably at home.

makes about 30

GOUGÈRES
100g unsalted butter
130ml whole milk
120ml water
1 teaspoon fine table salt
a pinch of espelette pepper
150g plain flour
110g Cheddar (preferably Isle of Mull), grated
4 eggs
egg yolk, to glaze

Preheat the oven to 180°C fan/200°C/Gas Mark 6. In a pan, bring the butter, milk, water, salt and espelette pepper to the boil. Stir in the flour and cook over a medium heat for about 5 minutes, stirring. Transfer to a stand mixer fitted with a paddle attachment, or a bowl if using a hand-held electric mixer, and add the grated cheese. Mix in the cheese until evenly incorporated. Allow the mixture to cool slightly, then mix in the eggs one at a time to make a smooth choux paste.

Spoon the choux paste into a disposable piping bag. Pipe into small rounds on a baking tray lined with greaseproof paper, allowing about a heaped tablespoonful of mixture per gougère. Leave space between the rounds to allow for spreading. Dip your finger in beaten egg yolk and gently smooth out the peaks and round off the tops. Bake for 30 minutes. Lower the oven temperature to 160°C fan/180°C/Gas Mark 4 and bake for a further 5 minutes. Cool on a wire rack.

POLLOCK FILLING
270g cooked skinless pollock fillet
35g White Wine Shallot Gastrique
50g Mayonnaise, seasoned with a pinch of smoked paprika
½ bunch of chives, chopped
Tabasco sauce
fresh lemon juice
Maldon sea salt

Mix together the pollock, shallot gastrique, mayonnaise and chives. Season with a few drops of Tabasco, lemon juice and salt to taste. Spoon the mixture into a disposable piping bag and keep in the fridge until required.

MORNAY SAUCE
20g unsalted butter
20g plain flour
160ml whole milk
40g strong Cheddar, grated
2 teaspoons Dijon mustard
25g egg yolks
40ml double cream, whipped until slightly thick

Melt the butter in a pan and add the flour. Cook out the flour for 1–2 minutes, stirring. Whisk in the milk. Continue to whisk over the heat until the sauce has thickened. Remove from the heat and mix in the cheese and mustard. Allow the mixture to cool slightly, then stir in the egg yolks and cream. Decant into a disposable piping bag.

ASSEMBLY
espelette pepper
chopped chives

Preheat the oven to 180°C fan/200°C/Gas Mark 6 and heat the grill to high.
Poke a small hole in the bottom of each gougère and pipe in the pollock filling. Arrange the gougères on a baking tray and warm in the oven for a couple of minutes. Pipe the mornay sauce over the top of each gougère, then flash under the hot grill until golden. Serve hot, dusted with espelette pepper and garnished with chopped chives.

Spiced Pollock Gougères

SALT COD BRANDADE
SQUID INK AND SORREL
Every chef I know has their own interpretation of brandade. I first learned Raymond Blanc's version while working in his kitchen on the fish section many moons ago. Brandade is best made on the day and kept warm, never hitting the fridge – we had to make it fresh for every service and I admit that I had a tendency to make a little too much every time. I loved to scrape the pot clean with some torn bread (don't tell RB). The quality of the olive oil is key. There are so many horrific varieties on the market, you will feel quite rightly devastated if your dish is ruined by cheap, poor-quality oil. We use an Arbequina oil that is slightly sweeter than most and not as peppery. My advice is to search for your favourite and stock up.

serves 6

COD BRANDADE
200g rock salt
500g good-quality waxy potatoes
300g skinless cod fillet
250ml whole milk
250ml water
3 garlic cloves (peeled)
¼ white onion
1 bay leaf
4 black peppercorns
70ml extra virgin olive oil
fresh lemon juice
Maldon sea salt and freshly ground black pepper

Preheat the oven to 190°C fan/210°C/Gas Mark 6–7. Sprinkle a little of the rock salt on a baking tray and add the potatoes, spreading them out on the salt. Bake for about 45 minutes or until cooked through.

While the potatoes are cooking, cover the cod in the remaining rock salt and marinate for 8 minutes, then rinse in cold water and pat dry.

Combine the milk, water, garlic cloves, onion, bay leaf, peppercorns and a pinch of salt in a pot. Bring to a simmer and gently poach the garlic until it is soft. Strain the liquid into another pot. Keep the garlic; crush it and set aside. Add the cod to the strained hot liquid and poach gently for 12 minutes. Remove from the heat and leave to cool in the liquid for 10 minutes.

While the potatoes are still hot, scoop out their flesh and pass through a drum sieve or fine mesh chinois into a mixing bowl. Drain the cod (reserve the liquid) and gently fold into the warm potato along with the crushed garlic. Stir in the olive oil and season to taste with lemon juice, salt and pepper. Loosen the brandade with some of the reserved poaching liquid.

SQUID INK DRESSING
10g squid ink
50ml extra virgin olive oil
juice of 1 lemon
a pinch of smoked paprika

Put the squid ink in a mixing bowl and whisk in the oil, lemon juice and paprika.

ASSEMBLY
a bunch of sorrel leaves
olive oil

Spoon the brandade into a round on each plate and top with sorrel leaves. Drizzle with the dressing and a little olive oil on the side.

Salt Cod Brandade, Squid Ink and Sorrel

PÂTÉ EN CROÛTE
I find cooking at home to be therapeutic and stress-free. I move at a slower pace, a glass of wine in hand. This pâté en croûte is one of the recipes I like to make, taking my time by spreading the work over a couple of days. The flavour of the pâté will only improve if you make it a day ahead. So if you're entertaining at the weekend, start to prepare this on the Wednesday by slicing the pancetta, making the pastry, cooking the onions and weighing the mix. Bake on Thursday to enjoy on Saturday, and finish any leftovers on Sunday.
serves 10–12

FILLING
625g pork belly mince
2 teaspoons white wine
1 tablespoon port
1 garlic clove, finely chopped
½ allspice berry, crushed to a powder
2 sprigs of thyme, leaves picked
15g Maldon sea salt
1 white-skinned onion, diced
25g lard or unsalted butter
75g sunflower seeds
1 teaspoon Madeira
30g flat-leaf parsley, leaves picked and chopped
1 egg
1 egg yolk
½ green apple (such as Granny Smith), cored and diced
125g pork liver, cut into 1cm dice
125g pork back fat, diced

Mix together the pork belly mince with the white wine, port, garlic, allspice, thyme leaves and salt in a large bowl. Cover tightly with clingfilm and leave in the fridge overnight.

Sweat the onion in the lard or butter until completely softened but with no colour. Fold in the sunflower seeds and Madeira. Remove from the heat and allow to cool before mixing into the pork belly mixture. Stir in the parsley, egg, egg yolk, apple, pork liver and back fat.

PASTRY
285g strong white flour
1 teaspoon baking powder
2 teaspoons fine table salt
50g duck fat
45g unsalted butter, diced
1 egg
1 teaspoon white wine vinegar
about 50ml whole milk

Put the flour, baking powder, salt, duck fat and diced butter into a stand mixer fitted with the paddle attachment (or into a food processor) and mix to a breadcrumb consistency. Mix the egg with the vinegar and add to the flour mixture. Add enough milk to make a smooth dough that is dry to the touch. (If making the pastry ahead of time, it can be wrapped in clingfilm and kept in the fridge. Bring it to room temperature for an hour before required.)

ASSEMBLY
unsalted butter, for greasing the mould
400g thinly sliced pancetta or smoked bacon rashers
1 egg, beaten, for glazing
3 sheets/leaves of silver leaf gelatine
300ml fresh apple juice
50g Onion Treacle

Grease a 1-litre terrine mould (35.5 x 11cm, 12cm deep) with butter.

Roll out the pastry on a lightly floured worktop away from you into a rectangle about 5mm thick that is large enough to line the entire mould and fold over the top as a lid. Place the terrine mould on the rolled-out pastry parallel to one short side and about three-quarters of the way down the rectangle. Cut lines in the pastry diagonally towards each corner of the mould. Now line the mould with the pastry, sealing up any gaps at the corners. You will have a long overhang of pastry on one side, which will be folded over the top (reserve the pastry trimmings).

Line the pastry case with the pancetta or bacon, laying the slices crossways so there is an overhang on each long side. Spoon the pork filling into the centre and fold the overhanging bacon over the top. Finish by folding the pastry overhang over the top and sealing well by crimping the edges.

Cut three holes in the lid down the length of the terrine. Make three small funnels out of foil and fit them into the holes. Roll some of the pastry trimmings into three thin sausage shapes and place at the base of the funnels to secure them in the pastry. Brush the entire pastry lid with beaten egg. Put into the fridge and leave to set for at least 2 hours.

Preheat the oven to 210°C fan/230°C/Gas Mark 8.

Bake the pâté en croûte for 15 minutes. Lower the oven temperature to 170°C fan/190°C/Gas Mark 5 and bake for a further 15–20 minutes or until the core temperature reaches 58°C. Allow to cool in the mould to room temperature, then place in the fridge to set overnight.

To make the jelly, soak the gelatine in cold water to soften it. Warm the apple juice and onion treacle together.
Drain the gelatine and stir into the warm liquid until completely melted. Allow to cool to about 10°C, then pour through the funnels into the pâté en croûte. Remove and discard the funnels. Place the mould back in the fridge and leave to set for 2 hours.

Remove the pâté from the mould. Slice and serve with any pickles that you have, some Dijon mustard and a nice peppery leaf salad.

Pâté En Croûte

BEEF TARTARE
SOUR ONIONS, NASTURTIUM CAPERS AND ROCK OYSTER
For this tartare, we use quite a funky 100-day-aged beef rump at the restaurant, but I have also used onglet, fillet and sirloin. When you buy your beef, ask your butcher for a nice aged piece of whatever he has. There are big punchy flavours in this dish, ticking all the boxes with acidity, heat and spice, so you need a cheesy, well-aged piece of beef to stand up for itself. We use nasturtium capers, made from flower buds that we collect from our farm, but garlic capers or regular capers will do just fine too. This dish is really a belter and an all-year-rounder.

serves 8–10

OYSTER EMULSION
100g banana shallots, sliced
200ml dry white wine
130g freshly shucked rock oysters (juice reserved)
150ml grapeseed oil
1 tablespoon crème fraîche

Put the shallots into a saucepan and pour over the white wine. Place on a medium to low heat and simmer until all the wine has evaporated. Remove from the heat and allow to cool.

Tip the shallot mixture and oysters into a blender or food processor and blend until smooth. While blending, gradually add the oil to make a mayonnaise consistency. Add some of the reserved oyster juice to loosen the mixture. Stir in the crème fraîche. Keep the emulsion in the fridge until ready to serve.

SHALLOT CRISPS
150g unsalted butter, diced
150g banana shallots, finely sliced
Maldon sea salt

Put the diced butter into a wide, flat-bottomed pan over a high heat. Stir the butter and cook until it starts to foam. Add the sliced shallots and cook, stirring, until they start to turn golden brown and the butter smells like toasted nuts. Remove the pan from the heat and drain the shallots in a sieve. Spread out the shallot crisps on a tray lined with kitchen paper and season lightly with salt. Keep in a warm, dry place.

BEEF TARTARE
250g 100-day-aged beef rump (or good-quality beef rump aged for a minimum of 28 days), trimmed and cut into 1cm dice
1 tablespoon Dijon mustard
1 tablespoon juice from Nasturtium Capers
2 teaspoons extra virgin olive oil
Maldon sea salt and freshly ground black pepper

Combine the beef, mustard and caper juice in a bowl and mix. Season to taste with salt and pepper. Finish with the olive oil.

ASSEMBLY
25–30 'petals' of Sour Onions
Nasturtium Capers
nasturtium leaves, to garnish
bronze fennel or peppery leaves, to garnish

Spoon a teaspoon of the beef tartare into each onion petal and garnish with some oyster emulsion, nasturtium capers, the shallot crisps and leaves.

Beef Tartare, Sour Onions, Nasturtium Capers and Rock Oyster

LAMB TARTARE
Everyone associates tartare with beef or tuna, and the thought of a lamb tartare would freak most people out. But anything that can be served rare or medium-rare can usually be turned into a tartare, using salt and acid as an amazing alternative to heat for cooking. Before making the tartare we render the fat from our lamb, take off the fillets and brush them in the fat to coat the exterior, then age the meat for up to three weeks.
serves 4–6

LAMB TARTARE
10g fennel seeds
100g rock salt
10g dried lemon zest (dry out in a dehydrator or a very low oven)
300g lamb fillet
1 tablespoon Dijon mustard
1 tablespoon capers, drained
1 tablespoon finely diced shallots
1 tablespoon Onion Treacle
a drizzle of olive oil
a pinch of Maldon sea salt

Toast the fennel seeds in a dry pan until they smell fragrant. Tip into a small blender or food processor, add the rock salt and lemon zest, and blend until finely ground. Rub this mixture all over the meat, then set aside for 8 minutes. Rinse off the rub and pat the meat dry. Cut into 1cm dice. Just before serving, season the tartare with the remaining ingredients.

ASSEMBLY
100g Cabbage Ferment
1 tablespoon Dijon mustard
a drizzle of olive oil
slices of sourdough or other good-quality bread
lamb fat or olive oil
Maldon sea salt
flat-leaf parsley leaves
sorrel leaves
150ml Cultured Cream
a drizzle of Garlic Oil

Put the fermented cabbage, Dijon mustard and a drizzle of olive oil in a blender or food processor and blend to a pesto-like consistency.

Toast the slices of bread under a grill, then spread with some lamb fat, or drizzle with olive oil, and sprinkle with salt.

Run your knife through the parsley and sorrel leaves to create really thin strips. Serve the lamb tartare topped with the herbs and spoon the fermented cabbage mixture on to the side of the plate. Put the cultured cream into a separate dish and drizzle with garlic oil. Serve the toast on the side and allow guests to help themselves at the table.

Lamb Tartare

ANCHOVY CRISPS
LEMON AND SORREL
This is a really clever snack. The flavours remind me of those wonderful little skewers you find in San Sebastián in the Basque country – olives, anchovies in olive oil, pickled peppers – great to have with an aperitivo. These crispy anchovies are that and more, and you only need a couple per person. The lemon gel brings a welcome kick of freshness. We serve this with a bowl of fat green olives from Sicily called Nocellara del Belice and a glass of dry sherry, The Dairy's Americano, a negroni or a spritz.

Makes 10

LEMON GEL
1 sheet/leaf of silver leaf gelatine
75g caster sugar
150ml fresh lemon juice

Soak the gelatine in cold water to soften it. Put the sugar and 50ml of the lemon juice in a pan and warm to just before boiling point to dissolve the sugar. Remove from the heat. Drain the gelatine, squeezing to remove excess water, and add to the pan, stirring until the gelatine has melted into the mixture. (If necessary, place the pan back over the heat to help melt the gelatine.) Add the remaining lemon juice. Leave to set in a covered container in the fridge until required.

BATTER
100g rice flour
40g cornflour
50g potato flour
30g tapioca flour
1½ teaspoons baking powder
10g honey
195ml sparkling water

Put all the dry ingredients into a large bowl. Using a whisk, add the honey and then slowly whisk in the sparkling water. Whisk for a good 5 minutes to work the flours into the liquid.

ASSEMBLY
vegetable oil, for deep-frying
10 best-quality canned anchovy fillets
3 sorrel leaves, torn into pieces

Blend the lemon gel in a blender or food processor to liquefy it, then decant it into a squeezy bottle.

Heat oil in a deep pan or deep-fat fryer to 180°C. Remove the anchovies from the can and pat dry. Dip them into the batter, then deep-fry until golden and crisp. Drain on kitchen paper.

Arrange the anchovies on your chosen plate, add a couple of dots of the lemon gel and garnish with the sorrel leaves.

Anchovy Crisps, Lemon and Sorrel
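A side note on the gels before moving on: this lemon gel sets 150ml of juice with 1 sheet of silver leaf gelatine, and the pea dish later in the chapter uses exactly half of each – ½ sheet to 75ml. Assuming silver-grade sheets throughout (sheet strengths vary by grade, so treat this as a sketch of the scaling rather than a rule), the ratio can be scaled like this:

```python
SHEETS_PER_ML = 1 / 150  # silver leaf sheets per ml of juice, from this recipe

def gelatine_sheets(juice_ml: float) -> float:
    """Sheets of silver leaf gelatine for a squeezable lemon gel,
    scaled linearly from the 1 sheet : 150ml ratio used above."""
    return juice_ml * SHEETS_PER_ML

print(gelatine_sheets(75))   # -> 0.5, matching the pea-dish lemon gel
print(gelatine_sheets(300))  # -> 2.0 sheets for a double batch
```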
APPLEWOOD-SMOKED EEL
GUINNESS SODA BREAD, HORSERADISH
This is one of the easiest and most satisfying bread recipes. The method is so simple yet the results are very pleasing. I bake this every Christmas and serve it with smoked salmon on the day, then the following days with jam and butter or a bowl of soup. It's a great recipe for involving the younger ones in the family as they can get their hands nice and dirty mixing the dough. Note to oneself: a pint of Guinness is not a bad partner.

serves 4–6

GUINNESS SODA BREAD
makes 1 loaf
250g plain flour
200g wholemeal flour
15g bicarbonate of soda
¾ teaspoon fine table salt
150g jumbo oat flakes
1 tablespoon clear honey
1 tablespoon black treacle
250ml Guinness
250ml buttermilk

Preheat the oven to 200°C fan/220°C/Gas Mark 7. Line a large loaf tin (8 x 30 x 11cm) with baking parchment.

Mix together all the dry ingredients in a large bowl and make a well in the centre. Add the remaining ingredients into the well and work the mixture with your hands to make a loose, smooth and wet dough.

Place the dough in the lined tin and score the top lengthways with a knife. Bake for 15 minutes, then turn down the oven to 175°C fan/195°C/Gas Mark 5–6 and bake for a further 20 minutes. Remove the bread from the tin and bake, directly on the oven rack, for a final 5 minutes or until the bread sounds hollow when tapped on the bottom. Remove from the oven and cool on a wire rack.

HORSERADISH YOGHURT
100ml Greek yoghurt
2 teaspoons horseradish cream
5g fresh horseradish, grated
2–3 drops of Tabasco sauce
fresh lemon juice
Maldon sea salt

Mix together the yoghurt, horseradish cream, fresh horseradish and Tabasco sauce. Season with lemon juice and salt to taste.

ASSEMBLY
1 shallot
200g applewood-smoked eel, cut into chunky dice
4 teaspoons capers, drained
dill fronds
borage flowers (if available)
grated fresh horseradish
Fennel Kimchi
Pickled Radishes

Cut the shallot into thin rings and plunge into iced water to crisp up. Slice the soda bread and build open sandwiches on the slices – start with the eel, then add horseradish yoghurt and garnish with the shallot rings, capers, dill, borage flowers, horseradish, kimchi and pickles.

Applewood-Smoked Eel, Guinness Soda Bread, Horseradish

CRAB, NORI AND POTATO
Crab is one of my favourite foods. I have tried crab all over the world and I can say quite confidently that Cornish crab has to be the best and most flavourful. The white meat is delicate and sweet and the brown has an intensity of fresh sea flavours that you never forget. I was brought up on the south coast of Ireland, practically on the sea's edge, but it wasn't until I left that I appreciated how lucky I'd been to enjoy such seafood. So now, whenever we get a delivery of crabs, I'm the first to volunteer to take on the laborious job of 'picking'. The only problem is that the yield is never as good as it should be because I can't help myself from taking great big spoonfuls when nobody's looking.

Makes about 12 'sandwiches'

BROWN CRAB MAYO
150g brown crab meat
50g egg yolks
155g Crab Oil
50g crème fraîche
1–2 drops of Tabasco sauce
juice of ½ lemon

Place the brown crab meat in a piece of muslin, gather into a pouch and gently squeeze out any excess liquid. Put the crab meat in a blender or food processor and add the egg yolks. Start blending at a medium speed, then slowly drizzle in the oil while blending (add a touch of cold water if the mix becomes too thick).
Transfer to a mixing bowl and add the crème fraîche, Tabasco and lemon juice to taste. Mix together. Keep in the fridge until required.

NORI MAYO
3 sheets of dried nori (3g each)
140ml warm water
20g wholegrain mustard
140ml grapeseed oil

Soak the nori in the warm water until softened. Transfer the nori and soaking water to a blender or food processor, add the mustard and blend on a medium speed. While blending, gradually drizzle in the oil until emulsified. Keep in the fridge until required.

POTATO CRISPS
2 large Maris Piper or other best-quality chipping potatoes, washed well
vegetable oil, for deep-frying
Nori Powder
Maldon sea salt

Slice the potatoes very thinly on a mandoline (you need at least 24 slices). Heat oil in a deep pan or deep-fat fryer to 160°C. Deep-fry the potato slices, in batches, for about 5 minutes or until they are golden and crisp. Lift out and drain on kitchen paper. Season with salt and nori powder. Keep in a warm, dry area of your kitchen until ready to serve.

ASSEMBLY
200g white crab meat
50g Pickled Wakame
Nori Powder, for dusting

Gently mix the white crab meat with the brown crab mayo. Lay half of the crisps on a flat tray. Top each of these with a generous spoonful of the crab mixture, followed by the pickled wakame and, lastly, a spoonful of the nori mayo. Cover each one with another potato crisp to create a sandwich. Dust the top of each sandwich with nori powder.

Crab, Nori and Potato

GARDEN FRESH PEAS
ROOFTOP MINT AND FRIED BREAD
Everyone has food memories and most of us remember eating fresh peas from a pod. Sweet and juicy, the taste and bouncy texture alert you to summer just round the corner. As a chef, the last months of winter can be challenging, so fresh peas are a sign that soon the crates from the farm will be full and plentiful. In the morning, we work as a team to pick, pod, wash and trim. During the first hour or two we chat and plan what we are going to serve as we work our way through the vegetable preparation. As with most dishes that make a menu, it starts with a discussion and ideas based on what we have. This recipe was one of the first dishes created in this way, where we took one ingredient and discussed how best to treat it, then shared the labour to create a dish with the perfect balance of freshness, texture, acidity and surprise. This memory will always remind me to create collectively, based on something perfect in its time and place.

serves 8–10

PEA MOUSSE
1½ sheets/leaves of silver leaf gelatine
500g frozen peas
10g Sosa ProEspuma Cold

Soak the gelatine in cold water to soften it. Bring a suitable-sized pot of water to a rolling boil with a generous pinch of salt. Blanch the peas in the boiling water for 2 minutes, then drain. Tip into a blender or food processor and blend until smooth.

Drain the gelatine, then warm it in a small pan with a splash of water to melt it. Add to the pea purée along with the ProEspuma and blend to incorporate. Pass the mixture through a fine sieve on to a flat tray set over ice to cool the mixture as quickly as possible.

Decant the mixture into a siphon gun so that it is three-quarters full. Add two charges and give the siphon gun a violent shake. Keep refrigerated until required.

LEMON GEL
½ sheet/leaf of silver leaf gelatine
35g caster sugar
75ml fresh lemon juice

Soak the gelatine in cold water to soften it. Dissolve the sugar in the lemon juice by warming it in a pan to just before it boils. Remove from the heat. Drain the gelatine, squeezing out excess water, and add to the pan.
Stir until melted into the mixture. Allow to cool, then decant into a squeezy bottle or disposable piping bag and leave to set in the fridge until required.

MINT GRANITA
2 bunches of mint (about 60g in total), leaves picked

Blanch the mint leaves in boiling salted water for 4 minutes. Drain, reserving the liquid, and refresh the mint in iced water. Drain the mint and squeeze out as much of the water as possible. Blend the mint with a little of the reserved blanching liquid in a blender or food processor until smooth. Pour into a metal tray or other freezerproof container to make a thin layer. Freeze until solid, then run a fork through to break it up and create a granita texture. Keep in the freezer until required.

ASSEMBLY
500g podded fresh peas
1 head of celery
Nori Oil
fresh lemon juice
15g chives, chopped
100g Fried Bread
black mint leaves
Moroccan mint leaves
sorrel leaves
Maldon sea salt
marigold or chive flowers, to garnish (if available)

Separate the small, sweeter-tasting peas from the larger ones. Leave the small, sweet ones raw. Blanch the larger ones in boiling water for 30 seconds, then refresh in iced water. Keep all the peas in the fridge until required.

Peel the strings from the celery, then dice. Weigh the celery and calculate 1% of that weight in salt to season it. Dress the peas and celery with a little nori oil, lemon juice and salt to taste. Stir through the chopped chives.

Spoon this mixture around each plate. Pipe six or seven dots of lemon gel per plate. Sprinkle over the fried bread and add a generous mound of pea mousse. Scatter the mint and sorrel over the plates and finish with some of the granita. Garnish with flowers, if using.

Fresh Peas, Rooftop Mint and Fried Bread

WILD GARLIC TAGLIATELLE
TROMBETTA COURGETTE, SUNFLOWER SEED PESTO
You might think sunflower seeds are an unusual substitute for pine nuts, or that we use them just to be different. But when sunflower seeds are toasted to the extreme – borderline burnt – I think they have a better flavour than pine nuts at a fraction of the price. And they keep this dish nut-free. The pasta dough makes more than is required for the recipe, but if you are going to the effort of making pasta then it is worth making a large batch. It can be dried and stored, or frozen. If made ahead of time and dried, it will take about 4 minutes to cook.

serves 4–6

TAGLIATELLE
2 tablespoons olive oil
4 eggs
2 egg yolks
480g type '00' flour
fine semolina, for dusting

The pasta dough can be made in a stand mixer fitted with a paddle attachment, or in a bowl or on the work surface by hand. Start by mixing together the olive oil, eggs, yolks and half of the flour until well worked. Add the remaining flour a handful at a time, mixing in well before adding the next handful. This should be a slow process – a little at a time really is best. Once the last of the flour has been incorporated, knead briefly in the mixer. If making by hand, knead the dough on the floured surface for at least 10 minutes or until the dough is firm, smooth and even in consistency. Wrap the dough in clingfilm and set aside to rest at room temperature for about 30 minutes.

Divide the dough into eight pieces. Flatten the pieces and dust lightly with flour. Work with one piece at a time. Feed through a pasta machine set on the widest setting, then fold the dough over and pass through this setting again. Repeat this process 3 times so that you have a rectangular shape and an even thickness.
Continue to pass the dough through the pasta machine, changing the setting until you reach the second thinnest setting. Repeat with the remaining pieces of dough.

Dust the work surface with fine semolina. Roll up each sheet and cut across into 1cm wide ribbons using a sharp knife. Toss the ribbons with the semolina.

TROMBETTA COURGETTES
2 Trombetta courgettes
2% brine (see here) (or the brine from jarred olives)

Put the whole courgettes in a container and cover with a 2% brine. Leave in the fridge for 6 hours. Remove the courgettes from the brine and slice lengthways into ribbons (the same width as the tagliatelle) using a mandoline or peeler.

SUNFLOWER SEED PESTO
50g honey
250g sunflower seeds
100ml olive oil
25g Bread Miso or ready-made brown miso
a bunch of basil, leaves picked
3 sprigs of flat-leaf parsley, leaves picked
1 tablespoon golden marjoram leaves
50g aged Parmesan, finely grated
zest of 1 lemon

Put the honey in a small pan, bring to a gentle simmer and cook to a nutty brown colour. Allow to cool. Toast the sunflower seeds in the oil; remove from the heat.

Put the miso, honey and a third of the sunflower seeds and oil in a food processor and pulse to mix. Add another third of the sunflower seeds and oil and pulse. Finally, add the remaining seeds and oil and pulse to a pesto-like texture.

Roughly chop the herbs. Just before serving, fold them through the sunflower seed paste along with the Parmesan and lemon zest.

ASSEMBLY
100ml water
100ml whey
a drizzle of olive oil
a bunch of wild garlic leaves
fresh lemon juice
freshly grated Parmesan
golden marjoram (if available)
basil leaves
Maldon sea salt and freshly ground black pepper

Put the water, whey and olive oil into a large pan and season generously with salt and pepper. Bring this emulsion to a simmer. Add about a third of the pasta and simmer for 1–2 minutes so that it is still al dente.

Drain the pasta, reserving some of the cooking liquor, and tip into a bowl. Add some of the reserved cooking liquor, the courgette ribbons, wild garlic and a squeeze of lemon juice to the warm tagliatelle. Toss gently.

Divide the tagliatelle, wild garlic and courgette ribbon mixture among the bowls. Garnish with the Parmesan, marjoram and basil. Serve the pesto on the side.

Wild Garlic Tagliatelle, Trombetta Courgette, Sunflower Seed Pesto

GARDEN COURGETTE
SMOKED BUFFALO MILK CURD, ROOFTOP HONEY
For the curd here we use fresh buffalo milk, which is a little richer than cow's milk, but you could easily use any full-fat milk, even sheep milk. You'll notice that I use aged Parmesan to season and bring acidity to the courgette purée. I do this to keep the mix green. If you put an acid like lemon in the purée it would go brown. I like to use Parmesan as a seasoning in many purées, pulses and soups because I find it brings an incredible umami flavour that you sometimes can't achieve with just salt and lemon.

serves 4

SMOKED BUFFALO CURD
500ml buffalo milk
25ml double cream
2 teaspoons buttermilk
a pinch of Maldon sea salt
zest of 1 lemon
a handful of dried hay
½ teaspoon liquid vegetable rennet

Preheat the oven to 180°C fan/200°C/Gas Mark 6.

Combine all the ingredients, except the hay and rennet, in a large jug. Spread the hay in a deep baking tray and toast in the oven until it is an amber colour all over and has started to smoke. Carefully remove the tray of smoking hay from the oven and pour over the milk mixture. Leave to infuse for 30 minutes.

Strain the mixture through a fine sieve into a clean pot and add the rennet.
Set over a low heat and heat to 36°C (the mix should be just warm on the fingertips). Transfer into a container and chill for at least 2 hours.

COURGETTE AND BASIL PURÉE
2 courgettes
olive oil
1 garlic clove, crushed
a bunch of basil, leaves picked
20g aged Parmesan, finely grated

Cut the courgettes into quarters lengthways, then slice across into thin pieces. Set a medium-sized pan over a medium heat. Add a good drizzle of olive oil and the garlic and follow quickly with the sliced courgettes. Stir and add a spoonful of water. Cover with a lid to help create steam. After 2 minutes, add half of the basil leaves (reserve the best leaves for the assembly) and the Parmesan.

Tip the mixture into a blender or food processor and blend until smooth. Transfer to a bowl placed over iced water to cool quickly so the bright green colour is retained.

PUMPKIN SEED PRALINE DRESSING
125g pumpkin seeds
50ml vegetable oil
50ml honey
10g white miso
a pinch of Maldon sea salt

Toast the pumpkin seeds in the oil until really golden. In a separate pan, caramelise the honey to a dark golden colour. Put the seeds with their oil, the honey and miso into a blender or food processor and pulse until combined but still retaining a coarse texture. Season with salt to taste. Allow to cool to room temperature.

ASSEMBLY
2 courgettes
fresh lemon juice
olive oil
5 Nocellara del Belice olives, stoned
smoked paprika, for dusting
sea salt and freshly ground black pepper

Slice the courgettes thinly lengthways using a mandoline or peeler. Season with salt, black pepper, lemon juice and olive oil to taste.

Spoon the courgette and basil purée generously around each plate. Roll the courgette slices and place some on each plate. Slice the olives and add a few olive pieces and a spoonful of the smoked curd. Dust the curd with a little smoked paprika. Finish with the reserved basil leaves and dressing.

Garden Courgette, Smoked Buffalo Milk Curd, Rooftop Honey

HERITAGE TOMATOES
CURED SARDINES, ROOFTOP HERBS
This is a theatrical dish, full of fun. I stole the idea from a supper club collaboration between Dean and Ben. In the restaurant we always have a mini milk bottle on each table full of flowers and herbs, mostly edible, from a local allotment. Rather than filling the bottles for this dish with water, we used a smoky and aromatic dashi and a bouquet of entirely edible herbs and flowers. The trick is to place the bottles on the table just before the guests sit down. Serve the bowls of beautiful tomatoes and cured sardines, then snip the bouquets over the bowls and pour in the contents of the bottles. It's a great party trick of a dish.

serves 4–6

ANCHOVY DRESSING
250ml olive brine (from jars of olives)
120g salted anchovies
50g capers, drained
180ml white wine vinegar
20g caster sugar
50ml extra virgin olive oil

Place all the ingredients, except the oil, in a blender or food processor. Blend on a high speed, then gradually add the oil while blending. The dressing can be stored in a sealed container in the fridge for up to 7 days.

TOMATO DASHI
15g dried kombu
500ml water, boiled and cooled (or use filtered or still mineral water)
1 sheet of dried nori (3g)
10g bonito flakes
1 teaspoon white soy sauce
a pinch of Maldon sea salt
100g vine cherry tomatoes, sliced (vines reserved)
2–3 basil stalks
1 garlic clove, sliced

Add the kombu to the water in a pan and bring to a very gentle simmer (do not boil). Simmer for 1 hour. Strain the liquid through a fine sieve into a jug.
Add the nori, bonito flakes, soy sauce, salt, tomatoes, tomato vines, basil stalks and garlic. Allow to infuse for 40 minutes. Taste to check the seasoning – the dashi should have a strong savoury flavour – and adjust as required. Strain the dashi through the fine sieve.

ASSEMBLY
800g mixed heritage tomatoes, chopped
4 fillets of Cured Sardines, very finely chopped
1 fillet of Smoked Mackerel, chopped
wild rocket (both leaves and flowers)
tarragon leaves
bronze fennel
sorrel leaves
basil leaves

Dress the tomatoes with the anchovy dressing and the cured sardines. Spoon into bowls and top with the mackerel. Pour the tomato dashi into small glass bottles. Place a small bunch of rocket, tarragon, bronze fennel, sorrel and basil in the top of each bottle. At the table, use scissors to snip the herbs over the tomatoes and pour over the dashi.

Heritage Tomatoes, Cured Sardines, Rooftop Herbs

BBQ SPRING CABBAGE
FRESH RICOTTA, COPPA TRIM
We make our own charcuterie at the restaurant and over time we generate a substantial amount of trim from the ends of coppa and salami. As we try not to throw anything away, we have to find clever ways to use the trim in a dish. Any charcuterie trim would be delicious here – use what you have. The dish is kind of a play on bacon and cabbage. I love the shapes of vegetables, so when plating the dish I like to arrange the cabbage back into its natural round shape.

serves 4–6

DRESSING
50g honey
50ml grapeseed oil
1 teaspoon very finely diced peel from Preserved Amalfi Lemons
fresh lemon juice

Mix together the honey, oil and preserved lemon peel. Season with lemon juice to taste.

BBQ CABBAGE
1 spring/Hispi cabbage
sheets of dried nori (3g each)
5% brine (see here), in a spray bottle

Remove the green outer leaves of the cabbage. Blanch them in a pan of boiling water for 1–2 minutes, then drain and squeeze out as much liquid as possible. Dry these leaves in a dehydrator, or in the oven at the lowest setting, for 3–6 hours or until completely dried out.

Preheat the oven to 150°C fan/170°C/Gas Mark 3–4. Weigh the dried cabbage leaves, then calculate 30% of this weight – this is the weight of nori you need. Toast the nori sheets on a baking tray in the oven for 10 minutes. Combine the toasted nori sheets and dehydrated cabbage leaves in a small blender or food processor and blend to a fine powder.

Separate the remaining cabbage leaves, then char on both sides on a barbecue, or a hot ridged grill pan. Spray them with the brine solution twice while they are on the barbecue or pan.

ASSEMBLY
100g fresh ricotta
10g Parmesan, freshly grated
zest of ½ lemon
a drizzle of olive oil
a drizzle of vegetable oil
80–100g coppa trim, or Coppa, diced
250g Cabbage Ferment
Maldon sea salt and freshly ground black pepper

Mix together the ricotta, Parmesan, lemon zest and olive oil. Season with salt and pepper to taste.

Heat the vegetable oil in a pan over a low-medium heat, add the coppa trim and cook slowly until crispy.

Spread some of the ricotta mix on each plate, then pile up the BBQ cabbage leaves and fermented cabbage in alternate layers, scattering coppa and drizzling some of the dressing between each layer. (Note: for each portion you want about 70% fermented cabbage to 30% BBQ cabbage.) Dust cabbage-nori powder over the top of each dish.

BBQ Spring Cabbage, Fresh Ricotta, Coppa Trim

CORNISH CRAB
FRIED CACKLEBEAN EGG AND COASTAL VEGETABLES
I love a good fried egg. It's a go-to late at night after work, on the rare occasion Sarah hasn't left me a plate of what she had earlier for dinner.
A couple of slices of white bread, slapped with a lick of Kerrygold butter and topped with a fried egg seasoned with an unhealthy amount of salt and pepper does me right in under five minutes. This is a mega pimped-up version to make if you just happen to have freshly picked crab and brown crab mayo in the fridge. You probably don't, but it's still worth the work as it's a fantastic combo.

serves 6

18 asparagus spears
300g fresh white crab meat
2 tablespoons Nori Powder
olive oil
50g sea purslane
100g samphire
18 radishes
50g Rock Samphire Pickle
juice of 1 lemon
50g unsalted butter, diced
6 eggs (we use CackleBean)
200g Brown Crab Mayo (see recipe for Crab, Nori and Potato)
Maldon sea salt and freshly ground black pepper

Remove the tough woody ends from the asparagus spears. Lay the asparagus on a flat tray, season with a pinch of salt and allow to sit for 10 minutes. Put a ridged grill pan on to heat up.

Meanwhile, season the white crab meat with a little of the nori powder, salt and olive oil to taste. Set aside.

Blanch the sea purslane and samphire together in a pan of boiling water for 10 seconds. Drain and refresh in iced water. Cut half of the radishes into quarters and the rest into thin round slices. Put the sea purslane, samphire and radishes into a mixing bowl with the pickled rock samphire. Season with a pinch of salt, the lemon juice and a little olive oil, and toss together gently.

Place the asparagus on the raging hot grill pan and scorch on all sides for 1–2 minutes or until the asparagus spears are blistered and blackened. Remove from the pan and cool slightly, then cut each spear in half lengthways and season with salt and pepper.

Place a couple of large non-stick frying pans over a medium heat. Add the butter and melt it. When it turns slightly golden, crack the eggs into the pans and season each with a pinch of salt and black pepper. Fry the eggs to your liking.

Place a generous spoonful of brown crab mayo in the centre of each plate. Top with a fried egg, placing it gently, and add a dollop of the white crab meat (don't cover the egg yolk). Scatter radishes, coastal vegetables and asparagus around. Finish by dusting the entire dish with the remaining nori powder.

Cornish Crab, Fried Cacklebean Egg and Coastal Vegetables

OUR FARM
My first exposure to a working farm occurred when I was quite young. My Auntie Emer lived in Mallow in West Cork with the charming and gentle Doctor Miles Frankel at Kilbrack farm. As a boy I used to go down and spend a couple of weeks there in the summer. I will never forget that mad kitchen in the old house. Stone floors, a big farmhouse table, old wood-burning stove heating the house and something delicious always bubbling away slowly on the stove. My brother, Earl, was living and working there at the time, looking after the livestock and doing handyman stuff around the place. I remember the fresh bread he would bake and the hams he would cure. There were iron beams crossing all over the ceiling and he'd hang the hams and herbs from them.

They had a parrot that used to parade from beam to beam above, barking orders. When we'd be sitting there, having tea or some soup, we'd have to cover our teacup or bowl to prevent unwanted parrot crap from plonking in. He had a good aim. His name was Apu. Funny that.

Patrick Frankel now runs Kilbrack and has turned it into one of the most respected organic farms in Cork. I was proud to see the Kilbrack branding on some beautiful vegetables ordered for Ballymaloe during the Litfest we took part in.
The Allens at Ballymaloe have been pioneering growers for years, and if they hold events where they need to outsource a little, only the best will do.

The last time I visited Kilbrack was for my brother's wedding. The next day, my nephews and I went out to help Patrick get ready for the market. It was backbreaking work we did that day, with the attention to detail and careful trimming of the ingredients on our hands and knees. We were out there for hours and hours. Once it started to get dark, we came in, but not Patrick. He stayed on cutting things right into the night, to prepare for what would hopefully be a successful day at market.

When you think about the work that goes into farming at this level – on a certified organic farm with virtually no powered equipment – you have to appreciate the work that is done. Chefs win accolades, book and television deals, pats on the back and sponsorships of all types, basically taking credit for someone else's work, but farmers, working passionately in this way, are the unsung heroes.

Fast forward to 2017 and we have our own set-up that I like to call 'guerrilla urban farming'. Dean has claimed a space of unwanted ground near the restaurant and grows what he can there. We've had a garden on the rooftop of The Dairy since 2013, where we've been growing herbs and salads. But after we met Igor and Tom, who set up a company called Indie Ecology, what they have helped us to achieve is remarkable. They basically take our food waste and, using a Japanese method known as 'bokashi', they turn it into a rich, almost black compost. It's entirely organic, using a type of fermented molasses and naturally occurring micro-organisms to turn kitchen scraps into safe, nutrient-rich compost.

Igor and Tom have rented a farm in Sussex, which is divided among the ten restaurants that are involved. I have named The Dairy and Sorella's part Our Farm. Each restaurant consults with Igor and Tom about what they would like to grow, then we buy the seeds and the rest is left to the guys and nature. We get deliveries twice a week and our menu works around what is produced. We all get involved in the kitchen, preparing the vegetables covered in soil that were literally picked hours before. While we do this, we are brainstorming on how to incorporate what we have into the menu. We are now a step closer to the farmhouse kitchen by the sea. As Igor puts it: 'Forget field to fork; it's plate to farm and back to plate again.'

SMOKED BEETROOT TARTARE
CACKLEBEAN EGG YOLK, HAZELNUT
I've become slightly obsessed with smoking things. I started with the obvious, salmon, and moved on to meat like game, pigeon and venison, then to bone marrow (our smoked bone marrow butter became kind of legendary). We even started smoking ice creams. Playing around with smoking fruit and vegetables was exciting and opened up so many possibilities. Beetroot worked immediately. It's one of my favourite vegetables because of its versatility. I find the large ruby beetroot to be quite meaty, so we thought up a play on a beef tartare. But not in the way of veggie burgers and vegan sausages. I hate that stuff! It is kind of fun to dress this tartare as you would imagine it being served in a Parisian brasserie.

serves 6

HUNG YOGHURT
200g plain yoghurt

Line a large sieve with muslin and set it over a deep bowl. Put the yoghurt into the sieve, then gather up the edges of the cloth and secure them together.
Leave in the fridge overnight to allow the liquid to drain out of the yoghurt (this liquid, or whey, can be reserved and used in ferments).

SMOKED BEETROOT
500g raw beetroots
a drizzle of vegetable oil
rock salt
applewood chips, for smoking

Preheat the oven to 190°C fan/210°C/Gas Mark 6–7.

Drizzle each beetroot with oil, sprinkle with salt and wrap individually in foil. Bake for 1–1½ hours or until the core temperature reaches 90°C. Remove from the oven and allow to cool and steam in the foil for 15 minutes. Remove from the foil and rub off the skins.

Take a flat tray with a steam insert (such as a deep roasting tray that will hold a flat steaming rack) and spread the applewood chips over the bottom of the tray. Warm the tray over a medium heat until the chips start to smoke, then turn the heat down to low. Place the beetroot on the steam insert/steaming rack and set this over the smoking chips. Completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the beetroot. Leave to lightly smoke for 7 minutes. Remove the beetroot from the tray and leave to cool.

BRINED EGG YOLKS
500ml 7% brine (see here)
10 egg yolks (we use CackleBean) – this allows for a few breakages
a drizzle of vegetable oil

Pour the brine into a deep bowl. Gently add the yolks using your hands or a slotted spoon. Cover the surface of the brine with the vegetable oil so that the yolks are held down in the brine. Allow the yolks to brine for 1 hour at room temperature. To serve, gently remove the yolks with your hands or a slotted spoon.

ASSEMBLY
240g Fermented Beetroot
1 tablespoon Shallot Vinegar
2 tablespoons capers
a drizzle of Ember Oil
Maldon sea salt and cracked black pepper
a handful of fresh hazelnuts, finely sliced
bittercress or watercress, to garnish

Mince the fermented and smoked beetroot through a mincer or chop finely with a knife. Season with the shallot vinegar, capers, ember oil and some salt and pepper.

Using a small ring mould, make a disc of the beetroot mixture in the centre of each plate. Top with a layer of the hazelnut slices. Gently place a brined egg yolk to the side of each disc. Garnish with cracked black pepper and bittercress or watercress. Place a spoonful of the hung yoghurt to the side of each disc.

Smoked Beetroot Tartare, Cacklebean Egg Yolk, Hazelnut

CAULIFLOWER AND DATE
RECIPE BY DEAN PARKER, HEAD CHEF OF THE MANOR
This is a classic Dean dish – working with flavours that bounce and burst around a similar flavour profile, yet tweaking all the senses with sweet, bitter and sour notes, leaving you with the best flavour possible from the humble yet complex cauliflower. A bit like Dean really...

serves 6

HUNG KEFIR OR YOGHURT
500ml Kefir or plain yoghurt
Maldon sea salt

Line a large sieve with muslin and set it over a deep bowl. Put the kefir or yoghurt into the sieve, then gather up the edges of the cloth and secure them together. Leave in the fridge overnight to allow the liquid to drain out of the kefir/yoghurt (this liquid can be used in any of the recipes calling for whey).

Remove from the fridge and weigh the contents of the cloth. Calculate 1% of that weight in salt and add this to the hung kefir to season.

DATE PURÉE
2 black peppercorns
seeds from 1 cardamom pod
100g pitted dates
100ml water
½ teaspoon peeled and sliced root ginger
a small pinch of Maldon sea salt
⅓ vanilla pod, split lengthways
4 teaspoons fresh lemon juice

Grind the peppercorns and cardamom seeds to a powder in a mortar and pestle.
Put all the ingredients except the lemon juice into a pot and bring to a simmer. Cover with a lid and simmer gently for 10 minutes, topping up with more water if the mixture gets too dry. Remove the vanilla pod. Transfer the mixture to a blender or food processor and add the lemon juice. Blend until smooth, adding a little more water if the purée seems too thick.

CAULIFLOWER MOUSSE
330g cauliflower florets (keep the cores/stems for other elements of the recipe)
125g unsalted butter, cut into small cubes
a large pinch of Maldon sea salt
½ sheet/leaf of silver leaf gelatine
330ml whey or whole milk
165g plain yoghurt
0.66g xanthan gum (Dean from The Manor uses the back of a teaspoon to pick up a small pinch)

Pulse the cauliflower florets in a blender or food processor into pieces about 5mm in size. Put the butter in a pan set over a high heat. When the butter starts to foam and turn a really golden colour, reduce the heat and add the floret pieces with the salt. Toast gently until the florets are golden and the bubbles have stopped (this means the moisture has evaporated).

Soak the gelatine in cold water to soften it. Drain the cauliflower well on kitchen paper, then tip into a pot and add the whey or milk. Bring to the boil. Drain the gelatine, squeezing out excess moisture, and add to the pot. Stir until the gelatine has melted into the mixture.

Transfer the mixture to the blender or food processor, add the yoghurt and xanthan gum, and blend until smooth. Pass through a fine sieve into a bowl set over ice and leave to cool. Keep in the fridge.

TOASTED GRUE DE CACAO
100g cacao nibs
50g unsalted butter

Combine the cacao nibs and butter in a pan set over a medium heat. As soon as the butter starts to turn golden and smell nutty, remove from the heat. Drain the nibs in a sieve.

CAULIFLOWER CRUMB
florets from 1 medium cauliflower (keep the core/stems for other elements of the recipe)
375g unsalted butter, cut into small cubes

Grate the florets on a coarse grater into crumbs. Put the butter in a pan set over a high heat. When the butter starts to foam and turn a really golden colour, reduce the heat and add the floret crumbs. Gently fry until the bubbles stop (this means all the moisture has evaporated). Drain the crumbs in a fine sieve, pressing hard to remove all the liquid, then spread them over a clean J-cloth or tea towel to remove any excess fat. Season with salt.

CAULIFLOWER STEMS
cores/stems from 2 cauliflowers
vegetable oil, for deep-frying

Slice the cores/stems thinly lengthways on a mandoline or grater. Blanch the slices in a pan of boiling salted water for 2 minutes, then refresh in iced water; drain. Dry in a dehydrator, or overnight in the oven set at the lowest temperature, until completely dried out.

Heat vegetable oil in a deep pan or deep-fat fryer to 170°C, then deep-fry the dried slices, in batches, until puffed and golden. Ensure that the oil returns to temperature between batches so that the cauliflower stems puff up as soon as they hit the oil. Drain on kitchen paper or a wire rack.

BBQ CAULIFLOWER
½ medium cauliflower (without leaves)
a drizzle of vegetable oil
fresh lemon juice
Maldon sea salt and freshly ground black pepper

Portion the cauliflower into approximately 6cm florets with long stems of core. Toss with a drizzle of vegetable oil and a seasoning of salt and pepper. Barbecue over a low fire, or grill on a hot ridged grill pan, until golden on all sides and cooked through – this should take about 8 minutes. Season the florets with lemon juice.
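Before the final elements of this dish, an aside on seasoning by weight, which several preparations in this chapter rely on: 1% salt for the hung kefir above (and for the celery in the pea dish), 30% nori against the weight of the dried cabbage leaves, and the mousse's 0.66g of xanthan, which works out at 0.2% of its 330ml of whey. A small, purely illustrative sketch of that arithmetic:

```python
def addition_grams(base_weight_g: float, percent: float) -> float:
    """Weight of an addition (salt, xanthan, nori...) taken as a
    percentage of the weight of the base it seasons or thickens."""
    return base_weight_g * percent / 100

print(addition_grams(350, 1))    # ~3.5g salt for 350g of hung kefir
print(addition_grams(330, 0.2))  # -> 0.66g xanthan, as in the mousse
print(addition_grams(40, 30))    # -> 12g nori for 40g dried cabbage leaves
```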
FRIED FLORETS
½ medium cauliflower (without leaves)
vegetable oil, for deep-frying
Maldon sea salt

Cut the cauliflower into 1cm florets. Deep-fry, in batches, in vegetable oil at 190°C until golden. Drain on kitchen paper and season with salt.

ASSEMBLY
Decant the cauliflower mousse into a siphon gun with one charge and shake well.

Spread a brush of the date purée on each plate. Sprinkle over some of the toasted cacao nibs. Dot around some of the hung kefir/yoghurt. Arrange two BBQ florets, five fried florets and one slice of fried stem on each plate. Add two mounds of cauliflower mousse to each plate, one half the size of the other. Sprinkle the cauliflower crumb over the larger of the two mounds.

Cauliflower and Date

ROAST AND FERMENTED ARTICHOKE
PEAR, CHEESE TRIM MOUSSE
Never ever throw away the rinds and ends of cheese, because they can make the most funky and amazing fondue or mousse. If you caramelise the cheese beforehand you can get an intense, nutty flavour. This idea came about at a lunch we cooked at the fabulous and inspiring Ballymaloe House. There was a lot of lovely cheese trim left on their trolley. We caramelised it and served it as a cold fondue, with a little herb roll of wild watercress and rocket from their beautiful gardens. It went down a storm. Cheese and pear are a natural match, but the addition of artichokes elevates this dish into a stunner.

serves 4–6

CHEESE TRIM MOUSSE
10g unsalted butter
300g edible cheese trim (we use the trimmings – white bloom/rind – of soft cheeses such as Baron Bigod)
400ml whole milk
5g Sosa Procrema Cold 100 (ice cream stabiliser)
Maldon sea salt
Chardonnay vinegar

Melt the butter in a non-stick pan, add the cheese trim and stir the mixture over a low heat while scraping the bottom of the pan, as the cheese will catch. Continue to cook, stirring, until the cheese caramelises and takes on a golden-brown colour. Add the milk and slowly bring to the boil, still stirring.

Pour into a blender or food processor, add the Procrema and blend together – the mixture will resemble a loose custard. Season with salt and Chardonnay vinegar to taste. Set aside (in the fridge if making ahead).

ROAST ARTICHOKES
a drizzle of vegetable oil
12 Jerusalem artichokes, washed and cut in half
50–100g unsalted butter
2 garlic cloves (skin on), lightly crushed
2 sprigs of thyme

Preheat the oven to 180°C fan/200°C/Gas Mark 6.

Heat the vegetable oil in an ovenproof pan over a high heat. Add the artichokes and reduce the heat to medium. Add the butter. When it starts to foam, transfer the pan to the oven and roast the artichokes for about 10 minutes or until tender. Remove from the oven and add the garlic and thyme to the pan. Set aside to allow the garlic and thyme flavours to infuse the butter.

ASSEMBLY
1–2 ripe pears, quartered, cored and thinly sliced
200g Fermented Artichoke, thinly sliced on a mandoline
chervil leaves
fresh black truffle, thinly sliced

Warm the roast artichokes in the butter that they were roasted in, then spoon them on to the plates with some of the butter. Scatter the slices of pear and fermented artichoke around each plate but leave one large gap somewhere on the plate.

Warm the cheese trim mousse over a gentle heat to no higher than 70°C, stirring constantly so that it does not catch. Decant into a siphon gun with one charge and shake well. Add some of the cheese trim mousse to the gap on each plate. Garnish with chervil and black truffle.
Roast and Fermented Artichoke, Pear, Cheese Trim Mousse

FERMENTED BARLEY
WILD MUSHROOM AND CHICKEN SKIN
The aroma of the fermentation process of the barley grains reminds me of a brewery. That's why in the restaurant we suggest drinking an ice-cold pale ale with this dish. It works a treat. We have added chicken skin for an extra depth of flavour, but it is an excellent dish with just the wild mushrooms.

serves 4

CRISPY CHICKEN SKIN
the skin from 1 chicken (about 150g)
Maldon sea salt

Preheat the oven to 175°C fan/195°C/Gas Mark 5–6.

Trim any veins or bloody parts from the chicken skin, then lay it flat between two baking trays. Bake for 20–30 minutes or until crispy. Transfer the skin to kitchen paper to drain off the excess fat, then season with a pinch of salt and allow to cool. Once cool, chop with a knife to a coarse texture.

ASSEMBLY
a batch of Fermented Barley
100g unsalted butter
a drizzle of sherry vinegar
300ml Brown Chicken Stock
20g crème fraîche
300g mixed wild mushrooms
juice of 1 lemon
Maldon sea salt and freshly ground black pepper
savory, to garnish

Drain the barley, reserving the liquid. Set a suitable-sized pan over a medium-high heat and add 75g of the butter. When it starts to foam, add the barley and cook out, scraping the bottom of the pan every couple of minutes as the barley starts to catch and caramelise, until it is dark and roasted in appearance with a nutty aroma (this could take up to 30 minutes).

Add the vinegar to deglaze the bottom of the pan, stirring and scraping well, then pour in half of the stock and season with a little salt. Turn down to a simmer and cook as you would a risotto: once the first addition of stock has been absorbed, gradually add the remaining stock and some of the reserved water from the barley, stirring the liquid into the grains until absorbed. It should take about 10 minutes for the barley grains to be cooked through and reach a risotto consistency. Adjust the seasoning and stir in the crème fraîche.

Set a frying pan over a high heat and add the remaining butter and the wild mushrooms. Cook for just a couple of minutes or until wilted. Finish with a pinch each of salt and pepper and lemon juice to taste.

Place a generous spoonful of barley in the bottom of each warm serving bowl. Top with the wild mushrooms and a sprinkle of crispy chicken skin. Garnish with savory.

Fermented Barley, Wild Mushroom and Chicken Skin

BBQ DUCK HEARTS
WHITE POLENTA AND CORN
When sweetcorn is in season, it's like one of those red-flag-to-a-bull type of things that lets us all know what time of year it is. Sweetcorn signifies the close of summer, when the days are growing shorter and leaves are turning amber. It doesn't make me sad at all. Quite the opposite. There is such an abundance of things to cook with around the beginning of autumn, and dishes can become a little richer and more comforting. For me, this is the best way to cook white polenta – in the corn cooking liquid. It's such a natural combo: as you know, polenta is ground cornmeal. I think of starchy polenta as the northern Italian version of a great creamy mash. We had the honour of cooking this for the legend that is Alain Ducasse when he visited us at The Dairy. To say we were a little nervous is a huge understatement! He said that among the many dishes we cooked for him, this one stood out.
serves 6

4 fresh corn on the cob
50g unsalted butter
20 fresh cobnuts
1 white-skinned onion, finely diced
80g white polenta
18 duck hearts
1 tablespoon Red Wine Shallot Gastrique
4 teaspoons Onion Treacle
2 sprigs of thyme, leaves picked
20g Parmesan, finely grated
fine table salt
Maldon sea salt and cracked black pepper

Remove the outer green leaf layers (the husk) from the corn. Pull off the golden hair-like fibres but reserve these. Melt the butter in a pan and bring to a simmer, then add the golden fibres and cook until the butter turns a dark golden colour and the fibres start to crisp. Strain the butter through a sieve into a bowl (the butter will now have the most amazing nutty sweetcorn aroma) and reserve. Tip the fibres on to kitchen paper and season them with a light pinch of fine salt. Set aside.

Put the corn on the cob in a suitable-sized pot, cover with water and add a good pinch of Maldon salt. Bring to the boil, then simmer for 12 minutes. Remove from the heat and allow the corn to cool in the cooking water.

Crack open the cobnuts and peel them, then cut each one in half.

For the polenta, put half of the reserved butter into a pan on a medium heat, add the diced onion and cook until soft. Add the polenta and cook, stirring, for 2 minutes. Pour in 200ml of the corn cooking water and simmer gently, stirring constantly with a whisk. Keep topping up with the corn cooking water, as you would when making a risotto, until the polenta is cooked and silky soft with only a slight bite – this can take 10–15 minutes. Remove from the heat and cover the surface of the polenta in the pan with greaseproof paper to prevent a crust from forming. Set aside. (If you are making the polenta in advance, remove from the heat and pour on to a flat tray; cover the top with greaseproof paper and leave to cool.)

Remove the corn from the pot (reserve any remaining cooking water). Using a sharp knife, cut the kernels from the cob on to a baking tray, trying to keep them attached together in randomly shaped pieces. Reserve any loose kernels that fall away – put these into a pan with the cobnuts and a teaspoon of the sweetcorn-scented butter, ready to reheat for serving. Use a spoon to scrape off any tiny corn pieces that remain on the cob, to be added to the polenta.

Meanwhile, fire up the barbecue or heat a ridged grill pan. Thread the duck hearts on to skewers. Barbecue, or cook on the hot pan, for 1–2 minutes, turning occasionally. Allow the hearts to rest for 2 minutes before removing them from the skewers. Cut each in half and put into a pan with the gastrique, a teaspoon of the sweetcorn-scented butter, the onion treacle, thyme leaves and a pinch each of Maldon sea salt and black pepper.

ASSEMBLY
thyme leaves, to garnish

Preheat the oven to 160°C fan/180°C/Gas Mark 4.

Add the tiny pieces of scraped corn and some of the corn cooking water to the polenta and stir to mix – you want a loose risotto consistency. Heat through gently on the stove, then fold in the Parmesan.

Place the larger corn kernel pieces on the baking tray in the oven and warm through for 3 minutes. Gently warm through the duck hearts as well as the cobnut and corn mixture.

Add a spoonful of polenta to each plate and top with the large corn pieces. Scatter over the duck hearts. Finish with the cobnut and corn mixture and garnish with the reserved corn fibres and some thyme leaves.

BBQ Duck Hearts, White Polenta and Corn
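A note on the oven settings used throughout these recipes: they are given in triplicate – fan, conventional and gas mark – and follow a steady pattern, with the conventional temperature sitting 20°C above the fan one and the gas mark its usual equivalent. A rough sketch of that pattern (gas marks are approximate and ovens vary, so treat it as a guide only):

```python
# Approximate gas mark equivalents for conventional temperatures (°C),
# matching the pairings used in these recipes (e.g. 200°C fan = 220°C = Gas 7).
GAS_MARKS = {140: 1, 150: 2, 160: 3, 180: 4, 190: 5, 200: 6, 220: 7, 230: 8}

def oven_settings(fan_c: int) -> str:
    conventional = fan_c + 20  # the consistent fan-to-conventional offset here
    gas = GAS_MARKS.get(conventional, "n/a")
    return f"{fan_c}°C fan / {conventional}°C / Gas Mark {gas}"

print(oven_settings(180))  # -> 180°C fan / 200°C / Gas Mark 6
```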
BBQ Duck Hearts, White Polenta and Corn SALT-BAKED CELERIAC NORI AND SUNFLOWER SEED PRALINE In my opinion, vegetables can be far more complex and exciting than most meat and fish. This quite technical dish is proof of that. It is Richie and Ben's dish and shows off their skill and respect for ingredients. The nori acts as a natural gelatine to hold the celeriac ballotine together. We have used the same technique with other ingredients, replacing the nori with wild mushrooms or game offal, and served the ballotine alongside venison as a very smart game combo – one to impress your foodie mates. serves 6 ALMOND MILK 250g whole dried almonds 500ml water about 2 tablespoons verjus a pinch of Maldon sea salt, or to taste Soak the almonds in the water for 24 hours. Tip it all into a blender or food processor and blend together for 2 minutes. Line a large sieve with muslin and set it over a deep bowl. Pour the blended almond mixture into the sieve. Gather up the edges of the cloth and secure. Place in the fridge and leave the almond milk to strain through. Discard the contents of the muslin bag. Add the verjus and salt to the almond milk, to taste, and mix together with a spoon or whisk. SUNFLOWER SEED PRALINE 125g sunflower seeds 50ml rapeseed oil 50g honey 15g brown miso a pinch of Maldon sea salt Toast the sunflower seeds in the oil in a pan until they are quite dark brown. Tip on to kitchen paper. Heat the honey in a small pan and caramelise to a dark golden colour. Put the honey, a third of the toasted seeds and the miso into a food processor and pulse to a coarse paste. Add another third of the sunflower seeds and pulse to incorporate. Finally, add the remaining seeds with a pinch of salt and pulse again for 1 second. SALT-BAKED CELERIAC 250g rock salt 250g plain flour about 50ml cold water 1 celeriac (unpeeled) Combine the salt and flour in a stand mixer fitted with a paddle attachment and mix on medium speed for 2 minutes. Gradually mix in enough cold water to make a dough-like consistency. Cover the bowl with clingfilm and rest in the fridge for 30 minutes. Preheat the oven to 250°C fan/its highest temperature. Roll out the dough to approximately 3cm thickness in a shape large enough to wrap around the celeriac and completely enclose it. Wrap up the celeriac, smoothing out any air pockets in the dough. Place on a baking tray and bake for 15 minutes, then reduce the oven temperature to 160°C fan/180°C/Gas Mark 4 and bake for a further 30–40 minutes or until the core temperature reaches 80°C. Carefully remove the top of the salt crust. Leave the celeriac to cool to room temperature before removing the remaining crust. CELERIAC BALLOTINE 1 Salt-Baked Celeriac 8 sheets of dried nori (about 3g each) Peel the celeriac, then trim it into as large a cube as possible; reserve the trim. Using a mandoline, cut the cube into very thin square slices (you need 24 slices with some celeriac left over). Lay out three layers of oven-safe clingfilm on a flat surface. Place the celeriac slices on the clingfilm in a rectangular shape, four slices across and three slices down. Lay the slices as close together as possible with no overlaps. Cover the rectangle with a layer of four nori sheets, leaving a 2cm border clear at the bottom. Using the clingfilm to help, roll up as you would a roulade or Swiss roll. Wrap the ballotine tightly in the clingfilm. Repeat the process to create a second ballotine. Reserve any leftover celeriac. Chill the ballotines for a minimum of 30 minutes to set. 
CELERIAC GLAZE 30g honey 400g peeled raw celeriac, chopped reserved Salt-Baked Celeriac slices and trimmings from the ballotines (see above) 150g unsalted butter Heat the honey in a small pan and caramelise to a dark golden colour. Juice the raw celeriac – you want 100ml juice. Caramelise the reserved celeriac slices and trimmings in the butter to a nutty brown colour. Deglaze the pan with a little water, stirring well, then add the celeriac juice and top up with water to cover. Bring to a simmer and cook for 20 minutes. Remove from the heat, cover with clingfilm and allow to infuse for 10 minutes. Strain through a fine sieve into a bowl, pressing down on the pulp to extract all the liquid; discard the pulp left in the sieve. Blend the strained liquid with some of the caramelised honey to taste in a blender or food processor – the amount of honey added to the glaze depends on the natural sweetness of the celeriac. ASSEMBLY a drizzle of vegetable oil a knob of unsalted butter Lemon Gel (see recipe for Fresh Peas, Rooftop Mint and Fried Bread here) Preheat the oven to 175°C fan/195°C/Gas Mark 5–6. Trim the ends off the ballotines, then cut them, still wrapped in clingfilm, into six slices. Heat the oil in an ovenproof frying pan over a medium heat, add the ballotine slices, cut side down, and fry for 1 minute. Add the butter and transfer the pan to the oven to cook for 2 minutes. Flip the slices over and remove the clingfilm. Spoon the buttery juices over the slices. Remove from the oven and add the celeriac glaze. Allow it to warm in the residual heat of the pan. Pour a pool of almond milk into the bottom of each bowl. Place a slice of ballotine in the middle, a quenelle of the sunflower praline to one side and a squeeze of lemon gel to the other side. Pour some of the glaze over the top of the ballotine. Salt-Baked Celeriac, Nori and Sunflower Seed Praline SEA JULIE GIRL SKATE WHITE ASPARAGUS, BREAD MISO This is a deceptively simple dish that shows how we have grown and become quite confident in the kitchen. Rather than adding technique after technique, we let the ingredients do the talking. In the Larder section of this book, we've shown you how to make your own miso from stale bread, but this does take some time. A brown rice miso from a healthfood store would still do the trick. serves 4–6 SKATE 1 skate wing, weighing 600–700g 7% brine (see here) 300–400g unsalted butter, cut into small cubes fresh lemon juice Maldon sea salt Brine the skate wing in a 7% brine for 3 hours. Remove, rinse and pat dry. Preheat the oven to 60–80°C fan/80–100°C/Gas Mark low. Place the skate in a baking tray. Melt the butter in a pan set over a high heat and heat until the butter starts to foam and brown and gives off a nutty aroma. Remove from the heat immediately and cool quickly to stop the butter from burning (you can do this by setting the base of the pan in iced water). Pour the brown butter over the skate to cover. Bake for 15–20 minutes or until the core temperature of the fish reaches 50°C. Lift the skate from the butter, take the flesh off the bone and portion into 4–6 pieces. Season with the poaching butter, lemon juice and salt to taste. ASPARAGUS 12 white asparagus spears 200ml Kombu Oil fresh lemon juice Remove the tough woody ends from the asparagus spears, then peel the stalks down their length. Put the asparagus in a pan and add the kombu oil and enough water just to cover the asparagus. Cover with a lid and simmer for 2–5 minutes or until the asparagus is just tender but retains a bite. 
Remove the asparagus from the cooking liquid and cut into pieces on an angle. Reserve the cooking liquid. Season with lemon juice. ASSEMBLY 4–6 tablespoons Bread Miso or ready-made brown miso dill fronds 500ml White Onion Dashi, warmed Spread 1 tablespoon of miso in each bowl. Top with the asparagus. Add the skate to the side. Garnish with dill. Decant the warm dashi into a jug, and very gently stir through the reserved asparagus cooking liquid. The dashi should be poured over the dishes at the table. Julie Girl Skate, White Asparagus, Bread Miso GALICIAN OCTOPUS SUMMER VEGETABLES AND NDUJA BRIOCHE This is an absolute showstopper of a dish. I'm always amazed by the depth of flavour you get from octopus, one of the most bizarre and beautiful creatures of the sea. It always reminds me of the Med so that's why I've taken a very Mediterranean approach in this recipe and it fits quite well. The dish can be prepared in advance, then plonked down in the centre of the table family-style with a bottle of chilled sherry. In the Larder section of this book we have shown you how to make your own nduja, but it can be sourced from most good-quality Italian delicatessens. serves 6–8 NDUJA BRIOCHE 570g type '00' flour 40g caster sugar 10g fine table salt 15g fresh yeast 370ml whole milk 1 egg 310g unsalted butter, at room temperature 150g Nduja (or use shop-bought) beaten egg yolk to glaze Put the flour, caster sugar, salt and yeast into a stand mixer fitted with the paddle attachment. Mix together, then mix in the milk and egg to form a dough. Mix in 60g of the butter, a little at a time, until combined and smooth. Cover the bowl and place in the fridge so the dough can slowly rise for 6–12 hours. Place the remaining room-temperature butter between two sheets of greaseproof paper. Using a rolling pin, roll out the butter into a square about 2cm thick. Keep cool (in the fridge if necessary) while the dough finishes rising. Remove the dough (and butter) from the fridge and place it on a lightly floured worktop. Shape the dough into a rough square about 2cm thick. Using a rolling pin, mark out another square in the centre of the dough by pressing into the dough with the length of the rolling pin (this square should be about the same size as the square of butter). Roll out the dough from the sides of the marked square to create an even cross shape, leaving the thicker square area in the middle of the cross (this was the marked square). The 'arms' of the cross need to be about 5mm thick and large enough to fold over the square of butter. At this point, the dough and butter should be the same texture to the touch. Place the square of butter in the middle of the cross, on the raised centre. Fold the right 'arm' over the butter so that it completely covers it. Repeat with the left 'arm', then the bottom one and, finally, the top. You will now have a square of dough completely encasing the butter. Roll out the square away from you on the lightly floured worktop into a rectangle almost triple its original length and double the width. Fold the top third down and the bottom third up over this. Leave to rest at room temperature for 20 minutes. Lift and turn the dough on the lightly floured worktop so that the folded edges are to the sides. Roll out the dough away from you into a rectangle the same size as before. Fold into thirds as before, then lift and turn the dough so that the folded edges are to the sides again. Roll out and repeat the folding. Wrap the dough in clingfilm and chill.
Once the dough is chilled, roll out to a large rectangle roughly 30 x 50cm. Spread a layer of nduja about 5mm thick all over the surface of the dough. Roll up the dough from a long side to create a pinwheel effect. Wrap the roll in a floured cloth and chill until firm enough to slice. Cut the roll across into 80g slices. Place the slices cut side down in an ovenproof greased saucepan, arranging them about 1cm apart so that when they rise and bake their sides will be touching. Brush the slices with a little egg yolk, then cover loosely with clingfilm and leave to rise in a warm part of the kitchen for about 30 minutes. Preheat the oven to 180°C fan/200°C/Gas Mark 6. Place a pan of water in the bottom of the oven. Remove the clingfilm and bake the brioche for 15 minutes. Allow to cool on the baking tray. BRAISED OCTOPUS 1 bottle of white wine 30g fennel seeds 20g coriander seeds 2 tablespoons olive oil 15g dried chilli flakes 5 garlic cloves, sliced 3 bay leaves 10 black peppercorns 1 octopus, cleaned 1 bulb of fennel, cut in half peel of 1 lemon (no pith) Preheat the oven to 120°C fan/140°C/Gas Mark 1. Pour the wine into a pan and bring to the boil, then simmer for 2 minutes to evaporate some alcohol; set aside. Gently toast the fennel seeds and coriander seeds in the olive oil in a small pan for 3–5 minutes. Remove from the heat and stir in the chilli flakes, garlic, bay leaves and peppercorns. Allow to cool slightly, then rub this mixture all over the octopus. Place the octopus in a large ovenproof pot. Add the fennel and lemon peel. Pour in the wine and top up with water to cover. Cover with a lid and place in the oven to braise for 2–3 hours or until the octopus is tender. Remove the fennel after 1 hour, cut it into wedges and reserve it. Once tender, drain the octopus, reserving the stock. Trim the octopus into portions. Strain the stock. BORLOTTI BEANS 100g podded fresh borlotti beans (if using dried borlotti beans, soak and cook according to the packet instructions) Put the borlotti beans in a pan, cover with cold water and bring to the boil. Remove from the heat and drain. Return the beans to the pan and cover with a mixture of equal parts stock from the octopus and water. Simmer for about 40 minutes or until tender. Leave the beans to cool in the liquid before draining. ASSEMBLY 100g podded fresh broad beans 100g small tomatoes (Datterini or cherry) 1 Trombetta or normal courgette, sliced 50g Nduja (or use shop-bought) fresh lemon juice olive oil basil leaves Blanch the broad beans in boiling water for 1 minute, then refresh in iced water and pop the bright green beans out of their thick skins by squeezing gently. Heat the remaining octopus stock in a pan, halve the tomatoes and add them to the pan along with the reserved fennel wedges, borlotti beans, broad beans, courgette, nduja and octopus to warm through. Season with lemon juice and a drizzle of olive oil. Ladle into bowls and garnish with basil. Serve the brioche on the side. Galician Octopus, Summer Vegetables and Nduja Brioche APPLEWOOD-SMOKED EEL BROAD BEANS, PRESERVED LEMON, MINT Applewood-smoked eel is one of the few ingredients that we buy in already prepared. This is because Corine and her family at the Dutch Eel Company in Lincolnshire have been producing it for centuries and it is one of the most amazing products I have ever come across. At first I had my reservations.
I had once had an unpleasant jellied eel experience where all I could taste was vinegar, fish and jelly (it didn't help that I was quite hungover at the time). But when I tried this smoked eel I was hooked. Applewood-smoked eel is my go-to ingredient when cooking at events where I really want to impress. It is something that people are least excited about when reading the menu, but when they try it they fall in love. It's a surprise that captures you. I describe it as the smoked bacon of the sea as it has a sweet smoky flavour but is also very meaty. serves 4–6 SMOKED EEL CREAM a drizzle of vegetable oil 50g skin and bones from smoked eel 100ml UHT double cream 2 teaspoons white soy sauce Heat the vegetable oil in a pan and add the eel skin and bones. Sweat gently for about 10 minutes so that the flavour and fats from the eel are released. Add the cream and bring to the boil, then remove from the heat and allow to infuse for 20 minutes. Strain the cream and season it with the white soy sauce. Allow to cool to room temperature. BROAD BEAN AND PEA SALAD 100g podded fresh broad beans 100g podded fresh peas ½ lemon 10g Wild Garlic Pickle with a little of the pickling liquor 50ml olive oil a little peel from Preserved Amalfi Lemons, finely diced 1 fillet of Cured Sardines, finely diced garum sauce or fish sauce Blanch the broad beans in boiling water for 1 minute, then refresh in iced water. Pop the bright green beans out of their thick skins by squeezing gently. If the peas are quite large, blanch them in boiling water for 30 seconds, then refresh in iced water; if they are small and sweet, leave them raw. Put the broad beans and peas in a metal sieve and place over a barbecue to warm through and smoke slightly (or you can warm them gently in a pan in a drizzle of olive oil). Peel the lemon half and remove the segments, cutting them from the membrane. Cut the lemon flesh into very small pieces. Mix together the pickled wild garlic, lemon, olive oil, preserved lemon peel, cured sardines and garum or fish sauce to taste to create a dressing. Toss through the warm broad beans and peas. ASSEMBLY 120g skinless applewood-smoked eel fillet, portioned into 4–6 pieces mint leaves sorrel leaves wild rocket (leaves and flowers) Bottarga, optional Place a heaped spoonful of the broad bean and pea salad down the centre of each plate. Put a piece of the eel to one side and a spoonful of smoked eel cream to the other. Garnish the salad with mint, sorrel and wild rocket. Grate over a little bottarga, if using. Applewood-Smoked Eel, Broad Beans, Preserved Lemon, Mint LOBSTER SALAD SQUASH MISO, SALAD FROM THE FARM I strongly recommend firing up the barbecue for this one. There is something so special about cooking lobster over wood and coals that after you do it for the first time you will not want to settle for anything else. The aroma from the smoke is just incredible. By rubbing the miso into the flesh of the lobster, the sugars will caramelise nicely. I have served greens and vegetables from our farm but a simple salad of any kind would work. It's all about the lobster! serves 4 SQUASH MISO 50g cooked butternut squash 50g Miso (or shop-bought) 50g lobster coral ½ garlic clove 1 teaspoon garum or fish sauce 140ml Lobster Oil Put all the ingredients, except the oil, in a blender or food processor and blend until smooth. Drizzle in the lobster oil while blending to emulsify to a mayonnaise consistency. Let it down with a little water if it gets too thick. 
LOBSTER 2 live lobsters (about 600g each) Bring a large pot of water to a rapid boil. As humanely as possible, kill the lobsters by placing the sharp tip of a knife into the spot on the lobster behind the eyes where a cross of grooves appears on the shell. Place the lobsters in the pot and boil for 1 minute, then remove and refresh in an ice-water bath. Pull off the head and discard. Cut through the tail shell vertically to expose the tail meat – leave the meat in the shell but remove the digestive tract. Crack open the claws, remove the meat and pat dry. If you are going to finish cooking the lobsters on the barbecue, set it up to get the coals to the right temperature. VEGETABLES 50ml extra virgin olive oil 6 Tokyo turnips, cut in half 6 large spring onion bulbs, cut in half 50g unsalted butter 300g Siberian kale leaves 300g Cavolo Nero leaves 100g Cultured Cream or crème fraîche juice of 2 lemons Maldon sea salt and freshly ground black pepper Preheat the oven to 190°C fan/210°C/Gas Mark 6–7. To cook the vegetables, place two large, wide-bottomed, ovenproof pans over a high heat. Add a drizzle of olive oil to one pan followed by the turnips, cut side down. Cook for 1 minute. Add the spring onions, also cut side down, to the same pan and cook for another minute. Add the butter and transfer the pan to the oven to cook for 5 minutes. Meanwhile, add a generous drizzle of olive oil to the second pan and quickly add the kale and cabbage. Char the leaves until they take on a dark caramel colour, turning them with tongs. Season with salt and pepper, then remove from the heat. Add the cultured cream or crème fraîche and a squeeze of lemon juice; keep warm. Remove the turnips and onions from the oven. Season with salt, pepper and a squeeze of lemon juice; keep warm. ASSEMBLY extra virgin olive oil 3 sprigs of tarragon, leaves picked 100g miner's lettuce lemon To finish cooking the lobsters on the barbecue, brush the meat side of the tails (still in shell) and the claw meat with olive oil, then place on hot bars over the coals and cook for 1–2 minutes. Remove the lobster from the heat and brush over some of the squash miso. Return to the barbecue and cook for another minute to caramelise. Alternatively, you can use a hot ridged grill pan set over a high heat to caramelise the lobster in exactly the same way. If necessary, place in the oven for a minute or two afterwards to be sure the lobster is warmed through. Place the charred kale and cabbage in a serving bowl followed by the onions and turnips. Add a sprinkle of tarragon leaves and miner's lettuce. Arrange the lobster on a serving plate with some tarragon leaves and a squeeze of lemon. Serve the miso on the side in a bowl. Serve family-style for everyone to help themselves at the table. Lobster Salad, Squash Miso, Salad from the Farm SMOKED POLLOCK, POTATO MOUSSE RECIPE BY DEAN PARKER, HEAD CHEF OF THE MANOR This was on Dean's first menu at The Manor and remains there to this day. It has all the components of a fish pie but, like Dean, it is clever, warm and exciting. That sounds a bit creepy, I know, but it is just a brilliant dish by a brilliant chef! serves 4–6 KOMBU DASHI 2 sheets of dried kombu (10g each) 2 litres filtered water Maldon sea salt Soak the kombu in the water in a pan for 2 hours. Then set on a low-medium heat and bring to a very gentle simmer. Simmer for 2 hours (do not boil). Measure the liquid and season with 1% salt (for example, 20g salt for 2 litres of liquid). Clingfilm the top of the pot, leaving the kombu in to infuse, and set aside for 1 hour.
Strain the liquid, reserving the kombu. SORREL EMULSION a bunch of sorrel (about 60g) 60g Fermented Sorrel 100ml olive oil 100ml Kombu Dashi (see above) Separate the stems from the leaves of the fresh sorrel. Put the stems in a blender or food processor and add the fermented sorrel (undrained), olive oil and dashi. Blend until smooth. Add the fresh sorrel leaves and pulse until smooth. Set aside at room temperature. POLLOCK 300g skinless pollock fillet salt applewood chips, for smoking Cover the fish with salt and set aside for 5 minutes, then rinse well and pat dry. Place on a cloth-covered tray and leave in the fridge for 8 hours – change the cloth once during this time. Take a flat tray with a steam insert (such as a deep roasting tray that will hold a flat steaming rack) and spread the applewood chips over the bottom of the tray. Warm it over a medium heat until the chips start to smoke. Remove from the heat. Place the fish on the steam insert/steaming rack and set this over the smoking chips. Completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the fish. Leave to lightly smoke for 5 minutes. Once smoked, portion the pollock into 50g pieces. NORI BUTTER EMULSION 250g Nori Butter 1 litre Kombu Dashi (see above) 10g dried wakame zest of 1 lemon 100ml Kefir or plain yoghurt 2g (about 2 small pinches) Gelespressa fish sauce bonito flakes Put the nori butter, dashi, one sheet of the reserved kombu from the dashi, the wakame, lemon zest, kefir and Gelespressa in a pot and warm gently; do not boil. Season heavily with fish sauce and bonito flakes. Transfer the mixture to a blender or food processor and blend well. POTATO MOUSSE 500g Maris Piper potatoes (unpeeled), cut into 5cm dice ½ quantity Nori Butter Emulsion lemon zest fish sauce 1g (about a small pinch) Gelespressa 50g Smoked Butter 50g Cultured Cream 50g Bonito Butter Simmer the potatoes in the nori butter emulsion until cooked. Blend the potatoes in a high-speed blender or food processor with enough of the emulsion to make a thick purée. Season to taste with lemon zest and fish sauce. While blending, gradually add the Gelespressa, smoked butter, cultured cream and bonito butter to emulsify. Pass through a fine sieve into another pan and allow to cool. CHERIE POTATOES 500g Cherie or other waxy new potatoes (unpeeled) ½ quantity Nori Butter Emulsion Simmer the potatoes in the nori butter emulsion until tender. Drain but reserve the emulsion to warm the potatoes when serving. POTATO CRISPS 1 small Maris Piper potato (unpeeled) vegetable oil, for deep frying Nori Salt Slice the potato very thinly on a mandoline. Heat vegetable oil in a deep pan or deep-fat fryer to 160°C, then deep-fry the potato slices for about 5 minutes or until they are golden and crisp. Drain on kitchen paper and season with nori salt. Hold the crisps in a warm, dry area of your kitchen. ASSEMBLY 30g Bonito Butter 1 sheet of kombu reserved from the Kombu Dashi (see here) 3 spring onions 30g Nori Butter chopped chives fresh lemon juice Maldon sea salt sorrel leaves Preheat the oven to 140°C fan/160°C/Gas Mark 3. Wrap the pollock with the bonito butter in the kombu sheet. Place on a baking tray in the oven to heat for 10 minutes. Meanwhile, char the spring onions on a barbecue or hot ridged grill pan until blackened. Peel off the black outside layer and split each spring onion down the middle. Warm the halves in the nori butter. Gently heat the potato mousse, stirring, to no higher than 70°C. 
Decant into a siphon gun with one charge and shake well. Warm the Cherie potatoes in some of the reserved nori butter emulsion; drain them and add some chopped chives. Flake each portion of pollock into two pieces and season with lemon juice. Spread the sorrel emulsion on the warmed plates. Place the Cherie potatoes, spring onions and pollock around each plate. Finish with potato mousse, off centre in the middle, topped with a potato crisp and a sorrel leaf. Smoked Pollock, Potato Mousse CHARRED MACKEREL CUCUMBER, DASHI, SEA PURSLANE Generally speaking, mackerel must be at its absolute freshest – I detest mackerel once it has been more than two days out of the deep blue – so when buying your fish, make sure the flesh is firm, the gills are bright red and the eyes are bright and glistening. We use salt to season and firm up the fish, and I like to serve it medium-rare. This is a really fresh and vibrant dish to serve in late summer. serves 4–6 DILL-PICKLED CUCUMBER 2 small cucumbers or ½ regular-sized cucumber 75g ice 75g caster sugar 75ml Chardonnay vinegar a bunch of dill, fronds picked a large pinch of fine table salt Peel the cucumbers and set aside; reserve the skin. Blend together the ice, caster sugar, vinegar, dill, the cucumber skin and salt in a blender or food processor. Strain through a fine sieve and pour this liquid over the peeled cucumbers. Leave to marinate for 1 hour. DILL OIL 150g picked dill fronds 150ml grapeseed oil Blend together the dill and oil in a blender or food processor for 1 minute. Transfer to a pan, bring to the boil and boil rapidly for 2 minutes. Strain through a fine sieve into a bowl set over ice to cool. CHARRED MACKEREL 3 medium mackerel, filleted fresh lemon juice Blowtorch, barbecue or grill (on a hot ridged grill pan) the skin side of the mackerel fillets – you are just looking to scorch the skin and lightly cook the fish to medium-rare. Season the fillets with lemon juice and salt to taste. ASSEMBLY 4–6 teaspoons Roast Garlic Miso Purée, at room temperature – 1 teaspoon per serving purslane leaves sea purslane, blanched for 30 seconds Wild Garlic Capers with some of the pickling liquor 160–240ml Dashi, warmed – 40ml per serving Maldon sea salt Drain the pickled cucumbers and slice into rounds. Spread a teaspoon of miso purée in each bowl, then add the mackerel. Top the fish with the cucumber slices (fanned). Place the fresh purslane, sea purslane and wild garlic capers to the side. Drizzle over some dill oil. In a jug, season the warm dashi with a little of the pickling liquor from the wild garlic capers. The dashi should be poured over each dish at the table. Charred Mackerel, Cucumber, Dashi, Sea Purslane THE BEAN FAMILY I first became aware of the Bean family when I was running Sauterelle in 2007. Dean was working at the time for a fellow manic-obsessive produce freak named Peter Weedon – I cannot talk about the Bean family and their company Kernowsashimi without giving a nod to Peter. In his normal excitable manner, Dean was jumping out of his seat describing the quality and freshness of the fish they were getting from Kernowsashimi for Paternoster Chop House. Peter was very kind and helpful, not to mention generous, in passing on this most valuable contact. Chris Bean, his son Dylan and daughter-in-law Mutsuko, and their two amazing kids from St Martin in Cornwall are a family that has been very important to us since we first had the opportunity to start working with them all that time ago.
Chris leads the fleet of about five day-boats on the Lady Hamilton, his first boat bought back in 1972. An estuary runs to a spot called Coverack, which is where at 5am most of the boats set off. They come ashore around 2pm with the landings. Dylan, Mutsuko and Michelle run things on the ground, coordinating where the stocks are sent. I'm embarrassed to say that my first opportunity to go and see for myself how they work was to research for this book and for some photo ops. A good few of my team had already made the journey and came back somewhat changed in their way of thinking, inspired in a thoughtful way. I now know why. Ben and I hired a car to make the five-hour journey to the tip of Cornwall near to Lizard. On our arrival, we had quite a surreal moment, or at least it was surreal for me. Over ten years I had talked to Dylan almost every other day – he would explain and try to predict what would be caught so that we could arrange for a box of fish, which meant discussing tides, weather, boats and fish – and I considered it to be a very important relationship. When we finally met it felt like we had had a long-distance arranged marriage! Anyway, we quickly got over that weird moment, hugged it out, and he invited us into their home. Mutsuko, if you hadn't guessed, is from Japan. She has done a pretty good job of holding on to her heritage and culture. When we met she had just arrived back from a trip to her home town and put together a feast full of rare treats like an indigenous rice, a particularly fresh yuzu only grown in her village, several types of what I would consider drinkable bottles of soy, fresh wasabi and shiso grown in her greenhouse. And naturally she had her pick of the finest catch of Cornwall's coast. Armed with bottles of Asahi beer and her favourite sake, we got stuck into smashing an array of spider crabs, cuttlefish sashimi, fried squid and a katsu-style pork with her own miso. We ate like kings. No pressure then when Ben and I in turn were to cook the following evening! The next day we had hoped to get out on a boat but the weather was too rough. So we explored the coast for dinner inspirations and some more photo opportunities, pulling into half a dozen postcard-like coves and harbours. We were chasing the boats coming ashore and had hoped to capture an image of Lady Hamilton, but we missed it. So we raced back to the sorting bay, as I call it, at Dylan's house, to witness the sorting of the catch and where it is to be sent. Everything is weighed and recorded at a furious pace. In a human conveyor belt, the fish are carefully laid into the awaiting ice pack-pillowed boxes, which have the name of the restaurant scribbled on the side. I got to pack my own box and 20 minutes later a courier in a van arrived for collection. The Dairy received their fish six hours later. Quite often the fish we get is too fresh and needs to rest a day or two for the flesh to relax. I learned this the hard way by once trying to cook very fresh Dover sole on the bone. It was really tough and chewy, and a number of guests sent it back. That afternoon Dylan and Mutsuko took us for Spingo ales at The Blue Anchor, and that night Ben and I cooked for the Bean family. The following day we returned home, inspired and a little wiser. Many fishermen and women risk their lives and many are lost to deliver to us this most precious and controversial food source. To all the unsung heroes aboard Lady Hamilton, Lucy Mariana, Willy's, Julie Girl and many more that venture out to sea, we salute you. 
COD FEAST Nose-to-tail cooking has been quite the fashionable way to cook over the last ten years, credited mostly to Fergus Henderson. This is a very good thing indeed, considering the way we eat today and the little time we have to cook at home. But what I have not seen, which is equally important, is the head-to-tail approach to fish. We have all heard the upsetting statistics regarding the fishing stocks worldwide and the massive fishing trawlers shovelling up the seabeds and coastline, landing up to 250 tonnes of fish a day. I believe we still don't do enough to cook and eat fish responsibly. Cod is probably the most popular yet controversial fish of the sea, and has been an important economic commodity in international markets for more than a thousand years. Here I show three techniques that use parts of the fish we don't usually see on menus or cook at home. I have used cod here but the recipes could easily be transferred to any fish. I have also included a couple of preservation recipes that could be made in bulk and kept in the store cupboard for months. serves 4–6 MISO BBQ COD COLLARS 4 cod collars 7% brine (see here) 50g honey 2 teaspoons rice wine vinegar 20g brown miso vegetable oil, for deep-frying 30g dried wakame 80g crème fraîche, seasoned with lemon juice and Maldon sea salt to taste 100g Pickled White Peaches 100g Fennel Kimchi 100g sorrel, leaves picked Brine the cod collars in a 7% brine for 6 hours. Drain and rinse well. Mix together the honey, vinegar and miso, and rub this paste into the collars. Cook over raging hot coals on a barbecue for 2 minutes on each side, then allow to rest for a minute. Alternatively, sear on a hot ridged griddle pan for 2 minutes each side, then rest. Heat vegetable oil in a deep-fat fryer or a deep pan to 160°C and fry the dried wakame for 1 minute; drain on kitchen paper. Serve all the components (cod, wakame, crème fraîche, pickled peaches and fennel kimchi) in separate bowls and allow guests to build their own 'tacos' at the table using the sorrel leaves as carriers. SMOKED COD'S ROE 80–100g Smoked Cod's Roe 20g Cultured Cream a drizzle of Kombu Oil Bottarga, optional a bunch of radishes Gently warm the cod's roe with the cultured cream to 40°C. Carefully fold through the kombu oil. Decant into a ramekin and grate over the bottarga, if using. Serve with the radishes on the side. ASSEMBLY Serve all dishes family-style in the centre of the table for guests to help themselves. Cod Feast LOCH DUART SALMON OYSTER EMULSION, FENNEL, FRIED WAKAME The oyster emulsion here is an absolute winner. It's also amazing served as a dip with some oysters in tempura or with a beef tartare. The way the salmon is cooked is a trick I picked up from Raymond Blanc. I'll never forget tasting it for the first time. It simply blew my mind and taught me to understand the nature of cooking fish. You will often hear chefs say that it takes great skill to cook fish. I slightly disagree. I believe it just requires an understanding. Fish is delicate and in most cases should never be cooked at too high a temperature, otherwise the fish tenses up and an unpleasant white protein appears, which for me is an alarm bell screaming that I have overcooked the fish. serves 4 OYSTER EMULSION 100g banana shallots, sliced 200ml dry white wine 130g freshly shucked rock oysters (juice reserved) 150ml grapeseed oil 5 sorrel leaves 1 tablespoon crème fraîche Put the shallots into a saucepan and pour over the white wine.
Place on a medium to low heat and boil until all the wine has evaporated. Remove from the heat and allow to cool. Tip the shallot mixture and oysters into a blender or food processor and blend until smooth. While blending, gradually add the oil to make a mayonnaise consistency. Add the sorrel leaves and blend through, then blend in some of the reserved oyster juice to loosen the mixture. Stir in the crème fraîche. Keep the emulsion in the fridge until ready to serve. FRIED WAKAME 200ml vegetable oil, for frying 50g dried wakame Heat the oil in a deep pan to 160°C. Fry the wakame for 1½ minutes or until crisp. Remove and drain on kitchen paper. ASSEMBLY 250g Cured Salmon 8 slices Fennel Kimchi dill fronds fennel fronds Portion the salmon into four pieces. Place a spoon of oyster emulsion on each plate and add a piece of salmon to the side. Arrange the fennel kimchi and fried wakame around the fish. Garnish with dill and fennel. Loch Duart Salmon, Oyster Emulsion, Fennel, Fried Wakame CORNISH CRAB SALT-BAKED BEETROOT, COBNUTS Crab meat has a natural sweetness so it makes sense to me that it would work with beetroot. Baking the beetroot in a salt dough is a clever way to keep the juices in the beetroot and intensify the flavour while helping to season the vegetable. If you can't be bothered to make the dough, just wrap the beetroot in foil. In the restaurant we use golden or white beetroot as it is a little sweeter, but any beetroot, or indeed any root vegetable, can be baked like this. serves 6 SALT-BAKED BEETROOT 500g rock salt 500g plain flour about 100ml cold water 4 large raw beetroots (golden or white, if available) applewood chips for smoking Combine the salt and flour in a stand mixer fitted with a paddle attachment and mix on medium speed for 2 minutes. Gradually add enough cold water to bring together into a dough. Wrap in clingfilm and leave to rest in the fridge for 30 minutes before use. Preheat the oven to 210°C fan/230°C/Gas Mark 8. Divide the dough into 4 portions. Roll out each to a round large enough to enclose a beetroot. Wrap the beetroots in the salt dough. Place on a baking tray and bake for 15 minutes. Lower the oven temperature to 150°C fan/170°C/Gas Mark 3–4 and bake for a further 20–30 minutes or until the beetroot has a core temperature of 85°C. Allow to cool before cracking off the salt dough crust (discard it). Rub the skin off the beetroots. Take a flat tray with a steam insert (such as a deep roasting tray that will hold a flat steaming rack) and spread the applewood chips over the bottom of the tray. Warm it over a medium heat until the chips start to smoke. Remove from the heat. Place the beetroots on the steam insert/steaming rack and set this over the smoking chips. Completely cover the top and sides tightly with oven-safe clingfilm so the smoke is sealed inside with the beetroot. Return to a low heat so the wood chips smoke gently. Leave to lightly smoke the beetroots for 15 minutes. Remove the beetroots from the tray. Slice them thinly on a mandoline. BROWN CRAB AND BEETROOT MAYO 25g egg yolks 75g brown crab meat (wrapped in muslin and squeezed to remove excess liquid) 50g Salt-Baked Beetroot (see above) 80ml Crab Oil 25g crème fraîche 1–2 drops of Tabasco sauce juice of ¼ lemon or to taste Put the egg yolks into a food processor with the brown crab meat and beetroot. Start blending at a medium speed, then slowly drizzle in the crab oil while blending (add a touch of cold water should the mix become too thick). 
Transfer the mixture to a mixing bowl and add the crème fraîche, Tabasco and lemon juice to taste. Keep in the fridge until required. ASSEMBLY 200g fresh cobnuts or hazelnuts 210g fresh white crab meat Crab Oil fresh lemon juice Rock Samphire Pickle Maldon sea salt Crack open the cobnuts and peel, then cut each one in half. Spoon a little of the crab and beetroot mayo into the bottom of each bowl. Season the white crab meat with a drizzle of crab oil, some lemon juice and a pinch of salt. Top the mayo with the white crab. Dress the remaining beetroot slices with some crab oil and a pinch of salt. Arrange the slices on top of the white crab. Garnish with the cobnuts and pickled rock samphire. Cornish Crab, Salt-Baked Beetroot, Cobnuts RED MULLET CAULIFLOWER AND DULSE BUTTER This dish is all about mise en place, timings and pan-cooking skill. With quite a bit happening all at once, the key here is to make the butter and purée in advance, and to be sure you have suitable pans for cooking the cauliflower and fish. I believe the cooking of the cauliflower is just as important as getting that beautiful crispy skin on the fish. Every second that the cauliflower and fish are not hitting nice heated plates, the dish deteriorates a little more. Depending on the size of the cauliflower, the fish and cauliflower should finish cooking only a minute apart, so use your instinct, get prepared and do your best! serves 4 DULSE BUTTER 80g drained Fermented Dulse, finely chopped 100g unsalted butter, at room temperature Mix the dulse into the butter until well dispersed throughout. CAULIFLOWER PURÉE ½ cauliflower (use the outer part of the cauliflower, reserving the centre) 100g unsalted butter, cut into small cubes 1 teaspoon Maldon sea salt 150ml whole milk 50g Cultured Cream a squeeze of fresh lemon juice Grate the cauliflower through a coarse grater. Put the butter into a pan over a high heat and cook until the butter starts to foam, brown and take on a nutty aroma. Add the grated cauliflower and salt and cook over a high heat, stirring regularly, for up to 8 minutes or until the cauliflower is softened. Add the milk and bring to a simmer. Check that the cauliflower is cooked by tasting it. Drain the cauliflower and tip into a blender or food processor. Add the cultured cream and lemon juice and blend until completely smooth. Taste and adjust the seasoning if necessary. RED MULLET AND CAULIFLOWER 80g sea purslane handful of broccoli leaves (most leafy greens would work) olive oil reserved centre of cauliflower (see above), cut into 2 large flat slices 4 x 80g red mullet (skin on) 20g Nori Powder 30g capers juice of 2 lemons Preheat the oven to 185°C fan/205°C/Gas Mark 6. Pick the sea purslane, then blanch in a pan of boiling water for 30 seconds. Drain, refresh in iced water and set aside. Repeat with the broccoli leaves. Set a large ovenproof pan over a high heat and add a good drizzle of olive oil, then place the cauliflower slices in the pan. Caramelise for 3–4 minutes or until the surface area is a nice even dark brown. Add half of the dulse butter. Transfer the pan to the oven to cook for 4 minutes. When you start to caramelise the cauliflower, set another ovenproof non-stick frying pan over a medium heat and add a good splash of olive oil. Make sure the skin of the fish is dry, then place it skin side down in the pan. Turn the heat up – the fish will start to curl. Let it relax, when it will return to its natural shape.
As the skin starts to crisp and the fat starts to spit, continue to cook, pressing down on areas that are not colouring. Once the skin is light golden brown, transfer the pan to the oven to cook for 1 minute. Remove the fish from the oven and set the pan over a very high heat. Season the fish with nori powder and cook for 30 seconds to get the skin crispy again. Add the remaining dulse butter, turn the fish over and remove from the heat. Add the capers and half of the lemon juice. Remove the fish from the pan and keep warm. Reserve the pan juices. Remove the cauliflower from the oven and reserve on a plate. Dust with the nori powder. In the same pan, add the purslane and cook over a high heat for 30 seconds, with a squeeze of lemon juice. Remove from the pan and reserve. Repeat with the blanched broccoli leaves and remaining lemon juice. ASSEMBLY Put a heaped tablespoon of the purée on one side of each plate. Place the fish off centre on the other side and spoon over any pan juices. Add a couple of broccoli leaves to each plate. Place the cauliflower slices on a separate plate, and finish with another sprinkle of nori powder and top with the purslane. Put the cauliflower in the centre of the table for your guests to help themselves. Red Mullet, Cauliflower and Dulse Butter LADY HAMILTON COD CHARRED LEEKS, LEEK MOLASSES The method for cooking the leeks in this recipe is the most impressive part for me. Dean picked this up working with Tom Aikens, and it comes and goes on our menus in different guises, accompanying all sorts of things from cheese to beef and here with cod. This was on our very first menu. It was months before we removed it as we simply couldn't come up with a better-tasting dish. It's brilliant any time of the year so we revert back to it every now and again. serves 4–6 CHARRED LEEKS AND LEEK MOLASSES 3–4 medium leeks vegetable oil, for roasting 80g demerara sugar 4 teaspoons rice wine vinegar 2 tablespoons olive oil 2 tablespoons water fresh lemon juice Maldon sea salt Preheat the oven to 250°C fan/its highest temperature. Split the leeks down the middle and wash thoroughly, then blanch in boiling water for 5–7 minutes or until slightly softened. Drain the leeks and place on a rack in a baking tray, cut side up. Coat with vegetable oil and 50g of the demerara sugar. Roast for 12 minutes. Remove the very burnt ends from the leeks but reserve them for the molasses. Set the charred leeks aside. For the leek molasses, melt the remaining 30g demerara sugar in a small pan and cook to a dark caramel. Stir in the rice wine vinegar to deglaze. Transfer the caramel mixture to a small blender or food processor and add the burnt ends of the leeks and the olive oil. Blitz to a purée. Emulsify the purée with the water. Season with lemon juice and salt to taste. COD 250g unsalted butter, cut into small cubes 350g skinless cod fillet Turn the oven to 60–80°C fan/80–100°C/Gas Mark low. Melt the butter in a pan set over a high heat and heat until the butter starts to foam and brown and gives off a nutty aroma. Remove from the heat immediately and cool quickly to stop the butter from burning (you can do this by setting the base of the pan in iced water). Place the cod in a baking tray and cover with the brown butter. Poach in the oven for 15–20 minutes or until the core temperature of the cod reaches 50°C. 
ASSEMBLY chopped chives fresh lemon juice olive oil 150g Smoked Cod's Roe Emulsion 100g Fried Bread sorrel leaves, torn Maldon sea salt Spoon some of the leek molasses into the centre of each plate and spread it with the back of a spoon. Gently pull apart the charred leeks and season them with chopped chives, lemon juice, olive oil and salt. Scatter them over each plate. Flake the cod around the plates and pipe around a few dots of smoked cod's roe emulsion. Scatter fried bread and sorrel leaves over the top. Lady Hamilton Cod, Charred Leeks, Leek Molasses APPLEWOOD-SMOKED EEL FERMENTED CHARD, SHALLOT CRISPS, WHITE SOY The technique of blending the egg yolk emulsion long enough to reach a mayonnaise consistency is a great trick to learn. You can basically give the emulsion any flavour you want by simply browning and infusing the butter – here shallots make it slightly sweet. Chard was one of the first vegetables we fermented that made it on to our menu. serves 6 SHALLOT CRISPS 200g unsalted butter, diced 4 banana shallots, thinly sliced on a mandoline fine table salt Put the butter into a pan and melt, then bring to a simmer over a high heat. Add the shallots and cook, stirring constantly, for 5–10 minutes or until the butter turns a nutty brown and the shallots are crisp. Drain the shallots immediately through a sieve set in a bowl to reserve the butter. Press the shallots to drain out all the butter, then tip them on to kitchen paper to drain. Season the shallot crisps with a pinch of salt. The butter reserved in the bowl will now taste of caramelised shallot – it will be the base for the miso and egg yolk emulsion. MISO AND EGG YOLK EMULSION 50g egg yolks 20g brown rice miso 2–3 tablespoons white soy sauce 2 tablespoons rice wine vinegar 100ml shallot-flavoured butter (see above) Put the egg yolks into a blender with the miso, white soy and vinegar. While blending, drizzle in the shallot-flavoured butter, as you would when adding oil for a mayonnaise. Blend to a mayonnaise consistency, adding a touch of water if the emulsion thickens too much. Adjust the seasoning to your liking with more soy, miso and vinegar. ASSEMBLY 240g applewood-smoked eel, divided into 6 portions 200g Swiss Chard Ferment, drained, stalks diced and leaves kept whole 50g Cultured Cream Preheat the oven to 170°C fan/190°C/Gas Mark 5. Warm the eel on a baking tray in the oven for 5 minutes. Warm the diced Swiss chard stalks in the remaining shallot-flavoured butter in a small pan. In a separate pan, warm the chard leaves with the cultured cream. Place a spoonful of the egg yolk emulsion to the side of each plate and sprinkle over a few shallot crisps. Arrange a piece of eel to the other side of the plates and add some diced chard stalks, covering them with the chard leaves. Applewood-Smoked Eel, Fermented Chard, Shallot Crisps, White Soy LAND MERGUEZ FENNEL KIMCHI At one time I had only ever had merguez bought from a butcher or a supplier. It was always perfectly nice but nothing to write home about. But then I tried a recipe brought to us by Joe. My first taste kind of slapped me in the tastebud chops! Toasting your own spices and barbecuing the peppers makes such a difference. There are obviously Middle Eastern flavours in the merguez so it made sense to pair it with fennel kimchi. One of the guys in the kitchen said upon tasting: 'It's the perfect kebab!' Not sure I was happy with that but actually he may have had a point. 
serves 4–6 MERGUEZ SAUSAGES 5–6 red peppers 15g paprika 15g sweet smoked paprika 5g ground cinnamon 10g ground cumin 4 cloves 2.25kg lamb mince (ask your butcher for a mince that contains at least 25% fat) 40g fine table salt 10g light brown sugar 20g garlic, finely chopped 50ml red wine (cold) 300g hog casing Char the red peppers on a barbecue or under the grill. Remove the peppers from the heat and immediately place them in a bowl. Cover with clingfilm and allow to cool slightly, then remove the skins and seeds. Weigh out 175g of red pepper flesh and blend in a blender or food processor to a paste. Toast all the spices in a dry pan, then grind together to a powder. Mix with the red pepper paste, lamb, salt, sugar, garlic and wine. Stuff the mixture into the hog casing, using a sausage stuffer or mincing attachment on a stand mixer, or pipe, to make sausages approximately 150–200g in weight; tie the ends. (The sausages can be kept in the fridge for a couple of days before cooking, or they can be frozen for 3 months.) SAUCE 50g Merguez Sausages (see above) a drizzle of sherry vinegar Lamb Sauce Chop up 50g of the sausages and place in a pan over a medium heat. Allow the fat to render out of the sausages. As soon as they start to catch, stir in the sherry vinegar to deglaze the pan. Strain the juices from the sausages into a saucepan (discard the sausage). Weigh these juices and top up with an equal weight of lamb sauce. Set aside. ASSEMBLY 350g Merguez Sausages (see above) 180g Fennel Kimchi 100g Cultured Cream a drizzle of Herb Oil bronze fennel, to garnish Fire up the barbecue or heat a ridged grill pan. Cook the sausages on the barbecue or in the hot grill pan until browned all over and the core temperature reaches 60°C. Put the fennel kimchi and cultured cream into a pan and warm through. Drizzle in some herb oil and stir so that it just marbles through but does not fully mix in. Warm through the sauce. Slice the barbecued sausages into pieces. Place the fennel on one half of each plate, spooning over the cultured cream/herb oil. Garnish with bronze fennel. Place the sausages on the other side of the plates and drizzle over the sauce. Merguez, Fennel Kimchi LAMB FEAST SLOW-COOKED LAMB SHOULDER, BBQ CABBAGE, RICOTTA This is the ultimate in big feast dining. What is great about the dish is that most of the work can be done in advance so you get to enjoy the lucky company you've invited round. You pretty much can't go wrong with the lamb shoulder by cooking it low and slow. It's a showstopper. I love the accompaniments spread out all over the table too. Invite only your nearest and dearest for this one. serves 8 LAMB 1 lamb shoulder 7% brine (see here) 2.5–3 litres Brown Chicken Stock 2 carrots, finely diced 1 onion, finely diced 3 celery sticks, finely diced 1 leek, finely diced 1 bay leaf 1 garlic bulb, cut in half horizontally 45g Onion Treacle 3 tablespoons Lamb Sauce 60g lamb fat (trimmed from the shoulder) – add unsalted butter if you don't have enough fat 20g black mustard seeds 20g fennel seeds 50ml vegetable oil 1 garlic clove (peeled) Maldon sea salt chive and rosemary flowers, to garnish Brine the lamb shoulder in a 7% brine for 12 hours. Drain. Preheat the oven to 140°C fan/160°C/Gas Mark 3. Place the lamb in a deep roasting tray and almost cover it with the chicken stock (top up with water if needed). Add the carrots, onion, celery, leek, bay leaf and bulb of garlic. Cover the tray with foil and place in the oven. Cook for about 4 hours or until the meat is tender. 
To make the glaze, gently melt together the onion treacle, lamb sauce and lamb fat in a saucepan until combined. Remove the lamb shoulder from the stock and place on a clean roasting tray. Lower the oven temperature to 100°C fan/120°C/Gas Mark ½. Brush an even coating of the glaze over the meat using a pastry brush. Place back in the oven to roast for 40 minutes, brushing with the glaze every 10 minutes during this time. To make the spice mix, warm the mustard seeds in a dry pan over a gentle heat, shaking the pan constantly. The seeds will eventually puff slightly and crack. Once they reach this stage, remove from the heat and grind in a pestle and mortar. Tip back into the warm pan off the heat. Add the fennel seeds and allow them to warm in the residual heat of the pan. In a separate pan, warm the oil slightly with the garlic clove. Add the spices to the oil (off the heat). Spoon the spice mix over the lamb just before serving, season with Maldon salt and garnish with chive and rosemary flowers. BRINED AND CHARRED CABBAGE WITH RICOTTA 1 spring/Hispi cabbage 2% brine (see here) olive oil 100g ricotta 10g Parmesan, freshly grated zest of ½ lemon Maldon sea salt and freshly ground black pepper Brine the cabbage in a 2% brine for 1 hour. Drain but reserve some of the brine – decant this into a spray bottle and add a drizzle of olive oil. Fire up the barbecue, or heat a ridged grill pan. Cut the brined cabbage in half and place it cut side down on the barbecue or really hot grill pan. Cook for about 10 minutes, turning the cabbage halves occasionally, until they are softened slightly and charred. Keep spraying the cabbage with the brine mixture as it cooks. Season the ricotta with the Parmesan, lemon zest, and salt and black pepper to taste. Drizzle over some olive oil. Serve the cabbage on a plate and the ricotta in a bowl alongside. FRESH MINT SAUCE a bunch of mint (about 30g), leaves picked 150ml boiling water 1 sheet/leaf of silver leaf gelatine 75g caster sugar 100ml fresh lemon juice black mint leaves, to garnish Moroccan mint leaves, to garnish Blanch the mint in the boiling water for 1–2 minutes or until soft. Drain in a sieve set in a bowl, pushing down on the mint so that the flavourful juice passes through. Reserve 50ml of the blanching liquid in the bowl. Soak the gelatine in cold water to soften it. Dissolve the sugar in 50ml of the lemon juice by warming them in a pan to just before boiling point. Drain the gelatine and stir through the lemon mixture until melted. Add the remaining lemon juice and the reserved 50ml mint blanching water. Leave to set in a suitable container in the fridge. Just before serving, top with the black mint and Moroccan mint leaves. ASSEMBLY Serve the dishes family-style so that guests can help themselves. Lamb Feast, Slow-Cooked Lamb Shoulder, BBQ Cabbage, Ricotta ROAST WOOD PIGEON CHICORY, RHUBARB Game is around at a time of year when the rest of nature has little to offer. My first experience with game was also my first experience in one of the most exciting kitchens there has ever been, the Oak Room Marco Pierre White. I was literally fresh off the boat from Dublin. Robert Reid was running the kitchen and to this day he has been one of the biggest influences on the way I cook. Rob was going off-road on the menu, knocking up something special. 
It was game season and as he stormed through each section of the kitchen, commanding caramelised ceps here, roast foie gras there, I looked up to see him stuffing the cavity of a grouse with juniper, bay and a bunch of thyme; after rolling the bird in nutty foaming butter, he popped it into the oven. We had 4 minutes to respond to all his requests perfectly or... We were, of course, ready and waiting. It was 2 minutes for the bird to rest before he smashed the heart and liver into the warm pan, added a dash of sherry vinegar and proceeded to carve. As we dressed the plate, he rolled a bunch of watercress through the warm pan with the bird's offal, before delicately placing the breasts and legs on the plate and scattering the watercress. That's when he did it... It's a moment that will live with me, something that made me forget about the long hours, terrible money and unforgiving girlfriend. None of that mattered! He took the bloody carcass in his hands, and with brute force he squeezed. The blood and innards trickled between his fingers and on to the plate. I took all the chefs out one night for dinner about two years ago and Rob Reid's technique came up. After many a bottle of wine, we all decided we would do it table-side at The Dairy. We would carry the bird through the dining room over to the table on a roasting-hot pan, hay and heather smoking away. We'd plonk the plated dish in front of the guests and politely request they pull up their white, starched napkins as we performed our pigeon press by hand. It is a shocking showstopper of a dish – not for the faint-hearted, but one for the adventurous foodie. serves 4 GRILLED RHUBARB 10g Maldon sea salt 50g caster sugar 2 stalks forced rhubarb, cut into strips Mix together the salt, sugar and rhubarb in a mixing bowl. Leave to marinate for 1 hour, then drain off the excess liquid. Char the rhubarb on a very hot ridged grill pan for 2 minutes, to scorch on all sides, until it is just tender. Remove and keep warm. BRAISED CHICORY 2 heads red chicory 50g soft unsalted butter 5g Maldon sea salt 20g caster sugar 500ml water juice of 1 lemon Cut the chicory in half lengthways. Put all the remaining ingredients into a pan and bring to the boil. Whisk together, then place the chicory in the liquid. Simmer on a low heat for 10–15 minutes or until the chicory is tender on the stem. Allow to cool in the liquid. Warm through for serving. ROAST WOOD PIGEON AND SAUCE a bunch of thyme 10 juniper berries, crushed 4 wood pigeons, plucked and trussed (ask your butcher to clean the carcass but to give you the livers and hearts separately for your sauce) a drizzle of vegetable oil a knob of butter 50ml Brown Chicken Stock 200g muscat grapes or black seedless grapes (removed from the stem) 1 teaspoon sweet sherry vinegar Preheat the oven to 150°C fan/170°C/Gas Mark 3–4. Stuff the thyme and juniper into the cavities of the birds. Set a large pan over a high heat, add a little vegetable oil and colour the birds all over. Transfer them to a roasting tray and roast for 12–15 minutes. Allow the birds to rest for 5 minutes. Meanwhile, finely chop the hearts and livers. Add the butter to the pan you used to brown the birds, followed by the chopped hearts and livers, the stock and grapes. Cook for 2 minutes. Add the vinegar to taste and keep warm. Remove the breasts and legs from the wood pigeon carcasses and keep warm in the roasting tray. 
ASSEMBLY

Arrange the drained chicory and rhubarb on each plate, followed by the pigeon breasts and legs, and the heart and liver sauce. Take the bloody carcasses to the table and hand-squeeze over the entire dish. It's bound to cause a stir!

Roast Wood Pigeon, Chicory, Rhubarb

RABBIT FEAST

Simon Woodrow, who worked with us for three and a half years, across all sites, deserves all the credit for the raves that this dish so deservedly gets. I love it as it represents so many great things: it is nose-to-tail at its best and gets loved ones around the table without costing a fortune. The dish was inspired by one of our late-night Bloodshot supper clubs. There was a crazy recipe created by Dean and Ben called 'reservoir hogs' – all you need to know is that it was a gory, Tarantino-esque dish where whole hogs' heads were served with pig's blood syringes and surgical gloves as cutlery. Anyway, in a strange way that was the inspiration for Simon and me to come up with a less intimidating but equally impressive beast feast.

It's not surprising that rabbit was Simon's choice of meat. He had come across 400 of them a month during his time with Anthony Demetre at Arbutus, where rabbit has been celebrated on the menu since day one.

This is not a 20-minute, midweek one-pan wonder; it requires time to prepare. The first thing you must do is speak to your favourite local butcher and order the rabbit; it may take a week. Then you need to ask the butcher to bone it for you. I'm suggesting that you do all the component parts for one cracking spring lunch, but they could easily be broken down into many variations as they are versatile. For example, the rabbit could easily be replaced with chicken, guinea fowl or hare. Any pickle would be a welcome addition to this dish although the Artichoke Piccalilli works particularly well.

Serves 4–6

RABBIT SADDLE

1 saddle of rabbit, with the liver (if unavailable, substitute 150g chicken livers)
a drizzle of vegetable oil
12 slices of Pancetta (or shop-bought)
10g Preserved Amalfi Lemons, finely chopped
4 sprigs of tarragon
300g caul fat, soaked in cold water and cleaned well
Maldon sea salt and freshly ground black pepper

Bone the saddle of rabbit (or have the butcher do this for you); reserve the bones for the gravy.

Heat a frying pan on a high heat until smoking hot. Drizzle a little vegetable oil over the rabbit liver (or chicken livers), add to the smoking pan and sear all over. Remove and allow to cool.

Lay out three layers of clingfilm on a flat surface. Cover with the pancetta slices, placed side by side. Place the rabbit, skinned side down, on the pancetta layer and open the breasts out, exposing a gap in the centre between the fillets. Add the livers, preserved lemon and tarragon to the gap and season with salt and pepper. With the help of the clingfilm, roll the pancetta around the rabbit to create a sausage-like shape, then remove the clingfilm.

Spread out the caul fat on the work surface and cut so the caul is a rectangular shape just big enough to wrap around the rabbit. Place the rabbit at one long side of the caul fat and roll it up around the rabbit, folding in the sides. Secure the caul fat by tying butcher's twine around the roll. Keep in the fridge until required.
RABBIT GRAVY

50g plain flour
2 tablespoons vegetable oil
8 chicken wings, chopped
rabbit bones from the saddle, chopped
80g unsalted butter
4 garlic cloves, crushed
2 shallots, sliced
100ml white wine
800ml water
a sprig of lemon thyme
zest of 1 lemon

Preheat the oven to 200°C fan/220°C/Gas Mark 7. Toast the flour on a baking tray in the oven for 15 minutes.

Set a wide-bottomed pan over a high heat and add the vegetable oil, chicken wings and rabbit bones. Once they start to caramelise, add the butter and continue cooking until the bones take on a golden-brown colour. Add the garlic and shallots and cook until they are translucent. Using a slotted spoon, remove the contents from the pot to a bowl; drain off the fat from the pan but reserve it. Deglaze the pan with the white wine, stirring well, and reduce by half. Pour into a bowl and reserve.

Clean the bottom of the pan, then add half of the reserved fat and the toasted flour and stir until the flour absorbs the fat. Add the bones mixture and water and bring to the boil. Skim and simmer for 45 minutes. Remove from the heat and add the thyme, lemon zest and reduced white wine. Allow to infuse for 15 minutes. Strain into a clean pan and reduce to a gravy consistency.

RABBIT TURNOVERS

600g duck fat
4 rabbit shoulders
2 bay leaves
3 garlic cloves, crushed
1 carrot, peeled and diced
3 celery sticks, diced
1 white-skin onion, diced
100ml Rabbit Gravy
1 tablespoon Dijon mustard
1 sheet of puff pastry (see here for homemade)
1 egg, beaten
Maldon sea salt and freshly ground black pepper

Preheat the oven to 140°C fan/160°C/Gas Mark 3. Put 500g of the duck fat into a cassoulet pot or other flameproof casserole. Season the rabbit shoulders generously with salt and pepper, then add to the pot with the bay leaves and garlic. Bring to a simmer. Transfer to the oven to cook for 1½ hours or until the meat is falling away from the bone.

Meanwhile, put each type of diced vegetable into a small pan with some of the remaining duck fat and cook until tender. Allow to cool.

Remove the rabbit shoulders from the pot (the fat could be reused). While the rabbit is still warm, pick the meat off the bones. Reduce the gravy by a third – it should be quite thick. Add the meat, vegetables and mustard to the gravy. Allow the mixture to cool.

Turn the oven back on to 180°C fan/200°C/Gas Mark 6. Cut the pastry into four to six triangles, depending on how many people you want to serve. Place a spoonful of the rabbit mixture into one corner of each triangle. Roll the pastry over the mixture and crimp the edges. Place on a baking tray and brush with beaten egg. Bake for about 15 minutes or until golden and crisp.

VEGETABLES

50g unsalted butter
200ml water
a generous pinch of Maldon sea salt
a bunch of breakfast radishes
2 Baby Gem lettuces, quartered lengthways
1 head of red chicory, quartered lengthways
40g capers
a small bunch of flat-leaf parsley, chopped

Make an emulsion with the butter, water and salt in a pan by bringing to the boil, whisking. Add the radishes, lettuces and chicory and cook until the lettuce starts to wilt. Drain, then add the capers and parsley and toss together.

ASSEMBLY

Artichoke Piccalilli

Preheat the oven to 210°C fan/230°C/Gas Mark 8. Take a suitable-sized ovenproof pan, set it over a medium-high heat and sear the rabbit saddle until golden brown all over. Transfer to the oven to roast for 5 minutes. Turn the saddle over and roast for another 5 minutes. Allow to rest for 12 minutes before removing the twine and carving.
Heat the remaining gravy and decant into a jug. Serve the saddle, turnovers, vegetables and gravy family-style, with the piccalilli, for guests to help themselves.

Rabbit Feast

BELTED GALLOWAY ONGLET
PIATONE BEANS, YOUNG GARLIC, HAY

It's all about the smoke here. The hay butter is a revelation – a Canadian chap named Joe showed us this trick. Its sweet, smoky flavour is amazing. Make a big batch and try frying an egg in the hay butter with some wild mushrooms. Delicious! I like to use onglet as it has a great depth of flavour and is cheap as chips. You must serve it rare with a nice pinch of Maldon sea salt.

serves 4–6

HAY EMULSION

250ml Brown Chicken Stock
125g hay
250g unsalted butter

Boil the stock to reduce to 125ml. In another pan, combine the hay and butter. Set over a high heat and cook until the butter starts to foam, turns a nut-brown colour and has a nutty aroma. Pass the butter through a fine sieve and whisk it into the stock to create an emulsion. Set aside in the pan.

BEANS

250g piatone or white runner beans
1 teaspoon Maldon sea salt
250g fine green beans

Toss the piatone beans with the salt and allow to soften in the fridge for 3 hours. Char the beans on a barbecue or hot ridged grill pan. Blanch the fine green beans in boiling salted water for 2–3 minutes or until tender. Drain and refresh in cold water, then split each bean down the middle.

NEW SEASON'S WHITE GARLIC PURÉE

2 bulbs of new season's garlic, cloves separated and peeled
1 bay leaf
whole milk
2 egg yolks
50ml olive oil
crème fraîche
fresh lemon juice

Blanch the garlic cloves in boiling salted water for 1 minute. Drain and refresh in cold water, then repeat the blanching. Put the blanched cloves back in the empty pan with the bay leaf and a pinch each of salt and black pepper, and cover with milk. Gently simmer until the garlic has softened. Drain but reserve the milk. Put the garlic cloves and egg yolks into a food processor. Blend together and, while blending, drizzle in the olive oil until emulsified. If the mixture thickens too much, let it down with a little of the reserved milk. Stir in the crème fraîche and salt, pepper and lemon juice to taste.

ONGLET

400–500g onglet steak
a drizzle of vegetable oil (if pan-cooking)
a knob of unsalted butter (if pan-cooking)

Ensure that the meat has come up to room temperature. Season well on both sides with salt and pepper. Barbecue the steak until medium-rare. Alternatively, heat the vegetable oil in a pan over a high heat, add the steak with the butter and cook for 2 minutes on each side while basting with the foaming butter. Allow the steak to rest for 5 minutes before serving; reserve any pan/resting juices.

ASSEMBLY

1 shallot
fresh lemon juice
tarragon leaves
chervil leaves
wild rocket leaves
Maldon sea salt and freshly ground black pepper

Slice the shallot into thin rings, then crisp up in a bowl of iced water. Add the piatone beans, the fine green beans and the drained shallot rings to the hay emulsion in a pan and warm through. Season with lemon juice and black pepper to taste. Slice the steak. Spread some of the garlic purée on each plate. Scatter around the beans, shallots and the slices of steak. Drizzle over some of the pan/resting juices from the steak. Finish with a scattering of herbs.

Belted Galloway Onglet, Piatone Beans, Young Garlic, Hay

SUCKLING PIG BELLY
BAO, KIMCHI

Aahhhh, pig belly... crispy, succulent pig belly. I sound like a dribbling Homer Simpson! But that's what this dish turns me into. A soft, white, steamed rice bun and hot-as-hell kimchi...
I'm literally dribbling as I type. Our Sichuan mayonnaise is a welcome addition. Try our Ginger Beer recipe with this and you will be a very happy camper indeed.

serves 6

SUCKLING PIG BELLY

15g curry powder
14 black peppercorns
5g ground cumin
5g ground five spice
5g ground coriander
50ml vegetable oil
3 garlic cloves, crushed
2 red chillies, roughly chopped
a good pinch of Maldon sea salt
1.5kg boned suckling pig belly (skin on)
500ml water

Preheat the oven to 150°C fan/170°C/Gas Mark 3–4. Put all the spices and the vegetable oil in a pan and toast the spices over a medium heat for 2–3 minutes. Remove from the heat and tip the spice mix into a food processor (or a mortar). Add the garlic, chillies and salt, and blend (or crush) into a paste. Rub the paste into the meat.

Place the belly in a deep roasting tray with the water, cover with foil and cook in the oven for 1½ hours. Allow the meat to cool in the liquid. Once cool, drain off the liquid, then weigh down the pork by covering with another tray and setting something heavy on this (e.g. cans of food). Leave in the fridge to press for 6–8 hours.

BAO

1 tablespoon fresh yeast
350ml tepid water
600g rice flour
75g caster sugar
3 tablespoons milk powder
1 tablespoon fine table salt
½ teaspoon bicarbonate of soda
½ teaspoon baking powder
2½ tablespoons duck fat

Put all the ingredients into the bowl of a stand mixer fitted with the paddle attachment. Mix on the slowest speed for about 20 minutes or until you have an elastic dough. Cover the dough with a tea towel and leave to rise at room temperature for an hour. The dough will puff up.

Knock back the dough, then divide into 12 balls. Leave to rise again for 30 minutes at room temperature. On a surface dusted lightly with rice flour, flatten each ball into an oval, then fold over to create the bao shape (an elongated half-moon). Leave to prove at room temperature until the bao puff up to at least double in size. Steam the buns in a bamboo or regular steamer lined with greaseproof paper over a high heat for 11 minutes. (The bao should be eaten straight away but can be warmed gently in the oven if made ahead.)

ASSEMBLY

Tokyo turnips or any radishes
a drizzle of vegetable oil
300g Kimchi
coriander
Sichuan Mayonnaise
lime wedges

Thinly slice the turnips, then soak in iced water for an hour; drain. Preheat the oven to 220°C fan/240°C/Gas Mark 9. Remove the pork from the fridge and score the skin. Heat the vegetable oil in a large ovenproof pan on a high heat. Add the pork, skin-side down, and lower the heat to medium. Cook for 10 minutes or until the skin is golden and crisp. Put the pan in the oven and cook for 5 minutes to heat the pork through. Meanwhile, place the kimchi in a bowl and garnish with coriander. Slice the pork. Serve all the elements – pork, bao, kimchi, mayonnaise, turnips and lime wedges – separately for guests to build their own buns.

Suckling Pig Belly, Bao, Kimchi

MARY HOLBROOK

In restaurant kitchens in London, and I imagine in most places, there are so many reps and middlemen trying to sell you something. The meat industry has had some pretty horrific press over the years, and as a chef I find it quite hard to trust or believe some jerk in a suit trying to sell me some meat from a price list offering beef/lamb/chicken/hippo from who knows where. They always come in unannounced (for any reps reading this, it is a pet hate for me) and their photos always show a kind-eyed, weather-beaten farmer hugging a cow that looks as though it just stepped out of a salon!
Then you come across someone like Mary Holbrook. Mary is famous for making cheese. At one time, we were hosting monthly events at Paradise Garage to highlight our favourite suppliers. Neal's Yard Dairy is one of these, a well-known company that does a rare thing for these times, which is to celebrate and support the smaller, more passionate folk who dedicate their lives to producing something special.

Simon, who was designing the menu, had a tough task in planning a meal based on a cheese supplier. But he picked a favourite cheese, which was Tymsboro, produced in Bath on Sleight Farm, and he thought that goat would be the obvious choice of meat. But then he found something else. Mary was producing a massive excess of whey from her cheese production but didn't know what to do with it. So she bought a couple of Lop pigs and fed them the whey. This natural cycle has been a huge success – her cheese production has grown a bit as the pig population has also grown. The pigs roam free and happy over ten acres of rich ground.

We shared a pig with Simon for his event and now we take a whole pig every second week. We use every scrap, paying the utmost respect to the pig in our charcuterie programme. Mary delivers them herself, and no rep or middleman is involved in our relationship. It's built on trust, passion and friendship. She turns business and expansion away – she is hard working but content and we are very lucky to work with her.

SMOKED BONE MARROW AGNOLOTTI
WILD MUSHROOMS

This has become kind of a cult classic on our menu. It's an idea brought to life by Richie. The clever so-and-so wanted to create a burst of liquid with the bone marrow filling inside the agnolotti. People go nuts for the dish, often asking for a second serving, which can make a chef cry as it's not the easiest of dishes to do. The hard work is worth it, though. Just be prepared for your guests to want more.

serves 4–6

PASTA

2 tablespoons olive oil
4 eggs
2 egg yolks
480g type '00' flour
fine semolina, for dusting

The pasta dough can be made in a stand mixer fitted with a paddle attachment, or by hand in a bowl or on the work surface. Start by mixing together the olive oil, eggs, yolks and half of the flour until well worked. Add the remaining flour a handful at a time, mixing in well before adding the next handful. This should be a slow process – a little at a time really is best. Once the last of the flour has been incorporated, knead briefly in the mixer. If making by hand, knead the dough on the floured surface for at least 10 minutes or until firm, smooth and even in consistency. Wrap the dough in clingfilm and set aside to rest at room temperature for about 30 minutes.

Divide the dough into eight pieces. Flatten the pieces and lightly dust with flour. Work with one piece at a time. Feed through a pasta machine set on the widest setting, then fold the dough over and pass through this setting again. Repeat this process three times so that you have a rectangular shape and an even thickness. Continue to pass the dough through the pasta machine, changing the setting until you reach the second thinnest setting. Repeat with the remaining pieces of dough. Lay out the sheets of pasta dough on the semolina-dusted work surface.

SMOKED BONE MARROW FILLING

50ml Worcestershire sauce
200ml Brown Chicken Stock
65g egg yolks
135g Smoked Bone Marrow, melted but not hot
Cabernet Sauvignon vinegar
black truffle oil
Mushroom Powder
Maldon sea salt

Boil the Worcestershire sauce in a small pan until reduced by half to 25ml.
In another pan, boil the stock until reduced to 100ml. Allow both to cool. Pour the reduced stock into a blender or food processor and add the egg yolks. Blend, then transfer to a clean non-stick pan. Cook over a gentle heat, stirring constantly, until the mixture reaches 79°C. Once it has reached this temperature, pour it back into the blender. While blending, gradually add the bone marrow so that the mixture emulsifies. Season to taste with a few drops of vinegar, Worcestershire sauce and truffle oil, plus mushroom powder and salt. Allow the mixture to cool, then spoon into a disposable piping bag and keep in the fridge until needed.

AGNOLOTTI

Pasta dough (see here)
Smoked Bone Marrow Filling
fine semolina, for dusting

Pipe a line of the filling along the length of one sheet of pasta, near one long edge and leaving enough pasta uncovered to fold over. Fold the pasta over the filling and press firmly to seal. Using a pastry wheel, cut the filled tube of pasta away from the rest of the sheet but leave a border of the sealed section attached. Pinch the filled tube of pasta into uniform sections, creating a seal between the pockets of filling. Use the pastry wheel to separate the sections. Repeat with the remaining pasta and filling. Keep the agnolotti in the fridge, on a tray sprinkled with semolina, until required.

GIROLLES AND PEAS

a knob of unsalted butter
300g girolles, cleaned, trimmed
200g podded fresh peas
100ml Brown Chicken Stock

Melt the butter in a large pan and add the girolles, halved if large, followed by the peas. Whisk in the chicken stock and cook, stirring, for 2 minutes.

ASSEMBLY

200g podded broad beans
a bunch of wild garlic leaves, chopped
fresh lemon juice
Mushroom Powder
shaved fresh black truffle
Maldon sea salt

Blanch the beans in boiling water for 1 minute. Drain and refresh in iced water, then pop the bright green beans out of their thick skins by squeezing gently. Bring a large pan of water to the boil. Add salt (3% of the volume of water in the pan – so, for example, 60g salt for 2 litres of water) followed by the pasta and simmer for 1–2 minutes or until the pasta rises to the surface. Use a slotted spoon to remove the pasta from the water and add to the pan with the girolles and peas. Add the wild garlic leaves and broad beans and season with lemon juice. Warm through. Transfer to plates and garnish with mushroom powder and fresh black truffle.

Smoked Bone Marrow Agnolotti, Wild Mushrooms

CHICKEN SKIN
LEEK AND CAVOLO NERO STALK

This was our dish for Dan Barber's WastED takeover on Selfridges rooftop. Each dish had to use ingredients that normally would go to waste. In our case it was leek trimmings, chicken skin and cheese trim. Even the stock from the leeks didn't go to waste and was added back into the cheese trim sauce to give an almost cheese and onion flavour. As it was made for a crowd, this recipe is slightly on the large size but it could certainly be scaled down as required.

serves 7–10

CHICKEN SKIN TERRINE

1.5kg chicken skin
Maldon sea salt
cracked black pepper

Preheat the oven to 130°C fan/150°C/Gas Mark 2. Line a large terrine mould with oven-safe clingfilm. Layer the chicken skin in the mould, sprinkling each layer lightly with salt and every third layer with cracked black pepper. Cover the top of the layered chicken skin with a piece of folded foil, then cover the top of the terrine with oven-safe clingfilm. Set the terrine mould in a bain marie (or roasting tin/tray of water) and bake for 8–10 hours or overnight. Remove the mould from the bain marie and place it in an empty roasting tray.
Peel off the clingfilm from the top and press down on the terrine (foil still on top) – any stock that overflows will be caught in the tray (reserve this stock). Place a weight on the top of the terrine and allow it to cool, then chill.

CAVOLO NERO AND LEEK TOPS BALLOTINE

stock reserved from the terrine (see above)
Brown Chicken Stock
leek top trimmings from 10 leeks
10 Fermented Cavolo Nero Stalks, drained
6 sheets of dried nori (3g each)

Measure the stock from the terrine and top it up with brown chicken stock so that you have 750ml in total. Bring the stock to the boil in a pan. Braise the leek trimmings and cavolo nero stalks in the stock until the leeks are soft. Remove the vegetables from the pan with a slotted spoon; reserve the stock.

Lay out a sheet of nori on a piece of clingfilm. Spoon some of the vegetables down the centre. With the help of the clingfilm, roll up the nori so that the vegetables are encased inside to create a ballotine. Repeat this process to make two more ballotines, then remove the clingfilm from each. Spread out a large piece of oven-safe clingfilm and lay the three ballotines on it side by side. Use the clingfilm to roll up the three ballotines together so you have one large ballotine. Tie off the ends. Repeat with the remaining nori and vegetables to make three more large ballotines. Leave to set in the fridge for 2–3 hours.

CHEESE TRIM SAUCE

10g unsalted butter
300g edible cheese trim (we use the trimmings – white bloom/rind – of soft cheeses such as Baron Bigod)
300ml whole milk
5g Sosa Procrema Cold 100 (ice cream stabiliser)
200ml buttermilk
stock reserved from braising the leeks and cavolo nero stalks
Maldon sea salt
Chardonnay vinegar

Melt the butter in a non-stick pan over a low heat. Add the cheese trim and stir the mixture as it heats, scraping the bottom of the pan because the cheese will catch. Continue to cook, stirring, until the cheese caramelises and takes on a golden-brown colour. Add the milk and slowly bring to the boil, still stirring. Remove from the heat and blend with the Procrema and buttermilk using a stick blender. Blend in some of the stock both for flavour and to loosen the consistency. Season with salt and Chardonnay vinegar to taste. Set aside in the pan.

ASSEMBLY

a drizzle of vegetable oil
chives (with flowers, if available)

Preheat the oven to 180°C fan/200°C/Gas Mark 6. Remove the mould from the fridge and turn out the terrine. Cut into 1.5cm slices (you want 7–10 slices). Heat a drizzle of oil in a large ovenproof pan over a medium to high heat. Add the slices of terrine and brown slightly on both sides. Transfer the pan to the oven and bake for 8 minutes. Turn the terrine slices over and bake for a further 6 minutes. The slices should be crispy and golden on the outside and soft in the centre.

Slice the ballotines into 1.5cm pieces, leaving the clingfilm on. Place them on a roasting tray and warm through in the oven for 4–5 minutes. Remove the clingfilm. Gently warm through the sauce, stirring constantly so that it does not catch. Spread some of the sauce on one side of each plate and top with the terrine. Add a piece of ballotine to the other side of the plate. Wilt the chives and chive flowers in a dry pan over a medium heat and use them to garnish each plate.

Chicken Skin, Leek and Cavolo Nero Stalk

LAMB'S TONGUE HOTPOT

This dish came about when I was cooking in the City of London. I was running a beautiful little restaurant in The Royal Exchange called Sauterelle, overlooking Bulgari, Tiffany's, Boodle's and Cartier.
It was my first real head chef position. Before this I had been working in two and three Michelin star kitchens, exposed to the best and most expensive ingredients. Then three months after I took my position as head chef came the big financial crisis in 2008. Suddenly I was faced with the massive challenge of cooking great food but with more humble ingredients to keep our prices down. This is where I really started to explore nose-to-tail cooking. I found lamb's tongue to be one of the tastiest and leanest parts of the animal – also bloody cheap. I thought that nobody would order it so I hid it in a hotpot with layers of onion and potato and a great stock, and baked it to perfection. Everyone who ordered it loved it. They just didn't realise they were eating tongue. Well, they never asked!

serves 4–6

HOTPOT

500g lamb's tongues
2 litres 7% brine (see here)
500ml white wine
1.5 litres Lamb Stock
1 bulb of garlic, cut in half horizontally
10 black peppercorns
3 fresh bay leaves
1.5kg small baking/frying-type potatoes such as King Edwards
2 large white-skin onions, thinly sliced
a bunch of lemon thyme, leaves picked
soft unsalted butter, to glaze
Maldon sea salt and freshly cracked black pepper

Place the lamb's tongues in the brine in a bowl and leave in the fridge for 8 hours.

Boil the wine until reduced by half, then set aside. Preheat the oven to 130°C fan/150°C/Gas Mark 2. Drain the tongues and place in a cassoulet pot or other heavy casserole with the reduced wine, stock, garlic, peppercorns and bay leaves. Lay a sheet of greaseproof paper over the top followed by a lid. Place in the oven to cook for 3–4 hours or until the tongues are tender but still retain their shape and are still very slightly firm. Allow the tongues to cool in the liquid.

Once cool enough to handle, drain the tongues in a fine sieve set in a bowl so you can retain the stock (discard the garlic, peppercorns and bay leaves). While the tongues are still warm, peel away the membrane that covers the outside using a paring knife – it is like peeling an egg. Once the membrane is removed, slice the tongues lengthways as thinly as possible. Set aside.

Peel the potatoes, then slice very thinly on a mandoline – slices about 1.5mm thick. Clean the cassoulet pot, then build the hotpot in it. Start with potato, placing slices neatly and evenly over the bottom, overlapping them like a fan from the outside in and filling any gaps. Scatter over an even layer of sliced onion and season with a pinch of salt, some cracked black pepper and a pinch of lemon thyme leaves. Add an even layer of tongue slices and press flat, then add a ladleful of the strained stock. Repeat the layers all the way to the top, using up all the ingredients and saving the best-looking sliced potatoes for the top layer. Brush a little softened butter over the top potato layer, then cover with a sheet of greaseproof paper and a lid.

Bake for 1 hour and 20 minutes – check if the hotpot is ready by piercing through to the bottom with a cake tester or long thin knife: it should meet no resistance. Remove the lid and paper. Increase the oven temperature to 220°C fan/240°C/Gas Mark 9 and bake the hotpot for about 5 minutes or until the potato layer on top is beautifully golden brown and the edges have started to crisp up.

LAMB GLAZE

35ml Lamb Sauce
35g unsalted butter, at room temperature
35ml Onion Treacle

Towards the end of the hotpot cooking time, whisk all the ingredients for the glaze together until combined.
ASSEMBLY

Maldon sea salt
lemon thyme leaves and flowers (if they are in season)

Brush the top layer of the hotpot with the lamb glaze and season with a pinch of salt and a pinch of lemon thyme leaves (and flowers if using). Serve with a loaf of crusty bread and a pot of mint sauce.

Lamb's Tongue Hotpot

GAME FAGGOTS
CELERIAC, TOASTED HAZELNUTS

This is a really comforting dish that is also versatile. The quantities here are a guideline only, and the game could easily be replaced with lamb, beef or pork. The celeriac purée adds a richness and the pickle helps to cut through that richness. I add fresh shavings of truffle at our restaurant, which bring a bit of luxury to what really is a peasant dish made from leftovers.

serves 8

FAGGOTS

3 juniper berries
50g skinless boneless chicken breast, diced
75ml double cream
75g duck liver, minced or finely chopped
75g duck hearts, finely chopped or minced
100g minced venison (could even be minced haunch)
3 black peppercorns, freshly cracked
a pinch of ground mace
a pinch of freshly grated nutmeg
a sprig of thyme, leaves picked
5g Mushroom Powder
1 teaspoon Cognac
150g caul fat, soaked and cleaned well
Maldon sea salt

Toast the juniper berries in a small dry pan until they smell fragrant, then crush them coarsely with a mortar and pestle. Blend the chicken in a food processor with a pinch of salt until smooth. Add the cream and blend it in. Scrape down the sides of the processor bowl and blend again until the cream is evenly incorporated. Transfer the mixture to a mixing bowl set over ice to keep it cold. Stir in all the remaining ingredients (with the crushed juniper), except the caul fat, and season with 1 teaspoon salt. Mix well. Chill for 30 minutes, then fry off a little spoon of the mixture to taste for seasoning and adjust accordingly.

Divide the mixture into eight portions and roll each into a sausage shape. Wrap the sausages individually in two layers of caul fat. They can be kept in the fridge in an airtight container for 1 or 2 days or can be frozen for up to 2 months.

CELERIAC PURÉE

25g unsalted butter
¼ celeriac, diced into small pieces
fresh lemon juice
100ml whole milk
25g crème fraîche
fine table salt and freshly ground black pepper

Add the butter to a hot pan. When the butter starts to foam and turns brown, with a nutty aroma, add the celeriac and a pinch of salt. Cook for 10 minutes, stirring regularly. Add a squeeze of lemon juice and the milk. Turn the heat down to a simmer and cook for 5 minutes or until the celeriac is softened (check the largest piece). Transfer to a blender and add the crème fraîche. Blend to a smooth purée. Taste and adjust the seasoning, then keep warm.

ASSEMBLY

a knob of unsalted butter
a drizzle of vegetable oil
24 slices Celeriac Pickle
100g toasted hazelnuts, crushed lightly
black truffle (optional)

Preheat the oven to 190°C fan/210°C/Gas Mark 6–7. Heat the butter and oil in a suitable-sized ovenproof frying pan over a medium heat. Add the faggots and sear all over to get a little colour. Transfer the pan to the oven and cook for 6–7 minutes. Remove from the oven and leave to rest in the hot pan while you gently warm the pickled celeriac in another pan with a little of its pickling liquid. Place a spoonful of celeriac purée on each plate, followed by the faggots together with a spoonful of the oil and juices from the pan. Finish with the celeriac pickle and hazelnuts. Shave over the black truffle, if using.
Game Faggots, Celeriac, Toasted Hazelnuts

GAME TERRINE

On our last day of shooting photographs for this book, I knew there would be a late addition, namely this terrine. It came about after a Halloween Blood and Guts-themed supper club was hosted at The Dairy. Richie and Patrick Powell, a dear friend of ours, concocted an off-the-wall menu. It began with a welcome drink, a Negroni with duck heart and sour cherry, served in a goblet made from grouse carcasses. This was no mean feat – we had to order in an extra 30 birds just for the carcasses! So we were left with all the breasts. We needed to turn them into something special as grouse is a pretty special (and bloody expensive) game bird. This idea came out of the bag and I just had to add it to the book at the last minute.

Note: You don't need all the foie gras for the terrine, but I would prepare the whole lobe and keep what you don't use to have on toast.

Makes 1 terrine – 10 generous slices

FOIE GRAS

1kg foie gras (de-veined)
2 teaspoons fine table salt
½ teaspoon pink curing salt
1 teaspoon caster sugar
a pinch of freshly ground black pepper
a drizzle of Armagnac

Season the foie gras with the salts, sugar, pepper and Armagnac. Cover and leave in the fridge overnight.

MUSHROOM DUXELLES

100g chestnut mushrooms
100g trompette mushrooms
½ teaspoon Spiced Salt
20g unsalted butter
10g black truffle
a drizzle of truffle oil

Put the mushrooms and spiced salt in a blender or food processor and blend to a coarse paste. Melt the butter in a pan, add the mushroom mixture and gently sweat over a low heat until any liquid has evaporated. Remove from the heat and grate in the truffle. Mix in the truffle oil. Allow to cool completely.

CHICKEN MOUSSE

200g skinless, boneless chicken breast, diced
½ teaspoon Spiced Salt
250ml cold double cream

Season the chicken with the spiced salt, then chill in the fridge or freezer for a few minutes so it is really cold. Blend the chicken in a blender or food processor until it forms a tight ball. Push down so it is flat and blend again while drizzling in the cream. Scrape down the sides of the container occasionally so that the chicken mousse is evenly blended. Decant into a bowl set over ice.

GROUSE

9 skinless, boneless grouse breasts
Spiced Salt

Weigh the breasts (all together) and calculate 1.5% – this is the weight of spiced salt to use to season the breasts. Rub the salt all over the breasts, then set aside until needed.
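As a quick worked example of the 1.5% rule (the 600g total below is purely illustrative, not from the recipe – weigh your own breasts and scale accordingly):

\[
m_{\text{spiced salt}} = 0.015 \times m_{\text{breasts}}, \qquad \text{e.g. } 0.015 \times 600\,\text{g} = 9\,\text{g}.
\]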
ASSEMBLY

8 large cabbage leaves (outer leaves)
20 slices lardo
seasonal leaves and pickles, to serve

Preheat the oven to 110°C fan/130°C/Gas Mark ½–1. Line a 1-litre terrine mould (35.5 x 11cm and 12cm deep) with oven-safe clingfilm. Blanch the cabbage leaves in boiling water for 20 seconds. Drain and refresh in iced water, then remove the stalks and central rib/stem. Shape 200g of the foie gras into a rectangle roughly the size of the terrine mould and 1cm thick. Fold the mushroom duxelles into the chicken mousse.

Line the terrine mould with slices of lardo so there is a slight overhang, then do the same with the cabbage leaves. Make a layer of three of the grouse breasts, slightly overlapping, on the bottom of the mould so there are no gaps. Add a layer of chicken and mushroom mousse (about a quarter of it) about 1cm thick. Place the foie gras rectangle on this and top with another layer of chicken and mushroom mousse. Add three more grouse breasts, then a layer of mousse followed by the remaining grouse breasts and the remaining mousse. Fold the overhang of cabbage leaves over the top, followed by the overhang of lardo. Cover the top of the mould with oven-safe clingfilm.

Set the terrine mould in a bain marie and place in the oven to cook for 45 minutes to 1 hour or until the core temperature reaches 55°C (use a temperature probe to check). Remove from the oven and weigh down the top of the terrine with a mould of a similar size. Once cooled, leave in the fridge overnight to set.

To serve, remove the terrine from the mould, cut into slices and peel off the clingfilm. Serve with a seasonal leaf and your pickle of choice.

Game Terrine

CHART FARM VENISON
BROGDALE PEAR, ARTICHOKE AND TRUFFLE

I always think of bitter chocolate and fruit when it comes to cooking venison. To my mind (or tastebuds), the Jerusalem artichokes here have the same malt-like flavour you get from chocolate. Pickled pears bring a welcome sharpness to the dish, which can help cut through the richness of the venison. An elegant dish that will go down a storm!

serves 4

ARTICHOKES CONFIT

500g duck fat
3 sprigs of thyme
½ bulb of garlic (cut horizontally), then cut in half vertically
500g Jerusalem artichokes, scrubbed clean with a brush

Preheat the oven to 90°C fan/110°C/Gas Mark ¼. Put the duck fat in an ovenproof pot and heat to 90°C. Add the thyme, garlic and artichokes. Cover the pot with a lid and place in the oven. Cook for up to 3 hours until the artichokes are softened (the cooking time can vary dramatically, so checking regularly is advised). Remove the artichokes from the pot and cut each in half or into smaller wedges, depending on how large they are.

PEARS IN PEAR PICKLE

50g caster sugar
200ml cider vinegar
250ml fresh pear juice (pressed from about 6 pears)
2 sweet, ripe pears, cored and quartered

Warm the sugar in the cider vinegar until just dissolved. Add the fresh pear juice and pears. Remove from the heat and leave to pickle at room temperature for 2 hours.

JERUSALEM ARTICHOKE CRISPS

300ml vegetable oil
1 Jerusalem artichoke, scrubbed clean with a brush
Mushroom Powder

Heat the vegetable oil in a deep pan or deep-fat fryer to 170°C. Using a mandoline, cut the artichoke into thin slices, cutting down the widest part. Deep-fry in batches (about 10 slices at a time) until golden brown. As each batch is fried, drain on kitchen paper and season while hot with mushroom powder. Keep in a warm, dry spot until required.

VENISON AND MARINADE

1 venison loin, trimmed (about 300g)
a drizzle of vegetable oil
a knob of unsalted butter
2 garlic cloves, lightly crushed
2 sprigs of thyme

MARINADE

300g beetroots, peeled and roughly chopped
200g rock salt
100g caster sugar
100ml vegetable oil
zest of 1 orange
15 black peppercorns
20 juniper berries

Turn the oven back on to 90°C fan/110°C/Gas Mark ¼. Put all the ingredients for the marinade into a food processor and blend to a paste. Cover the venison with the paste, then marinate for 8 minutes only. Rinse well and pat dry with kitchen paper.

Set a flameproof roasting tray over a medium to high heat, add the oil and brown the venison on all sides. Add the butter, garlic and thyme sprigs. Transfer the tray to the oven and roast the venison for about 12 minutes, turning over halfway through, or until the core temperature reaches 50°C. Remove from the oven and leave to rest.
ASSEMBLY

a knob of unsalted butter
a sprig of lemon thyme, leaves picked
100ml Venison Sauce
5g black truffle, finely chopped
black truffle shavings, to garnish

While the venison is resting, strain some of the juice from the pickled pears into a pan, add the butter and whisk over a medium heat to emulsify. Add the artichokes confit, drained pears and lemon thyme leaves and warm through. Warm the venison sauce in a separate pan, stirring in the finely chopped truffle and any pan juices from the venison. Carve the venison into four pieces. Arrange them on plates with the pears and artichokes. Finish with the venison sauce, artichoke crisps and shavings of black truffle.

Chart Farm Venison, Brogdale Pear, Artichoke and Truffle

ONGLET TARTARE
LEA & PERRINS, SMOKED BONE MARROW AND MUSHROOM

I use onglet here as I love the flavour but sirloin, fillet or bavette are all good substitutes. Just ask your butcher for the longest-aged piece of beef he has as it will be the most tender and tasty. The mushroom and bone marrow purée is so easy. Serving it with a roast scallop would be a winning combo too.

serves 4–6

PARIS MUSHROOM PURÉE

125g Paris or chestnut mushrooms, roughly chopped
50ml water
½ teaspoon sherry vinegar
a sprig of tarragon
50g Smoked Bone Marrow, melted but not hot
fresh lemon juice

Put the mushrooms, water, vinegar and tarragon into a food processor and blend together. While blending, drizzle in the bone marrow until smoothly emulsified. Season with lemon juice and salt to taste.

ONGLET TARTARE

250g onglet steak
100ml Worcestershire sauce
100ml red wine vinegar
60ml olive oil
1 shallot, finely diced
3–4 sprigs of tarragon, leaves picked and chopped
½ bunch of chives, chopped
2–3 sprigs of flat-leaf parsley, leaves picked and chopped
Maldon sea salt

Sprinkle the meat with 1¼ teaspoons sea salt, then leave in the fridge for 1 hour. Boil the Worcestershire sauce and vinegar (in separate pans) to reduce to 2 tablespoons each; cool. Mix together the reduced sauce and vinegar with the olive oil. Pat the steak dry with kitchen paper, then chop into 1cm dice. Season with the oil mixture and with the remaining ingredients to taste.

ASSEMBLY

1 shallot
200g chestnut mushrooms
wild rocket
land cress
tarragon leaves
finely grated fresh horseradish

Thinly slice the shallot into rings, then immerse in iced water to crisp up. Shave the raw mushrooms into thin pieces. Spread some of the mushroom purée on each plate. Top with the tartare and garnish with the shallot rings, shaved mushrooms, wild rocket, land cress, tarragon and horseradish.

Onglet Tartare, Lea & Perrins, Smoked Bone Marrow and Mushroom

SWEET CULTURED CREAM
SORREL, COBNUTS

Sorrel is something most people would use with fish – the famous Troisgros salmon and sorrel combo has been replicated thousands of times by thousands of cooks. But when we had a heap growing on our rooftop garden, we thought the sharpness would be a perfect palate-cleanser to set you up for a final sweet dish. It's an ideal pre-dessert.

serves 4–6

SORREL GRANITA

100g sorrel
100ml fresh apple juice
a pinch of citric acid

Blend together the sorrel, apple juice and citric acid in a blender or food processor until as smooth as possible. Strain through a fine sieve. Freeze until solid, then break up by scraping with a fork to create a granita texture.

ASSEMBLY

about 24 fresh cobnuts
60g Cultured Cream
honey for drizzling
fresh sorrel leaves

Crack open the cobnuts, peel them and cut each in half. Spread a small amount of cultured cream on the bottom of each plate.
Top with a spoonful of the sorrel granita. Garnish with a couple of cobnuts, a drizzle of honey and sorrel leaves.

Cultured Cream, Sorrel, Cobnuts

MILK, HONEY, BLUEBERRIES
BREAD CRISPS

Milk and honey is one of those classic combos, like tomato and basil, almost meant to be. We are lucky enough to have our own beehives above The Dairy so we can take credit for all of the bees' hard work. The bread crisps are a clever and easy way to transform leftover bread into something elegant.

serves 6–8

YOGHURT PANNA COTTA

2 sheets/leaves of silver leaf gelatine
125ml whole milk
35g honey
250g plain yoghurt

Soak the gelatine in cold water to soften it. Warm the milk with the honey in a pan to just short of boiling point. Drain the gelatine, squeezing out excess water, add to the pan and stir until melted. Strain the mixture through a fine sieve on to the yoghurt in a bowl and fold together. Pour into a container and leave to set in the fridge.

YOGHURT SORBET

100ml whole milk
60g liquid glucose
45g trimoline
20g glycerine
5g Maldon sea salt
1 tablespoon fresh lemon juice
500g plain yoghurt

Put all the ingredients, except the yoghurt, in a pan and heat, stirring, until evenly combined. Allow the mixture to cool slightly, then blend with the yoghurt in a blender or food processor. Churn in an ice cream machine according to the manufacturer's instructions. Store in the freezer until required.

BREAD CRISPS

frozen, slightly stale sourdough bread
icing sugar, for dusting

Preheat the oven to 180°C fan/200°C/Gas Mark 6. Allow the bread to thaw slightly, then cut into very thin slices using a serrated knife (you want to have 20 slices). Lay them on a baking tray. Toast the slices in the oven until golden on both sides. Remove from the oven and dust with icing sugar while still hot, then leave to cool.

ASSEMBLY

240g blueberries
a drizzle of olive oil
a pinch of Maldon sea salt
50g comb honey, chopped into small pieces

Season the blueberries with the olive oil and salt. Put a spoonful of panna cotta in the bottom of each bowl and make a well in it. Fill the well with some blueberries and comb honey. Top with a scoop of sorbet and some bread crisps (broken into shards).

Milk, Honey, Blueberries, Bread Crisps

SUSSEX ALEXANDER
APPLE, SUNFLOWER SEEDS, CRÈME FRAÎCHE SORBET

A lovely young chap named Dan, who worked with us, would always take the train to Brighton to see his girl (or his 'babe' as he used to call her) on his days off. On his return he would bring gifts of all sorts of weird and wonderful foraged things in bags, such as alexanders, which have a stunning, light, liquorice type of flavour. It's thanks to him that this dish came about. He now runs the wonderful Silo in Brighton.

serves 6–8

SUNFLOWER SEED PURÉE

125g sunflower seeds
100ml cold water
30g maple syrup
2½ teaspoons fresh lemon juice
a pinch of Maldon sea salt

Simmer the sunflower seeds in a pan of boiling water for 15 minutes or until slightly softened. Drain the seeds and tip into a blender or food processor. Add the remaining ingredients and blend to a smooth purée.

MERINGUE

125g egg whites
250g caster sugar
30g alexanders flowers (picked from stems)

Whisk the egg whites in a stand mixer fitted with the balloon whisk attachment until they will hold stiff peaks. Meanwhile, put the sugar and a splash of water into a pan and set over a low-medium heat to melt the sugar. Bring the sugar syrup to a simmer. When it reaches 116°C, pour it in a thin, steady stream on to the egg whites while whisking.
Continue to whisk until the meringue mixture is cold, stiff and shiny. Spread the meringue thinly on trays lined with greaseproof paper. Sprinkle over the flowers. Dry out in a dehydrator, or overnight in the oven at its lowest setting, until completely dry. Allow the meringue to cool before breaking into large pieces.

CRÈME FRAÎCHE SORBET

300ml whole milk
60g liquid glucose
40g trimoline
15g glycerine
5g Maldon sea salt
1 tablespoon fresh lemon juice
30g Sosa Procrema Cold 100 (ice cream stabiliser)
1 teaspoon Stab 2000 (ice cream stabiliser)
300g crème fraîche

Put all the ingredients, except the crème fraîche, in a pan and gently heat until they all melt together. Remove from the heat and allow to cool, then fold in the crème fraîche. Churn in an ice cream machine according to the manufacturer's instructions. Store in the freezer until required.

ALEXANDERS PICKLE

50g alexanders leaves
50ml apple juice
a pinch of citric acid
2 Granny Smith apples, peeled, cored and cut into 1cm dice

Blend the alexanders leaves, apple juice and citric acid together in a blender or food processor for about 1 minute or until smooth. Strain through a fine sieve. Pour over the diced apples in a bowl. Cover and leave in the fridge for about 1 hour.

ASSEMBLY

a drizzle of olive oil

Spread some of the sunflower seed purée in the bottom of each bowl. Top with a scoop of sorbet. Make a well in the sorbet and fill it with some diced apple and pickling liquor from the alexanders pickle plus a drizzle of olive oil. Top each plate with one of the meringue pieces.

Sussex Alexander, Apple, Sunflower Seeds, Crème Fraîche Sorbet

CHOCOLATE, WILD FENNEL
RECIPE BY DEAN PARKER, HEAD CHEF OF THE MANOR

This dish has become a staple of the restaurant, changed through the seasons. Anise flavours are sourced from our farm or with the help of Sarah's nearby allotment – in spring it is wild fennel; in summer, anise hyssop; and in winter, wild alexanders. Dean has created one of those dishes that you dare not remove from the menu as he guides it through the seasons like a pro!

serves 10

CHOCOLATE MOUSSE

60g egg yolks
90g whole eggs
60g caster sugar
90g egg whites
10g Maldon sea salt
15g molasses
270g 70% dark chocolate buttons (or chopped dark chocolate)
375ml double cream

Whisk together the egg yolks, whole eggs and half of the sugar in a bain marie (or heatproof bowl set over a pan of simmering water) until light and fluffy. Remove from the heat, but keep the bowl over hot water.

Put the egg whites into a stand mixer fitted with the balloon whisk attachment and whisk to stiff peaks. Whisk in the salt. Meanwhile, melt the remaining sugar with a splash of water in a pan, then bring to a simmer. Continue to simmer until the syrup reaches 116°C; as soon as it reaches this temperature, pour it in a thin, steady stream on to the egg whites while whisking. Continue to whisk until the mixture is cold, stiff and shiny.

Bring the molasses and 90ml of water to the boil in another pan. Pour over the chocolate in a large bowl and leave to the side for the chocolate to melt. Whip the cream until it will form soft peaks. Fold the egg yolk mixture into the melted chocolate mixture. Fold in the meringue. Lastly, fold in the cream. Leave to set in the fridge.
WILD FENNEL ICE CREAM

10g liquorice root, ground to a powder
2 round seeds from a star anise
25g glycerine
150g caster sugar
500ml whole milk
4 sprigs of chervil, leaves picked
20g dill fronds
2 mint leaves
20g wild fennel fronds
80g Sosa Procrema Cold 100 (ice cream stabiliser)
480ml buttermilk

In a dry pan, gently toast the liquorice powder and star anise seeds until fragrant. Add the glycerine, sugar and milk and heat, stirring, until the sugar has dissolved. Bring to the boil, then add the chervil, dill, mint and wild fennel. Simmer for 1 minute. Transfer this mixture to a blender or food processor and blend with the Procrema until very smooth. Pour into a bowl set over ice to cool. Once cold, stir in the buttermilk. Churn in an ice cream machine according to the manufacturer's instructions. Store in the freezer.

CRYSTALLISED CHOCOLATE

185g caster sugar
70ml water
85g 70% dark chocolate buttons (or chopped dark chocolate)
2g (about 2 pinches picked up on the back of a teaspoon) bicarbonate of soda

Put the sugar and water in a pan over a medium heat. Once the sugar has dissolved, bring the mixture up to 120°C. Add the chocolate and stir vigorously until it has crystallised – pale, grainy and crunchy. Stir in the bicarbonate of soda, remove from the heat and allow to cool.

SALTED LIQUORICE CARAMEL

200g caster sugar
25g honey
75g molasses
300ml double cream
100g white chocolate buttons (or chopped white chocolate)
75g unsalted butter, cut into small cubes
10g liquorice root, ground to powder
Maldon sea salt

Melt the sugar in a pan, then cook to a golden caramel. Add the honey and molasses and caramelise further to 110–115°C. Add the cream, stir in and bring to the boil. Pour this mixture over the chocolate in a bowl and allow to melt. In a separate small pan, melt the butter over a high heat and cook until it starts to foam, brown and take on a nutty aroma. Immediately remove from the heat and cool slightly, then stir into the caramel mixture along with the liquorice. Season with salt to taste.

WHITE CHOCOLATE TRUFFLES

200ml double cream
500g white chocolate buttons (or chopped white chocolate)
50g unsalted butter
50g cacao butter
70g cacao nibs

Heat the cream until it just starts to simmer, then pour it over the chocolate in a bowl. Leave aside to melt. In a separate small pan, melt the butter over a high heat and cook until it starts to foam, brown and take on a nutty aroma. Immediately remove from the heat, add the cacao butter and stir until melted. Stir the two mixtures together, then place in the fridge to cool.

Pour the cooled mixture into a stand mixer fitted with a balloon whisk attachment and whisk until pale and fluffy. (The mixture may appear split when first mixed. Keep the mixer going until it warms up. It will eventually become pale and fluffy.) Fold in the cacao nibs. Using a teaspoon, spoon small rounds on to a tray lined with greaseproof paper (these truffles can be quite random in shape; they do not need to be uniform). Store in the freezer.

ASSEMBLY

Drizzle some of the caramel on to each plate. Using a tablespoon, spoon two rounds of chocolate mousse on to each plate. Scatter some of the crystallised chocolate around. Place two truffles on each plate and finish with a rocher or quenelle (or scoop) of ice cream.

Chocolate, Wild Fennel

GARIGUETTE STRAWBERRY MILLE-FEUILLE
CACAO BUTTER ICE CREAM

When making mille-feuille, you can of course use a good-quality shop-bought puff pastry but making your own is a real achievement. There are a few notes to keep in mind.
Throughout the process, take care when rolling out, being as even as possible. Do not press hard into the pastry as you may merge the delicate layers of butter. Try to maintain square corners and straight edges, especially when folding so that the folds are as even as possible. Be aware of temperature when making the pastry: too warm and the butter will become too soft and melt into the dough; too cold and the butter will crack within the pastry, forming rips and gaps between layers.

You can make the mille-feuille whatever size you like. In fact, you could even make one large one that could be cut into slices at the table. Any excess pastry dough stores well in the freezer to be used in other recipes.

serves 6–8

CACAO BUTTER ICE CREAM

500ml whole milk
100ml double cream
65g liquid glucose
40g trimoline
15g glycerine
30g Sosa Procrema Cold 100 (ice cream stabiliser)
1 teaspoon Stab 2000 (ice cream stabiliser)
100g cacao butter

Put the milk, cream, glucose, trimoline, glycerine, Procrema and Stab 2000 into a pot and gently warm through, stirring occasionally. In a separate small pan, gently melt the cacao butter. Pour the milk mixture into a food processor. While blending, drizzle in the cacao butter to emulsify. Allow the mixture to cool to room temperature, whisking occasionally so that the cacao butter does not solidify. Churn in an ice cream machine according to the manufacturer's guidelines. Store in the freezer.

PUFF PASTRY

550g plain flour
50g unsalted butter, melted
210ml water
25ml white wine vinegar
400g unsalted butter, at room temperature

Put the flour into the bowl of a stand mixer fitted with the paddle attachment. While mixing on a low speed, add the melted butter, water and vinegar. Continue to mix at a low speed until a dough forms. Turn up the speed slightly and mix until the dough is smooth and even to the touch. (Alternatively you can mix/knead the dough by hand.) Knead the dough a little on a floured worktop before forming into a ball. Wrap in clingfilm and leave to rest in the fridge for 1 hour.

Place the slab of room-temperature butter between two sheets of greaseproof paper. Using a rolling pin, roll out the butter into a square about 2cm thick. Remove the dough from the fridge and shape it into a rough square on the floured worktop. Using a rolling pin, mark out another square on top of the dough by pressing into the centre with the length of the rolling pin (this square should be about the same size as the square of butter). Roll out the dough from the sides of the marked square to create an even cross shape, leaving the thicker square area in the middle of the cross (the marked square). The 'arms' or flaps of the cross need to be about 5mm thick and large enough to fold over the square of butter. At this point the square of butter and the dough should be the same texture to touch.

Place the square of butter in the middle of the cross, on the raised centre square. Fold the right flap over the butter so that it completely covers it. Repeat with the left flap, then the bottom flap and, finally, the top flap. You will now have a square of dough with the butter completely encased within.

Roll out the square away from you on the lightly floured worktop into a rectangle almost triple its original length and double the width. Fold the top third down and the bottom third up over this. Wrap in clingfilm and rest in the fridge for 20 minutes. Place the dough on the lightly floured worktop so the folded edges are to the sides.
Roll out away from you into a rectangle the same size as before. Fold as before, then turn the dough so that the folded edges are to the sides and roll out again. Repeat the folding (making a total of three folds up to this point). Wrap in clingfilm and rest in the fridge for 20 minutes. Repeat the previous rolling and folding process so that the dough will have five folds in total. Rest the pastry in the fridge again before rolling out for use.

Preheat the oven to 200°C fan/220°C/Gas Mark 7. Roll out the pastry dough to a 5mm thickness and cut into 8 x 30cm strips. Place the strips on baking trays lined with greaseproof paper. Bake for 10 minutes, then lower the oven to 180°C fan/200°C/Gas Mark 6 and bake for a further 30 minutes. Drop the temperature to 160°C fan/180°C/Gas Mark 4 for 10 minutes of baking, then to 150°C fan/170°C/Gas Mark 3–4 for another 10 minutes and finally to 140°C fan/160°C/Gas Mark 3 for the last 10 minutes. The oven door must remain closed during the entire baking process. Remove the pastry from the oven and allow to cool on a wire rack. Portion into rectangles of the desired size (you need three for each mille-feuille).

STRAWBERRY JAM

300g strawberries, hulled and cut in half
200g caster sugar
½ vanilla pod, split open in half

Put the strawberries and sugar into a pot with a splash of water and scrape in the seeds from the vanilla pod. Cook, stirring, until the sugar has dissolved and the fruit begins to break down. Bring the mixture to 105°C, then remove from the heat. Allow to cool.

HONEY CREMEAUX

25g egg yolks
½ sheet/leaf of silver leaf gelatine
65ml UHT double cream
65ml whole milk
25g honey
100g white chocolate buttons (or chopped white chocolate)
about 300ml double cream

Put the egg yolks into a large bowl. Soak the gelatine in cold water to soften it. Bring the UHT cream and milk to the boil in a pan. Drain the gelatine, squeezing out excess water, add to the pan and stir until melted. Pour this mixture over the honey in a bowl, stirring. Gradually add the honey mixture to the yolks, stirring constantly to prevent scrambling. Add the chocolate and whisk until fully emulsified. Pour into a container and leave to set in the fridge. Weigh the honey mixture and add an equal weight of double cream. Whisk together until the mixture is light and will hold form.

ASSEMBLY

2 punnets of Gariguette strawberries, hulled and sliced
sherbet

To assemble each mille-feuille, lay one rectangle of pastry on the worktop. Spoon over some cremeaux, then jam and add some sliced strawberries. Place a second pastry layer on top and gently press down. Spoon over some cremeaux, then jam and add some sliced strawberries. Place a final layer of pastry on top and press down gently. Repeat for each mille-feuille. Dust the tops with sherbet and serve each one with ice cream on the side.

Gariguette Strawberry Mille-Feuille, Cacao Butter Ice Cream

WHITE PEACH
ALMOND SKIN ICE CREAM, ELDERFLOWER JELLY

This dish shows off seasonality and nature's gifts. After producing it just once, it went straight on the menu – one of the rare times this has happened. The skins of fresh almonds are normally discarded as they are too tough and fibrous to eat, but we use them to infuse the milk and cream for our ice cream. It is a light, fresh-tasting ice cream, with its own unique flavour. The dessert has a short life on the menu as fresh almond season runs only from April to early June – an exclusive 6 weeks. White peaches and elderflower have the same short season – it's as if it was meant to be!
serves 6–8 ALMOND SKIN ICE CREAM 800g–1kg fresh almonds 500ml semi-skimmed milk 200ml double cream 100g caster sugar 10g liquid glucose 90g pasteurised egg yolks Peel the green outer shells from the almonds and discard. Peel off the orange skin or flesh closest to the nuts – you want 100g of these skins for the ice cream. Set the fresh almonds aside for serving. In a suitable-sized pan, combine the milk, cream and almond skins and set over a medium heat. Bring to a simmer, then remove from the heat. Once cooled, refrigerate overnight to allow the flavour from the skins to infuse the milk/cream. The next day, pour the milk infusion into a pan and bring to a simmer. Meanwhile, whisk the sugar with the glucose and egg yolks until pale and creamy. Strain the milk infusion through a fine sieve, then pour half of it on to the egg yolk mixture and whisk to combine. Pour this back into the saucepan with the rest of the strained milk infusion. Cook over a gentle heat, stirring all the time, until the mixture thickens enough to coat the back of the spoon (it should reach 84°C). Remove from the heat and pass through a fine sieve into a flat tray set over ice to cool it down quickly. Once cool, transfer to an ice cream machine and churn according to the manufacturer's instructions. Freeze until required. ELDERFLOWER JELLY 2 sheets/leaves of silver leaf gelatine juice of 2 lemons 320ml Elderflower Cordial Soak the gelatine in cold water to soften, then drain, squeezing out excess water. Mix the lemon juice with the cordial. Warm 80ml of this liquid and melt the gelatine in it. Add this to the rest of the liquid and mix together. Pour into a suitable container and leave to set in the fridge. ASSEMBLY 4 white peaches elderflower pulp (from making cordial) or fresh elderflowers (if available) Before serving, put the ice cream in the fridge to soften slightly. Slice the fresh white peaches and divide among the plates. Using a teaspoon, add 3–4 scoops of elderflower jelly to each plate. Scatter the reserved fresh almonds around the plates and finish each with a scoop of ice cream and some elderflower pulp or fresh elderflowers, if using. White Peach, Almond Skin Ice Cream, Elderflower Jelly SALTED CARAMEL CACAO, MALT ICE CREAM One of the first dishes to be created at The Dairy, this recipe has been improved and enhanced by the quality of the chocolate we now use and the addition of a special malt we buy from a local brewery. A well-known chef said this about the dessert: 'I would run completely naked across the Common just to have that again.' If you are left with any excess truffles, they can be stored in the freezer and served as petits fours. serves 6–8 CHOCOLATE TRUFFLES 50g unsalted butter, cut into small cubes 100ml double cream 250g 72% dark chocolate buttons (or chopped dark chocolate) 40g cacao nibs a pinch of Maldon sea salt cocoa powder, for dusting Put the butter in a pan over a high heat and cook until it starts to foam and brown and has a nutty aroma. Stir in the cream, then bring just to the boil. Pour this mixture over the chocolate in the bowl of a stand mixer fitted with the balloon whisk attachment. Whisk on a low speed until the chocolate has fully melted. Turn up the mixer speed gradually until the mixture begins to whip. When it is light and aerated, add the cacao nibs and salt, and mix on a high speed briefly to incorporate. Transfer the mixture to a disposable piping bag and snip off the end. Pipe into lengths (1.5cm in diameter) on greaseproof paper. 
Freeze before roughly cutting into pieces (about 1.5cm long). Dust with cocoa powder. Keep in the freezer until required. CHOCOLATE SOIL 250g ground almonds 150g demerara sugar 150g buckwheat flour 80g cocoa powder 1 teaspoon Maldon sea salt 140g unsalted butter, melted Preheat the oven to 160°C fan/180°C/Gas Mark 4. Mix together all the dry ingredients in a stand mixer fitted with the paddle attachment. Add the melted butter and mix to combine. Spread the mixture on a baking tray. Bake for 30 minutes, stirring the mixture every 10 minutes. Allow to cool, then store in an airtight container in a cool, dry place. SALTED CARAMEL 300g caster sugar 7.5g trimoline 75g unsalted butter, diced 300ml double cream 100g 66% dark chocolate buttons (or chopped dark chocolate) 1 teaspoon Maldon sea salt Place the sugar and trimoline in a pan. Add a little water to make a 'wet sand' consistency. Set over a high heat to melt the sugar, then boil until the syrup reaches a dark caramel stage (165–175°C). Remove from the heat and whisk in the butter a third at a time. Continue whisking until smooth. In a separate pan, warm the cream until it just reaches boiling point. Pour over the chocolate in a bowl and whisk until smooth and glossy. Pour the cream/chocolate mixture into the butter caramel and whisk together until smooth. Add the Maldon salt and mix through. CHOCOLATE TUILE 50g liquid glucose 50ml double cream 125g unsalted butter 155g caster sugar ¾ teaspoon pectin powder 175g cacao nibs Put the glucose, cream, butter and 150g of the sugar in a pan and melt together. Mix the pectin with the remaining sugar and add to the pan. Boil the mixture until it reaches 107°C. Remove from the heat and allow the mixture to cool down to at least 45°C before folding through the cacao nibs. Roll out the mixture between sheets of greaseproof paper as thinly as possible. Freeze and keep in the freezer until ready to bake. Preheat the oven to 160°C fan/180°C/Gas Mark 4. Place the frozen tuile sheet (still with greaseproof paper top and bottom) on a large baking tray and set a large wire rack over the top to hold down the edges of the greaseproof paper. Bake for about 15 minutes or until the tuile is set and doesn't appear to be liquid when the tray is gently knocked. Allow to cool before breaking into shards. Store in an airtight container. MALT ICE CREAM 375ml double cream 375ml whole milk 35g milk powder 25g trimoline 1 teaspoon Stab 2000 (ice cream stabiliser) 75g malt extract 90g pasteurised egg yolks 65g caster sugar Put the cream, milk, milk powder, trimoline, Stab and malt extract in a pan. Whisk together and bring to the boil. In a large bowl, mix together the yolks and sugar. Pour a third of the hot mixture over the yolks and sugar and whisk together. Add this to the rest of the hot mixture in the pan and whisk in. Heat until the temperature of the mixture is 85°C. Pass through a chinois or very fine sieve into a deep tray set over ice to cool the mixture quickly. Once cool, churn in an ice cream machine according to the manufacturer's instructions. Store in the freezer. ASSEMBLY Spoon some of the salted caramel over the bottom of each plate. Sprinkle with a few truffles and scatter over chocolate soil. Add a couple of quenelles of ice cream to each plate and finish with a few tuile shards. Salted Caramel, Cacao, Malt Ice Cream JERUSALEM ARTICHOKE CRÈME FRAÎCHE, POACHED QUINCE Kira Ghidoni helped put The Manor on the map. 
We received a five-star review from the late, great AA Gill (not a relative unfortunately) in which he called Kira 'a tweezer-twirling pâtissier: unequivocally the best pudding in London'. Thanks AA, and well done Kira. This is still one of my favourite puddings ever. serves 8–10 JERUSALEM ARTICHOKE ICE CREAM 200g Jerusalem artichokes, scrubbed clean and grated 100g unsalted butter 300ml whole milk 100ml double cream 15g milk powder 25g trimoline 25g caster sugar 80g egg yolks Put the Jerusalem artichokes into a pan with the butter and cook until the butter turns golden brown. Drain off the butter. In another pan, bring the milk, cream, milk powder, trimoline and caster sugar to the boil. Blend this hot liquid with the artichokes in a blender or food processor until smooth, then pass through a fine sieve into a clean pan. Add the egg yolks and cook the mixture over a very gentle heat, stirring constantly (as you would a crème anglaise) until it thickens enough to coat the back of the spoon. Pass through a fine sieve again, then allow to cool. Churn in an ice cream machine according to the manufacturer's instructions. Store in the freezer until required. POACHED QUINCES seeds from 5 cardamom pods 3 quinces 1 lemon, cut in half 400ml fresh apple juice 40ml cider vinegar 30g caster sugar 5g fine table salt Preheat the oven to 120°C fan/140°C/Gas Mark 1. Toast the cardamom seeds in a small dry pan until fragrant, then lightly crush them. Peel the quinces and cut in half. As each is prepared, drop it into a bowl of lemon water (water with squeezed lemon halves) to prevent browning. Combine the apple juice, vinegar, sugar, salt and cardamom in an ovenproof pot and bring to the boil. Add the quince halves. Cover with a lid and transfer to the oven to cook for 3 hours. Allow the quinces to cool in the poaching liquid, then lift them out (reserve the liquid). Remove the cores and cut the quince flesh into a uniform dice, reserving the trimmings for the gel. QUINCE GEL 125g quince trimmings (if there are not enough trimmings, use some of the diced quince to make the 125g) 50ml fresh apple juice ½ teaspoon agar agar Put the quince trimmings and apple juice in a blender or food processor with 50ml of the quince poaching liquid. Blend until smooth, then pass through a fine sieve into a pan. Add the agar agar and simmer, stirring, for 2 minutes. Pour into a jug or bowl. Leave in the fridge until set, then blend again before decanting into a squeezy bottle. Store in the fridge until required. FROZEN CRÈME FRAÎCHE 150ml whole milk 110g crème fraîche 20g Sosa Procrema Cold 100 (ice cream stabiliser) 10g dextrose 35g caster sugar juice of ¼ lemon Put all the ingredients into a blender or food processor and blend until thoroughly mixed. Decant into a siphon gun with one charge. Shake vigorously, then express the mixture into a freezerproof container. Cover with a lid and freeze. ARTICHOKE CRISPS 2 Jerusalem artichokes, scrubbed clean vegetable oil for deep frying Maldon sea salt Thinly slice the artichokes on a mandoline. Heat oil in a deep pan or deep-fat fryer to 170°C. Fry the artichoke slices a few at a time in the hot oil until golden and crisp. Drain on kitchen paper and season with a little salt. The crisps can be stored in an airtight container for up to 2 days before using. ASSEMBLY Scrape the frozen crème fraîche with a fork to make a granita texture. Place a spoonful of the artichoke ice cream in the centre of each plate. Scatter poached quince around the ice cream and dot gel around the plate. 
Scatter frozen crème fraîche and artichoke crisps over the top. Jerusalem Artichoke, Crème Fraîche, Poached Quince TOASTED WHITE CHOCOLATE PANNA COTTA, FORCED RHUBARB This is a little showstopper, rich with white chocolate but with sharp flavours of rhubarb cutting through nicely. The hint of salt on the vanilla biscuit is very much welcome. Did you know that forced rhubarb is actually grown in barns in complete darkness and harvested in candlelight to avoid photosynthesis, which would otherwise turn the rhubarb green and tough? Wow! serves 4–6 VANILLA SABLÉ 120g unsalted butter, at room temperature 100g caster sugar 1 egg yolk 2 vanilla pods, split open in half 150g plain flour ½ teaspoon fine table salt 5g baking powder Put the butter and sugar in a stand mixer fitted with the whisk attachment and whisk together until light and fluffy. Mix in the egg yolk and the seeds scraped from the vanilla pods. Add the flour, salt and baking powder and bring together with your hands into a dough. Divide the dough into two pieces. Roll out each piece of dough on a lightly floured surface into a rectangle about 2mm thick and place on a baking tray lined with greaseproof paper. Chill for 30 minutes. Preheat the oven to 175°C fan/195°C/Gas Mark 5–6. Bake the sablé sheets for 10 minutes or until golden. Cool on the trays, then break into random shards. TOASTED WHITE PANNA COTTA 125g white chocolate chips (or white chocolate broken into pieces) 1 sheet/leaf of silver leaf gelatine 250ml double cream Preheat the oven to 160°C fan/180°C/Gas Mark 4. Spread out the chocolate on a baking tray lined with greaseproof paper. Bake for 8–12 minutes or until caramelised to a golden colour. Remove from the oven and allow to cool slightly. Soak the gelatine in cold water to soften it. Bring 125ml of the cream to the boil in a pan. Remove from the heat. Drain the gelatine, squeezing out excess water, and add to the cream along with the chocolate. Whisk until the gelatine has melted and the mixture is smooth. Pour into a bowl and chill for 3–4 hours to set. Whip the remaining double cream to soft peaks. Whisk the white chocolate mixture with the whipped cream. Keep in the fridge until required. MARINATED RHUBARB 200g forced rhubarb, cut into 5mm dice ½ teaspoon fine salt juice of ½ lemon 1 teaspoon caster sugar In a bowl, mix the rhubarb with the salt, lemon juice and sugar. Cover tightly with clingfilm and leave in the fridge for 8 hours. RHUBARB COMPOTE 250g caster sugar 125ml water 500g forced rhubarb, roughly diced into 1cm pieces a squeeze of lemon juice Put the sugar and water into a pan and bring to the boil, stirring to dissolve the sugar. Boil until the syrup reaches 120°C. Add the rhubarb and cook over a high heat, stirring, until the rhubarb breaks down. Remove from the heat and stir in the lemon juice. Allow to cool to room temperature. RHUBARB SNOW 100g caster sugar 100ml water 250g forced rhubarb (rhubarb trimmings can be used here), roughly chopped 20ml lemon juice or citric acid Dissolve the sugar in the water in a pan. Add the rhubarb and lemon juice or citric acid and bring to a simmer. Cook until the liquid has taken on a strong pink colour and the rhubarb has broken down. Strain through a fine sieve, gently pressing on the rhubarb so as much flavour passes through as possible. Allow the liquid to cool, then pour into a freezerproof container and freeze until solid. Break it up by scraping with a fork to create a granita texture. 
ASSEMBLY Spoon a generous mound of panna cotta on to the centre of each plate. Make a well in the centre of it and fill it with all the rhubarb elements. Serve the sablé biscuits on the side. Toasted White Chocolate, Panna Cotta, Forced Rhubarb IVY HOUSE MILK TART SOUR APPLE SORBET This is one of those challenging dishes that takes some skill and a lot of patience to perfect. We've had a few grown men close to tears trying to make it, not to mention some near punch-ups over oven space! When it's done well, there is nothing but pure satisfaction in its seemingly simple beauty. It caresses all the senses. We make it every day and it must all be sold on the day. There are fights over the remaining spoonful once last orders hit the kitchen. serves 8 SOUR APPLE SORBET 2 sheets/leaves of silver leaf gelatine 590ml Fermented Apple Juice 150g caster sugar 20ml fresh lemon juice 60g liquid glucose 1 heaped teaspoon citric acid Soak the gelatine in cold water to soften it, then drain and put into a pan with all the other ingredients. Heat to melt the gelatine, then simmer gently until the sugar and glucose have dissolved. Allow to cool. Churn in an ice cream machine as per the manufacturer's instructions. Store in the freezer until ready to use. TART BASE 165g Weetabix 170g unsalted butter, cut into small cubes 170g Campaillou bread flour 30g cornflour 2 whole eggs 20g egg yolks 240g caster sugar egg wash Preheat the oven to 175°C fan/195°C/Gas Mark 5–6. Crush the Weetabix finely in a blender or food processor, then tip on to a baking tray and toast in the oven until golden. Melt the butter in a pan over a high heat and cook until it starts to foam, brown and take on a nutty aroma. Immediately remove from the heat and cool quickly (set the base of the pan in cold water) to stop the butter from burning. Whisk during cooling so the milk solids are spread evenly through the butter as it sets. Put the toasted Weetabix back in the food processor and add the flour and cornflour. With the machine running, gradually add the cooled brown butter through the feed tube to make a breadcrumb consistency. Whisk the eggs and egg yolks with the sugar until light and fluffy. Add the flour and butter mixture and fold through thoroughly. Wrap this pastry in clingfilm and leave to rest in the fridge for 30 minutes. Preheat the oven to 160°C fan/180°C/Gas Mark 4. Roll out the pastry into a large rectangle on a lightly floured surface, about a 4mm thickness. Place on a large baking tray. Now take a 30cm square bottomless tin (with the bottom removed) and press it into the pastry, cutting through the pastry but keeping it on the tray. Cover the pastry within the square tin with greaseproof paper and weigh down with baking beans. Blind bake for 15 minutes. Remove the beans and paper from the internal square, lower the oven to 150°C fan/170°C/Gas Mark 3–4 and bake for a further 10 minutes. Reduce the oven to 140°C fan/160°C/Gas Mark 3 and bake for a final 10 minutes. When you take the pastry out of the oven, remove the trimmings/edge of the pastry (i.e. the pastry outside of the square) from the tray, allow to cool and crumble into a crumb. Meanwhile, brush the bottom of the square pastry case with egg wash and allow to cool. BURNT APPLE PURÉE 100g demerara sugar 4 Granny Smith apples, diced 50g malt extract 50ml apple juice ½ teaspoon fresh lemon juice Preheat the oven to 190°C fan/210°C/Gas Mark 6–7. Mix the sugar with the apples and spread out on a baking tray.
Bake for about 20 minutes or until the apples are a deep, dark brown colour. Tip into a food processor, add the remaining ingredients and blend to a smooth purée. Pass through a fine sieve. Turn the oven down to 90°C fan/110°C/Gas Mark ¼. Spread a thin layer of the purée over the bottom of the tart case and bake for 10 minutes to set the purée. Leave to cool. CUSTARD 200g egg yolks 110g caster sugar 350ml double cream (we use Ivy House) 330ml UHT cream Whisk the egg yolks and sugar together until the sugar has dissolved. Pour the two types of cream into a pan and bring to a simmer. Remove from the heat and allow to cool until a skin forms across the top. Remove the skin. Repeat this three more times, removing the skin each time. Finally, bring the cream to a simmer, then slowly stir it into the egg yolk mixture. Preheat the oven to 90°C fan/110°C/Gas Mark ¼. Pour the custard into the pastry case. Bake the tart for about 1 hour or until it takes on a panna cotta wobble. Allow to cool to room temperature. ASSEMBLY Slice the tart and place a slice on each plate. Add a spoonful of the reserved pastry crumb to each plate and top with a quenelle of sour apple sorbet. Ivy House Milk Tart, Sour Apple Sorbet BLACKBERRY LEAF PANNA COTTA BLACK PEPPER SABLÉ In late summer it's such a nice idea to flavour a cream or panna cotta with blackberry leaves, which have a wonderful herbaceous green-tea-like flavour. We sometimes use fig or citrus leaves as an alternative. serves 10 BLACK PEPPER SABLÉ 120g unsalted butter, at room temperature 100g caster sugar 1 egg yolk 2 vanilla pods, split open in half 150g plain flour ½ teaspoon fine table salt 5g baking powder ½ teaspoon freshly cracked black pepper Put the butter and sugar in a stand mixer and whisk together until light and fluffy. Mix in the egg yolk and the seeds scraped from the vanilla pods. Add the flour, salt, baking powder and black pepper and bring together with your hands into a dough. Divide the dough into two pieces. Roll out each piece of dough on a lightly floured surface into a rectangle about 2mm thick and place on a baking tray lined with greaseproof paper. Chill for 30 minutes. Preheat the oven to 175°C fan/195°C/Gas Mark 5–6. Bake the sablé sheets for 10 minutes or until golden. Cool on the trays, then break into random shards. PANNA COTTA 4 sheets/leaves of silver leaf gelatine 500ml double cream 250ml whole milk 75g caster sugar 10g Blackberry Leaf Powder Soak the gelatine in cold water to soften it. Drain and squeeze out excess water, then put into a pan with all the remaining ingredients. Bring to the boil, stirring occasionally. Remove from the heat and strain through a fine sieve into a bowl. Cover and chill the mixture to set. BLACKBERRY LEAF OIL 50g fresh unsprayed blackberry leaves, rinsed well in a colander 50ml olive oil Blend the leaves and oil together in a blender or food processor until as smooth as possible. Pour into a pan and bring quickly to the boil over a high heat. As soon as the mixture reaches the boil, remove from the heat and strain through a fine sieve into a bowl set over ice (this will cool the oil quickly and help it retain its bright green colour). BLACKBERRY LIQUOR 150g very ripe blackberries 75g dextrose a pinch of citric acid Put all the ingredients in a pan, bring to a simmer and simmer for 8 minutes or until the fruit has completely broken down. Pass through a fine sieve, pushing down on the fruit so that all the juice passes through. Cool.
ASSEMBLY fresh blackberries an aromatic herb such as sorrel or lemon balm Whip the panna cotta mixture just to loosen it, then decant into a piping bag. Pipe a round of panna cotta in the bottom of each bowl. Drizzle a little blackberry leaf oil and blackberry liquor into the bottom of the bowl. Garnish with fresh blackberries and whatever herb you have chosen. Serve the sablé biscuits on the side. Blackberry Leaf Panna Cotta, Black Pepper Sablé OLD-FASHIONED ICE CREAM SANDWICHES Ice cream 'sambos' take me back to where I was brought up. I lived a stone's throw from the infamous Teddy's Ice Cream in Sandycove, where many a summer's day was spent rummaging through my father's jingles (loose change) or reaching down the side of the couch to raise enough pennies for another ice cream sandwich! This dessert uses the flavours of an Old-Fashioned cocktail. In the process of making our cocktail, Kerry G Old-Fashioned, we fat-wash the whiskey with butter. It's this butter that we use to make the parfait for this dessert. Makes 8–10 WHISKEY BUTTER PARFAIT 1½ sheets/leaves of silver leaf gelatine 125g butter that has been used for Fat-washed Whiskey 485ml double cream 150g egg yolks 190g caster sugar Soak the gelatine in cold water to soften it. Melt the butter in a pan, add 50ml of the cream and bring just to the boil. Drain the gelatine, squeezing out excess water, and stir into the cream mixture until melted. Remove from the heat and allow to cool. Whisk the egg yolks in a stand mixer until pale and fluffy. Meanwhile, melt 100g of the sugar in a pan with a little water, then boil until it reaches 116°C. Gradually pour the hot sugar syrup down the side of the mixer bowl while whisking to incorporate it into the egg yolks. Continue to whisk at a high speed until the mixture begins to stiffen. Put the remaining sugar and cream into a bowl and whip to soft peaks. Gently fold the butter/gelatine mixture into the egg yolk mixture, then fold in the whipped cream. Pour the mixture into a freezerproof container or tray in a layer about 4cm thick. Freeze. PÂTE DE BRICK 'WAFERS' 6 sheets of feuille de brick pastry 50g unsalted butter, melted icing sugar, for dusting Preheat the oven to 160°C fan/180°C/Gas Mark 4. Brush a sheet of the pastry with butter and dust with icing sugar. Top with another sheet of pastry, brush this with butter and dust with icing sugar. Top with a third sheet of pastry. You'll now have three sheets of pastry stuck together. Repeat with the remaining three sheets of pastry. Cut out 4 x 9cm rectangles. Place them on a baking tray lined with greaseproof paper. Lay another sheet of greaseproof paper on top and weigh down with a second baking tray. Bake for 12 minutes or until golden brown. Allow to cool before storing in an airtight tin. SHERBET 60g icing sugar, sifted ½ teaspoon citric acid Mix the icing sugar with the citric acid. ASSEMBLY 100g Blood Orange Marmalade Portion the parfait into 4 x 9cm rectangles. Set each rectangle on one of the wafers and drizzle with some of the marmalade, then top with another wafer to create a sandwich. Dust the sandwiches with the sherbet. Old-Fashioned Ice Cream Sandwiches HIBISCUS DOUGHNUTS These mini doughnuts make a wonderful snack or petits fours at the end of a meal. It is worth serving them with something creamy on the side such as Ben's Beeswax Cream, crème fraîche or yoghurt. 
Makes about 30 bite-sized doughnuts HIBISCUS SHERBET 100g caster sugar 5g dried hibiscus flowers 5g citric acid Blend all the ingredients together in a blender or food processor to a powder. DOUGHNUTS 500g plain flour 25g caster sugar a good pinch of fine table salt 5 eggs 125ml water 25ml vegetable oil zest of ½ orange zest of ½ lemon 10g fresh yeast 115g unsalted butter, softened vegetable oil, for deep-frying In a stand mixer fitted with the paddle attachment, mix together the flour, caster sugar and salt for a minute. Add the eggs, water, oil, zests and yeast and mix for 2 minutes. Add the butter and mix for another 2 minutes to make a dough. Spoon the dough into disposable piping bags, filling each only by a third; do not tie closed. Keep in the fridge until required. About an hour before you want to serve the doughnuts, place the dough (in the piping bags) in an ambient area – about 25°C – and allow to rise. Heat oil in a deep pan or deep-fat fryer to 200°C. Have a small bowl of water nearby so you can dip your fingers. Pipe a little of the dough into the hot oil, squeezing the top of the bag with one hand and using the wet fingertips of your other hand to separate the dough into round balls as it is piped out. Fry the doughnuts – in batches – for about 1 minute or until golden all over. Drain well on kitchen paper. ASSEMBLY Ben's Beeswax Cream Once drained, roll the doughnuts in hibiscus sherbet and serve immediately with the beeswax cream on the side. Hibiscus Doughnuts SALTED LEMON AND SUNFLOWER SEED NOUGAT When making nougat at home, you can add so many different flavours. There are just a few things to keep in mind. Have all the equipment and ingredients ready before you start as you won't have time for any preparation once the sugars are hot. Do not take the sugar and honey any higher than the temperatures specified in the recipe. If adding a praline or nut paste, stir it in at the end of the process but while the stand mixer is still whisking. Makes 1 tray (about 30 x 24cm) 20g peel from Preserved Amalfi Lemons, very finely diced 300g sunflower seeds 435g caster sugar 90g liquid glucose 125ml water 250g honey 50g egg whites TO DUST cornflour icing sugar Preheat the oven to 140°C fan/160°C/Gas Mark 3. Spread the preserved lemon peel on a small baking tray and slightly dry out in the oven for 5–10 minutes. In another baking tray, toast the sunflower seeds in the oven for 8–10 minutes or until golden brown. Keep them warm. Put 415g of the sugar, the glucose and water in a pan and heat to dissolve the sugar. Put the honey in a separate pan and heat. While the honey and sugar syrup are heating up, put the egg whites and remaining sugar in the bowl of a stand mixer fitted with a balloon whisk attachment and whisk to stiff peaks. Once the honey has reached 125°C, gradually pour it into the egg whites while whisking. Continue to whisk at a high speed while the sugar syrup carries on boiling. When it reaches 145°C, slowly pour it into the egg white mixture, whisking constantly. Then whisk at full speed for a further 1 minute. Remove the bowl from the mixer and, using a large metal spoon, fold in the warm seeds and the preserved lemon. Spoon on to a tray (about 30 x 24cm) lined with greaseproof paper. Spread out the mixture as much as possible before placing another sheet of greaseproof on top and using the back of the spoon or a dough scraper to spread the nougat evenly. Allow it to set at room temperature before portioning into 2cm squares. 
Dust each piece with a mixture of equal parts cornflour and icing sugar. Salted Lemon and Sunflower Seed Nougat ACKNOWLEDGEMENTS To my wife, Sarah, who always keeps me grounded and loved. She brought me Ziggy and has always been my biggest fan. She makes sure we have time outside of the restaurant, lies to me about how long we are on holidays for, and together we know how to have fun. To true talent... It wasn't until I opened The Dairy that I realised how important, productive and fun it is to share the creativity of menu-planning. Up until that point I always wrote the menus and dictated how things were done. I would always ask for opinions and welcomed input but the opening of The Dairy was different. On day one I wrote the first menu, then asked Dean, Richie and Ben how they thought we should do it. That was the first of millions of brainstorming sessions that happen constantly and paved the way for how we run the business. For the first couple of years, dishes would always have to get the thumbs up from me before they hit the menu, but as we all grew and developed our style, I went on to open other spots, which took me away from time to time and things have to move on. If I'm away from the kitchen for more than a couple of weeks you can be sure that upon my return there will be a few beauties for me to taste. This is a fine example of what happens when people grow and become the creators. The success of every dish and recipe here is down to the collaborative kitchen teams led by Dean, Richie, Ben and Simon. Restaurants are like theatres: the band and the dancers. My mum told me that if there was an argument between the musicians and cast, the band would up the tempo to create havoc on stage. That story has always stuck with me. We have open kitchens and this has broken the barriers between us. We are a team and that's it – we are a family working towards a common goal. The front of house teams led by Dan, Lewis, Imants, Meri, Alessandra, Becca, Claire and Wesley are a force to be reckoned with. They run the floor with a smile, keeping everyone happy no matter what's broken or any flooding or electrics failing behind the scenes. They are the hardest working band in town! There are many people who contribute to the success of a restaurant, and many working quietly behind the scenes with little credit. They may add a small contribution but leave a huge impact, so a big shout out to Sarah, who looks after all our wild flower arrangements every week and has been with us since day one, and to Barbara, a huge-hearted character with a potty mouth who taught us how to work with bees and continues to work with bees in our area. And to my mum and Dean's mum, who applied their skills to hand-make our aprons and bread baskets. It's all in the detail! To all my chef mentors along my travels; Eoin McDonald, Paul from the Na Mara days, Derek Breen, Steve McAllister, Robert Reid, Alfonso Iaccarino, Agnar Sverrisson, Gary Jones and Raymond Blanc. To Damiano, who was with us from the start, guiding us through the world of wine, committed to being a driving force behind the less-than-smooth opening of The Dairy. We're lucky to still be working with him today. To Paul Winch-Furness, who has captured what we do so perfectly and made us look good! The best photographer for the job! A natural who just gets it without the need for any direction at all and a dream to work with. To the wonderful team at Absolute Press/Bloomsbury, especially Emily and Marie. 
Thanks for your creativity, complete understanding and patience along the way. To our brilliant editor, Norma MacMillan, for always keeping us on track throughout this entire process. Big shout out to Natalie for her beautiful illustrations. She works in The Dairy and has made many badass posters for us too! Watch this space is all I can say! To all the amazing people that have passed through our doors, come and gone and come back again. Your stamp on the place is still here and always will be. To our suppliers, who make our jobs a dream. You are the real heroes! To Emma McGettrick, without whom this book would never have made it to press. She has painstakingly pulled this book together from scraps, recipes in different languages, illegible handwriting, and grease-stained paper. Hats off. **Publisher** Jon Croft **Commissioning Editor** Meg Boas **Project Editor** Emily North **Art Director and Designer** Marie O'Mara **Photographer** Paul Winch-Furness **Photographer's Assistant** Allan Stone **Illustrations** Natalie Candlish **Copyeditor** Norma MacMillan **Proofreader** Rachel Malig **Indexer** Zoe Ross **ABSOLUTE PRESS** Bloomsbury Publishing Plc 50 Bedford Square, London, WC1B 3DP, UK This electronic edition published in 2018 by Bloomsbury Publishing Plc BLOOMSBURY, ABSOLUTE PRESS and the Absolute Press logo are trademarks of Bloomsbury Publishing Plc First published in Great Britain 2018 Copyright © Robin Gill, 2018 Photography © Paul Winch-Furness, 2018 Illustrations © Natalie Candlish, 2018 Robin Gill has asserted his right under the Copyright, Designs and Patents Act, 1988, to be identified as Author of this work. All rights reserved You may not copy, distribute, transmit, reproduce or otherwise make available this publication (or any part of it) in any form, or by any means (including without limitation electronic, digital, optical, mechanical, photocopying, printing, recording or otherwise), without the prior written permission of the publisher. Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages. Bloomsbury Publishing Plc does not have any control over, or responsibility for, any third-party websites referred to or in this book. All internet addresses given in this book were correct at the time of going to press. The author and publisher regret any inconvenience caused if addresses have changed or sites have ceased to exist, but can accept no responsibility for any such changes. A catalogue record for this book is available from the British Library. Library of Congress Cataloguing-in-Publication data has been applied for. ISBN: 978-1-4729-4854-0 (HB) ISBN: 978-1-4729-4855-7 (eBook) ISBN: 978-1-4729-4853-3 (ePDF) To find out more about our authors and their books please visit www.bloomsbury.com where you will find extracts, author interviews and details of forthcoming events, and to be the first to hear about latest releases and special offers, sign up for our newsletter. # 1. Cover 2. Title Page 3. Dedication 4. Contents 5. Introduction 6. The Larder 1. Fermented Vegetables, Fruits and Herbs 2. Meat, Fish and Game Preservation 3. Pickles and Jams 4. Dairy, Butters and Oils 5. Powders, Salts and Crisps 6. Stocks, Sauces and Seasonings 7. For The Table 1. Bases and Blends, Chef's Cocktails and Home Brews 2. Snacks 3. Garden 4. Sea 5. Land 6. Sweet 8. Acknowledgements 9. eCopyright
{ "redpajama_set_name": "RedPajamaBook" }
3,205
{"url":"https:\/\/mathoverflow.net\/questions\/259331\/hitting-time-of-a-specific-markov-chain-using-martingale-approach-or-otherwise","text":"# Hitting time of a specific Markov chain using martingale approach (or otherwise)\n\nLet $0 < c < 1$. Consider the Markov chain $(X_i)$ on $\\{0, 1, \\dots, n\\}$, with transition probabilities $$P(k,k+1) = \\left(1 - \\tfrac {k}{n} \\right)(1-c), \\quad k = 0, \\dots, n-1,$$ $$P(k,k-1) = \\tfrac{c k}{n}, \\quad k = 1, \\dots, n,$$ and with all remaining probability mass in $P(k,k)$ so that $\\sum_j P(k,j) = 1$. I am interested in the distribution of $\\tau_0$, the hitting time of $0$.\n\nI know a general trick to obtain $\\mathbb E_{l+1} \\tau_l$ (the subscript $l+1$ denoting the starting point of the chain) using the invariant distribution of a modification of $(X_i)$ which is reflecting in $l$; see (Levin, Peres, Wilmer, 2009, Section 2.5) for the essential idea (with state ordering reversed). Then $\\mathbb E_j \\tau_0 = \\sum_{l=0}^{j-1} \\mathbb E_{l+1} \\tau_l$. Unfortunately the resulting expression is very involved, but I am working on it. This is not the core of the problem.\n\nThe point is that I can make quite a lot of progress using a martingale approach in case $c > 1-\\tfrac 1 n$, say $c = 1-\\tfrac C n$ with $0 < C < 1$. As an example, the process $N_i = X_i \\wedge \\tau_0 + \\alpha(i \\wedge \\tau_0)$, with $\\alpha = (1-C)\/n$, can be seen to be a non-negative supermartingale, so that by the optional stopping theorem, $\\mathbb E_j[\\tau_0] \\leq \\frac{jn}{1-C}$ for all $j$. I can obtain much more precise results using this approach.\n\nIntriguingly, everything I try seems to fail when applying the martingale approach for $0 < c \\leq 1-\\tfrac 1 n$. For example, $$M_i := \\left( \\frac{n}{n-1} \\right)^i (X_i - n (1-c))$$ can be seen to be a martingale, but unfortunately the conditions of the optional stopping theorem will not be satisfied for this martingale. I have also looked at constants $\\alpha, \\gamma$ such that $$N_i := \\exp(-\\alpha X_i +\\gamma i)$$ is a non-negative supermartingale. 
However, in this case $\\gamma$ will be a negative constant resulting in only lower bounds on $\\mathbb E \\tau_0$ or $\\mathbb P_j(\\tau_0 > i)$.\n\nOne of the intuitions behind these troubles is that the Markov chain has a certain drift towards 0 in case $c > 1-\\tfrac 1 n$, whereas it seems to equilibrate around $n(1-c)$ in case $c \\leq 1 -\\tfrac 1 n$.\n\nTo summarize:\n\n\u2022 If you can please obtain upper bounds on $\\mathbb P_j(\\tau_0 \\geq i)$ or $\\mathbb E_j \\tau_0$ using a martingale approach, for $0 < c \\leq 1 - \\tfrac 1 n$, or else\n\u2022 Please explain why a martingale approach is doomed to fail for this problem.\n\nOther suggestions for this problem are also welcome, but the focus of this question is on the martingale approach.","date":"2019-08-22 04:12:42","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9540857672691345, \"perplexity\": 138.92157957061798}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027316718.64\/warc\/CC-MAIN-20190822022401-20190822044401-00317.warc.gz\"}"}
null
null
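A quick sanity check of the martingale claimed in the question above (this verification is an editorial addition, not part of the original post). Using only the stated transition probabilities, for any state $0 \le k \le n$,

$$\mathbb E[X_{i+1} \mid X_i = k] = k + \left(1 - \tfrac{k}{n}\right)(1-c) - \tfrac{ck}{n} = k + (1-c) - \tfrac{k}{n},$$

so that

$$\mathbb E[X_{i+1} - n(1-c) \mid X_i = k] = \left(1 - \tfrac{1}{n}\right)\big(k - n(1-c)\big).$$

Multiplying by $\left(\tfrac{n}{n-1}\right)^{i+1}$ gives $\mathbb E[M_{i+1} \mid X_i] = M_i$, confirming that $M_i$ is a martingale; the same computation makes the equilibrium level $n(1-c)$ mentioned in the question explicit.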
What roles and campaigns do video interviews work best for? I'm regularly asked by recruiters and HR directors what types of roles video interviewing works best for. It's a very pertinent question, and one with no simple right-or-wrong answer. It depends on a number of factors, such as the seniority of the role you're recruiting for or the size of your organisation.
{ "redpajama_set_name": "RedPajamaC4" }
2,232
{"url":"https:\/\/www.codingame.com\/training\/hard\/rocket-mice","text":"\u2022 12\n\n## Goal\n\nA colony of peaceable, space-faring mice has been invaded by cats. You must get the mice to your rocket so they can evade the evil cats' claws.\n\nPuzzle goal\n\nCalculate the final score for all players in the game, based on the moves made.\n\nScoring\n\nEach player has a rocket on a grid of squares. A player scores 1 point for each mouse that enters his rocket, and -10 points for each cat that enters his rocket. Animals can enter a rocket from any orthogonally adjacent square.\n\nA player's score can never go below zero.\n\nAnimal generation\n\nEach board has one or more doors on the edges of the grid. Animals enter through these doors and always move directly away from the door they entered through, in a direction perpendicular to the wall where the door is located. So, for example, if a mouse enters through a door on the northern edge of the grid, it will move south. Doors are never in the corners of the grid.\n\nAnimals always go straight until they leave the board (rocket, pit, or eaten) or are redirected (wall or arrow).\n\nEach door produces 1 animal every turn. Every 10th animal to enter through each door is a cat. The other 9 animals produced are mice.\n\nWalls and pits\n\nIf an animal hits a wall, it will turn 90 degrees counter-clockwise. Animals never exit through doors. They only enter through them. Otherwise, a door behaves like a wall.\n\nThe four squares in the corners of the grid are pits. If a mouse or a cat moves to one of these squares, they fall to their doom.\n\nEaten by cats\n\nMultiple mice can coexist on the same square, and multiple cats can coexist on the same square, but mice and cats don't mix. A mouse on the same square as a cat is eaten.\n\nIf a mouse and cat switch positions, i.e. mouse from (1,2) to (1,3) and cat from (1,3) to (1,2), the mouse is eaten in passing.\n\nArrows\n\nEach turn, one player places an arrow onto a square of the grid. Arrows point in any of four directions: N, E, S, or W. The players take turns placing arrows. Player 1 places on the first turn, then player 2 on the next turn, etc. Arrows cannot be placed onto another arrow, a rocket, or a pit.\n\nEach player can have a maximum of 3 arrows on the grid at a time. Once a player places his 4th arrow, then the first arrow he placed is removed. 
When his 5th is placed, his 2nd is removed, and so on.\n\nWhen an animal moves to a square with an arrow, it turns to face the direction indicated by the arrow.\n\nOrder of a turn\n\n1) An animal is produced behind each door, just off the grid, ready to enter the board.\n2) All animals move forward one square, simultaneously, in the direction they are facing.\n3) Mice score.\n4) Cats score.\n5) Mice are eaten.\n6) All animals on an arrow or facing a wall are redirected.\n7) The next player to play places an arrow.\n\nAfter the final arrow is placed on the last turn, steps 1-4 are executed one more time to determine the score.\nInput\nLine 1: Two space-separated integers indicating the width and height of the grid\nLine 2: An integer indicating the number of players in the game\nLine 3: An integer indicating the number of doors on the grid\nLine 4: An integer indicating the number of turns to be played\nNext players lines: Two space-separated integers (rX and rY) indicating the coordinates of each player's rocket\nNext doors lines: An integer indicating the x or y coordinate of the door, followed by the wall that the door is on, either N, E, S, or W\nNext turns lines: Two space-separated integers (tX and tY) indicating the coordinates where an arrow is placed, followed by a direction, either N, E, S, or W.\nOutput\nplayers lines: One line per player, indicating that player's score for the game\nConstraints\n5 \u2264 width = height \u2264 25\n2 \u2264 players \u2264 4\n1 \u2264 doors \u2264 16\n1 \u2264 turns \u2264 100\n0 \u2264 rX, tX \u2264 width - 1\n0 \u2264 rY, tY \u2264 height - 1\n(0, 0) is the north-west corner.\n(width-1, 0) is the north-east corner.\nExample\nInput\n5 5\n2\n2\n8\n1 2\n3 2\n2 E\n2 W\n2 0 W\n2 2 E\n1 0 S\n1 1 E\n2 1 W\n0 1 S\n0 3 E\n2 0 E\nOutput\n8\n8","date":"2019-02-20 10:38:02","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.26383402943611145, \"perplexity\": 1737.4723276903771}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-09\/segments\/1550247494694.1\/warc\/CC-MAIN-20190220085318-20190220111318-00628.warc.gz\"}"}
null
null
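The turn mechanics in the puzzle above are compact enough to pin down in a few lines. The following Java sketch is an editorial illustration, not part of the puzzle statement (the class and method names are invented); it encodes the 90-degree counter-clockwise wall turn, the every-10th-animal-is-a-cat rule, and the score floor at zero.

```java
// Editorial sketch of three rules from the statement; names are invented.
public class RocketMiceRules {
    // Ordered so that the next constant is a 90-degree counter-clockwise
    // turn (with north up): N -> W -> S -> E -> N.
    enum Dir {
        N, W, S, E;
        Dir turnCcw() { return values()[(ordinal() + 1) % 4]; }
    }

    // Every 10th animal produced by a door is a cat; the other 9 are mice.
    static boolean isCat(int animalNumber) { return animalNumber % 10 == 0; }

    // A mouse scores +1, a cat -10, and a score can never go below zero.
    static int updateScore(int score, boolean cat) {
        return Math.max(0, score + (cat ? -10 : 1));
    }

    public static void main(String[] args) {
        System.out.println(Dir.S.turnCcw());      // E: a south-bound animal hits a wall
        System.out.println(isCat(10));            // true
        System.out.println(updateScore(5, true)); // 0, not -5
    }
}
```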
Dugdinsky Selsoviet (Дугди́нский сельсове́т) is a municipal formation with the status of a rural settlement within Zeysky District of Amur Oblast, Russia. Its administrative centre is the settlement of Dugda. History The Dugdinsky Rural Soviet of People's Deputies was formed in 1984 in connection with the construction of the Baikal–Amur Mainline. On 31 October 2005, in accordance with Amur Oblast Law No. 73-OZ, the municipal formation was granted the status of a rural settlement. Population Composition of the rural settlement Notes External links The selsoviet on the Zeysky District website Municipal formations of Zeysky District Rural settlements of Amur Oblast
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,426
Dayton, OH - May 1, 2008 - The Plus Technologies division of Digital Controls today announced that its implementation of the OM Plus-based Color Control solution has realized savings that top $800,000 in the first 27 months for a large Midwestern insurance company. Initially, the customer estimated that the solution would save them around $700,000 over 36 months. At current cost-savings rates, the OM Plus Color Control solution will net total savings of over $1,000,000 in the 36-month timeframe. The OM Plus Color Control solution permits the insurance company to control which jobs are allowed to print in color at their 180+ offices around the United States. Important customer-facing documents need to be in color, so the insurance company implemented Lexmark color laser printers in all their offices as the primary print device for the office. However, all non-customer-facing documents needed to be in black and white to control costs. OM Plus manages those decisions on the fly in real time, so the end user does not have to: only corporately permitted documents have the color functionality turned on, and all other documents print in black and white. Excerpts from Customer Success Story* At the center of the proposal was the integration of a third-party color management technology, OM Plus from Plus Technologies, that would allow the company to specify the precise applications from which documents could be printed in color. It was not enough for the company to dictate which users could print in color because nearly all employees needed that capability. But those same employees should only print document types in color that were customer-facing. With Lexmark printers and our new ability to control the use of color [OM Plus from Plus Technologies], we are estimating savings of $200,000 in the first year and $700,000 over three years.
{ "redpajama_set_name": "RedPajamaC4" }
9,085
Q: Dynamically Generate HTML in ASP.NET I was interested to know whether or not asp.net allows us to dynamically generate HTML inline on the .aspx Source page (not the code-behind). For testing I created the following simple .aspx page... In my asp.net code-behind I have the following: protected List<string> myList = null; protected void Page_Load(object sender, EventArgs e) { if (myList == null) myList = new List<string>(); myList.Add("One String"); myList.Add("Two String"); myList.Add("Three String"); myList.Add("Four String"); this.Repeater1.DataSource = myList; this.Repeater1.DataBind(); } On the corresponding Source page I have: <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> </head> <body> <ol> <asp:Repeater ID="Repeater1" runat="server"> <ItemTemplate> <li> <%# DataBinder.GetDataItem(myList) %> </li> </ItemTemplate> </asp:Repeater> </ol> </body> </html> The resultant .aspx page is: <html xmlns="http://www.w3.org/1999/xhtml"> <head><title> </title></head> <body> <ol> <li></li> <li></li> <li></li> <li></li> </ol> </body> </html> Notice that the Repeater control did in fact create the four list items. However, the contents (One String, Two String, etc.) of the myList list did not come along for the ride. What do I need to do to evaluate the myList list and get its values inside the list item tags? By the way, I'm not concerned with how to use the Repeater control specifically, so if there is a solution to this problem that does not include the Repeater control, I'm fine with that. Note: I'm aware that I can bind the "myList" generic list to an asp:BulletedList and get the same result. I am more interested in dynamically creating HTML inline on the Source page.
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,623
Everyone sees what happens when stars hit the red carpet. They pose, they pout and they strut… right? But what goes down on the inside? Beyonce side-eyeing "Becky", Katy Perry canoodling with new beau Orlando Bloom and Taylor Swift and Tom Hiddleston battling it out in the biggest dance-off we've ever seen. Now, thanks to Valentino brand ambassador Carlos Souza, we can finally make an unbiased decision as to who's got the best 'Crazy in Love' moves… and who needs to go back to dancing in their bedrooms. This, of course, isn't the first time the world has been captivated by Tay's epic steps. Throwback to last month, when T-Swiz bopped around to her hot-bodied beau Calvin Harris' set at Coachella, proving (once again) that the couple who moves together, stays together. Keep those dance moves coming, Taylor… We're loving 'em!
{ "redpajama_set_name": "RedPajamaC4" }
6,384
MARA Swim International Women's Day in March 2018 was bigger than ever, and the hashtag #progressforchange went viral. Having launched their swimwear label MARA swim in 2017, Australian sisters Naomi Collings and Kirsty Parnell are two inspiring women embracing the #progressforchange in more ways than one. MARA swim has taken Australia and the world by storm with their luxurious sun-safe swimwear collection, incorporating bold prints, unique designs and rich protective fabrics to make women feel comfortable, protected, confident and fashionable. In the short time since launching, the two women have seen interest from international markets including the US West Coast, French Polynesia and Europe, have stock constantly selling out online and are currently working on a new summer collection to be released this year – doing all of this while being busy, dedicated mums, juggling small children and building a successful label.
"Our partnership with Melanoma Institute Australia builds confidence in our brand and gives us the ability to give back to a great organisation, with $1 from every purchase of any MARA swim product going to the institute. With this partnership also gives us updates on latest melanoma research, advancements and access to expert advice, as well as opportunities to participate in corporate events, and fundraisers for the charity," said Kristy. Kristy added, "AusMumpreneur is a movement aiming to recognise Mum's in business and we love it, so when we were nominated for the 2018 AusMumpreneur awards we were so thrilled. There are so many amazing women balancing motherhood and business in Australia and even around the world, so we are honoured. It is so empowering being a mother, a woman and a businesswoman and being nominated for this award is great motivation to keep moving forward towards our goals and success." "Juggling motherhood and a business can sometimes feel as though there is not enough time in the day, but this has helped build an ability to maximise every minute and see priorities much clearer than we did before. Living in two very different worlds existing at the same time might be difficult to understand, and it sounds impossible, but it can be done. It took time to get used to but now we live it every day. Mothers are some of the toughest people on this planet," said Naomi. MARA swim debuted the first collection "Au" in 2017, which has since been featured on runways at the Virgin Australia Fashion Festival Melbourne, the Mercedes Benz Fashion Festival Brisbane and Brisbane Fashion Month, which they have been invited back to all three to feature in for the second year. "Currently we handle sales exclusively through our web store, but we are now so excited to be at a stage where retailers want access to MARA swim for their stores, which is fantastic for us as not only business women but advocates for sun protection awareness. We are excited at the opportunity to extend MARA swims reach giving more women access to our beautiful sun protection range. Now to be invited back for a second time at the Brisbane Fashion Week, the Mercedes Benz Fashion Festival in Brisbane and the Virgin Australia Fashion Festival Melbourne is an exciting time for the label and reassures us that our product is having an impact on the market, and we cannot wait to showcase the new collection later this year as well as develop industry connections. Last year we were absolutely pinching ourselves to be given such an amazing opportunity," said Kristy. As a brand, MARA swim entices all to change their perspective on the importance of sun protection – not just women. By creating a luxurious and unique range of long sleeve one-pieces, swim tops and swim pants, they are providing not only beautiful swimwear but also a lifesaving gift – which is the label's greatest achievement. See more at https://www.maraswim.com.au Instagram and Facebook @maraswimau. Fashion No Comments » « Buckle | 1922 – Handcrafted With Care (Previous News) (Next News) Operation Finale – official trailer » Husk Summer 2021 Collection – Archipelago Under the name Achipelago, Husk presents its latest collection for Summer 2021, inspired by theRead More Latest looks from Honey Birdette Check out these outstanding new looks from Australia's Honey Birdette. 
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
Q: write hash_map as a json file into a zip file in java

I need to append a hash map as a json file into a zip file as a new zip entry. I tried the following code:

Map<String, String> map1 = new HashMap<String, String>();
map1.put("a","b");
zipOutputStream.putNextEntry(new ZipEntry("info.json"));
ByteArrayOutputStream ba = new ByteArrayOutputStream();
ObjectOutputStream oba = new ObjectOutputStream(zipOutputStream);
oba.writeObject(map1);
oba.close();

but after executing the above code, the contents of the json in the zip file are not proper.
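A: The snippet writes Java's binary object-serialization format, not JSON: ObjectOutputStream.writeObject() emits serialized object bytes (plus a stream header), which is why the info.json entry does not contain readable JSON. Closing the ObjectOutputStream also closes the underlying zipOutputStream, so no further entries could be added. One way to fix it is to serialize the map to a JSON string and write the bytes directly into the entry. A minimal sketch, assuming the Gson library is on the classpath (any JSON serializer works) and an arbitrary output file name out.zip:

import com.google.gson.Gson; // assumption: Gson is available on the classpath
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipJsonSketch {
    public static void main(String[] args) throws Exception {
        Map<String, String> map1 = new HashMap<>();
        map1.put("a", "b");

        // try-with-resources closes the zip stream once all entries are written
        try (ZipOutputStream zipOutputStream =
                new ZipOutputStream(new FileOutputStream("out.zip"))) {
            zipOutputStream.putNextEntry(new ZipEntry("info.json"));
            // Serialize the map to JSON text ({"a":"b"}) and write the raw bytes
            String json = new Gson().toJson(map1);
            zipOutputStream.write(json.getBytes(StandardCharsets.UTF_8));
            // Finish this entry but keep the zip stream open for further entries
            zipOutputStream.closeEntry();
        }
    }
}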
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,807
{"url":"https:\/\/intl.siyavula.com\/read\/maths\/grade-11\/functions\/05-functions-02","text":"We think you are located in United States. Is this correct?\n\n# Don't get left behind\n\nJoin thousands of learners improving their maths marks online with Siyavula Practice.\n\nWe notice that the gradient of a curve changes at every point on the curve, therefore we need to work with the average gradient. The average gradient between any two points on a curve is the gradient of the straight line passing through the two points.\n\nFor the diagram above, the gradient of the line $$AC$$ is \\begin{align*} \\text{Gradient }&= \\frac{y_A - y_C}{x_A - x_C} \\\\ &= \\frac{7-(-1)}{-3-(-1)} \\\\ &= \\frac{8}{-2} \\\\ &= -4 \\end{align*} This is the average gradient of the curve between the points $$A$$ and $$C$$.\n\nWhat happens to the gradient if we fix the position of one point and move the second point closer to the fixed point?\n\n## Gradient at a single point on a curve\n\nThe curve shown here is defined by $$y=-2{x}^{2}-5$$. Point $$B$$ is fixed at $$\\left(0;-5\\right)$$ and the position of point $$A$$ varies.\n\nComplete the table below by calculating the $$y$$-coordinates of point $$A$$ for the given $$x$$-coordinates and then calculating the average gradient between points $$A$$ and $$B$$.\n\n $$x_A$$ $$y_A$$ Average gradient $$-\\text{2}$$ $$-\\text{1,5}$$ $$-\\text{1}$$ $$-\\text{0,5}$$ $$\\text{0}$$ $$\\text{0,5}$$ $$\\text{1}$$ $$\\text{1,5}$$ $$\\text{2}$$\n1. What happens to the average gradient as $$A$$ moves towards $$B$$?\n2. What happens to the average gradient as $$A$$ moves away from $$B$$?\n3. What is the average gradient when $$A$$ overlaps with $$B$$?\n\nIn the example above, the gradient of the straight line that passes through points $$A$$ and $$C$$ changes as $$A$$ moves closer to $$C$$. At the point where $$A$$ and $$C$$ overlap, the straight line only passes through one point on the curve. This line is known as a tangent to the curve.\n\nWe therefore introduce the idea of the gradient at a single point on a curve. The gradient at a point on a curve is the gradient of the tangent to the curve at the given point.\n\n## Worked example 7: Average gradient\n\n1. Find the average gradient between two points $$P\\left(a;g(a)\\right)$$ and $$Q\\left(a+h;g(a+h)\\right)$$ on a curve $$g(x)={x}^{2}$$.\n2. Determine the average gradient between $$P\\left(2;g(2)\\right)$$ and $$Q\\left(5;g(5)\\right)$$.\n3. 
Explain what happens to the average gradient if $$Q$$ moves closer to $$P$$.\n\n### Assign labels to the $$x$$-values for the given points\n\n\\begin{align*} x_1 &= a \\\\ x_2 &= a + h \\end{align*}\n\n### Determine the corresponding $$y$$-coordinates\n\nUsing the function $$g(x) = x^2$$, we can determine: $y_1 = g(a) = a^2$ \\begin{align*} y_2 &= g(a + h) \\\\ &= (a + h)^2 \\\\ &= a^2 + 2ah + h^2 \\end{align*}\n\n\\begin{align*} \\frac{{y}_{2}-{y}_{1}}{{x}_{2}-{x}_{1}} &= \\frac{\\left({a}^{2}+2ah+{h}^{2}\\right)-\\left({a}^{2}\\right)}{\\left(a+h\\right)-\\left(a\\right)} \\\\ &= \\frac{{a}^{2}+2ah+{h}^{2}-{a}^{2}}{a+h-a} \\\\ &= \\frac{2ah+{h}^{2}}{h} \\\\ &= \\frac{h(2a+h)}{h} \\\\ &= 2a+h \\end{align*} The average gradient between $$P\\left(a;g(a)\\right)$$ and $$Q\\left(a+h;g(a+h)\\right)$$ on the curve $$g(x)={x}^{2}$$ is $$2a+h$$.\n\n### Calculate the average gradient between $$P\\left(2;g(2)\\right)$$ and $$Q\\left(5;g(5)\\right)$$\n\nThe $$x$$-coordinate of $$P$$ is $$a$$ and the $$x$$-coordinate of $$Q$$ is $$a+h$$ therefore if we know that $$a=2$$ and $$a+h=5$$, then $$h=3$$.\n\n$$2a+h=2\\left(2\\right)+\\left(3\\right)=7$$\n\n### When $$Q$$ moves closer to $$P$$\n\nWhen point $$Q$$ moves closer to point $$P$$, $$h$$ gets smaller.\n\nWhen the point $$Q$$ overlaps with the point $$P$$, $$h=0$$ and the gradient is given by $$2a$$.\n\nWe can write the equation for average gradient in another form. Given a curve $$f(x)$$ with two points $$P$$ and $$Q$$ with $$P\\left(a;f(a)\\right)$$ and $$Q\\left(a+h;f(a+h)\\right)$$. The average gradient between $$P$$ and $$Q$$ is: \\begin{align*} \\text{Average gradient }&= \\frac{{y}_{Q}-{y}_{P}}{{x}_{Q}-{x}_{P}} \\\\ &= \\frac{f(a+h)-f(a)}{\\left(a+h\\right)-\\left(a\\right)} \\\\ &= \\frac{f(a+h)-f(a)}{h} \\end{align*} This result is important for calculating the gradient at a point on a curve and will be explored in greater detail in Grade $$\\text{12}$$.\n\n## Worked example 8: Average gradient\n\nGiven $$f(x) = -2x^2$$.\n\n1. Draw a sketch of the function and determine the average gradient between the points $$A$$, where $$x = 1$$, and $$B$$, where $$x = 3$$.\n\n2. Determine the gradient of the curve at point $$A$$.\n\n### Examine the form of the equation\n\nFrom the equation we see that $$a < 0$$, therefore the graph is a \u201cfrown\u201d and has a maximum turning point. 
We also see that when $$x = 0$$, $$y = 0$$, therefore the graph passes through the origin.\n\n### Calculate the average gradient between $$A$$ and $$B$$\n\n\\begin{align*} \\text{Average gradient} &= \\frac{f(3) - f(1)}{3 - 1} \\\\ &= \\frac{-2(3)^2 - (-2(1)^2)}{2} \\\\ &= \\frac{-18 + 2}{2} \\\\ &= \\frac{-16}{2} \\\\ &= -8 \\end{align*}\n\n### Calculate the average gradient for $$f(x)$$\n\n\\begin{align*} \\text{Average gradient} &= \\frac{f(a + h) - f(a)}{(a + h) - a} \\\\ &= \\frac{-2(a + h)^2 - (-2a^2)}{h} \\\\ &= \\frac{-2a^2 - 4ah - 2h^2 + 2a^2}{h} \\\\ &= \\frac{-4ah - 2h^2}{h} \\\\ &= \\frac{h(-4a - 2h)}{h} \\\\ &= -4a - 2h \\end{align*} At point $$A$$, $$h = 0$$ and $$a = 1$$.\n\nTherefore \\begin{align*} \\text{Average gradient} &= -4a - 2h \\\\ &= -4(1) - 2(0) \\\\ &= -4 \\end{align*}","date":"2020-10-20 19:40:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 1.0000100135803223, \"perplexity\": 319.6879803272542}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-45\/segments\/1603107874135.2\/warc\/CC-MAIN-20201020192039-20201020222039-00499.warc.gz\"}"}
{"url":"http:\/\/mathhelpforum.com\/pre-calculus\/4082-radioactive-print.html","text":"\u2022 July 10th 2006, 08:28 AM\nkwtolley\nThe amount of a radioactive tracer remaining after t days is given by A=Ao e^-0.058t, where Ao is the starting amount at the beginning of the time period. How many days will it take for one half of the original amount to decay? My answer is 12 days is that right. Thanks for looking at my answer.\n\u2022 July 10th 2006, 08:34 AM\nCaptainBlack\nQuote:\n\nOriginally Posted by kwtolley\nThe amount of a radioactive tracer remaining after t days is given by A=Ao e^-0.058t, where Ao is the starting amount at the beginning of the time period. How many days will it take for one half of the original amount to decay? My answer is 12 days is that right. Thanks for looking at my answer.\n\nRonL\n\u2022 July 10th 2006, 09:08 AM\nkwtolley\nmissing table\nLooks like I'm missing a table I need to use with this question. I need to find half - life Bismuth isotope Ao. Then maybe I can get a better answer. Thanks for your help.\n\u2022 July 10th 2006, 10:14 AM\ntopsquark\nQuote:\n\nOriginally Posted by kwtolley\nLooks like I'm missing a table I need to use with this question. I need to find half - life Bismuth isotope Ao. Then maybe I can get a better answer. Thanks for your help.\n\nYou don't need a value for A0. Here's a hint: What is the definition of the term \"half life?\"\n\n-Dan\n\u2022 July 10th 2006, 12:06 PM\nSoroban\nHello, kwtolley!\n\nYou probably did it correctly . . . it's hard to see your work from here.\n\nQuote:\n\nThe amount of a radioactive tracer remaining after t days is given by: $A = A_o e^{-0.058t}$\nwhere $A_o$ is the starting amount at the beginning of the time period.\nHow many days will it take for one half of the original amount to decay?\n\nThe question is: When is: $A = \\frac{1}{2}A_o$ ?\n\nThe equation is: . $A_oe^{-0.058t} \\:=\\:\\frac{1}{2}A_o\\quad\\Rightarrow\\quad e^{-0.058t}\\:=\\:\\frac{1}{2}$\n\nTake logs: . $\\ln\\left(e^{-0.058t}\\right) \\:= \\:\\ln(0.5)\\quad\\Rightarrow\\quad (-0.058t)\\ln e\\:=\\:\\ln(0.5)$\n\nTherefore: . $t \\:= \\:\\frac{\\ln(0.5)}{-0.058} \\;=\\;11.95081346\\:\\approx\\;12$\n\n\u2022 July 11th 2006, 08:02 AM\nkwtolley\nthank you\nThanks for checking my problem.","date":"2016-07-27 23:04:24","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 6, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9174588322639465, \"perplexity\": 837.7142428167268}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-30\/segments\/1469257827080.38\/warc\/CC-MAIN-20160723071027-00268-ip-10-185-27-174.ec2.internal.warc.gz\"}"}
Q: Is there ErrorCollector rule analogue for JUnit5

There was the ErrorCollector Rule in JUnit4, and now we have to switch to extensions during migration to JUnit5. Usage of the ErrorCollector is described here: https://junit.org/junit4/javadoc/4.12/org/junit/rules/ErrorCollector.html

Is there a similar extension in JUnit5? I found one in AssertJ, https://www.javadoc.io/doc/org.assertj/assertj-core/latest/org/assertj/core/api/junit/jupiter/SoftAssertionsExtension.html, but is the same thing still supported in JUnit 5 as an extension?

Note: I would like to use this on a system testing level, so I would have Step 1 -> assertion -> Step 2 -> assertion -> ...

assertAll is, in my opinion, the worse option here, as I would have to store values for verification and assert them at the end of the test, not in the places where I obtained these values:

assertAll(
    () -> {
        {Some block of code getting variable1}
        assertEquals({what we expect from variable1}, variable1, "variable1 is wrong");
    },
    () -> {
        {Some block of code getting variable2}
        assertEquals({what we expect from variable2}, variable2, "variable2 is wrong");
    },
    () -> {
        {Some block of code getting variable3}
        assertEquals({what we expect from variable3}, variable3, "variable3 is wrong");
    });

This approach doesn't look clear and looks worse than the one described here: https://assertj.github.io/doc/#assertj-core-junit5-soft-assertions

A: For now I see that the best way is to use AssertJ like this:

@ExtendWith(SoftAssertionsExtension.class)
public class SoftAssertionsAssertJBDDTest {

    @InjectSoftAssertions
    BDDSoftAssertions bdd;

    @Test
    public void soft_assertions_extension_bdd_test() {
        // Some block of code getting variable1
        bdd.then(variable1).as("variable1 is wrong").isEqualTo({what we expect from variable1});
        // Some block of code getting variable2
        bdd.then(variable2).as("variable2 is wrong").isEqualTo({what we expect from variable2});
        // Some block of code getting variable3
        bdd.then(variable3).as("variable3 is wrong").isEqualTo({what we expect from variable3});
        ...
    }
}

or

@ExtendWith(SoftAssertionsExtension.class)
public class SoftAssertionsAssertJTest {

    @Test
    public void soft_assertions_extension_test(SoftAssertions softly) {
        // Some block of code getting variable1
        softly.assertThat(variable1).as("variable1 is wrong").isEqualTo({what we expect from variable1});
        // Some block of code getting variable2
        softly.assertThat(variable2).as("variable2 is wrong").isEqualTo({what we expect from variable2});
        // Some block of code getting variable3
        softly.assertThat(variable3).as("variable3 is wrong").isEqualTo({what we expect from variable3});
        ...
    }
}

It looks more understandable than writing many steps with verification in one line.

A: Jupiter's assertAll comes closest: https://junit.org/junit5/docs/current/user-guide/#writing-tests-assertions. It allows several assertion statements to be executed and all of their results reported together. E.g.:

@Test
void groupedAssertions() {
    // In a grouped assertion all assertions are executed, and all
    // failures will be reported together.
    assertAll("person",
        () -> assertEquals("Jane", person.getFirstName()),
        () -> assertEquals("Doe", person.getLastName())
    );
}
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,652
package main

// explicit conversion of constants
var x1 = string(1)
var x2 string = string(1)
var x3 = int(1.5)     // ERROR "convert|truncate"
var x4 int = int(1.5) // ERROR "convert|truncate"
var x5 = "a" + string(1)
var x6 = int(1e100)      // ERROR "overflow"
var x7 = float32(1e1000) // ERROR "overflow"

// implicit conversions merit scrutiny
var s string
var bad1 string = 1  // ERROR "conver|incompatible|invalid|cannot"
var bad2 = s + 1     // ERROR "conver|incompatible|invalid"
var bad3 = s + 'a'   // ERROR "conver|incompatible|invalid"
var bad4 = "a" + 1   // ERROR "literals|incompatible|convert|invalid"
var bad5 = "a" + 'a' // ERROR "literals|incompatible|convert|invalid"
var bad6 int = 1.5       // ERROR "convert|truncate"
var bad7 int = 1e100     // ERROR "overflow"
var bad8 float32 = 1e200 // ERROR "overflow"

// but these implicit conversions are okay
var good1 string = "a"
var good2 int = 1.0
var good3 int = 1e9
var good4 float64 = 1e20

// explicit conversion of string is okay
var _ = []rune("abc")
var _ = []byte("abc")

// implicit is not
var _ []int = "abc"  // ERROR "cannot use|incompatible|invalid"
var _ []byte = "abc" // ERROR "cannot use|incompatible|invalid"

// named string is okay
type Tstring string

var ss Tstring = "abc"
var _ = []rune(ss)
var _ = []byte(ss)

// implicit is still not
var _ []rune = ss // ERROR "cannot use|incompatible|invalid"
var _ []byte = ss // ERROR "cannot use|incompatible|invalid"

// named slice is now ok
type Trune []rune
type Tbyte []byte

var _ = Trune("abc") // ok
var _ = Tbyte("abc") // ok

// implicit is still not
var _ Trune = "abc" // ERROR "cannot use|incompatible|invalid"
var _ Tbyte = "abc" // ERROR "cannot use|incompatible|invalid"
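// Note: the // ERROR "..." annotations above appear to follow the Go
// toolchain's "errorcheck" test convention: each one marks a declaration the
// compiler is expected to reject, with the |-separated alternatives matching
// the acceptable diagnostic text. The file as a whole is therefore expected
// to fail to compile.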
{ "redpajama_set_name": "RedPajamaGithub" }
1,807
\section{Introduction and Motivating Example}

Hierarchical models, often known as multilevel, mixed, or random effects models, are ubiquitous in the social sciences (\citealt{gelman2006multi,skrondal2008multilevel}). In political science alone, these models are used for addressing unobserved heterogeneity, explicitly modeling dependence between observations, allowing effects to vary across space or time, and many other applications (e.g. \citealt{clark2015fe,bell2015explaining,steenbergen2002multilevel,stegmueller2013multilevel}). They are also popular in other fields such as educational research and psychology.

The benefits and challenges of these models can be illustrated by an increasingly popular application for survey research in social science: Multilevel Regression and Post-Stratification (MRP; \citealt{gelman1997mrp,park2004mrp,gao2019improving}). Described in more detail in Section~\ref{section:gg_app}, the core purpose of this method is to extrapolate outcomes from nationally representative surveys to small geographic areas with limited data (e.g. city, state, or legislative district) using (i) a rich hierarchical model fit on the national survey and the (usually) binary or binomial outcome and (ii) post-stratification of predicted values based on the underlying population. This method has been widely applied to a variety of questions such as measuring public opinion on a wide range of policies, examining ideology at the city level, and exploring determinants of vote choice and turnout decisions (e.g. \citealt{ghitza2013mrp,lax2009gay,lax2012deficit,buttice2013mrp,tausanovitch2014representation}). Early applications of these models usually additively included a reasonable number of non-nested effects (e.g. four), but subsequent work noted the inability of such models to capture the rich complexity of the data (\citealt{ghitza2013mrp}). That paper increased the complexity of the model substantially by using \emph{eighteen} mostly non-nested random effects, thus specifying a model with thousands of parameters. More broadly, the idea of using a more complex model has led to a variety of papers implementing richer hierarchical models (\citealt{gelman2016using,gao2019improving}) or relying on machine learning methods (\citealt{bisbee2019barp,ornstein2019stacked,goplerud2018sparse}). Regardless of whether one relies on a ``traditional'' MRP or a recent extension, it is clear that comparing multiple specifications in a principled way is fundamental to performing reliable inference. Given the long history and popularity of using traditional hierarchical models when performing MRP, it is essential that there is a method to fit those models reliably and quickly, given the computational constraints facing many practitioners.

Unfortunately, inference for non-linear hierarchical models---especially at the complexity needed to be competitive with machine learning alternatives---can be challenging, as the likelihood function contains an intractable, high-dimensional integral. There are two popular methods for applied researchers (\citealt{stegmueller2013multilevel}): First, one can approximate the integral numerically (e.g. \citealt{bates2015lmer,rabehesketh2004gllamm}). Second, one can use a fully Bayesian approach and sample from the joint distribution of all of the parameters of the model (e.g. \citealt{carpenter2017stan}).
The key downside of these methods is that they can be slow even on modestly sized problems, and thus it is challenging to get estimates of reasonable quality in a reasonable amount of time. This is a problem of ``scalability'' to the large and complex models required for many empirical applications. A key downside of non-scalable models is that common techniques such as $K$-fold cross-validation or bootstrapping are prohibitively expensive. This paper makes two contributions to tackling this problem. First, I outline a series of new variational algorithms based on Polya-Gamma augmentation that allow coordinate ascent variational inference to be implemented for binomial logistic regression for an arbitrary number of (non-nested) random effects while imposing only a mean-field factorization assumption. This extends existing work on variational methods for this class of model, as there does not appear to be a tailored algorithm to estimate models with more than two non-nested random effects.\footnote{Generic methods for variational inference, e.g. stochastic variational inference or automatic differentiation variational inference (ADVI; \citealt{kuckelbir2017advi}), can be applied to most models, including hierarchical ones. I compare ADVI against my ``tailored'' algorithms and show it performs worse.} Further, the algorithm can be implemented without assuming independence between the ``fixed'' (i.e. fully pooled) and random effects. Second, I outline a generic procedure for improving an initial variational approximation when a parameter expansion of the underlying Bayesian model exists. I do this by drawing a connection to ``marginal augmentation'' from the Markov Chain Monte Carlo literature (e.g. \citealt{liu1999parameter,van2001art}) and showing that this parameter expansion often permits a nearly costless improvement of the initial approximation. The method (``marginally augmented variational Bayes''---MAVB) transforms the initial approximation by sampling the expansion parameter and re-transforming the original samples while maintaining the stationarity of the target posterior. This induces dependencies between the parameters that were assumed away in estimating the initial approximation and provides a provably \emph{guaranteed} improvement upon the original approximation. Methodologically, this pushes forward the literature on variational inference for hierarchical models by extending work in the case of a single random effect (\citealt{hall2011poisson,ormerod2012gva,tan2013variational,hall2019fast}) or two non-nested random effects (\citealt{jeon2017variational,menictas2019streamlined}) to the general case. The proposed method requires no integration, unlike many existing methods for binary outcomes (\citealt{ormerod2012gva,tan2013variational,jeon2017variational}). It further provides a link to existing work that seeks to combine Markov Chain Monte Carlo and variational inference by stochastic optimization (e.g. \citealt{salimans2015markov,ruiz2019contrastive,yin2018semi}). Instead of optimizing the transformed density, MAVB transforms the samples from the initial approximation with a partial step of MCMC using marginal augmentation that, in practice, appears as performing a stochastic location/scale transformation of the sampled parameters. This leverages a sampler that is known to mix well in the case of fully Bayesian MCMC and lacks internal tuning parameters; its primary goal is to find a computationally inexpensive way to improve an initial approximation.
While it bears some similarities to work on re-parameterization in hierarchical models for variational algorithms (e.g. \citealt{tan2013variational,tan2021rvb}), it does not fix the re-parameterization in advance of estimation. It differs from other approaches that seek to improve an initial approximation (e.g. linear response variational Bayes; \citealt{giordano2015lrvb}) in that it has a guarantee on improving the approximation quality. Future work could examine how such methods work alongside Polya-Gamma data augmentation. The remainder of the paper proceeds as follows. Section~\ref{section:CAVI} states multiple factorization assumptions under which Polya-Gamma augmentation can be used to estimate a variational approximation for a binomial logistic hierarchical model. Section~\ref{section:MAVB} links parameter expansion to variational Bayes and explains MAVB formally. Sections~\ref{section:simulations} and~\ref{section:gg_app} conduct simulations and examine performance on the empirical example (\citealt{ghitza2013mrp}). The latter shows dramatic gains in speed: Even after applying MAVB and drawing 4,000 samples, the fastest variational algorithm is nearly 60 times faster than Laplace approximation and nearly 350 times faster than Hamiltonian Monte Carlo for the most complex models. This reduces the run time from hours to minutes. All variational methods well-recover the posterior means. While the strongest factorization assumptions have poor performance in terms of estimating the posterior variance, applying MAVB corrects a large amount of the problem. Section~\ref{section:gg_app} then uses this algorithm to engage in model comparison that was computationally infeasible in \cite{ghitza2013mrp}. I perform 10-fold cross-validation across nine models ranging from having four to 18 random effects and thousands of parameters. The process takes around 30 minutes compared to the hours needed to fit even a single model once using existing approaches. The results provide some evidence of over-fitting in the original specification suggesting that the most complex model does not outperform models of intermediate complexity. I use this to draw out some guidance for practitioners of MRP in other substantive domains. \section{Mean-Field Variational Inference for Binomial Hierarchical Models} \label{section:CAVI} I focus on the following generative model that is broader than MRP but also captures the majority of applications. For each observation $i \in \{1, \cdots, N\}$, the researcher observes $y_i$ ``successes'' out of $n_i$ trials (e.g. how many individuals in a population of size $n_i$ turn out to vote). I model this using a binomial distribution with probability of success $p_i$ defined via a linear predictor ($\psi_i$) put through a logistic link. Equation~\ref{eq:model_gendesign} expresses this model using a ``general design'' notation (\citealt{zhao2006mlm}). Appendix~\ref{section_app:pg} shows the model using \citet{gelman2006multi}'s notation and a plate diagram. 
\begin{subequations} \label{eq:model_gendesign} \begin{alignat}{2} &y_i | \bm{\beta}, \bm{\alpha} \sim \mathrm{Binom}(n_i, p_i), \quad p_i = \frac{\exp(\psi_i)}{1+\exp(\psi_i)}, \quad \psi_i = \bm{x}_i^T\bm{\beta} + \bm{z}_i^T\bm{\alpha} \\ &\bm{\alpha}_{j} | \bm{\Sigma}_j \sim N\left(\bm{0}, \bm{I}_{g_j} \otimes \bm{\Sigma}_j\right), \quad \bm{\Sigma}_j \sim \mathrm{IW}(\nu_j, \bm{\Phi}_j), \quad p(\bm{\beta}) \propto 1 \\ & \bm{z}_{i,j} = \bm{m}_{i,j} \otimes \bm{z}^b_{i,j}, \quad \bm{\alpha}^T = [\bm{\alpha}^T_1, \cdots, \bm{\alpha}^T_J], \quad \bm{z}_i^T = [\bm{z}_{i,1}^T, \cdots, \bm{z}_{i,J}^T] \end{alignat} \end{subequations} As is standard in hierarchical models, the linear predictor consists of $p$ ``fixed'' effects: $\bm{x}_i \in \mathbb{R}^p$. The hierarchical component contains $J$ random effects indexed from $j \in \{1, \cdots, J\}$. For each random effect $j$, there is a $d_j$ dimensional covariate vector indexed by $\bm{z}^b_{i,j}$ where $\bm{z}^{b}_{i,j} = 1$ represents the ubiquitous ``random intercept.'' Each random effect has $g_j$ groups and each observation $i$ is assigned to exactly one group for each random effect; define its membership for random effect $j$ as a one-hot vector $\bm{m}_{i,j} \in \{0, 1\}^{g_j}$. The notation in Equation~\ref{eq:model_gendesign} stacks together the hierarchical components as follows; first, for each random effect $j$, $\bm{z}_{i,j}$ represents a $d_j \times g_j$ length vector (mostly sparse) by the Kronecker product ($\otimes$) of the group membership vector $\bm{m}_{i,j}$ and the base covariate. This repeats $\bm{z}^{b}_{i,j}$ once in the position corresponding to the group of which $i$ is a member for random effect $j$. This allows us to model the distribution of the entire parameter vector for random effect $j$ ($\bm{\alpha}_j \in \mathbb{R}^{g_j \cdot d_j}$) as a multivariate normal with a block diagonal matrix where each block is given an identical Inverse Wishart prior as noted in Equation~\ref{eq:model_gendesign}b ($\bm{\Sigma}_j \in \mathbb{R}^{d_j \times d_j}; \bm{\Phi}_j \in \mathbb{R}^{d_j \times d_j}$). Using such priors is standard in the literature on variational inference for hierarchical models (e.g. \citealt{tan2013variational}), although extensions to more weakly informative priors are possible (e.g. \citealt{huang2013simple}). The compact notation in Equation~\ref{eq:model_gendesign}a stacks together all random effects $j$ into a single vector $\bm{z}_i \in \mathbb{R}^{\sum_{j=1}^J g_j \cdot d_j}$ that is highly sparse. It thus accommodates designs with arbitrary patterns of crossing (non-nesting) amongst the $J$ random effects. A key distinguishing feature of this model as applied to MRP is that $J$ can be large (e.g. greater than ten) and $g_j$ ranges widely from a handful up to over a thousand (e.g. $g_j = 4$ for ethnicity and $g_j = 1,020$ for state-ethnicity-age combinations in \citealt{ghitza2013mrp}). In most applications for MRP, $d_j = 1$ and $\bm{z}^{b}_{i,j} = 1$ (random intercept) but sometimes $d_j = 2$ in the case of a random slope and intercept (\citealt{gelman2006multi}). Regarding the other parameters, for most applications of MRP, $N$ is often relatively modest given post-stratification requirements (see Section~\ref{section:gg_app}) and that surveys can be collapsed into units with identical state-demographic covariates by allowing varying $n_i$. Thus, in many studies, $N$ can be made smaller than 10,000 (e.g. below 5,000 in \citealt{park2004mrp,ghitza2013mrp}). 
The size of $\bm{\beta}$ ($p$) is also usually modest and below ten. By using Polya-Gamma augmentation, the model in Equation~\ref{eq:model_gendesign} can be rendered conditionally conjugate, enabling the straightforward application of numerous standard algorithms for Bayesian inference (\citealt{polson2013polyagamma}). Specifically, Equation 2 from \citet{polson2013polyagamma} states that for any $a, b > 0$ the following identity holds, where $f_{PG}(\omega | b, c)$ denotes the Polya-Gamma density with parameters $b$ and $c$. The definition of a Polya-Gamma variable as a weighted infinite convolution of Gamma random variables is also shown. \begin{subequations} \begin{alignat}{2} \frac{\exp(\psi)^a}{\left[1+\exp(\psi)\right]^b} = 2^{-b} \int \exp(s \psi - \psi^2/2 \omega) f_{PG}(\omega | b, 0) d\omega, \quad s = a - b/2 \\ \omega \sim PG(b,c) \coloneqq \omega = \frac{1}{2\pi^2} \sum_{k=1}^\infty \frac{Z_k}{(k-1/2)^2 + c^2/(4\pi^2)}, \quad Z_k \overset{i.i.d.}{\sim} \mathrm{Gamma}(b, 1) \end{alignat} \end{subequations} Thus, the complete data likelihood can be expressed as follows where $\bm{\Omega}$ denotes the $N \times N$ diagonal matrix of the corresponding $\omega_i$, $\bm{X}$, $\bm{Z}$ stack the data for each observation into a $N \times p$ and $N \times \sum_{j=1}^{J} d_j g_j$ design matrices, and $\bm{s}$ is a $N \times 1$ vector with $[\bm{s}]_i = y_i - n_i/2$. \begin{equation} \begin{split} p(\bm{y}, \bm{\Omega} | \bm{\alpha}, \bm{\beta}) &\propto \exp\left(\bm{s}^T [\bm{X}\bm{\beta} + \bm{Z} \bm{\alpha}] - \frac{1}{2} \left[\bm{X}\bm{\beta} + \bm{Z}\bm{\alpha}\right]^T\bm{\Omega} \left[\bm{X}\bm{\beta} + \bm{Z}\bm{\alpha}\right]\right) \prod_{i=1}^N f_{PG}(\omega_i | n_i, 0) \end{split} \end{equation} Noting the result from \citet{polson2013polyagamma} that the full conditional of $\omega_i | \bm{y}, \bm{\alpha}, \bm{\beta}, \{\bm{\Sigma}_j\}_{j=1}^J$ has a Polya-Gamma distribution $PG(n_i, \bm{x}_i^T\bm{\beta} + \bm{z}_i^T\bm{\alpha})$, it immediately follows that a Gibbs Sampler exists to sample all of the parameters in the model where the full conditionals on $\bm{\beta}$ and $\bm{\alpha}$ are normal and $\bm{\Sigma}_j$ is Inverse Wishart. \subsection{Variational Inference} The first contribution of this paper is to use the Polya-Gamma representation above to find a tractable variational algorithm to approximate the joint posterior of $p(\bm{\beta}, \bm{\alpha}, \{\bm{\Sigma}_j\}, \bm{\Omega} | \bm{y})$ and thus the joint posterior on the parameters excluding $\bm{\Omega}$. \citet{blei2017vi} provides a recent review of these methods. Equation~\ref{eq:vi_def} formulates the problem where $\mathcal{X}$ denotes some (restricted) set of distributions to optimize over. It can be equivalently expressed as finding the closest distribution in $\mathcal{X}$ to the true posterior in terms of KL-divergence. For notational simplicity, denote $\bm{\theta} = \{\bm{\beta}, \bm{\alpha}, \{\bm{\Sigma}_j\}_{j=1}^J, \bm{\Omega}\}$. \begin{equation} \label{eq:vi_def} q^*(\bm{\theta}) = \argmax_{q(\bm{\theta}) \in \mathcal{X}} \mathrm{ELBO}_{q(\bm{\theta})} \quad \mathrm{where} \quad \mathrm{ELBO}_{q(\bm{\theta})} = E_{q(\bm{\theta})}\left[\ln p(\bm{y}, \bm{\theta})\right] - E_{q(\bm{\theta})}\left[\ln q(\bm{\theta})\right] \end{equation} A common method for solving this problem is known as ``coordinate ascent variational inference'' (CAVI; \citealt{blei2017vi}). It maximizes or increases the target $\mathrm{ELBO}$ with respect to some sub-block of $\bm{\theta}$. 
By cycling through $\bm{\theta}$ repeatedly, a local optimum can be obtained. The choice of restriction $\mathcal{X}$ is crucial to the accuracy of the approximation method; an extremely popular choice is a ``mean-field'' factorization assumption where blocks of parameters are assumed to be independent. Leveraging the existence of a Gibbs Sampler, Result~\ref{result:CAVI} states that the augmented posterior on $q(\bm{\theta})$ can be approximated using a number of mean-field assumptions with no further restrictions on distributional form, all updates having closed analytical forms, and for arbitrary $J, d_j, g_j$. Appendix~\ref{section_app:pg} provides the full derivations as well as noting how to back out the corresponding Gibbs Sampler. \begin{result}[Existence of CAVI] \label{result:CAVI} Consider the three factorization assumptions: \begin{tabular}{llll} Scheme I:& ``Strong Factorization'' & --- & $\mathcal{X}_1 = q(\bm{\beta})\prod_{j=1}^{J} q(\bm{\alpha}_j) q(\bm{\Sigma}) q(\bm{\Omega})$ \\ Scheme II:& ``Partial Factorization'' & --- & $\mathcal{X}_2 = q(\bm{\beta}) q(\bm{\alpha}) q(\bm{\Sigma}) q(\bm{\Omega})$ \\ Scheme III:& ``Limited Factorization'' & --- & $\mathcal{X}_3 = q(\bm{\beta}, \bm{\alpha}) q(\bm{\Sigma}) q(\bm{\Omega})$ \end{tabular} For the model in Equation~\ref{eq:model_gendesign} and for each choice of $\mathcal{X}_k$ above, each step of the CAVI algorithm can be implemented exactly in closed form, with no additional assumptions. For each $\mathcal{X}_k$, the optimal approximation for $q(\bm{\beta}, \bm{\alpha})$ is multivariate normal, $q(\bm{\Sigma})$ is the product of $J$ independent Inverse Wishart densities, and $q(\bm{\Omega})$ is the product of $N$ independent Polya-Gammas. \end{result} Algorithm~\ref{alg:CAVI} explicitly outlines the updates for Scheme I. Experiments showed that convergence could be improved at little computational cost by jointly updating the mean parameters of $q(\bm{\beta})$ and $q(\bm{\alpha})$; see Appendix~\ref{section_app:acceleration} for discussion. All models estimated in the paper use this acceleration technique. \begin{algorithm}[!ht] \caption{CAVI for Scheme I} \label{alg:CAVI} \begin{algorithmic} \State{\textbf{Set Priors of Inverse Wishart}: $\{\nu_j, \bm{\Phi}_j\}_{j=1}^J$; \textbf{Set Number of Iterations}: $T$} \State{\textbf{Initialize Variational Parameters}: $\{\tilde{b}_i, \tilde{c}_i\}_{i=1}^N$ (for Polya-Gamma); $\tilde{\bm{\mu}}_\beta, \tilde{\bm{\Lambda}}_\beta, \tilde{\bm{\mu}}_\alpha, \tilde{\bm{\Lambda}}_\alpha$ (for $\bm{\beta}, \bm{\alpha}$); $\{\tilde{\nu}_j, \tilde{\bm{\Phi}}_j\}_{j=1}^J$ (for $\bm{\Sigma}_j$)} \For{$t$ in $1, \cdots, T$} \State{1. Update Polya-Gammas - $q\left(\{\omega_i\}_{i=1}^N\right)$: $\tilde{b}_i = n_i, \quad \tilde{c}_i = \sqrt{E_{q(\bm{\alpha},\bm{\beta})}\left[(\bm{x}_i^T\bm{\beta} + \bm{z}_i^T\bm{\alpha})^2\right]}$} \State{2. Update $q(\bm{\beta}) \sim N(\tilde{\bm{\mu}}_\beta, \tilde{\bm{\Lambda}}_\beta)$}: $$\tilde{\bm{\Lambda}}_\beta = \left(\sum_{i=1}^N E_{q(\omega_i)}[\omega_i] \bm{x}_i\bm{x}_i^T\right)^{-1}, \quad \tilde{\bm{\mu}}_\beta = \tilde{\bm{\Lambda}}_\beta \bm{X}^T \left(\sum_{i=1}^N \left(y_i - \frac{n_i}{2}\right) - E_{q(\omega_i)}[\omega_i] \cdot \bm{z}_i^T E_{q(\bm{\alpha})}[\bm{\alpha}]\right)$$ \State{3. 
Update $q\left(\bm{\alpha}_j\right) \sim N(\tilde{\bm{\mu}}_{\alpha,j}, \tilde{\bm{\Lambda}}_{j,\alpha})$, where $\bm{T}_j$ stacks the block diagonal expectation of the precision on the random effects ($\bm{\Sigma}_j^{-1}$):} $$\tilde{\bm{\Lambda}}_{\alpha,j} = \left(\bm{T}_j + \sum_{i=1}^N E_{q(\omega_i)}[\omega_i] \bm{z}_{i,j}\bm{z}_{i,j}^T\right)^{-1}, \quad \bm{T}_j = E_{q(\bm{\Sigma}_j)}\left[\bm{I}_{g_j} \otimes \bm{\Sigma}_j^{-1}\right] $$ $$\tilde{\bm{\mu}}_{\alpha,j} = \tilde{\bm{\Lambda}}_{\alpha,j} \bm{Z}_j^T \left[\sum_{i=1}^N \left(y_i - \frac{n_i}{2}\right) - E_{q(\omega_i)}[\omega_i] \cdot \left(\bm{x}_i^T E_{q(\bm{\beta})}[\bm{\beta}] + \sum_{\ell: \{1, \cdots, J\} \setminus j} \bm{z}_{i,\ell}^T E_{q(\bm{\alpha}_\ell)}[\bm{\alpha}_\ell]\right)\right]$$ \State{4. Update $q\left(\{\bm{\Sigma}_j\}_{j=1}^J\right)$: $\tilde{\nu}_j = \nu_j + g_j, \quad \tilde{\bm{\Phi}}_j = \bm{\Phi}_j + \sum_{g=1}^{g_j} E_{q(\bm{\alpha}_{j,g})}\left[\bm{\alpha}_{j,g}\bm{\alpha}_{j,g}^T\right]$} \vspace{0.25em} \State{5. Check for convergence, evaluate ELBO (see Appendix~\ref{section_app:pg} for derivation).} \EndFor \end{algorithmic} \end{algorithm} This improves upon existing mean-field schemes for logistic hierarchical models in a number of ways. First, for any of the factorization assumptions, no further distributional assumptions are required (cf. \citealt{ormerod2012gva,tan2013variational} assuming normality). Second, most existing algorithms for binomial outcomes require the repeated evaluation of (low) dimensional integrals at each iteration whose number scales with $g_j$ (cf. \citealt{ormerod2012gva,tan2013variational,jeon2017variational}). Extending these algorithms to $J > 2$ would likely incur significant computational costs as the number of those integrals increases. None of the schemes in Result~\ref{result:CAVI} require integration at any step as the Polya-Gamma augmentation turns inference into iteratively performing weighted ridge regression. In the models considered in this paper, the major bottleneck as one moves from Scheme I to Scheme III is in calculating the variance term of $q(\bm{\beta},\bm{\alpha})$; even relying on a (sparse) Cholesky decomposition, this involves inverting an increasingly dense lower triangular matrix as weaker independence assumptions are imposed. Appendix~\ref{section_app:gg_extra} disaggregates the run-time of Algorithm~\ref{alg:CAVI} by stage and scheme. Most importantly, the ability to choose between Schemes I, II, and III allows the researcher to smoothly trade-off computational cost and accuracy as in \citet{menictas2019streamlined}'s work on $J=2$ for linear mixed effects models. Scheme I with its strong implied factorization assumptions is immediately scalable to huge datasets with large $J$ or $g_j$. However, the downside is that the strong factorization assumptions will likely degrade performance. Scheme III provides the ability to avoid these strong assumptions at a somewhat increased computational cost. The ability to avoid such factorization assumptions for arbitrary $J > 1$ and binomial outcomes appears to be a new result. The expectation is that it will have the best performance. Scheme II is a compromise between the two extremes, and other hybrid approaches are possible such as applying re-parameterizations to the augmented posterior (e.g. \citealt{tan2013variational,tan2021rvb}). 
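For completeness, two observations make the preceding points concrete (both follow from standard conjugacy results and the derivations referenced in Appendix~\ref{section_app:pg}). First, all expectations appearing in Algorithm~\ref{alg:CAVI} have simple closed forms; in particular, by known properties of the Polya-Gamma and Inverse Wishart families (\citealt{polson2013polyagamma}),
\begin{equation*}
E_{q(\omega_i)}\left[\omega_i\right] = \frac{\tilde{b}_i}{2 \tilde{c}_i} \tanh\left(\frac{\tilde{c}_i}{2}\right), \qquad E_{q(\bm{\Sigma}_j)}\left[\bm{\Sigma}_j^{-1}\right] = \tilde{\nu}_j \tilde{\bm{\Phi}}_j^{-1}.
\end{equation*}
Second, the ``weighted ridge regression'' structure is explicit under Scheme III: writing $\bm{C} = [\bm{X} \ \bm{Z}]$ and $\bar{\bm{\Omega}} = \mathrm{diag}\left(E_{q(\omega_i)}[\omega_i]\right)$, the optimal joint update is
\begin{equation*}
q(\bm{\beta}, \bm{\alpha}) = N\left(\bm{V} \bm{C}^T \bm{s}, \ \bm{V}\right), \qquad \bm{V} = \left(\bm{C}^T \bar{\bm{\Omega}} \bm{C} + \mathrm{blockdiag}\left(\bm{0}_{p \times p}, \{\bm{T}_j\}_{j=1}^J\right)\right)^{-1},
\end{equation*}
with $\bm{T}_j$ as in Algorithm~\ref{alg:CAVI}, i.e. a weighted ridge regression with working response $\bar{\bm{\Omega}}^{-1}\bm{s}$ and observation weights $E_{q(\omega_i)}[\omega_i]$.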
\section{Marginally Augmented Variational Bayes}
\label{section:MAVB}

The second major contribution of the paper is demonstrating that there is a computationally cheap way of improving the initial approximation resulting from Schemes I, II, or III. The key intuition, formalized below, is that once an initial approximation $q(\bm{\theta})$ is found, one can draw samples from this approximation, perform a single step of Markov Chain Monte Carlo through (some of) the parameters, and thereby ``improve'' the sample. Some existing work in computer science (e.g. \citealt{salimans2015markov,ruiz2019contrastive}) has leveraged this point to attempt to \emph{optimize} over the intractable improved density, which can be computationally expensive. By contrast, this paper explores the idea that if one can find a transition kernel with good mixing, then simply doing a single partial step can provide considerable gains at limited computational cost. While many samplers can be employed for this purpose, initial experiments suggested that the key problem was the independence assumptions in $q(\bm{\beta},\bm{\alpha})$, and thus I chose to focus on marginal augmentation and parameter expansion as it is inexpensive to use in fully Bayesian MCMC to improve a Gibbs Sampler, has demonstrated strong performance in hierarchical models, lacks internal tuning parameters, and was explicitly designed to link the fixed and random effects together (\citealt{liu1999parameter,van2001art,gelman2008px}). I focus on logistic hierarchical models although the procedure is itself much more general; Section~\ref{section:conclusion} discusses some broader implications and Appendix~\ref{section_app:param_x} formulates the results in a more general fashion. The key idea behind parameter expansion is to create an ``over-parameterized'' model where certain additional parameters ($\bm{\xi}$) are introduced such that they (i) maintain the observed data model but (ii) are not identifiable from the observed data itself. A careful choice of parameter expansion allows the construction of algorithms that have either faster mixing for MCMC (\citealt{liu1999parameter,van2001art}) or faster convergence for deterministic algorithms such as EM (\citealt{liu1998pxem}). The intuition behind its effectiveness is that it allows ``moves'' (either via sampling steps in MCMC or parameter updates in EM) in the un-identified space that can break or escape the strong associations between parameter blocks (e.g. $\bm{\beta}$ and $\bm{\alpha}$) that slow down mixing (\citealt{liu1999parameter}) or lead to the algorithm getting ``stuck'' for many iterations near boundary conditions (e.g. a small sampled $\bm{\Sigma}_j$ shrinking $\bm{\alpha}_{j,g}$ leading to a small $\bm{\Sigma}_j$, etc.; \citealt{gelman2008px}). \cite{liu1998pxem} provide a useful explanation of parameter expansion in the context of EM as a ``covariance adjustment'' to the estimated parameters. In the case of hierarchical models, the most popular parameter expansion appears as a location and/or scale transformation of the random effects (e.g. \citealt{van2001art}). The location transformation, for example, allows the random effects to have a non-zero mean: $\bm{\alpha}_{j,g} \sim N(\bm{\mu}_j, \bm{\Sigma}_j)$. Note that it is not possible to estimate $\bm{\mu}_j$ from the observed data but that it could be estimated if $\bm{\alpha}_{j,g}$ were known.
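As a one-line illustration of this non-identifiability in the random-intercept case ($d_j = 1$), the linear predictor depends on the fixed intercept $\beta_0$ and the group intercepts only through their sum, and for any constant $\mu_j$
\begin{equation*}
\beta_0 + \alpha_{j, g[i]} = \left(\beta_0 + \mu_j\right) + \left(\alpha_{j, g[i]} - \mu_j\right),
\end{equation*}
so the observed-data likelihood is unchanged by shifting every intercept in group $j$ by $\mu_j$ while absorbing the opposite shift into $\beta_0$; yet, given the $\bm{\alpha}_{j,g}$ themselves, $\bm{\mu}_j$ is simply the mean of a Gaussian sample and is easily estimated.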
Implementation is simple and appears as a location or scale transformation of the sampled parameters that leads to very large gains in performance (e.g. \citealt{van2001art,gelman2008px}). Definition~\ref{def:expanded_hier} generalizes this parameter expansion to the arbitrary $J$ case, where the $\bm{M}_j$ notation is bookkeeping to note which element of $\bm{x}_i$ (and $\bm{\beta}$) corresponds to each element of $\bm{z}^b_{i,j}$ (and $\bm{\alpha}_{j,g}$). \begin{definition}[Expansions for Hierarchical Models] \label{def:expanded_hier} Define a set of expansion parameters $\bm{\xi}$ that consists, for each $j$, of a mean shift $\bm{\mu}_j \in \mathbb{R}^{d_j}$ and a scale shift $\bm{R}_j \in \mathbb{R}^{d_j \times d_j}$ such that $\bm{R}_j$ is invertible. I use superscript $^X$ to denote the ``expanded'' parameters. The mapping between $\bm{\theta}^X$ and $\bm{\theta}$ for a fixed $\bm{\xi}$ is denoted as $t_{\bm{\xi}}(\bm{\theta}^X)$ and listed below. $\bm{M}_j$ is a $p \times d_j$ matrix such that $[\bm{M}_{j}]_{a,b} = 1$ if the covariate corresponding to $[\bm{z}_{i,j}]_b$ is the same as the covariate for $[\bm{x}_i]_a$. All other elements of $\bm{M}_j$ are zero. For simplicity, assume that each element of $\bm{z}_i$ corresponds to some variable in $\bm{x}_i$, i.e. that each column of $\bm{M}_j$ has exactly one non-zero element. $$\left[\bm{\beta}, \bm{\alpha}, \{\bm{\Sigma}_j\}_{j=1}^J, \bm{\Omega}\right] = t_{\bm{\xi}}([\bm{\beta}^X,\bm{\alpha}^X, \{\bm{\Sigma}_j^X\}_{j=1}^J, \bm{\Omega}^X]) = \begin{cases} \bm{\beta} = \bm{\beta}^X + \sum_{j=1}^J \bm{M}_j \bm{R}_j \bm{\mu}_j \\ \bm{\alpha}_{j,g} = \bm{R}_j\left(\bm{\alpha}^X_{j,g} - \bm{\mu}_j\right) \\ \bm{\Sigma}_j = \bm{R}_j\bm{\Sigma}^X_j \bm{R}_j^T \\ \bm{\Omega} = \bm{\Omega}^X \end{cases}$$ The augmented model is listed below for an important special case treated in detail (``Mean Expansion'') in the empirical analysis. The full expansion (``Translation Expansion'') is also listed. \begin{itemize} \item Mean Expansion: Assume all $\bm{R}_j = \bm{I}_{d_j}$. \begin{equation*} \begin{split} &\ln p(y_i | \omega_i, \bm{\beta}^X,\bm{\alpha}^X) \propto \bm{s}^T[\bm{X}\bm{\beta}^X + \bm{Z}\bm{\alpha}^X] - 1/2[\bm{X}\bm{\beta}^X + \bm{Z}\bm{\alpha}^X]^T\bm{\Omega}[\bm{X}\bm{\beta}^X + \bm{Z}\bm{\alpha}^X] \\ &p(\bm{\beta}^X) \propto 1, \quad \bm{\alpha}_{j,g}^X | \bm{\Sigma}^X_j, \sim N\left(\bm{\mu}_j, \bm{\Sigma}^X_j\right), \quad p(\bm{\Sigma}_j^X) \sim IW(\nu_j, \bm{\Phi}_j) \end{split} \end{equation*} \item Translation Expansion: \begin{equation*} \begin{split} &\ln p(y_i | \omega_i, \bm{\beta}^X,\bm{\alpha}^X) \propto \bm{s}^T[\bm{X}\bm{\beta}^X + \bm{Z}\bm{R}\bm{\alpha}^X] - 1/2[\bm{X}\bm{\beta}^X + \bm{Z}\bm{R}\bm{\alpha}^X]^T\bm{\Omega}[\bm{X}\bm{\beta}^X + \bm{Z}\bm{R}\bm{\alpha}^X] \\ &\bm{R} = \mathrm{blockdiag}\left(\{\bm{I}_{g_j} \otimes \bm{R}_j\}_{j=1}^J\right), \quad p(\bm{\beta}^X) \propto 1, \quad \bm{\alpha}_{j,g}^X | \bm{\Sigma}^X_j \sim N\left(\bm{\mu}_j, \bm{\Sigma}_j^X\right) \\ &p(\bm{\Sigma}_j^X) \sim IW(\nu_j,\bm{R}_j^{-1} \bm{\Phi}_j\bm{R}_j^{-T}) \end{split} \end{equation*} \end{itemize} \end{definition} Given such an expanded version of the hierarchical model, there are two ways to improve the algorithms in this paper. First, drawing on \citet{jaakkola2007parameter}, it is possible to accelerate convergence of Algorithm~\ref{alg:CAVI} using ``parameter expanded variational Bayes'' (PX-VB). 
Appendix~\ref{section_app:acceleration} derives a new application of PX-VB to the models in Result~\ref{result:CAVI} and shows it can often improve the algorithm's convergence by decreasing the number of iterations required at effectively no computational cost as, functionally, it involves centering the random effects to be mean zero and adjusting the mean of $q(\bm{\beta})$ correspondingly. The main use of parameter expansions in this paper, however, is to improve the \emph{quality} of the approximation by ``improving'' $q(\bm{\theta})$: performing one step of marginal augmentation where the expansion parameters $\bm{\xi}$ are sampled and then the components of $\bm{\theta}$ are re-sampled. Definition~\ref{def:mavb} outlines the procedure in a general case. The notation and procedure mirror those in \cite{liu1999parameter}.
\begin{definition}[Marginally Augmented Variational Bayes---MAVB]
\label{def:mavb}
Given an initial approximation $q(\bm{\theta})$, a proper prior on the expansion parameter $p_0(\bm{\xi})$, and a one-to-one and differentiable transformation such that $t_{\bm{\xi}}(\bm{\theta}^X) = \bm{\theta}$, create a new approximation $\tilde{q}(\bm{\theta})$ using the following procedure:
\begin{enumerate}
\item Sample $\bm{\theta} \sim q(\bm{\theta})$ and $\bm{\xi}_0 \sim p_0(\bm{\xi})$.
\item Create $\bm{\theta}^X = t^{-1}_{\bm{\xi}_0}(\bm{\theta})$.
\item Sample a new $\bm{\xi}_1$ as follows, where $J_{\bm{\xi}}(\bm{\theta}^X)$ is the Jacobian of $t_{\bm{\xi}}$ with respect to $\bm{\theta}^X$ and $p(\bm{\theta}| \bm{y})$ denotes the \emph{true} posterior distribution.
$$\bm{\xi}_1 \sim p\left(\bm{\xi} | \bm{\theta}^X, \bm{y}\right) \propto p(t_{\bm{\xi}}(\bm{\theta}^X) | \bm{y}) \cdot |J_{\bm{\xi}}(\bm{\theta}^X)| \cdot p_0(\bm{\xi})$$
\item Define $\tilde{\bm{\theta}} = t_{\bm{\xi}_1}(\bm{\theta}^X) = t_{\bm{\xi}_1}\left(t^{-1}_{\bm{\xi}_0}(\bm{\theta})\right)$
\end{enumerate}
\end{definition}
Theorem~\ref{thm:MAVB} states a key result for MAVB.
\begin{theorem}[Guaranteed Improvement with MAVB]
\label{thm:MAVB}
\vspace{1px}
For any (proper) choice of prior $p_0(\bm{\xi})$, the MAVB approximation $\tilde{q}(\bm{\theta})$ has a better $\mathrm{ELBO}$ than the initial approximation:
$$\mathrm{ELBO}_{\tilde{q}(\bm{\theta})} \geq \mathrm{ELBO}_{q(\bm{\theta})}$$
\end{theorem}
The proof is in Appendix~\ref{section_app:param_x} and uses two lemmas from existing results. First, Theorem 1 in \citet{liu1999parameter} demonstrates that the transformation used to generate MAVB maintains the stationarity of the posterior. Second, a data processing inequality noted by various authors (e.g. \citealt{ruiz2019contrastive}) shows that this transformation, which keeps the \emph{true} posterior invariant, results in a better approximating distribution. It is known from the data augmentation literature that an increasingly diffuse prior on the expansion parameters (``working prior'') allows for the parameters themselves to ``decide'' the best expansion parameter $\bm{\xi}$ rather than being weighed down by the prior (e.g. \citealt[p. 1268]{liu1999parameter}), and I conjecture that a similar intuition applies for MAVB. Thus, in all applications, I use an improper prior (i.e.
$p_0(\bm{\xi}) \propto 1$); Appendix~\ref{section_app:param_x} discusses the validity of this prior using existing theory (\citealt{liu1999parameter,van2001art}), provides the result for a proper prior on $\bm{\xi}$, and notes Algorithm~\ref{alg:MAVB} can be found as the limit of a proper working prior $p_0(\bm{\mu}_j) \sim N(0, \tau^2 \bm{I})$ as $\tau \to \infty$. Algorithm~\ref{alg:MAVB} shows how MAVB is implemented using the mean expansion noted in Definition~\ref{def:expanded_hier}.\footnote{MAVB for ``Translation Expansion'' (i.e. $\bm{R}_j$ is not fixed) is more delicate and thus not explored here, as it requires a specific choice of prior on $\bm{\Sigma}_j$ and a specific choice of improper working prior to be tractable; see \citet{van2001art} for details. Examining whether this could be used with proper priors is an interesting area for future research.}
\begin{algorithm}[!ht]
\caption{Applying MAVB to Non-Linear Hierarchical Models}
\label{alg:MAVB}
\begin{algorithmic}
\State{\textbf{Set the Number of Samples Desired}: $M$}
\State{\textbf{Estimate $q(\bm{\theta})$ using CAVI (e.g. Algorithm~\ref{alg:CAVI})}}
\For{$m$ in $1, \cdots, M$}
\State{1. Draw $\bm{\theta}^{(m)} \sim q(\bm{\theta})$}
\State{2. Sample the expansion parameters $\bm{\mu}_j$ for each $j$}
$$\tilde{\bm{\mu}}_j \sim N\left(\frac{1}{g_j} \sum_{g=1}^{g_j} \bm{\alpha}_{j,g}^{(m)}, \frac{1}{g_j} \bm{\Sigma}^{(m)}_j\right)$$
\State{3. Adjust the initial draws to get the improved sample $\tilde{\bm{\theta}}^{(m)}$}
$$\tilde{\bm{\alpha}}^{(m)}_{j,g} = \bm{\alpha}_{j,g}^{(m)} - \tilde{\bm{\mu}}_j, \quad \tilde{\bm{\beta}}^{(m)} = \bm{\beta}^{(m)} + \sum_{j=1}^J \bm{M}_j \tilde{\bm{\mu}}_j$$
\EndFor
\end{algorithmic}
\end{algorithm}
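To give some intuition for Step 2 (a heuristic justification only; the formal limiting argument for the improper working prior is in Appendix~\ref{section_app:param_x}), note that it is a standard Gaussian conjugacy update for the expansion parameter: under a flat prior on $\bm{\mu}_j$ and the expanded hierarchical prior $\bm{\alpha}^X_{j,g} | \bm{\mu}_j, \bm{\Sigma}_j \overset{i.i.d.}{\sim} N(\bm{\mu}_j, \bm{\Sigma}_j)$ across the $g_j$ groups,
\begin{equation*}
p\left(\bm{\mu}_j \mid \bm{\alpha}^X_j, \bm{\Sigma}_j\right) \propto \prod_{g=1}^{g_j} N\left(\bm{\alpha}^X_{j,g}; \bm{\mu}_j, \bm{\Sigma}_j\right) \implies \bm{\mu}_j \mid \bm{\alpha}^X_j, \bm{\Sigma}_j \sim N\left(\frac{1}{g_j}\sum_{g=1}^{g_j}\bm{\alpha}^X_{j,g}, \ \frac{1}{g_j}\bm{\Sigma}_j\right),
\end{equation*}
which, evaluated at the current draws, is exactly the distribution sampled in Step 2.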
Thus, for this model and relying only on a location transformation, MAVB has a simple form that, as shown later, can result in considerable improvements in the performance of Scheme I. As noted in the earlier discussion, the presentation of MAVB in Algorithm~\ref{alg:MAVB} illustrates the close relationship to the location transformation noted earlier: It can be thought of as a ``stochastic'' location transformation, given that the mean of the expansion parameter is the mean of the sampled $\bm{\alpha}_{j,g}$. Some additional remarks are in order: First, if MAVB is applied to an approximation resulting from Scheme I (i.e. with independence assumed between $\bm{\beta}$ and $\bm{\alpha}$), the resulting approximation \emph{will not} imply such an assumption. Consider the correlation between $\bm{\alpha}_{j,g}$ and $\bm{\beta}$ in Algorithm~\ref{alg:MAVB}. Before applying MAVB, the two parameters are independent by assumption. After applying MAVB, they have a non-zero posterior correlation because of the shared dependence on $\tilde{\bm{\mu}}_j$. While not sufficient to restore \emph{all} missing dependencies (e.g. components of $\bm{\beta}$ that are not included in any random effect), this can at least address some of the shortcomings of Scheme I. Second, the cost of implementing MAVB is quite modest, unlike existing approaches that attempt to \emph{optimize} over the improved density (e.g. \citealt{ruiz2019contrastive}). After drawing a sample from $q(\bm{\theta})$, all that is needed to perform MAVB is drawing $\sum_j d_j$ univariate Gaussians (22 in the largest model considered in this paper [Model 9]), some summation of the sampled random effects, and then subtracting off the sampled expansion parameter $\tilde{\bm{\mu}}_j$. Note that the MAVB procedure does not require sampling the Polya-Gammas, as they are left un-transformed by the algorithm, nor does the cost of MAVB depend on the size of the data ($N$) directly; even if $g_j$ is large, MAVB will still be fast. Third, while MAVB is guaranteed to increase performance, the quality of MAVB is difficult to ascertain analytically in most complex models. However, insights come from simpler cases: In a stylized hierarchical model, \cite{liu1999parameter} show that marginal augmentation results in perfect sampling. In the more realistic case where $\bm{\Sigma}_j$ is not fixed and $J = 1$, studies show that certain forms of marginal augmentation result in vastly improved mixing of MCMC samplers (\citealt{van2001art,gelman2008px}). Thus, there is reason to be optimistic about the ability of MAVB to improve initial approximations, as the scale/location transformations in fully Bayesian marginal augmentation seem to provide quite considerable benefits over simple Gibbs samplers. Overall, while MAVB is likely to be helpful in improving the variational schemes in this paper, it is not a panacea. Its major benefit appears to be in ``connecting'' blocks of parameters that were assumed to be independent, in a way that is guaranteed to improve the approximation quality at a very limited computational cost. The key limitation is that its speed and scalability depend on it \emph{not} returning to the observed data ($\bm{y}$). Interestingly, this suggests a ``stronger'' version of MAVB that could be performed by implementing one full sweep of the Gibbs Sampler, i.e. sampling Polya-Gammas and cycling through all full conditionals, and then performing marginal augmentation. If this were to be performed many times, the samples would converge to the true posterior by standard properties of MCMC. While this might raise its own computational concerns, exploring this is an interesting area of future research.

\section{Simulation Study}
\label{section:simulations}

I perform a simulation study to assess the accuracy of the proposed methods. I compare my variational algorithms against two gold standards (Laplace approximation using \texttt{blme} - \citealt{bates2015lmer,chung2015weakly}; HMC in \texttt{STAN} using \texttt{brms}; \citealt{burkner2017brms}) and Automatic Differentiation Variational Inference (ADVI; \citealt{kuckelbir2017advi}).\footnote{Using \texttt{blme} allows for an identical Inverse Wishart prior to be added to the Laplace approximation; models are fit using \texttt{optimx}'s \texttt{nlminb} algorithm (\citealt{nash2011unifying}) that returned noticeably better performance. \texttt{brms} generates a model that can be manually adapted to place an Inverse Wishart prior as this is not permitted in the default options in pre-written STAN models at the time of writing (e.g. \texttt{rstanarm} or \texttt{brms}).} The latter is a useful comparison as it is easily implemented in \texttt{STAN} and is a generic approach to approximate complex models. I show results using its mean-field approximation (MF) and full rank (FR). To begin, I conducted a simulation where the linear predictor $\psi_i$ was generated using the following scheme ($J = 2$).
\begin{framed}
Draw the fixed effects $\bm{\beta} \sim N(\bm{0}, \left[0.2\right]^2 \bm{I}_{10})$.

For each group $g \in \{1, \cdots, 10\}$, draw the random intercept $\alpha_{1,g} \sim N(0, 1)$.

For each group $g' \in \{1, \cdots, 10\}$, draw the random intercept $\alpha_{2,g'} \sim N(0, 1)$.

For each observation $i \in \{1, \cdots, 1000\}$, assign it at random to groups $g$ and $g'$. Draw its fixed-effect covariates $\bm{x}_i \sim N(\bm{0}, \bm{\Sigma})$ where $\bm{\Sigma}_{j,j'} = 0.5^{|j-j'|}; j,j' \in \{1, \cdots, 10\}$. Draw $y_i$ such that:
\begin{equation*}
y_i \sim \mathrm{Bern}(p_i), \quad p_i = \frac{\exp(\bm{x}_i^T\bm{\beta} + \alpha_{1,g[i]} + \alpha_{2,g'[i]})}{1 + \exp(\bm{x}_i^T\bm{\beta} + \alpha_{1,g[i]} + \alpha_{2,g'[i]})}
\end{equation*}
\end{framed}

All models are fit with a standard Inverse Wishart prior of $\mathrm{IW}(d_j + 1; \bm{I}_{d_j})$ on $\bm{\Sigma}_j$. I run each variational algorithm until the change in the ELBO is less than $10^{-8}$ or the largest change in any parameter is less than $10^{-5}$. For the HMC and MAVB methods, I draw 4,000 samples from the (approximate) posterior. Table~\ref{tab:sims} reports four measures of performance; the first two compare the point estimates (posterior means) against HMC. The third measure compares the full posterior using a measure of ``accuracy'' that modifies the integrated absolute error (e.g. \citealt{faes2011variational}). Formally, this is defined as $1 - \frac{1}{2} \int_{-\infty}^\infty | q_k(\theta) - q_{\mathrm{HMC}}(\theta)|d\theta$. I use kernel density estimation with a range over the shared support of the samples (\texttt{bkde}, \texttt{KernSmooth}; \citealt{wand2020kernsmooth}) and then approximate the integral. Finally, to understand how the estimates of uncertainty fare against the unknown truth, I examine the ``frequentist coverage'': does an interval of the posterior mean $\pm$ 1.96 times the standard deviation of the parameter contain the truth? A value of around 0.95 would indicate correct coverage at the expected frequentist level.

\begin{table}[!ht]
\caption{Results from Simulations}
\label{tab:sims}
\begin{center}
\begin{tabular}{rlrr|rr|rr|rr}
\hline\hline
 & & \multicolumn{2}{c}{Bias} & \multicolumn{2}{c}{RMSE} & \multicolumn{2}{c}{Accuracy} & \multicolumn{2}{c}{Coverage} \\
 & & FE & RE & FE & RE & FE & RE & FE & RE \\
\hline
\input production_figures/synth_results.tex
\hline\hline\\
\multicolumn{10}{l}{
\begin{minipage}{\textwidth}
\footnotesize
\emph{Note}: This reports the bias (Bias) and root mean squared error (RMSE) of the estimated posterior means against those estimated from HMC. The distance between the distributions (Accuracy) and frequentist coverage (Coverage) are also reported; see the main text for an explanation of these measures. The statistics are disaggregated by fixed (FE) and random effects (RE). All results are created using all relevant parameters in each simulation and then averaged across one hundred simulations. ADVI (MF) uses the mean-field approximation; ADVI (FR) uses the full-rank approximation in \citet{kuckelbir2017advi}.
\end{minipage}
}
\end{tabular}
\end{center}
\end{table}

The results are promising. Looking at the bias and RMSE, the variational methods perform well: they have very small bias against the means estimated from HMC and an RMSE that is quite small and comparable to the Laplace approximation, and they outperform both ADVI implementations. Examining accuracy and frequentist coverage shows more separation across the methods.
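For reference, the accuracy measure is straightforward to compute from two sets of posterior draws. The sketch below is an illustrative Python version that substitutes SciPy's Gaussian kernel density estimator for the R routine \texttt{bkde} used in the paper; the substitution and all names are assumptions made for exposition only.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def accuracy(draws_a, draws_b, grid_size=512):
    # 1 - 0.5 * integral of |q_a(theta) - q_b(theta)| d(theta),
    # approximated with kernel density estimates on a common grid
    # spanning the range of both samples.
    lo = min(draws_a.min(), draws_b.min())
    hi = max(draws_a.max(), draws_b.max())
    grid = np.linspace(lo, hi, grid_size)
    gap = np.abs(gaussian_kde(draws_a)(grid) - gaussian_kde(draws_b)(grid))
    return 1.0 - 0.5 * np.trapz(gap, grid)
\end{verbatim}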
The accuracy and coverage of Scheme I are noticeably lower than the Laplace approximation. However, applying MAVB results in noticeable improvements in accuracy (around 6\% for fixed effects and nearly 25\% for random effects) and increases coverage by similar amounts, to near-nominal levels. After this improvement, Scheme I is comparable to the best approximate method (the Laplace approximation), having slightly lower accuracy for the fixed effects but noticeably better accuracy and coverage for the random effects. Scheme II has somewhat better initial performance but is also boosted considerably by applying MAVB. Scheme III---the factorization that does not assume independence between $q(\bm{\alpha})$ and $q(\bm{\beta})$---performs nearly as well as the Laplace approximation (and better in terms of the random effects) before applying MAVB. Applying MAVB results in only slight improvements (e.g. a 1-2\% boost in accuracy and coverage for the random effects).

Appendix~\ref{section_app:simulations_extra} presents additional simulations. First, I vary the magnitude of the true coefficients by changing the variance of the fixed and random effects. After applying MAVB, the coverage of the variational methods is near nominal (i.e. above 0.90) in all cases except when the variance of the true distribution of the fixed effects is large; there, MAVB is insufficient to obtain nominal coverage on the fixed effects (0.80-0.85), although the coverage on the random effects remains good. While this is worthy of future exploration, I conjecture it occurs because of the large magnitudes of the linear predictors (with a 5-95\% interval of around $-5.9$ to $5.5$ vs. $-2.7$ to $2.3$ in the simulations in Table~\ref{tab:sims}) and the highly bimodal distribution of $p_i$. It may be that a pass over the observed data and one full sweep of MCMC (discussed in Section~\ref{section:MAVB} as a ``stronger'' MAVB) could result in more significant improvements in coverage. Second, to examine simulations in a more realistic case, I fit a simple MRP model on the data from \cite{ghitza2013mrp} with random effects for age, income, ethnicity and state (Model 1 from Table~\ref{tab:gg_models}, below) and take the parameter estimates from the Laplace approximation as ``ground truth'' to create simulated outcomes. The results show a similar pattern, although with weaker performance across the board: Scheme I after applying MAVB outperforms ADVI (Mean Field) across all measures, with noticeable improvements in accuracy (10\%), and is comparable to ADVI (Full Rank). Scheme III performs the best of all approximate methods, beating both the Laplace approximation and ADVI (Full Rank). The values of the linear predictor are relatively modest in this case (90\% of the HMC posterior means are between $-0.67$ and $1.99$) and more comparable to those in the simulations in Table~\ref{tab:sims}. This provides further evidence that for reasonably sized linear predictors, the variational approximations perform well and are improved by MAVB. Finally, I examine the sensitivity of the algorithm to initial values to see if there is evidence of arriving at different local optima. I find little evidence of this for the models considered in this paper given reasonable random initializations, although researchers should check for this in their own applications.

\section{Application: Estimation for Complex MRP}
\label{section:gg_app}

This section re-analyses the results in \cite{ghitza2013mrp}, comparing my results against Hamiltonian Monte Carlo (HMC).
I then conduct 10-fold cross-validation using Scheme I to examine which model seems most appropriate for the final predictive task. I find that, contrary to the decision in \citet{ghitza2013mrp}, a model with intermediate complexity is preferred.

\subsection{Brief Explanation of MRP}

Before proceeding, I provide a brief explanation of MRP (see, e.g. \citealt{park2004mrp,lax2009estimation,ghitza2013mrp} for more detailed explanations). The key problem is that while it is easy to gather a representative survey at the national level, it is very expensive to gather a sufficiently large and representative survey for sub-national units (e.g. states) or sub-types of respondents (e.g. by race, education, income, their interactions, etc.). Further, the number of observations in any sub-group may be very small, rendering a direct analysis of their values unreliable (\citealt{lax2009estimation,warshaw2012district,buttice2013mrp}). However, the most substantively important questions often rely precisely on drawing inferences about those sub-groups. MRP is a model-based procedure that attempts to estimate these sub-group effects reliably by extrapolating from the nationally representative survey in a principled way.

MRP is a two-step procedure. First, the researcher estimates a hierarchical model (``multilevel regression'') on the initial survey, including covariates such as demographic characteristics and indicators for the relevant geographic unit (e.g. state), to get estimates for various ``types'' of respondents (e.g. age-income-ethnicity by state). The hierarchical model usually has a binomial or binary outcome. The second step calculates the expected response for each demographic-state profile. These can be examined directly or aggregated to get a measure of opinion at the desired geographic level (e.g. state). The aggregation or ``post-stratification'' occurs by taking a weighted average of those sub-group predictions, with weights given by the known joint distribution of the population from some ground truth such as the Census. This paper has focused on the first step (``multilevel regression''); a short sketch of the second step is given at the end of this passage, below.

\citet{ghitza2013mrp} apply this method to explore the decision to turn out to vote and party choice by age-race-income-state sub-groups in the 2004 and 2008 American presidential elections. They note that traditional MRP includes the random effects linearly and thus may fail to capture important complexities or interactions between demography and geography. They thus fit a highly complex model with eighteen random effects and nearly 4,000 parameters on a dataset with around 4,000 observations. After doing so, they draw a variety of subtle and nuanced conclusions about the behavior of particular demographic sub-groups. For example, they qualify the conventional wisdom by showing that turnout increases were concentrated amongst \emph{non-white} younger voters instead of younger white voters (\citealt[p. 771-772]{ghitza2013mrp}).

\subsection{Estimating Complex Hierarchical Models}

I begin by performing a direct comparison of Schemes I, II, and III against the gold-standard approaches applied to \citet{ghitza2013mrp}. To illustrate the many specifications available to the researcher, Table~\ref{tab:gg_models} shows nine possible specifications ranging from a simple MRP model with no interactions to the preferred model in \citet{ghitza2013mrp} (Model 9). I round $y_i$ and $n_i$ to the nearest integer to facilitate interpretation as a standard binomial regression.
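As a brief aside, returning to the post-stratification step described in the previous subsection: once cell-level predictions are in hand, the aggregation is simply a population-weighted average. The following Python sketch illustrates it; the data layout and column names are hypothetical and chosen only for exposition.
\begin{verbatim}
import numpy as np
import pandas as pd

def poststratify(cells, by="state"):
    # `cells` holds one row per demographic-geography cell, with
    # columns `p_hat` (model-based prediction for that cell) and
    # `n_pop` (the cell's population count from the Census).
    return cells.groupby(by).apply(
        lambda g: np.average(g["p_hat"], weights=g["n_pop"]))
\end{verbatim}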
The intermediate models represent varying complexities that allow for some, but not all, interactions.

\begin{table}[!ht]
\caption{Nine Possible Models for Predicting Turnout via MRP}
\label{tab:gg_models}
\resizebox{\textwidth}{!}{
\begin{tabular}{l*{9}r}
\hline\hline
 & \multicolumn{9}{c}{Model} \\
 & (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\
\hline
\input production_figures/gg_formula.tex
\hline
 &\multicolumn{9}{c}{\emph{Number of Parameters}} \\
\input production_figures/gg_formula_param.tex
\hline
 &\multicolumn{9}{c}{\emph{Run Time of Model in Minutes}} \\
\input production_figures/gg_formula_time.tex
\hline\hline
\end{tabular}}
\bigskip
\begin{tabular}{l}
\begin{minipage}[b]{\textwidth}
\footnotesize
\emph{Note}: This table summarizes nine possible models to predict voter turnout. All models include six fixed effects: an intercept, (standardized) individual income, state-level income, state-level Republican vote share and the interaction between individual income and the latter two variables. \citet{ghitza2013mrp} use Model 9. The first panel indicates which random effects are included; a hollow diamond ($\diamond$) indicates that only a random intercept is used. A solid circle ($\bullet$) indicates that a random intercept and a random slope allowing for the effect of (standardized) individual income to vary by group are included. The number of parameters is the number of fixed effects, random effects, and variance components for the random effects. The run times are for a Laplace approximation using \texttt{blme} (\citealt{bates2015lmer,chung2015weakly}) and HMC in \texttt{STAN} (via \texttt{brms}; \citealt{burkner2017brms}). All models were run on an instance with 16 GB of memory and 4 cores. HMC was estimated using four chains distributed in parallel.
\end{minipage}
\end{tabular}
\end{table}

The table clearly shows the scale of the difficulty for applied researchers: fitting the published model (Model 9) takes hours using either gold-standard method on a machine similar to that available to many applied researchers (a Microsoft Azure instance; Ubuntu, 4 cores, 16 GB of RAM). Methods that require fitting the model repeatedly to facilitate common tasks such as bootstrapping, model comparison via cross-validation, or ensemble analysis (\citealt{van2007super}, see \citealt{ornstein2019stacked} for an application to MRP) are clearly prohibitively expensive for all but the simplest models, even using the Laplace approximation. Figure~\ref{fig:speed} illustrates the improvement after applying the variational algorithms to each model in Table~\ref{tab:gg_models} and performing MAVB. All reported times include estimation of the variational algorithm and drawing 4,000 samples using MAVB. Appendix~\ref{section_app:gg_extra} shows the time for estimation and MAVB separately; it takes around thirty seconds for Scheme I on the most complex model.

\begin{figure}[!ht]
\caption{Speed of Estimation}
\label{fig:speed}
\includegraphics[width=\textwidth]{production_figures/timing_models.pdf}
\caption*{\footnotesize \emph{Note}: Each panel plots the run-time of each of the five methods (the Laplace approximation, Hamiltonian Monte Carlo [HMC], and Schemes I-III including drawing 4,000 samples using MAVB). The reported times are averaged across the 2004 and 2008 elections. The left panel shows the time in minutes on a linear scale; the right panel reports the same information on a log-scale. Models 1-9 are described in Table~\ref{tab:gg_models}.
All models are fit on a computer with 16 GB of RAM and 4 cores.}
\end{figure}

As shown on the linear scale, the time to estimate either the Laplace approximation or Hamiltonian Monte Carlo dwarfs that of any of the variational schemes. The right panel shows the results on a log-scale to allow for clearer comparisons; it shows that Scheme I remains remarkably fast, estimating even Model 9 in around one minute versus hour(s) for either gold-standard method. The performance of Schemes II and III degrades somewhat---taking around fifteen minutes to fit. This is still very reasonable, but may be onerous if repeated fitting is required, as in cross-validation.

The quality of the approximation is also crucial to assess. As the truth is unknown, I do this by comparing all methods against HMC, as it most directly aims to sample the posterior.\footnote{This method is, of course, itself approximate as it may fail to accurately sample the posterior. Experiments suggested that setting ``adapt delta'' to 0.99 was required to eliminate all divergent transitions (except for one in Model 3 in 2004).} Figure~\ref{fig:gg_post_means} begins by comparing the point estimates, pooling across the 18 model fits (nine specifications for each of the two elections). As there are thousands of parameters to plot, I simplify the picture in the following way: I plot the absolute magnitudes of the estimates averaged within each random effect $j$, $\bar{\alpha}_j = \frac{1}{g_j} \sum_{g=1}^{g_j} |\alpha_{j,g}|$, as solid circles and shade the background of the plot based on the density of the individual $|\alpha_{j,g}|$. This prevents random effects with many groups from dominating the visualization at the expense of the $j$ with smaller numbers of groups (e.g. age, income, etc.). I also separately mark the fixed and random effects.

\begin{figure}[!ht]
\caption{Comparing Posterior Means}
\label{fig:gg_post_means}
\includegraphics[width=\textwidth]{production_figures/compare_mean.pdf}
\caption*{\footnotesize \emph{Note}: This figure plots the absolute value of the estimated mean value from Schemes I-III and the Laplace approximation on the horizontal axis against the absolute value of the posterior mean from Hamiltonian Monte Carlo [HMC] on the vertical axis. Each parameter is plotted as a thin grey point; the average of the values inside each random effect is shown as a larger point. The axes are on a square-root scale.}
\end{figure}

Consider first the Laplace approximation; it nearly exactly recovers the point estimates---its solid points and shading lie very near to the 45-degree line. For the variational methods, Scheme I is highly correlated with the posterior ($\rho = 0.996$ for $\bar{\alpha}_j$; $\rho = 0.964$ for the raw $|\alpha_{j,g}|$), although less so than the Laplace approximation. Schemes II and III show tight coupling with the estimates from HMC and are effectively as accurate as the Laplace approximation. This matches the conventional wisdom that variational methods typically recover posterior means well.

Figure~\ref{fig:gg_post_var} presents an analogous figure for the posterior variability, plotting the standard deviation of each parameter. It smooths across random effects in the same way as Figure~\ref{fig:gg_post_means}. In interpreting this figure, note that points above the 45-degree line (the upper-left region) indicate worrying performance, as the approximate posterior variability is below that coming from the HMC estimates.
\begin{figure}[!ht]
\caption{Comparing Posterior Variability}
\label{fig:gg_post_var}
\includegraphics[width=\textwidth]{production_figures/compare_sd.pdf}
\caption*{\footnotesize \emph{Note}: This figure plots the estimated standard deviation from Schemes I-III and the Laplace approximation on the horizontal axis against the standard deviation of the posterior distribution from Hamiltonian Monte Carlo [HMC] on the vertical axis. Each parameter is plotted as a thin grey point; the average of the values inside each random effect is shown as a larger point.}
\end{figure}

Again consider first the Laplace approximation; its estimated standard deviations are often tightly clustered near the 45-degree line, but there are a number of random effects that are noticeably too small (above the 45-degree line). The performance of the variational algorithms is rather mixed, by comparison. Looking at Scheme I, almost all points show too small a standard deviation---with many random effects being considerably too small. Scheme III, however, improves the situation markedly. While slightly smaller, especially for points with large standard deviations, it tracks the 45-degree line closely and performs better than the Laplace approximation. As expected, Scheme II is somewhat of an intermediate case, improving some parameters but still having significant problems. Overall, therefore, Schemes I and II fall into the usual problem of understating posterior variance. By contrast, Scheme III appears to do rather well and does not show the obvious deficit of posterior variability relative to a fully Bayesian baseline. This corroborates results from \cite{menictas2019streamlined} that estimating $q(\bm{\beta}, \bm{\alpha})$ jointly performs well for (linear) hierarchical models with $J > 1$.

Finally, I show how these estimates change when using MAVB. I focus on the effect on posterior variability as the means are not materially affected by MAVB; Appendix~\ref{section_app:gg_extra} shows the analogous figure. Figure~\ref{fig:gg_mavb} presents the distribution of the gap in variability between the HMC estimates and the other methods, where negative values indicate a smaller standard deviation for the competitor method. Any point below the dotted line indicates that the corresponding percentile of effects has a smaller standard deviation than the HMC estimates. To make results interpretable, I report the percentage gap, e.g. $\left(\mathrm{sd}^{\mathrm{Laplace}}_k - \mathrm{sd}^{\mathrm{HMC}}_k\right)/\mathrm{sd}^{\mathrm{HMC}}_k \cdot 100$ for all parameters $k$ in $(\bm{\beta}, \bm{\alpha})$. To ensure that random effects with small $g_j$ are represented, the figure presents the statistic averaged across $g$, as in Figures~\ref{fig:gg_post_means} and~\ref{fig:gg_post_var}.

\begin{figure}[!ht]
\caption{Improvements from MAVB}
\label{fig:gg_mavb}
\includegraphics[width=\textwidth]{production_figures/compare_MAVB.pdf}
\caption*{\footnotesize \emph{Note}: This figure plots the percentiles of the gap between the standard deviations estimated via Hamiltonian Monte Carlo [HMC] and the approximate methods. The percentage gap, i.e. $\left(\mathrm{sd}^{\mathrm{Laplace}}_k - \mathrm{sd}^{\mathrm{HMC}}_k\right)/\mathrm{sd}^{\mathrm{HMC}}_k \cdot 100$, is shown. A negative value on the vertical axis indicates that the corresponding percentile has a smaller standard deviation than HMC. A vertical shift upward of the line indicates that the standard deviations of the parameters have increased.
The solid markers indicate the deciles and extremes of the distribution. The dashed line with hollow triangles represents the estimates without using MAVB. The red line with solid circles represents the results after using MAVB.}
\end{figure}

The results provide clear evidence for the important role of MAVB. Considering the random effects in the left panel, it is worth noting first that the Laplace approximation---commonly used by researchers---performs poorly for a number of parameter blocks (e.g. the lower percentiles). Scheme I shows a clear lack of variability in the posterior estimates, with all standard deviations estimated at least 20\% too precisely and around \emph{half} of all estimates having less than 75\% of the variability estimated in the fully Bayesian setting. After applying MAVB, the (red) solid line shows a considerable improvement, although it remains markedly below the HMC estimates and performs worse than the Laplace approximation. Large improvements are seen for the fixed effects ($\bm{\beta}$), where the estimates of the variability go from extremely poor to much closer to the Laplace approximation, which is itself markedly below the HMC benchmark. Scheme III is also worth considering in detail; even before applying MAVB, it performs more strongly than the Laplace approximation in that its curve has a much less severe ``tail'' (i.e. its worst blocks are around 25\% too small vs around 60\% for the Laplace approximation). MAVB provides some additional gains, ensuring that most parameter blocks are only around 10\% too small in terms of their variability. Scheme II is again somewhat intermediate; after applying MAVB, it is broadly comparable to the Laplace approximation.

To provide another interpretation of the role of MAVB, consider the accuracy measure from Section~\ref{section:simulations}, which measures similarity between two distributions, averaged within and then across parameter blocks: the Laplace approximation performs relatively well (90\%). Scheme I performs poorly (43\%) because it clearly fails to capture the posterior variance. MAVB increases this considerably (68\%), although it still falls below the Laplace approximation. Scheme III, however, outperforms the Laplace approximation (95\%), with a slight further improvement from MAVB (97\%).

Appendix~\ref{section_app:gg_extra} provides some additional results. First, it disaggregates Figure~\ref{fig:gg_post_var} by the type of random effect; the main implication is that the initial lack of variability from Scheme I is most pronounced for the fixed effects and random effects with small $g_j$ (age, ethnicity, income). The improvements from MAVB for those random effects are large and resolve much of the negative gap. Second, it examines the linear predictor (i.e. $\bm{x}_i^T\bm{\beta} + \bm{z}_i^T\bm{\alpha}$). It shows that MAVB has little effect, although all schemes perform well. In addition to closely estimating the posterior mean (Scheme I has a bias of $-0.002$ vs HMC), the standard deviation is also fairly close (bias of $-0.013$ or about $-2\%$), especially compared to the gaps seen in Figure~\ref{fig:gg_mavb}. A conjecture would be that MAVB as implemented here has little impact on the linear predictor because it is aimed at building correlations between parameter blocks; the ``stronger'' MAVB noted above might address such limitations.

\subsection{Choosing an Optimal Model}

Finally, I return to the substantive analysis in \citet{ghitza2013mrp}.
A key question when performing MRP is the complexity of the accompanying model. Even with the regularization implied by the hierarchical effects, it is still possible to overfit to the survey sample (\citealt{goplerud2018sparse}). The reported analysis relies on Model 9 without exploring this possibility. The computational burden needed to estimate multiple models and thereby engage in model testing and checking is often onerous for the applied researcher. I thus use the ability to rapidly fit variational approximations to deploy a standard model comparison technique (cross-validation) and examine whether a model of intermediate complexity should be preferred. Table~\ref{tab:gg_cv} reports a number of statistics on model fit.

\begin{table}[!ht]
\caption{Cross-Validation to Choose Optimal Model}
\label{tab:gg_cv}
\resizebox{\textwidth}{!}{
\begin{tabular}{l*{9}cr}
\hline\hline
\multicolumn{1}{l}{\multirow{2}{*}{Method}} & \multicolumn{9}{c}{Models} & \multirow{2}{*}{Time} \\
 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & \\
\hline
 & \multicolumn{9}{c}{2004 Election} & \\
\input production_figures/cv_models.tex
\hline\hline\\
\end{tabular}}
\bigskip
\begin{tabular}{l}
\begin{minipage}{\textwidth}%
\footnotesize
\emph{Note}: This table reports statistics on model fit. The first two rows for each election report fit statistics on the model estimated via Hamiltonian Monte Carlo that approximate cross-validation: the ``LOO'' information criterion and the WAIC information criterion (\citealt{gelman2013understanding,vehtari2017practical}). The third row reports the average out-of-sample deviance from a model fit using Scheme I. For all statistics, smaller is better and the best value is bolded. The time in minutes for each row to be estimated is shown in the final column; this includes estimation time and the time needed to compute the relevant fit statistic.
\end{minipage}
\end{tabular}
\end{table}

The first two rows (LOO and WAIC) are popular tools for deciding between non-nested Bayesian models (\citealt{vehtari2017practical}). Details on their exact calculation can be found in the relevant articles (\citealt{gelman2013understanding,vehtari2017practical}), but both are designed to be approximations to cross-validation that do not require fitting the Bayesian model repeatedly. Fortunately, both have diagnostics to assess whether the underlying approximations are reliable; unfortunately, the diagnostic tests fail in this setting. Almost all models report unacceptable violations of the underlying assumptions for both the LOO and WAIC, and the associated software explicitly encourages the user to resort to $K$-fold cross-validation.

On the other hand, variational inference provides a fast approximate alternative. The final row of the table (VI-CV) reports the average deviance (twice the negative log-likelihood) of the held-out predictions after conducting 10-fold cross-validation using Scheme I, where observations are allocated to each fold with equal probability. Formally, if observation $i$ has a prediction $\hat{p}_i$, the individual deviance is $-2 \left[y_i \ln(\hat{p}_i) + (n_i - y_i) \ln(1-\hat{p}_i)\right]$; observations with $n_i = 0$ are excluded from the reported average (a sketch of this computation is given below). The results are interesting and push against the decision to use Model 9; they show that while Model 1 performs noticeably worse than all other models, it is not necessarily best to use the most complex model.
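For completeness, the held-out deviance calculation can be sketched in a few lines of Python; \texttt{fit} and \texttt{predict} below are hypothetical stand-ins for the Scheme I estimation and prediction routines, and the code is illustrative rather than the replication software.
\begin{verbatim}
import numpy as np

def vi_cv_deviance(y, n, X, fit, predict, K=10, seed=1):
    # Average held-out binomial deviance under K-fold cross-validation.
    rng = np.random.default_rng(seed)
    fold = rng.integers(0, K, size=len(y))   # equal-probability folds
    devs = []
    for k in range(K):
        test = (fold == k)
        model = fit(y[~test], n[~test], X[~test])
        p_hat = predict(model, X[test])
        d = -2.0 * (y[test] * np.log(p_hat)
                    + (n[test] - y[test]) * np.log(1.0 - p_hat))
        devs.append(d[n[test] > 0])          # drop cells with n_i = 0
    return float(np.mean(np.concatenate(devs)))
\end{verbatim}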
Indeed, an intermediate model---Model 4---performs the best, although the differences in error between Models 4 and 9 are quantitatively small. Model 4 is also selected if Scheme II or III is used. Appendix~\ref{section_app:gg_extra} provides a more detailed exploration against a ``Bayesian gold standard.'' Using a new set of folds, it fits Models 1, 4, and 9 using ten-fold cross-validation, performing HMC and the Laplace approximation on each fold. This is extremely time-intensive---taking around \emph{ten} days to complete the whole process. It confirms that cross-validated HMC, the Laplace approximation, and Scheme I all select Model 4. The out-of-sample predictions from Scheme I and HMC are highly correlated (0.998). This gives some confidence that the results of the variational method can be used in lieu of prohibitively expensive classical cross-validation. When it is too expensive to conduct such an analysis, relying on methods such as simulation-based calibration (e.g. \citealt{yao2018work}) may be a feasible way to assess whether the variational approximation ``successfully'' approximated the posterior.

Returning to Table~\ref{tab:gg_models}, the major feature that distinguishes Model 4 from less complex models is the interactions between the core random effects (age, ethnicity, income) and state. This matches a reasonable expectation from political science: demographic effects are likely to vary across states, but the complex higher-order interactions involving region and the three-way interactions do not seem to add much predictive power. These results are useful to practitioners of MRP in three ways. First, complex hierarchical models can now be compared against other state-of-the-art machine learning methods, rather than relying on a very simple model (analogous to Model 1) because of computational costs (\citealt{bisbee2019barp,ornstein2019stacked}). Thus, it is an interesting and open question whether methods such as BART are actually superior for MRP tasks (\citealt{bisbee2019barp}) or whether properly specified complex hierarchical models can be competitive. Second, it suggests that interactions between demographics and state characteristics are important to include, although the evidence for going extremely ``deep'' and adding many higher-order interactions appears more limited. Finally, even if one prefers to fit a Bayesian model for the final regression, the ability to quickly search across models allows the researcher to narrow down a set of plausible candidate models for final exploration and model testing.

\section{Conclusion}
\label{section:conclusion}

This paper provided a new set of variational algorithms that, leveraging Polya-Gamma data augmentation (\citealt{polson2013polyagamma}), require only a mean-field assumption to estimate a logistic hierarchical regression with an arbitrary number and size of random effects. It provided multiple factorization assumptions: Scheme I required independence between the fixed effects and each block of random effects, whereas Scheme III relaxed that assumption at the expense of increased computational cost. All methods captured the posterior means quite accurately, even in complex models. As expected, in both simulations and real data Scheme I performed worse, especially in terms of understating the posterior variance for many random effects. The paper also provided a generic way to improve the performance of Scheme I, and of Schemes II and III to a lesser extent.
By leveraging the existence of a parameter expansion of the underlying model, either by allowing the means of the random effects to be non-zero or by imposing some translation, one can use a marginal augmentation sampler to improve the posterior approximation. This procedure (``marginally augmented variational Bayes''; MAVB) showed promising performance when applied to Scheme I: it increased the variance of the estimated approximation toward that of the samples drawn using a fully Bayesian procedure, although the variance remained too small on real data. However, given its speed even on complex models, MAVB provides a cheap way to make Scheme I a more viable approximation to the true posterior. It is also worth noting that Scheme III performed very well---often beating the very popular Laplace approximation on both real and simulated data.

Future work could proceed in at least two directions. First, the algorithms here can be naturally extended to count and multinomial outcomes, providing a more unified approach to variational estimation of non-linear hierarchical models. Incorporating a weakly informative prior such as that of \citet{huang2013simple} is also an important extension. Second, the usefulness of MAVB should be explored both theoretically and in the context of other models. As noted earlier, there is nothing about using MAVB that is specific to logistic hierarchical models \emph{per se}. Indeed, this idea of ``improving'' an approximation by pushing it through a Markov transition kernel can be generalized to a wide variety of MCMC samplers and models. It thus opens the question of which Markov transition density to use for other models that do not admit marginal augmentation. A reasonable conjecture is that as the mixing of the sampler improves, the transformed sample will be closer to the true posterior.

\section*{Supplemental Materials}

The supporting information contains derivations of the variational algorithm (Appendix~\ref{section_app:pg}), formal definitions and proofs for MAVB (Appendix~\ref{section_app:param_x}), results on accelerating CAVI using PX-VB and joint updates of certain parameters (Appendix~\ref{section_app:acceleration}), additional simulations (Appendix~\ref{section_app:simulations_extra}), and additional analyses of \citet{ghitza2013mrp} (Appendix~\ref{section_app:gg_extra}). Open-source statistical software to implement the algorithms in this paper is available on GitHub as noted in the acknowledgements. Materials to replicate the analyses in the paper can be found at the following link: \url{https://doi.org/10.7910/DVN/DI19IB}.

\bibliographystyle{ba}
{ "redpajama_set_name": "RedPajamaArXiv" }
2,029
% Preamble partially reconstructed: several macro definitions were garbled
% during text extraction; the macro names below marked as reconstructed are
% plausible guesses, with bodies as they appear in the garbled source.
\def\nsection#1{\setcounter{equation}{0}\section{#1}}
\def\theequation{\thesection.\arabic{equation}}
\catcode`@=11
\@addtoreset{equation}{section}
\@addtoreset{equation}{subsection}
\def\theequation{\ifnum\value{section}=0 \arabic{equation}\ignorespaces
 \else \ifnum\value{section}=-1 A.\arabic{equation}\ignorespaces
 \else \ifnum\value{subsection}=0 \thesection.\arabic{equation}\ignorespaces
 \else \thesection.\arabic{subsection}.\arabic{equation}\ignorespaces
 \fi \fi \fi}
\catcode`@=12
\def\cd{{\cal D}}
\def\papp{\partial^{++}} \def\App{A^{++}} \def\Dpp{\cd^{++}}
\def\pamm{\partial^{--}} \def\Amm{A^{--}} \def\Dmm{\cd^{--}}
\def\papm{\partial^{\pm\pm}} \def\Apm{A^{\pm\pm}} \def\Dpm{\cd^{\pm\pm}}
\def\na{\nabla} \def\op{\oplus} \def\ot{\otimes}
\def\hMp{${\widehat {\cal M}}^+$} \def\hM{${\widehat {\cal M}}$}
\def\0#1{{\stackrel{\circ}{#1}}}
\def\der#1{{\partial \over \partial #1}}
\def\pp#1#2{{\partial #1 \over \partial #2}}
\def\N#1{$N{=}#1$}
\def\square{\kern1pt\vbox {\hrule height 0.6pt\hbox{\vrule width 0.6pt\hskip 3pt \vbox{\vskip 6pt}\hskip 3pt\vrule width 0.6pt}\hrule height 0.6pt}\kern1pt}
\def\sd{self-dual} \def\sdty{self-duality}
\def\sP{super-Poincar\'e algebra} \def\YM{Yang-Mills}
\def\beq{\begin{equation}} \def\eeq{\end{equation}}
\def\la#1{\label{#1}}
\def\barr{\begin{array}{rll}} \def\earr{\end{array}}
\def\bea{\begin{eqnarray}} \def\eea{\end{eqnarray}}
% Definitions used in the body whose originals were lost (reconstructed):
\def\a{\alpha} \def\b{\beta} \def\g{\gamma} \def\d{\delta}
\def\dtt#1{\dot{#1}}
\def\gl#1{(\ref{#1})}
\def\fr#1#2{{\textstyle{#1\over #2}}}
\def\cp{{\cal P}}
\def\VEV#1{\left\langle #1 \right\rangle}
\font\bubble=msbm10
\begin{document}
\begin{titlepage}
\noindent hep-th/9808053 \hfill ITP--UH--15/98 \\
\vskip 1.0cm
\begin{center}
{\Large\bf String-induced Yang-Mills coupling to self-dual gravity~$^*$}\\
\vskip 1.5cm
{\large Chandrashekar Devchand} \\
{\it Max-Planck-Institut f\"ur Mathematik in den Naturwissenschaften}\\
{\it Inselstra\ss e 22-26, 04103 Leipzig, Germany}\\
{E-mail: devchand@mis.mpg.de}\\
\vskip 0.7cm
{\large Olaf Lechtenfeld} \\
{\it Institut f\"ur Theoretische Physik, Universit\"at Hannover}\\
{\it Appelstra\ss{}e 2, 30167 Hannover, Germany}\\
{http://www.itp.uni-hannover.de/\~{}lechtenf/}\\
\vskip 2.5cm
{\bf Abstract}
\end{center}
\begin{quote}
By considering $N{=}2$ string amplitudes we determine the $(2{+}2)$-dimensional target space action for the physical degrees of freedom: self-dual gravity and self-dual Yang-Mills, together with their respective infinite towers of higher-spin inequivalent picture states. Novel `stringy' couplings amongst these fields are essential ingredients of an action principle for the effective target space field theory. We discuss the covariant description of this theory in terms of self-dual fields on a hyperspace parametrised by the target space coordinate and a commuting chiral spinor.
\end{quote}
\vfill
\hrule width 5.cm \vskip.1in
{\small \noindent ${}^{*\ {}}$ supported in part by the Deutsche Forschungsgemeinschaft; grant LE-838/5-2}
\end{titlepage}
\hfuzz=10pt
\section{Introduction}
We have recently presented a covariant description of the physical degrees of freedom of the \N2 open string in terms of a self-dual Yang-Mills theory on a {\it hyperspace} parametrised by the coordinates of the $(2{+}2)$-dimensional target space $x^{\pm\dtt{\a}}$ together with a {\it commuting\/} chiral spinor $\eta^\pm$ \cite{dl1}.
The infinite tower of massless string degrees of freedom, corresponding to the inequivalent pictures (spinor ghost vacua) of this string~\cite{LS}, is compactly represented by a hyperspace generalisation of the prepotential originally used by Leznov~\cite{leznov,parkes} to encode the dynamical degree of freedom of a self-dual\ Yang-Mills (SDYM) theory. The generalised hyperspace Leznov Lagrangean yields an action describing the tree-level \N2 open string amplitudes~\cite{dl1}. This description reveals the symmetry algebra of the space of physical states to be the {\it Lie-algebra extension} of the Poincar\'e algebra \cite{ac} obtained from the \N1 super-Poincar\'e algebra\ by changing the statistics of the Grassmann-odd (fermionic) generators. Picture-raising~\cite{FMS,BKL} is thus interpreted as an {\it even\/} variant of a supersymmetry transformation. The purpose of this paper is to investigate whether closed strings allow incorporation into the above picture. The physical centre-of-mass mode is well-known to describe self-dual gravity (SDG) in $(2+2)$ dimensions \cite{OVold}. The effect of inequivalent picture states has, however, hitherto not been taken into account. As for the open case, the closed sector physical state space consists of an infinite tower of massless picture-states of increasing spin~\cite{BL1,BL2}. In particular, the scattering of open strings off closed strings determines a particular coupling of the SDG tower of picture states to the SDYM tower~\cite{marcus}. Motivated by our previous results for the open string sector~\cite{dl1}, we first (in section 2) set up the general framework of curved-hyperspace self-duality, in the expectation that it underlies the full (open ${+}$ closed) \N2 string dynamics. This involves a generalisation of the formalism previously developed to study self-dual gravity \cite{revisited} and self-dual supergravity \cite{sdsg} to the hyperspace introduced in \cite{dl1}. Our formalism is basically a field-theoretical variant of the twistor construction. The dynamical degrees of freedom of hyperspace self-duality are seen to be encoded in a hyperspace variant of Plebanski's `heavenly' equation~\cite{plebanski}. Gauge covariantising the construction yields a curved hyperspace variant of the Leznov equation as well. These two equations, however, do not provide a complete description of the effective \N2 string dynamics, for perusal of string scattering amplitudes (section 3) reveals further couplings between the gravitational and gauge degrees of freedom. Taking these into account yields an effective target space action (section 4) for the two infinite towers of target space fields. We discuss the hyperspace-covariant description of this action and write down homogeneous hyperspace equations of motion. Finally, a consistent truncation is performed (section 5) to multiplets of 9 fields from the Plebanski tower and 5 from the Leznov tower. Their combined action is rather reminiscent of the maximally helicity-violating projection of (non-self-dual) light-cone $N{=}8$ {\it super\/}gravity plus $N{=}4$ {\it super\/}Yang-Mills, with the replacement of the fermionic chiral superspace coordinate by a commuting spinor.
\vfil\goodbreak
\section{Self-dual gravity picture album}
Consider a {\it self-dual chiral hyperspace\/} ${\widehat {\cal M}}^+$\ with coordinates $ \{ x^{\alpha\dtt{\mu}}, \eta^\alpha \}$, where $\eta^\alpha $ is a {\it commuting\/} spinor and $x^{\alpha\dtt{\mu}}$ are standard coordinates on ${\bubble R}^{2,2}$. As for self-dual superspaces, only half the global tangent space group $SO(2,2)\simeq SL(2,{\bubble R})\times SL(2,{\bubble R})$ is gauged. One of the world indices is therefore identical to the corresponding tangent index (denoted by early Greek indices $\alpha,\beta,\gamma,$ etc.) and only the dotted index has `world' and `tangent' variants. The components of the spinor $\eta^\alpha$ therefore do not transform under space diffeomorphisms. Covariant derivatives in the chiral hyperspace thus take the form \begin{equation} {\cal D}_{\alpha}\ =\ \partial_{\alpha}\ +\ E^{\beta \dtt{\mu}}_{\alpha }\partial_{\beta \dtt{\mu}}\ +\ \omega_{\alpha}\ \qquad,\qquad {\cal D}_{\alpha \dtt{\a}}\ =\ E^{\beta \dtt{\mu}}_{\alpha \dtt{\a}}\partial_{\beta \dtt{\mu}}\ +\ \omega_{\alpha\dtt{\a}} \quad,\end{equation} with the partial derivatives $\ \partial_{\alpha} \equiv \der{\eta^{\alpha}}\ $ and $\partial_{\alpha \dtt{\mu}} \equiv \der{x^{\alpha \dtt{\mu}}}\ $. The components of the spin connection $( \omega_{\alpha}, \omega_{\alpha\dtt{\a}})$ are determined in terms of the vielbein fields in virtue of zero-torsion conditions. We choose them in a {\it self-dual gauge}, $\omega_{\alpha}=(\omega_{\alpha} )_\dtt{\b}^\dtt{\g} \Gamma^\dtt{\b}_\dtt{\g}$ and $\omega_{\alpha\dtt{\a}} = (\omega_{\alpha\dtt{\a}} )_\dtt{\b}^\dtt{\g} \Gamma^\dtt{\b}_\dtt{\g}$ , i.e. taking values in the Lie algebra of the gauged $SL(2,{\bubble R})$. They therefore act on dotted tangent space indices. Thus restricting the local part of the tangent space group to half of the Lorentz group is (gauge) equivalent to imposing self-duality on the corresponding curvatures, viz., \begin{equation}\begin{array}{rll} [{\cal D}_{\alpha}, {\cal D}_{\beta}]\ &=&\ \epsilon_{\alpha\beta} R \\[5pt] [{\cal D}_{\alpha}, {\cal D}_{\beta \dtt{\b}}]\ &=&\ \epsilon_{\alpha\beta} R_{ \dtt{\b}} \\[5pt] [{\cal D}_{\alpha \dtt{\a}}, {\cal D}_{\beta \dtt{\b}}]\ &=&\ \epsilon_{\alpha\beta} R_{\dtt{\a}\dtt{\b}} \quad.\la{cvv1} \end{array}\end{equation} With the undotted indices thus `de-gauged', we can proceed in analogy to the Yang-Mills\ case \cite{dl1} and enlarge ${\widehat {\cal M}}^+$\ to a harmonic space with coordinates $\{ x^{\pm\dtt{\mu}}, \eta^{\pm}, u^\pm_\alpha \} ,$ where $ x^{\pm\dtt{\mu}} = u^\pm_\alpha x^{\alpha\dtt{\mu}},$ and $\eta^{\pm} = u^\pm_\alpha \eta^{\alpha} $. The equations \gl{cvv1} are equivalent to the following curvature constraints: \begin{eqnarray} & [{\cal D}^+ , {\cal D}^+_\dtt{\b} ]\ =\ 0\quad ,\qquad [{\cal D}^+_\dtt{\a} , {\cal D}^+_\dtt{\b} ]\ =\ 0 \la{c++}\\[5pt] & [{\cal D}^- , {\cal D}^-_\dtt{\b} ]\ =\ 0\quad ,\qquad [{\cal D}^-_\dtt{\a} , {\cal D}^-_\dtt{\b} ]\ =\ 0 \la{c--}\\[5pt] & [{\cal D}^+ , {\cal D}^- ]\ =\ R \quad ,\qquad [{\cal D}^+ , {\cal D}^-_\dtt{\b} ]\ =\ [ {\cal D}^+_\dtt{\b} , {\cal D}^- ]\ =\ R_\dtt{\b} \quad ,\qquad [{\cal D}^+_\dtt{\a} , {\cal D}^-_\dtt{\b} ]\ =\ R_{\dtt{\a}\dtt{\b}}\la{c+-} \quad. \la{cvv2}\end{eqnarray} These allow, by the usual Frobenius argument, the choice of an analytic {\it Frobenius frame} in which ${\cal D}^+ , {\cal D}^+_\dtt{\b}$ are flat.
In this frame, diffeomorphism and Lorentz invariances are determined in terms of analytic (independent of $x^{-\dtt{\mu}}, \eta^-$) degrees of freedom. We do not transform the harmonic variables $u^\pm_\alpha$\,. This facilitates the application to \N2 string theory, which requires a fixing of the complex structure and the use of corresponding light-cone variables. Moreover, let us choose the transformation parameters to be independent of the spinorial variables $\eta^{\pm}$. We thus consider the following action of the local group of infinitesimal diffeomorphisms \begin{equation} \delta\, x^{+\dtt{\mu}}\ =\ \lambda^{+\dtt{\mu}}(x^{+\dtt{\mu}},u^\pm)\quad ,\qquad \delta\, x^{-\dtt{\mu}}\ =\ \lambda^{-\dtt{\mu}}(x^{+\dtt{\mu}},x^{-\dtt{\mu}}, u^\pm)\quad ,\qquad \delta\, \eta^\pm\ =\ 0 \quad. \end{equation} The gauge choice ${\cal D}^+=\partial^+$ and ${\cal D}^+_\dtt{\b}=\partial^+_\dtt{\b}$ is tantamount to the following relationship between the non-analytic parameter $\lambda^{-\dtt{\mu}}$ and the analytic parameter of local $sl(2,{\bubble R})$ transformations $\lambda_\dtt{\a}^\dtt{\b} = \lambda_\dtt{\a}^\dtt{\b}(x^{+\dtt{\mu}},u^\pm)$: \begin{equation} \partial^+_\dtt{\a} \lambda^{-\dtt{\mu}}\ =\ -\, \lambda_\dtt{\a}^\dtt{\mu} \quad. \la{gauge1}\end{equation} In this frame the covariant derivatives take the form \begin{equation}\begin{array}{rll} {\cal D}^+ &=& \partial^+ \\[5pt] {\cal D}^+_{\dtt{\a}}&=& \partial^+_{\dtt{\a}} \\[5pt] {\cal D}^- &=& -\partial^-\ +\ E^\dtt{\mu}\partial^-_\dtt{\mu}\ +\ E^{--\dtt{\mu}}\partial^+_\dtt{\mu}\ +\ \omega^- \\[5pt] {\cal D}^-_{\dtt{\a}} &=& -E_\dtt{\a}^\dtt{\mu} \partial^-_{\dtt{\mu}}\ +\ E^{--\dtt{\mu}}_\dtt{\a} \partial^+_{\dtt{\mu}}\ +\ \omega^-_\dtt{\a} \quad, \la{D_an}\end{array}\end{equation} where the vielbein fields transform as: \begin{equation}\begin{array}{rll} \delta\, E^\dtt{\mu} &=& E^\dtt{\nu} \partial^-_\dtt{\nu} \lambda^{+\dtt{\mu}}\\[5pt] \delta\, E^{--\dtt{\mu}} &=& {\cal D}^- \lambda^{-\dtt{\mu}}\\[5pt] \delta\, E_\dtt{\a}^\dtt{\mu} &=& {\cal D}^-_\dtt{\a} \lambda^{+\dtt{\mu}} + \lambda_\dtt{\a}^\dtt{\b} E_\dtt{\b}^\dtt{\mu} \\[5pt] \delta\, E^{--\dtt{\mu}}_\dtt{\a} &=& {\cal D}^-_\dtt{\a} \lambda^{-\dtt{\mu}} + \lambda_\dtt{\a}^\dtt{\b} E_\dtt{\b}^{--\dtt{\mu}}\quad. \la{transf}\end{array}\end{equation} We note that the fields $E^\dtt{\mu}$ and $E_\dtt{\a}^\dtt{\mu}$ have transformations depending only on analytic transformation parameters. Moreover, in virtue of \gl{c--} and \gl{c+-} these fields are analytic, satisfying the set of equations \begin{equation}\begin{array}{rll} &&\partial^+ E^\dtt{\mu}\ =\ 0\ =\ \partial^+_\dtt{\a} E^\dtt{\mu} \\[5pt] &&\partial^+ E_\dtt{\b}^\dtt{\mu}\ =\ 0\ =\ \partial^+_\dtt{\a} E_\dtt{\b}^\dtt{\mu} \\[5pt] &&{\cal D}^- E_\dtt{\a}^\dtt{\mu} + {\cal D}^-_\dtt{\a} E^\dtt{\mu}\ =\ 0 \\[5pt] && {\cal D}^-_{[\dtt{\b}} E_{\dtt{\g}]}^\dtt{\mu}\ =\ 0\quad. \la{Ee}\end{array}\end{equation} We may therefore choose $x^{+\dtt{\mu}}$ such that $E^\dtt{\mu} = 0$ and $E_\dtt{\a}^\dtt{\mu} =\delta_\dtt{\a}^\dtt{\mu}$\,. In this gauge, the relation \gl{gauge1} is supplemented by \begin{equation} \partial^-_\dtt{\a} \lambda^{+\dtt{\mu}}\ =\ \lambda_\dtt{\a}^\dtt{\mu}\quad. \la{gauge2}\end{equation} All diffeomorphisms are thus effected by residual Lorentz transformations, allowing world indices to be freely replaced by tangent indices with an action of the Lorentz group.
In this gauge, the last two equations in \gl{Ee} yield the conditions \begin{equation} (\omega^- )^\dtt{\d}_{\dtt{\g}}\ =\ 0\quad,\qquad (\omega^-_{[\dtt{\b}} )^\dtt{\d}_{\dtt{\g}]}\ =\ 0\quad. \la{o1}\end{equation} The former condition is consistent with the $\eta$-independence of the Lorentz parameters $\lambda^\dtt{\b}_\dtt{\a}\,$. Clearly, the curvature constraints $R=0=R_\dtt{\a}$ follow. The non-trivial covariant derivatives therefore take the simpler form \begin{equation}\begin{array}{rll} {\cal D}^- &=& -\partial^-\ +\ E^{--\dtt{\mu}}\partial^+_\dtt{\mu}\ \\[5pt] {\cal D}^-_{\dtt{\a}} &=& -\partial^-_{\dtt{\a}}\ +\ E^{--\dtt{\mu}}_\dtt{\a} \partial^+_{\dtt{\mu}}\ +\ \omega^-_\dtt{\a} \quad. \la{D_frob}\end{array}\end{equation} For the vielbeins appearing here, the torsion constraints implicit in \gl{c--} and \gl{c+-} yield the equations, \begin{eqnarray} &&\partial^+ E^{--\dtt{\mu}}\ =\ 0\ =\ \partial^+_\dtt{\a} E^{--\dtt{\mu}} \\[5pt] &&\partial^+ E_\dtt{\b}^{--\dtt{\mu}}\ =\ 0 \la{an}\\[5pt] && \partial^+_\dtt{\a} E_\dtt{\b}^{--\dtt{\g}}\ =\ (\omega^-_\dtt{\b} )_\dtt{\a}^\dtt{\g}\quad. \la{o2} \end{eqnarray} The latter, together with \gl{o1} and the tracelessness of $sl(2,{\bubble R})$ matrices (viz. $(\omega^-_\dtt{\b})_\dtt{\a}^\dtt{\a}=0$), yield an expression for the connection component $\omega^-_\dtt{\a}$ and vielbein $E^{--\dtt{\mu}}_\dtt{\a}$\,, \begin{equation} \left(\omega^-_\dtt{\a} \right)^\dtt{\g}_\dtt{\b}\ =\ \partial^+_\dtt{\a} E_\dtt{\b}^{--\dtt{\g}}\ =\ \partial^+_{\dtt{\a}} \partial^{+\dtt{\g}} \partial^+_{\dtt{\b}} F^{----}\quad, \end{equation} where the prepotential $F^{----}$ is $\eta^-$-independent, $\partial^+ F^{----} =0$\,, so as to satisfy \gl{an}. The $\eta^+$-dependence of $E^{--\dtt{\mu}}_\dtt{\a}$ yields the remaining vielbein, which satisfies the linear equation \begin{equation} {\cal D}^-_\dtt{\a} E^{--\dtt{\b}}\ =\ -\, \partial^- E^{--\dtt{\b}}_\dtt{\a}\quad. \end{equation} The torsion constraint from \gl{c--}, \begin{equation} {\cal D}^-_{[\dtt{\b}} E^{--\dtt{\mu}}_{\dtt{\g}]}\ =\ 0\quad, \end{equation} yields, on using an analytic pre-gauge invariance of $F^{----}$, the extended Plebanski equation, \begin{equation} \partial^{-\dtt{\a}} \partial^+_\dtt{\a} F^{----}\ =\ \fr12\,\partial^{+\dtt{\a}}\partial^{+\dtt{\b}}F^{----}\ \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}F^{----}\quad. \la{P}\end{equation} Since $F^{----}$ is $\eta^-$-independent and transforms in an $\eta$-independent fashion, it can be thought of as a Laurent expansion in $\eta^+$. This equation therefore encapsulates an infinite tower of equations. Its Lagrangean is of the compact Plebanski form, with a potential term of the cubic Monge-Amp\`{e}re type: \begin{equation} {\cal L}^{(-8)}_P\ =\ \fr12\, \partial^{-\dtt{\a}}F^{----}\, \partial^+_\dtt{\a} F^{----}\ +\ \fr16\,F^{----}\,\partial^{+\dtt{\a}}\partial^{+\dtt{\b}}F^{----}\ \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}F^{----}\ . \la{P_lagr}\end{equation} So far we have just considered pure self-dual gravity in $(2{+}2)$-dimensional chiral hyperspace. Let us now add self-dual Yang-Mills degrees of freedom by gauge covariantising the curvature constraints~\gl{cvv2}. 
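Before doing so, it is perhaps worth spelling out the ``infinite tower'' remark above (a short illustrative expansion in the conventions already introduced; we restrict to non-negative powers of $\eta^+$ for simplicity). Since no $\eta$-derivatives appear in \gl{P}, writing
\begin{equation}
F^{----}(x,\eta^+,u)\ =\ \sum_{n\geq 0}\, (\eta^+)^n\, F^{(n)}(x,u)
\end{equation}
decomposes \gl{P} into the component equations
\begin{equation}
\partial^{-\dtt{\a}} \partial^+_{\dtt{\a}} F^{(n)}\ =\ \fr12\, \sum_{m=0}^{n}\, \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}F^{(m)}\ \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}F^{(n-m)}\quad .
\end{equation}
The lowest ($n{=}0$) component thus satisfies an equation of precisely the form \gl{P} and describes pure self-dual gravity, while every higher component is sourced quadratically by the lower ones. With this in mind, we now turn to the coupling of Yang-Mills degrees of freedom.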
In the analytic gauge ($A^+=0=A^+_\dtt{\a}$), this is achieved by `minimally coupling' Yang-Mills potentials to the negatively charged covariant derivatives, with coupling constant $g$, \begin{equation}\begin{array}{rll} {\cal D}^- &\rightarrow& \widehat{\cal D}^-\ =\ {\cal D}^-\ +\ gA^- \\[5pt] {\cal D}^-_{\dtt{\a}} &\rightarrow& \widehat{\cal D}^-_{\dtt{\a}}\ =\ {\cal D}^-_{\dtt{\a}}\ +\ gA^-_{\dtt{\a}} \quad. \la{D_gym}\end{array}\end{equation} The components of the gauge potentials take values in the Lie algebra of the gauge group and have the gauge transformations \begin{equation}\begin{array}{rll} A^- &\rightarrow& \Lambda\, A^-\, \Lambda^{-1}\ -\ \fr1g\, {\cal D}^- \Lambda \,\Lambda^{-1}\\[5pt] A^-_{\dtt{\a}} &\rightarrow& \Lambda\, A^-_{\dtt{\a}}\, \Lambda^{-1}\ -\ \fr1g\,{\cal D}^-_{\dtt{\a}}\Lambda\,\Lambda^{-1} \quad, \end{array}\end{equation} with analytic parameter $\Lambda = \Lambda(x^+,\eta^+,u)$ taking values in the gauge group. The coupled gravity-Yang-Mills self-duality\ conditions thus take the form of the curvature constraints \begin{eqnarray} &[\widehat{{\cal D}}^-_\dtt{\a} , \widehat{{\cal D}}^-_\dtt{\b} ]\ =\ 0 \la{c1vv}\\[5pt] &[\widehat{{\cal D}}^- , \widehat{{\cal D}}^-_\dtt{\a} ]\ =\ 0 \la{c1sv}\\[5pt] &[\partial^+ , \widehat{{\cal D}}^- ]\ =\ F \quad,\qquad [\partial^+ , \widehat{{\cal D}}^-_\dtt{\a} ]\ =\ [ \partial^+_\dtt{\a} , \widehat{{\cal D}}^- ]\ =\ F_\dtt{\a} \la{c2}\\[5pt] &[\widehat{{\cal D}}^+_\dtt{\a} , \widehat{{\cal D}}^-_\dtt{\b} ]\ =\ R_{\dtt{\a}\dtt{\b}}\ +\ F_{\dtt{\a}\dtt{\b}}\la{c3}\quad. \end{eqnarray} Here, $F_{\dtt{\a}\dtt{\b}}(x,\eta)$, resp. $R_{\dtt{\a}\dtt{\b}}(x,\eta)$, are symmetric and have the corresponding ${\bubble R}^{2,2}$ Yang-Mills, resp. Weyl, self-dual curvatures as their evaluations at $\eta=0$. The fields $F\,,\, F_\dtt{\a}\,$ and $F_{\dtt{\a}\dtt{\b}}\,$ take values in the gauge algebra. In virtue of \gl{c2} and \gl{c3} we have the expressions \begin{eqnarray} &A^-\ =\ \partial^+ \Phi^{--}\quad,\quad A^-_{\dtt{\a}}\ =\ \partial^+_{\dtt{\a}} \Phi^{--}\\[5pt] &F\ =\ \partial^+\partial^+ \Phi^{--}\quad,\quad F_\dtt{\a}\ =\ \partial^+\partial^+_{\dtt{\a}} \Phi^{--} \quad,\quad F_{\dtt{\a}\dtt{\b}}\ =\ \partial^+_\dtt{\b} \partial^+_{\dtt{\a}} \Phi^{--} \end{eqnarray} in terms of a generalised Leznov prepotential $\Phi^{--}$. These expressions maintain their flat space forms \cite{dl1} since in the analytic gauge the positively-charged derivatives remain `flat' in both gauge and gravitational senses. The Plebanski prepotential $F^{----}$ is a scalar under the gauge group; its equation remains unmodified by the Yang-Mills\ coupling. This is consistent with the stress-free self-dual Yang-Mills\ field {\it not} providing any source for the gravitational field. The equation for $\Phi^{--}$, on the other hand, is a generally covariant version of the Leznov equation obtained from the gauge-algebra valued part of \gl{c1vv}, \begin{equation} {\cal D}^{-\dtt{\a}} \partial^+_\dtt{\a} \Phi^{--}\ +\ \fr{g}{2}\, \left[ \partial^{+\dtt{\a}} \Phi^{--}\ ,\ \partial^+_\dtt{\a} \Phi^{--} \right]\ =\ 0 \quad. \end{equation} More explicitly, \begin{equation} \partial^{-\dtt{\a}} \partial^+_\dtt{\a} \Phi^{--}\ =\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} F^{----}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \Phi^{--}\ +\ \fr{g}{2}\, \left[ \partial^{+\dtt{\a}} \Phi^{--}\ ,\ \partial^+_\dtt{\a} \Phi^{--} \right]\quad. 
\la{L}\end{equation} This is the equation which determines the effective dynamics on ${\bubble R}^{2,2}$ of the residual vector potential $A^-_\dtt{\a} = \partial^+_\dtt{\a} \Phi^{--}$. The remaining equation for $\Phi^{--}$, the one arising from the gauge-algebra part of \gl{c1sv}, determines the $\eta^+$-evolution of $A^-_\dtt{\a}$ from its $\eta^+=0$ `initial data', namely, \begin{equation} \partial^- A^-_\dtt{\a}\ =\ \partial^-_\dtt{\a} \partial^+ \Phi^{--}\ +\ E^{--\dtt{\b}}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \Phi^{--} \ -\ E_\dtt{\a}^{--\dtt{\b}}\ \partial^+ \partial^+_\dtt{\b} \Phi^{--}\ +\ g\, \left[ \partial^+ \Phi^{--}\ ,\ \partial^+_\dtt{\a} \Phi^{--} \right]\ . \end{equation} It may be noted that the combined system of equations \gl{P} and \gl{L} cannot be derived from an action principle, since their mutual coupling appears only in \gl{L}. In the next section, we shall see that new string-induced couplings provide a remedy. The hyperspace fields $F^{----}(x,\eta)$ and $\Phi^{--}(x,\eta)$ can clearly be thought of as ${\bubble R}^{2,2}$ fields, taking values in the infinite-dimensional algebra spanned by polynomials of $\eta^\alpha$. Such algebras have been investigated in~\cite{v}. In particular, consistent higher-spin free-field equations were shown to arise as components of zero-curvature conditions for connections taking values in such algebras. It remains to be seen whether our equations allow interpretation as interacting variants of these free-field equations. \section{Open and closed string amplitudes} Having obtained the dynamical equations \gl{P} and \gl{L} for the self-dual hyperspace gravitational and gauge degrees of freedom, we are ready to ask whether these provide a correct description of $N{=}2$ string dynamics. By considering scattering amplitudes, we shall see that these naive equations require modification which, moreover, yields equations derivable from an action principle. The required modification includes `stringy' contributions which vanish in the infinite string tension limit. Since we would like to describe in particular the coupling of self-dual Yang-Mills to self-dual gravity (in $2{+}2$ dimensions), let us consider a general (mixed) scattering amplitude involving $n_c$ closed and $n_o$ open $N{=}2$ strings. Each such amplitude has a topological expansion in powers of the open string coupling~$g$ and in powers of a phase given by the angle~$\theta$ of the spectral flow. The expansion is governed by the world-sheet instanton number~$c\in{\bubble Z}$ and the world-sheet Euler number $\chi=2-2h-b-x$ in the presence of $h$ handles, $b$ boundaries and $x$ cross-caps. Introducing the `spin' \begin{equation} J\ =\ 2n_c + n_o -2\chi \ =\ 2n_c + n_o - 4 + 4h + 2b + 2x \la{spin} \end{equation} one finds \cite{berk1,LS,dl1} for any given choice of $(n_c,n_o)$, the amplitude, \begin{equation} A\ =\ \sum_J\ g^J \ A_J \ =\ \sum_{J,c} \left( 2J \atop J{+}c \right)\ g^J \ \sin^{J-c}\fr{\theta}{2}\ \cos^{J+c}\fr{\theta}{2}\ A_{J,c} \la{topexp} \end{equation} where $A_{J,c}$ is a correlator of vertex operators~$V$ on a world-sheet of fixed topology, integrated over all moduli. Clearly, $J$ runs upwards in steps of two, starting from $J_{\rm min}=2n_c+n_o-2-2\delta_{n_o,0}$. Due to unbalanced spinor ghost zero modes, the instanton sum is constrained to $|c|\leq J$. 
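As a quick illustrative check of the counting in \gl{spin} (a worked example added for orientation): for $n_o>0$ the maximal Euler number is realised on the disk ($h=x=0$, $b=1$), so that $\chi=1$ and $J_{\rm min}=2n_c+n_o-2$, whereas for purely closed amplitudes ($n_o=0$) the sphere gives $\chi=2$ and $J_{\rm min}=2n_c-4$; both cases reproduce $J_{\rm min}=2n_c+n_o-2-2\delta_{n_o,0}$ quoted above. The simplest mixed amplitude, one closed and two open strings at tree level, therefore starts at $J_{\rm min}=2$.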
In four dimensions, the open-string coupling~$g$ is just the (dimensionless) Yang-Mills gauge coupling, while the closed-string coupling~$g^2$ is related to the (dimensionful) gravitational coupling~$\kappa$ via \begin{equation} \kappa\ \sim\ \sqrt{\alpha'}g^2 \end{equation} where $\alpha'$ denotes the inverse string tension. The string coupling~$g$ and the $\theta$ angle change under global $SO(2,2)$ tangent space transformations of the target space~\cite{parkes,BL1,dl1}. We may therefore set $g=1$ and $\theta=0$ for convenience.\footnote{ The dependence on $g$ and $\theta$ may easily be restored by performing an appropriate $SO(2,2)$ transformation.} As a consequence, only the top instanton number, $c=J$, contributes, i.e. \begin{equation} A\ =\ \sum_J A_{J,J} \la{amptop} \end{equation} with $A_{J,J}$ carrying helicity~$J$.\footnote{ We split $so(2,2)=sl(2)\oplus sl(2)'$. `Helicity' is the eigenvalue of the non-compact generator of $sl(2)$ which we choose to diagonalise.} Each partial amplitude $A_{J,J}$ contains an integral over four moduli spaces~${\cal M}_i\,$, corresponding to the world-sheet supergravitational non-gauge degrees of freedom of the metric, the two gravitini, and the Maxwell field. The respective real dimensions are \begin{eqnarray} {\rm dim}_{\bubble R} {\cal M}_{\rm metric}\ &=&\ 2n_c+n_o-3\chi \ =\ J-\chi \nonumber\\ {\rm dim}_{\bubble R} {\cal M}^\pm_{\rm gravitino} \ &=&\ 2n_c+n_o-2\chi\pm c \ =\ J\pm c \ \buildrel{c=J}\over{\longrightarrow}\ \cases{2J & ($+$)\cr 0 & ($-$)\cr} \\ {\rm dim}_{\bubble R} {\cal M}^\pm_{\rm Maxwell}\ &=&\ 2n_c+n_o- \chi \ =\ J+\chi \nonumber \end{eqnarray} The gravitini modular integral can be performed and yields $2J$ picture-raising insertions $\tilde{\cp}^+$ in the path integral~\cite{BKL,BL2}. For $\chi>0$, the Maxwell modular integral is trivial due to spectral flow invariance. Likewise, the metric moduli for positive $\chi$ reduce to the world-sheet positions of the vertex operators. The final matter-plus-ghost path integral is a superconformal correlation function of antighost zero modes, picture-raisers, and (canonical) local vertex operators which create the string states from the vacuum. Which string states are to be scattered? It is known from the relative BRST cohomology of the {\it open} $N{=}2$ string that its spectrum consists of a single massless physical state at each value of the picture charge $(\pi_+,\pi_-)$ labelling inequivalent spinor ghost vacua \cite{JL}. An internal symmetry group~$G$ is incorporated by requiring these states to carry Chan-Paton adjoint representation indices of~$G$. The difference $\pi_+{-}\pi_-$ changes continuously under the action of spectral flow. Since the Maxwell modular integration entails an averaging over the parameter $\rho$ of spectral flow, one must identify the equivalent sectors $(\pi_+,\pi_-)\sim (\pi_+{+}\rho,\pi_-{-}\rho)$. We shall use the $\pi_+{=}\pi_-$ representative. The total picture number $\pi\equiv\pi_+{+}\pi_-$ takes integral values, and the helicity of the state is $j=1{+}\pi/2\in{1\over 2}{\bubble Z}$. For states of non-zero momentum~$k$, picture-changing can be used to implement an equivalence relation among all pictures~\cite{JL}. In this case, a single open string state interacts with itself, unless the Chan-Paton group~$G$ is abelian. Such an identification, however, ruins target space covariance, because picture-raising by $\tilde{\cp}^+$ increases not only $\pi$ by one, but also the helicity~$j$ by half a unit. 
It is therefore advantageous to distinguish the unique physical states in different pictures~$\pi$. The canonical ($j{=}0$) open string state resides at $\pi{=}-2$. The corresponding (canonical) vertex operators $V^o_{\pi_i=-2}(k_i)$ are located on the world-sheet boundaries and feed in target space momenta $k_i\,$. The scattering amplitude does not depend on the positions of the $2J$ picture-raisers~$\tilde{\cp}^+$; this is the statement of picture equivalence~\cite{FMS}. Hence, we are free to arbitrarily fuse the picture-raisers with some of the vertex operators, which raises their picture assignments, $V^o_{-2} \to V^o_{\pi>-2}$. In this way we arrive at a correlation function of the form~\footnote{ We suppress the additional appearance of $J{-}\chi$ conformal antighost and $J{+}\chi$ Maxwell antighost insertions, which balance the ghost charges of the {\it local\/} vertex operators and are used to transform them to {\it integrated\/} ones.} \begin{equation} \VEV{\ V^o_{\pi_1}(k_1)\ V^o_{\pi_2}(k_2)\ \ldots\ V^o_{\pi_{n_o}}(k_{n_o})\ } \la{opencorr}\end{equation} with the picture or helicity selection rule \begin{equation}\begin{array}{rll} \pi_{\rm tot}\ &=&\ \sum_{i=1}^{n_o} \pi_i\ =\ -2n_o + 2J\ =\ -4\chi \\[5pt] j_{\rm tot}\ &=&\ \sum_{i=1}^{n_o} j_i\ =\ {1\over 2} 2J\ =\ J \quad. \la{select1}\end{array}\end{equation} The first line follows from the second since $\pi=-2{+}2j$. Since the correlator \gl{opencorr} is invariant under picture-changes, its value cannot depend on the distribution of $\{\pi_i\}$ (or $\{j_i\}$), provided the selection rules~\gl{select1} hold. A preferred arrangement of picture charges is \begin{equation} \VEV{\ V^o_{-4\chi}(k_1)\ V^o_{0}(k_2)\ \ldots\ V^o_{0}(k_{n_o})\ } \la{pref}\end{equation} corresponding to $j_i=1-2\chi\delta_{i1}$. In order to repeat this analysis for the closed string, we note that the semi-relative BRST cohomology of the closed $N{=}2$ string also consists of a single massless (now, color-singlet) physical state at any value of the picture charge $(\pi_+,\pi_-)$~\cite{JL}. In principle, one could consider independent left- and right-moving picture charges; however, they need to be identified in the semi-relative construction. Moreover, since the coupling to open strings via world-sheet boundaries enforces a left--right relation for global properties, there exists only a single set of picture charges. Compared to the open string case, the only alteration then is a different canonical picture for the physical excitation, namely $\pi{=}-4$ with $j{=}0$. This modifies the picture--helicity relation to $\pi=-4{+}2j$ but does not change the selection rules~\gl{select1}. Of course, closed-string vertex operators $V^c_\pi$ are to be inserted in the interior of the world-sheet. A convenient distribution of picture charges among the vertex operators inside a closed-string correlator is obtained from \gl{pref} by replacing the labels `$o$' by `$c$'. Any theory of open strings generates intermediate closed strings at the loop level. The most general setup contains them already at tree level, i.e. as external closed-string states. Thus, it is of interest to consider {\it mixed amplitudes}, which describe scattering processes involving open as well as closed string external states. Having already allowed for handles, boundaries and cross-caps of the world-sheet, we simply consider both kinds of vertex operators simultaneously.
The selection rule \begin{equation} \pi_{\rm tot}\ =\ -4\chi \qquad\qquad{\rm resp.}\qquad\qquad j_{\rm tot}\ =\ J \la{select2}\end{equation} for a mixed correlator \begin{equation} A_{J,J}^{o\ldots o\ c\ldots c}\ \sim\ \VEV{\ \prod_{i=1}^{n_o} V^o_{\pi_i}(k_i)\ \prod_{j=n_o+1}^{n_o+n_c} V^c_{\pi_j}(k_j)\ } \end{equation} remains in effect; and we may again choose all but one external state in the $\pi{=}0$ picture. Using \gl{select2}, we see that, for a given set $\{j_1,\ldots,j_{n_o};j_{n_o+1},\ldots,j_{n_o+n_c}\}$ of external state helicities, the amplitude \gl{amptop} only receives contributions from topologies with fixed Euler number \begin{equation} \chi\ =\ (2n_c + n_o - J)/2 \end{equation} where $J=\sum_i j_i$. Because $j_i\in{1\over 2}{\bubble Z}$, there are infinitely many picture-equivalent and therefore identical amplitudes even at tree level ($\chi{>}0$). Since not much is known yet about loop amplitudes of $N{=}2$ strings, let us collect the results on the $\chi{=}2$ and $\chi{=}1$ amplitudes. \noindent\underline{$\chi=2$}\\ The only topology is the sphere, and it does not admit open string legs. The external helicities sum to $J=2n_c{-}4$. It has been shown~\cite{hippmann} that all these amplitudes vanish, except for the three-point function ($J{=}2$)~\footnote{ Refs.~\cite{OVold,marcus} computed the $A_{2,0}^{ccc}$ component; the full result was obtained in~\cite{berk1,buckow}.} \begin{equation} A^{n_c=3,n_o=0}\ =\ A_{2,2}^{ccc}\ =\ \sqrt{\alpha'} \left( k^{++}_{12} \right)^2 \quad. \la{3cl2}\end{equation} Here, \begin{equation} k_{ij}^{++}\ =\ k_i^{+\dtt{\a}}\ k_j^{+\dtt{\b}}\ \epsilon_{\dtt{\a}\dtt{\b}} \ =\ \kappa_i^+ \kappa_j^+\ \chi_{ij} \ =\ -k_{ji}^{++} \end{equation} for lightlike target space momenta \begin{equation} k_i^{\alpha\dtt{\a}}\ =\ \kappa_i^\alpha\ \chi_i^\dtt{\a} \ \in\ {\bubble R}^{2,2} \qquad {\rm and}\qquad \chi_{ij}\ =\ \chi_i^\dtt{\a}\ \chi_j^\dtt{\b}\ \epsilon_{\dtt{\a}\dtt{\b}}\ =\ -\chi_{ji} \quad. \end{equation} Note that \begin{equation} k_{12}^{++}\ =\ k_{23}^{++}\ =\ k_{31}^{++} \end{equation} for massless three-point kinematics since $k_1{+}k_2{+}k_3=0$. Thus, \gl{3cl2} is totally symmetric in the external legs, although every helicity assignment (for example, $(-2,2,2)$) is necessarily asymmetric. The inverse string tension, $\alpha'$, must enter \gl{3cl2} on dimensional grounds. \noindent\underline{$\chi=1$: three-point}\\ This situation admits a single boundary or a cross-cap, and is therefore still interpreted as tree level. The cross cap leads to the real projective plane, which only appears for (unoriented) closed string scattering, i.e. in $A^{ccc}$. The boundary case is the familiar disk or, equivalently, upper half plane, which contributes to all three-string amplitudes. 
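Incidentally, the kinematic identity quoted in the $\chi{=}2$ discussion above admits a one-line check (ours): the antisymmetry of $\epsilon_{\dtt{\a}\dtt{\b}}$ implies $k_{ii}^{++}=0$, so substituting $k_3=-k_1-k_2$ gives
\begin{equation}
k_{23}^{++}\ =\ -k_{21}^{++}\ -\ k_{22}^{++}\ =\ k_{12}^{++}
\qquad{\rm and}\qquad
k_{31}^{++}\ =\ -k_{11}^{++}\ -\ k_{21}^{++}\ =\ k_{12}^{++} \quad.
\end{equation}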
The results are (see also~\cite{marcus} for the $c{=}0$ parts) \begin{eqnarray} A_{1,1}^{ooo}\ &=&\ f^{a_1a_2a_3}\ k_{12}^{++} \\ A_{2,2}^{ooc}\ &=&\ \delta^{a_1a_2}\ \sqrt{\alpha'} \left( k_{12}^{++} \right)^2 \\ A_{3,3}^{occ}\ &=&\ 0 \\ A_{4,4}^{ccc}\ &=&\ \gamma\ \sqrt{\alpha'}^3 \left( k_{12}^{++} \right)^4 \la{3point}\end{eqnarray} where $a_i$ is the adjoint representation Chan-Paton label of the $i$th string leg, $f^{a_1a_2a_3}$ are structure constants of the Lie algebra of~$G$, and $\gamma$ is a {\it finite\/} numerical constant depending on~$G$.\footnote{ The disc contribution to $A^{ccc}$ involves a Chan-Paton factor of~${\rm tr}{\bf1}$ coming from the boundary, which the real projective plane does not have.} It is important to note that for the $N{=}2$ string, in contrast to bosonic and ordinary ($N{=}1$) superstrings, the `higher-order tree' corrections to closed-string scattering are finite. In the limit of the boundary shrinking to a point, the integrand of $A^{ccc}$ should yield a (diverging) dilaton propagator at zero momentum multiplying $A^{cccc}(k_4{=}0)$ on the sphere. Obviously, the finiteness of $A^{ccc}$ on the disk is consistent with the vanishing of the four-point function! Hence, we do {\it not\/} seem to be forced to take $G=SO(2^{d/2})$ in order to cancel infrared divergences. Nevertheless, it would be interesting to know whether $\gamma$ can be made to vanish for some distinguished choice of Chan-Paton group. As expected, $A_{2,2}^{ccc}\sim(A_{1,1}^{ooo})^2$. It is instructive to apply an $SO(2,2)$ transformation and restore the generic $\theta$ dependence; for instance \begin{equation} A_1^{ooo}\ \sim\ \cos^2\fr{\theta}{2}\ k_{12}^{++} \ +\ 2\cos\fr{\theta}{2}\sin\fr{\theta}{2}\ i k_{12}^{+-} \ +\ \sin^2\fr{\theta}{2}\ k_{12}^{--} \quad, \end{equation} with the obvious definition for $k_{12}^{\alpha\beta}$. This shows again that the interacting $N{=}2$ string lives in a broken phase of target space $SO(2,2)$ symmetry. The Goldstone modes of the $SO(2,2)\to U(1,1)$ breaking are precisely the spacetime dilaton and axion fields~\cite{BL1}. Due to the identity \begin{equation} k_{ij}^{++}\ k_{ij}^{--}\ =\ k_{ij}^{+-}\ k_{ij}^{-+} \end{equation} for lightlike momenta, the $\chi{=}2$ three-point amplitude indeed factorises: \begin{equation}\begin{array}{rll} A_2^{ccc}\ &\sim&\ \cos^4\fr{\theta}{2}\ k_{12}^{++}k_{12}^{++} \ +\ 4\cos^3\fr{\theta}{2}\sin\fr{\theta}{2}\ i k_{12}^{+-}k_{12}^{++} \ +\ 6\cos^2\fr{\theta}{2}\sin^2\fr{\theta}{2}\ k_{12}^{--}k_{12}^{++} \\[4pt] && \!\!\!\!\! +\ \sin^4\fr{\theta}{2}\ k_{12}^{--}k_{12}^{--} \ +\ 4\cos\fr{\theta}{2}\sin^3\fr{\theta}{2}\ i k_{12}^{--}k_{12}^{+-} \\[5pt] &\sim&\ \left( A_1^{ooo} \right)^2 \quad. \end{array}\end{equation} Apparently, the question~\cite{BL2} of whether one has a single (joint left-right) instanton number, or two independent ones (left and right), is irrelevant. \noindent\underline{$\chi=1$: beyond three-point}\\ Tree-level four-point functions are the first place to see `stringy' dynamics, but they are not easy to compute for mixed (i.e. open plus closed string) cases. Calculations (see \cite{marcus} for the $c{=}0$ piece) have revealed that \begin{equation}\begin{array}{rll} A_{2,2}^{oooo}\ &=&\ 0 \\ A_{3,3}^{oooc}\ &=&\ 0 \\ A_{4,4}^{oocc}\ &=&\ 0 ? \\ A_{5,5}^{occc}\ &=&\ 0 \\ A_{6,6}^{cccc}\ &=&\ 0 ? \end{array}\la{4point}\end{equation} where the question marks denote conjectured vanishing amplitudes. 
As argued below, these follow from the assumption that the target space field theory requires no fundamental quartic vertex, an expectation from self-dual Yang-Mills plus gravity. Beyond four-point functions, we only know that the pure open-string disk amplitude, $A_{n_o-2,n_o-2}^{oo\ldots o}$, vanishes~\cite{hippmann}. \section{String target space actions} Knowledge of the $\chi{>}0$ three-point string amplitudes allows us to read off the cubic couplings of the target space action for the massless open and closed $N{=}2$ string excitations. We associate string states (resp. their vertex operators) with space-time fields or their Fourier representatives, \begin{equation}\begin{array}{rll} V^o_\pi(k) \ &\Leftrightarrow&\ \widetilde\varphi_{(j)}(k)\ =\ \widetilde\varphi^{\overbrace{--\cdots-}^{2j\rm\;times}}(k) \\ V^c_\pi(k) \ &\Leftrightarrow&\ \widetilde f_{(j)}(k)\ =\ \widetilde f^{\overbrace{--\cdots-}^{2j\rm\;times}}(k)\quad, \end{array}\end{equation} remembering that $j=1{+}\pi/2$ for open (and $j=2{+}\pi/2$ for closed) string states. Fourier transforming to coordinate space (and dropping the tildes) we find that the $\chi{>}0$ three-point functions \gl{3cl2} and \gl{3point} are reproduced by the target-space Lagrangean density \begin{eqnarray} {\cal L}_\infty\ &=& -\ \fr12\sum_{j\in{\bubble Z}/2} f_{(-j)}\square f_{(+j)}\ +\ \fr{\sqrt{\alpha'}}{6} \sum_{J=2} f_{(j_1)}\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f_{(j_2)}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f_{(j_3)} \nonumber \\[5pt] && +\ \fr{\gamma\sqrt{\alpha'}^3}{6} \sum_{J=4}\ f_{(j_1)}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f_{(j_2)}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \partial^+_\dtt{\g} \partial^+_\dtt{\d} f_{(j_3)} \nonumber\\[5pt] && +\ {\rm Tr}\biggl\{ -\fr12\sum_{j\in{\bubble Z}/2} \varphi_{(-j)}\square\varphi_{(+j)}\ +\ \fr16\sum_{J=1} \varphi_{(j_1)}\ \Bigl[\partial^{+\dtt{\a}}\varphi_{(j_2)}\ ,\ \partial^+_\dtt{\a}\varphi_{(j_3)}\Bigr]\biggr. \nonumber\\[5pt] &&\qquad\qquad\qquad\qquad\qquad\quad\;\ -\ \biggl.\fr{\sqrt{\alpha'}}{2}\sum_{J=2} \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f_{(j_1)}\ \partial^+_\dtt{\a} \varphi_{(j_2)}\ \partial^+_\dtt{\b} \varphi_{(j_3)} \biggr\} \la{Linfty}\end{eqnarray} where $\ J\equiv j_1{+}j_2{+}j_3\ $ in the sums, and $\ \square = \partial^{-\dtt{\a}} \partial^+_\dtt{\a}\,$. A field of helicity~$j$ carries a mass dimension equal to $1{-}j$, so that ${\cal L}_\infty$ has dimension four as required. The fundamental interactions among $\{\varphi_{(j)},f_{(j)}\}$ are purely cubic and of three types, which may be called {\sl three-graviton\/}, {\sl three-gluon\/}, and {\sl gluon-graviton\/} couplings, respectively. Furthermore, the couplings are independent of the external helicities as long as these sum to~$J$. The conjectured vanishing of the higher $n$-point tree-level string amplitudes (see \gl{4point}) must be reflected in the target space field theory. In other words, if $\ {\int}{\cal L}_\infty\ $ is the complete space-time action, it must imply the on-shell vanishing of all tree-level amplitudes beyond the three-point functions. A non-trivial check, for example, is that iterating the fundamental cubic vertices of \gl{Linfty} yields zero for the on-shell four-point functions. This was in fact verified for the pure gluon and the pure graviton cases in~\cite{OVold,buckow}. 
Moreover, it is straightforward to extend these results to the mixed four-point functions as well, with the help of the kinematic relations, \begin{equation}\begin{array}{rll} && {\displaystyle {k_{12}^{++}\ k_{34}^{++} \over s_{12}}\ =\ {k_{23}^{++}\ k_{14}^{++} \over s_{23}}\ =\ {k_{31}^{++}\ k_{24}^{++} \over s_{31}} } \\[10pt] && k_{12}^{++}\ k_{34}^{++}\ +\ k_{23}^{++}\ k_{14}^{++}\ +\ k_{31}^{++}\ k_{24}^{++}\ =\ 0 \quad, \end{array}\end{equation} where $s_{ij}=k_{ij}^{[+-]}\ ,\quad s_{ii}=0$ and $\sum_{i=1}^4 k_i=0$. An inductive argument shows~\cite{MS} that as a consequence all higher tree-level $n$-point functions also vanish. Thus, the absence of higher than cubic vertices in ${\cal L}_\infty$ corresponds perfectly with the tree-level string amplitudes computed so far. Of course, our Lagrangean density ${\cal L}_\infty$ contains infinitely many terms. It affords, nevertheless, a compact representation in terms of the hyperspace functional \begin{eqnarray} {\cal L} &=& \fr1{\alpha'}\ {\cal L}^{(-8)}\ +\ {\rm Tr}\ {\cal L}^{(-4)} \nonumber\\[8pt] &=& \fr{1}{\alpha'}\ \Bigl( -\ \fr12 F^{----} \square F^{----}\ +\ \fr16 F^{----}\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} F^{----}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} F^{----} \nonumber\\[4pt] &&\quad +\ {\textstyle{\gamma\alpha' \over 6}}\ (\eta^+ )^{-4}\ F^{----}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} F^{----}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \partial^+_\dtt{\g} \partial^+_\dtt{\d} F^{----} \Bigr) \nonumber\\[4pt] &&\quad +\ {\rm Tr}\ \Bigl( - \fr12 \Phi^{--} \square \Phi^{--}\ +\ \fr16 \Phi^{--}\ [\partial^{+\dtt{\a}} \Phi^{--}\ ,\ \partial^+_\dtt{\a} \Phi^{--}] \nonumber\\[4pt] && \qquad\qquad -\ \fr12 \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}F^{----}\ \partial^+_\dtt{\a} \Phi^{--}\ \partial^+_\dtt{\b} \Phi^{--} \Bigr) \quad. \la{hyperaction}\end{eqnarray} The $\fr1{\alpha'}$ coefficient of ${\cal L}^{(-8)}$ represents the dimensional difference between the gravitational and the gauge couplings. We now claim that the Lagrangean density ${\cal L}_\infty$ is precisely the zero-charge homogeneous projection, $ {\cal L}|_0$, of this inhomogeneous combination of the modified Plebanski functional ${\cal L}^{(-8)}$ and the modified Leznov functional ${\cal L}^{(-4)}$. Specifically, since the field $F^{----}$ is independent of $\eta^-$, we may think of it as a Laurent expansion in $\eta^+$, \begin{equation} \fr1{\sqrt{\alpha'}}F^{----}\ =\ \dots + \eta^+ f^{-----} + f^{----} + (\eta^+)^{-1} f^{---} + (\eta^+)^{-2} f^{--} + \dots + (\eta^+)^{-8} f^{++++} + \dots \la{atlasP}\end{equation} On the other hand, for $\Phi^{--}$, the essential data is that at $\eta^+=0$. We therefore consider an $\eta^-$ expansion of $\Phi^{--}$ evaluated at $\eta^+=0$, \begin{equation} \Phi^{--}\ =\ \dots + (\eta^-)^{-1} \varphi^{---} + \varphi^{--} + \eta^- \varphi^{-} + (\eta^-)^2 \varphi + (\eta^-)^3 \varphi^{+} + (\eta^-)^4 \varphi^{++} + (\eta^-)^5 \varphi^{+++} + \dots \la{atlasL}\end{equation} The projection $ |_0$ then, is to the respective homogeneous (i.e. zero-charge) terms in \gl{hyperaction}: coefficients of $(\eta^+ )^{-8}$ for ${\cal L}^{(-8)}$ and coefficients of $(\eta^- )^p/(\eta^+ )^q\ $, for all $p,q\ge 0$ such that $p+q=4$, for the remaining terms. This charge-homogenising projection yields the homogeneous component Lagrangean \gl{Linfty} from the inhomogeneous hyperspace functional ${\cal L}$. 
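To see the projection at work (a check which can be done by hand), insert the expansion \gl{atlasP} into the kinetic term of ${\cal L}^{(-8)}$, retain the nine fields of the truncation discussed in section 5 below, and collect the coefficient of $(\eta^+)^{-8}$, using $f\square g \simeq g\square f$ up to total derivatives:
\begin{equation}
-\ \fr1{2\alpha'}\ F^{----} \square F^{----}\ \Big|_0\ =\
-\ f^{++++} \square f^{----}\ -\ f^{+++} \square f^{---}\ -\ f^{++} \square f^{--}\ -\ f^{+} \square f^{-}\ -\ \fr12\ f \square f \quad,
\end{equation}
which is precisely the kinetic part of the truncated Lagrangean \gl{max} below.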
The hyperspace functional ${\cal L}$ is not $U(1)$-charge homogeneous, and the question of whether a covariant hyperspace action exists remains open. Nevertheless, having the above projection in mind, we may indeed write down homogeneous equations of motion. Varying $\Phi^{--}$ yields the generalised Leznov equation \gl{L} unmodified, whereas varying $F^{----}$ yields a modification of the hyperspace Plebanski equation \gl{P}, \begin{eqnarray} \partial^{-\dtt{\a}} \partial^+_\dtt{\a} F^{----} &=& \fr12\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}F^{----}\ \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}F^{----} \nonumber\\[4pt] && +\ \fr{\gamma\alpha'}{2}\ (\eta^+ )^{-4}\Bigl( \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}}F^{----}\ \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}}F^{----}\Bigr) \nonumber\\[4pt] && -\ \fr{\alpha'}{2}\ (\eta^-)^4\ {\rm Tr}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\Phi^{--}\ \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\Phi^{--}\quad. \la{mP}\end{eqnarray} Inserting the expansions \gl{atlasP} and \gl{atlasL} yields the infinite set of Euler-Lagrange equations for ${\cal L}_{\infty}$ on comparing coefficients of equal charge. Both $\gamma FFF$ and $F\Phi\F$ terms in ${\cal L}$ are of higher order in~$\alpha'$ and have the homogeneity of the generalised Leznov functional ${\cal L}^{(-4)}$, rather than the generalised Plebanski functional ${\cal L}^{(-8)}$ entering the hyperspace Lagrangean~\gl{hyperaction}. Although the contribution of the $F\Phi\F$ term to the generalised Leznov equation~\gl{L} is basically the `curving' of the flat hyperspace Leznov equation, both terms yield novel contributions to the generalised Plebanski equation~\gl{mP}. These contributions are proportional to two topological densities: the square of the self-dual Weyl curvature $\ C^{\dtt{\a}\dtt{\b}\dtt{\g}\dtt{\d}}C_{\dtt{\a}\dtt{\b}\dtt{\g}\dtt{\d}}\ $ and the trace of the self-dual field-strength squared $\ {\rm Tr}\ F^{\dtt{\a}\dtt{\b}}F_{\dtt{\a}\dtt{\b}}\ $. The appearance of the latter term was actually foreseen by early considerations of Marcus \cite{marcus}. The `stringy' $\alpha'$-dependent terms do not appear to afford a fully hyperspace-covariant formulation, although these terms are of manifestly geometric character, being proportional to the second Chern class of the hyperspace structure bundle and that of the Yang-Mills bundle respectively. These terms have the structure of torsion contributions, with the $\partial^{+\dtt{\nu}}$-derivative of \gl{mP} taking the form \begin{equation} {\cal D}^{-\dtt{\a}} E^{--\dtt{\nu}}_\dtt{\a}\ =\ \alpha'\ T^{---\dtt{\nu}} \quad .
\end{equation} The full coupled system \gl{L} and \gl{mP} shares with each of the uncoupled equations a conserved-current form, \begin{equation} \partial^{+\dtt{\a}} {\cal J}^{(-3)}_\dtt{\a}\ =\ 0 \qquad,\qquad \partial^{+\dtt{\a}} J^{(-5)}_\dtt{\a}\ =\ 0 \quad, \end{equation} with the currents being expressible in terms of higher prepotentials ${\cal Y}^{(-4)}$ and $Y^{(-6)}$ thus: \begin{eqnarray} {\cal J}^{(-3)}_\dtt{\a} &=\ \partial^+_\dtt{\a} {\cal Y}^{(-4)} &=\ \partial^-_\dtt{\a} \Phi^{--}\ - \fr12\, [ \partial^+_\dtt{\a} \Phi^{--}\ ,\ \Phi^{--} ] - \partial^+_\dtt{\a} \partial^{+\dtt{\b}} F^{----}\ \partial^+_\dtt{\b} \Phi^{--}\ \nonumber\\[6pt] J^{(-5)}_\dtt{\a} &=\ \partial^+_\dtt{\a} Y^{(-6)} &=\ \partial^-_\dtt{\a} F^{----}\ - \fr12\, \partial^+_\dtt{\a} \partial^{+\dtt{\b}} F^{----}\ \partial^+_\dtt{\b} F^{----} \nonumber\\[4pt] &&\quad +\ \fr{\alpha'}{2}\ (\eta^-)^4\ {\rm Tr}\ \partial^+_{\dtt{\a}}\partial^{+\dtt{\b}}\Phi^{--}\ \partial^+_{\dtt{\b}}\Phi^{--} \nonumber\\[4pt] &&\quad -\ \fr{\gamma\alpha'}{2}\ (\eta^+)^{-4}\Bigl( \partial^+_{\dtt{\a}} \partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}}F^{----}\ \partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}}F^{----}\Bigr) \quad. \end{eqnarray} The action of $\partial^{-\dtt{\a}}$ on these equations yields wave equations for the higher prepotentials ${\cal Y}^{(-4)}$ and $Y^{(-6)}$. These have conserved-current form as well, yielding, in turn, higher prepotentials in the fashion of the uncoupled systems \cite{leznov,bp}. The towers of higher prepotentials also encode the dynamics of the higher spin fields. However, here we shall not pursue the relationship between the description they provide and that offered by the coefficients in the $\eta$-expansions of the hyperspace fields $F^{----}$ and $\Phi^{--}$ of present interest. \section{Truncated actions} Just as for the `flat' pure-Yang-Mills picture album discussed in \cite{dl1}, there exist various consistent truncations of ${\cal L}_\infty$ to systems of finite numbers of fields. The `maximal' consistent truncation has the $9$ $f$-type fields with $|j|\le 2$ coupled to the $5$ $\varphi$-type fields with $|j|\le 1$. This collection of fields is obtained by Taylor-expanding $F^{----}$ in powers of $1/\eta^+$ and $\Phi^{--}$ in powers of $\eta^-$, and truncating at orders $(\eta^+)^{-8}$ and $(\eta^-)^4$ respectively. Inserting in \gl{hyperaction} yields a homogeneous Lagrangean for a multiplet of 9 Plebanski fields coupled to 5 Lie-algebra-valued Leznov fields, with helicities ranging in half-integer steps from ${+}2$ to ${-}2$ and ${+}1$ to ${-}1$ respectively. 
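Explicitly, in the notation of \gl{atlasP} and \gl{atlasL}, the truncated expansions read
\begin{eqnarray}
\fr1{\sqrt{\alpha'}}\ F^{----} &=& f^{----}\ +\ (\eta^+)^{-1} f^{---}\ +\ (\eta^+)^{-2} f^{--}\ +\ (\eta^+)^{-3} f^{-}\ +\ (\eta^+)^{-4} f \nonumber\\[4pt]
&& +\ (\eta^+)^{-5} f^{+}\ +\ (\eta^+)^{-6} f^{++}\ +\ (\eta^+)^{-7} f^{+++}\ +\ (\eta^+)^{-8} f^{++++} \nonumber\\[6pt]
\Phi^{--} &=& \varphi^{--}\ +\ \eta^-\ \varphi^{-}\ +\ (\eta^-)^2\ \varphi\ +\ (\eta^-)^3\ \varphi^{+}\ +\ (\eta^-)^4\ \varphi^{++} \quad.
\end{eqnarray}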
Setting $\alpha'{=}1$ for simplicity, we obtain \begin{eqnarray} {\cal L}_{9+5} &=& -\ f^{++++} \square f^{----}\ -\ f^{+++} \square f^{---}\ -\ f^{++} \square f^{--}\ -\ f^{+} \square f^{-}\ -\ \fr12 f \square f\ \nonumber\\[6pt] && +\ \fr12\ f^{++++} \partial^{+\dtt{\a}}\partial^{+\dtt{\b}} f^{----} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----}\ +\ f^{+++} \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{---} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----} \nonumber\\[6pt] && +\ f^{++} \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----}\ +\ \fr12\ f^{++} \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{---} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---} \nonumber\\[6pt] && +\ f^{+} \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{-} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----}\ +\ f^{+} \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---} \nonumber\\[6pt] && +\ \fr12\ f\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----}\ +\ \fr{\gamma}{2}\ f \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{----} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{----} \nonumber\\[6pt] && +\ \fr12\ f\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{--}\ +\ {\gamma}\ f^- \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{---} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{----} \nonumber\\[6pt] && +\ f\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{-}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---}\ +\ \fr{\gamma}{2}\ f^{--} \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{--} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{----} \nonumber\\[6pt] && +\ \fr12\ f^{-}\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{-}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{--}\ +\ \fr{\gamma}{2}\ f^{--} \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{---} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{---} \nonumber\\[6pt] && +\ {\rm Tr}\ \biggl\{-\ \varphi^{++} \square \varphi^{--}\ -\ \varphi^{+} \square \varphi^{-}\ -\fr12\ \varphi \square \varphi\ \nonumber\\[6pt] &&\qquad\quad + \left( \varphi^{++}\epsilon^{\dtt{\b}\dtt{\a}}\ -\ \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f \right) \partial^+_\dtt{\a} \varphi^{--} \partial^+_\dtt{\b} \varphi^{--}\ \nonumber\\[6pt] &&\qquad\quad + \left( 2\varphi^{+}\epsilon^{\dtt{\b}\dtt{\a}}\ -\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{-} \right) \partial^+_\dtt{\a} \varphi^{-} \partial^+_\dtt{\b} \varphi^{--}\ \nonumber\\[6pt] &&\qquad\quad + \left( \varphi\ \epsilon^{\dtt{\b}\dtt{\a}}\ -\ \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \right) \partial^+_\dtt{\a} \varphi^{-} \partial^+_\dtt{\b} \varphi^{-}\ \nonumber\\[6pt] &&\qquad\quad + \left( \varphi\ \epsilon^{\dtt{\b}\dtt{\a}}\ -\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \right) \partial^+_\dtt{\a} \varphi\ \partial^+_\dtt{\b} \varphi^{--}\ \nonumber\\[6pt] &&\qquad\quad -\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{---} \left( \partial^+_\dtt{\a} \varphi^{+} \partial^+_\dtt{\b} \varphi^{--}\ +\ \partial^+_\dtt{\a} \varphi\ \partial^+_\dtt{\b} \varphi^{-} \right) \nonumber\\[6pt] &&\qquad\quad -\ \partial^{+\dtt{\a}} 
\partial^{+\dtt{\b}} f^{----} \left( \partial^+_\dtt{\a} \varphi^{++} \partial^+_\dtt{\b} \varphi^{--}\ +\ \partial^+_\dtt{\a} \varphi^{+} \partial^+_\dtt{\b} \varphi^{-}\ +\ \fr12\ \partial^+_\dtt{\a} \varphi\ \partial^+_\dtt{\b} \varphi \right) \biggr\}\ . \la{max}\end{eqnarray} This truncated Lagrangean is remarkable in that it describes a {\it one-loop exact} theory; it is not hard to see that its Feynman rules do not support higher-loop diagrams. Further, this is the largest consistent subtheory of ${\cal L}_\infty$ with this property and with finitely many fields. Any attempt to include further fields necessarily requires the inclusion of the infinite set in order to obtain a consistent Lagrangean, and ${\cal L}_\infty$ does not forbid multi-loop diagrams. The `flat limit', with the nine Plebanski-type fields $f^{\dots}$ set to zero, yields the five-field one-loop exact theory presented previously~\cite{dl1}. The equations of motion for the Plebanski tower are \begin{eqnarray} &\square f^{----} &=\ \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{----} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----} \nonumber\\[12pt] &\square f^{---}\ &=\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}} f^{---} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----} \nonumber\\[12pt] &\square f^{--}\quad &=\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}f^{--}\partial^+_\dtt{\a}\partial^+_\dtt{\b} f^{----}\ +\ \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{---} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---} \nonumber\\[12pt] &\square f^{-}\quad\ &=\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}f^{-}\partial^+_\dtt{\a}\partial^+_\dtt{\b} f^{----}\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---} \nonumber\\[12pt] &\square f\quad\quad\ &=\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}} f\ \partial^+_\dtt{\a}\partial^+_\dtt{\b} f^{----}\ +\ \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{--} \nonumber \\[4pt]&&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{-} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---} +\ \fr{\gamma}{2}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{----} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{----} \nonumber \\[4pt]&&\quad -\ {\rm Tr}\ \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} \varphi^{--}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--} \nonumber\\[12pt] &\square f^{+} \quad\ &=\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{+}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----}\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{-} \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{--} \nonumber \\[4pt]&&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---} +\ {\gamma}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{---} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{----} \nonumber \\[4pt]&&\quad -\ {\rm Tr}\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} \varphi^{-}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--} \nonumber \\[12pt] &\square f^{++}\quad &=\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{++}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----}\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{+}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---} \nonumber \\[4pt]&&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{--} +\ {\gamma}\ 
\partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{--} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{----} \nonumber \\[4pt]&&\quad +\ \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{-}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{-} +\ \fr{\gamma}{2}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{---} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{---}\ \nonumber \\[4pt]&&\quad -\ {\rm Tr}\ \left( \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} \varphi^{-}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{-} +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} \varphi\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--} \right) \nonumber \\[12pt] &\square f^{+++}\ &=\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{+++}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----}\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{++}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---} \nonumber \\[4pt]&&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{+}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{--} +\ {\gamma}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{-} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{----} \nonumber \\[4pt]&&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{-} +\ {\gamma}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{--} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{---}\ \nonumber \\[4pt]&&\quad -\ {\rm Tr}\ \left( \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} \varphi^{+}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--} +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} \varphi\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{-} \right) \nonumber \\[12pt] &\square f^{++++} &=\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{++++}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{----}\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{+++}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{---} \nonumber \\[4pt]&&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{++}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{--} +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{+}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f^{-} \nonumber \\[4pt]&&\quad +\ \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} f\ +\ \fr{\gamma}{2}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{--} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{--} \nonumber \\[4pt]&&\quad +\ {\gamma}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f^{-} \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{---} \nonumber \\[4pt]&&\quad +\ {\gamma}\ \partial^{+\dtt{\a}}\partial^{+\dtt{\b}}\partial^{+\dtt{\g}}\partial^{+\dtt{\d}} f \partial^+_{\dtt{\a}}\partial^+_{\dtt{\b}}\partial^+_{\dtt{\g}}\partial^+_{\dtt{\d}} f^{----} -\ {\rm Tr} \left(\partial^{+\dtt{\a}} \partial^{+\dtt{\b}} \varphi^{++}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--} \right.\nonumber \\[4pt] && \left.\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} \varphi^{+}\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{-} +\ \fr12\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} \varphi\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi\ \right) \quad. 
\la{9}\end{eqnarray} The five fields of the Leznov tower satisfy curved space versions of the five flat space equations given in \cite{dl1}, \begin{eqnarray} &\square \varphi^{--} &=\ \fr12 [ \partial^{+\dtt{\a}} \varphi^{--} , \partial^+_\dtt{\a} \varphi^{--} ]\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{----} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--} \nonumber\\[12pt] &\square \varphi^{-}\ &=\ [ \partial^{+\dtt{\a}} \varphi^{-} , \partial^+_\dtt{\a} \varphi^{--} ] +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{---} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--}\ \nonumber\\[4pt] && \quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{----} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{-} \nonumber\\[12pt] &\square \varphi\quad &=\ [ \partial^{+\dtt{\a}} \varphi , \partial^+_\dtt{\a} \varphi^{--} ]\ +\ \fr12 [ \partial^{+\dtt{\a}} \varphi^{-} , \partial^+_\dtt{\a} \varphi^{-} ] \nonumber\\[4pt] &&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--}\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{---} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{-} \nonumber\\[4pt] &&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{----} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi \nonumber\\[12pt] &\square \varphi^{+}\ &=\ [ \partial^{+\dtt{\a}} \varphi^{+} , \partial^+_\dtt{\a} \varphi^{--} ]\ +\ [ \partial^{+\dtt{\a}} \varphi , \partial^+_\dtt{\a} \varphi^{-} ] \nonumber\\[4pt]&&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{-} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--}\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{-} \nonumber\\[4pt] &&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{---} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{----} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{+} \nonumber\\[12pt] &\square \varphi^{++} &=\ [ \partial^{+\dtt{\a}} \varphi^{++} , \partial^+_\dtt{\a} \varphi^{--} ]\ +\ [ \partial^{+\dtt{\a}} \varphi^{+} , \partial^+_\dtt{\a} \varphi^{-} ]\ +\ \fr12 [ \partial^{+\dtt{\a}} \varphi , \partial^+_\dtt{\a} \varphi ] \nonumber\\[4pt]&&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f\ \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--}\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{-} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{-} \nonumber\\[4pt] &&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{--} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{---} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{+} \nonumber\\[4pt] &&\quad +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{----} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{++} \quad. \la{5}\end{eqnarray} Picture-raising induces a derivation $Q^+$ on the set of target space fields, \begin{equation}\begin{array}{rll} &&Q^+:\ f^{----} \mapsto f^{---} \mapsto 2\ f^{--} \mapsto 3!\ f^{-} \mapsto\ 4!\ f \mapsto 5!\ f^+ \mapsto\dots \mapsto 8!\ f^{++++}\\[8pt] &&Q^+:\ \varphi^{--}\ \mapsto\ \varphi^-\ \mapsto\ 2\ \varphi\ \mapsto\ 3!\ \varphi^+\ \mapsto\ 4!\ \varphi^{++} \quad. \end{array}\end{equation} The five-equation system \gl{5} follows from the $\varphi^{--}$ equation on successive application of $Q^+$. This property, displayed for the corresponding flat space equations in \cite{dl1}, therefore survives the coupling to the five $f$-type fields occurring in \gl{5}. 
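As an illustration, applying $Q^+$ as a derivation to the first equation of \gl{5}, with $Q^+\varphi^{--}=\varphi^{-}$ and $Q^+ f^{----}=f^{---}$, reproduces the second equation,
\begin{equation}
\square \varphi^{-}\ =\ [ \partial^{+\dtt{\a}} \varphi^{-} , \partial^+_\dtt{\a} \varphi^{--} ]\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{---} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{--}\ +\ \partial^{+\dtt{\a}} \partial^{+\dtt{\b}} f^{----} \partial^+_\dtt{\a} \partial^+_\dtt{\b} \varphi^{-} \quad,
\end{equation}
where the symmetry $[\partial^{+\dtt{\a}} X , \partial^+_\dtt{\a} Y] = [\partial^{+\dtt{\a}} Y , \partial^+_\dtt{\a} X]$, which follows from the antisymmetry of $\epsilon_{\dtt{\a}\dtt{\b}}$, has been used to combine the two commutator terms.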
For the nine-equation tower \gl{9}, successive application of $Q^+$ on the `top' ($f^{----}$) equation yields all the `non-stringy' terms, namely those {\it not} depending on the suppressed $\alpha'$. On the other hand, these `stringy' terms follow on successive application of $Q^+$ to the two topological densities inserted in the neutral $f$ equation. However, the relative normalisations of these two sets of terms are not suited to the consideration of the $f$ equation as the `top' equation for the positively charged equations. The fourth-order ($\gamma$-dependent) `stringy' terms of the Lagrangean $\,{\cal L}_{9+5}\,$ are seen to affect only the equations for the gravitational `multiplier' fields~$f_{j\le0}\,$ and do not enter the Plebanski equations for the positive-helicity (negatively-charged) fields. In the high string tension limit, $\alpha'\to 0$, these terms in any case disappear and we recover equations which arise on expanding \gl{P}. The above $9{+}5$ field system, apart from the $\alpha'$-dependent deformation, is indeed somewhat similar to self-dual \N8 {\it super\/}gravity \cite{siegel_sdsg} plus $N{=}4$ self-dual {\it super\/} Yang-Mills \cite{siegel_sdym,CS}, with adjustments made for the difference in statistics of the spinorial fields. We note however, that whereas the five fields of the $\Phi^{--}$ multiplet are in one-to-one correspondence with the components of the \N4 SDYM multiplet \cite{dl1}, the \N8 supermultiplet of \cite{siegel_sdsg} has eleven component fields. Starting from the $9{+}5$ field truncation, smaller consistent Lagrangean theories may be constructed by ignoring any selection of pairs of fields from $\{ (f_j , f_{-j}),(\varphi_j , \varphi_{-j})\}$. Any such truncation of the `maximal model' may easily be seen to be one-loop finite. We note that the `minimal model', containing only the standard Plebanski and Leznov fields, $f^{----}$ and $\varphi^{--}$, together with their respective multipliers, $f^{++++}$ and $\varphi^{++}$, does not even contain a `$\gamma$-term'. \section{Conclusions} We have seen that the classical curved-space self-duality equations in $(2,2)$ hyperspace describe the interaction of open and closed $N{=}2$ strings, at least on topologies with $\chi{>}0\,$, up to stringy torsion-like modifications which vanish in the high tension limit. Since massive $N{=}2$ string excitations do not exist, these $\alpha'$-corrections are actually unexpected. They owe their appearance to the picture degeneracy of the massless level, which also forms the basis of the hyperspace extension of self-duality. The second order $\alpha'$-terms, moreover, are seen to be indispensable for the formulation of a unified action principle for the coupled self-dual Einstein-Yang-Mills system. In the hyperspace formulation of our coupled system \gl{hyperaction}, the stringy modifications depend explicitly on the spinorial hyperspace coordinates ($\eta^\pm$) and do not seem to afford a fully hyperspace-covariant reformulation. This difficulty is actually related to the `wrong' statistics of the spinorial coordinate. In {\it super\/}space, the difference in dimension of the two integration measures, $d^4\theta$ for the Yang-Mills\ terms and $d^8\theta$ for the gravitational ones, makes it possible to construct a covariant combined action. The existence of the higher conserved currents and potentials (section 4) seems to indicate that the $\alpha'$-deformation introduced here does not affect the integrability of the coupled model. 
The full system described by \gl{Linfty} therefore deserves further study in this light. Relaxing the string-enforced requirement of a fixed complex structure, we may recover full Lorentz invariance in harmonic space, with the $\{u^\pm_\alpha\}$ of section 2 treated as genuine coordinates. It remains to be seen whether such a reformulation provides an $\alpha'$-deformation of the Penrose twistor transform. \noindent {\bf Acknowledgment} \noindent We thank J\"urgen Schulze for discussions concerning the relation of string theory to field theory amplitudes. \baselineskip=14pt
{ "redpajama_set_name": "RedPajamaArXiv" }
4,043
Q: App shouldn't get uninstalled normally I am using an app where I am storing the data in SQLite, so my app shouldn't get uninstalled in the normal way; only an administrator should be able to uninstall it. How can I prevent my app from being uninstalled? Is there a way that only an administrator can uninstall the app? If yes, please help me. I am new to Android.

A: That isn't possible: there is no such thing as an administrator on Android, and there is also no way for developers to change the way their app is uninstalled. The main reason is that it would be too much of a security problem if an app could simply block itself from being uninstalled.
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,593
At Peninsula Sports Medicine Group, we have partnered with Rohan Gazzard, Clinical Hypnotherapist. Rohan has practiced personal hypnotherapy in Australia for many years with outstanding success. Hypnosis is a state of altered awareness where the subconscious mind becomes accessible for the implantation of suggestions. It is recognized as the smartest, easiest and most effective method to alter the mindset of the subconscious mind. Hypnosis only functions when the person is awake, despite the contrary belief that they are in a deep sleep. In the hypnotic state, the doorway between the conscious and the subconscious mind is opened. Memories become accessible and new information can be stored. In the 'hypnotic state', you are not really 'thinking' in the traditional sense. You are experiencing without critical judgment or analysis, such as when you read an interesting book, watch a movie or drive a car. When you are focused, Rohan can make suggestions that 'stick', precisely because your conscious mind is not getting in the way. You are not 'judging' or being 'critical' of the suggestions. A suggestion given to the subconscious mind and accepted as true will be converted to an autosuggestion and stored in the subconscious. That which is held as a 'belief' in the subconscious mind will be outwardly manifested in the conscious world and circumstances of life. 1. Recognition of ownership of 'the problem'. Clients are encouraged to realize that they are in control of their responses and they can choose to continue in a negative pattern or re-arrange their outlook and mindset to enhance their frame of mind and quality of life. This can be performed in a number of ways dependent on each client's personality and history. Allowing a patient to fully understand the reason for the development of the belief that is governing their behavior is a powerful way of facilitating change of that behavior. The belief does not need to be logical or relevant; it only needs to be 'believed' by the person. Once there is an understanding, decisions can be made about what changes, if any, are to be made to that belief system to facilitate change of behavior. The subconscious mind not only holds all the information required to understand the basis of the problem, but often also knows how best to solve the problem for that person.
{ "redpajama_set_name": "RedPajamaC4" }
9,021
\section{Introduction} The Expectation--Maximization (EM) algorithm is a maximum likelihood estimation algorithm for data with missing observations, proposed in~\cite{DEMP1977}. The EM algorithm consists of an E-step that fills in the missing parts of the observed data to generate pseudo-complete data and an M-step that maximizes the likelihood function for the complete data. The E-step can be described using the sufficient statistics of the assumed statistical model, while the M-step solves the likelihood equation in the framework of complete data. The EM algorithm is well established as a general-purpose numerical method for maximum likelihood estimation with missing observations. The regularity conditions for convergence and the convergence conditions for the sequence of log-likelihood values and parameter estimates generated by the EM algorithm were investigated in~\cite{10.1214/aos/1176346060}, and a method for estimating the convergence rate of the algorithm was developed in~\cite{10.2307/2337198}. In~\cite{1574231875723835008}, Csiszar and Tusnady studied sufficient conditions for the convergence of alternating-minimization algorithms that find the shortest distance between two sets, a class that includes the EM algorithm, and gave examples of calculating the channel capacity, the rate distortion function, and portfolio optimization. Statistical properties and other variants of the EM algorithm are summarized in, for example,~\cite{BA85746989}. Even in recent years, various novel theoretical results on the EM algorithm have been discovered. For example, in \cite{Balakrishnan2017}, a theoretical foundation for quantifying the convergence of the EM algorithm to within the statistical precision of a global optimum was developed, while in~\cite{DBLP:conf/aistats/KwonHC21}, a strong theoretical guarantee for the EM algorithm applied to a mixture of linear models was established. The EM algorithm is also widely used in applications such as machine learning, information theory, imaging~\cite{doi:10.1177/096228029700600105}, epidemiology~\cite{McLachlan1997TheIO,doi:10.1177/096228029700600103}, psychology~\cite{Enders2003UsingTE}, privacy~\cite{DBLP:journals/tifs/MurakamiKH17,DBLP:journals/popets/MurakamiHS18}, neuroscience~\cite{DBLP:journals/nn/IwasakiHTAM18}, and economics~\cite{RUUD1991305}, and is being extended in each of these fields. For example, for estimating the parameters of a hidden Markov model, the Baum--Welch algorithm~\cite{10.1214/aoms/1177699147}, which is nothing but an instance of the EM algorithm, is widely used. Information geometry is a framework for analyzing the statistical manifold equipped with the Fisher metric and a pair of affine connections with the methodology of differential geometry~\cite{amari2000methods}. Information geometry makes it possible to understand the mechanisms and behavior of statistical estimation and machine learning in relation to the structure of the space of probability distributions. The geometric view has yielded a variety of results. For example, it has been used to clarify the relationship between predictive distributions and curvature in Bayesian statistics~\cite{10.2307/2337602}. In semiparametric inference, it is used to decompose the parameter of interest and the nuisance parameter by an orthogonal foliation~\cite{bj/1178291931}. It also offers an orthogonal decomposition of hierarchical statistical models that describe nested stochastic dependences among random variables, such as higher-order Markov chains~\cite{930911}.
Ensemble methods in machine learning, such as Bagging~\cite{Breiman1996} and Boosting~\cite{FREUND1997119}, have also been investigated from the viewpoint of information geometry. The Bagging predictor was analyzed in~\cite{Baggins2004}, where it was shown that bootstrap predictive distributions are equivalent to Bayesian predictive distributions up to the second-order expansion. The geometric structure of the Boosting algorithm was elucidated in~\cite{NIPS2001_71e09b16} by identifying the classification problem with the problem of estimating a conditional probability. In~\cite{DBLP:journals/neco/MurataTKE04}, the inference procedure in Boosting algorithms was extended by considering the class of $U$-divergences, which generalizes the standard Kullback--Leibler divergence, and the robustness of the information-geometrically extended Boosting algorithms was investigated in~\cite{DBLP:journals/neco/TakenouchiEMK08}. The EM algorithm was characterized from a geometric perspective in~\cite{AMARI19951379}. Because of this pioneering work, the usefulness of considering iterative algorithms from a geometric point of view is now widely known, and inference algorithms of various kinds have been analyzed in the information-geometric framework. In this paper, we provide an overview of EM-like algorithms with iterative structures from a geometric point of view; since the EM algorithm and its applications are very broad, we aim to provide a concise survey focused on the geometric perspective. The rest of the paper is organized as follows. In Sections~2 and 3, the EM algorithm and the elements of information geometry are presented. Section~4 introduces the $em$ algorithm, the information geometric version of the EM algorithm. In Sections~5 to 8, various iterative algorithms are considered from the viewpoint of information geometry. In Section~5, a geometrical analysis of an algorithm for calculating the capacity of a memoryless communication channel is presented. Section~6 deals with parameter estimation problems for statistical models with special structures. Section~7 considers the situation in which a distribution is regarded as a datum and introduces a principled framework for dealing with distributional data. Section~8 shows an attempt to formulate generative model learning in a geometric manner. Section~9 is devoted to conclusions. \section{EM algorithm} The EM algorithm is a method of performing maximum likelihood estimation by simple iterative computation for problems in which a part of the random variables is unobservable for some reason. The EM algorithm can be applied to the parameter estimation of mixture models by treating the unknown information concerning which component distribution each datum was observed from as a missing variable. In this section, we introduce the notation and describe the problem setup through a description of the EM algorithm. Let $X$ be a random variable and $x$ be its realization, and let $Z$ be the hidden variable. In other words, we consider the situation in which a part of a random vector is observed while the rest cannot be observed.
The problem is to determine the parameter $\theta$ of the statistical model $p(x,z;\theta)$ from the observations of $X$ alone, where the marginal distribution of $X$ is given by \begin{align} p(x ; \theta) = \int p(x,z; \theta) \mathrm{d} z. \end{align} Taking the logarithm of both sides gives \begin{align} \log p(x;\theta) =& \log \int p(x,z;\theta) \mathrm{d} z \shortintertext{but the logarithm of the summation (integration) is intractable in general; hence we take the variational lower bound as} =& \log \int q(z)\frac{p(x,z;\theta)}{q(z)} \mathrm{d} z \\ \geq & \int q(z) \log \frac{p(x,z;\theta)}{q(z)} \mathrm{d} z \\ =:& \mathcal{L} (q,\theta), \end{align} where the inequality comes from Jensen's inequality. Note that \begin{align} \log p(x;\theta) - \int q(z) \log \frac{p(x,z;\theta)}{q(z)} \mathrm{d} z =& \int q(z) \log p(x;\theta) \mathrm{d} z - \int q(z) \log \frac{ p(z|x;\theta) p(x;\theta) }{ q(z)} \mathrm{d} z \\ =& \int q(z) \log \frac{q(z)}{p(z|x;\theta)} \mathrm{d} z = D(q(z) , p(z|x;\theta)), \end{align} where \begin{align} D(f , g) = \int \left( f(x) \log \frac{f(x)}{g(x)} \right) \mathrm{d} x \end{align} is the Kullback--Leibler (KL) divergence. Suppose a set of observations $\{x_i\}_{i=1}^{n}$ is given. Then, starting from an initial parameter $\theta_0$ and $t=0$, the EM algorithm is the following iterative procedure composed of the E- and M-steps. \begin{description} \item[E-step: ] Maximize the variational lower bound $\mathcal{L}(q,\theta_t) = \int q(z) \log \frac{p(x,z;\theta_t)}{q(z)} \mathrm{d} z$ w.r.t. $q$. Namely, \begin{align} \mathop{\rm minimize}\limits_{q(z)} \; D(q(z) , p(z|x;\theta_t)), \end{align} which is achieved by setting $q(z)$ to be the estimated posterior $q(z) = p(z|x;\theta_{t})$, and compute the Q-function \begin{align} Q(\theta,\theta_{t}) := \frac{1}{n} \sum_{i=1}^{n} \int p(z|x_i;\theta_{t}) \log p(x_i,z;\theta) \mathrm{d} z + const. \end{align} \item[M-step: ]Maximize $\mathcal{L}(q,\theta)$ with respect to $\theta$ and update $\theta_t$ by \begin{align} \theta_{t+1} = \mathop{\rm arg~max}\limits_{\theta} \; Q(\theta,\theta_{t}). \end{align} \end{description} The EM algorithm is also used for MAP estimation \begin{align} \mbox{maximize} \; \log p(\theta|x) \end{align} when a prior distribution $p(\theta)$ is given. The posterior distribution satisfies \begin{align} \log p(\theta | x) =& \log \frac{p(x|\theta) p(\theta)}{p(x)} = \log \left( \frac{p(\theta)}{p(x)} \int p(x,z|\theta) \mathrm{d} z \right) \\ =& \log \left( \frac{p(\theta)}{p(x)} \int q(z) \frac{p(x,z|\theta)}{q(z)} \mathrm{d} z \right) \\ =& \log p(\theta) - \log p(x) + \log \int q(z) \frac{p(x,z|\theta)}{q(z)} \mathrm{d} z \\ \geq & \log p(\theta) - \log p(x) + \int q(z) \log \frac{p(x,z|\theta)}{q(z)} \mathrm{d} z \\ =& \mathcal{L}(q,\theta) + \log p(\theta) - \log p(x) =: \mathcal{L}^{\prime}(q,\theta), \end{align} and we have \begin{equation} \log p(\theta|x) = \mathcal{L}^{\prime}(q,\theta) + D(q(z) , p(z|x;\theta)). \end{equation} The E-step for MAP estimation is the same as the standard E-step, while the M-step for MAP estimation maximizes \begin{equation} Q(\theta, \theta_{t}) + \log p(\theta).
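\end{equation}

To make the two steps concrete, the following is a minimal numerical sketch (ours, not part of the original derivation) of the EM iteration for a two-component univariate Gaussian mixture, in which the hidden variable $z \in \{0,1\}$ labels the mixture component; all names and the initialization are illustrative.

\begin{verbatim}
import numpy as np

def em_gmm2(x, n_iter=100):
    # EM for a two-component 1-D Gaussian mixture p(x,z;theta),
    # with theta = (pi, mu, var).
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibilities q(z) = p(z | x_i; theta_t)
        dens = np.stack([pi[k] / np.sqrt(2 * np.pi * var[k])
                         * np.exp(-(x - mu[k])**2 / (2 * var[k]))
                         for k in (0, 1)])      # shape (2, n)
        r = dens / dens.sum(axis=0)
        # M-step: closed-form maximizer of the Q-function
        nk = r.sum(axis=1)
        pi = nk / len(x)
        mu = (r * x).sum(axis=1) / nk
        var = (r * (x - mu[:, None])**2).sum(axis=1) / nk
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
print(em_gmm2(x))
\end{verbatim}

Each sweep performs the E-step (computing the posterior responsibilities) followed by the M-step (solving the complete-data likelihood equation in closed form), and the marginal log-likelihood never decreases along the iterations.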
\section{Information geometry} Let us consider the space of positive finite measures over $x \in \mathcal{X}$, where $\mathcal{X}$ is a space of input variables, under a carrier measure $\Lambda (x)$ \begin{equation} \mathcal{F} = \left\{ m(x) \; \middle| \; m: \mathcal{X} \to \mathbb{R}_{+}, \; \int_{x \in \mathcal{X}} m(x) \mathrm{d} \Lambda (x) < \infty \right\} \end{equation} and the space of probability densities as a subspace of $\mathcal{F}$ \begin{equation} \mathcal{S} = \left\{ m(x) \; \middle| \; m : \mathcal{X} \to \mathbb{R}_{+}, \; \int_{x \in \mathcal{X}} m(x) \mathrm{d} \Lambda (x) =1 \right\} \subset \mathcal{F}. \end{equation} We restate the KL divergence with more generality as \begin{align} D(f, g) = \int f \log \frac{f}{g} \mathrm{d} \Lambda. \end{align} The integral with respect to the measure $\Lambda$ should be read as a summation when we consider discrete variables. When the KL divergence is adopted for measuring a statistical distance between distributions, the $m$-geodesic and the $e$-geodesic play the most important roles. The $m$-geodesic is defined as a set of interior points between two distributions $p(x)$ and $q(x)$: \begin{equation} r(x;t) = (1-t) p(x) + t q(x), \quad t \in (0,1). \end{equation} The $e$-geodesic is defined as a set of interior points between $p(x)$ and $q(x)$ in the sense of the logarithmic representation: \begin{equation} \log r(x;t) = (1-t) \log p(x) + t \log q(x) - a(t), \quad t \in (0,1), \end{equation} where $a(t)$ is the normalization constant that makes $r(x;t)$ a probability function and is defined by \begin{equation} a(t) = \log \int p(x)^{1-t} q(x)^t \mathrm{d} x. \end{equation} Let $K$ be a submanifold of $\mathcal{S}$ and $p \in \mathcal{S}$. We call $\hat{p}$ an $m$-projection of $p$ onto $K$ when the $m$-geodesic connecting $p$ and $\hat{p}$ is orthogonal to $K$ with respect to the Fisher metric $g$ at $\hat{p}$. Also, we call $\hat{p}$ an $e$-projection of $p$ onto $K$ when the $e$-geodesic connecting $p$ and $\hat{p}$ is orthogonal to $K$ at $\hat{p}$. In information geometry~\cite{amari2000methods}, a manifold that consists of statistical models is called a model manifold and is denoted by $\mathcal{M}$. One of the representative parametric models is the exponential family \begin{equation} \mathcal{M}_e = \left\{ p(x;\theta) = \exp \left( \sum_{i=1}^{s} \theta_i t_i(x) - \psi(\theta) \right), \quad \theta = (\theta_1,\dots,\theta_s) \in \Theta \subseteq \mathbb{R}^{s} \right\}, \end{equation} which includes many important distributions such as the Gaussian distribution, exponential distribution, Poisson distribution, and Bernoulli distribution, for example. Let us consider a mixture family of distributions spanned by $d$ distinct probability functions $p_i(x)$, \begin{align} \mathcal{M}_m = \left\{ p(x;\theta) = \sum_{i=1}^{d} \theta_i p_i(x), \; \theta_i >0, \; \sum_{i=1}^{d} \theta_i = 1 \right\}. \end{align} This set $\mathcal{M}_m$ is closed under internal division, i.e., any $m$-geodesic that connects two arbitrarily chosen distributions in $\mathcal{M}_m$ is included in $\mathcal{M}_m$. This means that the manifold is composed of straight lines and $\mathcal{M}_m$ is a flat subset of $\mathcal{S}$ in the sense of the straightness induced by $m$-geodesics. Similarly, for an exponential family, any $e$-geodesic connecting any two points in $\mathcal{M}_e$ is included in $\mathcal{M}_e$, and the subset $\mathcal{M}_e$ is flat in the sense of the straightness induced by $e$-geodesics.
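The two interpolations are easy to compare numerically for distributions on a finite sample space. The following sketch (ours; the two probability vectors are arbitrary illustrations) computes points on the $m$-geodesic and on the $e$-geodesic, including the normalization by $a(t)$.

\begin{verbatim}
import numpy as np

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])

def m_geodesic(p, q, t):
    # mixture interpolation: a straight line in the simplex
    return (1 - t) * p + t * q

def e_geodesic(p, q, t):
    # exponential interpolation: a straight line in log-coordinates,
    # normalized by exp(-a(t)) with a(t) = log sum p^(1-t) q^t
    r = p**(1 - t) * q**t
    return r / r.sum()

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, m_geodesic(p, q, t), e_geodesic(p, q, t))
\end{verbatim}

Both curves connect $p$ (at $t=0$) to $q$ (at $t=1$) but pass through different intermediate distributions; this difference is exactly what distinguishes $m$-flat from $e$-flat families.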
The notion of flatness is defined in a more rigorous manner by using the metric and curvature tensors~\cite{amari2000methods,ay2017information}, but the above intuitive explanation suffices for explaining the $em$ algorithm in this paper. We then introduce the notion of orthogonal projection by defining tangent vectors and the inner product in the space of statistical models $\mathcal{S}$. Consider the partial derivative operator $\partial_{\alpha} = \partial/\partial \alpha$ along the direction $\alpha$; as is conventionally done in the literature of differential geometry~\cite{kobayashi1996foundations}, we identify $\partial_{\alpha}$ with a basis vector of the tangent space of the manifold of interest. For example, a tangent vector along an $m$-geodesic with a parameter $t$ is
\begin{align}
\partial_{t} \log r(x;t) =& \frac{\partial_{t}r(x;t)}{r(x;t)} = \frac{q(x)-p(x)}{r(x;t)}.
\end{align}
A tangent vector along an $e$-geodesic is
\begin{align}
\partial_t \log r(x;t) =& \log q(x) - \log p(x) - \dot{a}(t).
\end{align}
The tangent vectors of the model manifold are naturally defined by the derivatives with respect to the model parameter $\theta$ as
\begin{align}
\partial_{i} \log p(x;\theta) = \frac{ \partial_i p(x;\theta)}{p(x;\theta)},
\end{align}
where $\partial_i$ is the partial derivative with respect to the $i$-th element of the parameter $\theta$. We can define a special form of the inner product in the space of probability distributions $\mathcal{S}$ as
\begin{equation}
\langle \partial_{\alpha} p, \partial_{\beta} p \rangle = \mathbb{E}_{p} [ (\partial_{\alpha} \log p(X)) (\partial_{\beta} \log p(X)) ].
\end{equation}
Consider the point $p(\hat{\theta}) \in \mathcal{M}$ closest to $q$ in terms of the KL divergence
\begin{align}
\hat{\theta} = \mathop{\rm arg~min}\limits_{\theta} D(q , p(\theta)) = \mathop{\rm arg~min}\limits_{\theta} \mathbb{E}_{q}[\log q(X) - \log p(X;\theta)].
\end{align}
By definition, at $\theta = \hat{\theta}$, all partial derivatives of the KL divergence vanish:
\begin{align}
\left. \partial_{i} D(q , p(\theta))\right|_{\theta=\hat{\theta}} = - \mathbb{E}_{q} \left[ \partial_i \log p(X; \hat{\theta}) \right] = 0.
\end{align}
The inner product of the tangent vector along the $m$-geodesic at $p(\hat{\theta})$
\begin{align}
\left. \partial_t \log r(x;t) \right|_{t=0} =& \left. \frac{q(x) - p(x;\hat{\theta})}{r(x;t)} \right|_{t=0} \\
=& \frac{q(x) - p(x;\hat{\theta})}{p(x;\hat{\theta})}
\end{align}
and the tangent vectors along the model manifold at $p(\hat{\theta})$
\begin{align}
\left. \partial_{i} \log p(x;\theta) \right|_{\theta=\hat{\theta}} = \frac{ \partial_i p(x;\hat{\theta})}{p(x;\hat{\theta})}
\end{align}
is calculated as
\begin{align}
\mathbb{E}_{p_{\hat{\theta}}} [ \partial_t \log r(X;0) \cdot \partial_i \log p(X;\hat{\theta}) ] =& \int \left( \frac{q(x) - p(x;\hat{\theta})}{p(x;\hat{\theta})} \right) \partial_i \log p(x;\hat{\theta}) p(x;\hat{\theta}) \mathrm{d} x \\
=& \mathbb{E}_q [ \partial_i \log p(X;\hat{\theta})] - \mathbb{E}_{p_{\hat{\theta}}} [ \partial_i \log p(X;\hat{\theta}) ]=0.
\end{align}
Thus, the $m$-geodesic between $q$ and $p(\hat{\theta})$ is orthogonal to the model manifold, and $p(\hat{\theta})$ in this case is called the $m$-projection from $q$ onto $\mathcal{M}$. We note that when $q$ is the empirical distribution $q(x) = \frac{1}{n} \sum_{i=1}^{n} \delta(x-x_i)$ of the observed data $\{x_i\}_{i=1}^{n}$, the $m$-projection coincides with the maximum likelihood estimation.
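This last connection can be checked numerically. The following minimal sketch (illustrative only; it uses SciPy for a one-dimensional minimization) $m$-projects the empirical distribution of toy binary observations onto the Bernoulli family and recovers the sample mean, i.e., the maximum likelihood estimate.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

data = np.array([0, 1, 1, 0, 1, 1, 1, 0])  # toy Bernoulli samples
# Empirical distribution q over {0, 1}.
q = np.array([np.mean(data == 0), np.mean(data == 1)])

def kl_to_model(theta):
    # D(q, p_theta) for the Bernoulli(theta) model.
    p = np.array([1.0 - theta, theta])
    return np.sum(q * np.log(q / p))

res = minimize_scalar(kl_to_model, bounds=(1e-6, 1 - 1e-6),
                      method="bounded")
print(res.x, data.mean())  # both equal the MLE (the sample mean)
\end{verbatim}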
The $e$-projection is also defined in the same manner as the $m$-projection. Consider the point $p(\hat{\theta}) \in \mathcal{M}$ closest to $q$ in terms of the KL divergence
\begin{align}
\hat{\theta} = \mathop{\rm arg~min}\limits_{\theta} D(p(\theta) , q) = \mathop{\rm arg~min}\limits_{\theta} \mathbb{E}_{p_{\theta}}[\log p(X;\theta) - \log q(X)].
\end{align}
By definition,
\begin{align}
\left. \partial_i D(p(\theta),q) \right|_{\theta=\hat{\theta}} =& \int \partial_i p(x;\hat{\theta}) ( \log p(x;\hat{\theta}) - \log q(x) ) \mathrm{d} x = 0.
\end{align}
The tangent vector along the $e$-geodesic is given by
\begin{align}
\left. \partial_t \log r(x;t) \right|_{t=0} =& \log q(x) - \log p(x;\hat{\theta}) - \left. \dot{a}(t)\right|_{t=0}\\
=& \log q(x) - \log p(x;\hat{\theta}) - \mathbb{E}_{p_{\hat{\theta}}} [ \log q(X) - \log p(X;\hat{\theta}) ].
\end{align}
Then, the inner product of this tangent vector and that of the model manifold is shown to be zero as
\begin{align}
\mathbb{E}_{p_{\hat{\theta}}} [ \partial_t \log r(X;0) \cdot \partial_i \log p(X;\hat{\theta}) ] =& \int \partial_i p(x;\hat{\theta}) \left\{ \log q(x) - \log p(x;\hat{\theta}) \right\} \mathrm{d} x=0,
\end{align}
so these two tangent vectors are orthogonal. It is known that the $e$-projection onto an $m$-flat manifold is unique, and the $m$-projection onto an $e$-flat manifold is also unique.
\begin{defi}
Let $\mathcal{M}$ be a submanifold of $\mathcal{S}$. Assume that for any $p,q \in \mathcal{M}$ and $t \in (0,1)$, the element
\begin{equation}
r = t p + (1-t) q \in \mathcal{S}
\end{equation}
belongs to $\mathcal{M}$. Then, $\mathcal{M}$ is said to be $m$-autoparallel. Let $\mathcal{E}$ be a submanifold of $\mathcal{S}$. Assume that for any $p,q \in \mathcal{E}$ and $t \in (0,1)$, the element $r$ for which
\begin{equation}
\log r = t \log p + (1-t) \log q - a(t)
\end{equation}
belongs to $\mathcal{E}$, where the constant $a(t)$ is the normalizing factor. Then, $\mathcal{E}$ is said to be $e$-autoparallel.
\end{defi}
We note that technically the notion of being autoparallel is defined in terms of the covariant derivative~\cite{amari2000methods}, but the above definition suffices for the purpose of this paper.
\section{$em$ algorithm}
Consider the situation that a random vector $X$ is observed while there exists a hidden variable $Z$. The problem is to determine the parameter $\theta$ of the statistical model $p(x,z;\theta)$ only from the observations $\{x_1,\dots,x_n\}$. Since there are hidden variables that cannot be observed, it is impossible to calculate all the statistics needed to specify a point in the space $\mathcal{P}$ of joint distributions only from the observed data. In this case, we first consider the marginal distribution of the observed variables and gather all the distributions that have the same marginal distribution as the empirical distribution of the observed variables. The set of these distributions, constrained by the observed marginal distribution, represents the observed data and is called the data manifold $\mathcal{D}$. We introduce a parameter $\eta$ to specify a point in the data manifold $\mathcal{D}$. Let $q(x)$ be the marginal distribution of $x$. All the points in $\mathcal{D}$ have the same marginal distribution, and any point in $\mathcal{D}$ can be represented as
\begin{align}
q(x,z ; \eta) = q(x) q(z|x;\eta),
\end{align}
where $\eta$ is also regarded as the parameter of the conditional probability density function $q(z|x;\eta)$. A natural way of choosing a point in the model manifold $\mathcal{M}$ is to adopt the closest point in $\mathcal{M}$ from the data manifold $\mathcal{D}$.
It can be achieved by measuring the statistical distance between a point $q(\eta)$ in $\mathcal{D}$ and a point $p(\theta)$ in $\mathcal{M}$ with the KL divergence as
\begin{align}
D(q(\eta), p(\theta)) = \int q(x,z;\eta) \log \frac{ q(x,z;\eta) } { p(x,z;\theta) } \mathrm{d} x \mathrm{d} z,
\end{align}
and obtaining the parameters $\hat{\eta}$ and $\hat{\theta}$ that minimize the divergence. The $em$ algorithm is a method of solving this estimation problem by applying the $e$-projection and the $m$-projection repeatedly. The procedure is composed of the following two steps.
\begin{description}
\item[$e$-step:] Apply the $e$-projection from $p(\theta_t)$ onto $\mathcal{D}$, and obtain $\eta_{t+1}$ by
\begin{align}
\eta_{t+1} = \mathop{\rm arg~min}\limits_{\eta} D(q(\eta), p(\theta_t)). \label{eq:e_step}
\end{align}
\item[$m$-step:] Apply the $m$-projection from $q(\eta_{t+1})$ onto $\mathcal{M}$ and obtain $\theta_{t+1}$ by
\begin{align}
\theta_{t+1} = \mathop{\rm arg~min}\limits_{\theta} D(q(\eta_{t+1}), p(\theta)). \label{eq:m_step}
\end{align}
\end{description}
Starting from an initial value $\theta_0$, the procedure is expected to converge to the optimal value after a sufficiently large number of iterations.
\begin{figure}[ht] \centering \includegraphics[width=.8\linewidth]{em} \caption{Geometric perspective of the $em$ algorithm.} \label{fig:em} \end{figure}
If the model manifold is $e$-flat and the data manifold is $m$-flat, it is shown that in each step the projection is uniquely determined, but the algorithm can converge to one of the local minima in general. Note that the procedure in the $e$-step is equivalent to minimizing
\begin{align}
D(q(\eta), p(\theta_t)) =& \int q(x) q(z|x;\eta) \log \frac{ q(x) q(z|x;\eta) }{ p(x;\theta_t) p(z|x;\theta_t) } \mathrm{d} x \mathrm{d} z \\
=& \int q(x) \log \frac{q(x)}{p(x;\theta_t)} \mathrm{d} x \\
&+ \int q(x) q(z|x;\eta) \log \frac{ q(z|x;\eta) }{ p(z|x;\theta_t) } \mathrm{d} x \mathrm{d} z\\
=& \int q(x) \log \frac{q(x)}{p(x;\theta_t)} \mathrm{d} x \\
&+ \int q(x) D(q(z|x;\eta), p(z|x;\theta_t)) \mathrm{d} x.
\end{align}
It is reduced to minimizing the conditional KL divergence $D(q(z|x;\eta), p(z|x;\theta_t))$. Because of the nonnegativity of the KL divergence, in most cases, the parameter update $\eta_{t} \to \eta_{t+1}$ is realized by solving
\begin{equation}
q(z|x;\eta_{t+1}) = p(z|x;\theta_{t})
\end{equation}
with respect to $\eta_{t+1}$. Remember that the EM algorithm is an alternating optimization procedure composed of the E- and M-steps.
\begin{description}
\item[E-step:] Calculate $Q(\theta,\theta_t)$ defined by
\begin{align}
Q(\theta,\theta_t) = \frac{1}{n} \sum_{i=1}^{n} \left\{ \int p(z|x_i; \theta_t) \log p(x_i,z; \theta) \mathrm{d} z \right\}.
\end{align}
\item[M-step:] Find $\theta_{t+1}$ that maximizes $Q(\theta,\theta_t)$ with respect to $\theta$:
\begin{equation}
\theta_{t+1} = \mathop{\rm arg~max}\limits_{\theta} Q(\theta,\theta_t).
\end{equation}
\end{description}
The EM algorithm can also be seen as a motion on the data manifold and the model manifold. In the M-step, the estimate is obtained by the $m$-projection from a point in the data manifold to a point in the model manifold, and this operation is equivalent to the $m$-step. On the other hand, in the E-step, the conditional expectation is considered, and this is slightly different from the $e$-projection in the $e$-step. Let $q(x)$ be the empirical distribution of the observed variables $X$.
Suppose $q(z|x;\eta_{t+1}) = p(z|x;\theta_{t})$ holds in the $e$-step; then the objective function evaluated in the $m$-step is
\begin{align}
D(q(\eta_{t+1}), p(\theta)) =& \int q(x)p(z|x;\theta_t) \log \frac{ q(x) p(z|x;\theta_t) }{ p(x,z;\theta) } \mathrm{d} x \mathrm{d} z\\
=& \int q(x) p(z|x;\theta_t) \log \left( q(x) p(z|x;\theta_t) \right) \mathrm{d} x \mathrm{d} z - Q(\theta,\theta_t).
\end{align}
This shows that the $m$-step and the M-step are equivalent if the first term can be properly integrated. The problem occurs when the integrals including the empirical distribution, which is a sum of delta functions, are not appropriately defined. In~\cite{AMARI19951379}, the case where $\mathcal{P}$ is an exponential family and the model manifold is a curved exponential family embedded in $\mathcal{P}$ was considered, and it was shown that the E-step and the $e$-step give different estimates. This result mainly comes from the fact that the expectation of the hidden variables and the expectation conditioned by the observed variables do not agree:
\begin{equation}
\mathbb{E}_{q(\eta)}[Z] \neq \mathbb{E}_{q(\eta)}[ Z \mid X=\mathbb{E}_{q(\eta)}[X]].
\end{equation}
\subsection{Robust variant: $um$ algorithm}
Since the EM algorithm is an algorithm for maximum likelihood estimation, in this paper, we mainly consider the KL divergence. However, it is well known that the KL divergence is vulnerable to outliers, as is maximum likelihood estimation, and robust estimation methods have been proposed using the Bregman divergence~\cite{BREGMAN1967200}. Let $U$ be a monotonically increasing convex function on $\mathbb{R}$, and $u$ be the derivative of $U$. We define $U^{\ast}(\zeta) = \sup_{z \in \mathbb{R}} \{z \zeta -U(z)\}$, that is, the Legendre transform of $U$, and $u^{\ast} = u^{-1}$ as the derivative of $U^{\ast}$. We consider transforming the function $f$ by $u^{\ast}(f)$ and denote it as $\breve{f} = u^{\ast}(f)$, which is called the $u$-representation of the function $f$. Then, the Bregman potential between two functions $f$ and $g$ is defined as
\begin{align}
d_U(f,g) = U^{\ast}(f) + U(\breve{g}) - f \breve{g},
\end{align}
and the Bregman divergence is defined as
\begin{align}
D_{U}(p,q) = \int d_{U}(p(y),q(y)) \mathrm{d} \Lambda (y) = \int d_{U}(p,q) \mathrm{d} \Lambda,
\end{align}
where $p$ and $q$ are probability density or probability mass functions. Note that we omit the integral variable $y$ for notational simplicity. The most popular convex function $U$ and its related functions for the Bregman divergence would be the exponential function, which leads to the KL divergence, where
\begin{align}
\begin{aligned}
U(z) &= \exp(z), & U^{\ast}(\zeta) &= \zeta ( \log \zeta -1 ), \\
u(z) &= \exp(z),& u^{\ast}(\zeta) &= \log \zeta.
\end{aligned}
\end{align}
Other important examples include the $\eta$-type with $\eta \geq 0$
\begin{align}
\begin{aligned}
U(z) &= \exp(z) + \eta z, & U^{\ast}(\zeta) &= (\zeta - \eta) \{ \log (\zeta - \eta) -1\},\\
u(z) &= \exp(z) + \eta, & u^{\ast}(\zeta) &= \log (\zeta - \eta),
\end{aligned}
\end{align}
and the $\beta$-type with $\beta \geq 0$
\begin{align}
\begin{aligned}
U(z) &= \frac{1}{\beta+1} (\beta z + 1)^{\frac{\beta+1}{\beta}}, & U^{\ast}(\zeta) &= \frac{\zeta^{\beta+1}}{\beta(\beta+1)} - \frac{\zeta}{\beta},\\
u(z) &= (\beta z + 1)^{1/\beta},& u^{\ast}(\zeta) &= \frac{\zeta^{\beta} -1}{\beta}.
\end{aligned}
\end{align}
Both the $\eta$-type and $\beta$-type functions are known to lead to robust estimators.
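As a concrete illustration of these definitions, the following minimal sketch (illustrative code; the constant \texttt{BETA} and all function names are ours) evaluates the $\beta$-type functions and the resulting $u$-divergence between two probability vectors. Nonnegativity, with equality iff the arguments coincide, follows from the Fenchel--Young inequality.
\begin{verbatim}
import numpy as np

BETA = 0.5  # illustrative beta-type parameter

def u_rep(f):
    # u-representation u*(f) for the beta-type functions.
    return (f ** BETA - 1.0) / BETA

def U(z):
    return (BETA * z + 1.0) ** ((BETA + 1.0) / BETA) / (BETA + 1.0)

def U_star(f):
    return f ** (BETA + 1.0) / (BETA * (BETA + 1.0)) - f / BETA

def u_divergence(p, q):
    # D_U(p, q) = sum of U*(p) + U(u*(q)) - p u*(q); >= 0 with
    # equality iff p == q (Fenchel--Young inequality).
    return np.sum(U_star(p) + U(u_rep(q)) - p * u_rep(q))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(u_divergence(p, q), u_divergence(p, p))  # positive, then 0
\end{verbatim}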
The Bregman divergence is also called the $u$-divergence, and the robust variant of the $em$ algorithm based on the Bregman divergence is called the $um$ algorithm. The basic idea is simply to change the $e$-projection to the $u$-projection, i.e., instead of Eq.~\eqref{eq:e_step} in the $em$ algorithm, we consider
\begin{equation}
\eta_{t+1} = \mathop{\rm arg~min}\limits_{\eta} D_{U}(q(\eta), p(\theta_{t})).
\end{equation}
However, $u$-projections with respect to Bregman divergences such as the $\beta$-divergence and $\eta$-divergence are generally not obtained in closed form. In~\cite{FujimotoMurataEM2007}, for estimating the model and mixture parameters in finite mixture models, two simplifications of the $m$-projection were proposed to make the inference computationally feasible. The influence function of the $u$-mixture of exponential family models with respect to an outlying mixture component was derived in~\cite{HE_IG_2022}. We also note that the extension to the Bregman divergence is reconsidered in~\cite{DBLP:journals/corr/abs-2201-02447} and applied to the rate-distortion problem in the quantum channel.
\section{Geometric perspective of channel capacity}
In this section, we introduce the information geometric perspective of the estimation algorithm of channel capacity. A memoryless channel with a finite input alphabet $\Omega_1$ and a finite output alphabet $\Omega_2$ is determined by a stochastic matrix $R : \Omega_1 \to \Omega_2$ or a family of distributions $\{r(\cdot | x )\}_{x \in \Omega_1}$ on $\Omega_2$. Let $\mathcal{S}_i$ be the set of all probability distributions on $\Omega_i$, $i=1,2$:
\begin{align}
\mathcal{S}_i = \{ p : \Omega_i \to \mathbb{R}_{++} \mid \sum_{x \in \Omega_i} p(x) = 1\}, \; i=1,2,
\end{align}
where $\mathbb{R}_{++} = \{ x \in \mathbb{R} \mid x >0\}$. Similarly, let $\mathcal{S}_3$ be the set consisting of all probability distributions on $\Omega_1 \times \Omega_2$. A channel is defined by a triple $(\Omega_1, r(y|x), \Omega_2)$ of finite sets $\Omega_1, \Omega_2$ and a map $r : x \mapsto r(\cdot|x)$. The map $I:\mathcal{S}_3 \to \mathbb{R}$ defined by
\begin{equation}
I(p(x,y)) = D(p(x,y) , q(x) \cdot r(y)),
\end{equation}
where $q(x)$ and $r(y)$ are the marginal distributions of $p(x,y)$, is called the mutual information. Given a channel $(\Omega_1, r(y|x), \Omega_2)$, the channel capacity is defined by
\begin{align}
C = \sup_{q(x) \in \mathcal{S}_{1}} I(q(x) \cdot r(y|x)).
\end{align}
Suppose that a probability distribution $\hat{q}(x) \in \mathcal{S}_1$ attains the channel capacity $C$. Then for any $x \in \Omega_1$, the following equation holds:
\begin{align}
D(r(y|x), r_{\hat{q}}(y)) = C, \label{eq:CC_attain1}
\end{align}
where $r_{\hat{q}}(y)$ is the marginal distribution of $\hat{q}(x) \cdot r(y|x)$ on $\Omega_2$. Conversely, if there exist $\hat{C} \geq 0$ and $\hat{q} \in \mathcal{S}_1$ satisfying
\begin{align}
D(r(y|x), r_{\hat{q}}(y)) = \hat{C} \label{eq:CC_attain2}
\end{align}
for all $x \in \Omega_1$, then $\hat{C}$ and $\hat{q}(x)$ are the channel capacity and a probability distribution that attains the channel capacity, respectively. The Arimoto algorithm~\cite{1054753} updates a distribution $q^{(t)}(x) \in \mathcal{S}_1$ by the update rule
\begin{align}
q^{(t+1)}(x) = \frac{ q^{(t)}(x) \exp \{ D(r(y|x) , r^{(t)}(y))\} }{ \sum_{x'} q^{(t)} (x') \exp \{ D(r(y|x') , r^{(t)}(y))\} }, \label{eq:Arimoto}
\end{align}
where $r^{(t)}(y)$ is the marginal distribution of $q^{(t)} (x) \cdot r(y|x)$ and is denoted as $r^{(t)}(y)= r_{q^{(t)}}(y)$.
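For reference, a minimal sketch of this update rule is given below. It assumes a strictly positive channel matrix (so that all KL divergences are finite), works in nats, and all variable names are illustrative.
\begin{verbatim}
import numpy as np

def arimoto(r, n_iter=300):
    # Blahut--Arimoto iteration for a discrete memoryless channel.
    # r[x, y] = r(y|x); every row sums to one; assumes r > 0.
    nx = r.shape[0]
    q = np.full(nx, 1.0 / nx)        # initial input distribution
    for _ in range(n_iter):
        r_out = q @ r                # output marginal r^{(t)}(y)
        # d[x] = D(r(.|x), r^{(t)})
        d = np.sum(r * np.log(r / r_out), axis=1)
        q = q * np.exp(d)            # multiplicative update
        q /= q.sum()                 # normalization
    r_out = q @ r
    capacity = np.sum(q * np.sum(r * np.log(r / r_out), axis=1))
    return capacity, q

bsc = np.array([[0.9, 0.1], [0.1, 0.9]])  # binary symmetric channel
print(arimoto(bsc))  # capacity in nats; optimal input is uniform
\end{verbatim}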
It is known that the Arimoto algorithm monotonically increases the mutual information $I(q^{(t)}(x) \cdot r(y|x))$, which converges to the channel capacity.
\subsection{Information geometric perspective of channel capacity}
Recently, the information geometric perspective of the Arimoto algorithm has been elucidated~\cite{Toyota2020}. Define subsets $\mathcal{M}$ and $\mathcal{E}$ of $\mathcal{S}_3$ as
\begin{align}
\mathcal{M} =& \{ q(x) \cdot r(y|x) \mid q(x) \in \mathcal{S}_1\},\\
\mathcal{E} =& \{ \tilde{q}(x) \cdot r(y) \mid \tilde{q}(x) \in \mathcal{S}_1, r(y) \in \mathcal{S}_2\}.
\end{align}
The subspace $\mathcal{M}$ is composed of probability distributions of the form $q(x) \cdot r(y|x)$. The conditional distribution $r(y|x)$ is a fixed channel; hence any point in $\mathcal{M}$ is specified by an input distribution $q(x) \in \mathcal{S}_1$. In contrast, $\mathcal{E}$ is composed of probability distributions of the form $\tilde{q}(x) \cdot r(y)$, namely, distributions for which the input and the output are mutually independent. Note that any input distribution in $\mathcal{E}$ is denoted by $\tilde{q}(x) \in \mathcal{S}_1$ to differentiate it from that in $\mathcal{M}$. It is easy to verify that $\mathcal{M}$ is $m$-autoparallel and $\mathcal{E}$ is $e$-autoparallel. For $p(x,y) \in \mathcal{S}_3$, the $m$-projection of $p$ onto $\mathcal{E}$ is $q(x)\cdot r(y)$, where $q(x)$ and $r(y)$ are the marginal distributions of $p(x,y)$. Then, the capacity is written as
\begin{align}
C = \sup_{p(x,y) \in \mathcal{M}} D (p(x,y) , \Pi^{(m)} (p(x,y))),
\end{align}
where $\Pi^{(m)}(p(x,y))$ is the $m$-projection of $p(x,y)$ onto $\mathcal{E}$. From this expression, we see that the channel capacity $C$ is characterized by the largest divergence from $\mathcal{M}$ to $\mathcal{E}$. In contrast to the EM algorithm, for estimating the channel capacity, we must maximize the KL divergence between two flat statistical manifolds. We cannot expect convergence to the channel capacity by a simple iteration of the $e$- and $m$-projections of the $em$ algorithm. For this problem, the backward $em$ algorithm was proposed in~\cite{Toyota2020}. For $q^{(t)}(x) \cdot r(y|x) = p^{(t)} (x,y) \in \mathcal{M}$, update $q^{(t+1)}(x) \cdot r(y|x) = p^{(t+1)}(x,y) \in \mathcal{M}$ as follows.
\begin{description}
\item[Backward $e$-step:] Search for $\tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y) \in \mathcal{E}$ such that the unique $e$-projection from $\tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y)$ onto $\mathcal{M}$ is $p^{(t)}(x,y)$.
\item[Backward $m$-step:] Search for $q^{(t+1)}(x) \cdot r(y|x) \in \mathcal{M}$ such that the unique $m$-projection from $q^{(t+1)}(x) \cdot r(y|x)$ onto $\mathcal{E}$ is $\tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y)$. Set $p^{(t+1)}(x,y) = q^{(t+1)}(x)\cdot r(y|x)$.
\end{description}
\begin{figure} \centering \includegraphics[width=.8\linewidth]{Bem} \caption{Left: The $em$ algorithm to minimize the KL divergence between two manifolds. Right: The backward $em$ algorithm to maximize the KL divergence between two manifolds.} \label{fig:Bem} \end{figure}
It is proven that
\begin{equation}
I(p^{(t)}(x,y)) \leq I(p^{(t+1)}(x,y))
\end{equation}
holds.
For the backward $e$-step, it is shown that for a given probability distribution $p^{(t)}(x,y) \in \mathcal{M}$, there exists a probability distribution $\tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y) \in \mathcal{E}$ that satisfies $\Pi^{(e)} (\tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y)) = p^{(t)}(x,y)$, and it is written as
\begin{align}
\tilde{q}^{(t+1)}(x) \propto q^{(t)}(x) \exp \{ D(r(\cdot|x) , r^{(t+1)}(\cdot)) \}. \label{eq:q_in_E}
\end{align}
For later use, define an $e$-autoparallel subset $\mathcal{E}^{(t)}$ of $\mathcal{E}$ by
\begin{align}
\mathcal{E}^{(t)} = \{ \tilde{q}(x) \cdot r(y) \mid \Pi^{(e)} ( \tilde{q}(x) \cdot r(y)) = q^{(t)} (x) \cdot r(y|x) \in \mathcal{M} \},
\end{align}
which is composed of candidates for the backward $e$-step. To carry out the backward $m$-step, it is important to choose an appropriate probability distribution $\tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y) \in \mathcal{E}^{(t)}$ so that there exists $p^{(t+1)}(x,y) \in \mathcal{M}$ such that $\Pi^{(m)}(p^{(t+1)}(x,y)) = \tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y)$. Let $\Pi^{(m)}(\mathcal{M})$ be the image of $\mathcal{M}$ under the $m$-projection onto $\mathcal{E}$ (Fig.~\ref{fig:Arimoto}), and assume\footnote{The existence and uniqueness of the intersection are not guaranteed in general.} there exist intersections of $\Pi^{(m)}(\mathcal{M})$ and $\mathcal{E}^{(t)}$. Choose an arbitrary point $\tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y) \in \Pi^{(m)}(\mathcal{M}) \cap \mathcal{E}^{(t)}$. For such a point, we can perform the backward $m$-step.
\begin{figure} \centering \includegraphics[width=.8\linewidth]{Arimoto} \caption{Schematics of backward $em$ and Arimoto algorithms. In the backward $e$-step, the intersection of $\Pi^{(m)}(\mathcal{M})$ and $\mathcal{E}^{(t)}$ is searched. In contrast, the Arimoto algorithm only considers the restriction $\mathcal{E}^{(t)}$ for update.} \label{fig:Arimoto} \end{figure}
The problem of finding $\tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y) \in \mathcal{E}^{(t)}$ for the backward $m$-step is thus equivalent to finding a point in the intersection $\Pi^{(m)}(\mathcal{M}) \cap \mathcal{E}^{(t)}$. Let us focus on an element $\tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y) \in \mathcal{E}^{(t)}$. Given $r^{(t+1)}(y)$, the form of $\tilde{q}^{(t+1)}(x)$ is determined by Eq.~\eqref{eq:q_in_E}; hence it only depends on $r^{(t+1)}(y)$, and $\tilde{q}^{(t+1)}(x)$ is regarded as a function of $r^{(t+1)}(y)$ henceforth. Note that, by the definition of the $m$-projection onto $\mathcal{E}$, $\Pi^{(m)}(q^{(t+1)}(x) \cdot r(y|x)) = q^{(t+1)}(x) \cdot r_{q^{(t+1)}}(y)$, where $r_{q^{(t+1)}}(y)$ is the marginal distribution of $q^{(t+1)}(x)\cdot r(y|x)$. Then, the function $r^{(t+1)}(y)$ must satisfy the following condition:
\begin{align}
\exists q^{(t+1)}(x) \in \mathcal{S}_1 \; s.t. \; \Pi^{(m)}(q^{(t+1)}(x) \cdot r(y|x)) = q^{(t+1)}(x) \cdot r_{q^{(t+1)}}(y) = \tilde{q}^{(t+1)}(x) \cdot r^{(t+1)}(y),
\end{align}
which forces $q^{(t+1)}(x) = \tilde{q}^{(t+1)}(x)$ and $r_{q^{(t+1)}}(y) = r^{(t+1)}(y)$. Concretely,
\begin{align}
r_{q^{(t+1)}}(y) = & \sum_{x \in \Omega_1} q^{(t+1)}(x) \cdot r(y|x) \\
\propto & \sum_{x \in \Omega_1} q^{(t)}(x) \cdot \exp D(r(\cdot | x), r^{(t+1)}(\cdot)) \cdot r(y|x);
\end{align}
hence we must solve
\begin{align}
r^{(t+1)}(y) = \frac{1}{Z(r^{(t+1)})} \sum_{x \in \Omega_1} q^{(t)}(x) \cdot \exp D(r(\cdot | x), r^{(t+1)}(\cdot)) \cdot r(y|x) \label{eq:BeStep}
\end{align}
with respect to $r^{(t+1)}(y)$, where $Z(r^{(t+1)})$ is the normalization term. Solving Eq.~\eqref{eq:BeStep} for the distribution $r^{(t+1)}(y)$ is prohibitive in general.
To make the problem tractable, consider approximating the KL divergence in Eq.~\eqref{eq:BeStep} by a constant. By Eqs.~\eqref{eq:CC_attain1} and~\eqref{eq:CC_attain2}, this is regarded as approximating $D(r(\cdot |x), r^{(t+1)}(\cdot))$ by the attained channel capacity $C$. Then, the problem in Eq.~\eqref{eq:BeStep} is reduced to
\begin{align}
r^{(t+1)}(y) = \sum_{x \in \Omega_1} q^{(t)}(x) \cdot r(y|x) = r_{q^{(t)}}(y),
\end{align}
which is an explicit solution for $r^{(t+1)}(y)$, and $\tilde{q}^{(t+1)}(x) \in \mathcal{S}_1$ is then approximated as
\begin{equation}
\tilde{q}^{(t+1)}(x) \propto q^{(t)}(x) \exp D(r(\cdot |x), r_{q^{(t)}}(\cdot)).
\end{equation}
The backward $m$-step also must be approximated because, owing to the approximation of the backward $e$-step, $\tilde{q}^{(t+1)}(x) \cdot r_{q^{(t)}}(y)$ is not necessarily in $\Pi^{(m)}(\mathcal{M}) \cap \mathcal{E}^{(t)}$. The backward $m$-step is simply approximated by the $m$-projection of $\tilde{q}^{(t+1)}(x) \cdot r_{q^{(t)}}(y)$ onto $\mathcal{M}$ and given as
\begin{align}
\Pi^{(m)}(\tilde{q}^{(t+1)}(x) \cdot r_{q^{(t)}}(y)) = \tilde{q}^{(t+1)}(x) \cdot r(y|x).
\end{align}
In summary, the approximated backward $em$ algorithm is reduced to the update of $\tilde{q}^{(t+1)}(x)$ by
\begin{equation}
\tilde{q}^{(t+1)}(x) \propto q^{(t)}(x) \exp D(r(\cdot |x), r_{q^{(t)}}(\cdot)),
\end{equation}
which is nothing but the Arimoto algorithm~\eqref{eq:Arimoto}.
\subsection{Addendum: turbo decoding, LDPC code}
Finally, we mention the information geometric approach for other instances of information theory. Turbo codes and low-density parity check (LDPC) codes have revolutionized coding theory research and are now in practical use and standardized. The common features of these codes are that they are composed of simple codes and that they can be decoded with low computational complexity even when the code length is large. In addition, by designing appropriately long codes, it is possible to achieve performance close to the theoretical bound given by the channel capacity. Turbo decoding is a method of maximum posterior marginal decoding of codes transmitted over a memoryless binary symmetric channel using two parity-check words. It has a special iterative estimation structure. This iterative structure is different from that of the EM algorithm, but it is also analyzed precisely from the viewpoint of information geometry~\cite{IkedaTA2004}.
\section{Parameter estimation of statistical models with structures}
Various structures in the data distribution space can be modeled flexibly and naturally using statistical models. As an example, we introduce the problem of item preference parameter estimation, in which parameters on a probability simplex representing the ordinal structure of a finite number of items are estimated from observations on item pairs, and show that the problem can be solved using the $em$ algorithm. The mode of a probability distribution is useful as a location parameter to characterize the distribution structure, but it is more difficult to handle than the expectation. In the latter half of this section, we introduce modal linear regression, a linear regression on the mode, and geometrically construct the $em$ algorithm for estimating the regression coefficients. The Boltzmann machine with hidden units is a popular neural network generative model. The parameter estimation problem of Boltzmann machines is also formulated as the minimization of the KL divergence between two statistical manifolds, and its geometric structure is studied.
\subsection{Preference parameter estimation in ranking models}
Given a set of rating data for a set of items $\{I_1,\dots,I_N\}$, determining the preference levels of the items is an important problem. Various probability models for preference have been proposed. As an example, in~\cite{r.a.bradley52:_rank_analy_of_incom_block_desig}, the Bradley--Terry (BT) model was proposed, in which each item $I_i$ has a positive-valued parameter $\theta_i$, and the probability that item $I_i$ is chosen over item $I_j$, an event denoted by $I_i \succ I_j$, is given by $\Pr(I_i \succ I_j) = \frac{\theta_i}{\theta_i+\theta_j}$. Namely, we consider a parameter set $Q = \{\theta_i \}_{i \in \Lambda}, \; \sum_{i \in \Lambda} \theta_i = 1, \; \theta_i >0$, where $\Lambda$ is the index set of the items. In this model, the greater the value of $\theta_i$, the more highly item $I_i$ is preferred. Assume that multiple users independently compare items $I_i$ and $I_j$, and let $n_{ij}$ and $n_{ji}$ be the numbers of observed events $I_i \succ I_{j}$ and $I_j \succ I_i$, respectively. The log-likelihood of the BT model is given by
\begin{equation}
L(Q) = \sum_{i \neq j} n_{ij} \log \frac{\theta_i}{\theta_i + \theta_j},
\end{equation}
and the estimate $\tilde{Q}$ is obtained as a solution of the following optimization problem:
\begin{equation}
\tilde{Q} = \mathop{\rm arg~max}\limits_{Q} L(Q) \quad \text{subject to} \quad \sum_{i \in \Lambda} \theta_i =1, \; \theta_i >0.
\end{equation}
There exist several parameter estimation algorithms~\cite{10.1214/aos/1028144844,NIPS2004_825f9cd5}. We can take another look at the BT estimation problem from the viewpoint of information geometry. Consider a space of categorical distributions
\begin{align}
\mathcal{M} = \left\{ Q=\{\theta_i\}_{i \in \Lambda} \; \middle| \; \sum_{i\in \Lambda} \theta_i = 1, \; \theta_i >0 \right\}.
\end{align}
Consider also a set of probabilities $P = \{ \pi_i\}_{i \in \Lambda}$. Items $I_i$ and $I_j$ are compared several times, and we observe the event $I_i \succ I_j$ $n_{ij}$ times and the event $I_j \succ I_i$ $n_{ji}$ times. This observation $(n_{ij},n_{ji})$ indicates a restriction on the probabilities in the BT model as $\pi_i : \pi_j = n_{ij} : n_{ji}$. For the observation $(n_{ij}, n_{ji})$, we define a subspace $\mathcal{D}_{ij}$ of $\mathcal{M}$ that satisfies the observed ratio as
\begin{align}
\mathcal{D}_{ij} = \{ P = \{ \pi_{i} \}_{i \in \Lambda} \in \mathcal{M} \mid \pi_i : \pi_j = n_{ij} : n_{ji} \}.
\end{align}
This submanifold $\mathcal{D}_{ij}$ gives a constraint on the simplex in accordance with the observation $(n_{ij},n_{ji})$, as shown in Fig.~\ref{fig:BT} (left panel), and is called the data manifold. The data manifold $\mathcal{D}_{31}$, for example, is composed of the counts $(n_{13}, n_{31})$ of the events $I_1 \succ I_3$ and $I_3 \succ I_1$. It divides the edge between items $I_1$ and $I_3$ in the ratio $n_{31}:n_{13}$.
\begin{figure}[ht] \centering \includegraphics[width=.9\linewidth]{BT} \caption{Left: Example of the $m$-flat data manifold embedded in the two-probability simplex. Middle: $e$-projection from $Q_t$ in $\mathcal{M}$ to data manifolds $\mathcal{D}_{ij}$. Right: $m$-projection from data manifolds to the model manifold.} \label{fig:BT} \end{figure}
If the set of data manifolds $\{\mathcal{D}_{ij}\}$ corresponding to all observations of the form $(n_{ij}, n_{ji}), \; i,j \in \Lambda$, is consistent, that is, if it has a unique intersection, then this intersection is adopted as the estimate $\tilde{Q} = \cap_{i,j} \mathcal{D}_{ij}$.
However, this is not the case in general, and it is reasonable to seek a model that is maximally consistent with the observed pairwise comparison data. Let $N_{\mathcal{D}}$ be the number of given data manifolds $\{ \mathcal{D}_{ij}\}$. A good estimate for the BT model is obtained as the nearest point in the simplex from these $N_{\mathcal{D}}$ submanifolds. A natural choice of the measure of closeness in the simplex is the KL divergence. The KL divergence between points $P=\{\pi_i\}$ and $Q=\{\theta_i\}$ is given as
\begin{equation}
D(P,Q) = \sum_{i \in \Lambda} \pi_i \log \frac{\pi_i}{\theta_i}.
\end{equation}
On the basis of this, we define the KL divergence between a submanifold $\mathcal{D}$ and a point $Q$ as
\begin{equation}
D(\mathcal{D},Q) = \min_{P \in \mathcal{D}} D(P,Q).
\end{equation}
Then, an objective function for the parameter estimation on the simplex $\mathcal{M}$ is proposed as the average of the KL divergences between $\mathcal{D}_{ij}$ and $Q$ as
\begin{align}
F(Q) =\frac{1}{N_{\mathcal{D}}} \sum_{i,j} D(\mathcal{D}_{ij},Q) = \frac{1}{N_{\mathcal{D}}} \sum_{i,j} \min_{P \in \mathcal{D}_{ij}} D(P,Q),
\end{align}
and the minimizer of this function $F(Q)$ is obtained by solving the following optimization problem:
\begin{align}
\hat{Q} = \mathop{\rm arg~min}\limits_{Q \in \mathcal{M}} \left\{ \sum_{i,j} \min_{P \in \mathcal{D}_{ij}} D(P,Q) \right\}.
\end{align}
This is a nested optimization problem for which direct optimization is difficult, and the $em$ algorithm is applicable to solve it. The data manifold $\mathcal{D}_{ij}$ is defined by the ratio of the observed pairwise comparisons $n_{ij}$ and $n_{ji}$; hence it is an $m$-flat manifold. In~\cite{DBLP:journals/neco/FujimotoHM11}, it is shown that there exists an $e$-flat subspace $\mathcal{S} (P)$ in $\mathcal{M}$ for an arbitrary point $P \in \mathcal{D}_{ij}$ and, conversely, an arbitrary point $Q \in \mathcal{M}$ has a unique point $P \in \mathcal{D}_{ij}$ such that $Q \in \mathcal{S} (P)$ holds. Based on these flat structures, it is guaranteed that the $e$-projection of $Q \in \mathcal{M}$ onto $\mathcal{D}_{ij}$ defined as
\begin{equation}
\hat{P}_{ij} = \mathop{\rm arg~min}\limits_{P \in \mathcal{D}_{ij}} D(P,Q)
\end{equation}
and the $m$-projection of a set of points $\{ P_{ij} \in \mathcal{D}_{ij}\}$ onto $\mathcal{M}$ defined as
\begin{equation}
\hat{Q} = \mathop{\rm arg~min}\limits_{Q \in \mathcal{M}} \sum_{i,j} D(P_{ij}, Q)
\end{equation}
are uniquely determined. In summary, the $em$ algorithm for estimating the preference parameter of the BT model is given as follows. Starting from an initial parameter $Q_0$, set $t=0$, and repeat the $e$- and the $m$-steps.
\begin{description}
\item[$e$-step:] For each $(i,j)$, find a point in $\mathcal{D}_{ij}$ by the $e$-projection
\begin{align}
\hat{P}_{ij,t} = \mathop{\rm arg~min}\limits_{P \in \mathcal{D}_{ij}} D(P,Q_t).
\end{align}
\item[$m$-step:] Find the point $Q_{t+1}$ that is closest to the projected points $\{\hat{P}_{ij,t}\}$ by the $m$-projection:
\begin{align}
Q_{t+1} = \mathop{\rm arg~min}\limits_{Q \in \mathcal{M}} \sum_{i,j} D(\hat{P}_{ij,t}, Q).
\end{align}
\end{description}
The $e$-projection is depicted in Fig.~\ref{fig:BT} (middle panel) and the $m$-projection is depicted in Fig.~\ref{fig:BT} (right panel).
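For concreteness, the following is a minimal sketch of this $em$ iteration. The closed form used in the $e$-step below (the compared pair receives total mass proportional to $(\theta_i/c)^c(\theta_j/(1-c))^{1-c}$ with $c = n_{ij}/(n_{ij}+n_{ji})$, split in the ratio $c:(1-c)$) is our own elementary Lagrange-multiplier derivation for this constraint, not a formula quoted from~\cite{DBLP:journals/neco/FujimotoHM11}; the $m$-step uses the fact that the minimizer of $\sum_{i,j} D(P_{ij},Q)$ over the simplex is the average of the $P_{ij}$.
\begin{verbatim}
import numpy as np

def em_bradley_terry(n, n_iter=200):
    # n[i, j]: number of times item i was preferred to item j.
    # Assumes n[i, j] > 0 and n[j, i] > 0 for every compared pair.
    N = n.shape[0]
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)
             if n[i, j] > 0 and n[j, i] > 0]
    theta = np.full(N, 1.0 / N)
    for _ in range(n_iter):
        projections = []
        for i, j in pairs:
            c = n[i, j] / (n[i, j] + n[j, i])
            # e-step: e-projection of theta onto D_ij (closed form
            # derived via Lagrange multipliers; see the text above).
            pi = theta.copy()
            m = (theta[i] / c) ** c * (theta[j] / (1 - c)) ** (1 - c)
            pi[i], pi[j] = c * m, (1 - c) * m
            projections.append(pi / pi.sum())
        # m-step: minimizer of sum_ij D(P_ij, Q) is the mean.
        theta = np.mean(projections, axis=0)
    return theta

wins = np.array([[0, 6, 8], [4, 0, 5], [2, 5, 0]])
print(em_bradley_terry(wins))
\end{verbatim}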
The natural extension of the BT model to multiple comparisons was given by Plackett~\cite{Plackett1975}, and we refer to the model as the Plackett--Luce model,
\begin{equation}
\Pr(I_{a(1)} \succ I_{a(2)} \succ \cdots \succ I_{a(N)}) = \prod_{i=1}^{N-1} \frac{ \theta_{a(i)} }{ \sum_{j=i}^{N} \theta_{a(j)} },
\end{equation}
where $a(j)$ denotes the index of the item that occupies the $j$-th position in the ranking; its geometric properties have also been investigated. It is further generalized~\cite{DBLP:conf/pakdd/HinoFM09,DBLP:journals/neco/HinoFM10} to cope with grouped ranking observations, in which each of $U$ judges rates $N$ items on a scale of $1$ to $M$, $M\leq N$, assuming there is a latent ordering in each set of items with the same rating, but we only observe $M$ groups of items that are divisions of the $N$ items. The grouping by a user $u$ is denoted as $D_u = \{G_1^u,\dots,G_M^u\}$, where $G^u_m = \{ i \in \{1,\dots,N\} \mid I_i \in m\mbox{-th group}\}$. The problem of finding the optimal parameter $\theta$ most consistent with the observations $\{D_{u}\}_{u=1}^{U}$ is, also in this case, solved using the $em$ algorithm. Explaining the preference levels of all users by a single set of preference parameters is not always reasonable. In~\cite{DBLP:journals/neco/HinoFM10}, a mixture of different preference parameters and user clustering based on the mixture model were proposed. Applications of the mixture model to the visualization of the item--user relationship and to item recommendation were also proposed~\cite{DBLP:conf/ic3k/FujimotoHM09,DBLP:journals/neco/HinoFM10}. Moreover, in~\cite{DBLP:journals/neco/FujimotoHM11}, a weight for each observation was introduced to reflect its reliability, and the sensitivity to outlying observations was also analyzed.
\subsection{Modal linear regression model}
Linear regression is used to model the conditional mean of a response variable $y$ given the predictor variable $x$. The well-known least-squares estimator of the linear regression coefficients is highly sensitive to outliers. To alleviate this problem, many estimators have been developed. One of the reasons for this sensitivity stems from estimating the mean. The mode is a reasonable alternative for characterizing the location of a distribution and is used, for example, to robustly identify low-dimensional subspaces~\cite{DBLP:journals/neco/SandoH20}. Modal linear regression~(MLR;~\cite{Lee1989}) is used to model the conditional mode of $y$ given $x$ by using a linear predictor function of $x$. MLR relaxes the distribution assumptions for the robust M-estimators of linear regression~\cite{Hampel1986,Huber2011} and is robust against outliers compared with the least-squares estimation of linear regression coefficients. It is also robust against violations of standard assumptions on the usual mean regression, such as heavy-tailed noise and skewed conditional and noise distributions. In information geometry, a model manifold is often constructed using a parametric distribution. Estimates are regarded as the projection of an empirical distribution onto the model manifold. In the case of linear regression, we construct a model manifold under the assumption that the error variable has a normal distribution. Because of the lack of a parametric distribution, constructing a model manifold that corresponds to the MLR model is difficult with conventional approaches. Some studies have considered nonparametric models for information geometry.
Pistone and Sempi~\cite{Pistone1995} constructed a well-defined Banach manifold of probability measures. Grasselli~\cite{Grasselli2010} addressed the Fisher information and $\alpha$-connections for the Banach manifold. Zhang~\cite{Zhang2013} discussed the relationship between divergence functions, the Fisher information, $\alpha$-connections, and fundamental issues in information geometry. In contrast to these nonparametric approaches to information geometry, in~\cite{DBLP:conf/iconip/SandoAMH18,Sando2019}, the geometric operation that leads to the mode, i.e., the operation that makes the MLR estimator robust, was elucidated, and an information geometric perspective on MLR was obtained. Let $x \in \mathbb{R}^p$ and $y \in \mathbb{R}$ be a set of predictor variables and a response variable, respectively. The original least squares for linear regression estimates the conditional mean of $y$ given $x$, while MLR estimates the conditional mode of $y$ given $x$. We briefly explain the EM algorithm of MLR introduced in~\cite{Yao2012}.
\subsubsection{Formulation}\label{sec:MLR_Formulation}
Suppose that $\left\{x_i, y_i \right\}_{i=1}^{N}$ are i.i.d. observations, where the $i$-th predictor variable is denoted by $x_i\in \mathbb{R}^p$ and the corresponding response is denoted by $y_i \in \mathbb{R}$. With MLR, the conditional mode of $y$ given $x$ is modeled by a linear function of $x$ as
\begin{align*}
\text{Mode} \left[ y;x \right] &= x^{\top}\beta,
\end{align*}
where $\text{Mode} \left[ y;x \right] = \mathop{\rm arg~max}\limits_{y} f(y|x)$ for the conditional density function $f(y|x)$. Namely, $y$ and $x$ are related as
\begin{align}
\label{eq:MLR_Formulation_model}
y = x^{\top}\beta + \epsilon, \quad \text{where} \quad \text{Mode} \left[ \epsilon;x \right] = 0.
\end{align}
To estimate $\beta$, Lee~\cite{Lee1989} introduced a loss function of the form
\begin{equation}
l(\beta ; y, x) = - \phi_{h} \left( y - x^{\top}\beta \right),
\end{equation}
where $\phi_h(x) = \frac{1}{h}\phi \left(\frac{x}{h} \right)$, $\phi(\cdot)$ is a kernel function, and $h$ is a bandwidth parameter. Minimizing the empirical loss leads to the estimate $\hat{\beta}$ of the linear coefficient:
\begin{align}
\hat{\beta}= \mathop{\rm arg~max}\limits_{\beta} \frac{1}{N} \sum_{i=1}^{N} \phi_h(y_i - x_i^{\top}\beta).
\label{eq:MLR_Formulation_problem}
\end{align}
In this paper, $\phi(\cdot)$ denotes a standard normal density function. The consistency and asymptotic normality of the estimate $\hat{\beta}$ obtained by Eq.~\eqref{eq:MLR_Formulation_problem} have been established under certain regularity conditions on the samples, kernel function, parameter space, and vanishing rate of the bandwidth parameter~\cite{Gordon2012}.
\subsubsection{EM algorithm for MLR}\label{sec:MLR_EMalgorithm}
Here, we introduce the EM algorithm for the MLR parameter estimation proposed in~\cite{Yao2012}. The algorithm consists of two steps starting from an initial estimate $\beta^{(1)}$.
\begin{description}[style=nextline]
\item[E-Step:] Consider the surrogate function
\begin{equation}
\gamma (\beta;\beta^{(k)}) = \sum_{i=1}^{N} \pi_i^{(k)} \log \left[ \frac{ \frac{1}{N}\phi_h \left( y_i-x_i^{\top}\beta \right) }{\pi_i^{(k)}} \right],
\label{eq:MLR_E-Step_surrogate}
\end{equation}
where
\begin{align}
\pi_i^{(k)} = \frac{\phi_h(y_i - x_i^{\top}\beta^{(k)}) }{\sum_{j=1}^{N} \phi_h(y_j - x_j^{\top}\beta^{(k)}) },\quad i = 1 \dots N.
\label{eq:MLR_E-Step}
\end{align}
This function satisfies
\begin{equation}
\gamma (\beta^{(k)};\beta^{(k)}) =\log \left[ \frac{1}{N}\sum_{i=1}^{N}\phi_h\left( y_i - x_i^{\top}\beta^{(k)} \right) \right]
\label{eq:MLR_E-Step_cond2}
\end{equation}
and
\begin{align}
\log \left[ \frac{1}{N}\sum_{i=1}^{N} \phi_h \left( y_i-x_i^{\top}\beta \right) \right] &= \log \left[ \sum_{i=1}^{N} \pi_i^{(k)} \frac{\frac{1}{N}\phi_h \left( y_i-x_i^{\top}\beta \right) }{\pi_i^{(k)}} \right] ,\quad \text{by Jensen's inequality} \notag \\
&\geq \sum_{i=1}^{N} \pi_i^{(k)} \log \left[ \frac{ \frac{1}{N}\phi_h \left( y_i-x_i^{\top}\beta \right) }{\pi_i^{(k)}} \right] = \gamma (\beta;\beta^{(k)}).
\label{eq:MLR_E-Step_cond1}
\end{align}
\item[M-Step:] In this step, the parameter $\beta$ is updated to increase the value of $\frac{1}{N}\sum_{i=1}^{N}\phi_h \left(y_i-x_i^{\top}\beta\right)$. The updated parameter $\beta^{(k+1)}$ is given as
\begin{align}
\beta^{(k+1)} &= \mathop{\rm arg~max}\limits_{\beta} \gamma (\beta;\beta^{(k)}) \label{eq:MLR_M-Step_optimize_surrogate} .
\end{align}
The following inequality holds:
\begin{align*}
\log \left[ \frac{1}{N}\sum_{i=1}^{N}\phi_h \left( y_i-x_i^{\top}\beta^{(k+1)} \right) \right] &\geq \gamma (\beta^{(k+1)};\beta^{(k)}) \\
&\geq \gamma (\beta^{(k)};\beta^{(k)}) = \log \left[ \frac{1}{N}\sum_{i=1}^{N}\phi_h \left( y_i-x_i^{\top}\beta^{(k)} \right) \right] .
\end{align*}
Equation~\eqref{eq:MLR_M-Step_optimize_surrogate} is equivalent to
\begin{align}
\beta^{(k+1)} &= \mathop{\rm arg~max}\limits_{\beta} \sum_{i=1}^{N} \pi_i^{(k)} \log \phi_h(y_i - x_i^{\top}\beta).
\label{eq:MLR_M-Step}
\end{align}
When $\phi(\cdot)$ is a standard normal density function,
\begin{align*}
\beta^{(k+1)} &= \left( X^{\top} W_{k} X \right)^{-1} X^{\top} W_{k} y, \quad & W_{k} = \text{diag} \begin{pmatrix} \pi_1^{(k)} & \cdots & \pi_N^{(k)} \end{pmatrix},
\end{align*}
where $X$ is the design matrix whose $i$-th row is $x_i^{\top}$ and $y = (y_1,\dots,y_N)^{\top}$.
\end{description}
The properties of the estimate $\hat{\beta}$ were discussed in~\cite{Yao2012}.
\subsubsection{Information geometry of MLR}\label{sec:informationgeometryofMLR}
Sando et al.~\cite{Sando2019} analyzed MLR from the viewpoint of information geometry. They elucidated the source of the difficulty by constructing a model manifold and a data manifold for the MLR model and proposed a framework for geometrically formulating the MLR model.
\subsubsection{Problem of constructing manifolds}\label{sc:MLRandIG_problem}
To elucidate the cause of the difficulty of constructing manifolds for the MLR model, consider the parameter estimation of a Gaussian mixture model as a specific example of statistical inference in information geometry. Suppose that observations $x_i\in\mathbb{R}^{p}, i=1, \cdots, N$ are i.i.d. subject to a Gaussian mixture distribution expressed as
\begin{align*}
f(x; \mu,\Sigma) &= \sum_{i=1}^{K} \pi_i g(x;\mu_i, \Sigma_i) ,\\
&\text{where} \quad \left\{ \begin{aligned} &\pi_i \geq 0, \; \sum_{i=1}^{K} \pi_i = 1 ,\\ &g(x;\mu_i, \Sigma_i) = \frac{1}{(2\pi)^{p/2}\sqrt{\mathrm{det}(\Sigma_i) }} \exp \left\{ -\frac{1}{2} (x-\mu_i)^{\top} \Sigma_i^{-1} (x-\mu_i) \right\} . \end{aligned} \right.
\end{align*}
Then, the model manifold consists of Gaussian mixture density functions whose parameters are the mixing proportions, means, and covariance matrices. The data manifold is constructed based on the empirical density function $\frac{1}{N}\sum_{i=1}^{N}\delta(x-x_i)$. In the parameter estimation of the Gaussian mixture model, the model manifold is thus constructed on the basis of a parametric distribution.
In contrast, even though MLR has the similarity that densities are approximated by a mixture of kernel functions, there is no assumption of a parametric distribution in MLR. This makes it nontrivial to construct a model manifold and a data manifold. To construct the model manifold for the MLR model, consider (i) the assumption that $\text{Mode}\left[ \epsilon;x \right] = 0$ and (ii) the form of the objective function of $\beta$ for the MLR model: $\frac{1}{N} \sum_{i=1}^{N} \phi_h \left(y_i - x_i^{\top}\beta \right)$. From this assumption and fact, the optimization problem expressed in Eq.~\eqref{eq:MLR_Formulation_problem} is regarded as maximizing the kernel density estimate (KDE) of the probability density function of $\epsilon$ at $\epsilon = 0$. On the basis of the given observations, the following model for MLR is constructed:
\begin{align}
f(\epsilon;\beta) &= \frac{1}{N} \sum_{i=1}^{N} \phi_h \left( \epsilon - \epsilon_i(\beta) \right) \label{eq:MLRandIG_model} ,
\end{align}
where $\epsilon_i(\beta) = y_i - x_i^{\top}\beta,\ i=1\dots N$, and the variable $\epsilon$ denotes an error variable. In~\cite{AMARI19951379}, the latent variable $Z\in \left\{1\dots N \right\}$, which specifies the mixture component from which an observation is obtained, was introduced. The joint density function of $\epsilon$ and $Z$ is
\begin{align}
\label{eq:jointD}
g(\epsilon,z;\beta) &= \prod_{i=1}^{N} \left[ \frac{1}{N} \phi_h \left( \epsilon - \epsilon_i(\beta) \right) \right]^{\delta_i(z)},
\end{align}
where $\delta_i(z) = 1$ if $z=i$ and $\delta_i(z)=0$ if $z\neq i$. The model manifold $\mathcal{M}$ is defined as
\begin{align}
\mathcal{M} &= \left\{ g(\epsilon,z;\beta) \mid \beta \in \mathbb{R}^{p} \right\},
\label{eq:MLRandIG_modelmanifold}
\end{align}
which is a curved exponential family. We next consider constructing a data manifold for the MLR model. The empirical density function is usually constructed on the basis of observations; here, reflecting the assumption $\text{Mode}[\epsilon;x]=0$, it is constructed as
\begin{align}
p(\epsilon) &= \delta(\epsilon - 0) = \delta(\epsilon) \label{eq:MLRandIG_empiricaldensityfunction} .
\end{align}
By introducing the latent variable $Z\in \left\{1\dots N \right\}$ to Eq.~\eqref{eq:MLRandIG_empiricaldensityfunction}, $p(\epsilon)$ is extended to the empirical joint density function of $\epsilon$ and $Z$:
\begin{align*}
h(\epsilon, z) &= p(\epsilon) q(z \mid \epsilon) .
\end{align*}
By introducing the parameters $\left\{q_i \right\}_{i=1}^{N}$, the conditional density function $q(z\mid \epsilon)$ is modeled as
\begin{align*}
q(z \mid \epsilon) = \sum_{i=1}^{N} q_i \delta_i(z) ,\quad \text{where} \quad q_i \geq 0, \; \sum_{i=1}^{N} q_i = 1 .
\end{align*}
Then, the empirical joint density function $h(\epsilon,z;q_1\dots q_N)$ is expressed as
\begin{align}
h(\epsilon, z; q_1 \dots q_N) &= \sum_{i=1}^{N} q_i \delta(\epsilon) \delta_i(z) \label{eq:MLRandIG_empiricaljointdensityfunction} ,\quad \text{where} \quad q_i \geq 0, \;\sum_{i=1}^{N} q_i = 1 .
\end{align}
The data manifold $\mathcal{D}$ is defined as
\begin{align*}
\mathcal{D} = \left\{ h(\epsilon,z;q_1\dots q_N) \mid q_i \geq 0, \quad \sum_{i=1}^{N} q_i = 1 \right\} .
\end{align*}
$\mathcal{D}$ is shown to be a mixture family.
Consider the $e$-projection of a model with the parameters $\beta^{(k)}$ onto the data manifold:
\begin{align}
&\min_{h \in \mathcal{D}} D(h,g(\cdot,\cdot;\beta^{(k)})) \notag \\
&\rightarrow \quad \left| \begin{aligned} \min_{q_1 \dots q_N} &\quad D \left(h(\cdot,\cdot;q_1 \dots q_N),g(\cdot,\cdot;\beta^{(k)}) \right) ,\\ \text{s.t.} &\quad q_i \geq 0, \; \sum_{i=1}^{N} q_i = 1 . \end{aligned} \right.
\label{eq:MLRandIG_optimizeQ}
\end{align}
An optimal solution for Eq.~\eqref{eq:MLRandIG_optimizeQ} is
\begin{align}
q_i^{(k)} = \frac{\phi_h \left( y_i - x_i^{\top}\beta^{(k)} \right) }{\sum_{j=1}^{N} \phi_h \left( y_j - x_j^{\top}\beta^{(k)} \right) }, \quad i = 1 \dots N,
\label{eq:MLRandIG_optimalQ}
\end{align}
which is equivalent to the E-step in Eq.~\eqref{eq:MLR_E-Step}. Then, consider the $m$-projection of the empirical joint density function with the parameters $q_i=q_i^{(k)},\ i=1\dots N$ onto the model manifold:
\begin{align*}
\min_{g \in \mathcal{M}} D(h(\cdot,\cdot;q_1=q_1^{(k)}\dots q_N=q_N^{(k)}),g) ,
\end{align*}
which is solved by
\begin{align}
\max_{\beta} \sum_{i=1}^{N} q_i^{(k)} \log \phi_h \left( y_i - x_i^{\top}\beta \right),
\label{eq:MLRandIG_equiv_optimizebeta}
\end{align}
and is equivalent to the M-step~\eqref{eq:MLR_M-Step}.
\begin{figure} \centering \includegraphics[width=.8\linewidth]{mlr_em} \caption{Conceptual diagram of the $em$ algorithm corresponding to the MLR model} \label{fig:MLRandIG_diagramoftheemalgorithmofmlr} \end{figure}
Figure~\ref{fig:MLRandIG_diagramoftheemalgorithmofmlr} shows the update process of the $em$ algorithm corresponding to the MLR model parameter estimation.
\subsection{Boltzmann machine learning}
The Boltzmann machine (BM) is a fully connected neural network model with $n$ neurons; it is equivalent to a second-order log-linear model for a binary random vector of length $n$ and is trained to approximate the distribution of the input data. The family of Boltzmann machines with $n$ neurons is written as
\begin{align}
\mathcal{B} = \left\{ B \in \mathcal{P}^{n} : B(x^n) = b \exp \left\{ \sum_{1 \leq i < j \leq n} w_{ij} x_i x_j \right\} \right\},
\end{align}
where $b$ is the normalization constant and $\mathcal{P}^n$ is the collection of probability distributions on the set of machine states $\{0,1\}^{n}$. A BM is usually composed of visible units $1,\dots, v$ and hidden units, which are defined as the rest of the units in $\{1,\dots,n\}$. The behavior of the $v$ visible units is described by the marginal distribution $B_v$ determined from $B$ as
\begin{align}
B_v(x^v) = \sum_{y^n \in \{0,1\}^n} B(y^n) I_{x^v}(y^n),
\end{align}
where $I_{x^v}([y_1,\dots,y_n])=1$ if $[y_1,\dots,y_v]=x^v$ and $0$ otherwise. The objective of training the BM is to find a machine for which $B_v$ is close to a given target distribution $\hat{P}$ defined on the states of the visible units $[x_1,\dots,x_v] \in \{0,1\}^v$. Namely, by defining the family $\mathcal{D}$ of desirable distributions on the set of machine states $\{0,1\}^n$ as
\begin{align}
\mathcal{D} = \left\{ P \in \mathcal{P}^{n} : \sum_{y^n \in \{0,1\}^{n}} P(y^n) I_{x^v}(y^n) = \hat{P}(x^v),\; \forall x^v \in \{0,1\}^{v} \right\},
\end{align}
whose elements' marginal distributions on the visible units agree with $\hat{P}$, the problem of BM learning is formulated as
\begin{align}
\inf_{B \in \mathcal{B}} \inf_{P \in \mathcal{D}} D(P , B).
\end{align}
Since $\mathcal{D}$ is the subspace of distributions consistent with the given observed (visible) variables, it plays a role similar to that of the data manifold in the $em$ algorithm.
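To make the definitions of $B$ and $B_v$ concrete, the following brute-force sketch (illustrative only, and feasible here only because the $2^n$ states are enumerated exhaustively for small $n$) computes the BM distribution from a weight matrix and the marginal over the visible units.
\begin{verbatim}
import numpy as np
from itertools import product

def bm_distribution(W, n):
    # B(x) proportional to exp(sum_{i<j} w_ij x_i x_j) over {0,1}^n.
    states = np.array(list(product([0, 1], repeat=n)))
    energy = np.einsum('si,ij,sj->s', states, np.triu(W, k=1), states)
    p = np.exp(energy)
    return states, p / p.sum()   # normalization realizes b

def visible_marginal(states, p, v):
    # B_v: sum B over the hidden units (the last n - v coordinates).
    marg = {}
    for s, pr in zip(states, p):
        key = tuple(s[:v])
        marg[key] = marg.get(key, 0.0) + pr
    return marg

rng = np.random.default_rng(0)
n, v = 4, 2
W = rng.normal(size=(n, n))
states, p = bm_distribution(W, n)
print(visible_marginal(states, p, v))
\end{verbatim}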
Detailed convergence analysis of this double minimization problem is given in~\cite{143375}. It is shown that the manifold of BMs without hidden units is an $e$-flat manifold, and an invariant divergence measure is introduced on it. Given an information source from an environment, the problem is to find the BM whose stationary distribution best approximates the given probability distribution. It is shown that the best approximation is given by the projection along the $m$-geodesic and is unique in the case of no hidden units. Furthermore, a generalized Pythagorean theorem makes it possible to decompose the approximation error in an invariant manner. The BM manifold with hidden units is not $e$-flat but has some interesting properties~\cite{125867}. The possibility of further speed-up by simultaneously solving both the $e$- and $m$-steps using information-geometrically derived gradient flows is suggested in~\cite{FUJIWARA1995317}. Also, an information geometric structure of Helmholtz machine learning, known as the Wake--Sleep algorithm, was elucidated in~\cite{NIPS1998_0771fc6f}, where, in contrast to the EM and $em$ algorithms, both the Wake phase and the Sleep phase correspond to the $m$-projection.
\section{Data analysis for distributional data}
Here, we regard a distribution as a datum. In other words, a point $\theta$ in the parameter space $\mathcal{S}$ is given as a datum. Such a generalization of data analysis to Riemannian manifolds has been attracting much attention~\cite{DBLP:journals/tmi/FletcherLPJ04}. One example of such a situation is found in the field of sensor fusion, where numerous sensors are distributed and plenty of data are obtained by each sensor, but a high communication capacity is required to collect all the data from all the sensors. One way to reduce the communication cost is to collect only a distribution parameter calculated by each sensor. Another example is transfer learning in machine learning, where there are many different tasks, each of which includes a set of data. By using such source tasks, it may be possible to improve the accuracy for a new target task that only has a small number of samples. In this section, we explain EM-like iterative algorithms used in such situations.
\subsection{Statistical inference for distributional data}
Let us consider a transfer learning scenario. Suppose we have distributions $p_1,\ldots,p_N$ from source tasks and $p_{\mathrm{new}}$ for a target task, where $p_{\mathrm{new}}$ is only based on a small number of samples. One simple way of transfer learning is to find a projection from $p_{\mathrm{new}}$ onto a flat subspace spanned by $p_1,\ldots,p_N$. If we take the $e$-flat subspace, the subspace is
\begin{equation}
\mathcal{M}_e = \left\{\theta \;\middle|\; \theta = \sum_i w_i \theta_i, \sum_i w_i=1, \theta\in \mathcal{S}\right\},
\end{equation}
where $\theta_i$ is an $e$-coordinate of $p_i$, and we can define the $m$-flat subspace similarly. This transfer learning is merely a simple projection, and it can be solved by calculating the projection point (for uniqueness, it is natural to take the $m$-projection for an $e$-flat subspace and the $e$-projection for an $m$-flat subspace), which can be obtained in an explicit form or by a gradient descent method. However, the problem becomes difficult when $p_i$ is given by an empirical distribution $p_i(x) = \frac{1}{|\mathcal{X}_i|}\sum_j \delta(x-x_{ij})$, where $\mathcal{X}_i=\{x_{ij}\}$ is a sample set for $p_i$. In such a nonparametric case, the distribution does not have an explicit form of the $e$-coordinate.
Takano et al.~\cite{DBLP:journals/neco/TakanoHAM16} solved this difficulty by avoiding the explicit expression of the $e$-coordinate and introducing a new geometrical algorithm based on a generalized Pythagorean theorem, as described below. Instead of an explicit form of the $e$-coordinate, we can use a characterization of the $e$-mixture, a member of $\mathcal{M}_e$, shown by Murata and Fujimoto~\cite{murata2009bregman}:
\begin{equation}
\label{eq:emixture}
p_w = \mathop{\rm arg~min}\limits_{q\in\mathcal{S}} \sum_i w_i D(q, p_i),
\end{equation}
where $p_w$ is an $e$-mixture defined as a distribution whose $e$-coordinate is $\sum_i w_i \theta_i$ in a parametric case. The right-hand side can be used as an implicit expression of the $e$-mixture, which only depends on the weights $w_i$ and the divergences between $q$ and $p_i$. To find the projection of $p_{\mathrm{new}}$ onto $\mathcal{M}_e$, we must determine $q$ and $w_i$ in \eqref{eq:emixture}. Takano et al.~proposed an algorithm for optimizing $q$ and $w_i$ alternately, as in the EM algorithm. Instead of the $e$-coordinate, $q$ is expressed in a nonparametric form, $q(x) = \sum_j v_j \delta(x-y_j)$, where the set $\mathcal{Y} = \{y_j\}$ is typically $\mathcal{Y}\subset\bigcup_i \mathcal{X}_i$. If $w_i$ is fixed, the right-hand side of \eqref{eq:emixture} is minimized with respect to $v_j$. This can be performed if $D(q, p_i)$ is expressed as a function of $v_j$ and $x_{ij}$. Such nonparametric expressions of the divergence have been proposed in the research field of point processes; see, for example, the methods proposed in~\cite{hino2013information,hino2015non}. By applying gradient descent to this expression, $v_j$ is optimized for fixed $w_i$. On the other hand, the projection of $p_{\mathrm{new}}$ onto $\mathcal{M}_e$ must satisfy the orthogonality
\begin{equation}
D(p_{\mathrm{new}}, p_i) = D(p_{\mathrm{new}}, q) + D(q, p_i)
\end{equation}
for all $i$. However, the current values of $w_i$ and $v_j$ do not necessarily satisfy this formula. Takano et al.\ proposed a simple update rule for $w_i$ as follows: if the left-hand side is larger than the right-hand side, this means that the point $q$ is too close to $p_i$; hence $w_i$ should be decreased from the current value. On the other hand, if the left-hand side is smaller, $w_i$ should be increased. The convergence of this algorithm has been investigated by Akaho et al.~\cite{DBLP:conf/iconip/AkahoHM19}, who showed that the algorithm is guaranteed to converge under a mild condition.
\subsection{Dimension reduction for distributional data}
The method above becomes difficult if the number of source tasks increases, because the dimension of the subspace becomes large, which may cause the curse of dimensionality. To reduce the dimension, the most well-known method is principal component analysis (PCA). PCA finds the subspace that minimizes the mean square distance from the sample points, which implicitly assumes that the sample points lie in a Euclidean space. Although PCA can be applied even to sample points that are distribution parameters (Fig.~\ref{fig:ePCA}), there are some serious issues. One is that the projection from a point to the subspace is not necessarily included in the domain; for instance, it can yield a negative variance for a Gaussian distribution, as in the example in Fig.~\ref{fig:ePCA}. Another is that the Euclidean distance between distribution parameters would not be appropriate from a statistical point of view.
From such considerations, Akaho~\cite{akaho2004pca} and Collins et al.~\cite{collins2001generalization} proposed an extension of PCA to the case of probability distributions, in particular, exponential family distributions. As described in the previous sections, exponential family distributions have two dual coordinates, the $e$-coordinate $\theta$ and the $m$-coordinate $\eta$, and for each coordinate, a flat subspace is given by a linear equation in that coordinate. By this duality, there are two kinds of extension of PCA: the $e$-PCA, which finds the flat subspace for the $e$-coordinate, and the $m$-PCA for the $m$-coordinate. We describe the $e$-PCA here; the $m$-PCA is defined in a similar way, merely by exchanging $e$- and $m$- below. The goal of the $e$-PCA is to determine an affine subspace defined using $u=(u_1,\ldots,u_K)$ for some fixed $K$,
\begin{equation}
\mathcal{M}_e(u) = \left\{\hat{\theta}\;\middle|\; \hat{\theta}=\sum_{k=1}^{K} w_k u_k \in\mathcal{S}, \sum_{k=1}^{K} w_k=1\right\}
\end{equation}
to fit given samples $\theta_1,\ldots,\theta_N$. To guarantee a unique projection from a sample point to the subspace, it is natural to take the dual projection, i.e., the $m$-projection for the $e$-PCA and the $e$-projection for the $m$-PCA. The projection can be formulated in terms of the KL divergence; therefore, the objective function for the $e$-PCA is
\begin{equation}
L(w, u) = \sum_{i=1}^N D(\theta_i, \hat{\theta}_i),\quad \hat{\theta}_i = \sum_{k=1}^{K} w_{ik} u_k,
\end{equation}
where $\hat{\theta}_i$ is the projection point of $\theta_i$. We need to optimize this function with respect to $w_{ik}$ and $u_k$. If we fix $u_k$, the weights $w_{ik}$ are the coefficients of the projected point of $\theta_i$, which is uniquely determined by the duality; the objective is typically minimized by a gradient descent method. The first derivatives of $L(w, u)$ are given in a simple form,
\begin{equation}
\frac{\partial L(w, u)}{\partial w_{ik}} = u_k^{\top} (\hat{\eta}_i - \eta_i),\quad \frac{\partial L(w, u)}{\partial u_k} = \sum_{i=1}^{N} w_{ik} (\hat{\eta}_i - \eta_i),
\end{equation}
where $\eta_i$ and $\hat{\eta}_i$ are the $m$-coordinates of $\theta_i$ and $\hat{\theta}_i$, respectively. From the above equations, we can optimize $w_{ik}$ and $u_k$ alternately. Geometrically, it is easy to see that optimizing $w_{ik}$ for a fixed $u_k$ is the $m$-projection of $\theta_i$ onto the fixed subspace $\mathcal{M}_e(u)$. It also holds that optimizing $u$ is the $m$-projection of $(\theta_1,\ldots,\theta_N)$ onto a fixed subspace $\mathcal{N}_e(w)\subset \mathcal{S}^N$, where
\begin{equation}
\mathcal{N}_e(w)=\left\{(\hat{\theta}_1,\ldots,\hat{\theta}_N)\;\middle|\; \hat{\theta}_i=\sum_{k=1}^{K} w_{ik} u_k, \hat{\theta}_i\in\mathcal{S}\right\}.
\end{equation}
Therefore, each of the alternating optimizations of $w$ and $u$ has a unique optimum, but this does not mean that the algorithm finds the global optimum, just as the EM algorithm does not.
\begin{figure}[ht]
\centering
\includegraphics[width=.6\linewidth]{ePCA}
\caption{Schematic figure of PCA for the parameters of a distribution.}
\label{fig:ePCA}
\end{figure}
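A minimal numerical sketch of this alternating scheme (our illustration under simplifying assumptions, not the original implementation) reuses the one-dimensional Gaussian helpers \texttt{nat} and \texttt{mean\_params} from the earlier sketch; all data are synthetic, and the step size is assumed small enough to keep every $\hat{\theta}_i$ inside the domain $\mathcal{S}$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

# synthetic samples: natural parameters of N(mu_i, var_i)
thetas = np.stack([nat(m, v) for m, v in zip(rng.normal(0.0, 1.0, 20),
                                             rng.uniform(0.5, 2.0, 20))])
etas = np.stack([mean_params(t) for t in thetas])

K, N, lr = 2, len(thetas), 0.01
U = thetas[:K].copy()              # basis u_k, initialized from the data
W = np.full((N, K), 1.0 / K)       # affine weights, each row sums to 1

for _ in range(5000):
    theta_hat = W @ U              # projected points in e-coordinates
    R = np.stack([mean_params(t) for t in theta_hat]) - etas
    gW = R @ U.T                   # dL/dw_ik = u_k . (eta_hat_i - eta_i)
    gW -= gW.mean(axis=1, keepdims=True)   # keep each row on sum_k w_ik = 1
    W -= lr * gW
    U -= lr * (W.T @ R)            # dL/du_k = sum_i w_ik (eta_hat_i - eta_i)
\end{verbatim}
Each half-step decreases $L(w,u)$, in line with the two $m$-projections described above.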
\subsection{Related topics on the dimension reduction for distributional data}
The simplest application of the $e$-PCA or $m$-PCA would be the case of $K=1$, which gives a generalized center of the samples. The $e$-center, that is, the $e$-PCA solution in this $K=1$ case, is explicitly given by
\begin{equation}
u_1=\theta\left(\frac1N\sum_{i=1}^{N}\eta_i\right),
\end{equation}
where $\theta(\eta)$ is the coordinate transformation from $\eta$ to the $e$-coordinate, and $\eta_i$ is the $m$-coordinate of $\theta_i$. The $m$-center has a similar form obtained by exchanging $\theta$ and $\eta$. By using the $e$-center or $m$-center, we can generalize $k$-means clustering to distributional data in a natural way. Watanabe et al.~\cite{watanabe2009variational} proposed a method for solving clustering and dimension reduction simultaneously, in which a Bayesian formulation is also introduced.
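As a concrete illustration, the two centers of the synthetic Gaussians above can be computed directly (a sketch under the same assumptions and with the same helpers as before):
\begin{verbatim}
eta_bar = etas.mean(axis=0)                  # average of m-coordinates
mu, var = eta_bar[0], eta_bar[1] - eta_bar[0] ** 2
e_center = nat(mu, var)                      # theta(eta_bar): the e-center
m_center = mean_params(thetas.mean(axis=0))  # dual form: eta(theta_bar)
\end{verbatim}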
The nonnegative matrix factorization (NMF)~\cite{lee2000algorithms,cichocki2009nonnegative} is also a widely used dimension reduction method. Given a high-dimensional data matrix $X$ with all nonnegative elements, we consider an approximation of $X$ by $WH$, where $W$ and $H$ are lower-rank matrices with all nonnegative elements. Supposing $X=WH$ holds, let us consider the matrix $\hat{X}$, which is a normalization of $X$ such that the elements of each row sum to 1. We can show that there exist $\hat{W}$ and $\hat{H}$ such that $\hat{X}=\hat{W}\hat{H}$, where $\hat{W}$ and $\hat{H}$ are also normalized. Since nonnegative values summing to 1 can be regarded as a probability vector, the rows of $\hat{X}$, $\hat{W}$, and $\hat{H}$ can be regarded as probability vectors. On the basis of this fact, Akaho et al.~\cite{akaho2018geometrical} provided a geometrical view of NMF. A probability vector is an $m$-coordinate of a discrete (multinomial) distribution, and NMF can be considered as a dimension reduction of distributional data. This is similar to the $m$-PCA of the probability vectors, the difference being that the decomposed matrices must be nonnegative, which is a stronger assumption than that of the $m$-PCA. Without loss of generality, all values $u_k$ can be taken positive in the $m$-PCA, and NMF is the restriction of the $m$-PCA in which $w_{ik}\ge0$ is also required. In NMF, maximum likelihood estimation is often applied, which is equivalent to the $m$-projection. Although it would be more natural to take the $e$-projection for an $m$-flat subspace, the $m$-projection is also unique in this special case.

Another example is dimension reduction for Gaussian processes. A Gaussian process is a stochastic process on some set $\mathcal{X}$, defined by a mean function $m(x)$ and a covariance function $v(x,x')$, where $x,x'\in\mathcal{X}$. For any set of values $x_1,\ldots,x_M\in\mathcal{X}$, the function values $f_1,\ldots,f_M$ are generated from a multivariate Gaussian distribution with mean $(m(x_1),\ldots,m(x_M))$ and covariance matrix $V=(v(x_i,x_j))_{i,j=1,\ldots,M}$. Since the number of points $M$ is arbitrary, a Gaussian process is essentially an infinite-dimensional distribution. Supposing there is a set of $N$ Gaussian processes $G\!P_1,\ldots,G\!P_N$, we can consider applying the $e$-PCA or $m$-PCA to those Gaussian processes. However, this is not trivial owing to their infinite-dimensional nature. Ishibashi and Akaho~\cite{ishibashi2022principal} proved that if the Gaussian processes are different posteriors of the same prior distribution, the infinite-dimensional $e$-PCA ($m$-PCA) can be reduced to a finite-dimensional $e$-PCA ($m$-PCA).

So far, we have described the application of the $e$-PCA and $m$-PCA only to distributions belonging to the exponential family. One possible extension is a mixture distribution, which is often used for clustering. Here, we consider a mixture of exponential family distributions,
\begin{equation}
p(x) = \sum_{k=1}^{K} \pi_k f_k(x;\xi_k), \qquad \sum_{k=1}^K \pi_k = 1, \qquad \pi_k\ge0,
\end{equation}
where $\pi_k$ is the weight parameter for a component distribution $f_k(x;\xi_k)$. This distribution does not belong to the exponential family even when each $f_k$ is an exponential family distribution. However, if we introduce a latent variable $z$ such that the input variable $x$ is generated from $f_z(x;\xi_z)$, the joint distribution of $(x,z)$, $p(x,z) = \pi_z f_z(x;\xi_z)$, becomes an exponential family, which means that a mixture of exponential family distributions can be embedded into the space of exponential families. On the basis of this idea, Akaho~\cite{akaho2008dimension} extended the framework of the $e$-PCA and $m$-PCA to the case of a mixture of exponential family distributions. The key issue is the freedom in the order of the components, i.e., a permutation of $z$ gives a different embedding even though the resulting mixture distributions are identical. Supposing two mixture distributions $p_1$ and $p_2$ are written in latent-variable form,
\begin{equation}
p_1(x, z)=a_z f_z(x;\xi_z), \qquad p_2(x, z)=b_z f_z(x;\zeta_z),
\end{equation}
the divergence between $p_1$ and $p_2$ takes different values depending on the ordering of $z$. Akaho resolved this problem by optimizing the order of $z$ so as to minimize the KL divergence between $p_1(x,z)$ and $p_2(x,z)$, a problem that can be solved by linear programming.
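Concretely, this reordering is an assignment problem: for a permutation $\sigma$, the joint divergence decomposes as $\sum_z a_z\{\log(a_z/b_{\sigma(z)}) + D(f(\cdot\,;\xi_z), f(\cdot\,;\zeta_{\sigma(z)}))\}$, so a cost matrix over component pairs can be handed to a standard solver. The following sketch (our illustration with hypothetical two-component Gaussian mixtures) uses SciPy's Hungarian-method solver:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def kl_gauss(m1, v1, m2, v2):   # KL( N(m1,v1) || N(m2,v2) )
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

a, mu, var = np.array([.3, .7]), np.array([0., 2.]), np.array([1., 1.])
b, nu, tau = np.array([.6, .4]), np.array([2.1, .1]), np.array([1., 1.])

K = len(a)
C = np.array([[a[i] * (np.log(a[i] / b[j])
                       + kl_gauss(mu[i], var[i], nu[j], tau[j]))
               for j in range(K)] for i in range(K)])
rows, cols = linear_sum_assignment(C)   # optimal matching i -> cols[i]
print(cols, C[rows, cols].sum())        # permutation and minimized divergence
\end{verbatim}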
\section{Neural generative model}
An image is represented as a single point in a high-dimensional space, but not every point in this space corresponds to a ``natural image''. For example, if randomly generated high-dimensional data were displayed in the same format as an image, it would look like a ``sand storm'' and would not be meaningful as an image. In other words, natural images occupy only a small region of the high-dimensional space, one that may be vanishingly small compared with the entire space. To handle this problem in a probabilistic way, let us assume that natural images are generated in accordance with a probability distribution $P$ on the high-dimensional space, with some regions assigned a high probability and other regions assigned a very low, even $0$, probability. If we can represent this probability distribution well, even though it may be concentrated in a very small region of the high-dimensional space, we can generate a natural image stochastically. The idea of generative adversarial nets~(GAN; \cite{NIPS2014_5ca3e9b1}) was introduced as a method for modeling natural images using a deep neural network. Let $\mathcal{X}$ denote the high-dimensional space containing images, let $P$ be the probability distribution on $\mathcal{X}$ that generates natural images, and let $\mathcal{D}=\{x_{1},\dotsc,x_{n}\}$ be a data set of natural images sampled from $P$. We also prepare a low-dimensional probability distribution $R$ with known characteristics. For example, a uniform distribution on $[0,1]^{k}$ or a normal distribution on $\mathbb{R}^{k}$ is often used. Let $\mathcal{Z}$ be this sample space and $R$ be the probability distribution on $\mathcal{Z}$. We prepare a generator $G_{\theta}$, which is a machine that takes a single point in $\mathcal{Z}$ and generates a pseudo image $Y\in\mathcal{X}$,
\begin{equation}
z\in\mathcal{Z}\mapsto y=G_{\theta}(z)\in\mathcal{X},
\end{equation}
whose output can be changed depending on the parameter $\theta$. The generator $G_{\theta}$ is required to perform very complex transformations, so a deep neural network is commonly used. The pseudo image $Y$ is generated as
\begin{equation}
Y=G_{\theta}(Z),\quad Z\sim R;
\end{equation}
therefore, the probability distribution of $Y$ is the probability distribution on $\mathcal{X}$ determined by the generator $G_{\theta}$ and the reference distribution $R$ on $\mathcal{Z}$, which is called a pushforward measure. A formal definition is as follows. Let $\mathcal{X}$ be a sample space and $\mathcal{F}_{\mathcal{X}}$ be a $\sigma$-algebra on $\mathcal{X}$. Given a measurable function $G_{\theta}$, the pushforward measure of $R$ is defined by
\begin{equation}
P_{\theta}(B)=R(G_{\theta}^{-1}(B\cap G_{\theta}(\mathcal{Z}))),\; \forall B\in\mathcal{F}_{\mathcal{X}}.
\end{equation}
Note that the generator $G_{\theta}$ only transforms the low-dimensional space $\mathcal{Z}$, so that the support of $P_{\theta}$ is essentially a subspace of the same dimension as $\mathcal{Z}$. To approximate the probability distribution of natural images, the pseudo images generated by transforming a data set $\mathcal{C}=\{z_{1},\dotsc,z_{n}\}$ on $\mathcal{Z}$ drawn from the reference distribution $R$,
\begin{equation}
\mathcal{\tilde{D}} =G_{\theta}(\mathcal{C}) =\{y_{1},\dotsc,y_{n}\},
\end{equation}
should imitate the natural images as if they came from the same distribution as $\mathcal{D}$. This is a variation of the two-sample problem, and the difference between the two samples can be evaluated using an appropriate statistical distance~\cite{NIPS2014_5ca3e9b1,pmlr-v70-arjovsky17a}. To optimize the generator $G_{\theta}$, Goodfellow et al.~\cite{NIPS2014_5ca3e9b1} proposed an adversarial procedure. They introduced a discriminative model $D_{\phi}$ as well as the generative model $G_{\theta}$ and alternately trained both models as a minimax game. The learning procedure for the discriminative model is designed to estimate the probability that a sample comes from the distribution of natural images rather than pseudo images, which is given by
\begin{equation}
\text{maximize } L(\phi) = \mathbb{E}_{X\sim P}[\log D_{\phi}(X)] + \mathbb{E}_{X\sim P_{\theta}}[\log(1-D_{\phi}(X))],
\end{equation}
where $\mathbb{E}_{X\sim P}$ stands for the average with respect to the distribution $P$; in the learning process it is replaced by the empirical average over the data set $\mathcal{D}=\{x_{i},i=1,\dotsc,n\}$ sampled from $P$,
\begin{equation}
\mathbb{E}_{X\sim P}[f(X)] \to \frac{1}{n}\sum_{i=1}^{n}f(x_{i}),
\end{equation}
and $\mathbb{E}_{X\sim P_{\theta}}$ is likewise replaced using the data set $\mathcal{C}=\{z_{i},i=1,\dotsc,n\}$ from the reference distribution $R$ as
\begin{equation}
\mathbb{E}_{X\sim P_{\theta}}[f(X)] \to \frac{1}{n}\sum_{i=1}^{n}f(G_{\theta}(z_{i})).
\end{equation}
The procedure for the generative model is designed to maximize the probability that the discriminative model makes a mistake, which is given by
\begin{align}
\text{minimize } L(\theta) &= \mathbb{E}_{X\sim P_{\theta}}[\log(1-D_{\phi}(X))]\\
&= \mathbb{E}_{Z\sim R}[\log(1-D_{\phi}(G_{\theta}(Z)))].
\end{align}
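The following toy sketch mimics this alternating procedure in one dimension, with an affine generator and a logistic discriminator with a quadratic logit in place of deep networks (all architectures and constants are illustrative only):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
sig = lambda t: 1.0 / (1.0 + np.exp(-t))

def D(phi, x):                     # discriminator, logit phi0 + phi1 x + phi2 x^2
    return sig(phi[0] + phi[1] * x + phi[2] * x * x)

theta, phi, lr, n = np.array([0.0, 1.0]), np.zeros(3), 0.01, 256
for step in range(20000):
    x = rng.normal(2.0, 0.5, n)    # samples from the "natural" distribution P
    z = rng.normal(0.0, 1.0, n)    # reference samples from R
    y = theta[0] + theta[1] * z    # pseudo samples from P_theta

    for _ in range(5):             # ascent on E_P[log D] + E_{P_theta}[log(1-D)]
        d_x, d_y = D(phi, x), D(phi, y)
        fx = np.stack([np.ones(n), x, x * x])
        fy = np.stack([np.ones(n), y, y * y])
        phi += lr * ((fx * (1 - d_x)).mean(axis=1) - (fy * d_y).mean(axis=1))

    d_y = D(phi, y)                # descent on E_R[log(1 - D(G_theta(z)))]
    coef = -d_y * (phi[1] + 2 * phi[2] * y)   # chain rule: d/dy log(1 - D(y))
    theta -= lr * np.array([coef.mean(), (coef * z).mean()])
\end{verbatim}
Training should drive $\theta_0 \approx 2.0$ and $|\theta_1| \approx 0.5$, so that the pushforward of $R$ matches $P$.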
To see the geometrical picture of this sophisticated procedure, we introduce two model manifolds. One is the set of distributions of pseudo images generated by $G_{\theta}$,
\begin{equation}
\mathcal{M}_{g} = \left\{P_{\theta} : \text{pushforward measure of $R$ with } G_{\theta},\;\forall\theta\in\Theta \right\},
\end{equation}
which corresponds to the generative model, and the other is a set of distributions for approximating the midpoint of the ground-truth distribution $P$ and the generator's distribution $P_{\theta}$,
\begin{equation}
\mathcal{M}_{d} = \left\{Q_{\phi} : \text{approximator of the $m$-midpoint of $P$ and $P_{\theta}$} \right\},
\end{equation}
which corresponds to the discriminative model. The loss of $D$ for discriminating data sets from $P$ and $P_{\theta}$ is defined as
\begin{equation}
L(D)=\mathbb{E}_{P}[\log(D(X))]+\mathbb{E}_{P_{\theta}}[\log(1-D(X))];
\end{equation}
therefore, the optimal discriminator $D_{*}$ is given by
\begin{equation}
D_{*}(x)=\frac{p(x)}{p(x)+p_{\theta}(x)},
\end{equation}
where $p$ and $p_{\theta}$ are the $m$-representations of $P$ and $P_{\theta}$, i.e., the probability density functions of $P$ and $P_{\theta}$, respectively. Hence, the crucial part of discriminator learning can be regarded as the estimation of the midpoint of $P$ and $P_{\theta}$. Using these models, the learning procedure is rewritten as follows. Let $q_{\phi}(x)$ be the estimate of the midpoint $(p(x)+p_{\theta}(x))/2$ in the discriminative model manifold $\mathcal{M}_{d}$. Using the relation
\begin{equation}
1-D(x) = 1-\frac{p(x)}{p(x)+p_{\theta}(x)} = \frac{p_{\theta}(x)}{p(x)+p_{\theta}(x)},
\end{equation}
the corresponding reversal discriminator $1-D_{\phi}$ is represented as
\begin{equation}
1-D_{\phi}(x) = \frac{p_{\theta}(x)}{2q_{\phi}(x)}.
\end{equation}
Therefore, given a discriminator $D_{\phi}$, the optimal parameter of the generative model is estimated as
\begin{align}
\hat\theta &=\arg\min_{\theta} \mathbb{E}_{X\sim P_{\theta}}[\log(1-D_{\phi}(X))]\\
&=\arg\min_{\theta} \mathbb{E}_{X\sim P_{\theta}}[\log p_{\theta}(X)-\log 2q_{\phi}(X)]\\
&=\arg\min_{\theta} D(P_{\theta},Q_{\phi}),
\end{align}
which is the $e$-projection from $Q_{\phi}$ onto $\mathcal{M}_{g}$. In the same way, for a given generative model $G_{\theta}$, the optimal parameter of the discriminative model is estimated as
\begin{align}
\hat\phi &=\arg\max_{\phi} \mathbb{E}_{X\sim P}[\log D_{\phi}(X)] + \mathbb{E}_{X\sim P_{\theta}}[\log(1-D_{\phi}(X))]\\
&=\arg\max_{\phi} \mathbb{E}_{X\sim P}[\log p(X)-\log 2q_{\phi}(X)] + \mathbb{E}_{X\sim P_{\theta}}[\log p_{\theta}(X)-\log 2q_{\phi}(X)].
\end{align}
Since $q_{\phi}$ is constrained to be a normalized density in $\mathcal{M}_{d}$, so that $D_{\phi}$ and its reversal remain consistent with each other, this amounts to fitting $q_{\phi}$ to the equal mixture of $P$ and $P_{\theta}$ by maximum likelihood,
\begin{align}
\hat\phi &=\arg\max_{\phi} \mathbb{E}_{X\sim (P+P_{\theta})/2}[\log q_{\phi}(X)]\\
&=\arg\min_{\phi} D((P+P_{\theta})/2,Q_{\phi}),
\end{align}
which is the $m$-projection from $(P+P_{\theta})/2$ onto $\mathcal{M}_{d}$. These two projections are iterated until convergence, and their geometrical interpretation is schematically depicted in Fig.~\ref{fig:GAN}. It is worth noting that the adversarial procedure can be summarized in terms of the Jensen--Shannon divergence, defined here by using the KL divergence with the $m$-midpoint as
\begin{equation}
D_{JS}(P,P_{\theta}) = D(P,(P+P_{\theta})/2)+ D(P_{\theta},(P+P_{\theta})/2).
\end{equation}
We introduce an approximated version of the Jensen--Shannon divergence with $Q_{\phi}$ as
\begin{equation}
\tilde{D}_{JS}(P,P_{\theta};Q_{\phi}) = D(P,Q_{\phi}) + D(P_{\theta},Q_{\phi}).
\end{equation}
Then the original two-sample problem is formulated as
\begin{equation}
\text{minimize } \tilde{D}_{JS}(P,P_{\theta};Q_{\phi}) \text{ with respect to $\theta$ and $\phi$}.
\end{equation}
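The connection between the adversarial objective and this divergence can be checked numerically on a grid: at the optimal discriminator $D_{*}$, the discriminator objective equals $D_{JS}(P,P_{\theta}) - 2\log 2$ under the above definition of $D_{JS}$. A small sketch with two Gaussians:
\begin{verbatim}
import numpy as np
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
gauss = lambda m, v: np.exp(-(x - m)**2 / (2*v)) / np.sqrt(2*np.pi*v)

p, p_t = gauss(0.0, 1.0), gauss(1.5, 1.0)   # densities of P and P_theta
mid = 0.5 * (p + p_t)                       # the m-midpoint
d_star = p / (p + p_t)                      # optimal discriminator

kl = lambda a, b: np.sum(a * np.log(a / b)) * dx
js = kl(p, mid) + kl(p_t, mid)
loss = np.sum(p * np.log(d_star) * dx) + np.sum(p_t * np.log(1 - d_star) * dx)
print(js - 2.0 * np.log(2.0), loss)         # the two values agree
\end{verbatim}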
\begin{figure}[ht]
\centering
\includegraphics[width=.6\linewidth]{GAN}
\caption{Schematic figure of the generative adversarial model.}
\label{fig:GAN}
\end{figure}

\section{Conclusion and extension}
We described the $em$ algorithm, the information geometric counterpart of the EM algorithm identified by Amari~\cite{AMARI19951379}. The $em$ algorithm is a meta-algorithm with a very wide range of applicability, which leaves room for customization to individual problems. Various extensions and applications of the $em$ algorithm were presented in this paper as good examples of how viewing a problem from a geometric point of view clarifies its structure and facilitates parameter estimation using iterative algorithms. The EM algorithm is based on Jensen's inequality, which is used to bound the objective function (the logarithmic marginal likelihood) from below before maximizing the bound. More generally, there is the MM algorithm~\cite{hunter:mm}, which uses not only Jensen's inequality but also the Cauchy--Schwarz inequality, the arithmetic--geometric mean inequality, or quadratic approximations to construct the bounds to be minimized or maximized. The MM algorithm is a broader class of meta-algorithms that includes the EM algorithm as a special case and is expected to have a similar geometric structure, but its unified treatment as an algorithm on statistical manifolds is not obvious, and future research on this issue is expected.

\section*{Acknowledgments}
Part of this work is supported by JSPS KAKENHI No.JP22H03653 and JP22486199.

\section*{Data availability}
There are no data related to this paper.

\section*{Conflict of interest}
There is no conflict of interest to declare.

\newcommand{\noop}[1]{}
{ "redpajama_set_name": "RedPajamaArXiv" }
6,730
{"url":"https:\/\/f.briatte.org\/r\/use-the-rds-format","text":"Remember to use the RDS format\n\nNote to self \u2013 Remember to serialize R objects as RDS files when it makes sense.\n\nImporting Stata data into R\n\nThe European Social Survey recently announced that it had added Round\u00a07 of its survey to its cumulative dataset, which can be downloaded in CSV, SPSS or Stata format.\n\nWhile my instinctive preference for storing data is to use CSV, in the case of survey data, many\/most measurements come with detailed variable and value labels.\n\nFurthermore, as is the case in the European Social Survey, the missing values of survey data generally take several different values to code for different forms of nonresponse, depending on whether the respondents \u201cdid not know\u201d what to answer, provided \u201cno answer,\u201d or \u201crefused to answer\u201d the question.\n\nFor these reasons, I tried to download the European Social Survey as a Stata dataset, only to realise later that the data had been produced with Stata\u00a014\u2014which means that it cannot be opened with older versions of Stata, unless the data were saved with the saveold command and with the appropriate argument for my version of Stata.\n\nFortunately, I was able to read the data in R with haven. The package, which wraps around the ReadStat C library, can import SAS, SPSS and Stata files. Once imported, the data are available as a standard data frame, with value labels accessible via functions like print_labels and as_factor.\n\nSaving the data as a RDS file\n\nAnother issue that then I faced with the European Social Survey dataset was its size: while only 103.5\u00a0MB compressed, the uncompressed Stata DTA file for the complete (all variables, all waves) version of the cumulative dataset is extremely large: 3.16\u00a0GB.\n\nIn comparison, the CSV file for the same dataset, which does not contain labels or detailed missing values, is 58.1\u00a0MB compressed and 559.7\u00a0MB uncompressed.\n\nHere again, R offers a superior alternative to both the CSV and Stata formats: by saving the file as a RDS file, which creates a serialized version of the dataset and then saves it with gzip compression, I was able to bring the size of the dataset down to 51.6\u00a0MB.\n\nNote that, when loaded into R, the RDS object still takes around 3\u00a0GB of (live) memory.\n\nThe full code used to convert the European Social Survey data from the DTA (Stata) to the RDS (R) format follows. The code requires the haven package, which is part of Hadley Wickham's tidyverse package suite.\n\nUpdate (December 14, 2016): having discussed the issue on Twitter, it appears that the data mentioned in this note can be compressed quite efficiently in Stata. That operation, however, requires Stata\u00a014 or above, if Stata keeps its commitment backwards compatibility. 
Update (December 14, 2016): having discussed the issue on Twitter, it appears that the data mentioned in this note can be compressed quite efficiently in Stata. That operation, however, requires Stata 14 or above, if Stata keeps its commitment to backwards compatibility. There is currently no other way to load the file in lower versions of Stata.

• First published on December 12th, 2016
ECWCS (read as the initialism "E-C-W-C-S" or as "ecwacs"; Extended Cold Weather Clothing System) is a modern American system of high-tech military clothing designed to keep soldiers comfortable in cold and extremely cold weather (at temperatures from −50 to +5 °C). The first generation of the system was developed in the 1980s by the US Army Natick Soldier Center. The concept of the system is that a soldier has a fixed set of clothing items and puts on a particular combination of them, according to the weather conditions and mode of operation, that provides maximum comfort. The ECWCS system and its modifications have been adopted by the armed forces of the United States. It is used in parallel or together with other military clothing systems: ACU (Army Combat Uniform), MCCUU (Marine Corps Combat Utility Uniform), NWU (Navy Working Uniform) and others. As of 2013, the third and second generations of the ECWCS system are in use in the US armed forces, and the fourth generation is under development. There are many components of ECWCS, as well as copies and analogues made by various companies, intended for sale to civilians. Such items are in demand among hunters, anglers, hikers, extreme-sports athletes, and airsoft players.

Notes

Literature
Commander's Guide to Cold Weather Operations

US military uniforms
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,593
Holi Celebrations at Kamand Campus ================================= Holi was celebrated with great zeal and enthusiasm this year at Kamand Campus. Celebrations began with Holika Dahan near the badminton court. On the day of "Dulhendi", special food was served along with sweets.
{ "redpajama_set_name": "RedPajamaGithub" }
1,155
Q: Syntax error in SQL statement: "WITH" keyword throwing exception

I have added a second CTE, TMP2, and was not able to run the query... Could you please help me with this query? I am using Oracle 11g.

WITH TMP1(REQUEST_NO) AS (SELECT REQUEST_NO FROM QUOTE) SELECT TMP1.REQUEST_NO FROM TMP1;

WITH TMP1(REQUEST_NO) AS (SELECT REQUEST_NO FROM QUOTE), TMP2(AGENT) AS (SELECT AGENT FROM AGENT_TAB) SELECT TMP2.AGENT FROM TMP2;

The exception I got is:

org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement "WITH TMP1(REQUEST_NO) AS (SELECT REQUEST_NO FROM QUOTE), [*]TMP2(AGENT) AS (SELECT AGENT FROM AGENT_TAB) SELECT TMP2.AGENT FROM TMP2 "; expected "(, SELECT, FROM"; SQL statement:

The query is fine in SQL Developer but not working in the JUnit tests.

jdbc:h2:mem:request_no;MODE=Oracle

We are using H2 version 1.3.171 on Windows 7 (64-bit) with JDK 1.7.0_25.

A: Oracle supports the WITH clause, but it looks like H2 does not support it (see the H2 SQL grammar). I would inline the query from the WITH part into the main query.
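For example, the second statement can be rewritten without WITH by using a derived table (a sketch; this form is accepted by both Oracle and H2):

SELECT TMP2.AGENT
FROM (SELECT AGENT FROM AGENT_TAB) TMP2;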
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,125
Hormiopius ptericoptophagus is an insect belonging to the order Hymenoptera and the family Braconidae (braconid wasps). The scientific name of the species was first validly published by Blanchard in 1962.

Braconid wasps
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,660
<?php

if ( ! class_exists( 'GFForms' ) ) {
	die();
}

class GF_Field_Radio extends GF_Field {

	public $type = 'radio';

	public function get_form_editor_field_title() {
		return esc_attr__( 'Radio Buttons', 'gravityforms' );
	}

	function get_form_editor_field_settings() {
		return array(
			'conditional_logic_field_setting',
			'prepopulate_field_setting',
			'error_message_setting',
			'label_setting',
			'label_placement_setting',
			'admin_label_setting',
			'choices_setting',
			'rules_setting',
			'visibility_setting',
			'duplicate_setting',
			'description_setting',
			'css_class_setting',
			'other_choice_setting',
		);
	}

	public function is_conditional_logic_supported() {
		return true;
	}

	public function validate( $value, $form ) {
		if ( $this->enableOtherChoice && $value == 'gf_other_choice' ) {
			$value = rgpost( "input_{$this->id}_other" );
		}

		if ( $this->isRequired && $this->enableOtherChoice && $value == GFCommon::get_other_choice_value() ) {
			$this->failed_validation  = true;
			$this->validation_message = empty( $this->errorMessage ) ? esc_html__( 'This field is required.', 'gravityforms' ) : $this->errorMessage;
		}
	}

	public function get_first_input_id( $form ) {
		return '';
	}

	public function get_field_input( $form, $value = '', $entry = null ) {
		$form_id         = $form['id'];
		$is_entry_detail = $this->is_entry_detail();
		$is_form_editor  = $this->is_form_editor();

		$id       = $this->id;
		$field_id = $is_entry_detail || $is_form_editor || $form_id == 0 ? "input_$id" : 'input_' . $form_id . "_$id";

		$disabled_text = $is_form_editor ? 'disabled="disabled"' : '';

		return sprintf( "<div class='ginput_container ginput_container_radio'><ul class='gfield_radio' id='%s'>%s</ul></div>", $field_id, $this->get_radio_choices( $value, $disabled_text, $form_id ) );
	}

	public function get_radio_choices( $value = '', $disabled_text, $form_id = 0 ) {
		$choices = '';

		$is_entry_detail = $this->is_entry_detail();
		$is_form_editor  = $this->is_form_editor();
		$is_admin        = $is_entry_detail || $is_form_editor;

		if ( is_array( $this->choices ) ) {
			$choice_id           = 0;
			$other_default_value = '';

			// add 'other' choice to choices if enabled
			if ( $this->enableOtherChoice ) {
				$other_default_value = GFCommon::get_other_choice_value();
				$this->choices[]     = array( 'text' => $other_default_value, 'value' => 'gf_other_choice', 'isSelected' => false, 'isOtherChoice' => true );
			}

			$logic_event = $this->get_conditional_logic_event( 'click' );
			$count       = 1;

			foreach ( $this->choices as $choice ) {
				if ( $is_entry_detail || $is_form_editor || $form_id == 0 ) {
					$id = $this->id . '_' . $choice_id ++;
				} else {
					$id = $form_id . '_' . $this->id . '_' . $choice_id ++;
				}

				$field_value = ! empty( $choice['value'] ) || $this->enableChoiceValue ? $choice['value'] : $choice['text'];

				if ( $this->enablePrice ) {
					$price        = rgempty( 'price', $choice ) ? 0 : GFCommon::to_number( rgar( $choice, 'price' ) );
					$field_value .= '|' . $price;
				}

				if ( rgblank( $value ) && RG_CURRENT_VIEW != 'entry' ) {
					$checked = rgar( $choice, 'isSelected' ) ? "checked='checked'" : '';
				} else {
					$checked = RGFormsModel::choice_value_match( $this, $choice, $value ) ? "checked='checked'" : '';
				}

				$tabindex = $this->get_tabindex();
				$label    = sprintf( "<label for='choice_%s' id='label_%s'>%s</label>", $id, $id, $choice['text'] );

				$input_focus = '';

				// handle 'other' choice
				if ( rgar( $choice, 'isOtherChoice' ) ) {
					$onfocus = ! $is_admin ? 'jQuery(this).prev("input")[0].click(); if(jQuery(this).val() == "' . $other_default_value . '") { jQuery(this).val(""); }' : '';
					$onblur  = ! $is_admin ? 'if(jQuery(this).val().replace(" ", "") == "") { jQuery(this).val("' . $other_default_value . '"); }' : '';
					$onkeyup = $this->get_conditional_logic_event( 'keyup' );

					$input_focus  = ! $is_admin ? "onfocus=\"jQuery(this).next('input').focus();\"" : '';
					$value_exists = RGFormsModel::choices_value_match( $this, $this->choices, $value );

					if ( $value == 'gf_other_choice' && rgpost( "input_{$this->id}_other" ) ) {
						$other_value = rgpost( "input_{$this->id}_other" );
					} elseif ( ! $value_exists && ! empty( $value ) ) {
						$other_value = $value;
						$value       = 'gf_other_choice';
						$checked     = "checked='checked'";
					} else {
						$other_value = $other_default_value;
					}
					$label = "<input id='input_{$this->formId}_{$this->id}_other' name='input_{$this->id}_other' type='text' value='" . esc_attr( $other_value ) . "' aria-label='" . esc_attr__( 'Other', 'gravityforms' ) . "' onfocus='$onfocus' onblur='$onblur' $tabindex $onkeyup $disabled_text />";
				}

				$choice_markup = sprintf( "<li class='gchoice_$id'><input name='input_%d' type='radio' value='%s' %s id='choice_%s' $tabindex %s $logic_event %s />%s</li>", $this->id, esc_attr( $field_value ), $checked, $id, $disabled_text, $input_focus, $label );

				$choices .= gf_apply_filters( array( 'gform_field_choice_markup_pre_render', $this->formId, $this->id ), $choice_markup, $choice, $this, $value );

				if ( $is_form_editor && $count >= 5 ) {
					break;
				}

				$count ++;
			}

			$total = sizeof( $this->choices );
			if ( $count < $total ) {
				$choices .= "<li class='gchoice_total'>" . sprintf( esc_html__( '%d of %d items shown. Edit field to view all', 'gravityforms' ), $count, $total ) . '</li>';
			}
		}

		return gf_apply_filters( array( 'gform_field_choices', $this->formId ), $choices, $this );
	}

	public function get_value_default() {
		return $this->is_form_editor() ? $this->defaultValue : GFCommon::replace_variables_prepopulate( $this->defaultValue );
	}

	public function get_value_submission( $field_values, $get_from_post_global_var = true ) {
		$value = $this->get_input_value_submission( 'input_' . $this->id, $this->inputName, $field_values, $get_from_post_global_var );
		if ( $value == 'gf_other_choice' ) {
			// get value from text box
			$value = $this->get_input_value_submission( 'input_' . $this->id . '_other', $this->inputName, $field_values, $get_from_post_global_var );
		}

		return $value;
	}

	public function get_value_entry_list( $value, $entry, $field_id, $columns, $form ) {
		return GFCommon::selection_display( $value, $this, $entry['currency'] );
	}

	public function get_value_entry_detail( $value, $currency = '', $use_text = false, $format = 'html', $media = 'screen' ) {
		return GFCommon::selection_display( $value, $this, $currency, $use_text );
	}

	/**
	 * Gets merge tag values.
	 *
	 * @since  Unknown
	 * @access public
	 *
	 * @uses GFCommon::to_money()
	 * @uses GFCommon::format_post_category()
	 * @uses GFFormsModel::is_field_hidden()
	 * @uses GFFormsModel::get_choice_text()
	 * @uses GFCommon::format_variable_value()
	 * @uses GFCommon::implode_non_blank()
	 *
	 * @param array|string $value      The value of the input.
	 * @param string       $input_id   The input ID to use.
	 * @param array        $entry      The Entry Object.
	 * @param array        $form       The Form Object.
	 * @param string       $modifier   The modifier passed.
	 * @param array|string $raw_value  The raw value of the input.
	 * @param bool         $url_encode If the result should be URL encoded.
	 * @param bool         $esc_html   If the HTML should be escaped.
	 * @param string       $format     The format that the value should be.
	 * @param bool         $nl2br      If the nl2br function should be used.
	 *
	 * @return string The processed merge tag.
	 */
	public function get_value_merge_tag( $value, $input_id, $entry, $form, $modifier, $raw_value, $url_encode, $esc_html, $format, $nl2br ) {
		$use_value       = $modifier == 'value';
		$use_price       = in_array( $modifier, array( 'price', 'currency' ) );
		$format_currency = $modifier == 'currency';

		if ( is_array( $raw_value ) && (string) intval( $input_id ) != $input_id ) {
			$items = array( $input_id => $value ); // Float input Ids. (i.e. 4.1 ). Used when targeting specific checkbox items.
		} elseif ( is_array( $raw_value ) ) {
			$items = $raw_value;
		} else {
			$items = array( $input_id => $raw_value );
		}

		$ary = array();

		foreach ( $items as $input_id => $item ) {
			if ( $use_value ) {
				list( $val, $price ) = rgexplode( '|', $item, 2 );
			} elseif ( $use_price ) {
				list( $name, $val ) = rgexplode( '|', $item, 2 );
				if ( $format_currency ) {
					$val = GFCommon::to_money( $val, rgar( $entry, 'currency' ) );
				}
			} elseif ( $this->type == 'post_category' ) {
				$use_id     = strtolower( $modifier ) == 'id';
				$item_value = GFCommon::format_post_category( $item, $use_id );

				$val = RGFormsModel::is_field_hidden( $form, $this, array(), $entry ) ? '' : $item_value;
			} else {
				$val = RGFormsModel::is_field_hidden( $form, $this, array(), $entry ) ? '' : RGFormsModel::get_choice_text( $this, $raw_value, $input_id );
			}

			$ary[] = GFCommon::format_variable_value( $val, $url_encode, $esc_html, $format );
		}

		return GFCommon::implode_non_blank( ', ', $ary );
	}

	public function get_value_save_entry( $value, $form, $input_name, $lead_id, $lead ) {
		if ( $this->enableOtherChoice && $value == 'gf_other_choice' ) {
			$value = rgpost( "input_{$this->id}_other" );
		}

		$value = $this->sanitize_entry_value( $value, $form['id'] );

		return $value;
	}

	public function allow_html() {
		return true;
	}

	public function get_value_export( $entry, $input_id = '', $use_text = false, $is_csv = false ) {
		if ( empty( $input_id ) ) {
			$input_id = $this->id;
		}

		$value = rgar( $entry, $input_id );

		return $is_csv ? $value : GFCommon::selection_display( $value, $this, rgar( $entry, 'currency' ), $use_text );
	}

	/**
	 * Strip scripts and some HTML tags.
	 *
	 * @param string $value   The field value to be processed.
	 * @param int    $form_id The ID of the form currently being processed.
	 *
	 * @return string
	 */
	public function sanitize_entry_value( $value, $form_id ) {
		if ( is_array( $value ) ) {
			return '';
		}

		$allowable_tags = $this->get_allowable_tags( $form_id );

		if ( $allowable_tags !== true ) {
			$value = strip_tags( $value, $allowable_tags );
		}

		$allowed_protocols = wp_allowed_protocols();
		$value             = wp_kses_no_null( $value, array( 'slash_zero' => 'keep' ) );
		$value             = wp_kses_hook( $value, 'post', $allowed_protocols );
		$value             = wp_kses_split( $value, 'post', $allowed_protocols );

		return $value;
	}
}

GF_Fields::register( new GF_Field_Radio() );
{ "redpajama_set_name": "RedPajamaGithub" }
3,355
Secret Weapons Over Normandy is a video game for PlayStation 2, Xbox and PC, the sequel to Secret Weapons of the Luftwaffe (released only for PC), both by LucasArts, set during the Second World War. The protagonist, James Chase, is an American volunteer in the RAF, more precisely in an elite squadron known as the Battlehawks. Over the course of the story the player relives some of the great battles of that war, from Dunkirk to Stalingrad to the Normandy landings, and works to prevent the Third Reich from getting its hands on the atomic bomb. In the game, numerous aircraft can be flown and collected. The game's symphonic soundtrack was composed by the musician Michael Giacchino, author of the music for the television series Lost and for the Star Trek film inspired by the television saga of the same name.

Gameplay

The player controls James Chase, an American volunteer who left to fight alongside the British at the beginning of the war. He flies in a special squadron known as the Battlehawks. During the game, the player has the opportunity to capture new aircraft or to upgrade existing ones, including what the game calls "the most insidious aeroplanes of the Third Reich." The player's main enemy is the Luftwaffe, in particular the Battlehawks' enemy counterpart, a squadron of elite German pilots known as Nemesis (similar to the real KG 200), commanded by Oberst Krieger. The player also faces the forces of the Empire of Japan. In some cases, the player mans anti-aircraft guns to defend the sky from enemy aircraft; in some missions the player instead takes a seat in the ventral turret of a Boeing B-17 Flying Fortress to defend an entire bomber squadron from Luftwaffe attacks. Notable is the presence of some aircraft that never got past the prototype stage, never actually took part in combat, or were used in limited numbers: the XP-55 Ascender, the XP-56 Black Bullet, the Chance-Vought Flying Pancake, the Junkers Ju 390, and the Daimler-Benz C. Several German weapons that were never completed or that proved failures also play a part in the game: for example, the Mistel and the Wasserfall missile launcher. A brief controversy also broke out over early previews of the game, which showed the German ace Erich Hartmann portrayed as the main villain. Hartmann's family threatened legal action, and the images were removed and replaced with those of Hans-Ulrich Rudel for the actual game.

Story

Characters

James Chase: an American volunteer pilot and former US Navy pilot who left as a volunteer to help the British at Dunkirk; he later ends up in the elite Battlehawk squadron, of which he soon proves an essential member.
Trevor: a highly talented British pilot who asks Chase to join the Battlehawks after seeing his skill in combat; he is shot down by Nemesis during a mission in the Pacific and believed dead, but returns unexpectedly to assist Chase again in the skies over Norway.
Cedric: a pilot and member of the French Resistance who joins the Battlehawks after helping prevent the German invasion of Great Britain, but perishes in a search-and-destroy mission against secret Nazi weapons factories.
Lyle: a British mechanic; it is he who upgrades and repairs the Battlehawks' aircraft, and in some missions he challenges Chase to outscore him, for example by shooting down more enemy aircraft.
Lydia Litvyak: a Soviet pilot and ally of the Battlehawks, whom she later joins; she is first encountered at the gates of Stalingrad, where she leads Soviet armored forces in the capture of an airfield that has fallen into enemy hands.
Eagle Squadron: a squadron of American volunteers who, like Chase, decided to help Great Britain against Nazi Germany.
Flying Tigers: a squadron of American volunteers aiding China against the Japanese occupation forces; they assist the Battlehawks several times against Nemesis and the Japanese.

Plot

The game begins with James Chase, an American volunteer in England engaged in the Battle of Dunkirk, where he must protect the French and British troops from the bombing of the Stukas and Bf 109s. Here Chase meets Trevor, a British pilot of great talent. After the battle he destroys bridges to slow the German ground forces, and subsequently clashes with Oberst Krieger, a ruthless German ace, whom he manages to shoot down. Impressed by this, Trevor invites Chase to join a secret squadron called the Battlehawks. The Battlehawks' first mission is to protect their own base, the nearby towns and the radar stations; in this mission Chase downs a Ju 88 without damaging it seriously. With the aircraft repaired, Chase carries out a risky night mission: destroying the German landing craft docked in France, thereby preventing Operation Sea Lion. After the successful sinking, Chase and Trevor, together with a French partisan named Cedric, again face Oberst Krieger, who has meanwhile created his own special squadron known as Nemesis, but they manage to escape in stolen Bf 109s and return to England. The Battlehawks' next mission takes place in North Africa, where Hitler has sent his best general, Erwin Rommel; the squadron's task is to sink the ship carrying the German general. The Battlehawks, flying old Swordfish, manage to sink the ship, but it later turns out that the general had already arrived in Tripoli. The White Rose, a British spy among the Germans, is unmasked and forced to flee to a nearby airfield, where she meets Pauline Armstrong, a new Battlehawk pilot, with whom she takes an aircraft back to Great Britain. After Pearl Harbor, the Battlehawks are sent to support the Flying Tigers, engaged against the Japanese in the skies of China, and to stop a deal between Nemesis and the Japanese. The two squadrons manage to destroy what the Germans had brought for the Japanese, but the Germans escape with their cargo aboard a U-boat. Pauline Armstrong is shot down during that mission; she survives the crash but is captured and held in a Japanese prisoner-of-war camp. A C-47 Skytrain locates the camp, and the Battlehawks' mission, together with the Flying Tigers, is to free the prisoners. The mission succeeds, and the prisoners steal Japanese aircraft from a nearby airfield. Unfortunately, Nemesis and Krieger appear once more; the German manages to shoot down Trevor, while the others escape before Japanese reinforcements arrive. Pauline was wounded during her time in the prison camp. The nearest ship able to treat her is the aircraft carrier USS Yorktown. Once aboard, Chase learns that the Japanese intend to attack Midway. Because the crew is short-handed, Chase signs onto the Yorktown's flight roster and fights first to protect the island from Japanese bombing, and then to sink the four Japanese carriers Akagi, Kaga, Sōryū, and Hiryū. With the American victory in the Pacific, Germany poses an even greater threat, and Chase is sent back to Europe on a mission to escort Professor Niels Bohr, a Danish scientist who wishes to defect. From Professor Bohr's debriefing it emerges that the U-boat returning from the Pacific was carrying the equipment needed to produce a transatlantic bomber (called the Ju 390) capable of striking targets on United States territory. Bohr also reveals that the first prototype is already ready to be tested on the Eastern Front at the gates of Stalingrad. The Battlehawks, together with the pilot Lydia Litvyak and the Red Army, capture the airfield where the prototype is located, but two more Ju 390s appear by surprise and destroy the whole airfield so that nothing falls into Allied hands. Chase, flying a Shturmovik, manages to shoot one down, but the other escapes. The next mission is inside Germany, to destroy the Ju 390 factory. Meanwhile, Professor Bohr continues to provide information to the Battlehawks, this time revealing that the Ju 390 was designed to carry the first atomic bomb to the United States. He also reveals that the Norsk plant in Norway, the only one of its kind in Europe, is used to produce deuterium oxide, or heavy water, an element essential to the construction of nuclear weapons. The Battlehawks' mission is to destroy the plant. Trevor, having somehow escaped the Japanese, also returns by surprise. Chase and Trevor manage to destroy the plant, but while they are about to sink some ferries carrying heavy water, they are attacked by Nemesis's new Me 262 jet fighters. Aboard their Mosquitoes they manage to destroy some of the ferries and, incredibly, to shoot down a few jets. With the German nuclear research program halted, the Battlehawks find themselves facing the new Me 262 jet fighters, revolutionary aircraft which, if produced in large numbers, could mean the end of American air supremacy in Europe. Chase is sent aboard a B-17 Flying Fortress to bomb an Me 262 base; he then parachutes out along with the bombs, steals an Me 262, and returns to base so that it can be studied. The White Rose reveals that the Peenemünde plant has created several weapons of destruction called the V-1 and V-2; in the destruction of this base, however, Cedric, the Battlehawks' French ally, loses his life. After this mission, the Battlehawks are sent to destroy another factory in Germany, together with an Me 163 airfield. In the final mission the Battlehawks take part in D-Day, protecting the American troops from the last V-1s and V-2s and from the German Mistel bombers. Everything seems to be going well when Krieger suddenly appears aboard an enormous flying aircraft carrier called the Daimler-Benz C, escorting a Junkers Ju 390 carrying an atomic bomb to be dropped on the American forces. After a hard fight, the Battlehawks manage to destroy the two colossal aircraft, although Krieger escapes aboard one of the Heinkel He 1078s attached to the Daimler-Benz C, cursing the Battlehawks. The story ends with D-Day a success; Chase remarks that Nemesis is probably still out there somewhere, waiting and preparing for their next encounter.

Aircraft in the game

British aircraft
Hawker Hurricane
Supermarine Spitfire
Fairey Swordfish
De Havilland Mosquito
Gloster Meteor

German aircraft
Messerschmitt Bf 109
Messerschmitt Bf 110 (not flyable)
Daimler-Benz C (not flyable)
Messerschmitt Me 262
Messerschmitt Me 163 Komet
Junkers Ju 87 Stuka
Junkers Ju 88
Junkers Ju 390 (not flyable)
Heinkel He 111 (not flyable)
Heinkel He 1078
Dornier Do 335
Focke-Wulf Fw 190

American aircraft
Curtiss P-40
Lockheed P-38
Grumman F4F Wildcat
Douglas SBD Dauntless
Douglas Devastator
Douglas C-47 (not flyable)
Boeing B-17 (not flyable)
North American P-51
Curtiss-Wright XP-55
Northrop XP-56
Chance-Vought XF5U Flying Pancake

Japanese aircraft
Mitsubishi A6M
Mitsubishi G3M (not flyable)
Aichi D3A (not flyable)
Nakajima B5N (not flyable)

Russian aircraft
Ilyushin Il-2 Shturmovik

Special aircraft from Star Wars
TIE fighter
X-wing

External links
LucasArts

World War II flight simulators
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,540
Cannes, France – In healthcare, we talk a lot about adherence and compliance. But, as brands, our role is really helping support change: How do we help people act on the health decisions they've made for themselves? How do we empower them to be more resilient, to try and try again? We've all made those commitments – to talk to a doctor, change a diet, or stick with a new routine. But, too often, life gets in the way and those commitments are left behind. It's our brands that take the blame – we're not effective, not easy, not useful. inVentiv Health wanted to understand more about why. So, we followed 30 families for 2 years to understand how they lived with and dealt with acute and chronic medical issues, from diabetes and mental illness to infertility and cancer. At Lions Health, Kathleen Starr, PhD, head of inVentiv Behavioral Insights, and Susan Perlbachs, EVP, Executive Director at GSW, shared the findings and explained how they should change the way we think about creativity. As an industry, we need to shift from individual, patient-centric support to wider, more social-centric change.

"Patient" isn't an identity people have for themselves. They see themselves as wife, mother, volunteer, friend, etc. Even seriously ill patients attach to the emotional roles in their lives, the ones that connect them to other people.

Families work in a "patient of the day" mentality. We respond to daily upheaval more than we are guided by careful planning. In many circumstances, the flu can beat cancer in terms of immediate priority.

In most households, there's a "one basket" budget. Diabetes treatments, asthma medicine, and new hockey sticks come out of the same fund, with shifting priorities over which is paid for first.

Our healthcare system is expanding. People say they're overwhelmed by their engagements with the core healthcare system, but they add onto it, asking a massage therapist about their headaches, a pharmacist about their allergies, and Dr Oz about everything else. That's right, Dr Oz is part of the "patient journey" for many Americans. This proliferation of counsel points to a clear conflict: we've lost trust, but we want to trust.

In our society, temptation is culture. To succeed, people have to avoid all the bad. Avoidance is a difficult behavior to maintain.

Perlbachs stepped up to help the world-class creative leads in the room understand our role in applying this knowledge. She said that, today, creative has to work within each person's system; it can't influence the individual alone.

Translate Key Health Concepts Into Real-World Context: The language is never going to change, but the (personal) meaning has to.

Unite Patient and Family: The participation of family is "make or break" in healthcare success.

Resolve Conflict to Show a Clear Path: People are drowning in opinions but starving for wisdom: what's right for them, really?

Create Cultural Relevance: Is culture a barrier or a boost? We need to be part of the culture they want to thrive in.
{ "redpajama_set_name": "RedPajamaC4" }
8,500
/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ /* vim:set ts=2 sw=2 sts=2 et cindent: */ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #ifndef nsSubstring_h___ #define nsSubstring_h___ #ifndef nsAString_h___ #include "nsAString.h" #endif #endif // !defined(nsSubstring_h___)
{ "redpajama_set_name": "RedPajamaGithub" }
5,427
Q: Pipeline input to executable with PowerShell

I need to execute the following command in PowerShell:

%windir%\system32\inetsrv\appcmd add site /in < c:\mywebsite.xml

I am trying to do it like this:

$appCmd = "$Env:SystemRoot\system32\inetsrv\appcmd.exe"
[String] $targetFilePath = $restoreFromDirectory + "config.xml"
$AllArgs = @('add', 'site', '/in')
& $appCmd $AllArgs | Get-Content $targetFilePath

But this is apparently wrong, since it gives me an error:

The input object cannot be bound to any parameters for the command either because the command does not take pipeline input or the input and its properties do not match any of the parameters that take pipeline input.

Please advise on the correct alternative to the above-mentioned script in PowerShell.

A: A PowerShell pipe takes input on the left and passes it into the command on the right. In this case, you are passing the output of your command to Get-Content, which doesn't take an input parameter. Change your call line so that the input flows from left to right:

Get-Content $targetFilePath | & $appCmd $AllArgs

See this answer on StackOverflow for an example.
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,805