Agapostemon tyleri is a species of sweat bee in the family Halictidae. It was described in 1917.
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,811
Q: Non type-variable argument in the constraint for Arbitrary typeclass

For an exercise in Chapter 15 of Haskell Programming From First Principles, I'm trying to write an Arbitrary instance based on another Arbitrary instance:

    module AccumulateRight where

    import Data.Semigroup
    import Test.QuickCheck

    data Validation a b = Fail a | Pass b deriving (Eq, Show)

    newtype AccumulateRight a b = AccumulateRight (Validation a b) deriving (Eq, Show)

    type TestType = AccumulateRight String [Int]

    instance Semigroup b => Semigroup (AccumulateRight a b) where
      _ <> (AccumulateRight (Fail x)) = AccumulateRight (Fail x)
      (AccumulateRight (Fail x)) <> _ = AccumulateRight (Fail x)
      (AccumulateRight (Pass a)) <> (AccumulateRight (Pass b)) =
        AccumulateRight . Pass $ a <> b

    instance (Arbitrary a, Arbitrary b) => Arbitrary (Validation a b) where
      arbitrary = oneof [Fail <$> arbitrary, Pass <$> arbitrary]

    instance Arbitrary (Validation a b) => Arbitrary (AccumulateRight a b) where
      arbitrary = AccumulateRight <$> arbitrary

    semigroupAssoc :: (Eq m, Semigroup m) => m -> m -> m -> Bool
    semigroupAssoc a b c = (a <> (b <> c)) == ((a <> b) <> c)

    type Assoc = TestType -> TestType -> TestType -> Bool

    main :: IO ()
    main = quickCheck (semigroupAssoc :: Assoc)

but the following error occurs:

    • Non type-variable argument in the constraint: Arbitrary (Validation a b)
      (Use FlexibleContexts to permit this)
    • In the context: Arbitrary (Validation a b)
      While checking an instance declaration
      In the instance declaration for 'Arbitrary (AccumulateRight a b)'
       |
    22 | instance Arbitrary (Validation a b) => Arbitrary (AccumulateRight a b) where
       | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    Failed, no modules loaded.

Am I doing anything wrong here? Why can't I use the typeclass of an existing data type as the constraint here?

A: It's a silly restriction that was put in place before it was understood how difficult typeclasses would be to implement. It turns out to be easy enough to support, so there's a language extension -- mentioned in the error -- that lets you write such a constraint. You can turn it on by adding

    {-# LANGUAGE FlexibleContexts #-}

to the top of your file, and as extensions go this one is considered completely benign. However, in this case, you should not turn it on, and instead should just write

    instance (Arbitrary a, Arbitrary b) => Arbitrary (AccumulateRight a b)

-- after all, (Arbitrary a, Arbitrary b) are exactly the conditions under which Arbitrary (Validation a b) holds.
{ "redpajama_set_name": "RedPajamaStackExchange" }
483
\section{Introduction} Systems biology aims to understand biological systems at a system level, including their structures and their dynamics \cite{kitano2002systems}. Often, a biological system is modeled by a system of ordinary differential equations (ODEs), which describes the dynamics of the various concentrations of chemical and molecular species as a function of time. These models usually introduce parameters that are unknown and must be estimated accurately and efficiently. Hence, one central challenge in systems biology is the estimation of unknown model parameters (e.g., rate constants), after which we can predict the model dynamics. Parameter estimation requires observations of the state variables of the system, but due to technical limitations, only part of the state variables are observable in experiments, which makes parameter estimation even more difficult. \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{the_workflow_for_the_development_and_identification_of_a_systems_biologicalmodel.pdf} \caption{\textbf{The workflow for the development and identification of a systems biological model.}} \label{fig:flowchart} \end{figure} In this chapter, we introduce the workflow for the development and identification of systems biological models (Fig.~\ref{fig:flowchart}). The workflow consists of the following steps: \begin{itemize} \item Step 1: Data acquisition and systems-biological model development (Section~\ref{sec:model-definition}). As the first step, we need to collect experimental data for the underlying system and develop ODEs to model the system dynamics. This is not the focus of this chapter, and we directly use the ultradian endocrine model for glucose-insulin interaction \cite{sturis1991computer}. \item Step 2: Structural identifiability analysis (Section~\ref{sec:struc-ident}). With a proposed model, we determine which parameters of the model are structurally identifiable.
If the parameters are not structurally identifiable, we revisit Step 1, e.g., by acquiring more data or fixing certain parameters. If the parameters are only locally identifiable, we need to limit their search range. \item Step 3: Parameter estimation via systems-biology informed neural network (SBINN) (Section~\ref{sec:SBINN}). We next use an SBINN to infer the unknown model parameters from the data. \item Step 4: Practical identifiability analysis (Section~\ref{sec:pract-ident}). With the inferred parameters, we check the quality of the estimates via practical identifiability analysis. If the parameters are practically identifiable, we can use the identified model for forecasting; otherwise, we need to revisit Step~1. \end{itemize} The code used in this chapter is publicly available from the GitHub repository \url{https://github.com/lu-group/sbinn}. \section{Ultradian endocrine model for glucose-insulin interaction} \label{sec:model-definition} To demonstrate the methods, we consider the system of glucose-insulin interactions and use a relatively simple ultradian model \cite{sturis1991computer} with 6 state variables and 21 parameters. The state variables are the plasma insulin concentration $I_{p}$, the interstitial insulin concentration $I_{i}$, the glucose concentration $G$, and a three-stage filter $(h_1,h_2,h_3)$ that mimics the response of the plasma insulin to glucose levels.
Eqs.~\eqref{eq:glucose1} and \eqref{eq:glucose2} provide the system of equations for the model, where the major parameters include (i) $E$, a rate constant for the exchange of insulin between the plasma and remote compartments; (ii) $I_G$, the exogenous (externally driven) glucose delivery rate; (iii) $t_p$, the time constant for plasma insulin degradation; (iv) $t_i$, the time constant for the remote insulin degradation; (v) $t_d$, the delay time between plasma insulin and glucose production; (vi) $V_p$, the volume of insulin distribution in the plasma; (vii) $V_i$, the volume of the remote insulin compartment; (viii) $V_g$, the volume of the glucose space \cite{sturis1991computer,albers2017personalized}. Furthermore, in Eq.~\eqref{eq:glucose2}, $f_1(G)$ provides the rate of insulin production; $f_2(G)$ defines insulin-independent glucose utilization; $f_3(I_i)$ is the insulin-dependent glucose utilization; and $f_4(h_3)$ represents delayed insulin-dependent glucose utilization. \begin{subequations}\label{eq:glucose1} \begin{eqnarray} \frac{dI_p}{dt} = f_1(G)-E\bigl(\frac{I_{p}}{V_{p}}-\frac{I_i}{V_{i}}\bigr)-\frac{I_{p}}{t_{p}}, && \frac{dI_i}{dt} = E\bigl(\frac{I_{p}}{V_{p}}-\frac{I_i}{V_{i}}\bigr)-\frac{I_{i}}{t_{i}},\\ \frac{dG}{dt} = f_4(h_3)+I_{G}(t)-f_2(G)-f_3(I_i)G, && \frac{dh_1}{dt} = \frac{1}{t_d}\bigl(I_p-h_1\bigr), \\ \frac{dh_2}{dt} = \frac{1}{t_d}\bigl(h_1-h_2\bigr), && \frac{dh_3}{dt} = \frac{1}{t_d}\bigl(h_2-h_3\bigr), \end{eqnarray} \end{subequations} where $f_1$--$f_4$ and the nutritional driver of the model $I_G(t)$ are given by \begin{subequations}\label{eq:glucose2} \begin{eqnarray} f_1(G) = \frac{R_m}{1+ \exp(\frac{-G}{V_g C_1} + a_1)}, && f_2(G) = U_b \left(1-\exp(\frac{-G}{C_2V_g}) \right), \\ f_3(I_i) = \frac{1}{C_3 V_g} \left( U_0 + \frac{U_m}{1+(\kappa I_i)^{-\beta}} \right), && f_4(h_3) = \frac{R_g}{1 + \exp(\alpha (\frac{h_3}{C_5 V_p}-1))}, \\ \kappa = \frac{1}{C_4} \left(\frac{1}{V_i} + \frac{1}{E t_i} \right), && I_G(t) = \sum^N_{j=1}{m_j
k\exp(k(t_j-t))}, \end{eqnarray} \end{subequations} where the nutritional driver $I_G(t)$ is a systematic forcing term that represents the nutritional intake of glucose and is defined over $N$ discrete nutrition events \cite{albers2014dynamical}, with $k$ the decay constant; event $j$ occurs at time $t_j$ with carbohydrate quantity $m_j$. The nominal values of the parameters are provided in Table \ref{table:glucose-app}. \begin{table}[htbp] \centering \caption{\textbf{Parameters for the ultradian glucose-insulin model \cite{albers2017personalized}.} The search range of the first 7 parameters is adopted from \cite{sturis1991computer}, and the search range of the other parameters is $(0.2p^*, 1.8p^*)$, where $p^*$ is the nominal value of that parameter.} \begin{tabular}{ccccc} \toprule Parameter & Nominal value & Unit & Search range & Inferred value \\ \midrule $V_p$ & $3$ & $lit$ & -- & --\\ \hline $V_i$ & $11$ & $lit$ & -- & --\\ \hline $V_g$ & $10$ & $lit$ & -- & --\\ \hline $E$ & $0.2$ & $lit \ min^{-1}$ & (0.100, 0.300) & $0.201$\\ \hline $t_p$ & $6$ & $min$ & (4.00, 8.00) & $5.99$ \\ \hline $t_i$ & $100$ & $min$ & (60.0, 140) & $101.20$ \\ \hline $t_d$ & $12$ & $min$ & (25/3, 50/3) & $11.98$ \\ \hline $k$ & $0.0083$ & $min^{-1}$ & (0.00166, 0.0150) & $0.00833$ \\ \hline $R_m$ & $209$ & $mU \ min^{-1}$ & (41.8, 376) & $208.62$ \\ \hline $a_1$ & $6.6$ & & (1.32, 11.9) & $6.59$ \\ \hline $C_1$ & $300$ & $mg \ lit^{-1}$ & (60.0, 540) & $301.26$ \\ \hline $C_2$ & $144$ & $mg \ lit^{-1}$ & (28.8, 259) & $37.65$ \\ \hline $C_3$ & $100$ & $mg \ lit^{-1}$ & -- & -- \\ \hline $C_4$ & $80$ & $mU \ lit^{-1}$ & (16.0, 144) & $78.76$ \\ \hline $C_5$ & $26$ & $mU \ lit^{-1}$ & (5.20, 46.8) & $25.94$ \\ \hline $U_b$ & $72$ & $mg \ min^{-1}$ & (14.4, 130) & $71.33$ \\ \hline $U_0$ & $4$ & $mg \ min^{-1}$ & (0.800, 7.20) & $0.0406C_3$ \\ \hline $U_m$ & $90$ & $mg \ min^{-1}$ & (18.0, 162) & $0.890C_3$ \\ \hline $R_g$ & $180$ & $mg \ min^{-1}$ & (36.0, 324) & $179.86$ \\ \hline
$\alpha$ & $7.5$ & & (1.50, 13.5) & $7.54$ \\ \hline $\beta$ & $1.772$ & & (0.354, 3.190) & $1.783$ \\ \bottomrule \end{tabular} \label{table:glucose-app} \end{table} Synthetic data are generated by numerically solving the system from time $t=0$ to $t=1800 \ min$ with the initial conditions $\mathbf{x}(0)=[12.0\ (\mu U/ml) \ 4.0\ (\mu U/ml) \ 110.0\ (mg/dl) \ 0.0 \ 0.0 \ 0.0]$ and three nutrition events $(t_j, m_j) = [(300, 60) \ (650, 40) \ (1100, 50)]$ in units of $(min, g)$. This completes the first two steps of the flowchart in Fig.~\ref{fig:flowchart}. We assume the only observable is the glucose level $G$, whose measurements are sampled randomly, as shown in Fig.~\ref{fig:glucose-input}. \begin{figure}[htbp] \centering \includegraphics{ultradian_glucose-insulin_model_observation_data_for_parameter_inference.pdf} \caption{\textbf{Ultradian glucose-insulin model observation data for parameter inference.} 360 measurements of the glucose level ($G$) only are randomly sampled in the time window of $0-1800$ minutes ($\sim$ one day). Figure is adapted with permission from \cite{yazdani2020systems}.} \label{fig:glucose-input} \end{figure} \section{Structural identifiability analysis} \label{sec:struc-ident} In this section, we investigate whether a set of unknown parameters in the ultradian endocrine model for glucose-insulin interaction is structurally identifiable from the glucose concentration data $G$. Simply fitting a model to the data is not sufficient to show how reliable the estimated parameters are: if a model is structurally non-identifiable, very different parameter sets can fit the data equally well. To resolve the non-identifiability issue, there are two options: one is to acquire data for more species, and the other is to fix certain parameters at their nominal values.
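The synthetic-data generation described above can be sketched with \texttt{scipy}; this is an illustrative sketch only (the function names and the conversion of meal sizes from g to mg are our assumptions), not the chapter's released code.

```python
# Sketch: generate synthetic glucose data by integrating the ultradian model
# with scipy; parameters follow Table 1. Unit handling of meal sizes (g -> mg)
# is an assumption of this sketch.
import numpy as np
from scipy.integrate import solve_ivp

# Nominal parameter values (Table 1)
Vp, Vi, Vg = 3.0, 11.0, 10.0
E, tp, ti, td = 0.2, 6.0, 100.0, 12.0
k = 0.0083
Rm, a1 = 209.0, 6.6
C1, C2, C3, C4, C5 = 300.0, 144.0, 100.0, 80.0, 26.0
Ub, U0, Um, Rg, alpha, beta = 72.0, 4.0, 90.0, 180.0, 7.5, 1.772

# Three nutrition events: (t_j in min, m_j converted from g to mg)
meals = [(300.0, 60e3), (650.0, 40e3), (1100.0, 50e3)]

def IG(t):
    # Nutritional driver: a decaying exponential per past meal, Eq. (2)
    return sum(mj * k * np.exp(k * (tj - t)) for tj, mj in meals if t >= tj)

def rhs(t, x):
    Ip, Ii, G, h1, h2, h3 = x
    f1 = Rm / (1.0 + np.exp(-G / (Vg * C1) + a1))
    f2 = Ub * (1.0 - np.exp(-G / (C2 * Vg)))
    kappa = (1.0 / Vi + 1.0 / (E * ti)) / C4
    f3 = (U0 + Um / (1.0 + (kappa * Ii) ** (-beta))) / (C3 * Vg)
    f4 = Rg / (1.0 + np.exp(alpha * (h3 / (C5 * Vp) - 1.0)))
    return [
        f1 - E * (Ip / Vp - Ii / Vi) - Ip / tp,  # dIp/dt
        E * (Ip / Vp - Ii / Vi) - Ii / ti,       # dIi/dt
        f4 + IG(t) - f2 - f3 * G,                # dG/dt
        (Ip - h1) / td,                          # three-stage filter
        (h1 - h2) / td,
        (h2 - h3) / td,
    ]

x0 = [12.0, 4.0, 110.0, 0.0, 0.0, 0.0]
t_eval = np.linspace(0.0, 1800.0, 361)
sol = solve_ivp(rhs, (0.0, 1800.0), x0, t_eval=t_eval, rtol=1e-6, atol=1e-8)
G_data = sol.y[2]  # glucose, the only observable
```

In practice one would subsample such a trajectory randomly (360 points in the chapter) and feed only $G$ to the SBINN.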
Suppose we are given a dynamical system of the following abstract form \begin{equation*} X' = f(X,\Theta, u), \quad y = g(X,\Theta, u), \end{equation*} where $X = (X_1,\cdots,X_n)$ represents the state variables and $y= (y_1,\cdots,y_m)$ represents the observables. $\Theta = \left(\theta_1,\cdots,\theta_k\right)$ contains the parameters to identify, and $u$ represents the input variable to the system. A parameter set $\Theta$ is called structurally \textit{globally} identifiable if \begin{equation}\label{eq:struc_model} g(X,\Theta, u) = g(X,\Phi, u) \quad\implies \quad\Theta = \Phi \end{equation} for every $\Phi = (\phi_1,\cdots,\phi_k)$ in the same space as $\Theta$. \textit{Local} identifiability only requires Eq.~(\ref{eq:struc_model}) to hold in a neighbourhood of $\Theta$. As a consequence, if a model parameter turns out to be only locally identifiable, it is suggested that one limit the search range for this parameter before fitting the model. For globally identifiable parameters, this step is not required. In this section, we only test for the local identifiability of the system, since the existing software packages may suffer from out-of-memory issues when testing the global identifiability of a system with a large number of state variables and a small number of observables. For convenience, we will refer to a system as being identifiable when it is structurally locally identifiable. We use the Julia library \textit{StructuralIdentifiability} \cite{structidjl} to test for structural identifiability of the model. The existing algorithms implemented in the library require both $f$ and $g$ to be rational functions, i.e., ratios of polynomials. \subsection{Preprocessing} There are exponential functions and power functions in Eq.~\eqref{eq:glucose2} of the model, and thus a preprocessing step is required to remove the transcendental components of the system.
One method is to introduce a set of extra state variables $g_i$ that are equal to these transcendental components and apply the chain rule to find their derivatives. In our example, one can set \begin{subequations} \begin{eqnarray*} g_1(t) = 1+ \exp(\frac{-G(t)}{V_g C_1} + a_1), && g_2(t) = 1-\exp(\frac{-G(t)}{C_2V_g}),\\ g_3(t) = 1+(\kappa I_i(t))^{-\beta}, && g_4(t) = 1 + \exp(\alpha (\frac{h_3(t)}{C_5 V_p}-1)). \end{eqnarray*} \end{subequations} It follows from the chain rule that \begin{subequations} \begin{eqnarray*} \frac{dg_1}{dt} = -\frac{g_1 -1}{V_gC_1}\frac{dG}{dt}, && \frac{dg_2}{dt} = -\frac{g_2 -1}{V_gC_2}\frac{dG}{dt},\\ \frac{dg_3}{dt} = -\beta \kappa\frac{g_3-1}{\kappa I_i}\frac{dI_i}{dt}, && \frac{dg_4}{dt} = \frac{\alpha}{C_5V_p}(g_4-1)\frac{dh_3}{dt}. \end{eqnarray*} \end{subequations} The ODE system in Eqs.~\eqref{eq:glucose1} and \eqref{eq:glucose2} can be rewritten in the following rational form: \begin{subequations}\label{eq:new_system} \begin{eqnarray} \frac{dI_p}{dt} = \frac{R_m}{g_1} - E(\frac{I_p}{V_p}-\frac{I_i}{V_i})-\frac{I_p}{t_p}, && \frac{dI_i}{dt} = E(\frac{I_p}{V_p}-\frac{I_i}{V_i})-\frac{I_i}{t_i},\\ \frac{dG}{dt} = \frac{R_g}{g_4} + I_G - U_bg_2-\frac{G}{C_3V_g}(U_0+\frac{U_m}{g_3}), && \frac{dg_1}{dt} =-\frac{g_1-1}{V_gC_1}\frac{dG}{dt},\\ \frac{dg_2}{dt} =-\frac{g_2-1}{V_gC_2}\left(\frac{R_g}{g_4} + I_G - U_bg_2-\frac{G}{C_3V_g}(U_0+\frac{U_m}{g_3})\right), && \frac{dg_3}{dt} =-\beta \kappa\frac{g_3-1}{\kappa I_i}\frac{dI_i}{dt},\\ \frac{dg_4}{dt} = \frac{\alpha}{C_5V_pt_d}(g_4-1)(h_2-h_3), && \frac{dh_1}{dt} =\frac{1}{t_d}(I_p-h_1),\\ \frac{dh_2}{dt} =\frac{1}{t_d}(h_1-h_2), && \frac{dh_3}{dt} =\frac{1}{t_d}(h_2-h_3), \end{eqnarray} \end{subequations} where $I_G$ is treated as the input to the system and $G$ is the output/observable of the system. Note that the initial conditions of all the ODE systems are assumed to be unknown for the \textit{StructuralIdentifiability} library to work.
This means there are 4 extra degrees of freedom lying in the initial conditions of $g_1,g_2,g_3,g_4$ in Eq.~\eqref{eq:new_system} compared to Eqs.~\eqref{eq:glucose1} and \eqref{eq:glucose2}. Consequently, any identifiable parameter in the new system will be identifiable in the original system, but not the other way around. Our goal now reduces to finding a set of identifiable parameters in Eq.~\eqref{eq:new_system}. \subsection{Structural identifiability results} By inspecting the ODE in Eq.~\eqref{eq:new_system}, we find three scaling invariances, namely \begin{equation}\label{eq:invariance} \begin{cases} R_m \to cR_m \\ I_p(0) \to cI_p(0) \\ I_i(0) \to cI_i(0) \end{cases},\quad \begin{cases} \alpha \to c\alpha \\ C_5 \to cC_5 \end{cases},\quad \begin{cases} U_0 \to cU_0 \\ U_m \to cU_m \\ C_3 \to cC_3 \end{cases}, \end{equation} which reveal an intrinsic structural non-identifiability of these parameters of the model. To remove the scaling invariances, one needs to fix one parameter in each group of Eq.~\eqref{eq:invariance}. For illustration purposes, we fix $R_m$, $C_3$, and $C_5$ at their nominal values in Table~\ref{table:glucose-app} and check whether the rest of the parameters are locally identifiable; see the code in Fig.~\ref{code:julia_model}. \begin{figure}[htbp] \centering \includegraphics{specify_the_ode_model.pdf} \caption{\textbf{Specify the ODE model.} We specify the parametric ODE model in Eq.~\eqref{eq:new_system} using the \texttt{@ODEmodel} macro. \texttt{x'(t)} is the derivative of state variable \texttt{x(t)}, which is assumed to be unknown if not specified otherwise. \texttt{y(t)} defines the output variable, which is assumed to be given. The last line tests the local identifiability of the model.} \label{code:julia_model} \end{figure} Here, 15 undetermined parameters remain in the modified system, and it is impossible to fit all of them simultaneously, as shown in the first row of Table~\ref{table:struc_moredata}.
This is reasonable because we assume that there is only one observable, $G$, and it is hard to infer all parameters with a limited amount of data. As demonstrated in Fig.~\ref{fig:flowchart}, one possible option to resolve the identifiability issue is to acquire more data. It can be observed from the second row of Table~\ref{table:struc_moredata} that taking $I_p$ and $I_i$ as additional observables makes $t_p$ and $t_i$ locally identifiable. Still, a large proportion of the parameters remain structurally non-identifiable. The second option (fixing certain parameters) is also considered. Here, we consider three different cases, where $(V_p)$, $(V_p,V_i)$, and $(V_p,V_i,V_g)$ are fixed, respectively. We still assume that we only have the glucose concentration $G$ available. In the fourth and fifth rows of Table~\ref{table:struc_moredata}, we see that more parameters become identifiable when we fix $V_p$ and $V_i$, but the model is still not identifiable. It is only identifiable when all three parameters are fixed. \begin{table}[htbp] \centering \caption{\textbf{Local structural identifiability result of the ultradian endocrine model with different observables and parameters.} $R_m$, $C_3$, and $C_5$ are fixed in advance. More parameters become structurally identifiable when more data ($I_p$ and $I_i$) are given.
With only $G$ given, the model is structurally locally identifiable when $V_p, V_i, V_g$ are fixed.} \label{table:struc_moredata} \begin{tabular}{cccccccccccccccc} \hline Parameter & $V_p$ & $V_i$ & $V_g$ & $E$ & $t_p$ & $t_i$ & $t_d$ & $C_1$ & $C_2$ & $U_b$ & $U_0$ & $U_m$ & $R_g$ & $\alpha$ & $\beta$ \\ \hline Given $G$ & \xmark & \xmark & \xmark & \xmark & \xmark & \xmark & \cmark & \xmark & \xmark & \cmark & \xmark & \xmark & \cmark & \xmark & \cmark \\ \hline Given $G, I_p, I_i$ & \xmark & \xmark & \xmark & \xmark & \cmark & \cmark & \cmark & \xmark & \xmark & \cmark & \xmark & \xmark & \cmark & \xmark & \cmark \\ \hline Given $G$ & -- & \xmark & \xmark & \xmark & \xmark & \xmark & \cmark & \xmark & \xmark & \cmark & \xmark & \xmark & \cmark & \cmark & \cmark \\ \hline Given $G$ & -- & -- & \xmark & \cmark & \cmark & \cmark & \cmark & \xmark & \xmark & \cmark & \xmark & \xmark & \cmark & \cmark & \cmark \\ \hline Given $G$ & -- & -- & -- & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark \\ \hline \end{tabular} \end{table} In summary, the ODE system of Eq.~\eqref{eq:new_system} is structurally locally identifiable when $R_m$, $C_3$, $C_5$, $V_p$, $V_i$, and $V_g$ are fixed. As a final step, we relate the identifiability of the modified system to the original ultradian glucose-insulin model described by Eqs.~\eqref{eq:glucose1} and \eqref{eq:glucose2}. Note that the scaling invariance between $R_m$ and $I_p(0), I_i(0)$ breaks when the latter two are provided in the training as the initial conditions. Also, the scaling invariance between $\alpha$ and $C_5$ does not hold in the original system, since the value for $\alpha$ can be uniquely determined by the initial condition for $g_4$. The scaling invariance for $U_0$, $U_m$, and $C_3$ still holds in Eqs.~\eqref{eq:glucose1} and \eqref{eq:glucose2}, but one can expect the ratios $U_0/C_3$ and $U_m/C_3$ to be constant.
Therefore, one only needs to fix $V_p$, $V_i$, and $V_g$ in the parameter estimation process. \section{Parameter estimation via SBINN} \label{sec:SBINN} \subsection{Deep neural networks} \label{sec:DNN} Deep neural networks (DNNs) transform their inputs by recursively applying linear and nonlinear operations, i.e., they are compositional functions. Many types of DNNs have been developed, such as convolutional neural networks and recurrent neural networks, and here we only consider fully connected neural networks (FNNs). An FNN is composed of many layers (Fig.~\ref{fig:nn}). We denote an $L$-layer neural network (i.e., $(L-1)$ hidden layers) by $\mathcal{N}^L(\mathbf{x}): \mathbb{R}^{d_{\text{in}}} \to \mathbb{R}^{d_{\text{out}}}$, where $d_{\text{in}}$ and $d_{\text{out}}$ are the dimensions of the input and output, respectively. Each layer has a number of neurons, which can be thought of as data processors that take the output of the previous layer as the input, transform it, and then provide the output to the next layer. We use $N_\ell$ to denote the number of neurons in the $\ell$-th layer. At the input layer we have $N_0 = d_{\text{in}}$, and at the output layer we have $N_L = d_{\text{out}}$. \begin{figure}[htbp] \centering \includegraphics[width=6cm]{architecture_of_a_fully_connected_neural_network.pdf} \caption{\textbf{Architecture of a fully connected neural network.} A neural network consists of an input layer (the input $t$), several hidden layers (composed of weights $\bm{W}^{\ell}$, biases $\bm{b}^{\ell}$, and activation function $\sigma$), and an output layer.} \label{fig:nn} \end{figure} To define an FNN rigorously, in the $\ell$-th layer, we define a weight matrix $\bm{W}^\ell$, a bias $\mathbf{b}^\ell$, and an activation function $\sigma$. Examples of $\sigma$ include the logistic sigmoid ($1/(1+e^{-x})$), the hyperbolic tangent ($\tanh$), and the rectified linear unit (ReLU, $\max\{x, 0\}$).
Then an FNN is defined as: \begin{align*} \text{input layer:} & \quad \mathcal{N}^0(\textbf{x}) = \textbf{x} \in \mathbb{R}^{d_{\text{in}}}, \\ \text{hidden layers:} & \quad \mathcal{N}^\ell(\textbf{x}) = \sigma(\bm{W}^{\ell}\mathcal{N}^{\ell-1}(\textbf{x}) + \bm{b}^{\ell}) \in \mathbb{R}^{N_\ell}, \quad \text{for} \quad 1 \le \ell \le L-1, \\ \text{output layer:} & \quad \mathcal{N}^{L}(\textbf{x}) = \bm{W}^{L}\mathcal{N}^{L-1}(\textbf{x}) + \bm{b}^{L} \in \mathbb{R}^{d_{\text{out}}}. \end{align*} All the weights and biases are the neural network parameters $\boldsymbol{\theta}$. \subsection{Systems-biology informed neural networks (SBINN)} SBINN was proposed in \cite{yazdani2020systems} and uses systems-biological models (e.g., Eqs.~\eqref{eq:glucose1} and \eqref{eq:glucose2}) to inform a deep neural network. The network input is time $t$, and the output is a vector of state variables $\hat{\mathbf{x}}(t; \boldsymbol{\theta}) = (\hat{x}_1(t; \boldsymbol{\theta}), \hat{x}_2(t; \boldsymbol{\theta}), \dots, \hat{x}_S(t; \boldsymbol{\theta}))$, which acts as a proxy to the ODE solution. We use Python to implement the code; see Appendix \ref{AppendixA} for an introduction to Python. We can directly implement SBINN using general deep learning frameworks such as TensorFlow \cite{abadi2016tensorflow} and PyTorch \cite{paszke2019pytorch}, but the implementation becomes much easier if we use the open-source library DeepXDE \cite{lu2019deepxde}. DeepXDE is a library for scientific machine learning and can use either TensorFlow or PyTorch as its computational engine (called backend). We begin by importing DeepXDE and the backend being used (Fig.~\ref{code:python_import}). Here we choose TensorFlow as the backend; the code for the PyTorch backend is almost the same.
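Before moving to DeepXDE, the layer-by-layer FNN definition above can be written as a short NumPy sketch (illustrative only; in practice DeepXDE constructs and trains the network for us):

```python
# Sketch: an L-layer FNN as defined in Sec. 4.1, in plain NumPy.
# Function names are illustrative, not from the chapter's code.
import numpy as np

def init_fnn(layer_sizes, rng):
    """Glorot-style random weights and zero biases for each layer."""
    params = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / (n_in + n_out)), (n_out, n_in))
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def fnn(params, x):
    """Forward pass: tanh on hidden layers, linear output layer."""
    a = x
    for W, b in params[:-1]:
        a = np.tanh(W @ a + b)   # hidden layers: sigma(W a + b)
    W, b = params[-1]
    return W @ a + b             # output layer: no activation

rng = np.random.default_rng(0)
params = init_fnn([1, 128, 128, 128, 6], rng)  # input t -> 6 state variables
out = fnn(params, np.array([0.5]))             # one forward pass
```

Here `[1, 128, 128, 128, 6]` mirrors a network mapping time $t$ to the six state variables of the ultradian model; the widths are illustrative.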
\begin{figure}[htbp] \centering \includegraphics{importing_deepxde_and_the_tensorflow_backend.pdf} \caption{\textbf{Importing DeepXDE and the TensorFlow backend.}} \label{code:python_import} \end{figure} We then implement SBINN. As the first step, we define all parameters to estimate (all the parameters in Table \ref{table:glucose-app} except $V_p$, $V_i$, and $V_g$, which are easily measurable) with an initial guess of zero using \verb|dde.Variable|, and create a list of all the variables to be used later (Fig.~\ref{code:sbinn}). \begin{figure}[htbp] \centering \includegraphics{creating_parameters_to_estimate.pdf} \caption{\textbf{Creating parameters to estimate.} We initialize all parameters to zero and create a list of these parameters.} \label{code:sbinn} \end{figure} Next, we use these parameters to implement the ODEs for the system. Because we only use the observations of $G$, based on our structural identifiability analysis, we need to limit the search range for the parameters. In this case, the range of seven parameters is adopted from \cite{sturis1991computer}, and the range for the other parameters is set as $(0.2p^*, 1.8p^*)$, where $p^*$ is the nominal value of that parameter (Table~\ref{table:glucose-app}). We implement the search range and the ODE system of Eqs.~\eqref{eq:glucose1} and \eqref{eq:glucose2} in Fig.~\ref{code:ode}. \begin{figure}[htbp] \centering \includegraphics{implementation_of_the_ode_system_in_eqs_1_and_2.pdf} \caption{\textbf{Implementation of the ODE system in Eqs.~\eqref{eq:glucose1} and \eqref{eq:glucose2}.} The Python function \texttt{ODE} returns the residuals of all ODEs, i.e., the difference between the left-hand side and the right-hand side of each ODE.} \label{code:ode} \end{figure} The next step is to import the data measurements of the glucose concentration $G$ via \texttt{dde.PointSetBC} (Fig.~\ref{code:BC}). Other data measurements, such as the initial conditions, can be incorporated similarly.
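One generic way to keep an inferred parameter inside its search range during gradient-based training is to pass an unconstrained trainable variable through a squashing function. This is a sketch of the general technique, not the chapter's exact implementation:

```python
# Sketch: map an unconstrained trainable variable into a bounded search
# range via tanh. The helper name `bounded` is our own, for illustration.
import numpy as np

def bounded(v, lo, hi):
    """Map an unconstrained variable v to the open interval (lo, hi)."""
    return lo + (hi - lo) * (np.tanh(v) + 1.0) / 2.0

# Example: E must stay in its search range (0.100, 0.300) from Table 1.
v_E = 0.0                        # trainable, unconstrained
E = bounded(v_E, 0.100, 0.300)   # -> 0.2, the midpoint, at v_E = 0
```

Because `tanh` is smooth, gradients flow through the transform, and the optimizer can never push a parameter outside its admissible range.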
\begin{figure}[htbp] \centering \includegraphics{implementation_of_the_data_observation_of_g.pdf} \caption{\textbf{Implementation of the data observation of $G$.}} \label{code:BC} \end{figure} \begin{figure}[htbp] \centering \includegraphics{neural_network_with_features_architecture.pdf} \caption{\textbf{Neural network architecture for SBINN.} The input-scaling layer and output-scaling layer scale the network input and outputs to order one. The feature layer provides features directly to the first fully-connected layer.} \label{fig:featurenn} \end{figure} We have implemented the ODE system and data measurements. Next, we build our neural network model. To speed up network training, rather than using just the FNN described in Section~\ref{sec:DNN}, we can add the additional layers described as follows (Fig.~\ref{fig:featurenn}). \begin{itemize} \item Input-scaling layer. In the case of a large time domain, $t$ varies by multiple orders of magnitude, which negatively affects the NN training. We apply a linear scaling function to $t$ using the maximum value of the time domain $T$ to create $\tilde{t} = t/T$, which will be $\sim \mathcal{O}(1)$. \item Feature layer. Often, ODE solutions have a pattern such as periodicity or exponential decay. Rather than letting the NN determine these features on its own, we add these patterns in a feature layer. Though the feature choice is problem specific, the setup is similar for any problem. We use the $L$ functions $e_1(\cdot), e_2(\cdot), \dots, e_L(\cdot)$ to construct the $L$ features $e_1(\tilde{t}), e_2(\tilde{t}), \dots, e_L(\tilde{t})$, as seen in Fig.~\ref{code:feature_transform}. If no pattern is easily identifiable, it is better to leave out the feature layer than to include something incorrect; this is just a technique to aid training, not a requirement for the SBINN to work.
\begin{figure}[htbp] \centering \includegraphics{input_scaling_and_feature_transform.pdf} \caption{\textbf{Input scaling and feature transform.} We use the periodicity of $\sin$ as our feature.} \label{code:feature_transform} \end{figure} \item Output-scaling layer. The outputs $\hat{x}_1, \hat{x}_2, \dots, \hat{x}_S$ may differ by orders of magnitude. As such, we can scale the network outputs by $\hat{x}_1 = k_1\tilde{x}_1$, $\hat{x}_2 = k_2\tilde{x}_2$, $\dots, \hat{x}_S = k_S\tilde{x}_S$, as in Fig.~\ref{code:output_transform}, where $k_1, k_2, \dots, k_S$ are the magnitudes of the ODE solution $x_1, x_2, \dots, x_S$, respectively. \begin{figure}[htbp] \centering \includegraphics{output_transform_to_scale_the_output_of_the_parameters.pdf} \caption{\textbf{Output transform to scale the outputs of the network.}} \label{code:output_transform} \end{figure} \end{itemize} To train the neural network, we need to constrain it to the system of ODEs and the observations. This is done by defining a loss function, which computes the difference between the output of the neural network and the desired behavior: matching the data at the times $t_1, t_2, \dots, t_{N^{data}}$ and satisfying the ODEs at the time points $\tau_1, \tau_2, \dots, \tau_{N^{ode}}$. The ODE time points could be chosen at random or uniformly spaced. We define the total loss as a function of $\boldsymbol{\theta}$ and $\mathbf{p}$: \begin{equation*} \mathcal{L}(\boldsymbol{\theta}, \mathbf{p}) = \mathcal{L}^{data}(\boldsymbol{\theta}) + \mathcal{L}^{ode}(\boldsymbol{\theta}, \mathbf{p}) + \mathcal{L}^{aux}(\boldsymbol{\theta}). \end{equation*} $\mathcal{L}^{data}$ is defined for $M$ sets of observations $\mathbf{y}$: \begin{equation*} \mathcal{L}^{data}(\boldsymbol{\theta}) = \sum_{m=1}^M w^{data}_m \mathcal{L}^{data}_m = \sum_{m=1}^M w^{data}_m \left[\frac{1}{N^{data}} \sum_{n=1}^{N^{data}} \left(y_m(t_n) - \hat{x}_{s_m}(t_n;\boldsymbol{\theta})\right)^2 \right].
\end{equation*} $\mathcal{L}^{ode}$ is for our system of ODEs: \begin{equation*} \mathcal{L}^{ode}(\boldsymbol{\theta}, \mathbf{p}) = \sum_{s=1}^S w^{ode}_s \mathcal{L}^{ode}_s = \sum_{s=1}^S w^{ode}_s \left[ \frac{1}{N^{ode}} \sum_{n=1}^{N^{ode}} \left( \frac{d\hat{x}_s}{dt} \Big|_{\tau_n} - f_s\left(\hat{\mathbf{x}}(\tau_n;\boldsymbol{\theta}),\tau_n;\mathbf{p}\right)\right)^2 \right]. \end{equation*} The last term in the total loss function is $\mathcal{L}^{aux}$, which is used for additional information on system identification. For example, here we assume that we have measurements of the state variables at two distinct times $T_0$ and $T_1$. While this is essentially a part of the data loss, we include it as its own loss function because it is given for all state variables at the two time instants. Here, we use the initial condition for $T_0$ and the final time instant for $T_1$; other choices of $T_0$ and $T_1$ can also be used depending on the available data: \begin{equation*} \mathcal{L}^{aux}(\boldsymbol{\theta}) = \sum_{s=1}^S w^{aux}_s \mathcal{L}^{aux}_s = \sum_{s=1}^S w^{aux}_s \frac{(x_s(T_0) - \hat{x}_s(T_0;\boldsymbol{\theta}))^2 + (x_s(T_1) - \hat{x}_s(T_1;\boldsymbol{\theta}))^2}{2}. \end{equation*} The weights $w$ are selected such that all parts of the loss function are of the same order of magnitude. With the loss functions set up, we can train the network and infer the parameters of the ODEs $\mathbf{p}$ by minimizing the loss function via a gradient-based optimizer, e.g., Adam \cite{kingma2014adam}: \begin{equation*} \boldsymbol{\theta}^*, \mathbf{p}^* = \arg\min_{\boldsymbol{\theta}, \mathbf{p}} \mathcal{L}(\boldsymbol{\theta}, \mathbf{p}). \end{equation*} We first train for 10,000 epochs with all loss weights set to zero except the data loss, and then train with all parts of the loss function (Fig.~\ref{code:train}). We also track the variables during training using the callback \texttt{VariableValue}.
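The composition of the total loss can be sketched generically as follows (a simplified NumPy sketch with scalar weights instead of the per-term weights $w^{data}_m$, $w^{ode}_s$, $w^{aux}_s$; in practice DeepXDE assembles the loss from the ODE residuals and data objects):

```python
# Sketch: total loss L = L_data + L_ode + L_aux, simplified to scalar
# weights. All names here are illustrative, not the chapter's code.
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return np.mean((np.asarray(a) - np.asarray(b)) ** 2)

def total_loss(y_data, x_hat_data, ode_residuals, x_aux, x_hat_aux,
               w_data=1.0, w_ode=1.0, w_aux=1.0):
    L_data = w_data * mse(y_data, x_hat_data)        # fit the observations
    L_ode = w_ode * np.mean(np.asarray(ode_residuals) ** 2)  # satisfy ODEs
    L_aux = w_aux * mse(x_aux, x_hat_aux)            # states at T0 and T1
    return L_data + L_ode + L_aux
```

Setting `w_ode = w_aux = 0` for the first training stage recovers the data-only warm-up phase described above.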
We also plot the loss function as a function of the training epochs.
\begin{figure}[htbp]
\centering
\includegraphics{training_the_model_using_the_adam_optimizer_with_a_learning_rate_of_.pdf}
\caption{\textbf{Training the model using the Adam optimizer with a learning rate of 0.001.} We first train for 10,000 epochs so that the network fits the data, and then for 600,000 epochs on all parts of the loss.}
\label{code:train}
\end{figure}
\subsection{Results of SBINN}
The inferred parameters are given in Table~\ref{table:glucose-app}. We observe good agreement between the inferred values and the target values. We next consider the case of a nutrition event at $t_j = 2000 \ min$ with a carbohydrate intake of $m_j = 100 \ g$. We then test how well our model can predict this extra nutrition event using the inferred parameters; the results are shown in Fig.~\ref{fig:glucose-output}. The model predicted the glucose levels after the nutrition event with high accuracy.
\begin{figure}[htbp]
\centering
\includegraphics{glucose_data_output.pdf}
\caption{\textbf{Inferred dynamics and forecasting via SBINN.} The network learned the system over the time interval $t \in [0,1800]$. With the estimated parameters, the model accurately forecasts a meal event at $t_j = 2000 \ min$. Figure is adapted with permission from \cite{yazdani2020systems}.}
\label{fig:glucose-output}
\end{figure}
\section{Practical identifiability analysis}
\label{sec:pract-ident}
The Fisher information matrix (FIM) can be used to construct confidence intervals for the parameters as well as to determine their practical identifiability, assuming the parameters are structurally identifiable, as outlined in Fig.~\ref{fig:flowchart}. The main difference between structural and practical identifiability is that structural identifiability analysis is normally conducted before fitting the model and studies the uniqueness of the parameters assuming noiseless observables.
On the other hand, practical identifiability analysis is performed a posteriori and is often used to analyze whether the inferred parameter values are sensitive to noise in the data. As a consequence, we need both analyses to determine whether the fitting result is reliable. We use Julia for practical identifiability analysis. In Julia, we import the required packages via \texttt{using}. If you are unfamiliar with Julia, see Appendix~\ref{AppendixB}. We start by defining the system in Eqs.~\eqref{eq:glucose1} and \eqref{eq:glucose2}. In our case, $I_p$ is written as \texttt{x1}, and so on. For their derivatives, we define them as \texttt{dx[1]}, \texttt{dx[2]}, etc. We also declare all parameters that our SBINN determined as a vector \texttt{p}. The system definition in Julia is shown in Fig.~\ref{code:julia_ode}.
\begin{figure}[htbp]
\centering
\includegraphics{implementation_of_the_ode_system_in_julia.pdf}
\caption{\textbf{Implementation of the ODE system.}}
\label{code:julia_ode}
\end{figure}
To implement the practical sensitivity analysis, we first compute the FIM, which is constructed by estimating the sensitivities of the system of ODEs with respect to the parameters. The code to compute the FIM is in Fig.~\ref{code:FIM}. We note that even though the data was generated with no noise, we need to assume a measurement noise level to compute a meaningful FIM, and we use a low noise level of 1\% in the code. In this example, we only have one observable variable; the code for a problem with more than one observable is almost the same, and we only need to modify \texttt{cov\_error} and \texttt{cols} to indicate the indices of the observable variables; see the example and code in \cite{yazdani2020systems}.
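As a language-agnostic sketch of what the Julia code in Fig.~\ref{code:FIM} computes: for a single observable, the FIM can be assembled from the sensitivity matrix $S$ with entries $S_{nj} = \partial \hat{x}_{obs}(t_n)/\partial p_j$ as $\mathrm{FIM} = S^\top S/\sigma^2$. The NumPy version below is a hypothetical illustration of ours, not the code used in the paper:

```python
import numpy as np

def fim_from_sensitivities(S, sigma):
    """FIM for one observable: S[n, j] = d x_obs(t_n) / d p_j,
    sigma = standard deviation of the measurement noise."""
    return S.T @ S / sigma**2

def correlation_matrix(fim):
    """R_ij = C_ij / sqrt(C_ii * C_jj) with C = FIM^{-1}."""
    C = np.linalg.inv(fim)
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)

def null_directions(fim, tol=1e-8):
    """Eigenvectors of the FIM whose eigenvalues are numerically zero;
    their dominant components flag practically unidentifiable parameters."""
    w, V = np.linalg.eigh(fim)
    return V[:, w < tol * w.max()]
```

With several observables, $\sigma^2$ is replaced by the measurement covariance matrix (the role of \texttt{cov\_error} in the Julia code).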
\begin{figure}[htbp]
\centering
\includegraphics{computing_fim_and_needed_parameters.pdf}
\caption{\textbf{Computing FIM.} \texttt{sigma[3]} in the code refers to the standard deviation of the third state variable $G$.}
\label{code:FIM}
\end{figure}
There are different ways to utilize the FIM, and here we show two important ones: (1) computing the correlation matrix of all parameters, and (2) computing the eigenvectors of the FIM associated with the zero eigenvalues (i.e., null eigenvectors). The correlation matrix $R$ is computed as $R_{ij} = \text{FIM}^{-1}_{ij}/\sqrt{\text{FIM}^{-1}_{ii}\,\text{FIM}^{-1}_{jj}}$. $|R_{ij}| \approx 1$ indicates that the two parameters $i$ and $j$ are highly correlated, and thus are not individually identifiable from each other. The correlation matrix is shown in Fig.~\ref{fig:correlation_matrix}.
\begin{figure}[htbp]
\centering
\includegraphics[width=12cm]{correlation_matrix_for_practical_identifiability.pdf}
\vspace{-10pt}
\caption{\textbf{Correlation matrix.}}
\label{fig:correlation_matrix}
\end{figure}
Next, we compute the eigenvalues and eigenvectors of the FIM. The eigenvalues are shown in Fig.~\ref{fig:eigenvector} (left). There is only one eigenvalue close to 0, and the associated eigenvector (i.e., the null eigenvector) is shown in Fig.~\ref{fig:eigenvector} (right), where the value of the $C_2$ component is dominant and all other components are approximately zero. This indicates that $C_2$ has little to no effect on the state variables and is therefore practically unidentifiable from the dataset.
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\textwidth]{eigenvector_analysis_for_practical_identifiability.pdf}
\vspace{-15pt}
\caption{\textbf{Null eigenvector analysis.} (Left) Eigenvalues of FIM. There is one eigenvalue close to 0. (Right) The eigenvector associated with this eigenvalue.
The dominant component is $C_2$.}
\label{fig:eigenvector}
\end{figure}
In this example, the result from the null eigenvector analysis is consistent with our inferred values in Table~\ref{table:glucose-app}, but the correlation matrix is not. We note that FIM-based practical identifiability analysis has many limitations and can be problematic in certain problems. There are other methods available for determining practical identifiability, such as the bootstrapping approach \cite{balsacanto2008bootstrap} or using a probabilistic framework to quantify the sensitivity of the system dynamics to variations in its parameters \cite{foo2009probabilistic}.
\section{Discussion of time-dependent parameters}
So far we have assumed that all parameters are constant, but in real problems the parameters could vary over time, i.e., they could be time-dependent. Here, we briefly describe the idea of implementing time-dependent parameters in SBINN. Let us assume $p_1$ is a time-dependent parameter to be inferred. We add an extra neuron $\hat{p}_1$ to the network output to represent $p_1$, as shown in Fig.~\ref{fig:time_dependent}, and then $\hat{p}_1$ becomes a function of time. Everything else remains the same as in the SBINN we introduced in Section \ref{sec:SBINN}.
\begin{figure}[htbp]
\centering
\includegraphics[width=6cm]{neural_network_architecture_with_time_dependent_parameters.pdf}
\caption{\textbf{SBINN for time-dependent parameters.}}
\label{fig:time_dependent}
\end{figure}
\section{Summary}
We have provided a complete workflow for analysing biological systems described by a system of ODEs, including structural identifiability analysis, parameter estimation via systems-biology informed neural networks (SBINN), and practical identifiability analysis based on the Fisher information matrix (FIM).
\bibliographystyle{unsrt}
Don't Let Go! Making Your Love Unbreakable
Are you looking for ways to build a strong and lasting relationship with your partner? If you want to find out how to create an ever-lasting connection with your significant other, don't miss out on this great opportunity to learn more about it.

Is It Possible To Be "In Love" But Also Disappointed?
Have you ever been "in love" but also disappointed? It's possible to feel both "in love" and disappointed at the same time. And it's not necessarily a bad thing. Here's why...

5 ways to tell if a man loves you
When it comes to love, sometimes it's hard to tell if a person is really in it for the long haul. This can be especially true for men, who are often more guarded with their emotions. But there are some telltale signs that a man is deeply in love with you.

Relationships Q&A

How does the communication between two people change when they move from a platonic friendship to a romantic relationship?
When two people transition from a platonic friendship to a romantic relationship, the communication between them is likely to become more intimate. They may talk about their innermost thoughts and feelings more openly than they did before and might even share secrets that they wouldn't tell anyone else. As the relationship progresses, there will be a lot of mutual trust and understanding between them, which will in turn allow for a deeper level of communication. Furthermore, there may be more physical contact, such as hugs and kisses, that can also act as a form of communication. In conclusion, when two people move from a platonic friendship to a romantic relationship, the communication between them is likely to become more intimate, deeper, and trusting.
This will in turn create a stronger bond between them that can last for years to come.

How does culture affect the way people express their love?
Culture has a profound effect on how we express love and romance. Each culture has its own set of beliefs, customs, values and behaviors that shape the way people interact with one another in terms of relationships. The language and gestures used to express love can vary greatly from culture to culture. For example, in some cultures it is more common to express love with physical affection, such as hugging and kissing, whereas in other cultures this type of expression may be seen as too forward. The forms of courtship and dating can also vary greatly from culture to culture, with some cultures putting greater emphasis on arranged marriages or family introductions than others. Additionally, how gender roles play into romantic relationships is largely determined by culture, with some cultures having more traditional views on the roles of men and women in relationships. Ultimately, understanding cultural influences on love and romance is key to engaging in meaningful relationships with people from different backgrounds. No matter where someone is from, one thing remains true: love is universal. All cultures have the capacity to love and be loved, but express it differently. It's important to learn about different cultural expressions of love so that all relationships can be enriched with a deeper understanding of each other.
#include "sceneimporter.h"

fs::path GUIGeometryConverter::locateResource(const fs::path &resource) {
    fs::path result;
    emit locateResource(resource, &result);
    return result;
}

SceneImporter::SceneImporter(FileResolver *resolver, const fs::path &sourceFile,
        const fs::path &directory, const fs::path &targetScene,
        const fs::path &adjustmentFile, bool sRGB)
    : Thread("impt"), m_resolver(resolver), m_sourceFile(sourceFile),
      m_directory(directory), m_targetScene(targetScene),
      m_adjustmentFile(adjustmentFile), m_srgb(sRGB) {
    m_wait = new WaitFlag();
}

SceneImporter::~SceneImporter() {
}

void SceneImporter::run() {
    Thread::getThread()->setFileResolver(m_resolver);
#if defined(MTS_HAS_COLLADA)
    try {
        m_converter.setSRGB(m_srgb);
        m_converter.convert(m_sourceFile, m_directory, m_targetScene, m_adjustmentFile);
        m_result = m_converter.getFilename();
    } catch (const std::exception &ex) {
        SLog(EWarn, "Conversion failed: %s", ex.what());
    } catch (...) {
        SLog(EWarn, "An unknown type of error occurred!");
    }
#else
    SLog(EWarn, "The importer was disabled in this build!");
#endif
    m_wait->set(true);
}
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from qgis.core import *
import os
import time

from Action import Action
from ..dialogs import SpatialJoinDialog
from ..util import layer_helper, reader_csv
from ..DEFINES import *
from ..logic import mem, code_generator


class SpatialJoinAction(Action):
    def __init__(self, iface, menu_name):
        super(SpatialJoinAction, self).__init__(iface, menu_name, "1 - Assign ID_CAD")

    def create_dialog(self):
        return SpatialJoinDialog()

    def initialize(self):
        self.volumes_layer = layer_helper.get_layer(self.dlg.volumes_layer_name())
        self.cadastre_layer = layer_helper.get_layer(self.dlg.cadastre_layer_name())
        self.cadastre_terrain_layer = layer_helper.get_layer(self.dlg.cadastre_terrain_layer_name())

        # Copy the attribute table of the volumes layer and add the cadastre ID field.
        self.attributes = []
        self.fields = QgsFields()
        for attr in self.volumes_layer.pendingFields():
            field = QgsField(attr.name(), attr.type())
            self.attributes.append(field)
            self.fields.append(field)
        field = QgsField(FIELD_CODCAT, QVariant.String)
        self.attributes.append(field)
        self.fields.append(field)

    def compute(self, progress):
        cadastre_features = layer_helper.load_features(self.cadastre_layer)
        index = layer_helper.build_spatialindex(cadastre_features.values())
        cadastre_terrain_features = layer_helper.load_features(self.cadastre_terrain_layer)
        index_cadastre_terrain = layer_helper.build_spatialindex(cadastre_terrain_features.values())

        self.new_features = []
        features = layer_helper.load_features(self.volumes_layer)
        count_max = len(features)
        count = 0
        for f in features.values():
            count += 1
            progress.emit(int(count * (100.0 / count_max)))

            feature = QgsFeature(QgsFields(self.fields))

            idf = layer_helper.get_intersection_max_area(index, f, cadastre_features)
            add = False
            if idf >= 0:
                add = True
                feature[FIELD_CODCAT] = cadastre_features[idf][FIELD_CODCAT]

            # Try with the cadastre terrain.
            if not add:
                idf = layer_helper.get_intersection_max_area(index_cadastre_terrain, f,
                                                             cadastre_terrain_features)
                if idf >= 0:
                    add = True
                    feature[FIELD_CODCAT] = cadastre_terrain_features[idf][FIELD_CADASTRE_TERRAIN_ID]

            # Finally, we add this feature.
            if add:
                feature.setGeometry(QgsGeometry(f.geometry()))
                for attr in self.volumes_layer.pendingFields():
                    feature[attr.name()] = f[attr.name()]
                self.new_features.append(feature)

    def apply(self):
        self.output_layer = layer_helper.create_layer(
            os.path.splitext(os.path.basename(self.dlg.location()))[0],
            self.attributes, self.volumes_layer)
        self.output_layer.startEditing()
        self.output_layer.dataProvider().addFeatures(self.new_features)
        self.output_layer.commitChanges()

        if self.dlg.location() != '':
            result = layer_helper.save_layer(self.output_layer, self.dlg.location())
            if result == QgsVectorFileWriter.NoError:
                layer_name = self.output_layer.name()
                QgsMapLayerRegistry.instance().removeMapLayer(self.output_layer.id())
                self.output_layer = self.iface.addVectorLayer(self.dlg.location(), layer_name, "ogr")
            else:
                QgsMessageLog.logMessage("Failed to save layer, error: " + str(result))
The structure of the book has been changed radically, separating benign from malignant conditions, congenital from acquired, and with a strong emphasis on patient choice, safety and prevention of complications. Chapters that are new or radically changed include the opening chapter on the Health Needs of Women in a Changing Society; Teaching and Learning Surgical Techniques; and chapters on Minimal Access Surgery, which are richly illustrated by remarkable photographic images. Chapters on Pelvic Floor and Cancer Surgery retain well-established operative procedures where relevant, but include accounts of exciting newer techniques, such as robotic surgery. Up-to-the-minute chapters on Radiology, Radiotherapy and Chemotherapy are provided by colleagues from these specialties who now form part of modern multidisciplinary teams.

The 6th Edition, which appeared in 2001, reflected the growing replacement of conventional abdominal and vaginal surgery with radiological interventions, endoscopic surgery and medical treatments. The last 10 years have seen worldwide acceptance of the benefits of less invasive gynaecological procedures, allowing ever shorter stays in hospital and quicker recovery for the patient, but often with the use of much more expensive and complex technological equipment. At the same time, there has been an increasing trend towards super-specialisation, so that in many parts of the world gynaecologists focus on a narrower field of activity. Specialists need to be aware of these developments even though the individual gynaecologist may carry out only a limited range of operations or therapies.
\section{Introduction}\label{introduction} There are various ways to associate polyhedra to objects of interest in combinatorial optimization and discrete mathematics. A prominent example with many applications is the \emph{cut polytope}~${\mathrm{Cut}}^{\square}(G)$ of a graph~$G$, which has attracted the attention of researchers from many fields. Cut polytopes are well studied, see for example~\cite{BM, D, DD, DL1}. Indeed, cut polytopes are closely related to some well-known problems, like the \emph{MAXCUT} problem (see for example~\cite{DL, DL2, DL3}) and the \emph{four color} theorem in graph theory (see \cite{LM}). Several geometric properties of cut polytopes of graphs have been studied for instance in~\cite{CKNR, KR, LM, O1}. For more information about cut polytopes of graphs, see in particular the book of Deza and Laurent~\cite{DL}. An important class of classical combinatorial objects is the one of matroids. As a generalization of polyhedral objects like cut polytopes, Barahona and Gr\"otschel introduced in \cite{BGr} the cycle polytope $P_{\mathrm{Cyc}}(M)$ associated to a matroid $M$. From the geometric point of view these polytopes are the core objects studied in this paper. Observe that in the special case that the underlying matroid is the cographic matroid~$M(G)^*$ of a graph~$G$, the cycle polytope coincides with the cut polytope~${\mathrm{Cut}}^{\square}(G)$. A description of the facets of cycle polytopes of binary matroids as well as other properties of interest can be found for instance in \cite{BS, BGr, CP, GLPT, GT, GT1, KS}. Another interesting special case of cycle polytopes arises when the underlying matroid is the graphic matroid $M(G)$ of a given graph $G$. This polytope is called the \emph{Eulerian subgraph polytope} of~$G$ and we denote it by~$\mathrm{Euler}(G)$, see for example~\cite{BGr}. Eulerian subgraphs occur in various contexts in graph theory, see for example~\cite{Catlin}.
In general, there are also toric algebras attached to $0/1$-polytopes which have been of great interest from the point of view of both algebraic geometry and commutative algebra. See the books~\cite{BG, CLS, MS} concerning such algebras and their geometrical aspects. In the particular case of cut polytopes, the aforementioned toric algebras and their defining ideals, which were first studied in~\cite{SS}, are called \emph{cut algebras} and \emph{cut ideals}, respectively. For further studies around cut algebras and ideals, see, e.g.,~\cite{En, KR, KNP, NP, O2, PS, RS}. For applications to algebraic statistics related to binary graph models, Markov random fields and phylogenetic models on split systems as a generalization of binary Jukes-Cantor models, see for example~\cite{SS}. In this paper, besides a better understanding of cycle polytopes of matroids in general, we study the associated toric algebras and ideals, called \emph{cycle algebras} and \emph{cycle ideals}. One of our main approaches is based on the investigation of certain operations on matroids to obtain faces of cycle polytopes, which belong again to this class of objects, as well as induced algebra retracts of cycle algebras. The organization of the paper is as follows. In Section~\ref{matroids}, we recall some ingredients from matroid theory used throughout the paper. In particular, we give a brief overview of some classical operations on matroids as well as well-known classes of matroids. In Section~\ref{cycle polytopes}, cycle polytopes are defined and basic properties of them are studied. In particular, operations on matroids are discussed which yield faces of cycle polytopes belonging again to this class of polytopes. We also pay particular attention to two special classes of cycle polytopes: cut and Eulerian subgraph polytopes. In Section~\ref{algebra retracts}, we introduce cycle algebras and cycle ideals of matroids.
Retracts of cycle algebras are considered to understand transition phenomena of algebraic properties of interest. In particular, we study retracts obtained by faces of cycle polytopes and very useful retracts which do not arise in this way. For this purpose, we first define the new notion of a \emph{matroidal retract} of a matroid in Definition~\ref{matroidal retract-def}. Then, as one of our main results, Theorem~\ref{matroidal retract-theorem} states that matroidal retracts induce algebra retracts. We also introduce \emph{binary matroidal retracts} as a special case of matroidal retracts (see Definition~\ref{binary matroidal retract-def}). Finally, combining binary matroidal retracts with classical deletions as well as certain types of contractions, we get a new type of minors of binary matroids, which we call \emph{generalized series minors} (in short, g-series minors) of a given binary matroid and which are crucial in the rest of the paper. In Section~\ref{Cographic case}, we study g-series minors in the case of cographic matroids. Theorem~\ref{neighborhood-g-series minor-corollary} yields, in particular, the main results of \cite{RS} as a corollary, which provide algebra retracts of cut algebras induced by neighborhood-minors of graphs. In Section~\ref{highest degree}, we study highest possible degrees in minimal homogeneous systems of generators of cycle ideals. As a starting point, in Lemma~\ref{zero ideal} it is characterized when the ideal is zero in terms of data of the underlying matroid. In Lemma~\ref{mu} it is also observed that cycle ideals never contain linear forms. Moreover, Corollary~\ref{mu comparison} yields inequalities between the highest degree $\mu(M)$ of minimal homogeneous systems of generators of cycle ideals of a matroid $M$ and the ones obtained by various types of minors of $M$. In Theorem~\ref{simplification} we get cases where such inequalities are equalities.
In Section~\ref{degree 2}, we focus on small values of $\mu(M)$ for binary matroids. We discuss certain necessary and sufficient conditions for $\mu(M)\leq 2$ and $\mu(M)\leq 5$, respectively. The aforementioned conditions are given in terms of different excluded minors. In particular, all graphic and cographic matroids~$M$ with $\mu(M)\leq 2$ are classified in one of the main results of the paper, Theorem~\ref{degree2-characterization}. We also discuss the relationship of the results of this section and two conjectures posed in~\cite{SS}. Throughout the paper, we discuss examples and also pose several problems and conjectures arising from different parts of our work. \section{Ingredients from matroid theory}\label{matroids} In this section, we give a brief overview of some properties of matroids and certain operations on them as well as of some well-known classes of matroids which are of importance for this work. For a general discussion on matroids see, e.g., \cite{Ox, Ox1}. Let $M$ be a matroid on the ground set $E(M)$, whose set of \emph{independent} sets, \emph{bases} (i.e.~maximal independent sets) and \emph{circuits} (i.e.~minimal dependent sets) are denoted by $\mathcal{I}(M)$, $\mathcal{B}(M)$ and $\mathcal{C}(M)$, respectively. We may also write $E$, $\mathcal{I}$, $\mathcal{B}$ and $\mathcal{C}$ for the aforementioned sets if the matroid $M$ is known from the context. The \emph{dual} matroid $M^*$ of $M$ is the matroid with the same ground set $E$ as $M$ and whose set of bases is defined as $\mathcal{B}(M^*)=\{E-B:B\in \mathcal{B}(M)\}$. It is well-known that a set $C\subseteq E$ is a circuit of $M^*$ if and only if it is a minimal set having non-empty intersection with every basis of $M$. The elements of $\mathcal{B}(M^*)$ and $\mathcal{C}(M^*)$ are also called \emph{cobases} and \emph{cocircuits} of $M$. 
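These dual notions can be verified by brute force on a tiny example (a sketch of ours, not part of the paper): for the cycle matroid of the triangle $K_3$, the bases are the $2$-subsets of the three-element edge set, the cobases are their complements, and the cocircuits are again the $2$-subsets.

```python
from itertools import combinations

def subsets(E):
    for r in range(len(E) + 1):
        for s in combinations(sorted(E), r):
            yield frozenset(s)

def bases_from_circuits(E, circuits):
    # Independent sets contain no circuit; bases are the largest ones.
    indep = [s for s in subsets(E) if not any(c <= s for c in circuits)]
    rank = max(len(s) for s in indep)
    return {s for s in indep if len(s) == rank}

def circuits_from_bases(E, bases):
    # Circuits are the minimal sets contained in no basis.
    dep = [s for s in subsets(E) if not any(s <= b for b in bases)]
    return {s for s in dep if not any(t < s for t in dep)}

# Cycle matroid of the triangle K_3: a single circuit, the whole edge set.
E = {0, 1, 2}
circuits = {frozenset(E)}
bases = bases_from_circuits(E, circuits)        # all 2-subsets
cobases = {frozenset(E) - b for b in bases}     # complements of bases
cocircuits = circuits_from_bases(E, cobases)    # circuits of the dual M^*

# A cocircuit is equivalently a minimal set meeting every basis:
meets_all = [s for s in subsets(E) if s and all(s & b for b in bases)]
minimal = {s for s in meets_all if not any(t < s for t in meets_all)}
assert cocircuits == minimal
```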
For any circuit $C$ and cocircuit $C^*$, one has \begin{equation}\label{intersection of circuit and cocircit} |C\cap C^*|\neq 1; \end{equation} see, e.g., \cite[Proposition~2.1.11]{Ox}. Observe that $(M^*)^*=M$. An element $e\in E$ is called a \emph{loop} of $M$ if $\{e\}$ is a circuit. A pair of elements $e,f\in E$ are called \emph{parallel} in $M$ if $\{e,f\}$ is a circuit. A \emph{parallel class} of $M$ is a maximal subset of $E$ with the property that any two distinct elements of it are parallel, and no element is a loop. The loops, parallel elements and parallel classes of the dual matroid $M^*$ are called \emph{coloops}, \emph{coparallel elements} and \emph{coparallel classes} of $M$. Coparallel classes of $M$ are also known as its \emph{series classes}. A matroid is said to be \emph{simple} (resp.~\emph{cosimple}) if it has no loops (resp.~coloops) and no non-trivial parallel (resp.~coparallel) classes. In particular, if $M$ is simple (resp.~cosimple), then $\{e\}$ is a parallel (resp.~coparallel) class of $M$ for any $e\in E$ and these are the only parallel (resp.~coparallel) classes. Recall that $M$ is connected if and only if for any two distinct elements of $E$, there is a circuit containing both of them; see, e.g., \cite[Proposition~4.1.4]{Ox}. It is well-known that $M$ is connected if and only if $M^*$ is connected; see, e.g., \cite[Corollary~4.2.8]{Ox}. Next, we recall two important matroid operations, namely deletion and contraction. By the \emph{deletion} of $e\in E$ from $M$, one obtains a matroid denoted by $M\setminus e$ with the ground set $E-\{e\}$ and $ \mathcal{C}(M\setminus e)=\{C\subseteq E-\{e\}:C\in \mathcal{C}(M)\}. $ Repeating this procedure yields a deletion of a subset of the ground set. In particular, if $X\subseteq E$, then the \emph{restriction} of $M$ to $X$ is defined as the deletion of $E-X$ from $M$, which is denoted by $M|X$. This is the matroid on $X$ with $ \mathcal{C}(M|X)=\{C\subseteq X:C\in \mathcal{C}(M)\}. 
$ If $e$ is not a loop of $M$, then by the \emph{contraction} of $e$ in $M$, one gets a matroid, denoted by $M/e$, with the ground set $E-\{e\}$ whose circuits are the minimal elements of $\{C-\{e\}: C \in \mathcal{C}(M)\}$. If $e$ is a loop of $M$, then by definition $M/e:=M\setminus e$. Similar to deletion, one can contract a subset $T$ of $E$ in $M$. Then one obtains the matroid $M/T$ on the ground set $E-T$ and whose circuits are the minimal non-empty elements of $\{C-T: C \in \mathcal{C}(M)\}$. Observe that duality, deletion and contraction are related to each other as follows: \begin{equation}\label{dual-deletion-contraction} M^*/T={(M\setminus T)}^*~~~~\text{and}~~~~M^*\setminus T={(M/T)}^*. \end{equation} A \emph{minor} of a matroid $M$ is a matroid which can be obtained from $M$ by a sequence of deletions and contractions. There are special types of minors in the literature such as \emph{parallel minors} and \emph{series minors} which we recall in the following: A \emph{parallel minor} of $M$ is a matroid which can be obtained from $M$ by a sequence of contractions and \emph{parallel deletions}. Here, a parallel deletion of $M$ is a matroid of the form $M\setminus e$ where $e$ is contained in a $2$-circuit of $M$. A \emph{series minor} of $M$ is a matroid which can be obtained from $M$ by a sequence of deletions and \emph{series contractions}. Here, a series contraction of $M$ is a matroid of the form $M/e$ where $e$ is contained in a $2$-cocircuit of $M$. Observe that $N$ is a parallel minor of $M$ if and only if $N^*$ is a series minor of $M^*$. Let $M$ and $N$ be two matroids. Then $M$ is said to be $N$-\emph{minor~free} if $M$ has no minor isomorphic to $N$. The notion of $N$-\emph{parallel minor~free} and $N$-\emph{series minor~free} are defined analogously. Next, we recall another useful operation on matroids. Let $M_1$ and $M_2$ be two matroids with $E(M_1)\cap E(M_2)=\emptyset$. 
The \emph{direct sum} or \emph{$1$-sum} $M_1\oplus M_2$ of $M_1$ and $M_2$ is the matroid with $E(M_1\oplus M_2)=E(M_1)\cup E(M_2)$, \[ \mathcal{I}(M_1\oplus M_2)=\{I_1\cup I_2: I_i\in \mathcal{I}(M_i),i=1,2\}\quad \text{and} \quad \mathcal{C}(M_1\oplus M_2)=\mathcal{C}(M_1)\cup \mathcal{C}(M_2). \] In the remaining part of this section, we briefly recall important classes of matroids which will be used throughout the paper. Let $m,n$ be non-negative integers. The \emph{uniform matroid} $U_{m,n}$ is a matroid on the ground set $E$ of cardinality $n$ whose independent sets are exactly the subsets of $E$ of cardinality at most $m$. Therefore, \[ \mathcal{C}(U_{m,n})=\{C\subseteq E: |C|=m+1\}. \] In particular, the uniform matroids $U_{n,n}$ have no circuits. Indeed, they are the only matroids with this property, and are called \emph{free}. Moreover, $U_{0,0}$, which is the unique matroid with the empty ground set, is called the \emph{empty matroid}. Let $A$ be an $m\times n$-matrix over a field $\KK$, and let $E$ be the set of column labels of $A$. Moreover, let $\mathcal{I}$ be the set of subsets $X$ of $E$ for which the multiset of columns labeled by $X$ is linearly independent in $\KK^m$. Then it is easily seen that $(E,\mathcal{I})$ is a matroid which is called the \emph{vector matroid} of $A$. A matroid $M$ which is isomorphic to the vector matroid of a matrix over a field $\KK$ is said to be \emph{representable} over $\KK$. A matroid which is representable over the field $\FF_2$, i.e.~the unique field with two elements, is called \emph{binary}. There are various characterizations of binary matroids and here we summarize some of them which are used throughout the paper: \begin{Theorem}\label{binary} {\em(}\cite[Theorem~9.1.2 and Theorem~9.1.5]{Ox}{\em)} Let $M$ be a matroid. 
Then the following statements are equivalent: \begin{enumerate} \item[{\em(a)}] $M$ is binary; \item[{\em(b)}] $M$ is $U_{2,4}$-minor-free; \item[{\em(c)}] For every circuit $C$ and cocircuit $C^*$ of $M$, $|C\cap C^*|$ is even; \item[{\em(d)}] If $C_1$ and $C_2$ are distinct circuits of $M$, then $C_1\Delta C_2=(C_1\cup C_2)\setminus (C_1\cap C_2)$ is a disjoint union of circuits. \end{enumerate} \end{Theorem} It is straightforward from Theorem~\ref{binary} that the dual of a binary matroid is binary as well. Moreover, the class of binary matroids is minor-closed; see, e.g., \cite[Proposition~3.2.4]{Ox}. In particular, all different types of minors of binary matroids, which were mentioned in this section, are again binary matroids. Let $G$ be a graph with the edge set $E=E(G)$. Attached to $G$ is the matroid $M(G)$ on the ground set $E$, whose circuits are exactly the edge sets of cycles of $G$. The matroid $M(G)$ is called the \emph{cycle matroid} or \emph{polygon matroid} of $G$. In particular, a loop and a parallel class in $M(G)$, respectively, correspond to a loop and a maximal set of pairwise parallel (or multiple) edges in $G$, respectively. Thus, $M(G)$ is a simple matroid if and only if $G$ is a simple graph, i.e.~a graph with no loops and no parallel edges. For a graph $G$ with at least three vertices and no isolated vertices and loops, one has that $M(G)$ is connected if and only if $G$ is a $2$-connected graph; see, e.g., \cite[Proposition~4.1.8]{Ox}. Let $e\in E$ which is not a loop of $G$. Then the graph $G\setminus e$ denotes the graph on the same vertex set as $G$ obtained by deleting the edge $e$ from $G$. Moreover, the graph $G/e$ denotes the graph which is obtained from $G$ by identifying the endpoints of $e$ and deleting $e$. This operation is called the contraction of the edge $e$ in $G$. 
Then \begin{equation}\label{graphic-deletion-contraction} M(G)\setminus e=M(G\setminus e)~~~~~~~\text{and}~~~~~~~M(G)/e=M(G/e), \end{equation} for any $e\in E$. Note that, similarly to matroids, if $e$ is a loop in $G$, then $G/e=G\setminus e$. Observe that even if $G$ is a simple graph, $G/e$ need not be simple. Any matroid isomorphic to $M(G)$ for some graph $G$ is called a \emph{graphic matroid}. It is well-known that graphic matroids are representable over any field, and in particular, they are binary matroids; see, e.g., \cite[Proposition~5.1.2]{Ox}. It follows immediately from a well-known theorem of Whitney that for any graphic matroid $M$, there exists a connected graph $G$ such that $M$ is isomorphic to $M(G)$. A \emph{cographic matroid} $M(G)^*$ is the dual of a graphic matroid $M(G)$. This is one way to see that cographic matroids are also binary. Recall that an \emph{edge cut} in a graph $G$ is a set of edges $X$ such that $G\setminus X$ has more connected components than $G$. If $X$ consists of only one edge $e$, then $e$ is called a \emph{bridge} of $G$. Observe that the circuits of $M(G)^*$ are exactly the minimal edge cuts of $G$; see, e.g., \cite[Proposition~2.3.1]{Ox}. A minimal edge cut of a graph is also called a \emph{bond}. The loops of $M(G)^*$ are exactly the bridges of $G$, and $e,f$ are parallel elements of $M(G)^*$ if and only if $\{e,f\}$ is a minimal edge cut. Hence, $M(G)^*$ is simple, or equivalently $M(G)$ is cosimple, if and only if any minimal edge cut of $G$ has at least three elements, and, in particular, any edge of $G$ is contained in a cycle of $G$. Throughout this paper, all matroids are assumed to be non-free matroids, and, in particular, they are non-empty matroids. \section{Cycle polytopes of matroids}\label{cycle polytopes} Let $M$ be a matroid. A \emph{cycle} of $M$ is defined to be a disjoint union of some of its circuits. We denote the set of all cycles of $M$ by $\mathrm{Cyc}(M)$.
In particular, $\emptyset\in \mathrm{Cyc}(M)$. Attached to $M$ is a polytope $P_{\mathrm{Cyc}}(M)$ in $\mathbb{R}^{E(M)}$, called the \emph{cycle polytope} of $M$, which is defined as the convex hull of the characteristic vectors of cycles of $M$. Here a characteristic vector $\chi_C$ of a cycle $C$ of $M$ is a $0/1$-vector in $\ZZ^{E(M)}$ whose $e^{th}$ coordinate is $1$ if $e\in C$ and $0$ otherwise. If $M$ is a binary matroid, then $P_{\mathrm{Cyc}}(M)$ is exactly the convex hull of all $0/1$-vectors $x$ in $\RR^{E(M)}$ such that $A x\equiv 0$ (mod $2$) where $A$ is the representation matrix of $M$ over $\FF_2$. It follows from \cite[Theorem~4.1]{BGr} that \[ \dim P_{\mathrm{Cyc}}(M)=\text{the~number~of~coparallel~classes~of}~M. \] In the following theorem we determine certain faces of the cycle polytope of a matroid. First, recall that a \emph{face} $F$ of a polytope $P$ is a subset of $P$ which is the intersection of $P$ with a hyperplane $H$ such that $P$ is entirely contained in one of the two half-spaces defined by $H$. The hyperplane $H$ is then called a \emph{supporting hyperplane} of $P$. Moreover, a \emph{morphism} of polytopes $P$ and $Q$ is a map $\varphi\colon P\rightarrow Q$ which can be extended to an affine map $\tilde{\varphi}\colon \mathrm{aff}(P)\rightarrow \mathrm{aff}(Q)$. If the morphism $\varphi$ is an isomorphism, then the polytopes $P$ and $Q$ are said to be \emph{affinely isomorphic}. \begin{Theorem}\label{face} Let $M$ be a matroid and let $M'$ be a matroid obtained from $M$ by \begin{enumerate} \item [{\em(a)}] a deletion, or \item [{\em(b)}] a series contraction, or \item [{\em(c)}] a coloop contraction. \end{enumerate} Then $P_{\mathrm{Cyc}}(M')$ is affinely isomorphic to a face of $P_{\mathrm{Cyc}}(M)$. In particular, if $M'$ is a series minor of $M$, then $P_{\mathrm{Cyc}}(M')$ is affinely isomorphic to a face of $P_{\mathrm{Cyc}}(M)$. \end{Theorem} \begin{proof} Let $E=E(M)$, $e\in E$ and $E'=E-\{e\}$. 
Then $\RR^{E'}$ is naturally (isomorphic to) a subspace of $\RR^E$. In the following, we denote the $f^{th}$ coordinate of a vector $u$ in $\RR^E$ or $\RR^{E'}$ by ${u}_{f}$, for any $f\in E$. (a) First assume that $M'=M\setminus e$. Let $H$ be the hyperplane in $\RR^{E}$ defined by $x_{e}=0$, and let $F:=P_{\mathrm{Cyc}}(M)\cap H$. Then clearly $F$ is a face of $P_{\mathrm{Cyc}}(M)$. We claim that $P_{\mathrm{Cyc}}(M')$ is affinely isomorphic to $F$. Since zero is an element of $\mathrm{aff}(P_{\mathrm{Cyc}}(M'))$ corresponding to the empty cycle of $M'$, we have that $\mathrm{aff}(P_{\mathrm{Cyc}}(M'))= \mathrm{span}(P_{\mathrm{Cyc}}(M'))$ is a subspace of $\RR^E$. Next, define \begin{eqnarray*} \varphi\colon P_{\mathrm{Cyc}}(M') &\rightarrow & F \end{eqnarray*} such that for any $w\in P_{\mathrm{Cyc}}(M')$, \begin{displaymath} {\varphi(w)}_f= \left \{\begin {array}{ll} w_f&\text{if}~~~f\neq e,\\ 0&\text{if}~~~f=e. \end{array}\right. \end{displaymath} The map $\varphi$ is well-defined, because $\mathcal{C}(M')=\{C\subseteq E-\{e\}: C\in \mathcal{C}(M)\}$ and thus for a characteristic vector ${\chi}_{C'}\in \RR^{E'}$ of a cycle $C'$ of $M'$ one sees that $\varphi(\chi_{C'})$ is a characteristic vector of a cycle of $M$ which lies in $H$. Clearly, $\varphi$ is the restriction of the affine/linear map \begin{eqnarray*} \tilde{\varphi}\colon \mathrm{aff}(P_{\mathrm{Cyc}}(M'))&\rightarrow & \mathrm{aff}(F)\subseteq \RR^{E} \end{eqnarray*} such that for any $w\in \mathrm{aff}(P_{\mathrm{Cyc}}(M'))$, \begin{displaymath} {\tilde{\varphi}(w)}_f= \left \{\begin {array}{ll} w_f&\text{if}~~~f\neq e,\\ 0&\text{if}~~~f=e. \end{array}\right. \end{displaymath} Hence, $\varphi$ is a morphism of the involved polytopes.
By an analogous argument, \begin{eqnarray*} \psi\colon F&\rightarrow & P_{\mathrm{Cyc}}(M') \end{eqnarray*} defined as \[ {\psi(v)}_f=v_f \quad \text{for any} \quad v\in F \quad \text{and} \quad f\neq e, \] is well-defined and is a morphism of the polytopes $F$ and $P_{\mathrm{Cyc}}(M')$ which is the inverse to $\varphi$. This concludes the proof of~(a). (b) Next assume that $e$ and $f$ are coparallel for some $f\in E$, namely $\{e,f\}$ is a cocircuit of $M$, and let $M'=M/e$. We claim that $P_{\mathrm{Cyc}}(M')$ is affinely isomorphic to $P_{\mathrm{Cyc}}(M)$. Note that, since $\{e,f\}$ is a cocircuit of $M$, it follows from \eqref{intersection of circuit and cocircit} that any circuit of $M$ either contains both of $e$ and $f$, or contains none of them. As any cycle of $M$ is a disjoint union of circuits, the same property holds for any cycle of $M$. For $v\in P_{\mathrm{Cyc}}(M)$, this yields $v_{e}=v_{f}$. We define \begin{eqnarray*} \varphi \colon P_{\mathrm{Cyc}}(M')&\rightarrow & P_{\mathrm{Cyc}}(M) \end{eqnarray*} such that for any $w\in P_{\mathrm{Cyc}}(M')$, \begin{displaymath} {\varphi(w)}_{e'}= \left \{\begin {array}{ll} w_f&\text{if}~~~e'=e,\\ w_{e'}&\text{if}~~~e'\neq e. \end{array}\right. \end{displaymath} It follows from the discussion above and the relation of cycles of $M'$ and $M$ that $\varphi$ is well-defined. Moreover, $\varphi$ is the restriction of the affine/linear map \begin{eqnarray*} \tilde{\varphi}\colon \mathrm{aff}(P_{\mathrm{Cyc}}(M')) &\rightarrow& \mathrm{aff}(P_{\mathrm{Cyc}}(M)) \end{eqnarray*} defined as \begin{displaymath} {\tilde{\varphi}(w)}_{e'}= \left \{\begin {array}{ll} w_f&\text{if}~~~e'=e,\\ w_{e'}&\text{if}~~~e'\neq e, \end{array}\right. \end{displaymath} for any $w\in \mathrm{aff}(P_{\mathrm{Cyc}}(M'))$. Hence, $\varphi$ is a morphism of the involved polytopes.
By an analogous argument, \begin{eqnarray*} \psi \colon P_{\mathrm{Cyc}}(M)&\rightarrow & P_{\mathrm{Cyc}}(M') \end{eqnarray*} defined as \[ {\psi(v)}_{e'}=v_{e'} \quad \text{for any} \quad v\in P_{\mathrm{Cyc}}(M) \quad \text{and} \quad e'\neq e, \] is a well-defined morphism of polytopes which is the inverse to $\varphi$. Hence, $P_{\mathrm{Cyc}}(M')$ is affinely isomorphic to $P_{\mathrm{Cyc}}(M)$. (c) Finally, assume that $e$ is a coloop of $M$. Then, by \eqref{intersection of circuit and cocircit}, $e$ is not contained in any circuit of $M$. This implies that $\MC(M/e)=\MC(M)=\MC(M\setminus e)$, which yields \begin{equation}\label{cycles-coloop} \mathrm{Cyc}(M/e)=\mathrm{Cyc}(M)=\mathrm{Cyc}(M\setminus e). \end{equation} Hence, $P_{\mathrm{Cyc}}(M/e)=P_{\mathrm{Cyc}}(M\setminus e)\subseteq \RR^{E'}$. Thus, by part~(a), it follows that $P_{\mathrm{Cyc}}(M/e)$ is affinely isomorphic to a face of $P_{\mathrm{Cyc}}(M)$. Note that \eqref{cycles-coloop} also implies that the $e$-th coordinate of any vertex (and hence any element) of $P_{\mathrm{Cyc}}(M)$ is equal to zero. Hence, in this case, $P_{\mathrm{Cyc}}(M/e)$ is indeed affinely isomorphic to $P_{\mathrm{Cyc}}(M)$ itself, since by using the notation of part~(a), we have $F=P_{\mathrm{Cyc}}(M)$. \end{proof} In the following we consider two important special cases of cycle polytopes arising from graphs. \begin{Example}\label{cut polytope} {\em Let $G=(V,E)$ be a graph, and let $M=M(G)$. \begin{enumerate} \item \textbf{Eulerian subgraph polytopes}: It is a classical fact in graph theory that a graph $H$ is \emph{Eulerian} (i.e.~all of its vertices have even degree) if and only if its edge set is the disjoint union of the edge sets of some cycles of $H$. It follows that the cycles of $M$ are exactly the edge sets of the Eulerian subgraphs of $G$.
Then the cycle polytope of $M$ is indeed the convex hull of the incidence vectors of the Eulerian subgraphs of $G$, namely the vectors $\delta_H\in \RR^{E}$ with \begin{displaymath} \delta_{H,e}= \left \{\begin {array}{ll} 1&\mathrm{if}~~~e\in E(H),\\ 0&\mathrm{otherwise}, \end{array}\right. \end{displaymath} where $H$ is an Eulerian subgraph of $G$ and $e\in E$. This polytope is also known as the \emph{Eulerian subgraph polytope}; see, e.g., \cite{BGr}. In the following, we denote this polytope by $\mathrm{Euler}(G)$. \\ \item \textbf{Cut polytopes}: Given a subset $A$ of $V$, the \emph{cut set} $\mathrm{Cut}(A)$ of $G$ is a subset of $E$ consisting of those edges of $G$ which have exactly one endpoint in $A$. The \emph{cut polytope} of $G$, which is denoted by ${\mathrm{Cut}}^{\square}(G)$, is the convex hull of the \emph{cut vectors} $\delta_{A}\in \mathbb{R}^{E}$ of $G$, which are defined as \begin{displaymath} \delta_{A,e}= \left \{\begin {array}{ll} 1&\mathrm{if}~~~e\in \mathrm{Cut}(A),\\ 0&\mathrm{otherwise}, \end{array}\right. \end{displaymath} for any $A\subseteq V$ and $e\in E$. The cut polytope of $G$ has been intensively studied by many authors; see, e.g., \cite{CKNR, D, DD, DL2, O1, O2, RS, SS}. It is clear that any cut set $\mathrm{Cut}(A)$ is an edge cut (in the sense of Section~\ref{matroids}) if $A\neq \emptyset, V$. The converse is not true in general, but one can see that any minimal edge cut is a cut set. Hence, minimal edge cuts and minimal cut sets of $G$ coincide. Then, it follows from \cite[Exercises~4.1.27~and~4.1.28]{W} that a subset $C$ of $E$ is a disjoint union of minimal edge cuts (i.e.~a cycle of $M^*$) if and only if $C=\mathrm{Cut}(A)$ for some $\emptyset\neq A\subset V$. Note that the zero vector corresponds to $A=\emptyset$ and the empty cycle. Therefore, in this case, the cycle polytope of $M^*$ is exactly the cut polytope of $G$.
\end{enumerate} } \end{Example} \section{Cycle algebra of a matroid and its algebra retracts}\label{algebra retracts} Given a field $\KK$, associated to any lattice polytope $P\subseteq \RR^d$ is a toric algebra $\KK[P]$ whose generators bijectively correspond to the lattice points of $P$, namely, \[ \KK[P]=\KK[\mathbf{y^a}z:\mathbf{a}\in P\cap \ZZ^d]. \] If $P\subseteq \RR^d_{\geq 0}$, then the algebra $\KK[P]$ is a $\KK$-subalgebra of the polynomial ring $\KK[y_1,\ldots,y_d,z]$ where $\mathbf{y^a}=y_1^{a_1}\cdots y_d^{a_d}$ with $\mathbf{a}=(a_1,\ldots,a_d)\in \ZZ^d$. The toric algebra $\KK[P]$ is naturally a standard graded $\KK$-algebra (generated in degree~1) induced by setting $\deg (z)=1$ and $\deg (y_i)=0$ for all $i=1,\ldots ,d$. \medskip Now, let $M$ be a matroid. We define the \emph{cycle algebra} of $M$ to be the toric algebra $\KK[P_{\mathrm{Cyc}}(M)]$, and to simplify the notation, we denote it by $\KK[\mathrm{Cyc}(M)]$. More precisely, \[ \KK[\mathrm{Cyc}(M)]:=\KK[\mathbf{y}^{C}z:C\in \text{Cyc}(M)] \] is a $\KK$-subalgebra of the polynomial ring $R_M=\KK[y_e, z: e\in E(M)]$, where the monomial $\mathbf{y}^C=\prod_{e\in C} y_e$ corresponds to the characteristic vector $\chi_C$ of $C$. Let $S_M=\KK[x_C: C\in \mathrm{Cyc}(M)]$. One obtains a presentation of this toric ring by the following homogeneous $\KK$-algebra homomorphism: \begin{eqnarray*} \phi_{M}\colon S_M &\longrightarrow& \KK[\mathrm{Cyc}(M)] \\ x_C &\mapsto& \mathbf{y}^{C}z, \end{eqnarray*} for any $C\in \mathrm{Cyc}(M)$. The defining ideal of this algebra, that is, $\ker \phi_{M}$, is denoted by $I_{\mathrm{Cyc}(M)}$ and we call it the \emph{cycle ideal} of $M$. The cycle ideal is a graded ideal in $S_M$ generated by pure homogeneous binomials of the form~$\prod_{i=1}^dx_{C_i}-\prod_{i=1}^dx_{D_i}$ with $C_i, D_i\in \mathrm{Cyc}(M)$ and $d\geq 1$. \begin{Example}\label{cut ideal} {\em Let $G$ be a graph. \begin{enumerate} \item \textbf{Eulerian algebras}: Let $M=M(G)$.
Then we refer to the associated cycle algebra and cycle ideal as the \emph{Eulerian algebra} and the \emph{Eulerian ideal} of $G$, denoted by $\KK[\mathrm{Euler}(G)]$ and $I_{\mathrm{Euler}(G)}$, respectively. To the best of our knowledge, the Eulerian algebra has not been studied in the literature before. \\ \item \textbf{Cut algebras}: Let $M=M(G)^*$. If $G$ is connected, then the corresponding cycle algebra and ideal are naturally identified with the \emph{cut algebra} and \emph{cut ideal} of $G$, introduced in \cite{SS} (see also \cite{RS}). If $G$ is disconnected, then the cycle algebra of $M$ is naturally isomorphic to the cut algebra of $G$, but the cycle ideals may differ by linear forms (see \cite[Proposition~3.2]{RS} and Lemma~\ref{mu}). We denote the cut algebra and ideal of $G$ by $\KK[\mathrm{Cut}(G)]$ and $I_{\mathrm{Cut}(G)}$, respectively. \end{enumerate} } \end{Example} The main goal of this section is to provide useful algebra retracts of cycle algebras of matroids generalizing corresponding results of \cite{RS}. First let us recall the well-known definition of an algebra retract of a graded algebra. Here, if it is not stated otherwise, by ``graded" we mean standard $\ZZ$-graded. \begin{Definition}\label{retract-def} {\em Let $A$ and $B$ be graded $\KK$-algebras and let $\iota\colon A\rightarrow B$ be an (injective) homogeneous $\KK$-algebra homomorphism. Then $A$ is called an \emph{algebra retract} of $B$ if there exists a homogeneous (surjective) homomorphism of $\KK$-algebras $\gamma\colon B\rightarrow A$ such that $\gamma \circ \iota=\mathrm{id}_{A}$. } \end{Definition} It is clear from the above definition that if $A$, $B$ and $C$ are graded $\KK$-algebras such that $A$ is an algebra retract of $B$, and $B$ is an algebra retract of $C$, then $A$ is an algebra retract of $C$.
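For a simple illustration of Definition~\ref{retract-def}, consider the following standard example. Let $A=\KK[x]$ and $B=\KK[x,y]$ be polynomial rings with their standard gradings, let $\iota\colon A\rightarrow B$ be the inclusion, and let $\gamma\colon B\rightarrow A$ be the homogeneous $\KK$-algebra homomorphism determined by
\[
\gamma(x)=x \quad \text{and} \quad \gamma(y)=0.
\]
Then $\gamma\circ \iota=\mathrm{id}_{A}$, and hence $A$ is an algebra retract of $B$.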
Note that in this paper the homogeneous homomorphisms are not necessarily of degree~$0$, and also we do not consider more general types of algebra retracts, namely, arbitrary (non-graded) ones. The highest degree of an element of a minimal homogeneous generating set, the projective dimension~$\projdim_A(I)$, the Castelnuovo-Mumford regularity~$\reg_A(I)$, and more generally the graded Betti numbers~$\beta_{i,j}^A(I)$ of the defining ideal~$I$ of a graded $\KK$-algebra~$A/I$, where $A$ is a polynomial ring over a field $\KK$, do not increase under retraction. A precise statement is the following: \begin{Proposition}\label{Betti} {\em (}\cite[Corollary~2.5]{OHH}{\em )} Let $R=A/I$ and $S=B/J$ be graded $\KK$-algebras where $A$ and $B$ are polynomial rings over a field $\KK$. Suppose that $R$ is an algebra retract of $S$, and $I$ and $J$ are graded ideals containing no linear forms. Then \begin{enumerate}\label{betti-retract} \item [{\em (a)}] $\beta_{i,j}^A(I)\leq \beta_{i,j}^B(J)$ for all $i,j\in \ZZ_{\geq 0}$. \item [{\em (b)}] $\projdim_A(I)\leq \projdim_B(J)$. \item [{\em (c)}] $\reg_A(I)\leq \reg_B(J)$. \end{enumerate} \end{Proposition} Observe that in Section~\ref{highest degree} we will verify that cycle ideals of matroids contain no linear forms, and hence the above proposition is applicable to them. \medskip We divide this section into two parts in which we discuss different types of algebra retracts. First, in Proposition~\ref{face retract}, we consider a class of face retracts of cycle algebras of matroids. The following is indeed a consequence of Theorem~\ref{face}. \begin{Proposition}\label{face retract} Let $M$ and $M'$ be two matroids such that $M'$ is obtained from $M$ by a sequence of series minors and coloop contractions. Then $\KK[\mathrm{Cyc}(M')]$ is an algebra retract of $\KK[\mathrm{Cyc}(M)]$.
\end{Proposition} \begin{proof} In general, for a lattice polytope $P\subseteq \RR^d_{\geq 0}$ and a face $F$ of $P$, one has that $\KK[F]$ is an algebra retract of $\KK[P]$ (see, e.g., \cite[Corollary 4.34]{BG}). Combining this fact with Theorem~\ref{face} concludes the proof. \end{proof} \begin{Remark}\label{contraction-face} {\em Note that, in contrast to deletions, contractions (and hence arbitrary minors) do not necessarily provide a face in general. For example, if $M$ is the cographic matroid of the $4$-cycle $C_4$ and $e$ is an edge of $C_4$, then $P_{\mathrm{Cyc}}(M/e)\iso P_{\mathrm{Cyc}}(M(P_4)^*)\iso \mathrm{Cut}^{\square}(P_4)$ is not affinely isomorphic to any face of $P_{\mathrm{Cyc}}(M)$. Indeed, this example even shows that contraction may not provide an algebra retract at all; see \cite[Remark~4.4]{RS}. } \end{Remark} Next, we provide other algebra retracts which are not necessarily induced by faces of cycle polytopes. The following definition is motivated by the proof of \cite[Theorem~5.4]{RS}: \begin{Definition}\label{matroidal retract-def} Let $M$ and $M'$ be two matroids with $E(M')\subseteq E(M)$. Suppose that there exist two maps $\lambda \colon \mathrm{Cyc}(M')\rightarrow \mathrm{Cyc}(M)$ and $\pi \colon \mathrm{Cyc}(M)\rightarrow \mathrm{Cyc}(M')$ which satisfy the following conditions: \begin{enumerate} \item[{\em(a)}] $\pi \circ \lambda=\mathrm{id}_{\mathrm{Cyc}(M')}$. \item[{\em(b)}] If $\sum_{i=1}^d \chi_{C'_i}=\sum_{i=1}^d \chi_{D'_i}$ for some $C'_i,D'_i\in \mathrm{Cyc}(M')$ with $i=1,\ldots,d$ and $d\geq 1$, then $\sum_{i=1}^d \chi_{\lambda(C'_i)}=\sum_{i=1}^d \chi_{\lambda(D'_i)}$. \item[{\em(c)}] If $\sum_{i=1}^d \chi_{C_i}=\sum_{i=1}^d \chi_{D_i}$ for some $C_i,D_i\in \mathrm{Cyc}(M)$ with $i=1,\ldots,d$ and $d\geq 1$, then $\sum_{i=1}^d \chi_{\pi(C_i)}=\sum_{i=1}^d \chi_{\pi(D_i)}$. \end{enumerate} Then we say that $M'$ is a \textbf{matroidal retract} of $M$.
\end{Definition} \begin{Remark}\label{condition (iii)} {\em Using the notation in Definition~\ref{matroidal retract-def}, we would like to observe that a natural example of a map $\pi$ which satisfies condition~(c), is: \begin{equation} \pi(C)=C\cap E(M'), \quad \text{for any}~~C\in \mathrm{Cyc}(M). \end{equation} } \end{Remark} The following theorem allows us to build up certain algebra retracts for the cycle algebra of a matroid which arise from matroidal retracts. \begin{Theorem}\label{matroidal retract-theorem} Let $M$ be a matroid and let $M'$ be a matroidal retract of $M$. Then $\KK[\mathrm{Cyc}(M')]$ is an algebra retract of $\KK[\mathrm{Cyc}(M)]$. \end{Theorem} \begin{proof} Let $\lambda$ and $\pi$ be two maps with the desired properties of Definition~\ref{matroidal retract-def}. First we define the homomorphism \begin{eqnarray*} \tilde{\lambda}\colon S_{M'}&\longrightarrow& S_M \\ x_C&\mapsto& x_{\lambda(C)}. \end{eqnarray*} Let $\iota$ be the induced map from $\tilde{\lambda}$, defined as follows: \begin{eqnarray*} \iota \colon S_{M'}/I_{\mathrm{Cyc}(M')}&\longrightarrow& S_M/I_{\mathrm{Cyc}(M)} \\ f+I_{\mathrm{Cyc}(M')}&\mapsto& \tilde{\lambda}(f)+I_{\mathrm{Cyc}(M)}, \end{eqnarray*} for any $f\in S_{M'}$. To check that $\iota$ is a well-defined map, it is enough to show that $\tilde{\lambda}(I_{\mathrm{Cyc}(M')})\subseteq I_{\mathrm{Cyc}(M)}$. Let $f=\prod_{i=1}^d x_{C'_i}-\prod_{i=1}^d x_{D'_i}$ be an element of a minimal homogeneous generating set of $I_{\mathrm{Cyc}(M')}$ for some $d\geq 1$, where $C'_i$'s and $D'_i$'s are cycles of $M'$. Then $\phi_{M'}(f)=0$, since $I_{\mathrm{Cyc}(M')}=\ker \phi_{M'}$. This together with the definition of the map $\phi_{M'}$ implies that $\prod_{i=1}^d{\mathbf{y}}^{C'_i}=\prod_{i=1}^d\mathbf{y}^{D'_i}$, or equivalently \begin{equation}\label{well-defined1} \sum_{i=1}^d \chi_{C'_i}=\sum_{i=1}^d \chi_{D'_i}. 
\end{equation} Since $M'$ is a matroidal retract of $M$, Definition~\ref{matroidal retract-def}~(b) and (\ref{well-defined1}), yield $\sum_{i=1}^d \chi_{\lambda(C'_i)}=\sum_{i=1}^d \chi_{\lambda(D'_i)}$, and then \[ \prod_{i=1}^d{\mathbf{y}}^{\lambda(C'_i)}=\prod_{i=1}^d\mathbf{y}^{\lambda(D'_i)}. \] The latter equality implies that $\phi_M(\tilde{\lambda}(f))=0$. Hence $\tilde{\lambda}(f)\in I_{\mathrm{Cyc}(M)}$, as desired. Next, we define the homomorphism \begin{eqnarray*} \tilde{\pi}\colon S_{M}&\longrightarrow& S_{M'} \\ x_C&\mapsto& x_{\pi(C)}. \end{eqnarray*} Let $\gamma$ be the induced map from $\tilde{\pi}$, as follows: \begin{eqnarray*} \gamma \colon S_{M}/I_{\mathrm{Cyc}(M)}&\longrightarrow& S_{M'}/I_{\mathrm{Cyc}(M')} \\ f+I_{\mathrm{Cyc}(M)}&\mapsto& \tilde{\pi}(f)+I_{\mathrm{Cyc}(M')}, \end{eqnarray*} for any $f\in S_{M}$. To see that $\gamma$ is a well-defined map, it suffices to show that $\tilde{\pi}(I_{\mathrm{Cyc}(M)})\subseteq I_{\mathrm{Cyc}(M')}$. Let $f=\prod_{i=1}^d x_{C_i}-\prod_{i=1}^d x_{D_i}$ be an element of a minimal homogeneous generating set of $I_{\mathrm{Cyc}(M)}$ for some $d\geq 1$, where $C_i$'s and $D_i$'s are cycles of $M$. Then $\phi_{M}(f)=0$, since $I_{\mathrm{Cyc}(M)}=\ker \phi_{M}$. Thus, $\prod_{i=1}^d{\mathbf{y}}^{C_i}=\prod_{i=1}^d\mathbf{y}^{D_i}$, or equivalently \begin{equation}\label{well-defined2} \sum_{i=1}^d \chi_{C_i}=\sum_{i=1}^d \chi_{D_i}. \end{equation} Since $M'$ is a matroidal retract of $M$, it follows from Definition~\ref{matroidal retract-def}~(c) and (\ref{well-defined2}) that $\sum_{i=1}^d \chi_{\pi(C_i)}=\sum_{i=1}^d \chi_{\pi(D_i)}$, and then \[ \prod_{i=1}^d{\mathbf{y}}^{\pi(C_i)}=\prod_{i=1}^d\mathbf{y}^{\pi(D_i)}. \] Therefore, $\phi_{M'}(\tilde{\pi}(f))=0$, and hence $\tilde{\pi}(f)\in I_{\mathrm{Cyc}(M')}$, as desired. Finally, we need to show that $\gamma \circ \iota=\mathrm{id}_{S_{M'}/I_{\mathrm{Cyc}(M')}}$. 
Indeed, for any $f\in S_{M'}$, we have \[ \gamma \circ \iota(f+I_{\mathrm{Cyc}(M')})=\gamma(\tilde{\lambda} (f)+I_{\mathrm{Cyc}(M)})=\tilde{\pi} (\tilde{\lambda}(f))+I_{\mathrm{Cyc}(M')}=f+I_{\mathrm{Cyc}(M')}, \] where the last equality follows from Definition~\ref{matroidal retract-def}~(a), since $M'$ is a matroidal retract of $M$. Hence $\KK[\mathrm{Cyc}(M')]$ is an algebra retract of $\KK[\mathrm{Cyc}(M)]$. \end{proof} In Proposition~\ref{face retract}, deletions, and more generally series minors and coloop contractions, were discussed to obtain algebra retracts. In the following we consider another type of contraction in binary matroids which is important for us in the sequel. \begin{Definition}\label{binary matroidal retract-def} Let $M$ be a binary matroid, and let $E$ and $E'$ be two disjoint subsets of $E(M)$ such that $|E|=|E'|=s$ with $E=\{e_1,\ldots,e_s\}$ and $E'=\{e'_1,\ldots,e'_s\}$ for some $s\geq 1$. Suppose that the following conditions hold: \begin{enumerate} \item[{\em(a)}] $E'\in \mathcal{C}(M)$. \item[{\em(b)}] For any $C\in \mathcal{C}(M)$ and $p=0,\ldots,s-1$, one has $C\cap E'=\{e'_{i_1},\ldots,e'_{i_p}\}$ if and only if either \begin{enumerate} \item[{\em(i)}] $C\cap E=\{e_{i_1},\ldots,e_{i_p}\}$, or \item[{\em(ii)}] $C\cap E=E-\{e_{i_1},\ldots,e_{i_p}\}$. \end{enumerate} \end{enumerate} Then we say that the binary matroid $M/E'$ is a \textbf{binary matroidal retract} of $M$. \end{Definition} \begin{Remark}\label{symmetic} {\em Observe that condition~(b) in Definition~\ref{binary matroidal retract-def} is symmetric in terms of $E$ and $E'$.
Indeed, it is easily seen that condition~(b) is equivalent to the following condition: For any $C\in \mathcal{C}(M)$ and $p=0,\ldots,s-1$, one has: \begin{enumerate} \item[(i)] $C\cap E'=\{e'_{i_1},\ldots,e'_{i_p}\}$ implies that \[ C\cap E=\{e_{i_1},\ldots,e_{i_p}\}~ \text{or}~E-\{e_{i_1},\ldots,e_{i_p}\}, \] \item[(ii)] $C\cap E=\{e_{i_1},\ldots,e_{i_p}\}$ implies that \[ C\cap E'=\{e'_{i_1},\ldots,e'_{i_p}\}~\text{or}~E'-\{e'_{i_1},\ldots,e'_{i_p}\}. \] \end{enumerate} } \end{Remark} Next, we show that binary matroidal retracts result in algebra retracts in the case of binary matroids. For this purpose, we use the following theorem which determines a nice property of binary matroids. \begin{Theorem}\label{cycles in binary matroids} {\em (}\cite[Corollary~9.3.7]{Ox}{\em )} Let $M$ be a binary matroid, let $C\in \mathcal{C}(M)$ and let $e\in E(M)-C$. Then, either $C\in \mathcal{C}(M/e)$ or $C$ is a disjoint union of two circuits of $M/e$. In both cases, $M/e$ has no other circuits contained in $C$. \end{Theorem} The next fact is also useful in the proof of Theorem~\ref{binary matroidal retract-theorem} below. \begin{Remark}\label{circuit of cardinality at least two} {\em Note that it is clear that if $C$ is a circuit of an arbitrary matroid $M$ with $|C|\geq 2$ and $e\in C$, then $C-\{e\}$ is a circuit of $M/e$, see also \cite[Page~317]{Ox}. } \end{Remark} \begin{Theorem}\label{binary matroidal retract-theorem} A binary matroidal retract of a binary matroid is a matroidal retract. \end{Theorem} \begin{proof} Let $M$ be a binary matroid and let $M'$ be a binary matroidal retract of $M$. Then there are disjoint subsets $E$ and $E'$ of $E(M)$ as in Definition~\ref{binary matroidal retract-def} such that $M'=M/E'$. The goal is to show that maps $\lambda$ and $\pi$ exist with the desired properties of Definition~\ref{matroidal retract-def}. 
At first, we define the map $\lambda$ as follows: \begin{eqnarray*} \lambda\colon \mathrm{Cyc}(M/E')&\longrightarrow& \mathrm{Cyc}(M) \\ C'&\mapsto& C'\cup \{e'_{i_j}:e_{i_j}\in C'\}. \end{eqnarray*} Note that in particular, $\lambda(\emptyset)=\emptyset$. We have to verify that $\lambda$ is well-defined, and for this, it remains to prove that for any non-empty $C'\in \mathrm{Cyc}(M/E')$, one has $\lambda(C')\in \mathrm{Cyc}(M)$. So, let $C'\neq \emptyset$ be a cycle of $M/E'$. Then there exist pairwise disjoint circuits $C_1,\ldots,C_t$ of $M/E'$, for some $t\geq 1$, such that $C'=\cup_{i=1}^t C_i$. By definition of $\lambda$, it is immediately clear that $\lambda(C_i)$'s are pairwise disjoint as well, and $\lambda(C')=\cup_{i=1}^t\lambda(C_i)$. To see that $\lambda(C')\in \mathrm{Cyc}(M)$, it is enough to show that for each $i=1,\ldots,t$, we have $\lambda(C_i)\in \mathrm{Cyc}(M)$. For simplicity, we may assume right away that $C'\in \mathcal{C}(M/E')$. This implies that $C'=C-E'$ for some $C\in \mathcal{C}(M)$ with $C\neq E'$. Let \[ C'\cap E=\{e_{j_1},\ldots,e_{j_q}\} \] for some $q$ with $0\leq q\leq s$. Then $C\cap E=\{e_{j_1},\ldots,e_{j_q}\}$, since $C'\cap E=C\cap E$. We distinguish three cases: \emph{Case}~1. Assume that $q=0$. Then Remark~\ref{symmetic}~(ii) yields $C\cap E'=\emptyset$, since $C\cap E=C'\cap E=\emptyset$ and since $C$ and $E'$ are two distinct circuits. Hence, $C=C'$. Thus, $\lambda(C')=C'=C\in \mathrm{Cyc}(M)$. \emph{Case}~2. Assume that $q=s$. This implies that $E\subseteq C'$, and hence $\lambda(C')=C'\cup E'$. It follows directly from Definition~\ref{binary matroidal retract-def} that $E'\in \mathcal{C}(M)$ and $C'\cap E'=\emptyset$ by the choice of $C'=C-E'$. By a similar argument as in Case~1, in Definition~\ref{binary matroidal retract-def}~(b) only the case $C\cap E'=\emptyset$ is possible, because $E=C'\cap E=C\cap E$. Thus, $C'=C$ and $\lambda(C')\in \mathrm{Cyc}(M)$. \emph{Case}~3. Assume that $1\leq q\leq s-1$. 
Then $\lambda(C')=C'\cup \{e'_{j_1},\ldots,e'_{j_q}\}$. By Definition~\ref{binary matroidal retract-def}, we have either $C\cap E'=\{e'_{j_1},\ldots,e'_{j_q}\}$ or $C\cap E'=E'-\{e'_{j_1},\ldots,e'_{j_q}\}$. In the first case we get $\lambda(C')=C$ which is a circuit of $M$, while in the second case one obtains $\lambda(C')=C\Delta E'$ which is a cycle of $M$, by Theorem~\ref{binary}, since $M$ is a binary matroid and $E'\in \mathcal{C}(M)$. Altogether we see that $\lambda$ is indeed well-defined. Related to $\lambda$ it remains to show that condition~(b) in Definition~\ref{matroidal retract-def} holds. Let $C_i, D_i\in \mathrm{Cyc}(M/E')$ for $i=1,\ldots,d$ with $d\geq 1$, and assume that \begin{equation}\label{union of cycles} \sum_{i=1}^d \chi_{C_i}=\sum_{i=1}^d \chi_{D_i}. \end{equation} The goal is to prove that $\sum_{i=1}^d \chi_{\lambda(C_i)}=\sum_{i=1}^d \chi_{\lambda(D_i)}$. For any $e\in E(M)$, let $m_e$ and $m'_e$ denote the coordinate corresponding to $e$ in $\sum_{i=1}^d \chi_{\lambda(C_i)}$ and $\sum_{i=1}^d \chi_{\lambda(D_i)}$, respectively. Using this notation, it remains to see that $m_e=m'_e$. Note that if $e\in E(M/E')$, then it is clear by the definition of $\lambda$ that $m_e$ is equal to the $e$-th coordinate in $\sum_{i=1}^d \chi_{C_i}$, and $m'_e$ is equal to the $e$-th coordinate of $\sum_{i=1}^d \chi_{D_i}$. Thus, in this case, $m_e=m'_e$, according to (\ref{union of cycles}). Next consider the remaining case $e=e'_j$ in $E'$ for some $j\in \{1,\ldots,s\}$. At first assume that $m_e=0$. So, $e=e'_j\notin \lambda(C_i)$ for all $i=1,\ldots,d$, and then $e_j\notin C_i$ for all $i=1,\ldots,d$. Thus, by (\ref{union of cycles}), we deduce that $e_j\notin D_i$ for all $i$, and hence $e=e'_j\notin \lambda(D_i)$ for all $i=1,\ldots,d$. Therefore, it follows that $m'_{e}=0$ as well, as desired. In the second case assume that $m_e=t$ for some positive integer $t$.
Then there exist exactly $t$ different indices $k_1,\ldots,k_t\in\{1,\ldots,d\}$ such that $e\in \lambda(C_{k_{\ell}})$ for all $\ell=1,\ldots,t$, since each $\chi_{\lambda(C_i)}$ is a $0/1$-vector. The definition of $\lambda$ and the assumption $e=e'_j$ yield that $C_{k_1},\ldots,C_{k_t}$ are the only cycles among $C_i$'s which contain $e_j$. Thus, the $e_j$-th coordinate of $\sum_{i=1}^d \chi_{C_i}$ is equal to $t$, and hence by (\ref{union of cycles}), the same coordinate of $\sum_{i=1}^d \chi_{D_i}$ equals $t$. This means that there are exactly $t$ different indices $h_1,\ldots, h_t$ for which $D_{h_1},\ldots,D_{h_t}$ contain $e_j$, and hence $\lambda(D_{h_1}),\ldots,\lambda(D_{h_t})$ are the only ones among $\lambda(D_i)$'s which contain $e=e'_j$. This implies that $m'_e=t$. This concludes the verification of Definition~\ref{matroidal retract-def}~(b). Continuing the verification of Definition~\ref{matroidal retract-def}, we have to define an appropriate map $\pi$. For this we set: \begin{eqnarray*} \pi\colon \mathrm{Cyc}(M)&\longrightarrow& \mathrm{Cyc}(M/E') \\ C&\mapsto& C-E'. \end{eqnarray*} Observe that in particular, $\pi(\emptyset)=\pi(E')=\emptyset$. As a first task we have to see that $\pi$ is well-defined, and for this one has to show that for any non-empty cycle $C$ of $M$, $C-E'$ is a cycle of $M/E'$. Note that the cycle $C$ can be written as $C=\cup_{i=1}^tC_i$ for some $t\geq 1$, where $C_i$'s are pairwise disjoint circuits of $M$ for $i=1,\ldots,t$. Hence, $C-E'$ is the disjoint union of the sets $C_1-E',\ldots,C_t-E'$. Thus, it is enough to show that for each $i=1,\ldots,t$, one has $C_i-E'\in \mathrm{Cyc}(M/E')$. So, without loss of generality, we may assume that $C\in \mathcal{C}(M)$. To prove that $C-E'\in \mathrm{Cyc}(M/E')$, we distinguish two cases: \emph{Case}~1. Assume that $C\cap E'=\emptyset$. Then $C-E'=C$.
Since $M$ is a binary matroid, Theorem~\ref{cycles in binary matroids} implies that either $C\in \mathcal{C}(M/e'_1)$ or $C=C_1\cup C_2$ where $C_1,C_2\in \mathcal{C}(M/e'_1)$ and $C_1\cap C_2=\emptyset$. Since $M/e'_1$ is a binary matroid as well, one can apply Theorem~\ref{cycles in binary matroids} to this matroid for the contraction with respect to the element $e'_2$ and each of the possible circuits, namely $C$, or $C_1$ and $C_2$. Then, by repeating this procedure, after a finite number of steps, it follows that $C$ is either a circuit of $M/E'$ or a disjoint union of certain circuits of $M/E'$. Hence $C$ is a cycle of $M/E'$, as desired. \emph{Case}~2. Assume that $C\cap E'\neq \emptyset$. If $C=E'$, then the claim is trivially true. So, suppose that $C\neq E'$. Then, $|C|\geq 2$, since $E'$ and $C$ are both circuits and cannot contain each other. Moreover, if $C\cap E'=\{e'_{j_1},\ldots,e'_{j_{\ell}}\}$ for some $1\leq \ell <s$, then for any proper subset $T$ of $\{e'_{j_1},\ldots,e'_{j_{\ell}}\}$, one has $|C-T|\geq 2$. Thus, according to Remark~\ref{circuit of cardinality at least two}, we have $C-\{e'_{j_1},\ldots,e'_{j_{\ell}}\}\in \mathcal{C}(M/\{e'_{j_1},\ldots,e'_{j_{\ell}}\})$. Since $M/\{e'_{j_1},\ldots,e'_{j_{\ell}}\}$ is a binary matroid, and $E'\cap (C-\{e'_{j_1},\ldots,e'_{j_{\ell}}\})=\emptyset$, similarly to Case~1, by applying Theorem~\ref{cycles in binary matroids} repeatedly, it follows that $C-E'$ is indeed a cycle of $M/E'$. It is clear that for any $C\in \mathrm{Cyc}(M)$, we have $\pi(C)=C\cap E(M/E')$, which by Remark~\ref{condition (iii)} implies that Definition~\ref{matroidal retract-def}~(c) also holds. Finally, according to the definitions of the maps $\lambda$ and $\pi$, it is obvious that $\pi \circ \lambda$ is the identity map on $\mathrm{Cyc}(M/E')$, which verifies condition~(a) in the definition of a matroidal retract. This concludes the proof that $M'=M/E'$ is a matroidal retract of $M$.
\end{proof} \begin{Problem}\label{binary matroidal retract-problem} The proof of Theorem~\ref{binary matroidal retract-theorem} uses the binary assumption at several places. We leave it as an interesting question to decide whether an analogous statement of Theorem~\ref{binary matroidal retract-theorem} holds or not if one drops the word ``binary'' everywhere in Definition~\ref{binary matroidal retract-def}. \end{Problem} Motivated by the main results of this section, we introduce a type of minors arising from deletions and certain contractions: \begin{Definition}\label{g-series minor-definition} Let $M$ and $M'$ be matroids where $M'$ is obtained from $M$ by a sequence of \begin{enumerate} \item [{\em(a)}] deletions, \item [{\em(b)}] series contractions, \item [{\em(c)}] coloop contractions, and \item [{\em (d)}] binary matroidal retracts. \end{enumerate} Then we call $M'$ a \textbf{generalized series minor} {\em (}or shortly, \textbf{g-series minor}{\em )} of $M$. \end{Definition} \begin{Remark}\label{comparison of minor types} {\em Observe that for a $g$-series minor, one is allowed to apply the two additional operations ``coloop contraction'' and ``binary matroidal retract'' on matroids compared to the case of a series minor. In particular, every series minor is a $g$-series minor. } \end{Remark} The next corollary is an immediate consequence of Proposition~\ref{face retract}, Theorem~\ref{matroidal retract-theorem} and Theorem~\ref{binary matroidal retract-theorem}. \begin{Corollary}\label{algebra retract-corollary} Let $M$ be a binary matroid and $M'$ be a $g$-series minor of $M$. Then $\KK[\mathrm{Cyc}(M')]$ is an algebra retract of $\KK[\mathrm{Cyc}(M)]$. \end{Corollary} \section{Generalized series minors of cographic matroids}\label{Cographic case} In this section, we consider cographic matroids, which are in particular binary, and investigate certain $g$-series minors of them. 
The main goal here is to verify that Corollary~\ref{algebra retract-corollary} recovers one of the main results from~\cite{RS} (see \cite[Theorem~5.4]{RS}). First, let us recall some definitions from \cite{RS}. Let $G=(V,E)$ be a simple graph. Recall that an \emph{induced subgraph} $G_T$ on any non-empty subset $T$ of $V$ is the subgraph of $G$ whose vertex set is $T$ and whose edges are those edges of $G$ which have both endpoints in $T$. Next, let $v\in V$. Then \[ N_G(v)=\{w\in V: w~\mathrm{is~a~neighbor~of}~v~\mathrm{in}~G\}, \] where a vertex $w\in V$ is said to be a \emph{neighbor} of $v$ in $G$, if it is adjacent to $v$. Furthermore, \[ N_G[v]=N_G(v)\cup \{v\} ~~ \mathrm{and}~~ N_G(T)=\cup_{v\in T} N_{G}(v), \] for every non-empty subset $T$ of $V$. Assume that $V=W\cup W'$ with $W\cap W'=\emptyset$ and $W,W'\neq \emptyset$, and let $H=G_{W}$ be the induced subgraph of $G$ on $W$. Suppose that there exists a vertex $v\in W$ with $W\cap N_{G}(W')\subseteq N_{H}[v]$. Then $H$ is said to be a \emph{neighborhood-minor} of $G$. Recall that a minor of $G$ is a graph obtained from $G$ by applying a sequence of the operations ``edge deletion'' and ``edge contraction'', together with disregarding the isolated vertices. One can check that a neighborhood-minor of a graph is indeed a minor of it, but the converse does not hold in general. For example, by removing an edge from a graph $G$, one obtains a minor, but the obtained graph is not an induced subgraph of $G$, while neighborhood-minors are always induced subgraphs by definition. We would like to remark that the notion of neighborhood-minors was defined in \cite[Definition~5.2]{RS}. But, as was mentioned in \cite[Remark~5.3]{RS}, in the special case that $|W'|=1$ it had been previously considered in studying cut polytopes for different purposes; see, e.g., \cite[Theorem~2]{D}. 
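The neighborhood-minor condition is concrete enough to test mechanically. The following small Python sketch (ours, not part of \cite{RS} or of this paper; all function names and the toy graph are our own) searches for a witness vertex $v\in W$ with $W\cap N_G(W')\subseteq N_H[v]$:

```python
# Sketch (ours): test the neighborhood-minor condition on a small graph.
def neighbors(edges, u):
    """Open neighborhood of u in the graph given by its set of edges."""
    return {w for e in edges if u in e for w in e} - {u}

def neighborhood_minor_witness(edges, W, Wp):
    """Return a vertex v in W witnessing that G_W is a neighborhood-minor
    of G for the split V = W u W', or None if no such vertex exists."""
    H_edges = [e for e in edges if e <= W]        # edges of the induced subgraph G_W
    boundary = W & set().union(*(neighbors(edges, u) for u in Wp))  # W n N_G(W')
    for v in sorted(W):
        if boundary <= neighbors(H_edges, v) | {v}:  # boundary contained in N_H[v]
            return v
    return None

# G: a triangle on {1,2,3} plus a vertex 4 joined to 2 and 3.
# Here W n N_G(W') = {2,3} is contained in N_H[1], so G_W is a
# neighborhood-minor of G with witness v = 1.
G = [frozenset(p) for p in [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]]
print(neighborhood_minor_witness(G, {1, 2, 3}, {4}))
```

For a negative case, joining the extra vertex to two vertices that lie in different components of $G_W$ leaves no admissible witness, and the function returns `None`.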
\medskip In the following theorem, we see the relationship between neighborhood-minors of graphs and $g$-series minors of cographic matroids. \begin{Theorem}\label{neighborhood-g-series minor-corollary} Let $G$ be a simple graph and $H$ be a neighborhood-minor of $G$. Then $M(H)^*$ is a g-series minor of $M(G)^*$. \end{Theorem} \begin{proof} Assume that $H$ is a neighborhood-minor of $G$ with $H=G_W$, where $V(G)=W\cup W'$ with $W\cap W'=\emptyset$ and $W,W'\neq \emptyset$ such that $W\cap N_G(W')\subseteq N_H[v]$ for some $v\in W$. If $W\cap N_{G}(W')=\emptyset$, then $G$ is just the disjoint union of the two graphs $H$ and $G_{W'}$, and equivalently $M(G)^*$ is the direct sum of $M(H)^*$ and $M(G_{W'})^*$. In this case, it is easily seen that $M(H)^*$ is a $g$-series minor of $M(G)^*$, as desired, e.g.~by using a suitable number of deletions. Next we consider the case $W\cap N_{G}(W')\neq \emptyset$. First suppose that $G_{W'}$ is connected. Let $(W\cap N_{G}(W'))-\{v\}=\{v_1,\ldots,v_s\}$ for some $s\geq 0$, and let $e_i=\{v,v_i\}$ be the corresponding edges for all $i=1,\ldots,s$. Now, we first consider certain actions at the level of graphs and then interpret them at the level of the corresponding cographic matroids. Consider the following steps: \begin{enumerate} \item[(i)] By consecutive contractions of edges as well as deleting all loops which might occur during contractions, we get from $G_{W'}$ just a vertex of $W'$, say $w'$, since $G_{W'}$ is connected. In this way a new graph $G'$ is obtained from $G$ on the vertex set $W\cup \{w'\}$ such that $G'_W=G_W$, and \begin{equation}\label{neighborhood} W\cap N_{G'}(w')=W\cap N_{G}(W'). \end{equation} In particular, $W\cap N_{G'}(w')\subseteq N_H[v]$. \item[(ii)] We distinguish two cases: \begin{enumerate} \item [(1)] Suppose that $v\in N_{G'}(w')$. In this case we contract the edge $e=\{v,w'\}$. Note that some parallel edges to $e_1,\ldots,e_s$ might appear. 
By removing those new edges, we get $H$ as a minor of $G$. \item [(2)] Suppose that $v\notin N_{G'}(w')$. Thus, it follows that $s\geq 1$. By (\ref{neighborhood}), we deduce that $w'$ is adjacent to all $v_1,\ldots,v_s$. Let $e'_i=\{w',v_i\}$ for $i=1,\ldots,s$. Then, by removing the edges $e'_1,\ldots, e'_s$, we obtain a graph, say $H'$, which is just the union of $H$ and the isolated vertex $w'$. Disregarding the isolated vertex, we get the desired result. \end{enumerate} \end{enumerate} Using the aforementioned steps, we interpret the equivalent and analogous steps at the level of cographic matroids: \begin{enumerate} \item[(i)$'$] As we discussed in Section~\ref{matroids}, the contractions of edges and deletions of loops in step~(i) result in certain deletions and coloop contractions, respectively, in the corresponding cographic matroids, and hence it follows that $M(G')^*$ is a $g$-series minor of $M(G)^*$. \item[(ii)$'$] Corresponding to the cases~(1) and~(2) we have: \begin{enumerate} \item [(1)$'$] Case~(1) implies that $e$ is an element of $M(G')^*$ which is deleted from $M(G')^*$. The parallel edges possibly occurring in~(1) correspond to certain coparallel elements in the cographic matroid, and we contract them to obtain $M(H)^*$. Thus, in this case, $M(H)^*$ is a $g$-series minor of $M(G')^*$, and hence a $g$-series minor of $M(G)^*$ by~(i)$'$. \item [(2)$'$] Case~(2) implies that $e$ is not an element of $M(G')^*$, but $e_1,\ldots,e_s$ are elements of $M(G')^*$ for some $s\geq 1$. Then one sees that \[ E(M(G')^*)=E(H)\cup \{e'_1,\ldots,e'_s\}. \] Deleting the edges $e'_1,\ldots,e'_s$ from $G'$ yields the contraction $M(H')^*=M(G')^*/\{e'_1,\ldots,e'_s\}$ from $M(G')^*$. As $w'$ is just an isolated vertex of $H'$, one knows that $M(H')^*$ and $M(H)^*$ are isomorphic matroids. We claim that $M(G')^*/\{e'_1,\ldots,e'_s\}$ is a binary matroidal retract of $M(G')^*$. 
Let $E=\{e_1,\ldots,e_s\}$ and $E'=\{e'_1,\ldots,e'_s\}$, which are disjoint subsets of $E(M(G')^*)$. Then it is clear that $E'$ is a minimal edge cut of $G'$, because deleting the edges in $E'$ from $G'$ disconnects the vertex $w'$ from $H$, and none of the proper subsets of $E'$ disconnects $G'$. Thus, $E'$ is a circuit of $M(G')^*$, and hence Definition~\ref{binary matroidal retract-def}~(a) is fulfilled. To verify~(b) in the same definition, let $C\in \mathcal{C}(M(G')^*)$. Then according to Example~\ref{cut polytope}, it follows that $C=\mathrm{Cut}(A)$ is a minimal cut set of $G'$ for a set $\emptyset\neq A\subseteq V(G')$. Suppose that $C\cap E'=\{e'_{i_1},\ldots,e'_{i_p}\}$ for some $p\in \{0,\ldots,s-1\}$. If either $v,w'\in A$ or $v,w'\in A^c$, then just by the definition of a cut set we deduce that either $A^c\cap \{v_{1},\ldots,v_{s}\}=\{v_{i_1},\ldots,v_{i_p}\}$ or $A\cap \{v_{1},\ldots,v_{s}\}=\{v_{i_1},\ldots,v_{i_p}\}$, respectively, which implies that $C\cap E=\{e_{i_1},\ldots,e_{i_p}\}$. If $v\in A$ and $w'\in A^c$ or vice versa, then similarly it follows that $C\cap E=E-\{e_{i_1},\ldots,e_{i_p}\}$. Hence we obtain one implication of the statement of condition~(b) in Definition~\ref{binary matroidal retract-def}. The other implication follows similarly by symmetry. \end{enumerate} \end{enumerate} Finally, suppose that $G_{W'}$ is disconnected with connected components on disjoint sets of vertices $W'_1,\ldots,W'_r$. Set $G_0=G$ and $G_t=G_{V-\cup_{i=1}^t W'_i}$ for all $t=1,\ldots,r$. Since $H$ is a neighborhood-minor of $G$, it follows that $G_t$ is a neighborhood-minor of $G_{t-1}$ for all $t=1,\ldots,r$. Notice that $G_{W'_t}$ is clearly connected for each $t$. Then, by replacing $G_{W'}$ with $G_{W'_t}$ and $G_W$ with $G_t$ in the previous cases of the proof, no matter whether $(V-\cup_{i=1}^t W'_i)\cap N_G(W'_t)$ is empty or not, it follows that $M(G_t)^*$ is a $g$-series minor of $M(G_{t-1})^*$ for all $t$. 
Hence, $M(H)^*$ is a $g$-series minor of $M(G)^*$, which concludes the proof. \end{proof} The next corollary follows immediately from Corollary~\ref{algebra retract-corollary} and Theorem~\ref{neighborhood-g-series minor-corollary}. \begin{Corollary}\label{neighborhood-retract-corollary} {\em (}\cite[Theorem~5.4]{RS}{\em)} Let $G$ be a simple graph and $H$ be a neighborhood-minor of $G$. Then $\KK[\mathrm{Cut}(H)]$ is an algebra retract of $\KK[\mathrm{Cut}(G)]$. \end{Corollary} We see that neighborhood-minors yield a method for cographic matroids to obtain $g$-series minors, and thus retracts on the level of algebras. We end this section by posing the following problem, which concerns the Eulerian algebras of graphs and asks for similar results in that setting. \begin{Problem}\label{Eulerian retract} Let $H$ and $G$ be two graphs. It would be interesting to provide some explicit graphical conditions on $H$ and $G$ under which $M(H)$ is a g-series minor of $M(G)$, and hence $\KK[\mathrm{Euler}(H)]$ is an algebra retract of $\KK[\mathrm{Euler}(G)]$. \end{Problem} \section{The comparison of the highest degrees of minimal homogeneous generators of cycle ideals of matroids}\label{highest degree} In this section, we consider cycle ideals of matroids in more detail and, in particular, discuss certain situations where one can relate or compare the highest degrees of minimal homogeneous generating sets of the cycle ideals of two matroids. Denote by $\mu(M)$ the highest degree of an element of a minimal homogeneous generating set of $I_{\mathrm{Cyc}(M)}$ and set $\deg (0)=-\infty$. First we characterize when a cycle ideal is zero. Let $M$ be a matroid. For simplicity, let \[ d(M):=\text{the number of coparallel classes of}~M. \] As it was mentioned in Section~\ref{cycle polytopes}, it is known that $\dim P_{\mathrm{Cyc}}(M)=d(M)$. 
This together with \cite[Proposition~4.22]{BG} implies that the Krull dimension of the cycle algebra of $M$ is given by \[ \dim \KK[\mathrm{Cyc}(M)]= d(M)+1, \] and hence \begin{equation}\label{height} \height I_{\mathrm{Cyc}(M)}=|\mathrm{Cyc}(M)|-d(M)-1. \end{equation} \begin{Lemma}\label{zero ideal} Let $M$ be a matroid. Then the following statements are equivalent: \begin{enumerate} \item[{\em(a)}] $I_{\mathrm{Cyc}(M)}=\langle 0 \rangle$; \item[{\em(b)}] $d(M)=|\mathrm{Cyc}(M)|-1$. \end{enumerate} \end{Lemma} \begin{proof} The desired result follows from \eqref{height}, since clearly $I_{\mathrm{Cyc}(M)}=\langle 0 \rangle$ if and only if $\height I_{\mathrm{Cyc}(M)}=0$, the ideal being prime. \end{proof} \begin{Corollary}\label{zero ideal-cosimple} Let $M$ be a cosimple matroid. Then the following statements are equivalent: \begin{enumerate} \item[{\em(a)}] $I_{\mathrm{Cyc}(M)}=\langle 0 \rangle$; \item[{\em(b)}] $|E(M)|=|\mathrm{Cyc}(M)|-1$. \end{enumerate} \end{Corollary} \begin{proof} Since $M$ is cosimple, $\{e\}$ is a coparallel class of $M$ for any $e\in E(M)$, and these are the only coparallel classes, as it was mentioned in Section~\ref{matroids}. Thus, $d(M)=|E(M)|$, which together with Lemma~\ref{zero ideal} yields the desired result. \end{proof} The following example provides a cosimple matroid satisfying the equivalent conditions of Corollary~\ref{zero ideal-cosimple}. \begin{Example}\label{F7-1} {\em The \emph{Fano matroid} $F_7$ is a matroid with the ground set $E=\{1,\ldots,7\}$ whose bases are all 3-subsets of $E$, except the ones shown in Figure~\ref{Fano} by straight lines and a curve, i.e.~$C_1=\{1,2,6\}$, $C_2=\{1,3,5\}$, $C_3=\{2,3,4\}$, $C_4=\{2,5,7\}$, $C_5=\{3,6,7\}$, $C_6=\{1,4,7\}$, $C_7=\{4,5,6\}$. This implies that \[ \mathcal{C}(F_7)=\{C_i,C_i^c : i=1,\ldots,7\}, \] where $C_i^c$ denotes the complementary set of $C_i$ for each~$i$. In particular, $F_7$ is a simple matroid. 
One can then observe that $F_7^*$ is a cosimple matroid with \[ \mathcal{C}(F_7^*)=\{C_i^c : i=1,\ldots,7\}, \] and hence $\mathrm{Cyc}(F_7^*)=\{\emptyset\}\cup \mathcal{C}(F_7^*)$. Thus, Corollary~\ref{zero ideal-cosimple} implies that $I_{\mathrm{Cyc}(F_7^*)}=\langle 0 \rangle$. } \end{Example} \begin{figure}[h!] \centering \begin{tikzpicture}[scale = 1.2] \definecolor{zzttqq}{rgb}{1,1,1} \fill[line width=2pt,color=zzttqq] (-1,5) -- (-2.5,2) -- (0.5,2) -- cycle; \draw [line width=2pt] (-1,5)-- (-2.5,2); \draw [line width=2pt] (-2.5,2)-- (0.5,2); \draw [line width=2pt] (0.5,2)-- (-1,5); \draw [line width=2pt] (-1.0009636766678007,2.9433265697243844) circle (0.9433270619571432cm); \draw [line width=2pt] (-1,5)-- (-1,2); \draw [line width=2pt] (-2.5,2)-- (-0.2695,3.539); \draw [line width=2pt] (-1.7275,3.545)-- (0.5,2); \draw (-1.04,5.57) node[anchor=north west] {1}; \draw (-2.96,2.25) node[anchor=north west] {2}; \draw (0.65,2.23) node[anchor=north west] {3}; \draw (-0.97,1.93) node[anchor=north west] {4}; \draw (-0.1,3.93) node[anchor=north west] {5}; \draw (-2.2,3.85) node[anchor=north west] {6}; \draw (-0.99,3.68) node[anchor=north west] {7}; \begin{scriptsize} \draw [fill=black] (-1,5) circle (2.5pt); \draw [fill=black] (-2.5,2) circle (2.5pt); \draw [fill=black] (0.5,2) circle (2.5pt); \draw [fill=black] (-1.7275,3.545) circle (2.5pt); \draw [fill=black] (-0.2695,3.539) circle (2.5pt); \draw [fill=black] (-1,2) circle (2.5pt); \draw [fill=black] (-1,3.0349697377269673) circle (2.5pt); \end{scriptsize} \end{tikzpicture} \caption{Geometric representation of the Fano matroid $F_7$} \label{Fano} \end{figure} In the special case of cographic matroids of simple graphs, the cycle ideal is rarely zero. Indeed, $I_{\mathrm{Cut}(G)}$ is zero if and only if $G$ is the complete graph $K_2$ or $K_3$, see \cite[Proposition~3.1]{RS}. 
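As a quick computational sanity check (ours, not from \cite{RS} or the references), a few lines of Python confirm that the eight sets above are closed under symmetric difference, as the cycle space of a binary matroid must be, and that the zero-ideal criterion $|E(M)|=|\mathrm{Cyc}(M)|-1$ of Corollary~\ref{zero ideal-cosimple} holds for $F_7^*$:

```python
# Sketch (ours) verifying Example F7-1: the cycles of F_7^* are the empty
# set together with the complements of the seven lines of the Fano plane.
E = frozenset(range(1, 8))
lines = [frozenset(s) for s in
         [(1, 2, 6), (1, 3, 5), (2, 3, 4), (2, 5, 7),
          (3, 6, 7), (1, 4, 7), (4, 5, 6)]]
circuits = [E - L for L in lines]           # the circuits C_i^c of F_7^*
cycles = {frozenset()} | set(circuits)      # Cyc(F_7^*)

assert len(cycles) == 8                     # the seven complements are distinct
# the cycle space of a binary matroid is closed under symmetric difference
assert all(A ^ B in cycles for A in cycles for B in cycles)
print(len(E) == len(cycles) - 1)            # the criterion for I_Cyc = <0>
```

The final line prints `True`, in agreement with the conclusion $I_{\mathrm{Cyc}(F_7^*)}=\langle 0 \rangle$.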
But, by Lemma~\ref{zero ideal} and Example~\ref{F7-1}, it seems that in general there are more interesting cases with zero cycle ideals. Now, it is natural to pose the following problem: \begin{Problem}\label{zero characterization} It would be interesting to give an excluded-minor characterization or an explicit list of all matroids $M$ with $I_{\mathrm{Cyc}(M)}=\langle 0 \rangle$. \end{Problem} Next, we investigate the numbers $\mu(M)$ for matroids whose cycle ideals are not zero. \begin{Lemma}\label{mu} Let $M$ be a matroid. Then the following statements hold: \begin{enumerate} \item [{\em(a)}] ${(I_{\mathrm{Cyc}(M)})}_1=\langle 0 \rangle$. In particular, if $I_{\mathrm{Cyc}(M)}\neq \langle 0 \rangle$, then $\mu(M)\geq 2$. \item [{\em(b)}] If $M'$ is a matroid such that $\KK[\mathrm{Cyc}(M')]$ is an algebra retract of $\KK[\mathrm{Cyc}(M)]$, then $\mu(M')\leq \mu(M)$. \end{enumerate} \end{Lemma} \begin{proof} Part (a) follows, since by definition no binomial of the form $x_C-x_D$ belongs to $I_{\mathrm{Cyc}(M)}$, where $C\neq D$ are cycles of $M$. Part~(b) follows from part~(a) together with Proposition~\ref{Betti}~(a). \end{proof} We see by Lemma~\ref{mu}~(a) that cycle ideals never contain linear forms. As a consequence of Lemma~\ref{mu}~(b) together with Proposition~\ref{face retract}, Theorem~\ref{matroidal retract-theorem} and Corollary~\ref{algebra retract-corollary}, the next corollary records some operations under which the highest degree of a minimal homogeneous set of generators of the corresponding ideals does not increase. \begin{Corollary}\label{mu comparison} Let $M$ be a matroid and let $M'$ be a minor of $M$. Assume that one of the following holds: \begin{enumerate} \item[{\em(a)}] $M'$ is a series minor of $M$, \item[{\em(b)}] $M'$ is a binary matroidal retract of $M$ and $M$ is binary, or \item[{\em(c)}] $M'$ is a g-series minor of $M$ and $M$ is binary. \end{enumerate} Then \[ \mu(M')\leq \mu(M). 
\] \end{Corollary} Next, the following theorem provides some situations where the highest degrees of minimal homogeneous sets of generators of cycle ideals are the same, though the underlying matroids are not. \begin{Theorem}\label{simplification} Let $M$ and $M'$ be two matroids where $M'$ is obtained by either \begin{enumerate} \item [{\em(a)}] a coloop contraction, or \item [{\em(b)}] a series contraction \end{enumerate} of $M$. Then $\mu(M)=\mu(M')$. \end{Theorem} \begin{proof} \begin{enumerate} \item [(a)] Assume that $M'=M/e$ where $e$ is a coloop of $M$. In the proof of Theorem~\ref{face} part~(c), we observed that, in this case, $\mathrm{Cyc}(M)=\mathrm{Cyc}(M')$ and, in particular, no cycle of $M$ contains $e$. This, by definition, implies that $I_{\mathrm{Cyc}(M)}=I_{\mathrm{Cyc}(M')}$ (as ideals in $S_M=S_{M'}$), and hence $\mu(M)=\mu(M')$. \item [(b)] Next assume that $M'=M/e$ where $\{e,f\}$ is a cocircuit of $M$ for some $f\in E(M)$. By Corollary~\ref{mu comparison}~(a), we have $\mu(M')\leq \mu(M)$. We claim that the other inequality also holds. For this, first observe that \[ \mathrm{Cyc}(M')=\{D-\{e\}: D\in \mathrm{Cyc}(M)\}. \] This indeed follows, since we have $\mathcal{C}(M')=\{C-\{e\}: C\in \mathcal{C}(M)\}$, which means that the sets $C-\{e\}$ are all minimal. The latter equation follows from the fact that if $C\in \mathcal{C}(M)$ with $e\in C$, then $C-\{e\}$ is not contained in any circuit of $M$ other than $C$, since, as was discussed in the proof of Theorem~\ref{face}~(b), any circuit of $M$ contains $e$ if and only if it contains $f$. Indeed, in the critical case that $e\in C$, if $C-\{e\}\subseteq D$ for some $D\in \mathcal{C}(M)$, then $f\in D$, which yields $e\in D$. Therefore, $C\subseteq D$, and hence $C=D$, since both are circuits. 
Define the homomorphism $\alpha:S_{M'}\rightarrow S_M$ with \begin{displaymath} \alpha(x_C)= \left \{\begin {array}{ll} x_C&\mathrm{if}~~~f\notin C,\\ x_{C\cup \{e\}}&\mathrm{if}~~~f\in C, \end{array}\right. \end{displaymath} for any $C\in \mathrm{Cyc}(M')$. Then $\alpha$ clearly provides an isomorphism between $S_{M'}$ and $S_M$. Let $\{f_1,\ldots,f_k\}$ be a minimal homogeneous generating set for $I_{\mathrm{Cyc}(M')}$. We claim that $\{\alpha(f_1),\ldots,\alpha(f_k)\}$ is a generating set for $I_{\mathrm{Cyc}(M)}$. Then, it follows that $\mu(M)\leq \mu(M')$, since $\alpha$ is a homogeneous homomorphism of degree zero. First we need to show that $\alpha(f_j)\in I_{\mathrm{Cyc}(M)}$ for each $j$. Since $f_j\in I_{\mathrm{Cyc}(M')}$, we may assume that $f_j$ is a homogeneous binomial, namely $f_j=\prod_{i=1}^dx_{C_i-\{e\}}-\prod_{i=1}^dx_{D_i-\{e\}}$ for some $d$ and $C_i,D_i\in \mathrm{Cyc}(M)$. It follows that $\phi_{M'}(f_j)=0$, and hence $\sum_{i=1}^{d}\chi_{C_i-\{e\}}=\sum_{i=1}^{d}\chi_{D_i-\{e\}}$. This implies that the numbers of those $C_i$'s and of those $D_i$'s which contain $f$ are the same. Since exactly those $C_i$'s and $D_i$'s also contain $e$, it follows that $\sum_{i=1}^{d}\chi_{C_i}=\sum_{i=1}^{d}\chi_{D_i}$, which yields \[\alpha(f_j)= \prod_{i=1}^dx_{C_i}-\prod_{i=1}^dx_{D_i}\in I_{\mathrm{Cyc}(M)}, \] as desired. Next, assume that $\mathcal{G}$ is a generating set of homogeneous binomials for $I_{\mathrm{Cyc}(M)}$, and let $g\in \mathcal{G}$ with $g=\prod_{i=1}^dx_{C_i}-\prod_{i=1}^dx_{D_i}$ for some $d$ and some cycles $C_i$ and $D_i$ of $M$. Then $\alpha^{-1}(g)=\prod_{i=1}^dx_{C_i-\{e\}}-\prod_{i=1}^dx_{D_i-\{e\}}$, which is an element of $I_{\mathrm{Cyc}(M')}$ by a similar argument as above. Hence, we have $\alpha^{-1}(g)=\sum_{i=1}^{k}h_if_i$ for some $h_i\in S_{M'}$, which implies that $g=\sum_{i=1}^{k} \alpha(h_i) \alpha(f_i)$. Thus, $\{\alpha(f_1),\ldots,\alpha(f_k)\}$ is indeed a generating set for $I_{\mathrm{Cyc}(M)}$. This concludes the proof. 
\end{enumerate} \end{proof} \section{Cycle ideals generated in small degrees}\label{degree 2} In this section, cycle ideals of matroids $M$ with small values of $\mu(M)$ are considered. In particular, we discuss when cycle ideals are generated by quadrics. In the following, by~$K_n$ we denote the complete graph on~$n$ vertices. \begin{Lemma}\label{degree2} Let $M$ be a binary matroid. Consider the following statements: \begin{enumerate} \item[{\em(a)}] $\mu (M)\leq 2$. \item[{\em(a$'$)}] $M$ is $M(K_4)$-minor free. \item[{\em(b)}] $M$ is $M(K_4)$-g-series minor free. \item[{\em(c)}] $M$ has no $M(K_4)$ as a minor obtained by deletions, series contractions or coloop contractions. \end{enumerate} Then the following implications hold: \[ (a)\implies (b)\implies (c), \] and \[ (a')\implies (b)\implies (c). \] \end{Lemma} \begin{proof} Note that since $M(K_4)$ is self-dual, its cycle polytope is affinely isomorphic to the cut polytope of $K_4$. Therefore,~(a)~implies~(b), by Corollary~\ref{mu comparison}, since $I_{\mathrm{Cyc}(M(K_4))}$ has a minimal generator of degree $4$, see, e.g., \cite[Example~7.1]{RS}. The other implications simply follow from Definition~\ref{g-series minor-definition}. \end{proof} \begin{Remark}\label{F7-2} {\em Here, we discuss the statements of Lemma~\ref{degree2} in the cases of some well-known matroids. \begin{enumerate} \item If $M$ is a connected graphic or cographic matroid, then by \cite[Corollary~5.4.12]{Ox} and the fact that $M(K_4)$ is a self-dual matroid, it follows that~(c) implies~(a$'$). So~(b),~(c) and~(a$'$) are equivalent in these cases. But the next part shows that these equivalences do not hold in general. \item As we observed in Example~\ref{F7-1}, the cycle ideal of $F_7^*$ is zero. It follows from \cite[Proposition~6.4.8]{Ox} that $F_7$, and hence its dual $F_7^*$, are binary matroids. Therefore, $F_7^*$ satisfies conditions~(a),~(b) and~(c) in Lemma~\ref{degree2}. 
On the other hand, all contractions $F_7^*/e$, for any $e\in E(F_7^*)$, are isomorphic to $M(K_4)$, see, e.g., \cite[Example~1.5.6]{Ox}. Thus, $F_7^*$ does not satisfy~(a$'$). \end{enumerate} } \end{Remark} By a \emph{series-parallel network} we mean a 2-connected graph obtained from the complete graph~$K_2$ by subdividing and duplicating edges. It is clear that any series-parallel network is a planar graph. There are several ways to describe this class of graphs, see, e.g., \cite{Ep}. There are also some equivalent statements in terms of the graphic matroid, see, e.g., \cite[Corollary~5.4.12]{Ox}. Using the two latter descriptions, Engstr\"om showed in \cite{En} the following, which in particular proved \cite[Conjecture~3.5]{SS}. \begin{Proposition}\label{Eng} {\em (See} \cite[Corollary~2.8]{En}{\em )} Let $G$ be a series-parallel network. Then $\mu (M(G)^*)\leq 2$. \end{Proposition} In the next theorem we continue the discussion of Remark~\ref{F7-2}~(1) for graphic and cographic matroids. In the cographic case, this generalizes the main result of \cite{En} related to cut ideals, while the graphic case, related to Eulerian ideals, is new to the best of our knowledge. \begin{Theorem}\label{degree2-characterization} Let $M$ be a graphic or cographic matroid of a simple connected graph. Then the statements~(a), (b), (c) and (a$'$) in Lemma~\ref{degree2} are equivalent. \end{Theorem} \begin{proof} According to Lemma~\ref{degree2} and Remark~\ref{F7-2}~(1), it remains to show that the equivalent conditions~(b), (c) and (a$'$) imply~(a) in the graphic and cographic cases. First assume that $M=M(G)^*$ where $G$ is a simple connected graph, and assume that (a$'$) holds. Then, $M^*=M(G)$ is also connected and $M(K_4)$-minor free, by the self-duality of $M(K_4)$. Hence, by \cite[Corollary~5.4.12]{Ox}, it follows that $G$ is a series-parallel network. Thus, Proposition~\ref{Eng} yields $\mu (M(G)^*)\leq 2$, and hence~(a) holds. 
Next, assume that $M=M(G)$ where $G$ is a simple connected graph, and assume that (a$'$) holds. Then, \cite[Corollary~5.4.12]{Ox} implies that $G$ is a series-parallel network, and hence a planar graph. So, it follows from \cite[Corollary~6.6.6]{Ox} that $M$ is a cographic matroid. Therefore, the desired result follows from the previous case. \end{proof} \begin{Remark}\label{preconj} {\em \begin{enumerate} \item A graphic matroid $M(G)$ is a connected simple binary matroid for which the statements~(a), (b), (c) and (a$'$) in Lemma~\ref{degree2} are equivalent according to Theorem~\ref{degree2-characterization}, when $G$ is a connected simple graph. \item A cographic matroid $M(G)^*$ is a connected cosimple binary matroid for which the statements~(a), (b), (c) and (a$'$) in Lemma~\ref{degree2} are equivalent according to Theorem~\ref{degree2-characterization}, when $G$ is a connected simple graph. \item The matroid $F_7^*$ is an example of a simple and cosimple connected binary matroid that is not cographic and for which the statements~(a), (b) and (c) in Lemma~\ref{degree2} are equivalent, but (a$'$) does not hold, as we saw in Remark~\ref{F7-2}. \end{enumerate} } \end{Remark} Having Theorem~\ref{degree2-characterization} and Remark~\ref{preconj} in mind, we now pose the following conjecture: \begin{Conjecture}\label{degree2-Conj} Let $M$ be a connected binary matroid which is simple or cosimple. Then the statements~(a), (b) and (c) in Lemma~\ref{degree2} are equivalent. \end{Conjecture} The next lemma also gives partial information about cycle ideals generated in degree at most~5. \begin{Lemma}\label{higher degrees} Let $M$ be a binary matroid. Consider the following statements: \begin{enumerate} \item[{\em(a)}] $\mu (M)\leq 5$; \item[{\em(b)}] $M$ is $M(K_5)^*$-g-series minor free; \item[{\em(c)}] $M$ has no $M(K_5)^*$ as a minor obtained by deletions, series contractions or coloop contractions. 
\end{enumerate} Then the following implications hold: \[ (a)\implies (b)\implies (c). \] \end{Lemma} \begin{proof} It follows from \cite[Table~1]{SS} that $\mu(M(K_5)^*)=6$. Therefore,~(a)~implies~(b), by Corollary~\ref{mu comparison}. The other implication is clearly obtained from Definition~\ref{g-series minor-definition}. \end{proof} It would be interesting to provide certain classes of binary matroids for which the statements~(a),~(b) and~(c) in Lemma~\ref{higher degrees} are equivalent, or to give any explicit characterization of matroids $M$ with $\mu(M)\leq 5$. Here, we pose the following conjecture: \begin{Conjecture}\label{conj-degree5} Let $M$ be a connected binary matroid which is simple or cosimple. Then the statements~(a), (b) and (c) in Lemma~\ref{higher degrees} are equivalent. \end{Conjecture} Note that in the case of cographic matroids of a simple connected graph (i.e., the cut ideals case), Conjecture~\ref{conj-degree5} was essentially stated in \cite[Conjecture~3.6]{SS}. In the latter paper, the authors even suggest $\mu (M(G)^*)\leq 4$ instead of $\mu (M(G)^*)\leq 5$. Also, note that \cite[Conjecture~3.6]{SS} was stated in terms of arbitrary minors of graphs, not of special types of minors. Indeed, it is easily seen that if a simple graph $G$ has a complete graph~$K_n$ as a (graphical) minor, then it can be obtained only by contraction of edges, deletion of loops or deletion of multiple edges (as well as removal of isolated vertices). This means that the corresponding cographic matroid $M(G)^*$ has $M(K_n)^*$ as a minor which is obtained only by deletions, series contractions or coloop contractions (i.e., the three conditions mentioned in Lemma~\ref{higher degrees}~(c)). Seeing Theorem~\ref{degree2-characterization} as well as the two conjectures posed in this section, it is natural to ask whether there are examples of g-series minors which cannot be obtained only by deletions, series contractions or coloop contractions. 
We end this section with such an example: \begin{Example}\label{g-series not series} {\em Let $M$ be the Fano matroid $F_7$. Using the notation of Definition~\ref{binary matroidal retract-def}, let $E=\{e_1=4,e_2=5,e_3=3\}$ and $E'=\{e'_1=1,e'_2=2,e'_3=6\}$. Recall the circuits of $F_7$ from Example~\ref{F7-1}. We have that $E'=C_1$ is a circuit of $M$. It is easily seen that the desired conditions of Definition~\ref{binary matroidal retract-def} are satisfied. The circuits of $M/E'$ are as follows: \[ \{3,5\}, \{4,7\}, \{3,4\}, \{5,7\}, \{4,5\}, \{3,7\}. \] Thus, $M/E'$ is a binary matroidal retract of $M$, and hence a g-series minor. One observes that this minor cannot be obtained only by deletions, series contractions or coloop contractions. Indeed, $M\setminus E'$ is different from $M/E'$, since its only circuit is $\{3,4,5,7\}$. On the other hand, it can be seen that if one replaces any of the three deletions in $M\setminus 1\setminus 2\setminus 6$ by contractions, none of the resulting contractions is a series contraction or a coloop contraction. } \end{Example}
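As an independent check (ours, not part of the paper), the circuits of $F_7/E'$ can be computed from the standard description of the circuits of a contraction $M/T$ as the minimal non-empty members of $\{C-T : C\in\mathcal{C}(M)\}$:

```python
# Sketch (ours): compute the circuits of F_7/E' with E' = {1,2,6}, using
# the minimal non-empty sets among {C - E' : C a circuit of F_7}.
E = frozenset(range(1, 8))
lines = [frozenset(s) for s in
         [(1, 2, 6), (1, 3, 5), (2, 3, 4), (2, 5, 7),
          (3, 6, 7), (1, 4, 7), (4, 5, 6)]]
circuits_F7 = lines + [E - L for L in lines]   # the 7 lines and their complements
Ep = frozenset({1, 2, 6})                      # E' = C_1

cand = {C - Ep for C in circuits_F7} - {frozenset()}
circuits_quotient = {C for C in cand if not any(D < C for D in cand)}
print(sorted(sorted(C) for C in circuits_quotient))
```

The output consists of the six 2-element subsets of $\{3,4,5,7\}$, that is, all elements of $F_7/E'$ become pairwise coparallel in this rank-one quotient.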
{ "redpajama_set_name": "RedPajamaArXiv" }
9,351
CS Năvodari is a Romanian rugby union club based in Năvodari, Romania. It was founded in 2007 and is currently playing in the Liga Națională de Rugby. Before entering the new league format, Navodari won in 2019 the Romanian second tier league, the Divizia Nationala de Seniori. References External links Liga Nationala de Rugby link Romanian rugby union teams
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,231
Shell Case Shorts Winner The time has come to announce the very first winner of the Shell Case Shorts writing competition. There were 10 entries in total which isn't a bad start to what will be a regular feature on The Shell Case. Although the entries were of a very high standard, for me, there was one story that stuck with me even after I read it and that was The Bone Carver by Patrick Burdine, aka @somnicidal. His Warhmmer 40,000 story wasn't your typical slaughter-fest but it was well written, well paced and compelling from start to finish, so I'm pleased to say that he will be receiving a signed copy of The Gildar Rift by the lovely Sarah Cawkwell. And a massive thank you to her for agreeing to provide the prize. Special commendations must go to James Wilson (@JamesMEWilson) for his Dystopian Wars story Traitor, and Michael Barnes (@elblondino) for his Warhammer 40,000 story Escape From Madness. Their stories will be included in the Shell Case Shorts anthology released the beginning of next year. The next competition will open on February 1st so keep your eyes peeled. But for now, please enjoy the winning entry… The Bone Carver by Patrick Burdine A gust of wind shoved the old man like a belligerent drunk. He staggered back and slipped to his knees in the deep crust of snow. Rime caked his beard, the frost turning his graying red hair even lighter. Clusters of hair had frozen together like dreadlocks on both his head and beard. One of his eyes was covered with an old leather patch. His bushy eyebrows had none of the gray strands which wove through his beard and hair and glowed like embers though one was half hidden under the patch. Leaning his weight onto his walking staff he rose and looked up at his destination. The cave stood out against the white crested mountain like a black lightning strike frozen in time. An avalanche had revealed the cave just a week ago as the old man had predicted. His Vision was almost always true. It was close now. Less than a mile. 
He turned and looked back down at the village which had been his home for these last months. The wind spun the spiraling black smoke of the cooking fires like a dancer led by a furious partner. He knew that soon the smoke would vanish and snow would bury the entire village as surely as any grave digger. In his mind's eye he pictured the village as he had left it. The bodies lay where they had fallen, though he had visited every single one of them, taking the talismans which filled the wolf-bladder sack hanging from his belt. The blood from the bodies of the villagers had begun crystallizing even before he left. When the weather began to turn and the ice thawed, run-off from the Spring break-up would sweep away the structures. To anyone who noticed, Fireholme was just another casualty of the Fenrisian winter. This brought to mind a Fenrisian proverb: "An oath written in snow will melt in the Spring." His own oath didn't last even that long.

A fierce howl brought him from his reverie. The wolf was tall, even by Fenrisian standards, though it was painfully thin. The bones of its ribs stood out like icicles hanging from a bony spine. Like the old man, one of the sockets which should have contained an eye was as black and hollow as the cave behind it. Its fur was matted and there were long jags of scar where the fur refused to grow. It howled again and this time the man heard the discordant notes of fear and desperation. And under it all, hunger. The smell of the meat in the sack at his belt had summoned the wolf. Or perhaps it had stumbled upon the cave and intended it to be a tomb where it could lay down and die, and it resented this intruder. In any case, its hackles were up and its teeth bared. Despite the threat, or perhaps because of it, the old man felt an immediate kinship with the wolf. He kept his eye on the wolf but slid his pack off of one of his shoulders.
He felt through his pack and pulled out a slab of smoked meat, gifted by Vala Vendotter just last night at his Moving On celebration. He threw the food on the ground as far from himself as he could. The wolf crept toward the meat and though its tail was low its predatory eyes never left the old man. The wolf gulped the smoked meat down in two quick bites. The wolf growled at the old man. It seemed to be weighing its hunger for fresher meat against the smell of power surrounding the old man. The old man raised his staff over his head and threw back his head with a howling cry. He pointed back down at the village with his staff and the wolf set off down the hill at a lope. The beast couldn't understand how it knew, but its mouth began to water at the prospect of the meat that the stranger's howl had promised. It would gorge and then, perhaps, pay the old man a visit in the night when the man-things were most vulnerable.

The man watched the wolf as it slipped and tumbled in the snow and then righted itself and kept running. He smiled, imagining that the wolf had looked just the same when it was playing as a pup, and then turned back toward the cave. The wolf might be back and it might not. One might be able to touch the mind of a beast, but one could never understand it.

The old man stopped at the entrance to the cave. He took a deep breath and tasted sulphur on the air. This, then, must be a vent for one of the many volcanoes nestled within the mountain ranges of Fenris. Wind had piled snow up into the cave for several feet but the old man walked into the darkness until he felt solid stone under his feet. He stomped his feet and shook his head and snow fell down like dandruff. He took the pack off his back and pulled out the two fire logs he had brought with him from the village. He set them at his feet and unwrapped the emberstone from the oiled kraken skin that kept its heat contained. It glowed warm in his palm and would soon be hot enough to sear him.
He used the feeble light it gave off to build a small fire pit from the rocks strewn about on the floor. He added two small rows of stones and laid the fire logs on top of them. He stuffed some kindling into the gap under the logs and slid in the emberstone. It began to glow more brightly as it activated and the cave walls flickered as shadows sought what shelter they could from the hungry light. The old man took off his heavy traveling cloak and laid it on the ground near the fire. Hopefully it would dry by the time he needed to use it as a makeshift bed. He found a largish stone and moved it in front of the fire to use as a seat and found another that he set up as a work area. Satisfied with his arrangements he unstrapped the large pot that he had bound with sinew to the outside of his pack. He took the pot to the front of the cave and scooped it full of snow. He packed it down with his fist and added more on top, which he packed down again. He spared a quick glance for the lone wolf but even its paw prints had been swallowed by the storm. He returned to the fire and set the pot on one of the rocks of the flame pit and the snow quickly began to return to water. He removed an iron knife from his belt and set it on the makeshift table and sat down.

He took the sack off of his belt and squeezed it gently. The trophies inside had frozen together on the walk, sealed, no doubt, by icy chains of blood, and felt like a massive lumpy ball. He hit the bag firmly on the ground and he could tell by how it flattened out that many of the chains had been shattered. The warmth near the fire would thaw the rest. He reached into the sack and pulled out a handful of fingers like a fisherman reaching into a pail of worms. He set them on the rock table and picked one up to inspect. He felt the calluses and though rigor mortis had tried to make it curl, the arthritis swelling the knuckles had stymied that motion in death as surely as it had in life.
The finger likely belonged to one of the three elders of the village and that was certainly a good sign. He pulled out two more fingers. It was best to do three at a time. The second one was also callused though he could still feel the greasy sheen of seal fat. The woman had tried to keep her hands supple despite the hard labor of her life. He reached in to complete the first and most important trinity of grisly offerings. The final one belonged to a child. The fates were indeed pleased. The seasons of life were each represented. He took up the knife and began sawing through the joints and separating the knuckles one by one and then tossing them in the pot to boil off the flesh. He continued in clusters of threes, sometimes seeing some mark which identified the owner – here was Ulf Seawarder, his third finger halved by a predator fish tangled in his net – here was Girda Vulfwife, flesh scarred by a fire that had claimed her husband. The pot was soon full and his sack empty.

He watched the roiling water as the flesh and fat peeled off the bone, and several times the old man carried the pot out of the front of the cave. He sloshed off the floating meat and much of the water and then repacked the pot with snow. He did this for several hours before the bones were clean. He was exhausted but knew that he couldn't sleep before he was finished. His time on this world was almost over and he had much work to do. He drained the water from the pot and set it to cool and took out the knuckles. He picked a suitable one and began to use his knife to carve runes in ancient Fenrisian. Each bone got a single rune. The knife would occasionally slip, drawing blood from the old man and ruining the rune, but that was why he had collected all of the fingers, not just enough for the hundred or so knuckles he needed. He worked through the night and as the fire began to burn low he noticed that there was enough light coming in from the mouth of the cave to see.
He decided to take a quick break and pulled a salted strip of fish from his pack and walked to the entrance of the cave. The snow had stopped falling sometime during the night. He was surprised to see the one-eyed wolf curled up in front of the cave. The wolf had obviously eaten the snow where the old man had been dumping the refuse from the pot. It raised its head to look at him and then smiled as wolves do, its long pink tongue lolling wildly. The old man took a final bite of the fish and tossed the little bit that was left to the wolf, who snatched it out of the air and then laid his head back down.

The old man returned to the charcoal that remained of his fire pit. It was still giving off a bit of warmth as the man completed his work. He inspected each of the runes looking for the tiniest of flaws but was unable to find even one. He filled the sack with the runed knuckle bones and tied it off with the same sinew with which he had bound the pot to his pack. He found a crevice big enough for a single person to shelter in within the cave and tucked the runes in the far corner. He then wrestled one of the stones from the fire pit over into the crevice and used it to shelter the runes. He knew it would be a very long time before the runes were destined to be discovered by an aspirant to the Space Wolves but he didn't want a curious animal to thwart his hard work and planning.

Finally the old man laid down his staff and the rest of his belongings near the fire pit. Clad in a simple woolen shift he walked out of the cave for a final time. The wolf raised its head questioningly as the old man walked over to it. It raised its lip in a snarl but didn't growl. The old man placed his hand on the wolf's head – he felt it only proper to reward its loyalty. He spoke a word of power and the wolf stiffened as eldritch forces flowed through it. "Guard this place. Wait for him to come. No new scars will mar you, though the elder ones will mark you."
A new light glowed in the wolf's eye as its sentience shifted and something ancient took hold. His work done, his vision made manifest and a trap set, Magnus the Red spoke a final word of power to shed the form he had assumed and return to his home in the Warp.

January 30, 2012 March 7, 2012 by Phil

Inspired by @jraferguson I have decided to launch the first ever Shell Case Shorts writing competition. Simply enough, it is a fan fiction writing competition. All you need to do is write a 2,000 word short story set in your favourite tabletop wargame universe that captures the essence of that universe whilst still delivering an exciting/interesting story. You've got 3 weeks to get something down, after which the submissions will be read by me and a single winner chosen. The prize will be a signed copy of The Gildar Rift by Sarah Cawkwell. It is my hope to run a few of these over the year and then take all the winners' pieces as well as the honourable mentions and put them into a Shell Case Shorts Anthology available for free download. Send your entries to phil@theshellcase.com. Rules are below and good luck.

The rules are simple:
1 submission per person.
Stories must be 2,000 words (+/- 10%).
All submissions must be fan fiction based on an established wargaming IP e.g. Warhammer, 40k, Warmachine etc.
Work believed to be plagiarised will be disqualified.
All submissions must be sent as a Word document attached to an email.
Submissions must include at the top of the first page: the entrant's name, a contact email address and the title of the story (and Twitter name if applicable).
All submissions must be received by noon on the 22nd January. Submissions received after this will not be considered.
1 winner will be chosen and notified by email. No discussion will be entered into; my decision is final.
The prize may not be exchanged for its cash value, and no alternative will be offered.
January 2, 2012 January 5, 2012 by Phil

Sixty Seconds

Earlier in the year, before this humble blog was founded and before I started to explore new game systems, I entered a short story competition at my local Games Workshop. The criteria were simple: 1,000 words and it had to be set in the Warhammer 40,000 universe. Long story short: I won, so I thought it'd be fun to share with you what I wrote…

A siren blared, giving the boarding party its thirty-second warning. Grav-harnesses wound down into their primed positions, explosive bolts arming with a click. Crewman first class Elijah Neilson spared a glance down the length of the Shark assault boat before bracing himself. A violent jolt pushed Neilson into his harness as the craft's retros fired at full burn. Neilson spared the crewman opposite him one last look before burying his chin in his chest and clinging onto his shotcannon for all he was worth. The impact was as if the God-Emperor himself was trying to undo the fabric of creation. The jarring pain subsided as Neilson felt the stimm-stick in his harness jab into the back of his neck, flooding his body with stimms to counteract the shock of the impact and combat drugs to heighten his reflexes and sharpen his mind to dangerous levels of hyper stimulation. Thunder rolled through the boarding craft as forty sets of explosive bolts blew out, snapping the grav-harnesses up into the ceiling, and the hatch charges at the front of the craft blasted outwards into the enemy ship as an exclamation point to follow the roar of an angry, immortal god.

Neilson was on his feet, moving to the breach, shotcannon clutched to his chest. He reached down and set his chrono counter spinning down. Sixty seconds. That's how long it took for a boarding party to clear the breach. Any longer and the initiative would be lost and they'd be slaughtered in the corridors. He knew the drill; primary targets were the main magazine and the reactor core.
Keep moving, move quickly and do as much damage as you can on the way. Neilson reached the breach. Beyond was smoke, darkness and flickering lights. He gritted his teeth, pulled down the firing stud of his shotcannon and jumped through. Flame, hot lead and noise preceded Neilson as his armoured boots rang against the deck plating, pitching shapes in the gloom with howls of agony. He continued to unload shot after shot into the oncoming enemy, the cannon bucking in his hands as it belched flame. Neilson glanced around as he killed; the flare of his cannon and the flicker of failing overhead lights revealing his surroundings. Of all the nightmares that Neilson had jumped into this was the worst. Every surface and wall was covered in blood and viscera. Indeterminate body parts littered the floor or hung amid spilled cables from ruptured plating. Such was the violence of their arrival that they had pulped the heretics who had been stationed in this section of the ship. Neilson kept firing as the rest of the boarding party disembarked and set about the gruesome business of war.

Within moments the enemy counterattacked; a surge of debased humans brandishing cruel blades and ancient pistols sprang from the darkness screaming foul oaths that made Neilson's stomach turn. Volleys of cannon fire tore into the cultists, blasting bodies from their feet, ragged holes blown in chests or limbs torn from torsos in vivid red sprays. Dozens fell, but their numbers were many and soon sickle blades glinted red in the failing light amidst screams of the dying. The boarding party was getting penned in even as they attempted to drive the enemy back, desperately trying to break out before they were overrun. Neilson let a spent ammo drum drop from his cannon and expertly slotted a fresh one home, racking back the slide as he watched a rating drive the butt of his cannon into the face of a cultist.
Its face caved inwards and it fell to the ground, the rating putting a shell in its head for good measure. The man was possessed, driven over the edge by the combat drugs. He deftly unloaded his cannon one-handed into a squad of warriors as they rounded the corner, bellowing the litanies of hate. He failed to see the cultist hierarch in the shadows. The wiry thin wretch jumped from the gloom, a wicked knife clutched in a pallid hand. The rating's furious recital was cut short as he fell in a spray of his own vital fluids. Neilson reacted, putting a shell into the hierarch, but not before it sprang forward and took the head off the shoulders of a second rating in a single swipe of a blade that now crackled with dark energies. Both bodies tumbled to the floor amidst a slick of gore.

The corridor was filling with smoke; fires were burning from the shattered remains of terminals. All around Neilson shapes were moving, stabbing, fighting, shouting and dying, back lit by yellow flashes of gun fire. Sweat drenched Neilson's face and stung his eyes as he aimed and fired, aimed and fired, aimed and fired. Every shot found a target. Every shot killed. But for every heretic that fell there was another to fill its place. Time had started to slow. He watched a seventeen year old boy lead a break out against a unit of cultists dug in behind a hastily constructed barricade, as if he could reach out and stop the boy's mad charge. All around the boy men died, their bodies riddled with las fire; the boy alone made it to the position and lived long enough to yank the arming pins from the grenades hooked to his webbing. The blast blew out the bulkhead and momentarily exposed the corridor to hard vacuum before containment fields snapped into place. Neilson had less than a heartbeat to watch the remaining twenty souls of the boarding party, and fifty heretics, get blown into space, locked in a death struggle even at the end. His watch timer pipped. Sixty seconds.
He was dimly aware of warm breath on his neck.

On the bridge of the Imperial cruiser Indomitable Will, amidst the displays and tactical readouts, Master of Ordnance Archibald Drake was overseeing orders that would see great misery brought down upon his foes with the God-Emperor's own thunder. He stood at his station willing the gun crews to reload faster as the ship rocked beneath another enemy fusillade. An indicator light flashed from green to red, catching his eye. Boarding craft SABM5443 was registering zero life signs. Drake noted the loss of all forty souls including its four senior crewmen in the ship's logs before returning to his duties.
Q: Alternatives to Bonjour for Windows I'm looking to implement a ZeroConf application for Windows. I've noticed Bonjour and Mono.Zeroconf but was wondering if there were any decent alternatives?

A: There is a Python implementation of zeroconf called pyzeroconf (https://github.com/paulsm/pyzeroconf) that is pretty useful, though it has some bugs you will have to fix yourself, and it does not implement IPv6.
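Whichever library you pick, they all speak the same mDNS/DNS-SD protocol underneath: discovery starts with a multicast PTR query to 224.0.0.251:5353. As a rough, stdlib-only sketch of what that looks like on the wire (the service name below is just an example, not tied to any particular library):

```python
import struct

MDNS_GROUP, MDNS_PORT = "224.0.0.251", 5353  # IPv4 mDNS multicast endpoint

def build_mdns_query(service: str) -> bytes:
    """Build a minimal mDNS PTR question for a DNS-SD service name."""
    # DNS header: ID=0, flags=0, 1 question, no answer/authority/additional records
    header = struct.pack(">HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE=PTR (12), QCLASS=IN (1)
    return header + qname + struct.pack(">HH", 12, 1)

# A browser would send this datagram over UDP to (MDNS_GROUP, MDNS_PORT)
# and listen for PTR answers naming individual service instances.
query = build_mdns_query("_services._dns-sd._udp.local.")
```

Libraries such as pyzeroconf wrap exactly this kind of packet handling, plus the response parsing, caching and socket management that make it usable in practice.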
Q: GT 710 good with Far Cry 3, stutters in Virtual DJ I know the GT 710 is an office GPU and isn't meant for gaming, and it is weak even paired with my i5-2400. Still, it struggles with light applications like Virtual DJ 8: when I drag a video or music track onto a deck, the GPU renders the graphical elements on each deck very slowly. The waveforms crawl, unlike when I use the Intel HD Graphics to drive them. What confuses me is that a graphics-intensive title like Far Cry 3 runs smoothly when the card is overclocked (+200MHz memory and +300MHz core with MSI Afterburner), yet with the same overclock applied and Virtual DJ restarted, the music/video lag persists. The lag affects every element the GPU should render in Virtual DJ: the waveforms, the spinning disc icon and the video playback. It's ironic that Far Cry 3 is far heavier graphically than Virtual DJ 8, yet Far Cry 3 benefits from the overclock and gives me better FPS at 1080p on medium settings. So I want Virtual DJ to use the Intel HD Graphics for all its GPU-related tasks, but I don't know how. If I could boot with both drivers active, so that program A renders with the Intel HD Graphics and program B renders with the nVidia GPU, that would be ideal. But I assume that on Windows only one graphics driver can be used with a single monitor. So please help me fix this Virtual DJ stutter. This should not happen with an nVidia card that benchmarks well above an Intel HD Graphics 2000 (check UserBenchmark); the Intel HD Graphics is good with light applications but very poor at gaming, and it doesn't even support DirectX 11.
The nVidia GPU is good with heavy titles (pardon me in this context; Far Cry 3 is more graphics-intensive than Virtual DJ) but stutters in Virtual DJ, so this whole GPU/CPU theory seems messed up. All that said, an overclock of +400MHz on the GT 710's memory clock stops the stutter and Virtual DJ responds to it, but then Far Cry 3 shows tearing and FPS loss. Please help me fix this.

A: (Warning: long-term overclocking may be harmful to your GPU.) I understand that you wish to bind a specific application to one of your GPUs. In a recent enough Windows 10, you may do it this way:

1. Go to Settings > System > Display.
2. Click Graphics settings.
3. In the Graphics settings window, set the drop-down to "Desktop app".
4. Click Browse.
5. Find the .exe of the program and click Add (ensure that you are selecting the game program itself and not just its launcher).
6. Once the program is added to the list, click it.
7. Select Options.
8. Choose one of the following GPU preferences:

System default: let Windows decide the best GPU for your application.
Power saving: request that the application run on the most power-saving GPU available.
High performance: request that the application run on the most high-performance GPU available.

The GT 710 would likely be the "most high performance GPU available", so for Virtual DJ you would pick "Power saving" to get the integrated Intel GPU. For more details and screenshots, see the article How to Set Preferred GPU for Apps in Windows 10.
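If you would rather script that Settings toggle than click through the UI, recent Windows 10 builds appear to store these per-app choices as string values under HKCU\Software\Microsoft\DirectX\UserGpuPreferences. Both that registry path and the Virtual DJ install path below are assumptions, so verify them on your own system first. A small sketch that builds the equivalent reg.exe command:

```python
def gpu_preference_command(exe_path: str, preference: int) -> list[str]:
    """Build a reg.exe command that pins one application to a GPU preference.

    preference: 0 = system default, 1 = power saving (usually the integrated
    Intel GPU), 2 = high performance (usually the discrete card).
    Assumption: the registry location below is where the Windows 10
    Graphics settings page records its per-app choices.
    """
    if preference not in (0, 1, 2):
        raise ValueError("preference must be 0, 1 or 2")
    key = r"HKCU\Software\Microsoft\DirectX\UserGpuPreferences"
    return [
        "reg", "add", key,
        "/v", exe_path,            # value name is the full path to the .exe
        "/t", "REG_SZ",
        "/d", f"GpuPreference={preference};",
        "/f",                      # overwrite without prompting
    ]

# Hypothetical install path: ask Windows to run Virtual DJ on the
# power-saving (integrated) GPU, leaving games on the GT 710.
cmd = gpu_preference_command(r"C:\Program Files\VirtualDJ\virtualdj.exe", 1)
```

Running the resulting command on Windows (e.g. via subprocess.run(cmd)) should have the same effect as choosing "Power saving" in the Settings UI, which on this machine means the Intel HD Graphics for Virtual DJ.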
Robert De Niro talks Scorsese's 'The Irishman,' calls Trump 'a joke' after 'Joker' screening

Robert De Niro is unamused by reports that President Donald Trump screened his new movie "Joker" at the White House and enjoyed it. "This administration is a joke," says the actor, one of Trump's most vocal critics, who plays a pivotal role in the divisive Batman villain origin story starring Joaquin Phoenix. "We've hopefully got to get past it and out of it. It's not good." Otherwise, the two-time Oscar-winning star of "Raging Bull" and "The Godfather: Part II" has little interest in discussing politics. After all, he's getting some of the best reviews of his nearly six-decade career for mob epic "The Irishman," which begins streaming on Netflix Wednesday and reunites him with longtime collaborators Martin Scorsese ("Taxi Driver"), Al Pacino ("Heat") and Joe Pesci ("Goodfellas").

"Irishman" is something of a passion project for De Niro, 76. The sprawling 3½-hour drama is based on Charles Brandt's 2004 nonfiction book "I Heard You Paint Houses" about Frank Sheeran, a truck driver-turned-hit man who worked closely for decades with mobster Russell Bufalino (Pesci) and Teamster Jimmy Hoffa (Pacino). De Niro read the book in 2007 and was immediately drawn to Frank, a quiet, mostly reactive character in the film, who keeps his head down and follows orders. But his criminal career ultimately costs him his family and friends, and Frank is left feeling regretful and alone as an old man. "All that was in the book: the descriptions Frank had given, the situations, the circumstances," De Niro says. "The dialogue was all very real to me. There are some people who have said, 'Well, that didn't really happen (in real life).'
That's fine, because as Marty said, it's a movie and this is the story we are telling."

Pacino recently told USA TODAY that making "Irishman" brought up bittersweet feelings about his own aging and legacy, which De Niro echoes. "Sure, how could it not?" he says. "That's part of the attraction of the story: getting older, life going on and everything that happens." "Irishman" follows Frank over roughly 60 years, from his mid-20s to early 80s, which required costly de-aging technology to give De Niro and his co-stars the appearances of younger men. At one point, producers discussed using younger actors in the movie's first hour, but Scorsese and De Niro were adamant about wanting the film to be as ambitious as it was entertaining and emotional. "We were all excited about doing this de-aging thing because we could play the characters throughout, from beginning to end, and that was good," De Niro says. To achieve it, the visual effects team "asked me to do a test of a scene I did in 'Goodfellas,' and they were comparing and going off that to (model) how I would look."

After "Irishman" was dropped by Paramount in 2017 because of its hefty price tag, Netflix bought the rights and financed its $150 million budget, and De Niro helped convince friends Pacino and Pesci to co-star. "I don't think it would've gotten made without Bob's persistence," producer Jane Rosenthal says. "Every time he would be promoting another movie (and journalists would ask), 'Will you and Marty Scorsese ever work together again?' – he would bring up this project."
His tenacity will likely pay off: On awards site Gold Derby, De Niro is near-unanimously predicted to receive his sixth best actor Oscar nomination for the performance, which was praised by USA TODAY movie critic Brian Truitt for its "quiet vulnerability." Netflix has launched a splashy theatrical run and robust awards campaign for "Irishman," which has a strong shot at becoming the streaming service's first best picture Oscar winner after Alfonso Cuaron's "Roma" came close earlier this year. Working in this movie's favor are its familiar genre, rapturous reviews (96% fresh on Rotten Tomatoes), and beloved roster of Oscar-winning talent in front of and behind the camera.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,816
This page displays sold house prices for Rupert Square in Reading. Rupert Square in Reading RG1 consists predominantly of flats. Properties on Rupert Square typically have values around under £100,000, ranging up to around £100,000 for top end flats. Map showing Rupert Square in Reading.
{ "redpajama_set_name": "RedPajamaC4" }
3,060
California Implements Extreme New Sex Ed Curriculum Mary Margaret Olohan / @MaryMargOlohan / July 09, 2019 New pieces of education legislation in California mandate that school districts require sex ed and encourage students to question their parents on sexual topics—topics explored in the kindergarten through 12th grade sex education curricula. (Photo: Muni Yogeshwaran/Getty Images) The California Board of Education implemented progressive sex and gender education curriculum in public schools across the state, regardless, in some cases, of parental knowledge or consent. Progressive groups, including Planned Parenthood, collaborated on AB-329 in 2016 and the recently introduced Health Education Framework in May as highlighted by a video created by the conservative group Our Watch. Both these pieces of education legislation mandate that school districts require sex ed and encourage students to question their parents on sexual topics—topics explored in the kindergarten through 12th grade sex education curricula implemented in California schools. Lawmakers Create the California Healthy Youth Act, a Bill Mandating K-12 Sex Ed AB-329, otherwise known as the California Healthy Youth Act, was created in 2016 and has several aimed purposes. The bill aims to teach K-12 students how to ward off HIV and other STDs; to teach "healthy attitudes" toward sexual orientation, gender, and relationships; and to "promote understanding of sexuality as a normal part of human development." The bill also promises to "provide educators with clear tools and guidance to accomplish that end." AB-329 allows for parents to opt their children out of sexual education. However, the bill prohibits parents from opting their children out of materials that discuss gender, gender identity, gender expression, and sexual orientation. The law also prohibits abstinence-only education and prohibits any discussion of religious doctrine, according to an ACLU handout. 
The handout adds that beginning in seventh grade, children must be taught "all FDA-approved methods preventing pregnancy and transmission of HIV and other sexually transmitted infections (including condoms, contraceptives, and antiretroviral treatment) and abstinence." Educators Must 'Affirmatively Recognize Different Sexual Orientations and Be Inclusive' The California Board of Education introduced the Health Education Framework in May—a curriculum on sex education that some California parents found troubling, as the Christian Post reported in May. The Health Education Framework affirms language in AB-329 and included books and supplemental materials such as the Amazon bestseller "S.E.X.: The All-You-Need-to-Know Sexuality Guide to Get You Through Your Teens and Twenties," a book that describes sexual activity and gender theory. The California Board of Education removed this book and several others from the curriculum after outrage from Californian families, as reported by the Christian Post and reflected in the Health Education Framework. The Health Education Framework notes that as AB-329 orders, teachers must "affirmatively recognize different sexual orientations and be inclusive of same-sex relationships in discussions," and "teach about gender, gender expression, gender identity, and the harm of negative gender stereotypes." Board members for the Health Education Framework included school district representatives, teachers, and academics from across California as well as a school nurse. The director of community education and outreach at Planned Parenthood, Amy Streavel, was also on the board, according to the California Department of Education. A spokeswoman for the California Department of Education referred The Daily Caller News Foundation to the sections of the California Education code on a parents' right to opt their child out of sex ed and the primary purposes of the California Healthy Youth Act when asked to comment. 
She did not respond when pressed for further comment. Planned Parenthood did not respond to requests for comment from The Daily Caller News Foundation. Parents React to Positive Prevention Plus California parent John Andrews of the Murrieta School District said that schools in his district are using Positive Prevention Plus Sex Ed Curriculum, a curriculum that contains explicit photos and drawings of sexual activity. "They talk about anal and oral sex as an alternative to regular sex because you can't get pregnant," Andrews said in a June video posted June 26 by the conservative group Our Watch. The video generated no local or national media coverage until a tipster alerted The Daily Caller News Foundation. "They talk about mutual masturbation," he added. "They discuss gender roles, the gender spectrum, and in the support materials … they take it even further. They discuss everything, topics like roleplaying for different genders, blood play, dental dams … fisting is mentioned. I mean, they mention it all." Screenshot of fifth grade materials included in Positive Prevention Plus provided to The Daily Caller News Foundation by Pastor Tim Thompson. "If I were to show that material to a child, I would be brought up on charges," Andrew said. "But somehow our public schools are allowed to teach this to junior high and high school kids." The curriculum describes itself as "California's best source for evidence-based instruction in Comprehensive Sexual Health Education and Teen Pregnancy Prevention." It also boasts full compliance with California and National Health Education Standards and California Education Code, including the "California Healthy Youth Act." Positive Prevention Plus was begun as early as 1993, according to the curriculum's website, in order to develop an HIV and AIDS prevention curriculum. But California Education codes instituted in 2004 began specifying "the content of teen pregnancy prevention education." 
Research findings included in the curriculum show that use of Positive Prevention Plus results in students' higher use of "reproductive health care services," more use of contraceptive services, and significant improvements in "the delay in the onset of sexual activity."

Screenshot of Table of Contents for Teacher's Use in Positive Prevention Plus curriculum.

The ACLU Trains Teachers to Bypass Parental Authority

The June Our Watch video shows a variety of factors involved in California's progressive sex ed programs. Pastor Tim Thompson told The Daily Caller News Foundation that he published the video through Our Watch to help make parents more aware of how progressive the California sex educational programs are.

"We knew parents had to see for themselves or else they weren't going to believe it," Thompson told The Daily Caller News Foundation.

The video depicts ACLU staff attorney Ruth Dawson instructing teachers on how to help students obtain abortions without parental knowledge or consent.

"Regardless of how old a student is, they can walk into a doctor's office and consent to these services without parental consent," says Dawson, according to footage from the video, referring to abortion when she said "these services." She was initially misidentified in the video.

The ACLU attorney notes that these services include pregnancy and prenatal care, contraception, emergency contraception, and abortion. "And for these there is no parental notification."

"I think a good way to think about all these services that California has decided are so important that we are going to allow minors to go into a doctor's office and consent to these services," Dawson added. "Because they are just that important and students need to be able to access them."

The ACLU said in a statement to The Daily Caller News Foundation that all statements made by ACLU representatives during the meeting are "in accord with California law" and claims the video was doctored.
However, when pressed on the matter, the ACLU did not comment on what aspects of the video were doctored.

Activists and Experts Weigh In

Mary Rice Hasson, attorney, Director of the Catholic Women's Forum, and author of "Get Out Now: Why You Should Pull Your Child from Public School Before It's Too Late," believes that most parents do not understand what their children are being exposed to—and often being exposed to without parental permission.

"The California sex and gender 'health' curriculum shows kids explicit images, normalizes kinky and perverse sexual activity, and teaches kids that their basic identity—as male or female—is something fluid or changeable," Hasson told The Daily Caller News Foundation, saying that schools see parents as "obstacles or barriers to their efforts to indoctrinate an entire generation."

"Parents—especially religious parents—are portrayed as ignorant or untrustworthy when it comes to issues of sexual identity or activity—as if only the schools can be trusted to 'protect' kids and teach them all about," Hasson said.

Parental Rights in Education Executive Director Suzanne Gallagher told The Daily Caller News Foundation that public schools in America are facilitating a national cultural crisis. Gallagher's organization seeks to keep families up to date on infringements of parental rights in public schools across the nation.

"There is a clear political agenda to destroy the traditional family in America," Gallagher told The Daily Caller News Foundation. "Until now, the American family was considered to be the foundation of civic life; the smallest form of government, where children are taught responsibility, respect for authority, and national pride."

Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities for this original content, email licensing@dailycallernewsfoundation.org.
Mary Margaret Olohan is a senior reporter for The Daily Signal. She previously reported for The Daily Caller and The Daily Wire, where she covered national politics as well as social and cultural issues. Email her at marymargaret.olohan@dailysignal.com.
OSHA certifies New York State plan for public employees (8/16)

OSHA announced that it will approve plan amendments and certify the state of New York's occupational safety and health plan for its public employees. OSHA determined that all developmental commitments have been met and that the state's plan is structurally complete.

"This is a major milestone for the state of New York in the development of its occupational safety and health program," said OSHA director Edwin G. Foulke Jr. "We congratulate them on this accomplishment and for their ongoing commitment to the safety and health of their public employees."

OSHA's certification indicates that the state plan contains all the necessary structural elements (standards, statutory and regulatory authorities, and procedures) to operate a program for its public employees which is "at least as effective" as the federal program. Absent a state plan, state and local government employees are not covered by the Occupational Safety and Health Act.
from __future__ import print_function

import unittest
import paddle
import paddle.fluid as fluid
import paddle.fluid.core as core
import numpy as np
from threading import Thread


def user_reader(inputs):
    def _reader():
        for d in inputs:
            yield d

    return _reader


def batch_feeder(batch_reader, pin_memory=False, img_dtype="float32"):
    def _feeder():
        for batch_data in batch_reader():
            sample_batch = []
            label_batch = []
            for sample, label in batch_data:
                sample_batch.append(sample)
                label_batch.append([label])

            tensor = core.LoDTensor()
            label = core.LoDTensor()
            place = core.CUDAPinnedPlace() if pin_memory else core.CPUPlace()
            tensor.set(np.array(sample_batch, dtype=img_dtype), place)
            label.set(np.array(label_batch, dtype="int64"), place)
            yield [tensor, label]

    return _feeder


class TestPyReader(unittest.TestCase):
    def setUp(self):
        self.capacity = 10
        self.shapes = [(-1, 3, 2, 1), (-1, 1)]
        self.lod_levels = [0, 0]
        self.dtypes = ['float32', 'int64']

    def test_pin_memory_pyreader(self):
        with fluid.program_guard(fluid.Program(), fluid.Program()):
            place = fluid.CUDAPlace(0) if fluid.core.is_compiled_with_cuda(
            ) else fluid.CPUPlace()
            executor = fluid.Executor(place)

            data_file = fluid.layers.py_reader(
                capacity=self.capacity,
                dtypes=self.dtypes,
                lod_levels=self.lod_levels,
                shapes=self.shapes)
            # feed_queue = data_file.queue
            read_out_data = fluid.layers.read_file(data_file)

            self.inputs = []
            for _ in range(10):
                sample = np.random.uniform(
                    low=0, high=1, size=[3, 2, 1]).astype("float32")
                label = np.random.randint(low=0, high=10, dtype="int64")
                self.inputs.append((sample, label))

            self.input_tensors = []
            for d, l in batch_feeder(
                    paddle.batch(
                        user_reader(self.inputs), batch_size=2),
                    pin_memory=True
                    if fluid.core.is_compiled_with_cuda() else False)():
                ta = fluid.LoDTensorArray()
                ta.append(d)
                ta.append(l)
                self.input_tensors.append(ta)

            self.batched_inputs = []
            for batch in paddle.batch(user_reader(self.inputs), batch_size=2)():
                feed_d = []
                feed_l = []
                for d, l in batch:
                    feed_d.append(d)
                    feed_l.append([l])
                self.batched_inputs.append([feed_d, feed_l])

            data_file.decorate_tensor_provider(
                batch_feeder(
                    paddle.batch(
                        user_reader(self.inputs), batch_size=2),
                    pin_memory=True
                    if fluid.core.is_compiled_with_cuda() else False))

            executor.run(fluid.default_startup_program())
            self.outputs = []

            data_file.start()
            for _ in self.input_tensors:
                self.outputs.append(
                    executor.run(fetch_list=list(read_out_data)))
            data_file.reset()
            self.validate()

    def validate(self):
        self.assertEqual(len(self.batched_inputs), len(self.outputs))
        for in_data_list, out_data_list in zip(self.batched_inputs,
                                               self.outputs):
            self.assertEqual(len(in_data_list), len(out_data_list))
            in_data_list_np = [
                np.array(in_lod_tensor) for in_lod_tensor in in_data_list
            ]
            for in_data, out_data in zip(in_data_list_np, out_data_list):
                self.assertTrue((in_data == out_data).all())


if __name__ == '__main__':
    unittest.main()
What's the point of slouching through fin de siècle Taipei if you do not indulge in a little hedonism? Unfortunately, that seems to be the best life can offer one lost beauty. She will find far more consolation in artificial stimulants and pounding club music than from her spectacularly unhealthy lover in Hou Hsiao-hsien's Millennium Mambo (trailer here), which screens tomorrow at the Smithsonian's Freer Gallery in Washington, DC.

Vicky is a stunning beauty, but she has made some terrible choices, such as getting involved with Hao-hao, an emotionally abusive deadbeat. She would like to make a clean break from him, but every time she tries, he keeps coming back, worming into her life and living space once again. However, when Vicky lets Jack, a mid-level gangster, serve as her sugar-daddy, she might finally be well rid of Hao-hao. Nevertheless, do not expect a happy ending for their apparently platonic whatever-it-is.

Mambo's opening shot of Vicky walking through a somewhat sketchy-looking pedestrian bridge is a visual tour-de-force with all the iconic sexuality of Marilyn Monroe's subway vent encounter, but infused with a potent sense of menace. Unfortunately, the rest of the film lacks the same level of pop. While Hou's anesthetized vibe is a deliberate strategy that sort of works, his temporal shifts are not clearly delineated. Still, Vicky's dispassionate narration, told from the vantage point of ten years in the future, is eerily disconcerting. It almost sounds as if she were whispering from the graveyard, even though there is no reason to believe she will not bounce back from her setbacks, landing on her feet or what-have-you.

Few films give viewers such intimate knowledge of their characters, yet somehow we never really feel we understand who they truly are. Of course, that is the whole point. Despite her inscrutability, Shu Qi holds viewers' attention in a vice-lock. It is not just her ethereal beauty. We can see there is something dramatic brewing in her eyes; we just can't tell what.

As Hao-hao, Tuan Chun-hao makes a contemptible character strangely forgettable, but the steely gravitas of Jack Kao's namesake at least gives Shu Qi some memorable support during the third act.

Arguably, Mambo is very definitely a product of its hipster millennial time. By now, the combination of its dreamy neon visuals and driving electronica already feels a little dated. Still, the film's evocative nocturnal look is a prime example of why Mark Lee Ping-bin is considered one of the world's foremost cinematographers. It is hardly perfect, but it is still quite worth seeing, if only for Shu Qi's seductively raw performance. It should also help tide over fans as we wait and hope for The Assassin, Hou's first wuxia film, naturally starring Shu Qi.

Recommended for those who appreciate Hou's more rarified art-house releases, Millennium Mambo screens (for free) tomorrow (12/21) at the Freer Gallery in DC.
package jExif.core;

import java.util.Arrays;

import org.junit.Assert;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class ByteSequenceTests {

    private static final byte[] BYTES =
            new byte[] { (byte)0x19, (byte)0x07, (byte)0x84 };
    private static final byte[] REVERSED_BYTES =
            new byte[] { (byte)0x84, (byte)0x07, (byte)0x19 };

    private static final Endianness ENDIANNESS = Endianness.BIG_ENDIAN;
    private static final Endianness REVERSED_ENDIANNESS = Endianness.LITTLE_ENDIAN;

    private static final byte ASCII_A = (byte)0x41;
    private static final byte ASCII_B = (byte)0x42;
    private static final byte ASCII_C = (byte)0x43;

    private ByteSequence sequence;

    @Rule
    public final ExpectedException thrown = ExpectedException.none();

    @Before
    public void setUp() throws Exception {
        this.sequence = new ByteSequence(BYTES, ENDIANNESS);
    }

    @Test
    public void constructorCopiesByteArray() {
        byte[] bytes = new byte[] { (byte)0x00, (byte)0x01 };
        ByteSequence byteSequence = new ByteSequence(bytes, Endianness.BIG_ENDIAN);
        Assert.assertTrue(Arrays.equals(bytes, byteSequence.bytes()));
        bytes[0] = (byte)0xFF;
        Assert.assertFalse(Arrays.equals(bytes, byteSequence.bytes()));
    }

    @Test
    public void constructorThrowsOnNullArray() {
        this.thrown.expect(IllegalArgumentException.class);
        new ByteSequence(null, Endianness.BIG_ENDIAN);
    }

    @Test
    public void constructorThrowsOnNullEndianness() {
        this.thrown.expect(IllegalArgumentException.class);
        new ByteSequence(new byte[] {}, null);
    }

    @Test
    public void constructorSetsLength() {
        Assert.assertEquals(BYTES.length, this.sequence.length);
    }

    @Test
    public void constructorSetsEndianness() {
        Assert.assertEquals(ENDIANNESS, this.sequence.endianness);
    }

    @Test
    public void constructorDefaultsToBigEndian() {
        Assert.assertEquals(Endianness.BIG_ENDIAN,
                new ByteSequence(new byte[] {}).endianness);
    }

    @Test
    public void getterReturnsCorrectBytes() {
        Assert.assertArrayEquals(BYTES, this.sequence.bytes());
    }

    @Test
    public void getterCopiesByteArray() {
        byte[] bytes = this.sequence.bytes();
        Assert.assertTrue(Arrays.equals(bytes, this.sequence.bytes()));
        bytes[0] = (byte)0xFF;
        Assert.assertFalse(Arrays.equals(bytes, this.sequence.bytes()));
    }

    @Test
    public void toByteObjectsReturnsCorrectBytes() {
        Byte[] byteObjects = this.sequence.toByteObjects();
        Assert.assertEquals(BYTES.length, byteObjects.length);
        for (int i = 0; i < byteObjects.length; ++i) {
            Assert.assertEquals(BYTES[i], byteObjects[i].byteValue());
        }
    }

    @Test
    public void toReversedSequenceYieldsCorrectBytes() {
        Assert.assertArrayEquals(REVERSED_BYTES,
                this.sequence.toReversedSequence().bytes());
    }

    @Test
    public void toReversedSequenceYieldsCorrectEndianness() {
        Assert.assertEquals(REVERSED_ENDIANNESS,
                this.sequence.toReversedSequence().endianness);
    }

    @Test
    public void toSubsequenceRetainsEndianness() {
        Assert.assertEquals(this.sequence.endianness,
                this.sequence.toSubsequence(0, 1).endianness);
    }

    @Test
    public void toSubsequenceReturnsFirstByte() {
        Assert.assertArrayEquals(Arrays.copyOfRange(this.sequence.bytes(), 0, 1),
                this.sequence.toSubsequence(0, 1).bytes());
    }

    @Test
    public void toSubsequenceReturnsSecondAndThirdByte() {
        Assert.assertArrayEquals(Arrays.copyOfRange(this.sequence.bytes(), 1, 3),
                this.sequence.toSubsequence(1, 3).bytes());
    }

    @Test
    public void toSubsequenceReproducesOriginalSequence() {
        Assert.assertEquals(this.sequence,
                this.sequence.toSubsequence(0, this.sequence.length));
    }

    @Test
    public void toReversedSequenceYieldsOriginalSequenceOnDoubleInvocation() {
        Assert.assertEquals(this.sequence,
                this.sequence.toReversedSequence().toReversedSequence());
    }

    @Test
    public void toIntegerThrowsOnTooLongSequence() {
        this.thrown.expect(IllegalStateException.class);
        new ByteSequence(new byte[] { 1, 2, 3, 4, 5 }).toInteger();
    }

    @Test
    public void toIntegerReturnsCorrectNumberInBigEndian() {
        ByteSequence intSequence =
                new ByteSequence(new byte[] { 1, 2, 3 }, Endianness.BIG_ENDIAN);
        Assert.assertEquals(3 + 256 * (2 + 256 * 1), intSequence.toInteger());
    }

    @Test
    public void toIntegerReturnsCorrectNumberInLittleEndian() {
        ByteSequence intSequence =
                new ByteSequence(new byte[] { 1, 2, 3 }, Endianness.LITTLE_ENDIAN);
        Assert.assertEquals(1 + 256 * (2 + 256 * 3), intSequence.toInteger());
    }

    @Test
    public void toRationalThrowsOnUnevenLength() {
        this.thrown.expect(IllegalStateException.class);
        new ByteSequence(new byte[] { 1, 2, 3 }).toRational();
    }

    @Test
    public void toRationalThrowsOnTooLongSequence() {
        this.thrown.expect(IllegalStateException.class);
        new ByteSequence(new byte[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }).toRational();
    }

    @Test
    public void toRationalReturnsCorrectNumber() {
        ByteSequence rationalSequence = new ByteSequence(new byte[] { 1, 2, 3, 4 });
        Assert.assertEquals(new Rational(2 + 256 * 1, 4 + 256 * 3),
                rationalSequence.toRational());
    }

    @Test
    public void toAsciiStringReturnsCorrectStringInBigEndian() {
        ByteSequence asciiSequence = new ByteSequence(
                new byte[] { ASCII_A, ASCII_B, ASCII_C }, Endianness.BIG_ENDIAN);
        Assert.assertEquals("CBA", asciiSequence.toAsciiString());
    }

    @Test
    public void toAsciiStringReturnsCorrectStringInLittleEndian() {
        ByteSequence asciiSequence = new ByteSequence(
                new byte[] { ASCII_A, ASCII_B, ASCII_C }, Endianness.LITTLE_ENDIAN);
        Assert.assertEquals("ABC", asciiSequence.toAsciiString());
    }

    @Test
    public void equalsItself() {
        Assert.assertTrue(this.sequence.equals(this.sequence));
    }

    @Test
    public void doesNotEqualNull() {
        Assert.assertFalse(this.sequence.equals(null));
    }

    @Test
    public void doesNotEqualObjectOfAnotherClass() {
        Assert.assertFalse(this.sequence.equals(new Object()));
    }

    @Test
    public void equalsSequenceWithIdenticalBytesAndEndianness() {
        ByteSequence other =
                new ByteSequence(this.sequence.bytes(), this.sequence.endianness);
        Assert.assertTrue(this.sequence.equals(other));
    }

    @Test
    public void equalsSequenceWithReversedBytesAndEndianness() {
        ByteSequence other = new ByteSequence(REVERSED_BYTES, REVERSED_ENDIANNESS);
        Assert.assertTrue(this.sequence.equals(other));
    }

    @Test
    public void doesNotEqualSequenceWithReversedBytesAndIdenticalEndianness() {
        ByteSequence other = new ByteSequence(REVERSED_BYTES, ENDIANNESS);
        Assert.assertFalse(this.sequence.equals(other));
    }

    @Test
    public void doesNotEqualSequenceWithIdenticalBytesAndReversedEndianness() {
        ByteSequence other = new ByteSequence(BYTES, REVERSED_ENDIANNESS);
        Assert.assertFalse(this.sequence.equals(other));
    }

    @Test
    public void doesNotEqualSequenceWithDifferentLength() {
        ByteSequence other = new ByteSequence(new byte[] {}, ENDIANNESS);
        Assert.assertFalse(this.sequence.equals(other));
    }

    @Test
    public void emptySequencesWithIdenticalEndiannessAreEqual() {
        ByteSequence sequence1 = new ByteSequence(new byte[] {}, ENDIANNESS);
        ByteSequence sequence2 = new ByteSequence(new byte[] {}, ENDIANNESS);
        Assert.assertTrue(sequence1.equals(sequence2));
    }

    @Test
    public void emptySequencesWithReversedEndiannessAreEqual() {
        ByteSequence sequence1 = new ByteSequence(new byte[] {}, ENDIANNESS);
        ByteSequence sequence2 =
                new ByteSequence(new byte[] {}, REVERSED_ENDIANNESS);
        Assert.assertTrue(sequence1.equals(sequence2));
    }
}
I once thought that, if only my work was "good enough," I would feel nothing but overwhelming pride in it. For that reason, I wrote one and a half unpublished books before I turned 21 and three vaguely related books before I churned out The Love Mindset. Shame hides in all sorts of packages. Everyone who has ever publicly produced anything of value has had to face those voices that scream "You're not good enough!" and "You don't know enough!" Facing them hurts. It hurts a lot. You can create something, look fear in the eye, and shirk away, but it'll be right back there next time you try to get anywhere. Face the fear. Face it bravely. It's all part of the journey.

Once you allow yourself to stand up to your fears of not being good enough, then people start projecting their own fears onto you. Especially those who say they love you will fear losing you, so they'll unintentionally try to keep you small just to keep you for themselves. Find a tribe. Find people who believe in you and who will not, ever, tell you that you can't or shouldn't. Find people who admire your courageous, bright light instead of being blinded by it. Then, once your critics see you're serious, their criticism might just melt into inspiration.

Honestly, this was hard for me. My parents and the school system taught me that I could and should be perfect. I relished that feeling. I bathed in the glory of 100%. Then, I became an author. Where's the 100%? Where's the ceiling? When will it be perfect? Anytime I found an error or a flaw, I would bathe in shame. Anytime I made a mistake, my heart would break. About a year and hundreds of mistakes later, I've realized that perfection is a key component of all systems that suffocate learning. In the real world, there's no ceiling. Once I learned to see past my own conditioned fear, I realized that imperfection is a beautiful, beautiful thing.

Not just with writing, but with all art, there is no line that divides your creative outlet and your reality. Your outlet is a direct mirror of your reality. Whatever energy breeds in your day to day world will immediately show itself to your readers. Everything I ever wrote that was full of authentic emotion got an authentic emotional reply. Everything I approached halfheartedly got no reply. There's no fooling the system. If you want to improve your art, improve yourself.

I tried for some time to be a distant authority. I thought I could teach about love and spirituality without sharing the experiences that had brought me to my epiphanies. That failed. It failed horribly. All of the parts of my past that I thought I could brush under the carpet, I now proudly display as my scars from the battle. And people love those. People want heroes with scars because people have scars.

And people will try to give it to you at every turn. The more you make it into the public spotlight, the more people will knock at your door offering their opinion. Remember that everyone has an opinion and sharing it is the easiest thing in the world. Get inspired by people who have actually been there and learn from your experiences. Believe those who believe in you and, when you do take negative feedback, don't involve your self-worth in it.

I used to bask in the feeling of certainty. I thought if I could control everything, I would be happy. Now, I pretty much always feel like I'm floating off the surface of the earth, never quite feeling like I've got my feet secured on the ground. I learn and grow every day. Learning is not a process of certainty. This is something I never learned at school. Learning requires a willingness to be confused, lost, and courageous. If you feel like you have no idea what's going on, stay there. That's where the magic happens.

I wrote a book about healing and happiness. I knew how to theorize and philosophize unconditional love. Then what? Right after I finished writing my last draft, I began to understand that practice and theory are very different.
When I began to go out and lead workshops, give speeches, and coach people, I learned that writing about inspiration isn't enough. In order to really learn what I was teaching, I had to be willing to become a student of the same process I was teaching, again and again.

I used to think that writing a book was the only thing I had to do. I thought that was the hard part. Honestly, writing a book is easy. All you do is sit down, find your inspiration source, and flow from it. What's hard is putting it out there in the public eye, facing your own issues of self-worth, responding to critics, and learning from failure.

Writer's block (or any other art block) is often a result of too much output and not enough input. It's taken me many evenings of sitting in front of blank pages, trying desperately to glue together old ideas, before I realized that new ideas come from new material. I need to be inspired by what inspires me. I may spend my time writing about self-love, spirituality, and peace of mind, but I don't draw inspiration from that sort of literature. I am most turned on by philosophy, psychology, music, and real human stories. If you want to inspire people, find what makes you come alive and access it frequently. Your own inspiration will spill into their hearts.

Thank you for these. The thing about input is so true. I find that my best writing comes out after I've consumed some sort of content that's truly left me reeling. I also don't write about writing often. I find so many writers do. I dunno. Doesn't seem to appeal to me unless it's something that's just been begging to get down in a post. I suppose everything's hard in its own way, isn't it?

I'm 20 years old and I study medicine, but lately I've been thinking of writing a book or something with life lessons and my views on different aspects of life, hoping I can help some people develop another way of thinking/perspective or inspire them. I really don't know how to write, so I've been researching how to write a book.
My friends say that I'm crazy and that no one will read them. But I want to do it. I'd be grateful if you could give me tips.

I don't think you are crazy. If you feel you want to share something with the world, share it. What I've learned more than anything is that you have to do a lot of editing. I edited my second book about 100 times (maybe more). Read it, revise it, and then leave it for a few months. Then come back to it again. Good writing happens in that process (and very rarely in the initial writing). I also think it's important to build up an online audience (with a blog, for example) while you write. Regardless of whether you self-publish or go the traditional route, you have to get your own readers these days. I hope that's helpful!
Q: Development environment setup and configuration for web development

I know this is a stupid question. I am a newbie. My friend and I want to work on a website project. We are located a few miles from each other, so I want information and steps toward setting up a smooth working environment for both of us, through which we can both see updates, results, and whatever changes we make to the website.

I wanted to know how we can use git (as this will be used for version control), Zend Framework (we decided to use this one), and phpDesigner (our IDE) together in developing this site. I also wanted to know the steps and information on how we work locally and push our changes to the final product using git. Right now I have scattered information about git and Zend, so if someone would please align all these scattered things and let me know how we can set up our first development environment. Also, if someone could tell me how to set up development, test, pre-production and production environments.

Dude, I'm learning, man :)

A: Here are the steps I use to work collectively. For this you have to use NetBeans as the IDE and you need a GitHub account, since you are using git as your SCM.

* Create an account at GitHub
* Create a repo
* Copy your repo URL
* Create a branch
* Clone it from NetBeans
* Now push, pull or fetch
* Create a pull request before merging
* Merge the pull request

These are the steps I follow for my personal work when working on a team. I hope this helps you.
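The same two-developer flow can be sketched end to end with plain git commands. This is a minimal local simulation, not a definitive setup: a bare repository stands in for the shared GitHub remote, and all paths, file names, branch names, and identities are placeholders. On GitHub you would paste the real repo URL instead, and the "merge pull request" step would happen in the web UI rather than locally.

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# A bare repository stands in for the shared GitHub remote (placeholder).
git init --bare origin.git

# Developer 1 clones the (empty) repo and makes the first commit.
git clone origin.git project 2>/dev/null
cd project
git config user.email "dev1@example.com"   # placeholder identity
git config user.name "Dev One"
echo "<?php echo 'home';" > index.php
git add index.php
git commit -q -m "Initial site skeleton"
git push -q -u origin HEAD

# New work happens on a branch -- the unit you would open a pull request from.
git checkout -q -b feature/contact-page
echo "<?php echo 'contact';" > contact.php
git add contact.php
git commit -q -m "Add contact page"

# "Merge pull request", done locally here: back to the main branch, merge, push.
git checkout -q -
git merge -q feature/contact-page
git push -q origin HEAD

# Developer 2 clones the remote and immediately has both files.
cd ..
git clone origin.git partner 2>/dev/null
ls partner
```

After this, each developer repeats the cycle: pull, branch, commit, push, merge. The same commands underlie what NetBeans (or any IDE) does through its git integration.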
Director of the Year Awards
The London & South 2020 Winners

We are delighted to announce the winners of the virtual IoD London & South Director of the Year awards held on 11th September at 12:30.

Director of the Year - Equality, Diversity & Inclusion: Clare Martin, Jardine Motors Group
Director of the Year - Third / Public Sector (kindly sponsored by Unity Trust Bank plc): Claire Hook, Imperial College Healthcare NHS Trust
Director of the Year - Non-Exec: Megan McCracken, Folk2Folk
Director of the Year - International: Daniel Brooks, VHR
Director of the Year - Social Value and Sustainability Impact (kindly sponsored by The Planet Mark): Yvonne Obuaya, Curado Ltd
Director of the Year - Family: Oliver Butts, Christina May
Director of the Year - Innovation: Tim Brownstone, KYMIRA
Director of the Year - Start-Up: James Bush, NHCC (New Homes Customer Care)
Director of the Year - SME up to £5 million (kindly sponsored by Blend Marketing): Steve Hodges, Astro Technology Group Limited
Director of the Year - SME, £5 to £50 million (kindly sponsored by Schroders Personal Wealth): Dr Nasser Siabi OBE, Microlink PC (UK) Ltd
New Chartered Director: Anne-Marie Mountifield, Solent Local Enterprise Partnership
Chair's Award: Megan McCracken, Folk2Folk

A special thanks goes to our panel of judges. For a full list of all of our 2020 finalists please visit our dedicated Director of the Year Awards website.

We are pleased to announce that Dame Esther Rantzen DBE will be joining us as guest of honour at our Awards dinner on Friday 11th September 2020. We will also be supporting ChildLine during the evening; further details to follow.

Dame Esther Rantzen DBE

Dame Esther received an OBE for services to broadcasting, a CBE for services to children, and in the New Year Honours of 2015, a DBE for services to children and older people through ChildLine and The Silver Line.
A graduate of Oxford, Esther Rantzen began her career in broadcasting with BBC Radio as a sound effects assistant. From there she moved into television as a researcher/reporter for Braden's Week and then in 1973 as producer/presenter of That's Life, which ran for 21 years on BBC Television. Esther has made a number of pioneering programmes on subjects such as British women's experience of childbirth, stillbirth, mental health and child abuse.

In 1986 she invented the concept of ChildLine and chaired the charity for 20 years. After the merger of ChildLine with the NSPCC in February 2006, Esther became President of ChildLine and a trustee of the NSPCC. In 2012 she invented The Silver Line, a helpline for older people, and having chaired it in its first year, she is now its President and a Trustee.

She contributes regularly to the Daily Mail and other publications, and lectures on children's issues and broadcasting. She is the only Trustee so far to have appeared on Strictly Come Dancing, been in ITV's Australian jungle and appeared on Question Time while standing as an independent candidate for Luton South. For her work in broadcasting and for children she has been awarded 7 honorary doctorates, and is a Patron of 19 charities. In 2011 she published "Running Out of Tears" to celebrate ChildLine's 25th Anniversary.

A child contacts Childline every 25 seconds. They talk to us about things like bullying, self-harm and depression. Problems that children often feel they can't talk to anyone else about. With your generosity and support Childline can be here to help those young people find their voice.

We're here for children who feel they have no one else to talk to and nowhere else to turn. Every day of the year, 24 hours a day, our counsellors are there to listen, whatever their worry, online and on the phone. The good news is that many children did find the courage to speak to us last year and get the help they need so they can get their life back on track.
But this number puts a great strain on our services. The simple truth is, we can only respond to 3 out of 4 children who need our help. It's only with the support of people like you that Childline can continue to be there for the children who desperately need us, so that they can look to the next day with hope. You can donate here.

Are you looking for brand exposure among a community at the pinnacle of leadership and business excellence? By partnering our awards you will benefit from brand awareness and have the opportunity to forge new relationships, all whilst being provided with an exceptional hospitality occasion. Read more about sponsorship opportunities.

Thank you to our 2020 sponsors:

Sponsors of Director of the Year - SME up to £5m turnover

Blend is one of the world's top B2B HubSpot agencies. Based in Berkshire and serving customers worldwide, Blend is dedicated to delivering specialist B2B marketing services for tech and professional services companies with ambitious growth plans. Sean Sweet and Phil Vallender founded Blend in 2010 with a vision to provide high-quality work underpinned with the commercial understanding often lacking in the creative industry. Ten years later, Blend is a multi-award-winning company with a team of 32 in-house marketing professionals, delivering outstanding website design projects, highly successful results-driven digital campaigns, and efficient HubSpot onboarding services.
Sponsors of Director of the Year - Social Value & Sustainability Impact Around the globe, climate change apathy has given way to overwhelming attention. Nearly 70% of British people want urgent action to tackle the issues of our time. From children, to CEOs, there is a strong call for change – time to transform talk into action. At The Planet Mark we see a world where we all contribute to a thriving planet. Our members deliver results that go beyond compliance, cutting carbon by an average 6% per year and engaging their people in the process. Together, we are proving that sustainability is good for business and that acting responsibly is the new norm. Join us in the Decade of Action. Sponsors of Director of the Year - SME £5-£50m turnover Schroders Personal Wealth is a joint venture between Lloyds Banking Group and Schroders – two of the UK's largest names in banking and asset management. We were created to help more people across the UK benefit from financial advice. We have the advantage of solid foundations and a strong heritage. But we take a fresh, transparent and personal approach to financial planning. We aim to provide clients with clarity and transparency in everything we do. This includes using technology to explain how long-term financial planning can add value to people's lives; to give people access to information about their financial wellbeing; and to communicate with their adviser when it's convenient for the client. Our heritage may be 400 years old, but our approach is built for the future. 
We are pleased to announce the judges for the 2020 Awards are: Alicia Andrews: Southeastern Railway Nis Arend: Top 1% Alison Bourne: The Dash Charity Ian Calder CDir: Centrix Ltd Keith Cornell CDir: BySide/Active Solutions Peter Digby: Xtrac Ltd Chris Dodson OBE DL: Torftech Group Murray Eldridge CDir: Actinium CS Ltd William English CBE, CDir: OSTC Group Board, Owen English & Son Claire Horton CBE: Battersea Dogs & Cats Home Janthana Kaenprakhamroy: Tapoly "Insurance on Tap" Gemma Lacey: Southern Co-op Barry Lewis: Cadline Ltd Steve Malkin: Planet First Tara Mei: Bread & Jam Sue Nelson: EY John Palmer CDir: Exec Express Ltd Jitesh Patel: Peldon Rose Luke Quilter: Sleeping Giant Media Ross Wilson: Wilson Partners For full details of judges please click here
Where is Tanger located in Morocco? Travelling to Tanger, Morocco? Find out more with this detailed interactive online map of Tanger downtown, surrounding areas and Tanger neighborhoods, provided by Google Maps.
\section{Introduction}\label{introduction} Nowadays, many people invest their retirement savings in a defined contribution pension scheme. In such a scheme, the contributions are agreed upon and are, e.g., a percentage of one's salary. The pension, however, is uncertain as it depends on the returns on investment. At retirement, the accumulated wealth is converted to a pension income that intends to replace a proportion of the investor's income, typically about 70\%, which is referred to as the replacement ratio. In this paper, we propose a dynamic strategy that optimally steers the investor towards a replacement ratio target. Our dynamic strategy will reduce risk after several years of good returns on investment. It presumes that upward potential comes with downside risk. Our pension investor is only interested in reaching her replacement ratio target, i.e., not making the target is considered downside risk and she feels indifferent about any two values above the target. We will show that, in this sense, the designed dynamic strategy outperforms static life cycle strategies. By decreasing risk after several good years, our dynamic strategy prevents unnecessary risk taking. A well-known static life cycle strategy is Bogle's rule \citep{bogle}, which prescribes investing a percentage of $100$ minus one's age in risky assets. Decreasing risk in the course of the life cycle in such a way is called a glide path. When the glide path is known in advance up to retirement, the strategy is static and does not adjust as events unfold. Therefore, static strategies may take unnecessary risk when returns on investment are better than anticipated, see \citet{Arnott2013, Graf2017} for a discussion of drawbacks of static life cycle strategies. The strategy we propose is also rule-based, but it is dynamic as the prescribed rule depends on events that still have to unfold. In the literature, dynamic strategies are often studied in the context of dynamic programming \citep{Bellman}.
Dynamic programming optimizes the investment strategy backwards in time by optimizing decisions for the coming period given that subsequent decisions are already taken optimally. \citet{Mer69} was the first to apply dynamic programming to an asset allocation problem with two assets, a risky and a risk-free asset, also allowing for consumption during the investment period. Optimal decisions were based on the constant relative risk aversion utility function. \citet{Mer69} showed that the optimal strategy continuously rebalances, i.e., the optimal allocation is constant. The literature on optimal asset allocation is very rich, and we cite here some contributions that influenced our work. \citet{Li2000} introduced mean-variance strategies with respect to a wealth target. The wealth target then allows the investor to identify a surplus: wealth up to the target may be invested in stocks; any remainder is invested at the risk-free rate. \citet{Zhang} solved a similar, though utility-based, problem and combined dynamic programming with the least squares Monte Carlo method. Upper and lower bounds for the wealth were prescribed in that paper, showing that upward potential comes with downside risk. Terminal wealth is steered towards a desired range by investing the difference between a risk-free-discounted upper bound value and the current wealth in the risk-free asset. \citet{Forsyth} also applied dynamic programming and used a PDE solver to solve a so-called time-consistent mean-variance problem, meaning that similar mean-variance problems were solved at future times. In addition to mean-variance, which balances the mean and variance of returns, they studied a problem with a fixed wealth target. To reduce risk, both \citet{Forsyth} and \citet{Zhang} proposed to invest excess wealth in a risk-free asset. Similarly, the rule-based strategies introduced in this paper will invest excess wealth into a so-called matching portfolio.
Compared to static strategies, distributions of outcomes are more centered around the target value and the area below the target value becomes smaller. Besides many positive aspects, dynamic programming and its resulting strategies also have some drawbacks. First of all, dynamic programming is computationally rather intensive. Secondly, the corresponding investment decisions can be sensitive to small changes in parameters and underlying assumptions. Because of this, the allocation may fluctuate over time, resulting in large turnovers that are, from a practical perspective, hard to justify. Intuitively defined rules, typically, do not suffer from these drawbacks. Moreover, it is not straightforward to apply dynamic programming to a pension setting, as an investor's replacement ratio target depends on future inflation, which in turn also influences the future contributions. Rule-based dynamic strategies fall in between the static and dynamic programming paradigms; when well constructed, they aim for the best of both worlds. As shown by \citet{Basu2011}, even simple rule-based strategies that reduce risk halfway in the life cycle can outperform static life cycle strategies. Compared to \citet{Basu2011}, our rule-based strategies can reduce risk annually, and consider the market price of future pension payments instead of a wealth target. Next to the rule-based strategies, we will also combine a rule-based strategy with dynamic programming in an integrated approach. \section{The optimal asset allocation problem} \subsection{Model setting}\label{theoretical_background} To demonstrate the rule-based strategy's practical value, we will consider a specific pension investor (we will choose typical retirement data from the Netherlands). At $t=0$, the 26-year-old investor will start saving up to retirement at time $t=T$, coinciding here with a retirement age of 67 years.
She intends to replace 70\% of her income by her pension (including government allowances for old age). Although, in practice, an investor might be interested in insuring longevity risk or in employing advanced withdrawal strategies, \citet{Blanchett2012} illustrates that simple withdrawal strategies can perform well, e.g., based on an annuity with a maturity roughly equal to an investor's life expectancy. Therefore, as we focus on accumulating wealth before retirement, we simply assume the investor buys an annuity that indexes with expected inflation, i.e., a bond which, apart from indexation for expected inflation, pays equal annual amounts for a period of $N$, say 20, years after retirement. Whichever withdrawal strategy an investor might follow, the assumption here is that this annuity gives a good estimate of, at least, the investor's income in her first year after retirement, and, thereby, of the extent to which she can replace 70\% of her salary with a pension. The investor can invest her wealth $W_t$ in a risky, equity-like, asset, which is called the return portfolio, or in a safe, bond-like, asset with annual payoffs during retirement, the matching portfolio. In our setting, the strategy will use the matching portfolio to protect the current gains, and it grows with inflation. Therefore, the matching portfolio also carries risk. Put differently, we assume the investor does not hedge inflation risk with inflation-protected securities, as the market for inflation-protected securities is illiquid and strategies that hedge against inflation are not straightforward to follow in practice \citep{Martellini2014}. Finally, we assume there is no risk-free rate to invest money in. The investor annually manages her portfolio, i.e., decisions, contributions and pension payments are made in discrete time, which runs up to retirement, from $t=0$ to $t=T$. The pension payments start at $t=T$ and run up to $t=T+N-1$.
At time $t\leq T$ before retirement, she invests a fraction $\alpha_t$ of her wealth $W_t$ in the return portfolio. The investor is not allowed to short-sell assets or borrow money, so that \begin{equation} 0\leq\alpha_t\leq1. \end{equation} In the dynamic programming literature, $\alpha_t$ is referred to as the control (as decisions intend to give the investor control over the outcome). A strategy maps information $Z_t$ available at time $t$, e.g., past returns and current wealth $W_t$, to the desired allocation: \begin{equation}\label{eq:def_control} \alpha_t:\mathbb{R}^K\ni Z_t \mapsto \alpha_t(Z_t) \in [0,1]. \end{equation} Here $Z_t$ is adapted to a filtration $\mathcal{F}_t$, governing the underlying stochastic processes. Before time $t$, the information $Z_t$ is not yet available, and $\alpha_t$ is thus a stochastic quantity. In a static strategy, such as Bogle's rule, $\alpha_t$ only depends on time and is known, i.e., not stochastic, not even when the information $Z_t$ is not yet available. In practice, risk is reduced towards retirement, meaning that $\alpha_t$ typically decreases over time. Just before rebalancing, the investor makes a contribution $c_t$ to the portfolio. These contributions equal an age-dependent percentage $p_t$, see Table \ref{table_salary}, of the investor's salary $s_t$ which she earned in the period $t-1$ up to $t$. We assume that the investor's salary $s_t$ follows a deterministic career path, i.e., it increases with age. The investor's salary also increases stochastically with the wage inflation $w_t$, see Appendix \ref{sec:career-path-and-contribution-rate}. The investor's objective is to achieve her 70\% replacement ratio target at retirement without incurring too much downside risk. The replacement ratio at retirement, $R_T$, is given by \begin{equation}\label{eqn:def_replacement_ratio} R_T= \frac{W_T}{M_T} \cdot \frac{T+1}{\sum\limits_{t = 0}^{T} s_t\prod\limits_{\tau=t+1}^{T} (1+\pi_\tau)}.
\end{equation} Here, the second term divides by the investor's average wage in nominal amounts, indexed with inflation $\pi_t$ to retirement at $t=T$, and $M_t$ is the market value factor that discounts $N$ future pension payments indexed by expected inflation to time $t\leq T$: \begin{equation}\label{eqn:market value factor} M_t=\sum\limits_{\tau=T}^{T+N-1} (1+r^{\tau-t}_t)^{-(\tau-t)}\;\mathbb{E}_t\left[\prod\limits_{\tau'=t+1}^\tau(1+\pi_{\tau'})\right], \end{equation} where $\mathbb{E}_t$ is the expectation, conditional on $\mathcal{F}_t$ (i.e., conditional on the information available at time $t$), and $r^{\tau-t}_t$ represents the market rates that discount payments from $\tau-t$ years into the future back to the present time. Using the market value factor $M_T$ at retirement, the first term in \eqref{eqn:def_replacement_ratio} converts the accumulated wealth $W_T$ to $N$ annual income payments indexed for expected future inflation. To measure whether a strategy achieves the investor's objective, we use a utility function, $U$, which, whenever decisions are to be taken, is maximized in expectation: \begin{equation} \max_{\alpha_t,\ldots, \alpha_{T-1}}~\mathbb{E} \left[\left.U(Z_T)\right|\mathcal{F}_t\right], \end{equation} where $\mathcal{F}_t$ represents current market information, $\alpha_t$ is as in \eqref{eq:def_control} and $Z_T$ is a vector with outcomes including the terminal replacement ratio. Although other choices are possible, we choose $U(\cdot)$ to be the shortfall below the investor's target replacement ratio of 70\%: \begin{equation}\label{eqn:utility_shortfall} U(Z_T)=\min(R_T-70\%,0), \end{equation} where there is no shortfall in replacement ratio if it ends above $70\%$. Note that this measure is not conditional on the shortfall.
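The shortfall measure \eqref{eqn:utility_shortfall} is straightforward to estimate from simulated outcomes. The following minimal sketch (ours, with made-up sample values rather than output of the model described below) illustrates the computation:

```python
import numpy as np

def shortfall_utility(rr, target=0.70):
    """Shortfall utility min(R_T - target, 0): zero above the target,
    linear below it."""
    return np.minimum(rr - target, 0.0)

# Hypothetical simulated terminal replacement ratios
rr_T = np.array([0.55, 0.68, 0.70, 0.80, 0.95])
expected_shortfall = shortfall_utility(rr_T).mean()  # (-0.15 - 0.02)/5 = -0.034
```

Averaging the shortfall over sample paths estimates the expected utility that the strategies below are compared on.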
So, additionally, we will also evaluate a strategy's performance using the 10\% conditional value at risk $\mathrm{CVaR}_{0.1}(R_T)$ of the replacement ratio, i.e., the expectation of the $10\%$ worst-case outcomes, defined by \begin{equation}\label{eqn:cvar} \mathrm{CVaR}_\alpha\; (R_T) = \mathbb{E}\left[R_T\left|R_T\leq F_{R_T}^{-1}(\alpha)\right.\right], \end{equation} where $F_{R_T}^{-1}(\alpha)$ is the inverse cumulative distribution function of the terminal replacement ratio $R_T$ and represents the $\alpha$-th quantile below which are the worst-case outcomes. \subsection{Governing stochastic model}\label{sec:governing-stochastic-model} For general applicability, we require that the designed strategies are not defined in terms of the governing stochastic model parameters. That is, the strategies can be applied when different governing stochastic models are used. We merely assume that the governing stochastic model can be simulated by means of a Monte Carlo simulation. To make this explicit, we choose to use a standard model developed to make risk analyses comparable between Dutch pension funds, see \citep{KNW2009}. The model and its calibration are well documented \citep{Draper2014}. Calibration on recent market data and a Monte Carlo simulation of the model are publicly available at the website of the Dutch Central Bank \citep{HBT2016}. In this paper, we use the set of 2017 (quarter 1), which is calibrated on data up to year-end 2016, and we start simulating from there. In discrete time, the model is a VAR(1) model with normally distributed increments, see \citet{Muns2015} for a short summary of the model specification. In the calibration, some structure is imposed to achieve realistic market dynamics.
Based on the model, sample paths are generated for the following variables: \begin{itemize} \item Equity returns $x_t$, which are used for the return portfolio; \item Inflation $\pi_{t}$; \item Wage inflation $w_t$, which equals inflation $\pi_{t}$ plus $0.5\%$; \item A yield curve with interest rates $r_t^m$ containing rates for each maturity $m$. \end{itemize} The matching portfolio is tailored to the investor's retirement age. Its returns $m_t$ equal the rate of change in the market value factor: \begin{equation}\label{eqn:returns matching portfolio} m_t=\frac{M_t}{M_{t-1}}-1, \end{equation} where $M_t$ is defined in \eqref{eqn:market value factor}. Note that the matching portfolio protects the investor against \textit{expected} future inflation. To determine the expected future inflation, we use the least squares Monte Carlo technique, as presented in Section \ref{sec:least-squares-monte-carlo-method}. Table \ref{tab:annual_stats} gives the annual return statistics of the variables. Due to the fluctuating market price of future pension payments, the standard deviation of the matching returns is very similar to the one of the equity returns. Although the matching portfolio follows these fluctuations, it is considered less risky, in terms of the investor's goals. By investing in the matching portfolio, the pensioner will receive the corresponding amount from the annuity, no matter the future market prices. 
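As a minimal numerical sketch of \eqref{eqn:market value factor} and \eqref{eqn:returns matching portfolio}, the following Python fragment computes the market value factor and the matching return. The simplifying assumptions (a flat yield curve and a constant expected inflation) are ours, for illustration only; in the paper the curve and the inflation expectation come from the VAR(1) model.

```python
import numpy as np

def market_value_factor(t, T, N, rate, exp_infl):
    """M_t: N annual pension payments at tau = T, ..., T+N-1, indexed with
    expected inflation and discounted back to time t (flat curve assumed)."""
    taus = np.arange(T, T + N)
    discount = (1.0 + rate) ** -(taus - t)       # discounting back to time t
    indexation = (1.0 + exp_infl) ** (taus - t)  # E_t[prod (1 + pi)]
    return float(np.sum(discount * indexation))

# Matching-portfolio return m_t = M_t / M_{t-1} - 1, for a 41-year horizon
M_40 = market_value_factor(t=40, T=41, N=20, rate=0.025, exp_infl=0.016)
M_41 = market_value_factor(t=41, T=41, N=20, rate=0.025, exp_infl=0.016)
m_41 = M_41 / M_40 - 1.0
```

With a flat curve the matching return is deterministic; the volatility reported in Table \ref{tab:annual_stats} comes entirely from the fluctuating rates and inflation expectations of the full model.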
\begin{table}[H] \centering \begin{tabular}{lrrrrr} {} & $x_t$ & $m_t$ & $r_t^{10}$ & $\pi_t$ & $w_t$ \\ \toprule Mean & 6.1\% & 3.4\% & 2.5\% & 1.6\% & 2.1\% \\ Standard deviation & 18.3\% & 18.5\% & 2.4\% & 1.5\% & 1.5\% \\ \midrule Correlations&&\\ Equity return ($x_t$) & 1.00 & & & & \\ Matching return ($m_t$) & -0.06 & 1.00 & & & \\ 10 year interest ($r_t^{10}$) & 0.17 & -0.17 & 1.00 & & \\ Inflation ($\pi_t$) & 0.11 & -0.04 & 0.82 & 1.00 & \\ Wage inflation ($w_t$) & 0.11 & -0.04 & 0.82 & 1.00 & 1.00 \\ \bottomrule \end{tabular} \caption{Annual statistics of the underlying stochastic model calculated on a sample that combines all sample paths.}\label{tab:annual_stats}% \end{table} \section{Rule-based strategies}\label{rule_based} In this section, we define three rule-based strategies: a cumulative target strategy that decreases risk once it reaches a cumulative target for the contributions paid so far, an individual target strategy that tracks the investments of the contributions separately and decreases risk once a contribution reaches its individual target, and a combination strategy that combines the individual target strategy with dynamic programming. The strategies all intend to steer towards a target replacement ratio of 70\%, and decrease risk when return on investment develops well. The strategies differ in their views on when return on investment has been developing well enough to decrease risk. \subsection{Cumulative target strategy}\label{cumulative_target_strategy} The cumulative target strategy that we consider here has similarities with the strategies studied in~\citet{Zhang} and~\citet{Forsyth}: risk is reduced once wealth exceeds a pre-defined wealth target. Contrary to~\citet{Zhang} and~\citet{Forsyth}, however, our investor saves for retirement and we relate the wealth target to the price of a bond with payoff equal to the desired pension.
Given a density forecast for the matching and return portfolios, see Section \ref{sec:governing-stochastic-model}, the strategy depends on two parameters: a required real rate of return $r$ (before retirement) and a discount rate $\delta$ (after retirement) to discount pension payments after retirement to the retirement date. At time $t$ before retirement, i.e., $t\in\{0,\ldots,T\}$, the investor contributes $c_t$ to her pension savings, see Table \ref{table_salary}. The contributions $c_\tau$ up to time $t$, i.e., $\tau=0,\ldots,t$, are supposed to grow with inflation $\pi_t$, plus the real rate of return $r$, to a target wealth $c_\tau\; \mathbb{E}_t F_\tau$ at retirement, where $F_\tau$ is given by \begin{equation}\label{eqn:target_wealth_factor} F_\tau = \prod\limits_{\tau'=\tau+1}^T(1+r+\pi_{\tau'}), \end{equation} and the conditional expectation, $\mathbb{E}_t$, enforces that the realized inflation is used before time $t$ and the expected inflation is used beyond time $t$. The wealth targets at retirement for all contributions $c_\tau$ up to time $t$ are combined and converted into a target pension using a discount factor, $\tilde M_T$, which is based on the discount rate $\delta$: \begin{equation}\label{eqn:post_retirement_factor} \tilde M_T=\sum\limits_{\tau=T}^{T+N-1} \frac{1}{(1+\delta)^{\tau-T}}. \end{equation} Using the market value factor $M_t$, as defined in \eqref{eqn:market value factor}, this gives us the following current target wealth $\tilde{W}_t$: \begin{equation}\label{eqn:cumulative_target_wealth} \tilde{W}_t=\frac{M_t}{\tilde M_T}\sum\limits_{\tau=0}^{t} c_\tau \; \mathbb{E}_tF_\tau, \end{equation} where the summation represents the combined wealth targets at retirement for all contributions $c_\tau$ up to time $t$. The cumulative target strategy starts by investing new contributions $c_t$ in the risky asset.
If the current wealth $W_t$, including the current contribution $c_t$, exceeds the target wealth $\tilde{W}_t$, risk is reduced and $W_t$ is transferred to the matching portfolio. For the matching portfolio, the investor follows a buy and hold strategy. New contributions invested in the risky asset will also be transferred to the matching portfolio if the current wealth $W_t$, which consists of the current contribution $c_t$, the value of the matching portfolio and the value of the return portfolio, exceeds the target wealth $\tilde{W}_t$. In other words, at $t=0$, the control $\alpha_0$ is given by \begin{numcases}{\alpha_0=\label{eqn:cumulative_control_alpha_0}} 0 & if $W_0\geq\tilde W_0$,\\ 1 & otherwise, \end{numcases} and, for $t=1,\ldots,T$, the control $\alpha_t$ is given by \begin{numcases}{\alpha_t=\label{eqn:cumulative_control_alpha_t}} 0 & if $W_t\geq\tilde W_t$,\\ \frac{\alpha_{t-1}(1+x_t)}{\alpha_{t-1}(1+x_t)+(1-\alpha_{t-1})(1+m_t)} & otherwise. \end{numcases} \subsection{Individual target strategy}\label{sec:indv_target} Contrary to the cumulative target strategy, the individual target strategy, which is the second strategy we will analyze here, defines a wealth target per contribution and invests each contribution separately, i.e., the wealth $W_t$ is seen as a sum of the individual wealth components resulting from investing the contributions separately: \begin{equation} W_t=\sum\limits_{\tau=0}^t W_{t,\tau}, \end{equation} where $W_{t,\tau}$ is the wealth component from investing the contribution $c_\tau$. As in \eqref{eqn:cumulative_target_wealth}, a wealth target $\tilde{W}_{t,\tau}$, at time $t$ for a contribution invested at time $\tau\leq t$, is given by \begin{equation}\label{eqn:individual_wealth_target} \tilde{W}_{t,\tau}=\frac{M_t}{\tilde M_T} c_\tau \; \mathbb{E}_tF_\tau.
\end{equation} Apart from this, the strategy works similarly: the individual contributions are invested in the risky asset until the invested amount exceeds the wealth target for that contribution, in which case they are transferred to the matching portfolio until retirement. Thus, the control $\alpha_{t,\tau}$ for investing contribution $c_\tau$ is given by \begin{numcases}{\alpha_{t,\tau}=\label{eqn:individual_control_alpha_t}} 0 & if for any $\tau'=\tau,\ldots,t$ we have $W_{\tau',\tau} \geq \tilde W_{\tau',\tau},$\\ 1 & otherwise. \end{numcases} At the aggregated level, the control $\alpha_t$ is now given by \begin{equation} \alpha_t=\frac{1}{W_t}\sum\limits_{\tau=0}^t W_{t,\tau}\alpha_{t,\tau}. \end{equation} Conceptually, the difference between the cumulative target strategy and the individual target strategy is what triggers the risk reduction. Contrary to the individual target strategy, in the cumulative target strategy new investments have to make up for insufficient past returns before a transfer to the matching portfolio can take place. On the other hand, in the cumulative target strategy good past returns may cause new contributions to be transferred immediately to the matching portfolio. With the individual target strategy, each contribution has to generate sufficient return on investment before such a transfer takes place. \subsection{Combination strategy}\label{combination_strategy} Both the cumulative and the individual target strategy either reduce risk by switching completely to the matching portfolio or do not reduce risk at all. Instead of completely switching or not switching at all, the combination strategy, which is the third strategy considered, combines the individual target strategy with dynamic programming to dynamically steer the wealth $W_{t,\tau}$ resulting from the contribution $c_\tau$ above its wealth target $\tilde W_{t,\tau}$.
For this, we define the following wealth to target ratio, \begin{equation}\label{eqn:state_variable} Z_{t,\tau} := \frac{W_{t,\tau}}{\tilde W_{t,\tau}}, \end{equation} and solve \begin{equation}\label{eqn:dynamic_programming_problem_combination_strategy} V(z,t,\tau) = \sup_{\mathcal{A}_{t,\tau}} \EX{\check{U}(Z_{T,\tau}) | Z_{t,\tau} = z}, \end{equation} where $\check{U}$ is a utility function, $V(z,t,\tau)$ is the value function in the dynamic programming problem and the control $\mathcal{A}_{t,\tau}$ consists of the future investment decisions: \begin{equation} \mathcal{A}_{t,\tau} = \{\alpha_{t,\tau}, \ldots,\alpha_{T,\tau}\}. \end{equation} Using the dynamic programming principle, it follows that the optimal control, $\mathcal{A}_{t,\tau}^*$, satisfies \begin{equation} \mathcal{A}_{t,\tau}^* = \{\alpha_{t,\tau}^*, \mathcal{A}_{t+1,\tau}^*\}, \end{equation} which allows us to solve the optimal control problem for $\mathcal{A}_{t,\tau}^*$ backwards in time. In this context, we choose a utility function that steers the ratio $Z_{T,\tau}$ in between the bounds $z^*_{\mathrm{min}}$ and $z^*_{\mathrm{max}}$. This is in line with the investor's goal of minimizing downside risk, and with our assumption that upward potential comes with downside risk. The utility function should be concave and here takes the following functional form: \begin{align}\label{eq:utility} \check{U}(z) &= \frac{-\lrp{z-\beta}^2 - \lrp{z - z^*_{\mathrm{min}}}^2}{z}, \end{align} where \begin{align*} \beta &= \sqrt{2\lrp{z^*_{\mathrm{max}}}^2 - \lrp{z^*_{\mathrm{min}}}^2}, \end{align*} see Figure \ref{utility_function} (note that this is a different utility function than $U(\cdot)$ from \eqref{eqn:utility_shortfall}). Utility function $\check{U}(\cdot)$ is clearly concave and continuous on the domain $\mathbb{R}_{>0}$.
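The particular choice of $\beta$ puts the maximiser of $\check{U}$ exactly at $z^*_{\mathrm{max}}$: writing $\check{U}(z) = -2z + 2(\beta + z^*_{\mathrm{min}}) - (\beta^2 + (z^*_{\mathrm{min}})^2)/z$ and setting the derivative to zero gives $z = \sqrt{(\beta^2 + (z^*_{\mathrm{min}})^2)/2} = z^*_{\mathrm{max}}$. The following short numerical check (ours, using the bound values chosen below) confirms this and the concavity:

```python
import numpy as np

z_min, z_max = 1.0, 3.0
beta = np.sqrt(2 * z_max**2 - z_min**2)

def u_check(z):
    """Utility (-(z - beta)^2 - (z - z_min)^2) / z on z > 0."""
    return (-(z - beta) ** 2 - (z - z_min) ** 2) / z

z = np.linspace(0.1, 6.0, 1001)
u = u_check(z)
z_star = z[np.argmax(u)]            # numerical maximiser, close to z_max
assert abs(z_star - z_max) < 0.01
assert np.all(np.diff(u, 2) < 0)    # second differences negative: concave
```

Because the maximum sits at $z^*_{\mathrm{max}}$ and the function is strictly concave, the dynamic program is rewarded for pushing the wealth to target ratio towards the upper bound without overshooting it.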
We set $z^*_{\mathrm{min}}=1$ and $z^*_{\mathrm{max}}=3$, as this choice fits well with the investor's replacement ratio target and, as we will show in Section \ref{sec:acarulebased}, is sufficient to demonstrate the strategy's added value. \begin{figure} \centering \input{Tikzpictures/utility_function} \caption{Plot of \eqref{eq:utility} with $z^*_{\mathrm{min}}=1$ and $z^*_{\mathrm{max}}=3$.} \label{utility_function} \end{figure} Now, we will show that the ratio $Z_{t,\tau}$, between the current wealth $W_{t,\tau}$ and its target $\tilde W_{t,\tau}$, evolves in time by making returns on investment in the numerator and updating the inflation expectation in the denominator. Since this time evolution is independent of $\tau$, we can show that the optimal control $\alpha_{t,\tau}^*$ is independent of $\tau$, i.e., once the optimal control is found, it can be applied to all contributions. \begin{lemma}\label{lemma:indepence_tau} The optimal control $\alpha_{t,\tau}^*$ of dynamic programming problem \eqref{eqn:dynamic_programming_problem_combination_strategy} is independent of the contribution $c_\tau$ and the time $\tau$ at which the contribution is made. \end{lemma} \begin{proof} The portfolio wealth $W_{t,\tau}$, accumulated by investing contribution $c_\tau$, increases with the return on investment and, therefore, satisfies \begin{equation}\label{eqn:recursing_wealth} W_{t,\tau}=\left[(1+x_t)\alpha_{t-1,\tau}+(1+m_t)(1-\alpha_{t-1,\tau})\right]W_{t-1,\tau}. \end{equation} From \eqref{eqn:individual_wealth_target}, \eqref{eqn:target_wealth_factor} and \eqref{eqn:returns matching portfolio}, it follows that the wealth target, $\tilde W_{t,\tau}$, satisfies \begin{equation}\label{eqn:recursing_wealth_target} \tilde W_{t,\tau}=\frac{\mathbb{E}_t F_{t-1}}{\mathbb{E}_{t-1}F_{t-1}} (1+m_t) \tilde W_{t-1,\tau}.
\end{equation} Substitution of \eqref{eqn:recursing_wealth} and \eqref{eqn:recursing_wealth_target} in \eqref{eqn:dynamic_programming_problem_combination_strategy} yields that the optimal controls $\alpha_{t,\tau}^*$ solve \begin{equation}\label{eqn:optimal_control} \sup_{\mathcal{A}_{t,\tau}} \EX{\left. \check{U}\left( z \prod\limits_{\tau'=t}^{T-1} \frac{(1+x_{\tau'+1})\alpha_{\tau',\tau}+(1+m_{\tau'+1})(1-\alpha_{\tau',\tau})}{\frac{\mathbb{E}_{\tau'+1} F_{\tau'}}{\mathbb{E}_{\tau'} F_{\tau'}} (1+m_{\tau'+1}) } \right) \right| Z_{t,\tau} = z}. \end{equation} This shows that both the value function $V(z,t,\tau)$ and the optimal control $\alpha_{t,\tau}^*$ are independent of $\tau$. \end{proof} Lemma \ref{lemma:indepence_tau} implies that, theoretically, the dynamic programming problem has to be solved only once, i.e., the investment decisions for the first contribution $c_0$ can be used for all other contributions. For the practical implementation of the dynamic programming algorithm, readers may refer to Appendix \ref{sec:algorithm}. \subsection{Target replacement ratio} \label{sec:target-replacement-ratio} The variable $r$, used in the construction of the wealth target, can be interpreted in multiple ways. First of all, it serves as a discount rate, which is used to compute the present value of contributions that are made in the future. It can also be viewed as an annual return requirement: each contribution is required to have an average annual return of $r$. A third interpretation of $r$ is that of a future expected annual return. The computation of the expected replacement ratio requires a future annual return assumption. Let $t\in\mathcal{T}$ and let $\mathcal{F}_t$ be the corresponding filtration.
The expected replacement ratio $R_t$ is defined as \begin{align*} R_t &:= \frac{\EX{P | \mathcal{F}_t}}{\EX{\sum\limits_{u = 0}^{T} s_u\prod\limits_{\tau=u+1}^{T} (1+\pi_\tau)|\mathcal{F}_t}}, \end{align*} where \begin{align*} \EX{P|\mathcal{F}_t} &= \frac{\EX{W_T|\mathcal{F}_t}}{\EX{M_T|\mathcal{F}_t}}, \end{align*} with \begin{align*} \EX{W_T|\mathcal{F}_t}&= \lrb{1+r+ I(T;t)}^{T-t}W_t + \sum_{k=t+1}^{T-1} \lrb{1+r+I(T;k)}^{T-k}\EX{c_k|\mathcal{F}_t}\:. \end{align*} Computation of the expected replacement ratio requires four different estimators. The discount rate $r$ is used as an estimator for the future expected annual return. The future inflation, $I(T;t)$, is estimated through regression between the future and the past cumulative inflation, as shown in Equation (\ref{eqn:inflation}). Future salaries are based on the information from Table \ref{table_salary}. Lastly, the estimator for the market value factor at the end of the investment horizon, $\EX{M_T|\mathcal{F}_t}$, is based on regression between $M_t$ and $M_T$, with $\Phi = \{1,x\}$. See Appendix \ref{sec:least-squares-monte-carlo-method} for details of the regression method used. The market value factor is considered to be independent of the discount rate $r$, inflation and wage inflation (the division operator can therefore be taken out of the expected value operator). The computation of the target replacement ratio at time $t$ is similar to the computation of the expected replacement ratio. The only difference is that the portfolio wealth, $W_t$, is replaced by the target terminal wealth, $W^*(t)$.
The target wealth definition causes the target replacement ratio, $R^*(t)$, to be independent of the market value factor: \begin{align*} R^*(t) &= \frac{\EX{P^*|\mathcal{F}_t}}{\EX{\sum\limits_{u = 0}^{T} s_u\prod\limits_{\tau=u+1}^{T} (1+\pi_\tau)|\mathcal{F}_t}}, \end{align*} by using the independence of the market factor $M_T$ and the wealth process and the definition of the current target wealth $\tilde{W}_t$, \begin{align*} \EX{P^*|\mathcal{F}_t} &= \frac{\EX{W^*(T)|\mathcal{F}_t}}{\EX{M_T|\mathcal{F}_t}} =\tilde{W}_t. \end{align*} As mentioned, the investor's target is to reach a replacement ratio of $70\%$. To translate this target into the wealth target in terms of portfolio wealth, we have: \begin{align*} \tilde{W}_t &= R^*(t)\EX{\sum\limits_{u = 0}^{T} s_u\prod\limits_{\tau=u+1}^{T} (1+\pi_\tau)|\mathcal{F}_t}\:. \end{align*} To steer towards a fixed replacement ratio target, $\tilde{W}_t$ would have to be altered for each scenario. From a computational point of view, however, it is easier to let the quantity $R^*(t)$ vary slightly between scenarios. Instead, $\tilde{W}_t$ is defined as in Equation (\ref{eqn:cumulative_target_wealth}) and $r$ is set to the required annual return. Numerically, we find that the target replacement ratio within a scenario is almost constant throughout time, as can be seen in the bottom-left plot of Figure \ref{single_scenario_cum}. Small alterations are caused by the estimators for the inflation and the wage inflation. Alterations of up to $0.01$ within a scenario are observed for a discount rate of $2.5\%$. Target replacement ratios are between $0.6847$ and $0.7033$ for a discount rate of $2.5\%$. \section{Numerical evaluation} In this section, we apply the rule-based strategies described in Section \ref{rule_based} to the pension investor introduced in Section \ref{theoretical_background}, using the governing stochastic model described in Section \ref{sec:governing-stochastic-model}.
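Before turning to the results, the mechanics of the cumulative target controls \eqref{eqn:cumulative_control_alpha_0} and \eqref{eqn:cumulative_control_alpha_t} can be sketched in a few lines of Python. The sketch is ours and deliberately simplified: i.i.d. normal returns with the moments of Table \ref{tab:annual_stats} replace the VAR(1) simulation, contributions are flat, and the wealth-target path is a made-up placeholder for \eqref{eqn:cumulative_target_wealth}.

```python
import numpy as np

rng = np.random.default_rng(2017)
T = 41                                   # decision times t = 0, ..., T
c = np.ones(T + 1)                       # stylised flat contributions
W_target = 1.5 * np.cumsum(c)            # placeholder wealth-target path
alpha = np.zeros(T + 1)                  # fraction in the return portfolio

W = c[0]
alpha[0] = 0.0 if W >= W_target[0] else 1.0
for t in range(1, T + 1):
    x = rng.normal(0.061, 0.183)         # return-portfolio return (table moments)
    m = rng.normal(0.034, 0.185)         # matching-portfolio return (table moments)
    W = W * (alpha[t - 1] * (1 + x) + (1 - alpha[t - 1]) * (1 + m)) + c[t]
    if W >= W_target[t]:
        alpha[t] = 0.0                   # risk off: all wealth to matching
    else:                                # otherwise the allocation drifts
        g = alpha[t - 1] * (1 + x)
        alpha[t] = g / (g + (1 - alpha[t - 1]) * (1 + m))
```

Note that in this simplified sketch, once the allocation hits zero it stays zero, which is what the displayed controls prescribe; the treatment of new contributions entering the risky asset, as described in Section \ref{cumulative_target_strategy}, is left out for brevity.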
\subsection{Rule-based strategies}\label{sec:acarulebased} To illustrate the dynamics of the rule-based strategies, Figure \ref{single_scenario_cum} shows one of the 2000 sample paths of the investor's portfolio dynamics. In particular, the top-left panel shows the investor's wealth $W_t$ when following the cumulative target strategy (orange), together with the wealth target $\tilde W_t$ (yellow). Note that when the investor's wealth exceeds this target, the investments are transferred to the matching portfolio (orange line, bottom-right panel). The individual target strategy (in green) works similarly but, as discussed, uses a target per contribution, so that, typically, only part of the wealth is transferred to the matching portfolio (green line, bottom-right panel). The bottom-left panel illustrates that, in this sample path, the rule-based strategies outperform the optimal static strategy in terms of expected replacement ratio, even though it is only in the first 10 years of the investment that the rule-based strategies take substantially more risk, i.e., have a substantially higher allocation to the return portfolio. Therefore, in this particular sample path, one could argue that the better performance comes from the rule-based strategies themselves and not from increased exposure to risk. \usepgfplotslibrary{groupplots} \begin{figure}[tp!] \centering \input{Tikzpictures/single_scenario_cum} \caption{Sample paths of wealth, return of the matching and return portfolios, expected replacement ratio, and allocation to the matching portfolio for the pension investor introduced in Section \ref{theoretical_background} following rule-based strategies with discount rate $r=2\%$ and $\pi=2.5\%$, and using the governing stochastic model described in Section \ref{sec:governing-stochastic-model}. 
Top left: wealth $W_t$ for, respectively, the cumulative target strategy (orange), its wealth target $\tilde W_t$ (yellow), the individual target strategy (green), the optimal static strategy (blue) and the cumulative contribution (dark blue). Top right: return of the matching portfolio (blue) and the return portfolio (orange). Bottom left: the $70\%$ replacement ratio target (yellow) together with the expected replacement ratio of the cumulative target strategy (orange), the individual target strategy (green) and the optimal static strategy (blue). Bottom right: allocation $1-\alpha_t$ to the matching portfolio for the cumulative target strategy (orange), the individual target strategy (green) and the optimal static strategy (blue).} \label{single_scenario_cum} \end{figure} The combination strategy is best illustrated by means of the resulting investment decisions, i.e., the optimal control $\alpha_{t,\tau}$ as defined by equation \eqref{eqn:optimal_control}. Figure \ref{fig:decision_combination_strategy} illustrates the optimal allocation to the matching portfolio, $1-\alpha_{t,0}$, for the first contribution $c_0$ as a function of time $t$ and the wealth-to-target ratio, as defined by \eqref{eqn:state_variable}. In this example, allocations are restricted to multiples of $20\%$. Note that, contrary to the rule-based strategies, the combination strategy can also increase risk and transfer investments from the matching to the return portfolio. Altogether, this makes the combination strategy more refined, in terms of its allocation, than the rule-based strategies, which follow a ``risk on'' or ``risk off'' approach. \begin{figure}[t!] 
\input{Tikzpictures/color_map_26_67_rr} \caption{Optimal allocation $1-\alpha_{t,0}$ to the matching portfolio, as a function of the wealth-to-target ratio $\nicefrac{W_{t,0}}{\tilde W_{t,0}}$ (y-axis), as defined by equation \eqref{eqn:state_variable}, and time $t$ (x-axis) for the first contribution of an investor following the combination strategy, discussed in Section \ref{combination_strategy}, and using the stochastic model of Section \ref{sec:governing-stochastic-model}. Allocations are restricted to multiples of $20\%$.} \label{fig:decision_combination_strategy} \end{figure} Figure \ref{fig:replacement_ratio} compares the distribution of the terminal replacement ratio for the following best-performing strategies in terms of the expected shortfall below the investor's $70\%$ replacement ratio target: two rule-based strategies, a combination strategy and a static strategy. The figure illustrates that, as intended, the dynamic strategies reduce downside risk at the expense of upward potential, i.e., the dynamic strategies are centered more closely around the target replacement ratio of $70\%$. \begin{figure}[pt!] \input{Tikzpictures/hist_replacement_ratio_indv_academic} \caption{Distribution of the replacement ratio for, respectively, the cumulative target strategy with $r=3.06\%$ (orange), the individual target strategy with $r=2.99\%$ (green), the combination strategy with $r=1\%$ (yellow) and a static strategy with a $46.02\%$ constant allocation to the return portfolio.} \label{fig:replacement_ratio} \end{figure} A comparison of all strategies is best made by plotting each strategy's success, i.e., whether it achieves the intended $70\%$ replacement ratio target, against its downside risk, parametrizing the strategies by the parameters that control their risk appetite; see Figure \ref{fig:10CVaR}. From this figure, we conclude that all the dynamic strategies clearly outperform the traditional static strategies. 
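The two risk measures compared in Figure \ref{fig:10CVaR} can be estimated directly from simulated terminal replacement ratios. The sketch below assumes that the expected shortfall of equation \eqref{eqn:utility_shortfall} is the mean shortfall below the $70\%$ target and that the $10\%$ CVaR is the average of the worst $10\%$ of outcomes; both are our assumed readings, shown for illustration only.

```python
def expected_shortfall(ratios, target=0.70):
    """Mean shortfall below the target replacement ratio
    (assumed form of the paper's utility-shortfall measure)."""
    return sum(max(target - r, 0.0) for r in ratios) / len(ratios)

def cvar(ratios, level=0.10):
    """CVaR at the given level: mean of the worst `level` fraction
    of the simulated terminal replacement ratios."""
    n_tail = max(1, int(level * len(ratios)))
    worst = sorted(ratios)[:n_tail]
    return sum(worst) / len(worst)
```

For four simulated ratios $[0.5, 0.6, 0.8, 0.9]$, the mean shortfall below $0.70$ is $(0.2 + 0.1)/4 = 0.075$, and the $25\%$ CVaR is the single worst outcome, $0.5$.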
Together with the intuitive rationale of reducing risk after several good years, we believe this sufficiently demonstrates the added value of these dynamic strategies. We do, however, find these simulations insufficient to rank the dynamic strategies by their effectiveness. It is well known that the relative performance of dynamic strategies can be sensitive to the characteristics of the underlying stochastic model, so such a ranking would not be completely objective. We believe that use of the strategies in practice is an appropriate way to test them further (which lies beyond the scope of this research). \begin{figure} \input{Tikzpictures/cvar_measure_indv_academic} \caption{Expected shortfall below the investor's $70\%$ replacement ratio target, see equation \eqref{eqn:utility_shortfall}, versus the $10\%$ CVaR of the terminal replacement ratio, as defined by (\ref{kkk}), for: the cumulative target strategy (orange), the individual target strategy (green) and the combination strategy (yellow), all parametrized by the real rate of return $r$, as well as several annually rebalanced static allocations (blue) and several default life cycles that reduce risk with the investor's age.} \label{fig:10CVaR} \end{figure} \subsection{Discussion} One of the intended advantages of a static life cycle strategy is the reduced risk close to retirement, meaning that one can provide the investor with an accurate estimate of her retirement income in the years before retirement. Table \ref{table:overview_strategies} provides a comparison of the dynamic strategies and traditional life cycle strategies. In particular, the table lists the standard deviations of the difference between the expected replacement ratio 5 years before retirement and the replacement ratio at retirement. We conclude that, when following the rule-based strategies, the investor can be provided with a similarly accurate estimate of the replacement ratio before retirement. 
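The estimation-error statistics reported in Table \ref{table:overview_strategies} can be computed from the simulations as sketched below. We assume here that the error is the absolute difference between the expected replacement ratio 5 years before retirement and the realized ratio at retirement, and that a population standard deviation is reported; both are our assumptions about the table's definitions.

```python
import math

def estimation_error_stats(expected_5y_before, realized_at_T):
    """Mean and (population) standard deviation of the assumed
    estimation error |E[R | F_{T-5}] - R_T| across scenarios."""
    errors = [abs(e - r) for e, r in zip(expected_5y_before, realized_at_T)]
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in errors) / n)
    return mean, std
```

A small standard deviation indicates that the pre-retirement estimate is a reliable prediction of the realized replacement ratio, which is the property discussed above.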
\enlargethispage{2.5cm} \begin{table}[H] \hspace{-1.7cm} \begin{tabular}{l|lll|lll|lll} &\multicolumn{3}{c|}{\bfseries\sffamily Static mix} & \multicolumn{3}{c|}{\bfseries\sffamily Static life cycle} & \multicolumn{3}{c}{\bfseries\sffamily Dynamic strategies} \\ & $0\%$ &$100\%$&$46.02\%$& Def. & Neut. & Off. & Cum. & Indiv. & Comb. \\ \hline Averages ($R$)&&&&&&&&& \\ \qquad Mean & $0.56$ & $1.13$ & $0.77$ & $0.66$ & $0.71$ & $0.79$ & $0.70$ & $0.70$ & $0.75$ \\ \qquad $10\%$ CVaR & $0.44$ & $0.28$ & $0.41$ & $0.42$ & $0.42$ & $0.40$ & $0.36$ & $0.41$ & $0.40$ \\ \qquad $5\%$ CVaR & $0.42$ & $0.24$ & $0.37$ & $0.39$ & $0.39$ & $0.36$ & $0.28$ & $0.34$ & $0.35$ \\ Percentiles ($R$) &&&&&&&&&\\ \qquad Median & $0.56$ & $0.83$ & $0.72$ & $0.64$ & $0.67$ & $0.72$ & $0.72$ & $0.73$ & $0.75$ \\ \qquad $10\%$ VaR & $0.46$ & $0.36$ & $0.47$ & $0.47$ & $0.47$ & $0.46$ & $0.51$ & $0.52$ & $0.47$ \\ \qquad $5\%$ VaR & $0.44$ & $0.29$ & $0.41$ & $0.43$ & $0.44$ & $0.41$ & $0.37$ & $0.44$ & $0.41$ \\ Goal ($70\%~R$) &&&&&&&&&\\ \qquad Shortage & $0.139$ & $0.093$ & $0.070$ & $0.088$ & $0.078$ & $0.073$ & $0.044$ & $0.045$ & $0.063$ \\ \qquad Goal reached& $6\%$ & $60\%$ & $53\%$ & $35\%$ & $44\%$ & $53\%$ & $65\%$ & $65\%$ & $57\%$ \\ Estim. error ($R$) &&&&&&&&& \\ \qquad Mean & $0.088$ & $0.398$ & $0.139$ & $0.095$ & $0.092$ & $0.115$ & $0.113$ & $0.091$ & $0.144$ \\ \qquad Std dev. & $0.06$ & $0.57$ & $0.12$ & $0.06$ & $0.06$ & $0.10$ & $0.09$ & $0.07$ & $0.12$ \\ \end{tabular} \caption{Statistics for different investment strategies. 
Values in this table were calculated using $r = 3.06\%$ for the cumulative target strategy, $r = 2.99\%$ for the individual target strategy, and $r = 1\%$ for the combination strategy.} \label{table:overview_strategies} \end{table} Although the rule-based strategies outperform the other strategies in our examples, we wish to point out that the all-or-nothing approach also has disadvantages, e.g., the portfolio remains $100\%$ invested in the more risky return portfolio as long as targets are not reached. Such truly worst-case scenarios appear to have a minor influence, but are, e.g., illustrated in the far left lower tail in Figure \ref{fig:10CVaR}. The individual target strategy presented in Section \ref{sec:indv_target} suffers less from these all-or-nothing disadvantages, as it defines a target per contribution. As a result, inferior past performance does not influence the required performance of current and future contributions. Compared to the rule-based strategies, the combination strategy does not exploit the fact that the matching portfolio can grow an investment securely to its intended target (indexed by expected inflation) until retirement. As the rule-based strategies explicitly make use of this, the combination strategy could be further improved. One advantage of the combination strategy in practical use is that the corresponding asset allocation is much smoother than for the rule-based strategies. The necessity of large turnovers is difficult to explain, and investors might be uncomfortable following such a drastic strategy to the end. \section{Conclusion}\label{conclusion} In this paper, we discussed several dynamic strategies suitable for pension investors who aim to replace a proportion of their salary with a retirement income. The strategies reduce risk after several good years and steer the investor towards her target. 
By letting the allocation depend on the return on investment, these approaches exploit a freedom that is typically not used by traditional static approaches. We have shown that the dynamic approaches may outperform some traditional static approaches and prevent unnecessary risk taking. Two simple and intuitive rule-based strategies were introduced that secure investments in a cash-flow matching portfolio once they have yielded sufficient return. Although both rule-based strategies can straightforwardly be implemented in practice, we recommend also investigating alternatives in which the investor, e.g., switches between an aggressive traditional life cycle and a matching portfolio, to rule out very aggressive portfolios close to retirement. The rule-based strategies were further refined into a combination strategy based on dynamic programming. In the current setup, the combination strategy may not be superior; we even found that the rule-based strategies outperform the combination strategy in a numerical example. We nevertheless believe that dynamic strategies based on dynamic programming can be further improved, as this research clearly demonstrates their added value to pension investors. The most suitable dynamic strategy is hard to determine objectively, as its performance depends on the governing stochastic model. Such a dynamic strategy should also fit well with practical requirements, such as whether an investor will follow through on the strategy or will feel the need to combine it with her own judgement, and whether such strategies comply with regulations. This research, however, demonstrates the added value of dynamic strategies to pension investors. In summary, such strategies exploit freedom that is not used by traditional approaches, can steer a pension investor towards her target and prevent unnecessary risk taking. \printbibliography
{"url":"https:\/\/discourse.mc-stan.org\/t\/why-transformations-need-to-be-invertible-in-change-of-variable-in-probability-theory\/22317","text":"Why transformations need to be invertible in change of variable in probability theory?\n\nHi everyone,\n\ntoday I have been asking myself about something that although not related directly to Stan, I think it is a topic of interest for Stan users. It is something that for a reason it is not explained (or at least I have not found the explanation) but rather always assumed. I think that this is a great forum to find the answer.\n\nThe question is simple. When we have a change of variables, F: X \\rightarrow Y, why do we require that the transformation between F must be a bijection?. I guess the answer is because we must require that each element of the set X is identified in each element of the set in Y. In this case, the absolute value of the determinant of the Jacobian is in charge of accounting for the change in volume that F produces.\n\np(Y) = p(F(X))\\left|\\det \\frac{\\partial F}{\\partial Y} \\right|\n\nThe reason I ask this, and apologize if it is a stupid or trivial question, is because I have seen that the general construction in the formula above comes froma more general viewpoint which is the integration by substitution, Integration by substitution - Wikipedia, where the condition required is that F is injective, and not really bijective. In other words, I have seen that this rule we apply in probability theory holds for more general transformations F beyond the bijective ones, and I was wondering why, in the specific case of probability theory, we require our transformation F to be bijective.\n\n2 Likes\n\nAn injective function with its codomain restricted to its image is bijective, is it not?\n\nAs a first guess, for the integration you do not care about the part of the codomain that is not in the image? 
\u201cWe\u201d also only care about the image of the function?\n\n2 Likes\n\nYes, I think that if the codomain of a function is the image then the function will be a bijection. But I think that what I exposed above holds without this observation, so not sure about your point.\n\nI think that the fact that you integrate or not does not really make a difference. There is an example in the link I send where under subtitle: \u201cApplication to probability theory\u201d\n\nSuppose the function is injective but not bijective. Now we want a Jacobian adjustment that works for an arbitrary prior that we might place on Y. If f isn\u2019t bijective, then there\u2019s a good chance that we will pick a prior that places nonzero probability mass on an element of Y that cannot be mapped back to X. If, on the other hand, we declare Y with an appropriate constraint to ensure no prior mass over the parts that don\u2019t map to X, then f is a bijection.\n\nEdit:\nPut slightly differently, the restriction that we need is that our prior doesn\u2019t put any probability density on elements of Y not in the image of f (this is related the integration that @Funko_Unko is talking about). What\u2019s the convenient way to express this restriction? Well, let\u2019s require that f be bijective. If we expand the codomain of f beyond its image, then we just need to immediately crop those extra parts of the codomain back out via the prior.\n\nNote that for any choice of f that is injective but not bijective, with an appropriate prior that puts no probability density outside the image, we can without loss of generality restrict the codomain to the image of f. So we don\u2019t lose anything important by stating the requirement as \u2018bijective.\u2019\n\nNote further that for a host of computational reasons (ranging from initialization to floating-point precision near the boundary) it is good practice in Stan to declare parameters with constraints whenever constraints are implied by the prior. 
So we get better computation by explicitly restricting the codomain to coincide with the image anyway.\n\n2 Likes\n\nThanks both for the answer. So based on both of them we can somehow conclude that the reason is because we must require that each element of the set X is identified in each element of the set in Y without having to do it by directly expressing constraints through prior probabilities, but rather directly by construction.\n\n1 Like\n\nIt needs to be bijective because you are gonna use the inverse of the function F. Lets compute a probabliliy:\n\nP(F(X) < x) = P( X < F^{-1} (x) )\n\nThe only measure space you really know is the one on X and not of F(X) so every tine you need to compute probabilities for F(X) you need to do that step. And therefore you need a bijective function.\n\nThis is actually a theorem the theorem of transformation densities for random variables.\n\nActually, if you are not trying to do a nice global change of variables, F need not be invertible, you could just use its preimage.\n\nSay F(x)=x^2, you can still compute P(F(x)<1) from p(x).\n\nI guess the reason why you want a bijection between X and Y is exactly that you do not want to lose any information you have on X or Y, and hence you need this one-to-one mapping.\n\nConcerning the bijection vs injection, exp: R \\to R is clearly an injection but no bijection, but we can just restrict it to exp: R \\to R^+, so there\u2019s not really an issue there, or is there?\n\nEdit:\n\nAfter some more careful checks, I have realized that, as noted also by @asael_am , we can perform change of variables and the only requirement is that the function that performs the change h() is measurable. 
However, only when h() is either a bijection or an injection, one can use the equation I placed at the beginning of my post, and there are several ways to arrive at it (once is integral by substitution, but also writting P(h(X) \\leq y) = P(X \\leq h^{-1}(Y)) and then derivate to obtain the density gives the result.\n\nHowever, I think that beyond this fact, I think it is interesting to know what push us to use bijections and not injections. I guess, as already stated, it is because we want our elements in X to be uniquely determined in our elements in Y. However, beyond this fact, why not just an injection? Is there any other reasons? What could be the implications?\n\nThanks again\n\nEvery injective function corresponds to a bijective function whose codomain is restricted to the image. So the only question here is about how we think about the codomain.\n\nThe purpose of doing a Jacobian adjustment is to obtain inference based on some prior density function expressed over the codomain. For example, given some univariate function f(x) whose codomain is the entire real line, suppose that I have prior knowledge that f(x) is normally distributed, and so I write target += normal_lpdf(f(x) | 0, 1). The purpose of the Jacobian adjustment is to ensure that the prior density for f(x) is actually the standard normal.\n\nIf f(x) = e^x, then I cannot achieve my desired prior density for f(x)! So the Jacobian adjustment hasn\u2019t worked! Instead of e^x \\sim Normal(0,1), it yields e^x \\sim RTHN(0, 1), where RTHN is the Right-hand Tail of a Half Normal. 
This is inconsistent with my domain knowledge and is not the prior that I intended!\n\nIf the codomain of f were restricted to the positive reals, then I would have known from the beginning that I couldn\u2019t expect f(x) to be normally distributed, and I would have known that in writing target += normal_lpdf(f(x) | 0, 1) plus a Jacobian adjustment, that I was obtaining a half-normal prior rather than a standard normal.\n\nLet\u2019s back up a little bit because all of the talk about injectivity, bijectivity, images, and codomains is missing some important points. The problem with trying to understand the change of variables formula and its limitations is that it requires a deep dive into probability theory. I\u2019ll try to do that here, but it\u2019ll take a while. If any of the below concepts are confusing to anyone reading along then I recommend taking a look at my probability theory case study, Probability Theory (For Scientists and Engineers).\n\nBefore talking about maps let\u2019s make sure we\u2019re on the same page with the basics. A probability space consists of an ambient space X, endowed with a \\sigma-algebra \\mathcal{X} consisting of \u201cnice\u201d subsets of X and a probability distribution \\pi that maps elements of the \\sigma-algebra into probabilities in way that\u2019s compatible with countable unions, intersections, and complements.\n\nNow let\u2019s consider another space Y equipped with its own \\sigma-algebra \\mathcal{Y} along with a map F: X \\rightarrow Y.\n\nNominally F just maps points in X to points in Y but this point-wise mapping can also induce maps from objects defined on X to objects defined on Y. For example by breaking a subset A \\subset X into points and then mapping them to Y before collecting those output points in other subset F(A) \\subset Y the original map F induces a map from subsets on X to subsets on Y. 
This kind of induced map in the same direction of F is called a pushforward along F.\n\nAt the same time F might also induce maps from objects defined on Y to objects defined on X. If F isn\u2019t bijective then we can\u2019t define an inverse point-wise map F^{-1} : Y \\rightarrow X, but we can we can define a map from subsets B \\subset Y to subsets F^{-1}(B) \\subset X. This kind of induced map in the opposite direction of F is called a pullback along F.\n\nSo the point-wise map F induces both a pushforward and pullback map between subsets on X and Y. These induced maps, however, will not in general respect the \\sigma-algebras. In particular if A \\in \\mathcal{X} then the output of the pushforward map F(A) need not be in \\mathcal{Y}, and vice versa for the pullback map.\n\nIf the pullback map is compatible with the \\sigma-algebras so that for every B \\in \\mathcal{Y} we have F^{-1}(B) \\subset \\mathcal{X} then we can define another induced pushforward map, this time between probability distributions. Every probability distribution \\pi defined on X defines a pushforward probability distribution F_{*} \\pi on Y via the probabilities\n\n\\mathbb{P}_{F_{*} \\pi}[B] = \\mathbb{P}_{\\pi}[ F^{-1}(B) ].\n\nAgain we need F^{-1}(B) to be in \\mathcal{X} otherwise the initial probability distribution won\u2019t know how to assign a probability to the pullback subset.\n\nMeasurable functions\/maps\/transformations are just the maps satisfying the compatibility requirement that allows us to define pushforward probability distributions. In other words measurable maps are the only maps that allow us to translate probability distributions from one space to another.\n\nNote that at this point no other requirement has been made on the structure of X, Y, and F. 
X and Y don\u2019t have to have the same dimensions, F doesn\u2019t have to be bijective or even injective so long as it satisfies the \\sigma-algebra consistency property.\n\nIf the dimension of Y is less than the dimension of X then a measurable surjection F : X \\rightarrow Y is commonly known as projection map, and pushforward distributions are known as marginal distributions.\n\nIf the dimension of X and Y are the same and both F and F^{-1} are measurable then a bijection F: X \\rightarrow Y is commonly known as a reparameterization.\n\n(Side note: codomains are irrelevant here as the \\sigma-algebras and probability distributions of interest are all defined over the entire domain).\n\nThey key difference between these two types of maps is that projections loose information while reparameterizations do not. If F is a reparameterization then we can start at \\pi on X, pushforward to F_{*} \\pi on Y, then pushforward along F^{-1} to recover the original distribution,\n\n(F^{-1})_{*} F_{*} \\pi = \\pi.\n\nThis is not true of projection functions \u2013 we can map \\pi on X to F_{*} \\pi on Y but there\u2019s no way to recover \\pi from that pushforward distribution.\n\nOkay, so now we\u2019re finally ready to talk about probability density functions. Probability density functions are functions that quantify the difference between two measures. Mathematically we denote the density function of \\pi_{2} with respect to \\pi_{1} as\n\n\\pi_{21}(x) = \\frac{ \\mathrm{d} \\pi_{2} }{ \\mathrm{d} \\pi_{1} } (x).\n\nMost often we correct some standard \u201cuniform\u201d distribution on the ambient space to the probability distribution of interest. If X is a real space then that uniform distribution is the Lebesgue measure, \\mathcal{L}. 
In other words the probability density function of \\pi is actually the probability density function of \\pi relative to the Lebesgue measure,\n\n\\pi(x) = \\frac{ \\mathrm{d} \\pi }{ \\mathrm{d} \\mathcal{L} } (x).\n\nUsing the above machinery we can in some cases work out how to construct pushforward probability density functions. The basic idea is to take a distribution on X, push it forward along F to F_{*} \\pi on Y and then construct the density of each with respect to the uniform measures on X and Y respectively. In other words\n\n\\pi(x) = \\frac{ \\mathrm{d} \\pi }{ \\mathrm{d} \\mathcal{L}_{X} } (x) \\mapsto \\pi(y) = \\frac{ \\mathrm{d} F_{*} \\pi }{ \\mathrm{d} \\mathcal{L}_{Y} } (y).\n\nNotice that we pushforward \\pi along F but we define the densities with respect to the uniform distributions on X and Y respectively. We don\u2019t transform the uniform distribution on X to some distribution on Y because that pushforward distribution will in general no longer be uniform! Indeed when F: X \\rightarrow Y is a measurable bijection the amount by which F warps the initial uniform distribution is just the Jacobian determinant!\n\nMathematically when F is a bijection we can write\n\n\\begin{align*} \\pi(y) &= \\frac{ \\mathrm{d} F_{*} \\pi }{ \\mathrm{d} \\mathcal{L}_{Y} } (y) \\\\ &= \\frac{ \\mathrm{d} F_{*} \\pi }{ \\mathrm{d} F_{*} \\mathcal{L}_{X} } (y) \\cdot \\frac{ \\mathrm{d} F_{*} \\mathcal{L}_{X} }{ \\mathrm{d} \\mathcal{L}_{Y} } (y) \\\\ &= \\pi(F^{-1}(y)) \\cdot | J |(y) \\end{align*}\n\nwhich is exactly the usual \u201cchange of variables\u201d formula that\u2019s pulled out of thin air.\n\nWhen F is a surjection then the density of the pushforward uniform distribution from X relative to the uniform distribution on Y, \\mathrm{d} \\mathcal{L}_{X} \/ \\mathrm{d} \\mathcal{L}_{Y} is singular and so the usual change of variables formula cannot be applied. 
In these cases working out the pushforward probability density functions, or the marginal density functions, is much, much harder and usually cannot be done analytically.\n\n6 Likes\n\nOnce the probability distribution is parameterized yielding a probability density function over \\mathcal{Y}, is this not precisely equivalent to requiring that the codomain be restricted to the image?\n\n1 Like\n\nNo, not necessarily. The requirement is not just that F^{-1}(B) \\subset X but rather that F^{-1}(B) is an element of the particular \\sigma-algebra \\mathcal{X} defined on X. One cannot talk about probability distributions, let alone the transformation of probability distributions, using the the structure of the ambient space alone; one always has to consider the specific \\sigma-algebras that accompanies that space.\n\nI didn\u2019t go into this above, but in most cases there is a natural \\sigma-algebra to consider based on the topology of the ambient space, known as the Borel \\sigma-algebra. When using the Borel \\sigma-algebras most maps that preserve topological structure will automatically be measurable. For example in this case continuous, surjective maps are measurable.\n\nThat said on the real numbers one has technically to consider not just the Borel but also the Lebesgue \\sigma-algebras which are ever so slightly different.\n\n3 Likes\n\nEres un puto crack!. 
Thanks for this reply.\n\n2 Likes","date":"2022-05-24 15:30:02","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9845900535583496, \"perplexity\": 431.00428415372653}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652662573053.67\/warc\/CC-MAIN-20220524142617-20220524172617-00669.warc.gz\"}"}
The Order of the Star of Romania (Romanian: Ordinul Steaua României) is Romania's highest civil order and second highest state decoration after the defunct Order of Michael the Brave. It is awarded by the President of Romania. It has five ranks, from lowest to highest: Officer, Commander, Grand Officer, Grand Cross, and Grand Cross with Collar. History In 1863, Alexandru Ioan Cuza, the Domnitor of the United Principalities of Moldavia and Wallachia, asked the Romanian representative in Paris to contact the then well-known jewellery house Krétly to manufacture a state decoration. Krétly presented a model, which was immediately accepted by the domnitor, and, based on his agreement, 1,000 pieces of the order were made. It was decided that the order would have five ranks: Knight (Cavaler), Officer (Ofițer), Commander (Comandor), Grand Officer (Mare Ofițer), and Grand Cross (Mare Cruce). Unlike all other decorations of that time, which were mostly inspired by the French Légion d'honneur or had insignia shaped like a Maltese cross, the model proposed by Krétly for this order was a blue cross crosslet (cruce repetată), a design that was then unique among decorations. The domnitor decided that the name of the honour would be "The Order of the Union" ("Ordinul Unirii"). It was planned to institute the order on 24 January 1864, the date of the 5th anniversary of his election, the moment that marked the unification of the principalities of Moldavia and Wallachia. Because of this, the motto of the new order would fit the event: "GENERE ET CORDES FRATRES" ("BROTHERS THROUGH ORIGINS AND FEELINGS"). The obverse of the insignia would bear the numbers "5" and "24", the days of January on which he was elected in Moldova and Wallachia, respectively. However, because Alexandru Ioan Cuza was overthrown in a palace coup, he was unable to actually institute the order, and he therefore awarded the insignia only as a personal present, not as a state decoration. 
Most of the insignia produced for him remained stored in the Royal Palace's dungeons. In April 1877, when Romania gained independence from the Ottoman Empire, the debate regarding the institution of Romanian decorations was revived. Mihail Kogălniceanu, Minister of Foreign Affairs in the Ion Brătianu cabinet, took part in the debates in the Assembly of Deputies regarding the institution of a state decoration. Because the insignia of the "Order of the Union" had already been supplied, it was decided that the shape of the decoration would remain the same, modifying only the domnitor's seal. The motto was also changed, because the old one no longer suited the occasion, to "IN FIDE SALUS" ("IN FAITH IS THE SALVATION"). Regarding the name, Kogălniceanu insisted on "Steaua Dunării" ("The Star of the Danube"). The name "Steaua României" ("The Star of Romania") appeared on May 10, 1877, when the law was voted in Parliament as the first law of sovereign Romania. By Royal Decree (no. 1545/1932), King Carol II changed the order of precedence in the Romanian honours system. As a result, in 1932, the Star of Romania dropped in precedence from second place (where it had been since 1906) to fourth place (after the Order of Carol I and the Order of Ferdinand I). In 1937, it dropped to seventh place. The main shape of the order, the blue repeated cross (also called the "Romanian cross"), was kept, but the rays between the cross's arms were replaced by four heraldic eagles with wings spread, the insignia of King Carol I was placed on the obverse, and the reverse bore the year of its establishment, "1877". The number of persons who could be awarded the Star of Romania was also increased: Knight (Cavaler): 1,000 civilians and 350 military; Officer (Ofițer): 500 civilians and 150 military; Commander (Comandor): 200 civilians and 75 military; Grand Officer (Mare Ofițer): 75 civilians and 25 military; Grand Cross (Mare Cruce): 35 civilians and 10 military. 
In 1938, the order was given a superior rank, called "Clasa I" ("First Class"), between the Grand Officer and Grand Cross ranks, with a maximum of 50 civilians and 15 military personnel. The statutes established by King Carol II were changed by General Ion Antonescu (who became Conducător on 4 September 1940); in general, the rules reverted to the ones used during World War I. The Order of the Star of Romania became the second in the national hierarchy, after the Order of Michael the Brave. Inspired by the German Iron Cross, Ion Antonescu decided that the first three grades of the orders of the Star of Romania and the Crown of Romania with spades (swords), as well as the ribbon of the Medal of Military Virtue, would be awarded for exceptionally brave acts with an oak leaf attached to the ribbon.

After 1948, all existing decorations were outlawed and their wearing was forbidden; in the first years of communism, merely keeping the insignia made one an offender. After many attempts, the National Order "The Star of Romania" was reinstituted in 1998/1999, with a design similar to the one used in 1932, but without the insignia of King Carol I, and with the republican insignia.

Grades

As per Law 29/2000, regarding Romania's national system of decorations, there are currently six grades:

1st Class: Collar (Colan);
2nd Class: Grand Cross (Mare Cruce);
3rd Class: Grand Officer (Mare Ofiţer);
4th Class: Commander (Comandor);
5th Class: Officer (Ofiţer);
6th Class: Knight (Cavaler).
Notable recipients First issue (1877–1948) Ernesto Burzagli Archduke Eugen of Austria (1881) Pratap Singh of Idar (1921) Jan Karcz Aristide Razu (1918) Harry Gideon Wells (1919) Ismail of Johor (1920) Hendrik Pieter Nicolaas Muller (1922) Scarlat Cantacuzino Artur Phleps Edward Rydz-Śmigły Jack Corbu (1930) Stanisław Maczek Amha Selassie of Ethiopia Rudolf Walden Fritz Witt (1942) Martin Unrein Jagatjit Singh of Kapurthala (1935) Walter Staudinger (1942) Ismail of Johor (1942) Walther Wenck (1943) Emmerich Jordan (1944) Samuel C. Cumming Paul de Smet de Naeyer Joseph Maria von Radowitz Second issue (since 1998) Foreign citizens By class 1st Class Collars Abdullah II of Jordan George Abela Valdas Adamkus Albert II of Belgium Albert II, Prince of Monaco Amha Selassie Teoctist Arăpașu Gloria Macapagal Arroyo Traian Băsescu Beatrix of the Netherlands Zine El Abidine Ben Ali Carl XVI Gustaf Jacques Chirac Carlo Azeglio Ciampi Emil Constantinescu Nicolae Corneanu Ion Dragalina Andrzej Duda Elizabeth II Matthew Festing Joachim Gauck Mihai Ghimpu Dalia Grybauskaitė Tarja Halonen Harald V of Norway Ioan Holender François Hollande Ion Iliescu Toomas Hendrik Ilves Jaber Al-Ahmad Al-Sabah Lech Kaczyński Andrej Kiska Thomas Klestil Émile Lahoud Margrethe II of Denmark Sergio Mattarella Michael I of Romania Giorgio Napolitano Nursultan Nazarbayev Josef Šnejdárek Angelo Sodano Konstantinos Stephanopoulos Petar Stoyanov Hamad bin Khalifa Al Thani Gherman Titov Ezer Weizman 2nd Class Grand Crosses Alois Lexa von Aehrenthal Martti Ahtisaari Yıldırım Akbulut Albert I of Belgium Archduke Albrecht, Duke of Teschen Prince Albert Victor, Duke of Clarence and Avondale Alexander III of Russia Alexander of Battenberg Alexandra, Countess of Frederiksborg Prince Alfons of Bavaria Alfred, 2nd Prince of Montenuovo Teoctist Arăpașu Count Kasimir Felix Badeni Ehud Barak Bartholomew I of Constantinople David Beatty, 1st Earl Beatty Kurt Beck Radu Beligan Silvio Berlusconi Andrew Bertie Birendra of 
Nepal Herbert von Bismarck Otto von Bismarck Albrecht von Boeselager Victor, Prince Napoléon Boutros Boutros-Ghali Josip Broz Tito Bernhard von Bülow Ernesto Burzagli Leo von Caprivi Prince Carl, Duke of Västergötland Carol I of Romania Charles, Prince of Wales Christian IX of Denmark Prince Christian of Schleswig-Holstein Doina Cornea Pat Cox Patriarch Diodoros of Jerusalem Bülent Ecevit Edmond de Gaiffier d'Hestroy Edward VII Ernest Louis, Grand Duke of Hesse Ernst I, Duke of Saxe-Altenburg Archduke Eugen of Austria Prince Eugen, Duke of Närke Laurent Fabius Felipe VI of Spain Ferdinand I of Romania Archduke Franz Ferdinand of Austria Franz Joseph I of Austria Frederick VIII of Denmark Frederick I, Duke of Anhalt Frederick III, German Emperor Prince Frederick of Hohenzollern-Sigmaringen Frederick William, Grand Duke of Mecklenburg-Strelitz Frederik, Crown Prince of Denmark Kurt Fricke Prince Friedrich Leopold of Prussia Prince Georg of Bavaria Agenor Maria Gołuchowski Dan Grigore Eremia Grigorescu Wilhelm von Hahnke Ionel Haiduc Tarja Halonen Rafic Hariri Prince Heinrich of Hesse and by Rhine Prince Henry of Prussia (1862–1929) Stefan Hell Henri, Grand Duke of Luxembourg Henrik, Prince Consort of Denmark Jaap de Hoop Scheffer Klaus Iohannis Mugur Isărescu Prince Joachim of Denmark Archduke Joseph Karl of Austria Lionel Jospin Jean-Claude Juncker Ioan Kalinderu Viatcheslav Moshe Kantor Karekin II Karl Anton, Prince of Hohenzollern Hüseyin Kıvrıkoğlu Konstantin of Hohenlohe-Schillingsfürst Aleksey Kuropatkin Aleksander Kwaśniewski Eugeniusz Kwiatkowski Chuan Leekpai Prince Leopold of Bavaria Leopold, Prince of Hohenzollern Liviu Librescu Louis IV, Grand Duke of Hesse Prince Ludwig Ferdinand of Bavaria Archduke Ludwig Viktor of Austria Luís I of Portugal Horia Macellariu Maria Teresa, Grand Duchess of Luxembourg Michael I of Romania Grand Duke Michael Alexandrovich of Russia Louis Michel Milan I of Serbia Helmuth von Moltke the Younger Louis Mountbatten, 1st Earl 
Mountbatten of Burma Hendrik Pieter Nicolaas Muller Valeriu Munteanu (politician) Adrian Năstase Nicholas II of Russia Mariana Nicolesco Olav V of Norway Archduke Otto of Austria (1865–1906) George Emil Palade Queen Paola of Belgium Alexander August Wilhelm von Pape Pedro II of Brazil Maurice Pellé Göran Persson Nicolae Petrescu-Comnen Christian Poncelet Romano Prodi Mozaffar ad-Din Shah Qajar Antoni Wilhelm Radziwiłł Jean-Pierre Raffarin Archduke Rainer Ferdinand of Austria Ioan Rășcanu George Robertson, Baron Robertson of Port Ellen Gil Carlos Rodríguez Iglesias Prince Rudolf of Liechtenstein Rudolf, Crown Prince of Austria Rupprecht, Crown Prince of Bavaria Edward Rydz-Śmigły Gerhard Schröder Wolfgang Schüssel Walter Schwimmer Queen Silvia of Sweden Jagatjit Singh Pratap Singh of Idar Vassilios Skouris Queen Sofía of Spain Edmund Stoiber Jan Syrový Eduard Taaffe, 11th Viscount Taaffe Alfred von Tirpitz Alexandru Todea Ernest Troubridge Charles d'Ursel Victoria, Crown Princess of Sweden Grigore Vieru Charles J. Vopicka Rudolf Walden Georg Wassilko von Serecki Alan Watson, Baron Watson of Richmond Count Hans Weiss William, Prince of Hohenzollern William, Prince of Wied Sergei Witte August zu Eulenburg Adrian Zuckerman (attorney) 3rd Class Grand Officers Dinu Adameșteanu Radu Aldulescu (musician) Ioan Arhip Constantin C. Arion Randolph L. Braham (Resigned) Gheorghe Brega Nicolae Cajal Alexandru Cernat Dietrich von Choltitz Gheorghe Cipăianu Liviu Ciulei Nadia Comăneci Ileana Cotrubaș Nicolae Dăscălescu Constantin Dumitrescu (general) Ivan Fichev Ismail of Johor Lucien Loizeau Marian-Jean Marinescu Lucian Pintilie Constantin Poenaru Dumitru Prunariu Ioan Mihail Racoviță Constantin Sănătescu Hans-Georg von Seidel Alexandru Slătineanu Simion Stoilow Alexandru Tzigara-Samurcaș Gheorghe Vlădescu-Răcoasa Elie Wiesel Arthur Zimmermann Alexandru Zub 4th Class Commanders Robert Aderholt Vasile Atanasiu Grigore Bălan James Berry (surgeon) Ion Boițeanu Randolph L. 
Braham Leonid Brezhnev Karl von Bülow Ronald L. Burgess Jr. Leopold Bürkner Ernesto Burzagli Ion Buzdugan Ben Cardin Nicolae Ciupercă Constantin Constantinescu-Claps Aurel Cosma Lucian Croitoru Salvator Cupcea Mircea Dinescu Eugen Doga Émile Dossin de Saint-Georges Mihai Drăgănescu Wim van Eekelen Ștefan Fălcoianu Nikolaus von Falkenhorst Angela Gheorghiu Hans Globke Maximilian Hacman Orrin Hatch Friedrich-Wilhelm Hauck Francis Howard (British Army officer, born 1848) Dietrich von Hülsen-Haeseler Sergěj Ingr Ron Johnson Hunor Kelemen Gunther Krichbaum Emil Krukowicz-Przedrzymirski Tadeusz Kutrzeba Alexandru Lapedatu Chris Lauzen Wolf Lepenies Charles W. Lyons Stanisław Maczek Solomon Marcus Valeriu Moldovan Vasile Moldoveanu Teodor Negoiță Devin Nunes Artur Phleps Tadeusz Piskor Karl von Plettenberg David Popescu Andrei Rădulescu Aristide Razu Mike Rogers (Alabama politician) Frank Rolleston Marco Rubio Nicolae Samsonovici Gustav von Senden-Bibran Ioanel Sinescu Ilie Șteflea Rudolf Stöger-Steiner von Steinstätten Anastase Stolojan Dejan Subotić Nicolae Tătăranu Rudolf Toussaint Alexandru Vulpe Jackie Walorski Bolesław Wieniawa-Długoszowski 5th Class Officers Paul Alexiu Ilie Antonescu Petre Antonescu (general) Constantion Bădescu Ștefan Balaban Ioan A. Bassarabescu Constantin Brătescu Mihai Ciucă Constantin Climescu Mihail Corbuleanu Dumitru Coroamă Ilie Crețulescu Anton Crihan Constantin Cristescu Nicolae Dabija (soldier) Dumitru Dămăceanu Alexandru Dobriceanu Constantin Doncea Anton Durcovici Constantin Eftimiu Eremia Grigorescu Jan Karcz Radu Korne Dan Lupașcu Raoul Magrin-Vernerey Gheorghe Manoliu Sergiu Niță Alexandru Pastia Oana Pellea Irina Petrescu Artur Phleps Constantin Poenaru Iulian Pop David Praporgescu Nicolae Samsonovici Alexandru Șerbănescu Oleg Serebrian Constantin Tobescu 6th Class Knights Ecaterina Andronescu Gheorghe Avramescu Constantin Bălăceanu-Stolnici Colin Robert Ballard Gelu Barbu Viorel P. 
Barbu Ion Besoiu Marcian Bleahu Mihai Brediceanu Nicolae Cambrea Scarlat Cantacuzino Ion Caramitru Nicolae Ciupercă Dina Cocea Titus Corlățean Pierre de Coubertin Corina Crețu Ioan Culcer Samuel C. Cumming Marțian Dan Neagu Djuvara Valer Dorneanu Mariana Drăgescu Tudor Gheorghe Marcel Guguianu Thomas Hunton Gabriel Liiceanu Leonard Mociulschi Ovidiu Iuliu Moldovan Iulia Motoc Marioara Murărescu Dan Nica Andrei Oișteanu Richard W. O'Neill Gabriel Oprea Octavian Paler Gică Petrescu Teodosie Petrescu Colea Răutu Aristide Razu Mihai Tănăsescu Radu Timofte László Tőkés (Withdrawn) Corneliu Vadim Tudor (Withdrawn)(2004 until 2007, when it was withdrawn) Petre Țuțea Unknown Class Otto Adler Ilham Aliyev Petre Andrei Kofi Annan Gheorghe Arsenescu Giuseppe Arzilli Beatrix of the Netherlands Tarcisio Bertone Bhumibol Adulyadej Josef Bílý Volkan Bozkır Constantin Budișteanu George W. Bush Gheorghe Buzatu Mihail Cămărașu Fernando Henrique Cardoso Aníbal Cavaco Silva Marin Ceaușu Mauro Chiaruzzi Henri Cihoski Jack Corbu Paul de Smet de Naeyer Süleyman Demirel Radko Dimitriev Roman Dmowski Werner Ehrig Eddie Fenech Adami Alberto Fujimori Victor Gomoiu Árpád Göncz Kolinda Grabar-Kitarović Gheorghe Ionescu-Sisești Emmerich Jordan Juan Carlos I Mihail Kogălniceanu Stiliyan Kovachev Milan Kučan Leonid Kuchma Ricardo Lagos Ivan Loiko Mircea Lucescu Petru Lucinschi Ferenc Mádl Leon Malhomme Rexhep Meidani Stjepan Mesić Miron Mitrea Alois Mock Aleksander Piotr Mohl Maria Morganti Dumitru C. 
Moruzi Bolesław Mościcki Zayed bin Sultan Al Nahyan Danail Nikolaev Pietro Parolin Rosen Plevneliev Kazimierz Porębski David Popovici Ștefan Procopiu Roberto Raschi Arnold Rüütel Said Halim Pasha Jorge Sampaio Eustachy Sapieha Marian Sârbu Rudolf Schuster Walter Staudinger Michel Suleiman Jan Szembek (diplomat) Păstorel Teodoreanu Nicolae Timofti Martin Unrein Guy Verhofstadt Vaira Vīķe-Freiberga Matei Vlădescu Harry Gideon Wells Walther Wenck Fritz Witt Valdis Zatlers Ferdynand Zarzycki Ernesto Zedillo

See also

List of military decorations
National Decorations System (Romania)

References

Other sources

Ordinul național "Steaua României", Presidency of Romania website
Recipients of the order (Excel sheet), Presidency of Romania website
Q: AWK script automatically removing leading 0s from String

I have a file BLACK.FUL.eg2:

10>BLACK.FUL>272/GSMA/000000>151006>01
15>004401074905590>004401074905590>B>I>0011>Insert>240/PLMN/000100>>5000-K525122-15
15>004402145955010>004402145955010>B>I>0011>Insert>240/PLMN/000100>>1200-K108534-14
15>004402146016260>004402146016360>B>I>0011>Insert>240/PLMN/000100>>1200-K-94878-14
15>004402452698630>004402452698630>B>I>0011>Insert>240/PLMN/000100>>5000-K538947-14
90>BLACK.FUL>272/GSMA/000000>151006>01>4

I've written this AWK script:

awk 'NR > 2 { print p } { p = $0 }' BLACK.FUL.eg2 | awk -F">" \
'{if (length($2) == 15) print substr($2,1,length($2)-1)","substr($3,1,length($3)-1)","$6","$8; \
else print $2","$3","$6","$8;}' | awk -F"," '{if ($2 == $1) print $1","$3","$4; \
else {if (length($1) > 14) {v = substr($1,9,6); t = substr($2,9,6); \
while(v <= t) print substr($2,1,8)v++substr($2,15,2)","$3","$4;} \
else {d = $1;while(d <= $2) print d++","$3","$4;}}}'

which gives me an output of:

00440107490559,0011,240/PLMN/000100
00440214595501,0011,240/PLMN/000100
440214601626,0011,240/PLMN/000100
440214601627,0011,240/PLMN/000100
440214601628,0011,240/PLMN/000100
440214601629,0011,240/PLMN/000100
440214601630,0011,240/PLMN/000100
440214601631,0011,240/PLMN/000100
440214601632,0011,240/PLMN/000100
440214601633,0011,240/PLMN/000100
440214601634,0011,240/PLMN/000100
440214601635,0011,240/PLMN/000100
440214601636,0011,240/PLMN/000100
00440245269863,0011,240/PLMN/000100

with one problem: the leading 0s of strings in field 1 are automatically getting removed due to a numeric operation on them.
So my actual expected output is:

00440107490559,0011,240/PLMN/000100
00440214595501,0011,240/PLMN/000100
00440214601626,0011,240/PLMN/000100
00440214601627,0011,240/PLMN/000100
00440214601628,0011,240/PLMN/000100
00440214601629,0011,240/PLMN/000100
00440214601630,0011,240/PLMN/000100
00440214601631,0011,240/PLMN/000100
00440214601632,0011,240/PLMN/000100
00440214601633,0011,240/PLMN/000100
00440214601634,0011,240/PLMN/000100
00440214601635,0011,240/PLMN/000100
00440214601636,0011,240/PLMN/000100
00440245269863,0011,240/PLMN/000100

For that I'm trying the below updated AWK script:

awk 'NR > 2 { print p } { p = $0 }' BLACK.FUL.eg2 | awk -F">" \
'{if (length($2) == 15) print substr($2,1,length($2)-1)","substr($3,1,length($3)-1)","$6","$8; \
else print $2","$3","$6","$8;}' | awk -F"," '{if ($2 == $1) print $1","$3","$4; \
else {if (length($1) > 14) {v = substr($1,9,6); t = substr($2,9,6); \
while(v <= t) print substr($2,1,8)v++substr($2,15,2)","$3","$4;} \
else {d = $1; for ( i=1;i<length($1);i++ ) if (substr($1,i++,1) == "0") \
{m=m"0"; else exit 1;}; while(d <= $2) print md++","$3","$4;}}}'

But getting an error:

awk: cmd. line:4: {m=m"0"; else exit 1;}; while(d <= $2) print md++","$3","$4;}}}
awk: cmd. line:4: ^ syntax error

Can you please highlight what I'm doing wrong to achieve the expected output? Modification only to my already existing AWK script will be of much help. Thanks.

NOTE: The leading 0s can be of any number of occurrences, not only two 0s in every case as in the above example outputs.

A: Since your field sizes are fixed, for the given example just change the last print statement to

$ awk ...
printf "%014d,%s,%s\n",d++,$3,$4}}}'
00440107490559,0011,240/PLMN/000100
00440214595501,0011,240/PLMN/000100
00440214601626,0011,240/PLMN/000100
00440214601627,0011,240/PLMN/000100
00440214601628,0011,240/PLMN/000100
00440214601629,0011,240/PLMN/000100
00440214601630,0011,240/PLMN/000100
00440214601631,0011,240/PLMN/000100
00440214601632,0011,240/PLMN/000100
00440214601633,0011,240/PLMN/000100
00440214601634,0011,240/PLMN/000100
00440214601635,0011,240/PLMN/000100
00440214601636,0011,240/PLMN/000100
00440245269863,0011,240/PLMN/000100

UPDATE

If your field size is not fixed, you can capture the length (or desired length) and use the same pattern. Since your code is too complicated, I'm going to write a proof of concept which you can embed into your script.

This is essentially your problem: increment a zero-padded number and the leading zeros are dropped.

$ echo 0001 | awk '{$1++; print $1}'
2

This is the proposed solution, with parametric length and zero padding (note the "d" conversion: the 0 flag only zero-pads numeric conversions, so "%0"n"s" would space-pad in most awks):

$ echo 0001 | awk '{n=length($1); $1++; printf "%0"n"d\n", $1}'
0002
\section{Introduction} \label{Section:Intro} Ad hoc networks comprise mobiles that communicate without centralized control or a pre-existing infrastructure. The preferred channel access for ad hoc networks is direct-sequence or frequency-hopping (FH) spread spectrum. This paper focuses specifically on frequency-hopping spread spectrum ad hoc networks. Such networks are characterized by independent, identical, FH radios that share the same carriers and frequency channels, and are nearly stationary in location over a single hop duration. The first part of this paper is concerned with the analysis of the outage probability of FH networks, where outage probability is the probability that the signal-to-noise-and-interference ratio (SINR) falls below a predetermined threshold. By limiting the fading to be slow Rayleigh fading and excluding shadowing, the paper first presents an exact closed-form expression for the outage probability conditioned on the locations of the interferers. The interferers are assumed to be uniformly distributed in an annular area, where the inner radius is a minimum interferer distance that could be imposed by an interference-avoidance protocol \cite{hasan:2007}, such as carrier-sense multiple access, and the outer radius is the maximum distance set by the network's geographic footprint. By averaging over the uniform locations of the interferers, the spatially averaged outage probability is obtained in closed form. A distinguishing feature of this paper is that it considers networks of limited area, in contrast with the current popular literature, which typically assumes networks of infinite extent (e.g., \cite{andrews:2010}, \cite{win:2009}). The number of mobiles in the network may be either fixed or random. Initially, a fixed number of mobiles is assumed, in which case the mobile locations are a realization of a binomial point process (BPP). 
Next, it is assumed that the number of mobiles is Poisson distributed, in which case the mobile locations are a realization of a Poisson point process (PPP). Considering a PPP allows us to obtain results that are consistent with the current popular literature, and in fact, our results coincide with the infinite-network results of \cite{baccelli:2006, Linnartz:1992, Zorzi:1995} when we let the network boundary extend to infinity. However, the BPP results are of practical interest because of limitations to the PPP model. The most significant limitation to the PPP model is that it allows an unbounded number of users, which is not possible in a finite network. However, the PPP has been favored in the literature because it enables the use of Campbell's theorem \cite{stoyan:1996}, which often leads to tractable mathematical expressions that vastly simplify the performance analysis. Having found the spatially averaged outage probability under the BPP and PPP models, the paper next derives closed-form expressions for the {\em transmission capacity} \cite{weber:2005}, which is the spatial spectral efficiency; i.e., the rate of successful transmissions per Hz and $m^2$. We propose a modification to the transmission capacity metric of \cite{weber:2005} that accounts for modulation and coding constraints. The utility of the modulation-constrained transmission capacity is that it can be used to optimize the main parameters that influence the network's performance. The network's performance depends on several parameters related to the choice of modulation and coding, and also depends on the number of hopping channels. It is assumed that the system uses noncoherent binary continuous-phase frequency-shift keying (CPFSK) modulation, which is the most common choice of modulation for FH systems \cite{cheng:ciss2007}. The main parameter associated with binary CPFSK is the {\em modulation index}, which characterizes the relative separation between the two tones. 
It is furthermore assumed that the system uses a capacity-approaching code (e.g., turbo or LDPC), which allows the achievable performance to be characterized by the capacity of the system, under constraints of the modulation and noncoherent detection technique. Under the assumption of coded noncoherent CPFSK, the performance of the network is a function of three parameters: the code rate, the modulation index, and the number of hopping channels. By using the modulation-constrained transmission capacity as the objective function, the paper optimizes the network with respect to these three parameters. Initially, a brute-force exhaustive optimization is proposed that optimizes over a wide range of discretized parameters. Because the results of the exhaustive optimization suggest that the optimization problem is convex, a gradient search algorithm \cite{boyd:2004} is proposed that offers a good tradeoff between accuracy and efficiency. The main contributions of this paper are (1) the closed-form expressions for conditional outage probability and spatially averaged outage probability in the presence of finite-area networks with mobiles drawn from both a BPP and a PPP, (2) the development of {\em modulation constrained} transmission capacity as a performance metric, and (3) a method for optimizing the parameters associated with the ad hoc network. The methodology presented in this paper presents a new approach to the analysis and optimization of finite ad hoc networks and presents fresh insight into the tradeoffs among the number of frequency-hopping channels, the modulation index, and the code rate in a frequency-hopping network. \section{Network Model} \label{Section:SystemModel} The network comprises $M+2$ mobiles that include a reference receiver, a reference transmitter $X_{0}$, and $M$ interfering transmitters $X_{1},...,X_{M}.$ The coordinate system is selected such that the receiving mobile $X_0$ is at the origin. 
The variable $X_{i}$ represents both the $i^{th}$ mobile and its location, and $||X_{i}||$ is the distance from $X_i$ to the receiving mobile. While the interferers can be located in any arbitrary region, we assume they are located in an annular region with inner radius $r_{ex}$ and outer radius $r_{net}$. A nonzero $r_{ex}$ may be used to model the effects of interference-avoidance protocols \cite{hasan:2007}. In particular, a nonzero $r_{ex}$ models an exclusion zone placed around the receiver, which can be realized by having the receiver send a short clear-to-send (CTS) packet in response to a request-to-send (RTS) packet sent by the transmitter. Under a carrier-sense multiple-access (CSMA) protocol, mobiles within distance $r_{ex}$ from the receiver that overhear the CTS will suppress their transmission. $X_{i}$ transmits a signal whose average received power in the absence of fading is $P_{i}$ at a reference distance $r_{0}$. At the receiving mobile, $X_i$'s power is \begin{eqnarray} \rho_i & = & P_i g_i f( ||X_i|| ) \label{eqn:power} \end{eqnarray} where $g_i$ is the power gain due to fading, and $f( ||X_i|| )$ is a path-loss function. Each $g_i = a_i^2$, where $a_i$ is Rayleigh and $g_i$ unit-mean exponential, i.e. $g_i \sim \mathcal E(1)$. \ For $r\geq r_{0}$, the path-loss function is expressed as the attenuation power law: \begin{eqnarray} f \left( r \right) & = & \left( \frac{r}{r_0} \right)^{-\alpha} \label{eqn:pathloss} \end{eqnarray} where $\alpha \geq 2$ is the attenuation power-law exponent and $r_0$ is sufficiently large so that the signals are in the far field. Channel access is through a synchronous frequency-hopping protocol. The hopping is slow, with multiple symbols per hop, which is a more suitable strategy for ad hoc networks than fast hopping \cite{torrieri:2011}. An overall frequency band of $B$ Hz is divided into $L$ frequency channels, each of bandwidth $B/L$ Hz. 
The transmitters independently select their transmit frequencies with equal probability. Let $p_i$ denote the probability that interferer $X_i$ selects the same frequency as the source. Let $d_i \leq 1$ be the duty factor of the interferer. It follows that $p_i=d_i/L$ and that using a duty factor less than unity is equivalent to hopping over more than $L$ frequencies \cite{torrieri:2011}. Assuming that $d_i=d$ for all interferers, $L'=L/d$ denotes the {\em equivalent} number of frequency channels. It is assumed that the \{$g_{i}\}$ remain fixed for the duration of a hop, but vary independently from hop to hop (block fading). While the $\{g_{i}\}$ are independent from user to user, they are not necessarily identically distributed. The instantaneous SINR at the receiving mobile is \begin{eqnarray} \gamma & = & \frac{ \rho_0 }{ \displaystyle {\mathcal N} + \sum_{i=1}^{M} I_i \rho_i } \label{Equation:SINR1} \end{eqnarray} where $\mathcal N$ is the noise power and $I_i$ is a variable that indicates the presence and type of interference (i.e. co-channel interference or adjacent-channel interference). When adjacent-channel interference \cite{torrieri:2011} is neglected, $I_i=1$ when $X_i$ selects the same frequency as $X_0$, and $I_i=0$ otherwise. It follows that $I_i$ is Bernoulli with probability $P[I_i=1]=p_i$. Substituting (\ref{eqn:power}) and (\ref{eqn:pathloss}) into (\ref{Equation:SINR1}), the SINR is \begin{eqnarray} \gamma & = & \frac{ g_0 \Omega_0^{-1} }{ \displaystyle \Gamma^{-1} + \sum_{i=1}^M I_i g_i \Omega_i^{-1} } \label{Equation:SINR2} \end{eqnarray} where $\Gamma = r_0^\alpha P_{0}/\mathcal{N}$ is the signal-to-noise ratio (SNR) when the transmitter is at unit distance and fading is absent, $\Omega_i = (P_0/P_i)||X_i||^{\alpha}$ is the inverse normalized power of $X_i$ at the receiver, and $\Omega_0 = ||X_0||^{\alpha}$. 
Without loss of generality\footnote{Changing $||X_0||$ is equivalent to a scaling of $r_{ex}$ and $r_{net}$.}, we assume that $||X_0|| = 1$ for the remainder of this paper. Furthermore, all examples and numerical results in this paper assume that all mobiles transmit with the same power, i.e. $P_i = P_0$ for all $i$. \section{Conditional Outage Probability} \label{Section:Outage} Let $\beta$ denote the minimum SINR required for reliable reception and $\boldsymbol{\Omega }=\{\Omega_{0},...,\Omega _{M}\}$ represent the set of inverse normalized powers. An \emph{outage} occurs when the SINR falls below $\beta$. Conditioning on $\boldsymbol{\Omega }$, the outage probability is \begin{eqnarray} \epsilon_{\Omega} & = & P \left[ \gamma \leq \beta \big| \boldsymbol \Omega \right]. \label{Equation:Outage1} \end{eqnarray} Because it is conditioned on $\boldsymbol{\Omega }$, the outage probability depends on the particular network geometry, which has dynamics over timescales that are much slower than the fading. By defining a variable \vspace{-0.5cm} \begin{eqnarray} \mathsf Z_{M} & = & \beta^{-1} g_0 \Omega_0^{-1} - \sum_{i=1}^M g_i I_i \Omega_i^{-1} \label{eqn:z} \end{eqnarray} the conditional outage probability may be expressed as \begin{eqnarray} \epsilon_{\Omega} & = & P \left[ \mathsf Z_{M} \leq \Gamma^{-1} \big| \boldsymbol \Omega \right] = F_{\mathsf Z_{M}} \left( \Gamma^{-1} \big| \boldsymbol \Omega \right) \label{Equation:OutageCDF} \end{eqnarray} which is the cumulative distribution function (cdf) of $\mathsf Z_M$ conditioned on $\boldsymbol \Omega$ and evaluated at $\Gamma^{-1}$. By defining $\mathsf S = \beta^{-1} \Omega_0^{-1} g_0$ and $\mathsf Y_i =I_i g_i \Omega_i^{-1}$, (\ref{eqn:z}) may be rewritten as \vspace{-0.5cm} \begin{eqnarray} \mathsf Z_{M} & = & \mathsf S - \sum_{i=1}^M \mathsf Y_i. \end{eqnarray} where $\mathsf S \sim {\mathcal E}( \beta \Omega_0 )$. 
The cdf of $\mathsf S$ is \begin{eqnarray} F_{\mathsf S}(y) & = & \left( 1 - e^{-\beta \Omega_0 y} \right) u(y) \label{cdf} \end{eqnarray} where $u(y)$ is the unit-step function. Taking into account the Rayleigh fading and Bernoulli $\{I_i\}$, the pdf of ${\mathsf Y_i}$ is \begin{eqnarray} f_{\mathsf Y_i}(y) & = & (1-p) \delta(y) + p_i \Omega_i e^{- \Omega_i y } u(y) \label{pdf} \end{eqnarray} where $\delta(y)$ is the Dirac delta function. First consider the single-interferer case ($M=1$). The cdf of $\mathsf Z_1$ is expressed as \vspace{-0.25cm} \begin{eqnarray} F_{\mathsf Z_1}(z \big| \boldsymbol \Omega) & = & \int_{0}^{\infty}F_{\mathsf S}(z+y)f_{\mathsf Y_1}(y) dy. \label{op3} \end{eqnarray} Substituting (\ref{cdf}) and (\ref{pdf}) into (\ref{op3}) yields \begin{eqnarray} F_{\mathsf Z_1}(z \big| \boldsymbol \Omega) & = & \int_{0 }^{\infty} \left[ 1 - e^{-\beta \Omega_0(z + y) } \right] f_{\mathsf Y_1}(y) dy \nonumber \\ & = & 1 - e^{-\beta \Omega_0 z } \left( \frac{(1-p_1)\beta \Omega_0 + \Omega_1 }{\beta \Omega_0 +\Omega_1} \right) \label{op4} \end{eqnarray} for $z \geq 0$. Using the fact that $\mathsf Z_{M} = \mathsf Z_{M-1} - \mathsf Y_M$, and working iteratively \vspace{-0.25cm} \begin{multline} F_{\mathsf Z_M}(z \big|\boldsymbol \Omega) = 1 - e^{-\beta \Omega_0 z } \prod_{i=1}^M \left[ \frac{(1-p_i)\beta \Omega_0 +\Omega_i}{\beta \Omega_0+\Omega_i} \right] \label{op7} \end{multline} for $z \geq 0$. The outage probability is found by substituting $z=\Gamma^{-1}$ into the above expression. \begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/Fig1} \vspace{-0.5cm} \caption{Conditional outage probability $\epsilon_{\Omega}$ as a function of SNR $\Gamma$. Analytical curves are solid, while dots represent simulated values. Top curve: $\beta=10$ dB. Middle curve: $\beta=0$ dB. Bottom curve: $\beta=-10$ dB. The network geometry is shown in the inset. 
The receiving mobile is represented by the five-pointed star at the center of the network, the desired transmitting mobile by the six-pointed star immediately above the transmitter, and the 50 interferers are shown as dots. \label{Figure:Example1} } \vspace{-0.5cm} \end{figure} {\bf Example \#1:} Consider a specific network topology with the transmitting mobile placed one unit North of the receiving mobile, and fifty interferers arbitrarily placed in an annular region with outer radius $r_{net}=2$ and inner radius $r_{ex}=0.25$. The resulting network is shown in the inset of Fig. \ref{Figure:Example1}. The $\boldsymbol \Omega$ was determined by assuming a path-loss exponent $\alpha = 3$ and a common transmit power $P_i=P_0$. The equivalent number of frequency channels was set to $L'=200$. Fig. \ref{Figure:Example1} shows the outage probability as a function of the SNR $\Gamma$, computed at each SNR point by evaluating (\ref{op7}) at $z=\Gamma^{-1}$. Three cases were considered for the SINR threshold: $\beta= -10$ dB, $\beta= 0$ dB and $\beta= 10$ dB. Also shown are results generated by simulation, which involved randomly generating the exponentially-distributed $\{g_i\}$. The analytical and simulation results coincide, which is what is to be expected because (\ref{op7}) is exact. Any discrepancy between the curves can be attributed to the finite number of Monte Carlo trials (one million trials were executed per SNR point). \section{Outage of a BPP} \label{Section:BPP} Because it is conditioned on ${\boldsymbol \Omega}$, the outage probability $\epsilon_\Omega$ presented in the last section depends on the geometry of the particular network, i.e., the location of the interferers. The conditioning on ${\boldsymbol \Omega}$ can be removed by averaging ${F}_{\mathsf Z_M}(z|\boldsymbol \Omega)$ over the spatial distribution of the network. In a BPP, a fixed number $M$ of mobiles are independently and uniformly distributed over the network. 
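Before averaging over random geometries, the closed-form conditional result (\ref{op7}) can be verified for any fixed topology in a few lines. The following sketch (Python; the distances, SNR, threshold, and $L'$ are hypothetical illustration values, with $\Omega_0 = 1$ and equal transmit powers as assumed above) evaluates (\ref{op7}) at $z = \Gamma^{-1}$ and compares it against a direct Monte Carlo simulation of the SINR in (\ref{Equation:SINR2}):

```python
import math
import random

def outage_closed_form(Gamma_dB, beta_dB, dists, alpha=3.0, L_eq=5):
    # Eq. (op7) at z = 1/Gamma, with Omega_0 = 1 and P_i = P_0:
    # eps = 1 - exp(-beta/Gamma) * prod_i [((1-p)*beta + Omega_i) / (beta + Omega_i)]
    Gamma = 10.0 ** (Gamma_dB / 10.0)
    beta = 10.0 ** (beta_dB / 10.0)
    p = 1.0 / L_eq
    prod = 1.0
    for r in dists:
        Om = r ** alpha                     # Omega_i = ||X_i||^alpha
        prod *= ((1.0 - p) * beta + Om) / (beta + Om)
    return 1.0 - math.exp(-beta / Gamma) * prod

def outage_monte_carlo(Gamma_dB, beta_dB, dists, alpha=3.0, L_eq=5,
                       trials=200000, seed=1):
    # Draw Rayleigh power gains g ~ Exp(1) and Bernoulli(1/L') collisions,
    # then count SINR <= beta events per (Equation:SINR2).
    rng = random.Random(seed)
    Gamma = 10.0 ** (Gamma_dB / 10.0)
    beta = 10.0 ** (beta_dB / 10.0)
    p = 1.0 / L_eq
    inv_Om = [r ** -alpha for r in dists]
    fails = 0
    for _ in range(trials):
        g0 = rng.expovariate(1.0)
        interference = sum(io * rng.expovariate(1.0)
                           for io in inv_Om if rng.random() < p)
        if g0 / (1.0 / Gamma + interference) <= beta:
            fails += 1
    return fails / trials
```

With $\Gamma = 10$ dB, $\beta = 0$ dB, $L' = 5$, and three interferers at distances $\{0.5, 1.0, 1.5\}$, the closed form gives $\epsilon_\Omega \approx 0.361$, and the simulated estimate agrees to within Monte Carlo noise.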
Let $\epsilon_M$ be the spatially averaged outage probability when the interferers are drawn from a BPP, which is found by taking the expectation of ${F}_{\mathsf Z_M}(z|\boldsymbol \Omega)$ with respect to ${\boldsymbol \Omega}$: \begin{eqnarray} \epsilon_M &=& E_{\boldsymbol \Omega} \left[ \epsilon_\Omega \right] = E \left[ {F}_{\mathsf Z_M}\left( \Gamma^{-1} \big| \boldsymbol \Omega \right) \right] = {F}_{\mathsf Z_M}\left( \Gamma^{-1} \right). \nonumber \\ \end{eqnarray} In the above equation, ${F}_{\mathsf Z_M}\left( z \right)$ is the cdf of $\mathsf Z_M$ averaged over the spatial distribution, which can be found analytically or computed through Monte Carlo simulation. To compute it via Monte Carlo simulation, generate a large number $N$ of networks, each containing $M$ interferers drawn from a BPP. Compute the ${F}_{\mathsf Z_M}(z|\boldsymbol \Omega)$ for each network by using the method outlined in Section III, and average over the $N$ networks. Letting $\boldsymbol \Omega_n$ be normalized inverse power coefficients of the $n^{th}$ randomly generated network, the Monte Carlo estimate of the cdf is \begin{eqnarray} {F}_{\mathsf Z_M}(z) & = & \frac{1}{N} \sum_{n=1}^N {F}_{\mathsf Z_M}(z|\boldsymbol \Omega_n).\label{Equation:MC} \end{eqnarray} Note that the Monte Carlo simulation only requires the realization of the interferer locations, and does not require the realization of the fading coefficients. For a BPP constrained to an annular region with inner radius $r_{ex}$ and outer radius $r_{net}$, the spatial coordinates can be represented as the complex value $X_i=r_i e^{j \theta_i}$. The location can then be realized by drawing two independent numbers $x_{1,i}$ and $x_{2,i}$ from the uniform distribution over $\left[\left(\frac{r_{ex}}{r_{net}}\right)^2,1 \right]$ and $[0,1]$ respectively and then setting $r_i= r_{net} \sqrt{x_{1,i}}$ and $\theta_i=2 \pi x_{2,i}$. 
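The sampling procedure just described, combined with the conditional result of Section III, gives the semi-analytic estimate (\ref{Equation:MC}) directly. A sketch follows (Python; all parameter values are hypothetical). The first routine averages the closed-form conditional outage over $N$ random BPP networks; the second, included purely as a cross-check, also draws the fading and the hopping in every trial:

```python
import math
import random

def sample_radii(M, r_ex, r_net, rng):
    # BPP on the annulus: r = r_net*sqrt(x1) with x1 ~ U[(r_ex/r_net)^2, 1].
    # The angle theta = 2*pi*x2 never enters Omega_i = r^alpha, so it is skipped.
    lo = (r_ex / r_net) ** 2
    return [r_net * math.sqrt(rng.uniform(lo, 1.0)) for _ in range(M)]

def cond_outage(Gamma, beta, p, alpha, radii):
    # Closed-form conditional outage of Section III (Omega_0 = 1, P_i = P_0).
    prod = 1.0
    for r in radii:
        Om = r ** alpha
        prod *= ((1.0 - p) * beta + Om) / (beta + Om)
    return 1.0 - math.exp(-beta / Gamma) * prod

def eps_semi_analytic(M=10, r_ex=0.25, r_net=2.0, alpha=3.0, L_eq=10,
                      Gamma_dB=10.0, beta_dB=0.0, N=4000, seed=1):
    # Eq. (Equation:MC): average the conditional cdf over N random networks;
    # no fading realizations are needed.
    rng = random.Random(seed)
    Gamma, beta, p = 10 ** (Gamma_dB / 10), 10 ** (beta_dB / 10), 1.0 / L_eq
    return sum(cond_outage(Gamma, beta, p, alpha,
                           sample_radii(M, r_ex, r_net, rng))
               for _ in range(N)) / N

def eps_brute_force(M=10, r_ex=0.25, r_net=2.0, alpha=3.0, L_eq=10,
                    Gamma_dB=10.0, beta_dB=0.0, trials=100000, seed=2):
    # Cross-check: draw positions, fading, and hopping in every trial.
    rng = random.Random(seed)
    Gamma, beta, p = 10 ** (Gamma_dB / 10), 10 ** (beta_dB / 10), 1.0 / L_eq
    fails = 0
    for _ in range(trials):
        interference = sum(rng.expovariate(1.0) * r ** -alpha
                           for r in sample_radii(M, r_ex, r_net, rng)
                           if rng.random() < p)
        if rng.expovariate(1.0) / (1.0 / Gamma + interference) <= beta:
            fails += 1
    return fails / trials
```

The two estimates agree, while the semi-analytic version needs roughly an order of magnitude fewer random draws, illustrating the remark above that only the interferer locations must be realized.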
To avoid the computational burden of a Monte Carlo simulation, a closed-form expression for the spatially averaged outage probability is preferred. Since $||X_i||=r_i$ and $\Omega_i=||X_i||^\alpha$, it follows that the normalized inverse power of the $i^{th}$ interferer is \begin{eqnarray} \Omega_i &=& \left( \sqrt{x_{1,i}} r_{net} \right)^{\alpha}. \end{eqnarray} The pdf of $\Omega_i$ is \begin{eqnarray} f_{\Omega_i}(\omega) &=& \frac{2}{\alpha} \omega^{\frac{2-\alpha}{\alpha}} \left( r_{net}^2- r_{ex}^2 \right)^{-1} \label{pdf_omega} \end{eqnarray} for $r_{ex}^{\alpha} \leq \omega \leq r_{net}^\alpha$, and zero otherwise. The spatially averaged outage probability can now be obtained using (\ref{op7}) and ($\ref{pdf_omega}$) as follows: \begin{eqnarray} F_{\mathsf Z_M}(z) &=& \int f_{\boldsymbol \Omega}(\boldsymbol \omega) F_{\mathsf Z_M}(z \big| \boldsymbol \omega) d\boldsymbol \omega \label{cdf_M} \end{eqnarray} where the $M$-fold integral is over the joint pdf of $\{\Omega_1, ..., \Omega_M\}$. 
Substituting (\ref{op7}) and ($\ref{pdf_omega}$) into ($\ref{cdf_M}$) and using the fact that the $\{\Omega_i\}$ are independent yields: \begin{multline} F_{\mathsf Z_M}(z) = 1 - e^{-\beta \Omega_0 z } \prod_{i=1}^M \frac{2}{\alpha} \left( r_{net}^2- r_{ex}^2 \right)^{-1} \\ \int_{r_{ex}^{\alpha} }^{r_{net}^{\alpha} } \omega^{\frac{2-\alpha}{\alpha}} \left( \frac{(1-p_i)\beta \Omega_0 +\omega}{\beta \Omega_0+\omega} \right) d\omega \label{cdf_M_1} \end{multline} Evaluating the integral and assuming $p_i=p$ for all users results in: \begin{eqnarray} F_{\mathsf Z_M}(z)= 1 - e^{-\beta_0 z } \kappa^M \left \{ \Psi \left( r_{net}^{\alpha} \right) - \Psi \left( r_{ex}^{\alpha} \right) \right \}^M \label{cdf_BPP} \end{eqnarray} where $\beta_0= \beta \Omega_0$, \begin{eqnarray} \kappa & = & \left( r_{net}^2- r_{ex}^2 \right)^{-1} \label{kappa} \end{eqnarray} \begin{multline} \Psi(x) = x^{\frac{2}{\alpha}} \cdot (1-p) + \frac{2 \cdot p }{\alpha+2} \cdot \frac{x^{\frac{2+\alpha}{\alpha}}}{\beta_0} \\ \times {_2F}_1\left( \left[1, \frac{\alpha+2}{\alpha}\right]; \frac{2\alpha+2}{\alpha}, -\frac{x}{\beta_0}\right) \label{Psi} \end{multline} and $_2F_1([a,b];c,x)$ is the Gauss hypergeometric function given by \cite{Abramowitz:1965} \vspace{-0.5cm} \begin{multline} _2F_1 ([a,b];c;x) = \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \\ \times \int_{0 }^{1} \nu^{b-1}(1-\nu)^{c-b-1}(1-\nu x)^{-a}d \nu \end{multline} \vspace{-0.25cm} where \vspace{-0.25cm} \begin{eqnarray} \Gamma(z) & = & \int_{0 }^{\infty} t^{z-1} e^{-t} dt. \end{eqnarray} While the results in this section are for an annular network centered upon the reference receiver, we note that other network shapes can be accommodated by determining the appropriate pdf of the $\Omega_i$ and substituting into (\ref{cdf_M}). Furthermore, shadowing can be accommodated by using appropriately defined $f_{\Omega_i}(\omega)$. 
\begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/Fig2} \vspace{-0.5cm} \caption{Outage probability $\epsilon_M$ as a function of $M$ for five values of $L'$ for networks with $M$ interfering mobiles drawn from a BPP. The SINR threshold is $\beta = 3.7$ dB, the SNR is set to $\Gamma = 10$ dB, and the other parameters are identical to those used to generate Fig. \ref{Figure:Example1}. Analytical curves are solid, while dots represent simulated values. \label{Figure:Example2} } \vspace{-0.5cm} \end{figure} {\bf Example \#2:} Reconsider Example \#1, but now instead of the network assuming the specific topology shown in Fig. \ref{Figure:Example1}, let the $M$ interferers be placed according to a BPP. By using (\ref{cdf_BPP}), the spatially averaged outage probability can be found. Fig. \ref{Figure:Example2} shows the outage probability as a function of $M$ for five values of $L'$ when the SINR threshold is set to $\beta = 3.7$ dB and the SNR is set to $\Gamma = 10$ dB. The values of $r_{ex}$, $r_{net}$, and $\alpha$ are the same as in Example \#1. The solid curves show the spatially averaged outage probability evaluated analytically, i.e., by using (\ref{cdf_BPP}), while the dots show the probability found by Monte Carlo averaging, i.e., by using (\ref{Equation:MC}) with $N=10 \, 000$ randomly generated networks. Because (\ref{cdf_BPP}) is exact and $N$ is large, the analytical and simulation results coincide. From Fig. \ref{Figure:Example2} it is observed that the outage probability increases with increasing $M$ and decreasing $L'$. \begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/EM_Beta} \vspace{-0.5cm} \caption{Outage probability $\epsilon_M$ as a function of $\beta$ for three values of $\Gamma$ for an annular network area with outer radius $r_{net}=2$ and inner radius $r_{ex}=0.25$. A fixed number of interfering mobiles ($M = 50$) is drawn from a BPP and the path-loss exponent is fixed at $\alpha=3$.
Analytical curves are solid, while dots represent simulated values. \label{Figure:Example3} } \vspace{-0.5cm} \end{figure} {\bf Example \#3:} The dependence on $\beta$ and $\Gamma$ is investigated in Fig. \ref{Figure:Example3}. The values of $r_{ex}$, $r_{net}$, and $\alpha$ are the same as in Example \#1. The number of interferers is set to $M=50$ and the equivalent number of frequency channels is set to $L' = 200$. As in Example \#2, the spatially averaged outage probability is computed analytically and through simulation. In particular, the solid curves show the spatially averaged outage probability evaluated using (\ref{cdf_BPP}), while the dots show the probability found by Monte Carlo averaging with $N=10 \, 000$ randomly generated networks. The outage probability is shown as a function of the SINR threshold $\beta$ for three values of $\Gamma$. Again the analytical curves coincide with the simulation results. From Fig. \ref{Figure:Example3}, it is observed that the outage probability increases with increasing $\beta$ and decreases with increasing $\Gamma$. \section{Outage of a PPP} \label{Section:Poisson Point Process in a Finite Network} Suppose that the network now has a variable number of interferers $M$. Let $p_M(m)$ indicate the probability mass function (pmf) of $M$. Let $F_{\mathsf Z_m}(z)$ be the cdf of $\mathsf Z_m$ when there are $m$ interferers drawn from a BPP. It follows that the spatially averaged cdf for the variable-sized network is \begin{eqnarray} F_{\mathsf Z}(z) & = & \sum_{ m=0}^{ \infty } p_{M}(m) F_{\mathsf Z_m}(z) \label{cdf_PPP} \end{eqnarray} and the outage probability averaged over a spatial distribution with a variable number of interferers is $\epsilon = E[ \epsilon_M ] = F_{\mathsf Z}\left( \Gamma^{-1} \right)$, where the expectation is with respect to the distribution of $M$ as given by (\ref{cdf_PPP}).
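The mixture (\ref{cdf_PPP}) can be checked numerically. In the illustrative sketch below (not the authors' code), the BPP term is taken in the form $F_{\mathsf Z_m}(z) = 1 - e^{-\beta_0 z} q^m$ with $q = \kappa[\Psi(r_{net}^\alpha)-\Psi(r_{ex}^\alpha)]$ treated as a precomputed constant; the truncated sum is compared against the collapsed exponential form that the series identity in the sequel produces.

```python
import math

def cdf_mixture(z, beta0, q, mean_m, terms=200):
    """Truncated Poisson mixture of BPP cdfs, eq. (cdf_PPP):
    F_Z(z) = sum_m p_M(m) * (1 - exp(-beta0*z) * q**m),
    with M ~ Poisson(mean_m)."""
    s = 0.0
    pmf = math.exp(-mean_m)            # p_M(0); updated iteratively
    for m in range(terms):
        s += pmf * (1.0 - math.exp(-beta0 * z) * q ** m)
        pmf *= mean_m / (m + 1)        # p_M(m+1) = p_M(m)*mean_m/(m+1)
    return s

def cdf_mixture_closed(z, beta0, q, mean_m):
    """Closed form after summing the series:
    F_Z(z) = 1 - exp(-beta0*z - mean_m*(1 - q))."""
    return 1.0 - math.exp(-beta0 * z - mean_m * (1.0 - q))
```

With $\mathrm{mean\_m} = \lambda A = \pi\lambda(r_{net}^2 - r_{ex}^2)$, the closed form is exactly the expression derived next, so no truncation of the summation is needed.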
We note that the $\{ F_{\mathsf Z_m}(z) \}$ in (\ref{cdf_PPP}) may be obtained through simulation using (\ref{Equation:MC}), in which case the summation in (\ref{cdf_PPP}) must be truncated. Alternatively, the analytical expression given by (\ref{cdf_BPP}) may be used, which, as will be shown below, does not require truncation of the summation. When the spatial distribution is a PPP, the distribution of $M$ is Poisson with mean $\lambda A$, where $\lambda$ is the density of the points per unit area and $A$ is the area over which the points are distributed. For a PPP of density $\lambda$, the number of interfering mobiles $M$ within area $A$ has pmf \vspace{-0.2cm} \begin{eqnarray} p_{M}(m) & = & \frac{(\lambda A)^{m}}{m!}e^{-\lambda A} \label{pmf_Poisson} \end{eqnarray} for $m\geq 0$. \noindent Substituting ($\ref{cdf_BPP}$) and ($\ref{pmf_Poisson}$) into ($\ref{cdf_PPP}$) yields: \vspace{-0.25cm} \begin{multline} F_{\mathsf Z}(z)= e^{-\lambda A}\sum_{ m=0}^{ \infty } \frac{(\lambda A)^{m}}{m!} \\ \left\{ 1 - e^{-\beta_0 z } \left \{ \kappa \cdot \left[ \Psi \left( r_{net}^{\alpha} \right) - \Psi \left( r_{ex}^{\alpha} \right)\right] \right \}^m \right\} \label{cdf_PPP_2} \end{multline} where $\kappa$ and $\Psi(x)$ are given by ($\ref{kappa}$) and ($\ref{Psi}$), respectively. By using the identity \cite{Gradshteyn:2007} \vspace{-0.2cm} \begin{eqnarray} \sum_{ m=0}^{ \infty } \frac{a^m}{m!}\left( 1-c \cdot b^m \right)= e^a-c \cdot e^{a \cdot b} \end{eqnarray} and $A = \pi \left( r_{net}^2 - r_{ex}^2 \right )$, (\ref{cdf_PPP_2}) may be expressed as \begin{eqnarray} F_{\mathsf Z}(z) & = & 1 - \exp \left\{ -\beta_0 z - \pi \lambda \left( r_{net}^2 - r_{ex}^2 \right) \right. \nonumber \\ & & \left. \times \left ( 1- \kappa \left[ \Psi \left( r_{net}^{\alpha} \right) - \Psi \left( r_{ex}^{\alpha} \right)\right] \right ) \right\}. \label{PPPNN} \end{eqnarray} The expression given in (\ref{PPPNN}) generalizes an earlier expression given in Baccelli et al.
\cite{baccelli:2006} for the outage probability of an infinite network ($r_{net} \rightarrow \infty$) with no exclusion zone ($r_{ex}=0$) and constantly transmitting mobiles ($p=1$). To see this, set $p=1$ and $r_{ex}=0$ in (\ref{PPPNN}) and take the limit as $r_{net} \rightarrow \infty$, \vspace{-0.3cm} \begin{multline} F_{\mathsf Z}(z) = \lim_{r_{net} \rightarrow \infty } 1 - \exp \left\{-\beta_0 z - \pi \lambda r_{net}^2 \left[ 1- \frac{r_{net}^{\alpha}}{\beta_0} \right. \right. \\ \left. \left. \frac{2 }{(\alpha+2)} \cdot {_2F}_1\left( \left[1, \frac{\alpha+2}{\alpha}\right]; \frac{2\alpha+2}{\alpha}, -\frac{r_{net}^{\alpha}}{\beta_0}\right) \right] \right\}. \label{Limit2} \end{multline} By using the identity \cite{Abramowitz:1965} \vspace{-0.2cm} \begin{eqnarray} {_2F}_1\left( \left[a, b \right]; c, z \right) = (1-z)^{-b} {_2F}_1\left( \left[ b, c-a \right]; c, \frac{z}{z-1} \right) \label{identity} \end{eqnarray} and performing some algebraic manipulations, (\ref{Limit2}) becomes \vspace{-0.4cm} \begin{multline} F_{\mathsf Z}(z)= 1 - \lim_{r_{net} \rightarrow \infty } \exp \left \{-\beta_0 z -\frac{2 \pi \lambda r_{net}^{\alpha+2}}{\beta_0 (\alpha+2)} \left( \frac{r_{net}^{\alpha}}{\beta_0}+1 \right)^{- \frac{\alpha+2}{\alpha}} \right. \\ \left. {_2F}_1\left( \left[ \frac{\alpha+2}{\alpha}, \frac{\alpha+2}{\alpha}\right]; \frac{2\alpha+2}{\alpha}, \frac{\frac{r_{net}^{\alpha}}{\beta_0}}{\frac{r_{net}^{\alpha}}{\beta_0}+1} \right) \right \}. \label{Limit4} \end{multline} Because $\frac{r_{net}^{\alpha}}{\beta_0}+1 =\frac{r_{net}^{\alpha}}{\beta_0}$ when $r_{net} \rightarrow \infty$, (\ref{Limit4}) can be simplified to \vspace{-0.4cm} \begin{multline} F_{\mathsf Z}(z)= 1 - \lim_{r_{net} \rightarrow \infty } \exp \left \{-\beta_0 z -\frac{2 \pi \lambda}{(\alpha+2)} \beta_0^{\frac{2}{\alpha}} \right. \\ \left. 
{_2F}_1\left( \left[ \frac{\alpha+2}{\alpha}, \frac{\alpha+2}{\alpha}\right]; \frac{2\alpha+2}{\alpha}, \frac{\frac{r_{net}^{\alpha}}{\beta_0}}{\frac{r_{net}^{\alpha}}{\beta_0}+1} \right) \right\}. \label{Limit5} \end{multline} Since $ \displaystyle \lim_{r_{net} \rightarrow \infty } \frac{ r_{net}^{\alpha}/ \beta_0 }{ r_{net}^{\alpha} / \beta_0 + 1} = 1$, (\ref{Limit5}) is equal to \vspace{-0.2cm} \begin{multline} F_{\mathsf Z}(z)= 1 -\exp \left \{-\beta_0 z -\frac{2 \pi \lambda}{(\alpha+2)} \beta_0^{\frac{2}{\alpha}} \right. \\ \left. {_2F}_1\left( \left[ \frac{\alpha+2}{\alpha}, \frac{\alpha+2}{\alpha}\right]; \frac{2\alpha+2}{\alpha}, 1 \right) \right\}. \label{Limit6} \end{multline} By using the identity \cite{Abramowitz:1965} \begin{eqnarray} {_2F}_1\left( \left[a, b \right]; c, 1 \right)& = & \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)} \label{identity1} \end{eqnarray} and performing a few algebraic manipulations, (\ref{Limit6}) becomes \begin{multline} F_{\mathsf Z}(z) = 1 -\exp \left \{-\beta_0 z -\frac{2 \pi \lambda}{{\alpha}}\beta_0^{\frac{2}{\alpha}} \Gamma\left(\frac{2}{\alpha}\right) \Gamma\left(1-\frac{2}{\alpha}\right)\right \}. \label{Baccelli} \end{multline} which, in the absence of noise, coincides with equation (3.4) of \cite{baccelli:2006} and equation (61) of \cite{weber:2010}. \begin{figure}[t!] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures/Fig3} \vspace{-0.5cm} \caption{Spatially averaged outage probability $\epsilon$ as a function of the mobile density $\lambda$ when interferers are drawn from a PPP. Analytical curves are solid, while dots represent simulated values. Top solid curve: $r_{net}=10$. Bottom solid curve: $r_{net}=2$. The dotted line curve is the $\epsilon$ as a function of $\lambda$ for an infinite network. \label{Figure:Example4} } \vspace{-0.5cm} \end{figure} {\bf Example \#4:} As with Example \#2, suppose that $||X_{0}|| = 1$, $\alpha = 3$, and $\Gamma=10$ dB. Let $r_{ex}=0$ and $L'=1$. 
The interfering mobiles are now placed in a circular region of radius $r_{net}$ according to a PPP with node density $\lambda$. Three network radii are considered: $r_{net} = \{2, 10, \infty\}$. The SINR threshold is set to $\beta=3.7$ dB. Fig. \ref{Figure:Example4} shows the spatially averaged outage probabilities for each of the three values of $r_{net}$ as a function of $\lambda$. The outage probabilities of the two networks with finite radius were computed using (\ref{PPPNN}), while the outage probability of the infinite network was computed using ($\ref{Baccelli}$). In addition, simulation results are shown for the two finite networks, which coincide with the theoretical curves. Again, one million trials were executed per $\lambda$ point, and any discrepancy between the theoretical result and the simulation is due to the finite number of trials. Fig. \ref{Figure:Example4} shows that the outage probability increases with increasing $\lambda$ and/or increasing $r_{net}$. {\bf Example \#5:} Fig. \ref{Figure:Example5} investigates the influence of the path-loss exponent $\alpha$ and the number of equivalent hopping channels $L'$. As in Example \#4, $||X_{0}|| = 1$ and $\Gamma=10$ dB. The interfering mobiles are placed in an annular region with an inner radius $r_{ex}=0.25$ and an outer radius $r_{net}=2$ according to a PPP with node density $\lambda$, and the SINR threshold is fixed to $\beta=3.7$ dB. Fig. \ref{Figure:Example5} shows the outage probability as a function of $\lambda$ for three values of $L'$ and three values of path-loss exponent $\alpha$. For each set of ($L',\alpha$), the outage probability was computed analytically using (\ref{PPPNN}), as shown by the curves, and by Monte Carlo simulation with one million trials, as indicated by the dots. Consistent with observations made in Examples \#2 and \#4, Fig. \ref{Figure:Example5} shows that the outage probability increases with decreasing $L'$ and increases with increasing node density.
In addition, Fig. \ref{Figure:Example5} shows that the outage probability decreases with increasing path-loss exponent $\alpha$. \begin{figure}[t!] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/EM_lambda_L} \vspace{-0.5cm} \caption{Spatially averaged outage probability $\epsilon$ as a function of node density $\lambda$ for three values of $L'$ when interferers are drawn from a PPP. The source transmitter is placed at unit distance from the reference receiver and an exclusion zone of radius $r_{ex}=0.25$ is imposed at the receiver. The SINR threshold is fixed to $\beta = 3.7$ dB and the SNR is set to $\Gamma = 10$ dB. Analytical curves are solid, while dots represent simulated values. \label{Figure:Example5} } \vspace{-0.5cm} \end{figure} \section{Transmission Capacity} Often, networks are constrained to ensure that the outage probability $\epsilon$ does not exceed a maximum outage probability $\zeta \in \left[ 0,1 \right]$; i.e., $\epsilon \leq \zeta$. Under such a constraint, the maximum density of transmissions is of interest, which is quantified by the {\em transmission capacity} (TC) \cite{weber:2010}. With outage constraint $\zeta$, the TC is \begin{eqnarray} \tau_c\left(\zeta \right) & = & \epsilon^{-1}(\zeta)(1-\zeta) \label{TC_definition} \end{eqnarray} where $\epsilon^{-1}(\zeta)$ is the density of the underlying process (BPP or PPP) whose spatially averaged outage probability satisfies the constraint $\epsilon \leq \zeta$ with equality\footnote{Since $\epsilon$ is a monotonically increasing function of $\lambda$, the TC is maximized when the constraint $\epsilon \leq \zeta$ is met with equality.}, and $(1-\zeta)$ ensures that only successful transmissions are counted. The TC represents the spatial spectral efficiency; i.e., the rate of successful data transmission per unit area. With appropriately normalized variables, the TC can assume units of bits-per-second per Hz per $m^2$ (bps/Hz/$m^2$).
Closed-form expressions for TC can be found in Rayleigh fading for networks drawn from either a BPP or a PPP. For the BPP case, $\epsilon^{-1}(\zeta)$ is found by solving $\epsilon = F_{\mathsf Z_M}( \Gamma^{-1} ) = \zeta$ for $\lambda$. By substituting $M = \lambda A$ into (\ref{cdf_BPP}), \begin{eqnarray} \zeta & = & 1 - e^{-\beta_0 \Gamma^{-1}} \left[ \kappa \left \{ \Psi \left( r_{net}^{\alpha} \right) - \Psi \left( r_{ex}^{\alpha} \right) \right \} \right]^{\lambda A}\hspace{-0.4cm}. \end{eqnarray} By solving for $\lambda$ and setting the result to $\epsilon^{-1}(\zeta)$, \begin{eqnarray} \epsilon^{-1}(\zeta) & = & \frac{ \log( 1 - \zeta ) + \beta_0 \Gamma^{-1} }{ A \log \left\{ \kappa \left[ \Psi \left( r_{net}^{\alpha} \right) - \Psi \left( r_{ex}^{\alpha} \right)\right] \right\} }. \label{epsinv} \end{eqnarray} By substituting (\ref{epsinv}) into (\ref{TC_definition}), the TC for a BPP is \begin{eqnarray} \tau_c\left(\zeta \right) = \frac{ (1-\zeta) \left[ \log( 1 - \zeta ) + \beta_0 \Gamma^{-1} \right] }{ A \log \left\{ \kappa \left[ \Psi \left( r_{net}^{\alpha} \right) - \Psi \left( r_{ex}^{\alpha} \right)\right] \right\} }. \label{TC_BPP} \end{eqnarray} \begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/Fig7} \vspace{-0.5cm} \caption{Transmission capacity $\tau_c(\zeta)$ as a function of the outage constraint $\zeta$ for different values of SNR $\Gamma$, when interferers are drawn from a BPP. The network dimensions are $r_{ex}=0$ and $r_{net} = 2$. The curves were produced using parameters $\beta=-10$ dB, $L'=1$, and $\alpha = 3$. Top curve: $\Gamma=10$ dB. Middle curve: $\Gamma=0$ dB. Bottom curve: $\Gamma=-10$ dB.\label{Figure:Example6} } \vspace{-0.5cm} \end{figure} {\bf Example \#6:} In this example, a circular network is assumed with $r_{ex}=0$, $r_{net}=2$, and interferers drawn from a BPP. The path-loss exponent is $\alpha =3$, and the equivalent number of frequency channels is $L'=1$, which results in $p=1$.
The SINR threshold is set to $\beta=-10$ dB. Fig. \ref{Figure:Example6} shows the transmission capacity as a function of the outage constraint $\zeta$ for three values of SNR $\Gamma$. The curves were produced by using (\ref{TC_BPP}) and show that transmission capacity increases with $\Gamma$. When the interferers are drawn from a PPP, $\epsilon^{-1}(\zeta)$ is found by solving $\epsilon = F_{\mathsf Z}( \Gamma^{-1} ) = \zeta$ for $\lambda$, where $F_{\mathsf Z}( z )$ is given by (\ref{PPPNN}), \begin{eqnarray} \zeta = 1 - \exp \left\{- \beta_0 \Gamma^{-1} -\pi \lambda \kappa^{-1} \left\{ 1- \kappa \left[ \Psi \left( r_{net}^{\alpha} \right) - \Psi \left( r_{ex}^{\alpha} \right)\right] \right\} \right \}. \nonumber \end{eqnarray} Solving for $\lambda$ and setting the result to $\epsilon^{-1}(\zeta)$, \begin{eqnarray} \epsilon^{-1}(\zeta) = \frac{\log(1- \zeta)^{-1} - \beta_0 \Gamma^{-1}} {\pi \kappa^{-1} \left\{ 1- \kappa \left[ \Psi \left( r_{net}^{\alpha} \right) - \Psi \left( r_{ex}^{\alpha} \right)\right] \right\} }. \label{epsinv2} \end{eqnarray} By substituting (\ref{epsinv2}) into (\ref{TC_definition}), the TC for a PPP is \begin{eqnarray} \tau_c\left(\zeta \right) = \frac{\left(1-\zeta \right) \left[ \log\left(1-\zeta \right)^{-1}- \beta_0 \Gamma^{-1}\right]}{\pi \kappa^{-1} \left\{1- \kappa \left[ \Psi \left( r_{net}^{\alpha} \right) - \Psi \left( r_{ex}^{\alpha} \right)\right] \right\} }. \label{tcppp} \end{eqnarray} When $r_{net} \rightarrow \infty$, $r_{ex} = 0$ and $p=1$, (\ref{tcppp}) becomes \begin{eqnarray} \tau_c \left(\zeta \right) = \frac{\left( 1- \zeta\right) \left[\log \left( 1-\zeta \right)^{-1} - \beta_0 \Gamma^{-1} \right] }{\pi \beta_0^{\frac{2}{\alpha}}\frac{2 \pi}{\alpha} \csc \left( \frac{2 \pi}{\alpha}\right)}. \label{Baccelli2} \end{eqnarray} This expression agrees with equation (4.10) in \cite{TransCap:2012}, which traces back to equation (62) of \cite{weber:2010} in the absence of noise.
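The TC expressions can be collected into a short numerical sketch (illustrative Python, not the authors' code). Here $q$ denotes $\kappa[\Psi(r_{net}^\alpha)-\Psi(r_{ex}^\alpha)]$, assumed precomputed, and the reflection formula $\Gamma(x)\Gamma(1-x)=\pi/\sin(\pi x)$ is what links the $\Gamma$-function form of the infinite-network exponent in (\ref{Baccelli}) to the $\csc$ form in (\ref{Baccelli2}).

```python
import math

def tc_bpp(zeta, beta0, snr, area, q):
    """Eq. (TC_BPP): TC of a BPP under outage constraint zeta,
    with q = kappa*[Psi(r_net^a) - Psi(r_ex^a)], 0 < q < 1."""
    lam = (math.log(1.0 - zeta) + beta0 / snr) / (area * math.log(q))
    return (1.0 - zeta) * lam

def tc_ppp(zeta, beta0, snr, r_ex, r_net, q):
    """Eq. (tcppp): TC of a PPP on the annulus [r_ex, r_net]."""
    inv_kappa = r_net ** 2 - r_ex ** 2
    lam = ((-math.log(1.0 - zeta) - beta0 / snr)
           / (math.pi * inv_kappa * (1.0 - q)))
    return (1.0 - zeta) * lam

def tc_ppp_infinite(zeta, beta0, snr, alpha):
    """Eq. (Baccelli2): the limit r_net -> infinity, r_ex = 0, p = 1."""
    den = (math.pi * beta0 ** (2.0 / alpha)
           * (2.0 * math.pi / alpha) / math.sin(2.0 * math.pi / alpha))
    return (1.0 - zeta) * (-math.log(1.0 - zeta) - beta0 / snr) / den
```

Both densities (and hence both TCs) are positive only when $\zeta$ exceeds the noise-only outage $1-e^{-\beta_0 \Gamma^{-1}}$; otherwise no positive density meets the constraint with equality.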
\begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/Fig8} \vspace{-0.5cm} \caption{ Transmission capacity $\tau_c(\zeta)$ as a function of outage constraint $\zeta$ for three network radii, when interferers are drawn from a PPP and $r_{ex}=0$. The curves were produced using parameters $\beta = -10$ dB , $\Gamma =10$ dB and $L'=1$. Top curve: $r_{net}=2$. Middle curve: $r_{net}=10$. Bottom curve: $r_{net}=\infty$. \label{Figure:Example7} } \vspace{-0.5cm} \end{figure} {\bf Example \#7:} In this example, a circular network is assumed with $r_{ex}=0$ and interferers drawn from a PPP. Three different values of $r_{net}$ are considered: $r_{net} = \{2,10,\infty\}$. The path-loss exponent is $\alpha =3$, and the equivalent number of frequency channels is $L'=1$, which results in $p=1$. The SINR threshold is set to $\beta=-10$ dB, and the SNR is $\Gamma = 10$ dB. Fig. \ref{Figure:Example7} shows the transmission capacity as a function of the outage constraint $\zeta$ for the three values of network radius $r_{net}$. The transmission capacity of the two networks with finite radius was computed using (\ref{tcppp}), while the transmission capacity of the infinite network was computed using ($\ref{Baccelli2}$). The curves show that the TC decreases with increasing network radius. \section{Modulation-Constrained TC} The transmission capacity expressions presented in the previous section are functions of the SINR threshold $\beta$ and make no assumptions about the existence of any particular type of modulation or channel coding. In practice, the SINR threshold is a function of the modulation and coding that is used. Let $C( \gamma )$ be the maximum achievable rate that can be supported by the chosen modulation at an instantaneous SINR of $\gamma$. If a rate $R$ code is used, then an outage will occur when $C(\gamma) \leq R$. 
Since $C(\gamma)$ is monotonic, it follows that $\beta$ is the value for which $C(\beta)=R$, and therefore we can write $\beta = C^{-1}(R)$. Frequency-hopping systems often use noncoherent CPFSK modulation \cite{cheng:ciss2007,torrieri:2011}. The maximum achievable rate of noncoherent CPFSK is given in \cite{cheng:ciss2007} for various modulation indices $h$, where it is called the {\em symmetric information rate}. In particular, Fig. 1 of \cite{cheng:ciss2007} shows the symmetric information rate of binary CPFSK as a function of $\gamma$ for various $h$. To emphasize the dependence of the capacity on $h$, we use $C(h,\gamma)$ in the sequel to denote the rate of CPFSK with modulation index $h$. For any value of $h$, the value of the SINR threshold $\beta$ can be found from the corresponding curve by finding the value of $\gamma$ for which $C(h,\gamma)=R$. For instance, when $R=1/2$ and $h=1$, the required $\beta = 3.7$ dB. In \cite{torrieri:2008}, it was found that, in practice and over a wide range of code rates, turbo-coded noncoherent CPFSK is consistently about 1 dB away from the corresponding modulation-constrained capacity limit. Thus, the $\beta$ required in practice will generally be higher, by a small margin, than the minimum threshold $\beta_{min}(R,h)$, i.e., the value of $\gamma$ for which $C(h,\gamma)=R$. For instance, if a 1 dB margin is used, then the SINR threshold for noncoherent binary CPFSK with $R=1/2$ and $h=1$ should be set to $\beta = 4.7$ dB. When accounting for modulation and coding, the maximum data transmission rate is determined by the bandwidth $B/L$ of a frequency channel, the spectral efficiency of the modulation, and the code rate. Let $\eta$ be the spectral efficiency of the modulation, given in symbols per second per Hz, and defined by the symbol rate divided by the 99 percent-power bandwidth of the modulation\footnote[1]{Percent-power bandwidths other than 99 can be used, but will influence the amount of adjacent-channel interference.}.
The spectral efficiency of CPFSK can be found by numerically integrating the normalized power-spectral densities given in \cite{torrieri:2011}, or, since we assume many symbols per hop, by Equation (3.4-61) of \cite{proakis:2008} and then inverting the result. To emphasize the dependence of $\eta$ on $h$, we denote the spectral efficiency of CPFSK as $\eta(h)$ in the sequel. When combined with a rate-$R$ code, the spectral efficiency of CPFSK becomes $R \eta(h)$ (information) bits per second per Hz, where $R$ is the ratio of information bits to code symbols. The data rate supported by the channel is $R \eta(h) B/L$ bits per second. The average data rate, or throughput, must account for the duty factor $d$ and only count correct transmissions. Hence, the throughput is \begin{eqnarray} T & = & \frac { R \eta(h) B d (1-\epsilon) }{L} = \frac { R \eta(h) B (1-\epsilon) }{L'}. \end{eqnarray} The {\em modulation-constrained} transmission capacity is the throughput multiplied by the node density, \begin{eqnarray} \tau (\lambda) & = & \lambda T = \frac{\lambda R \eta(h) B (1-\epsilon) }{L'}.\label{Equation:TC} \end{eqnarray} In contrast with (\ref{TC_definition}), this form of transmission capacity explicitly takes into account the code rate $R$, as well as the spectral efficiency of the modulation $\eta(h)$. It furthermore accounts for the hopping bandwidth $B/L'$. Rather than constraining outage probability, $\tau( \lambda )$ fixes the node density $\lambda$ and allows the outage probability to vary accordingly. Since it accounts for the actual system bandwidth $B$, (\ref{Equation:TC}) assumes units of $bps/m^2$. By dividing by bandwidth, the {\em normalized} modulation-constrained transmission capacity \begin{eqnarray} \tau'(\lambda) & = & \frac{\tau}{B} = \frac{\lambda R \eta(h) (1-\epsilon) }{L'}\label{Equation:TCnorm} \end{eqnarray} takes on units of $bps/Hz/m^2$.
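For concreteness, (\ref{Equation:TC}) and (\ref{Equation:TCnorm}) amount to the following arithmetic (illustrative sketch; $\eta(h)$ and $\epsilon$ are assumed given):

```python
def throughput(R, eta_h, bandwidth, L_eq, eps):
    """T = R*eta(h)*B*(1-eps)/L', in bits per second."""
    return R * eta_h * bandwidth * (1.0 - eps) / L_eq

def normalized_tc(lam, R, eta_h, L_eq, eps):
    """tau'(lam) = lam*R*eta(h)*(1-eps)/L', in bps/Hz/m^2."""
    return lam * R * eta_h * (1.0 - eps) / L_eq
```

Note that $\tau'(\lambda) = \lambda T / B$, i.e., the normalized TC is the per-link throughput scaled by density and divided out by the system bandwidth.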
However, unlike (\ref{TC_definition}), $\tau'(\lambda)$ is in terms of {\em information} bits rather than {\em channel} bits. \section{Network Optimization \\ by an Exhaustive Search}\label{Section:Optimization} The main goal of this paper is to find the $(L',R,h)$ that maximizes the normalized TC $\tau'(\lambda)$ for a frequency-hopping ad hoc network, assuming that transmissions occur using a capacity-approaching code (e.g., turbo or LDPC) and noncoherent binary continuous-phase frequency shift keying (CPFSK) modulation. The optimization can be accomplished using an exhaustive search by performing the following steps: \begin{enumerate} \item \label{pickBetaE} Pick a value of $\beta$. \item \label{pickhE} Pick a value of $h$, and determine the rate $R$ corresponding to the current $\beta$ (this is found by setting $R=C(h,\beta)$) and its corresponding bandwidth efficiency $\eta(h)$. \item \label{pickLE} Pick a value of $L'$. \item Use (\ref{cdf_BPP}) to compute the average outage probability $\epsilon_{M}$ if the interferers are drawn from a BPP or (\ref{cdf_PPP}) to compute $\epsilon$ if the interferers are drawn from a PPP. \label{OPESE} \item For the set of $(h,R)$ found in step \ref{pickhE}, determine $\tau'(\lambda)$ by using (\ref{Equation:TCnorm}). \item Return to step \ref{pickLE} until all $L'$ are considered. \item Return to step \ref{pickhE} until all $h$ are considered. \item Return to step \ref{pickBetaE} until all $\beta$ are considered. \end{enumerate} The above procedure will find the $\tau'(\lambda)$ for each $(L',R,h)$ considered, and the optimal values of these parameters are the ones that maximize $\tau'(\lambda)$. By limiting $L'$ to be integer valued (which is not necessary if $d$ is a fraction), the number of values is finite and an exhaustive search up to some maximum value is feasible. The value of $\beta$ is continuous, and therefore must be quantized.
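The enumeration above can be sketched generically. In this illustrative sketch the CPFSK symmetric information rate $C(h,\beta)$, the spectral efficiency $\eta(h)$, and the outage evaluation are supplied as callables, since the first two come from tabulated curves \cite{cheng:ciss2007} rather than closed forms; the toy functions used in any quick test of the routine are placeholders, not the true CPFSK characteristics.

```python
def optimize_tc(lam, betas, hs, L_primes, capacity, eta, outage):
    """Exhaustive search for the (L', R, h) maximizing tau'(lam).

    capacity(h, beta) -> code rate R matched to threshold beta,
    eta(h)            -> spectral efficiency of the modulation,
    outage(beta, L)   -> spatially averaged outage probability.
    Returns (best_tau, (L', R, h))."""
    best_tau, best_params = -1.0, None
    for beta in betas:                 # step 1: SINR-threshold grid
        for h in hs:                   # step 2: modulation-index grid
            R = capacity(h, beta)      # rate matched to the threshold
            for L in L_primes:         # step 3: hopping-channel grid
                eps = outage(beta, L)  # step 4: outage probability
                tau = lam * R * eta(h) * (1.0 - eps) / L  # step 5
                if tau > best_tau:
                    best_tau, best_params = tau, (L, R, h)
    return best_tau, best_params
```

The triple loop mirrors steps 1-8 directly; quantizing $\beta$ and $h$ to finite grids is what makes the search feasible.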
For the exhaustive search results presented in this section, $\beta$ was quantized to a spacing of $0.1$ dB over the range $-2$ dB $\leq \beta \leq 12$ dB, and $h$ was quantized to a spacing of 0.01 over the range $0 \leq h \leq 1$. \subsection{Optimization Results for a BPP}\label{Section:Results_BPP} The optimization was run for a network of $M=50$ interferers placed according to a BPP in an annular region with inner radius $r_{ex} = 0.25$ and outer radius $r_{net} = 2$. The path-loss exponent was fixed to $\alpha=3$. Fig. \ref{Figure:Example8} shows the maximum normalized modulation-constrained TC $\tau_{opt}'(\lambda)$ as a function of the SNR $\Gamma$. For each value of $\Gamma$, the optimal set of $(L',R,h)$ that maximizes the TC was found using the previously described exhaustive search. The value of $\tau_{opt}'(\lambda)$ was computed assuming a capacity-achieving code. Suppose instead that the code has a {\em gap} of 1 dB from capacity, i.e., that the required threshold $\beta$ is 1 dB higher than that predicted by information theory. The transmission capacity will be lower due to this gap. The curve labeled $\tau_{1}'$ shows the TC when using a code with a 1 dB gap from capacity, evaluated at the optimal values of $(L',R,h)$ found assuming a capacity-achieving code. As can be seen, a modest loss in TC occurs when the code has a 1 dB gap from capacity. Fig. \ref{Figure:Example8} also shows the normalized TC $\tau_{sub}'(\lambda)$ of a system with a suboptimal but typical choice of parameters: $(L',R,h) = (200,1/2,1)$. The results shown in Fig. \ref{Figure:Example8} highlight the importance of parameter optimization. The TC is improved by a factor of 5-10 by selecting optimal, rather than arbitrary, parameters.
\begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/Tau_SNR} \vspace{-0.5cm} \caption{Maximum normalized modulation-constrained TC $\tau'(\lambda)$ as a function of the SNR $\Gamma$ for three cases: (1) the code achieves capacity; (2) the code has a 1 dB gap from capacity; and (3) a typical set of ($(L',R,h)$)=($200,1/2,1$) is used. The interferers are drawn from a BPP. \label{Figure:Example8} } \end{figure} Fig. \ref{Figure:Example9}-\ref{Figure:Example11} explore the relative importance of each of the three parameters. In each curve, the SNR was set to $\Gamma = 10$ dB and one parameter is varied. At each value of the parameter, the TC is maximized with respect to the other two parameters. Three values of path-loss exponent are considered, $\alpha = \{3,3.5,4\}$. The optimal values of each of the parameters can be identified by locating the peaks of each curve. A general trend is that the TC improves with increasing $\alpha$, though the optimal parameter values are not strongly influenced by the $\alpha$. \begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/BPP_tau_alpha_L} \vspace{-0.5cm} \caption{Maximum normalized modulation-constrained TC $\tau_{opt}'(\lambda)$ as a function of the equivalent number of frequency channels $L'$. The interferers are drawn from a BPP and the network dimensions are $r_{ex}$ = 0.25 and $r_{net}$ = 2. For each value of $L'$, the optimal $R$ and $h$ are found. Top curve: $\alpha$ = 4. Middle curve: $\alpha$ = 3.5. Bottom curve: $\alpha$ = 3. \label{Figure:Example9} } \hspace{-0.5cm} \end{figure} \begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/BPP_tau_alpha_R} \vspace{-0.5cm} \caption{Maximum normalized modulation-constrained TC $\tau_{opt}'(\lambda)$ as a function of the code rate $R$. The interferers are drawn from a BPP and the network dimensions are $r_{ex}$ = 0.25 and $r_{net}$ = 2. 
For each value of $R$, the optimal $L'$ and $h$ are found. Top curve: $\alpha$ = 4. Middle curve: $\alpha$ = 3.5. Bottom curve: $\alpha$ = 3. \label{Figure:Example10} } \end{figure} \begin{figure}[t] \centering \hspace{-0.5cm} \includegraphics[width=9.25cm]{figures2/BPP_tau_alpha_h} \vspace{-0.5cm} \caption{Maximum normalized modulation-constrained TC $\tau_{opt}'(\lambda)$ as a function of the modulation index $h$. The interferers are drawn from a BPP and the network dimensions are $r_{ex}$ = 0.25 and $r_{net}$ = 2. For each value of $h$, the optimal $L'$ and $R$ are found. Top curve: $\alpha$ = 4. Middle curve: $\alpha$ = 3.5. Bottom curve: $\alpha$ = 3.\label{Figure:Example11} } \vspace{-0.5cm} \end{figure} \subsection{Optimization Results for a PPP}\label{Section:Results_PPP} Next, the optimization was run for a network with interferers drawn from a PPP in an annular region with inner radius $r_{ex} = 0.25$ and outer radius $r_{net} = 2$. The SNR was set to $\Gamma=10$ dB and the path-loss exponent to $\alpha=3$. Fig. \ref{Figure:Example12} shows the maximum modulation-constrained normalized TC $\tau_{opt}'(\lambda)$ as a function of the mobile density $\lambda$. For each value of $\lambda$, the optimal set of $(L',R,h)$ that maximizes the TC was found using the previously described exhaustive search. Similar to Fig. \ref{Figure:Example8}, Fig. \ref{Figure:Example12} shows the performance $\tau'_1(\lambda)$ when the code has a 1 dB gap from capacity, and shows the performance $\tau_{sub}'(\lambda)$ of a system that uses the typical choice of parameters: $(L',R,h) = (200,1/2,1)$. While the loss due to using a code with a 1 dB gap from capacity is quite minimal, the loss due to using suboptimal parameters is quite high, especially in sparser networks.
\begin{figure}[t]
\centering
\hspace{-0.5cm}
\includegraphics[width=9.25cm]{figures2/PPP_tau}
\vspace{-0.5cm}
\caption{Maximum normalized modulation-constrained TC $\tau'(\lambda)$ as a function of the mobile density $\lambda$ for three cases: (1) the code achieves capacity; (2) the code has a 1 dB gap from capacity; and (3) a typical set $(L',R,h)=(200,1/2,1)$ is used. The interferers are drawn from a PPP. \label{Figure:Example12} }
\end{figure}
\begin{figure}[t]
\centering
\hspace{-0.5cm}
\includegraphics[width=9.25cm]{figures2/PPP_tau_alpha_L}
\vspace{-0.5cm}
\caption{Maximum normalized modulation-constrained TC $\tau_{opt}'(\lambda)$ as a function of the equivalent number of frequency channels $L'$. The interferers are drawn from a PPP and the network dimensions are $r_{ex}$ = 0.25 and $r_{net}$ = 2. For each value of $L'$, the optimal $R$ and $h$ are found. Curves from top to bottom: (1) $\lambda$ = 5; (2) $\lambda$ = 2; (3) $\lambda$ = 0.5; (4) $\lambda$ = 0.1. \label{Figure:Example13} }
\end{figure}
\begin{figure}[t]
\centering
\hspace{-0.5cm}
\includegraphics[width=9.25cm]{figures2/PPP_tau_alpha_R}
\vspace{-0.5cm}
\caption{Maximum normalized modulation-constrained TC $\tau_{opt}'(\lambda)$ as a function of the code rate $R$. The interferers are drawn from a PPP and the network dimensions are $r_{ex}$ = 0.25 and $r_{net}$ = 2. For each value of $R$, the optimal $L'$ and $h$ are found. Curves from top to bottom: (1) $\lambda$ = 5; (2) $\lambda$ = 2; (3) $\lambda$ = 0.5; (4) $\lambda$ = 0.1. \label{Figure:Example14} }
\end{figure}
\begin{figure}[t]
\centering
\hspace{-0.5cm}
\includegraphics[width=9.25cm]{figures2/PPP_tau_alpha_h}
\vspace{-0.5cm}
\caption{Maximum normalized modulation-constrained TC $\tau_{opt}'(\lambda)$ as a function of the modulation index $h$. The interferers are drawn from a PPP and the network dimensions are $r_{ex}$ = 0.25 and $r_{net}$ = 2.
For each value of $h$, the optimal $L'$ and $R$ are found. Curves from top to bottom: (1) $\lambda$ = 5; (2) $\lambda$ = 2; (3) $\lambda$ = 0.5; (4) $\lambda$ = 0.1. \label{Figure:Example15} }
\end{figure}
Fig. \ref{Figure:Example13}-\ref{Figure:Example15} explore the relative importance of each of the three parameters by varying one parameter at a time. At each value of the parameter, the modulation-constrained TC is maximized with respect to the other two parameters. Four values of mobile density are considered, $\lambda = \{0.1, 0.5, 2, 5\}$. The optimal values of each of the parameters can be identified by locating the peaks of each curve. Fig. \ref{Figure:Example13} shows the maximum normalized TC as a function of the equivalent number of frequency channels $L'$, where for each density the optimal modulation index and code rate are used. A strong dependence on $L'$ is observed, with the optimal $L'$ becoming larger with increasing network density. Fig. \ref{Figure:Example14} shows a relatively weak dependence on $R$, with denser networks requiring a slightly lower rate. Fig. \ref{Figure:Example15} shows a weak dependence on $h$, with $h \approx 0.59$ providing optimal performance for every density.
\section{Network Optimization by Gradient-Search Method}\label{Section:Optimization_GM}
\begin{table}
\centering
\caption{Results of the optimization for an annular network area where the interferers are drawn from a BPP. The number of interferers is fixed to $M=50$.
\label{maintable}}
\vspace{-0.3cm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$r_{net}$ & $r_{ex}$ & $\alpha$ & $\tau'_{opt}$ & $\tau'_{opt_{\nabla}}$ & $I_{\nabla}$ \\
\hline
1 & 0.25 & 3 & 0.04427 & 0.04427 & 66 \\ \cline{3-6}
 & & 3.5 & 0.04395 & 0.04395 & 66 \\ \cline{3-6}
 & & 4 & 0.04372 & 0.04372 & 66 \\ \cline{2-6}
 & 0.5 & 3 & 0.04503 & 0.04503 & 87 \\ \cline{3-6}
 & & 3.5 & 0.04468 & 0.04468 & 66 \\ \cline{3-6}
 & & 4 & 0.04437 & 0.04437 & 66 \\ \hline
2 & 0.25 & 3 & 0.01590 & 0.01590 & 54 \\ \cline{3-6}
 & & 3.5 & 0.01688 & 0.01688 & 54 \\ \cline{3-6}
 & & 4 & 0.01792 & 0.01792 & 54 \\ \cline{2-6}
 & 0.5 & 3 & 0.01641 & 0.01641 & 63 \\ \cline{3-6}
 & & 3.5 & 0.01752 & 0.01752 & 63 \\ \cline{3-6}
 & & 4 & 0.01871 & 0.01871 & 57 \\ \hline
4 & 0.25 & 3 & 0.00983 & 0.00983 & 50 \\ \cline{3-6}
 & & 3.5 & 0.01187 & 0.01187 & 47 \\ \cline{3-6}
 & & 4 & 0.01395 & 0.01395 & 50 \\ \cline{2-6}
 & 0.5 & 3 & 0.01024 & 0.01024 & 44 \\ \cline{3-6}
 & & 3.5 & 0.01252 & 0.01252 & 44 \\ \cline{3-6}
 & & 4 & 0.01484 & 0.01484 & 47 \\ \hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Results of the optimization for an annular network area where the interferers are drawn from a PPP. The intensity $\lambda$ per unit area is fixed to $\lambda=1$.
\label{maintable1}}
\vspace{-0.3cm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$r_{net}$ & $r_{ex}$ & $\alpha$ & $\tau'_{opt}$ & $\tau'_{opt_{\nabla}}$ & $I_{\nabla}$ \\
\hline
1 & 0.25 & 3 & 0.04654 & 0.04654 & 129 \\ \cline{3-6}
 & & 3.5 & 0.04623 & 0.04623 & 129 \\ \cline{3-6}
 & & 4 & 0.04598 & 0.04598 & 129 \\ \cline{2-6}
 & 0.5 & 3 & 0.05932 & 0.05932 & 129 \\ \cline{3-6}
 & & 3.5 & 0.05881 & 0.05881 & 129 \\ \cline{3-6}
 & & 4 & 0.05838 & 0.05838 & 129 \\ \hline
2 & 0.25 & 3 & 0.01597 & 0.01597 & 126 \\ \cline{3-6}
 & & 3.5 & 0.01697 & 0.01697 & 126 \\ \cline{3-6}
 & & 4 & 0.01801 & 0.01801 & 134 \\ \cline{2-6}
 & 0.5 & 3 & 0.01731 & 0.01731 & 134 \\ \cline{3-6}
 & & 3.5 & 0.01845 & 0.01845 & 134 \\ \cline{3-6}
 & & 4 & 0.01973 & 0.01973 & 123 \\ \hline
4 & 0.25 & 3 & 0.00977 & 0.00977 & 117 \\ \cline{3-6}
 & & 3.5 & 0.01180 & 0.01180 & 117 \\ \cline{3-6}
 & & 4 & 0.01387 & 0.01387 & 117 \\ \cline{2-6}
 & 0.5 & 3 & 0.01030 & 0.01030 & 120 \\ \cline{3-6}
 & & 3.5 & 0.01258 & 0.01258 & 135 \\ \cline{3-6}
 & & 4 & 0.01491 & 0.01491 & 123 \\ \hline
\end{tabular}
\vspace{-0.5cm}
\end{table}
The results of the exhaustive search presented in the previous section suggest that the modulation-constrained TC is a concave function of $(L',R,h)$. It follows that the optimization is a convex optimization problem and can be efficiently solved through a gradient-search method \cite{boyd:2004}. In particular, the optimization can be accomplished by performing the following steps:
\begin{enumerate}
\item Pick intervals for $L'$ ($[L'_{min},L'_{max}]$), $\beta$ ($[\beta_{min},\beta_{max}]$), and $h$ ($[h_{min},h_{max}]$). \label{interval}
\item Create sets $L_{set}=\{ L'_{min}, \left(L'_{min} + L'_{max} \right)/2, L'_{max} \}$, $\beta_{set}=\{ \beta_{min}, \left(\beta_{min} + \beta_{max} \right)/2, \beta_{max} \}$, and $h_{set}=\{ h_{min}, \left(h_{min} + h_{max} \right)/2, h_{max} \}$ composed of the two extreme points and the center point of each interval.
\label{vector}
\item Pick one of the three values of $\beta$.
\item Pick one of the three values of $h$ and determine the rate $R$ corresponding to the current $h$ and $\beta$ (found by setting $R=C(h,\beta)$) and its corresponding bandwidth efficiency $\eta(h)$. \label{rateh}
\item For all three values of $L'$ and for the set of $(h,R)$ found in the last step, determine $\tau'(\lambda)$ by using (\ref{Equation:TCnorm}). \label{optL}
\item Once $\tau'(\lambda)$ is computed for all three values of $L'$, determine which value has the largest normalized TC:
\begin{enumerate}
\item If the maximum is at one of the two external points, the center of the search points is moved in that direction and two new external points are chosen closer to the new center point;
\item If the maximum is at the center point, the two external points are moved closer to the center. Each time the maximum is at the center, the distance between the two external points and the center is gradually decreased.
\end{enumerate} \label{optL1}
\item Return to step \ref{optL} and use the three new points, until the distance between the two external points and the center reaches the value fixed in step \ref{interval} and the maximum stays at the center. \label{stepL}
\item Repeat steps \ref{optL}, \ref{optL1}, and \ref{stepL} for all three values of $h$, and save the normalized TC of each with $L'$ optimized. \label{opth}
\item As in step \ref{optL1}, once $\tau'(\lambda)$ is computed for all three values of $h$, determine which value has the largest normalized TC:
\begin{enumerate}
\item If the maximum is at one of the two external points, the center of the search points is moved in that direction and two new external points are chosen closer to the new center point;
\item If the maximum is at the center point, the two external points are moved closer to the center. Each time the maximum is at the center, the distance between the two external points and the center is gradually decreased.
\end{enumerate} \label{opth1}
\item Return to step \ref{opth} and use the three new points of $h$, until the distance between the two external points is sufficiently small and the maximum point remains at the center. \label{steph}
\item Repeat steps \ref{opth}, \ref{opth1}, and \ref{steph} for all three values of $\beta$, and save the normalized TC of each. \label{optbeta}
\item As in step \ref{opth1}, once $\tau'(\lambda)$ is computed for all three values of $\beta$, determine which value has the largest normalized TC:\label{optbeta1}
\begin{enumerate}
\item If the maximum is at one of the two external points, the center of the search points is moved in that direction and two new external points are chosen closer to the new center point;
\item If the maximum is at the center point, the two external points are moved closer to the center. Each time the maximum is at the center, the distance between the two external points and the center is gradually decreased.
\end{enumerate}
\item Return to step \ref{optbeta} and use the three new points of $\beta$, until the distance between the two external points is sufficiently small and the maximum point remains at the center.
\end{enumerate}
The algorithm is initialized by the intervals selected at step \ref{interval}. As the algorithm runs, the size of the intervals gets successively smaller. The algorithm can stop once both of the following conditions are satisfied at the same time: (1) the optimal values of $L'$, $h$, and $\beta$ are the same as in the previous iteration; (2) the spacing between the elements of the sets $L_{set}$, $h_{set}$, and $\beta_{set}$ is equal to the quantization step for $L'$, $h$, and $\beta$, respectively. Tables \ref{maintable} and \ref{maintable1} compare the results of optimizations performed by exhaustive search and gradient search for networks distributed according to BPP and PPP processes, respectively.
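The interval-shrinking procedure above is essentially a coordinate-wise ternary search over $(L',\beta,h)$, exploiting the concavity of the objective. The sketch below illustrates the idea; the objective \texttt{tc} is only a placeholder for the normalized TC of (\ref{Equation:TCnorm}), and the function names, tolerance, and sweep count are illustrative choices rather than the paper's exact settings.

```python
def ternary_search(f, lo, hi, tol=1e-3):
    """Shrink [lo, hi] around the maximum of a concave f (cf. steps 5-7)."""
    while hi - lo > tol:
        third = (hi - lo) / 3.0
        m1, m2 = lo + third, hi - third
        if f(m1) < f(m2):
            lo = m1  # the maximum lies to the right of m1
        else:
            hi = m2  # the maximum lies to the left of m2
    return (lo + hi) / 2.0

def optimize(tc, bounds, tol=1e-3, sweeps=20):
    """Cyclically maximize tc(L, beta, h), one coordinate at a time."""
    x = [(lo + hi) / 2.0 for lo, hi in bounds]  # start at the interval centers
    for _ in range(sweeps):
        for i, (lo, hi) in enumerate(bounds):
            x[i] = ternary_search(
                lambda v: tc(*[v if j == i else x[j] for j in range(3)]),
                lo, hi, tol)
    return x
```

For a concave objective this converges to the joint maximizer; in the actual system $L'$ would additionally be rounded to the nearest admissible quantized value.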
The column marked $\tau'_{opt}$ is the maximum modulation-constrained TC found by using the exhaustive-search technique of Section \ref{Section:Optimization}, while $\tau'_{opt_{\nabla}}$ is the value found using the gradient-search technique presented in this section. For each type of spatial distribution, three values of $r_{net}$, two values of $r_{ex}$, and three values of $\alpha$ were considered. For the BPP, the number of interferers was set to $M=50$, while for the PPP, the density was set to $\lambda = 1$. For both processes, the SNR was set to $\Gamma = 10$ dB. The tables also indicate, in the column marked $I_{\nabla}$, the number of iterations required for the gradient-search technique to converge. Each iteration requires that 200 values of $\tau'(\lambda)$ be evaluated, since a spacing of 1 was used for $L'$ over the range $1 \leq L' \leq 200$. Notice the slight variation in the number of iterations. This is in contrast with the exhaustive-search algorithm, which requires that a fixed number of values of $\tau'(\lambda)$ be evaluated. In particular, the exhaustive-search optimization considered $2,848,200$ sets of discretized parameters by using the same parameter spacings described in Section \ref{Section:Optimization}. As can be seen from both Table \ref{maintable} and Table \ref{maintable1}, the gradient-search method gives the same maximum normalized TC as the exhaustive-search method ($\tau'_{opt}=\tau'_{opt_{\nabla}}$). However, the gradient-search technique is more efficient because it requires fewer values of $\tau'(\lambda)$ to be evaluated.
\balance
\section{Conclusion} \label{sec_conclusion}
The combination of frequency-hopping, noncoherent CPFSK modulation, and capacity-approaching coding is a sensible choice for modern ad hoc networks. For such systems, the performance depends critically on the number of frequency-hopping channels, the modulation index, and the code rate.
While these parameters are often chosen arbitrarily, the system performance can be significantly improved by the joint optimization of the three parameters. The modulation-constrained transmission capacity is an appropriate objective function for the optimization. Preliminary results using modulation-constrained TC as the objective function suggest that the optimization problem is convex and therefore a good candidate for the gradient-search algorithm proposed in this paper. The derivation of modulation-constrained transmission capacity required a careful analysis of the outage probability under the assumptions made in this paper. By extending the analysis, other fading distributions, such as Nakagami, can be considered, as can shadowing. More sophisticated spatial models can be considered, for instance by imposing a minimum separation among all users. While more sophisticated models might not be analytically tractable, they are good candidates for the Monte Carlo method proposed in this paper, which requires the random placement of mobiles but does not require the realization of the fading coefficients. The results presented in this paper are just a sample of what is possible using this methodology. In addition to considering more sophisticated channel models, future work could consider other network topologies (other than the annular region considered in this paper). One example of such a network is one where the reference receiver is allowed to move from the center of a disk to its perimeter. Other types of modulation and reception could be considered, such as nonbinary CPFSK with multi-symbol reception \cite{valenti:2010}. Directional antennas could be considered, as could the impact of adjacent-channel interference due to the effect of spectral splatter. \bibliographystyle{ieeetr}
\section{Conclusion} \vspace{-2pt} In this paper, we study synonym discovery on privacy-aware clinical data, which is a new yet practical setting and consumes less sensitive information to discover synonyms. We propose a novel and effective framework named \textsf{\textsc{SurfCon}}\xspace that considers both the surface form information and the global context information, can handle both InV and OOV query terms, and substantially outperforms various baselines on real-world datasets. As future work, we will extend \textsf{\textsc{SurfCon}}\xspace to infer more semantic relationships (besides synonymity) between terms and test it on more real-life datasets. \section{Experiments} \label{section:exp} Now we evaluate our proposed framework \textsf{\textsc{SurfCon}}\xspace to show the effectiveness of leveraging both surface form information and global context information for synonym discovery. \vspace{-10pt} \subsection{Datasets}\label{exp:dataset} \vspace{-2pt} \noindent \textbf{Medical Term Co-occurrence Graph.} We adopt publicly available sets of medical terms with their co-occurrence statistics, which are extracted by \citet{finlayson2014building} from 20 million clinical notes collected from Stanford Hospitals and Clinics \cite{lowe2009stride} since 1995. Medical terms are extracted using an existing phrase mining tool~\cite{lependu2012annotation} by matching with 22 clinically relevant ontologies such as SNOMED-CT and MedDRA. Co-occurrence frequencies are counted based on how many times two terms co-occur in the same temporal \textit{bin} (i.e., a certain timeframe in a patient's records), e.g., 1, 7, 30, 90, 180, 365, and $\infty$-day \textit{bins}. Without loss of generality, we choose 1-day per-bin and $\infty$-day per-bin\footnote{Per-bin means each unique co-occurring term-term pair is counted at most once for each relevant bin of a patient. We refer readers to \citet{finlayson2014building} for more information.} graphs to evaluate different methods.
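To make the per-bin counting concrete, the following sketch counts each unique co-occurring term pair at most once per bin of a patient's record. The record format and the uniform binning are simplifying assumptions for illustration, not the exact procedure of \citet{finlayson2014building}.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(patient_records, bin_days):
    """patient_records: one list of (term, day) pairs per patient (illustrative format)."""
    counts = Counter()
    for record in patient_records:
        bins = {}
        for term, day in record:
            # partition each patient's timeline into bins of bin_days days
            bins.setdefault(day // bin_days, set()).add(term)
        for terms in bins.values():
            # each unique pair contributes at most once per bin
            for pair in combinations(sorted(terms), 2):
                counts[pair] += 1
    return counts
```

Aggregating these counts over all patients yields the weighted co-occurrence graph used in the rest of the paper.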
We first convert the global counts between nodes to the PPMI values \cite{levy2014linguistic} and adopt subsampling \cite{mikolov2013distributed} to filter very common terms, such as "medical history", "medication dose", etc. We choose these two datasets {because they have very different connection density as shown in Table \ref{tab:dataset-statistics}}, and denote them as {\textbf{1-day} and \textbf{All-day}} datasets. \noindent \textbf{Synonym Label.} \label{synlabel} In the released datasets, \citet{finlayson2014building} provided a term-to-UMLS CUI mapping based on the same 22 ontologies as used when extracting terms. They reduced the ambiguity of a term by suppressing its least likely meaning so as to provide a high-quality mapping. We utilized such mapping to obtain the synonym labels: Terms mapped to the same UMLS CUI are treated as synonyms, e.g., terms like "c vitamin", "vit c", "ascorbic acid" are synonyms as they are all mapped to the concept "Ascorbic Acid" with ID \text{C0003968}. \noindent \textbf{Query Terms.} Given a medical term-term co-occurrence graph, terms in the graph that can be mapped to UMLS CUIs are treated as potential query terms, and we split all such terms into training, development and testing sets. Here, since all terms appear in the given co-occurrence graph, this testing set is referred to as the \textbf{InV testing set}. We also create an \textbf{OOV testing set}: Under a UMLS CUI, terms not in the co-occurrence graph are treated as OOV query terms and are paired with their synonyms which are in the graph to form positive pairs. We sample 2,000 of such OOV query terms for experiments. In addition, since synonyms with different surface forms tend to be more challenging to discover (e.g., "vitamin c" vs. 
"ascorbic acid"), we also sample a subset named \textbf{Dissim} under both the \textbf{InV} and \textbf{OOV} testing sets, where query terms paired with their dissimilar synonyms\footnote{Dissimilarity is measured by Levenshtein edit distance \cite{gomaa2013survey} with a threshold (0.8).} are selected. Statistics of our training/dev/testing sets are given in Table \ref{tab:dataset-statistics}. \input{dataset_statistics.tex} \input{main_results.tex} \vspace{-8pt} \subsection{Experimental Setup} \label{exp:setup} \vspace{-2pt} \subsubsection{Baseline methods.} \label{baseline} We compare \textsf{\textsc{SurfCon}}\xspace with the following 10 methods. The baselines can be categorized into three types: (i) Surface form based methods, which focus on capturing the surface form information of terms; (ii) Global context based methods, which try to learn embeddings of terms for synonym discovery; (iii) Hybrid methods, which combine surface form and global context information. The others are our model variants. \noindent \textbf{Surface form based methods}. (1) \textit{CharNgram}~\cite{hashimoto2017jmt}: We borrow pre-trained character n-gram embeddings from~\citet{hashimoto2017jmt} and take the average of unique n-gram embeddings for each term as its feature, and then train a bilinear scoring function following previous works~\cite{qu2017automatic, zhang2019synonymnet}. (2) \textit{CHARAGRAM} \cite{wieting2016charagram}: Similar to the above, but we further fine-tune CharNgram embeddings using synonym supervision. (3) \textit{SRN} \cite{neculoiu2016learning}: A Siamese network structure is adopted with a bi-directional LSTM to encode the character sequence of each term, and cosine similarity is used as the scoring function.
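As an illustration of how these surface form baselines score a candidate pair, the sketch below averages character n-gram embeddings into a term vector and applies a bilinear scoring function. The embedding table and the boundary-marker convention are made up for the example; in the actual experiments the pre-trained CharNgram vectors and a matrix $W$ learned from synonym supervision would be used.

```python
import numpy as np

def char_ngrams(term, n=3):
    padded = "#" + term + "#"  # boundary markers (an assumed convention)
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def term_vector(term, emb, dim):
    """Average the embeddings of the term's unique character n-grams."""
    grams = [emb[g] for g in char_ngrams(term) if g in emb]
    return np.mean(grams, axis=0) if grams else np.zeros(dim)

def bilinear_score(x, y, W):
    """Bilinear similarity x^T W y; W would be learned from synonym pairs."""
    return float(x @ W @ y)
```

Ranking candidates by this score against a query term's vector recovers the behavior of baseline (1).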
We treat the embeddings as features and use a bilinear score function for synonym discovery. (5) \textit{LINE(2nd)} \cite{tang2015line}: A widely-adopted graph embedding approach. Similarly, embeddings are treated as features and a bilinear score function is trained to detect synonyms. (6) \textit{DPE-NoP} \cite{qu2017automatic}: DPE is proposed for synonym discovery on text corpus, and consists of a distributional module and a pattern module, where the former utilizes global context information and the latter learns patterns from raw sentences. Since raw texts are unavailable in our setting, we only deploy the distributional module (a.k.a. DPE-NoP in \citet{qu2017automatic}). \noindent \textbf{Hybrid methods}. (7) \textit{Concept Space Model} \cite{wang2015medical}: A medical synonym extraction method that combines word embeddings and heuristic rule-based string features. (8) \textit{Planetoid} \cite{yang2016revisiting}: An inductive graph embedding method that can generate embeddings for both observed and unseen nodes. We use the bi-level surface form encoding vectors as the input and take the intermediate hidden layer as embeddings. Similarly, a bilinear score function is used for synonym discovery. \noindent \textbf{Model variants}. (9) \textit{\textsf{\textsc{SurfCon}}\xspace (Surf-Only)}: A variant of our framework which only uses the surface score for ranking. (10) \textit{\textsf{\textsc{SurfCon}}\xspace (Static)}: Our framework with static representation mechanism. By comparing these variants, we verify the performance gain brought by modeling global contexts using different matching mechanisms. For baseline methods (1-3 and 8) and our models, we test them under both InV and OOV settings. For the others (4-7), because they rely on embeddings that are only available for InV terms, we only test them under InV setting. 
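The Word2vec baseline above follows the SVD-over-shifted-PPMI construction of \citet{levy2014neural}, which might be sketched as follows (the shift $k$ and the embedding dimensionality are illustrative choices, not the exact experimental settings):

```python
import numpy as np

def sppmi(C, k=1.0):
    """Shifted positive PMI of a symmetric co-occurrence count matrix C."""
    total = C.sum()
    row = C.sum(axis=1, keepdims=True)
    col = C.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(C * total / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0          # zero counts contribute nothing
    return np.maximum(pmi - np.log(k), 0.0)

def svd_embeddings(M, dim):
    """Rank-dim factorization; rows are term embeddings."""
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :dim] * np.sqrt(S[:dim])  # symmetric split of the singular values
```

The resulting rows serve as the term features on top of which the bilinear scoring function is trained.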
\vspace{-5pt} \subsubsection{Candidate Selection and Performance Evaluation.} For evaluating baseline methods and our model, we experiment with two strategies: (1) Random candidate selection. For each query term, we randomly sample 100 non-synonyms as negative samples and mix them with synonyms for testing. This strategy is widely adopted by previous work on synonym discovery for testing efficiency~\cite{wang2015medical, zhang2019synonymnet}. (2) Inference-stage candidate selection. As mentioned in Section \ref{subsec:train-inference}, at the inference stage, we first obtain high potential candidates in a lightweight way. Specifically, after the context predictor is pre-trained, for all terms in the given graph as well as the query term, we generate their surface form vector $s$ and context semantic vector $v$ obtained by the static representation. Then we find the top 50 nearest neighbors of the query term based on $s$ and $v$, respectively, using cosine similarity. Finally, we apply our methods and baselines to re-rank the 100 high potential candidates. We refer to these two strategies as \textit{random candidate selection} and \textit{inference-stage candidate selection}. For evaluation, we adopt a popular ranking metric, Mean Average Precision (MAP), defined as $\textsf{MAP}=\frac{1}{|Q|} \sum_{i=1}^{|Q|}\frac{1}{m_i} \sum_{j=1}^{m_i} \textsf{Precision}(R_{ij})$, where $R_{ij}$ is the set of ranked terms from $1$ to $j$, $m_i$ is the length of the $i$-th list, and $|Q|$ is the number of queries. \vspace{-5pt} \subsubsection{Implementation details} \label{details} Our framework is implemented in Pytorch \cite{paszke2017automatic} with the Adam optimizer \cite{kingma2014adam}. The dimensions of character embeddings ($d_c$), word embeddings ($d_w$), surface vectors ($d_s$), and semantic vectors ($d_e$) are set to 100, 100, 128, and 128, respectively. Early stopping is used when the performance on the dev sets does not increase continuously for 10 epochs. We directly optimize Eqn.
\ref{eqn:context_loss} since the number of terms in our corpus is not very large, and set $f_s(\cdot)$ and $f_c(\cdot)$ to be cosine similarity and bilinear similarity function respectively, based on the model performance on the dev sets. When needed, string similarities are calculated by using the Distance package\footnote{https://github.com/doukremt/distance}. Pre-trained CharNgram \cite{hashimoto2017jmt} embeddings are borrowed from the authors\footnote{https://github.com/hassyGo/charNgram2vec}. For CHARAGRAM \cite{wieting2016charagram}, we initialize the n-gram embeddings by using pre-trained CharNgram and fine-tune them on our dataset by the synonym supervision. We learn LINE(2nd) embeddings \cite{tang2015line} by using OpenNE\footnote{https://github.com/thunlp/OpenNE}. Heuristic rule-based matching features of Concept Space model are implemented according to ~\cite{wang2015medical}. Code, datasets, and more implementation details are available online\footnote{\url{https://github.com/yzabc007/SurfCon}}. \vspace{-5pt} \subsection{Results and Analysis} \label{main-results} \subsubsection{Evaluation with {Random Candidate Selection}} We compare all methods under random candidate selection strategy with the results shown in Table \ref{tab:main-results}. \noindent \textbf{(1) Comparing \textsf{\textsc{SurfCon}}\xspace with surface form based methods.} \\ Our model beats all surface form based methods, including strong baselines such as SRN that use complicated sequence models to capture character-level information. This is because: 1) Bi-level encoder of \textsf{\textsc{SurfCon}}\xspace could capture surface form information from both character- and word-level, while baselines only consider either of them; 2) \textsf{\textsc{SurfCon}}\xspace captures global context information, which could complement surface form information for synonym discovery. 
In addition, in comparison with CharNgram and CHARAGRAM, our model variant \textsf{\textsc{SurfCon}}\xspace (Surf-Only), which also only uses surface form information, obtains consistently better performance, especially in the OOV Test set. The results demonstrate that adding word-level surface form information is useful to discover synonyms. \noindent \textbf{(2) Comparing \textsf{\textsc{SurfCon}}\xspace with global context based methods.} \\ \textsf{\textsc{SurfCon}}\xspace substantially outperforms all global context based methods (Word2vec, LINE(2nd), and DPE-NoP). This is largely due to the usage of surface form information. In fact, as one can see, global context based methods are generally inferior to surface form based methods, partly due to the fact that a large part of synonyms are similar in surface form, while only a small portion have very different surface forms. Thus, detecting synonyms without leveraging surface information can hardly lead to good results. Besides, our context matching component conducts context prediction and matching strategies, which takes better advantage of global context information and thus leads to better performance on the synonym discovery task. \noindent \textbf{(3) Comparing \textsf{\textsc{SurfCon}}\xspace with hybrid methods.} \\ We also compare our model with baselines that combine both surface form and global context information. First, \textsf{\textsc{SurfCon}}\xspace is superior to the concept space model because the latter simply concatenates distributional embeddings with rule-based string features (e.g., the number of shared words) and applies a logistic regression classifier for classification. Further, \textsf{\textsc{SurfCon}}\xspace also performs better than Planetoid, partly because our framework more explicitly leverages both surface form and global context information to formulate synonym scores, while Planetoid relies on one embedding vector for each term, which only uses surface form information as input.
\noindent \textbf{(4) Comparing \textsf{\textsc{SurfCon}}\xspace with its variants.} To better understand why \textsf{\textsc{SurfCon}}\xspace works well, we compare it with several variants. Under both datasets, \textsf{\textsc{SurfCon}}\xspace (Surf-Only) already outperforms all baselines, demonstrating the effectiveness of our bi-level surface form encoding component. With the context matching component in \textsf{\textsc{SurfCon}}\xspace (Static), the performance is further improved, especially under the \textit{InV Test Dissim} setting, where synonyms tend to have different surface forms and we observe around a 4\% performance gain. Further, by using the dynamic representation in the context matching mechanism, \textsf{\textsc{SurfCon}}\xspace obtains better results, which demonstrates that the dynamic representation is more effective at utilizing context information than the static strategy. \input{inference_results.tex} \vspace{-5pt} \subsubsection{Evaluation at Inference Stage} To further evaluate the power of our model in real practice, we test its performance at the inference stage as mentioned in Section \ref{subsec:train-inference}. Due to space constraints, we only show the comparison in Table \ref{main-results-practical} between \textsf{\textsc{SurfCon}}\xspace and several strong baselines revealed by Table \ref{tab:main-results}. In general, the performance of all methods decreases at the inference stage compared with the random candidate selection setting, because the constructed list of candidates becomes harder to rank since surface form and context information are already used for the construction. For example, a lot of non-synonyms with similar surface form are often included in the candidate list. Even though the task becomes harder, we still observe that our model outperforms the strong baselines by a large margin (e.g., around 8\% at least) under all settings. \begin{figure}[t!]
\centering \resizebox{\linewidth}{!}{ \subfloat{\includegraphics[]{para_gamma_final.pdf}} \subfloat{\includegraphics[]{para_num_contexts_final.pdf}} ~ } \vspace{-10pt} \caption{Performance w.r.t. (a) the coefficient of context score $\gamma$ and (b) the number of context terms $K$.} \label{fig:parameter_sensitivity} \vspace{-16pt} \end{figure} \vspace{-5pt} \subsubsection{Parameter Sensitivity} Here we investigate the effect of two important hyper-parameters: the coefficient $\gamma$, which balances the surface score and the context score, and the number of predicted contexts $K$ used for context matching. As shown in Figure \ref{fig:parameter_sensitivity}(a), the performance of \textsf{\textsc{SurfCon}}\xspace first improves as $\gamma$ increases, which is expected because as more semantic information is incorporated, \textsf{\textsc{SurfCon}}\xspace could detect more synonyms that are semantically similar. When we continue to increase $\gamma$, the performance begins to decrease, because surface form is also an important source of information that needs to be considered. \textsf{\textsc{SurfCon}}\xspace achieves the best performance roughly at $\gamma=0.3$, indicating that surface form information is relatively more helpful for the task than global context information. This also aligns well with our observation that synonyms more often than not have similar surface forms. Next, we show the impact of $K$ in Figure \ref{fig:parameter_sensitivity}(b). In general, when $K$ is small (e.g., $K=10$), the performance is not as good since little global context information is considered. Once $K$ increases to be large enough (e.g., $\geq50$), the performance is not sensitive to the variation under most settings, showing that we can choose a smaller $K$ for computational efficiency while still achieving good performance.
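For reference, the MAP metric used throughout the evaluation can be implemented directly from its definition, following the usual convention of accumulating precision at the ranks of relevant terms and normalizing by the number of relevant terms per query:

```python
def average_precision(ranked, relevant):
    """Precision accumulated at the ranks of relevant items."""
    hits, score = 0, 0.0
    for j, term in enumerate(ranked, start=1):
        if term in relevant:
            hits += 1
            score += hits / j  # precision of the top-j list
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """queries: list of (ranked_list, relevant_set) pairs, one per query term."""
    return sum(average_precision(r, s) for r, s in queries) / len(queries)
```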
\input{case_study.tex} \vspace{-8pt} \subsection{Case Studies} \vspace{-2pt} We further conduct case studies to show the effectiveness of \textsf{\textsc{SurfCon}}\xspace. Two query terms, "unable to vocalize" and "marijuana", are chosen respectively from the InV and OOV test sets, where the former is defined as the inability to produce voiced sound and the latter is a psychoactive drug used for medical or recreational purposes. As shown in Table \ref{tab:case_study}, for the InV query "unable to vocalize", our model can successfully detect its synonyms such as "unable to phonate", which already exists in the labeled synonym set collected based on the term-to-UMLS CUI mapping, as discussed in Section \ref{task-setting}. More impressively, our framework also discovers some highly semantically similar terms such as "does not vocalize" and "aphonia", even if some of them are quite different in surface form from the query term. For the OOV query "marijuana", \textsf{\textsc{SurfCon}}\xspace ranks its synonyms "marijuana abuse" and "cannabis" higher. Note that the other top-ranked terms are also very relevant to "marijuana". \section{Introduction} \label{sec:intro} Clinical texts in Electronic Medical Records (EMRs) are enriched with valuable information including patient-centered narratives, patient-clinician interactions, and disease treatment outcomes, which can be especially helpful for future decision making. To extract knowledge from unstructured clinical texts, synonym discovery \cite{wang2015medical} is an important task which can benefit many downstream applications. For example, when a physician issues a query term (e.g., "vitamin C") to find relevant clinical documents, automatically discovering its synonyms (e.g., "c vitamin", "vit c", "ascorbic acid") or even commonly misspelled variations (e.g., "viatmin c") can help to expand the query and thereby enhance the retrieval performance.
\begin{figure}[t] \resizebox{\linewidth}{!}{% \includegraphics[width=\linewidth, left]{intro_intuition.pdf}} \vspace{-15pt} \caption{Task illustration: We aim to discover synonyms for a given query term from privacy-aware clinical data by effectively leveraging two important types of information: Surface form and global contexts. \nop{echo in the introduction}} \vspace{-15pt} \label{fig:intro_intuition} \end{figure} For the sake of patient privacy and security, it is usually quite difficult, if not impossible, for medical institutes to grant public access to large-scale raw or even de-identified clinical texts \cite{beam2018clinical}. Consequently, medical terms\footnote{A medical term is a single- or multi-word string (e.g., "Aspirin", "Acetylsalicylic Acid").} and their aggregated co-occurrence counts extracted from raw clinical texts are becoming a popular (although not perfect) substitute for raw clinical texts for the research community to study EMR data~\cite{finlayson2014building, ta2018columbia, beam2018clinical}. For example, \citet{finlayson2014building} released millions of medical terms extracted from the clinical texts in Stanford Hospitals and Clinics as well as their global co-occurrence counts, rather than releasing raw sentences/paragraphs/documents from the clinical text corpus. In this work, we refer to the given set of medical terms and their co-occurrence statistics in a clinical text corpus as \textit{privacy-aware} clinical data, and {investigate the synonym discovery task on such data ({Figure \ref{fig:intro_intuition}}): \textit{Given a set of terms extracted from clinical texts as well as their global co-occurrence graph\footnote{where each node is a medical term and each edge between two nodes is weighted by the number of times that two terms co-occur in a given context window.}, recommend a list of synonyms for a query term}.
Developing effective approaches under this setting is particularly meaningful, as it would suggest that one can utilize less sensitive information (i.e., co-occurrence statistics rather than raw sentences in clinical texts) to perform the task well}. A straightforward approach to obtain synonyms is to map the query term to a knowledge base (KB) entity and retrieve its synonyms or aliases stored in the KBs. However, it is widely known that KBs are incomplete and outdated, and their coverage of synonyms can be very limited~\cite{wang2015knowledge}. In addition, the informal writing of clinical texts often contains variants of surface forms, layman terms, frequently misspelled words, and locally practiced abbreviations, which should be mined to enrich synonyms in KBs. Recent works~\cite{wang2015medical, qu2017automatic, zhang2019synonymnet} have focused on automatic synonym discovery from massive text corpora such as Wikipedia articles and PubMed paper abstracts. {When predicting if two terms are synonyms or not, such approaches usually leverage the original sentences (a.k.a. \textit{local} contexts) mentioning them, and hence do not apply or work well under our privacy-aware data setting where such sentences are unavailable.} {Despite the lack of local contexts, {we observe} two important types of information carried in the privacy-aware data - surface form information and global context information (i.e., co-occurrence statistics).} In this work, we aim to effectively leverage these two types of information for synonym discovery, {as shown in Figure \ref{fig:intro_intuition}}. Some recent works~\cite{neculoiu2016learning, mueller2016siamese} model the similarity between terms at the character level. For example, \citet{mueller2016siamese} learn the similarity between two sequences of characters, which can be applied to discover synonyms that look alike, such as "vit c" and "vitamin c".
However, we observe two common phenomena that such approaches cannot address well and would induce false positive and false negative predictions respectively: (1) Some terms are similar in surface form but do not have the same meaning (e.g., "hemostatic" and "homeostasis", where the former means a process stopping bleeding while the latter refers to a constant internal environment in the human body); (2) Some terms have the same meaning but are different in surface form (e.g., "ascorbic acid" and "vitamin c" are the same medicinal product but look different). On the other hand, given a term co-occurrence graph, various distributional embedding methods such as \cite{pennington2014glove, tang2015line, levy2014neural} have been proposed to learn a {distributional} representation (a.k.a. embedding) for each term based on its \textit{global} contexts (i.e., terms connected to it in the co-occurrence graph). The main idea behind such methods is that two terms should have similar embedding vectors if they share a lot of global contexts. However, we observe that the privacy-aware clinical data tends to be very \textit{noisy} due to the original data processing procedure\footnote{\normal{This tends to be a common issue in many scenarios as raw data has to go through various pre-processing steps for privacy concerns.}}, which presents new challenges for utilizing global contexts to model semantic similarity between terms. For example, \citet{finlayson2014building} prune the edges between two terms co-occurring fewer than 100 times, which can lead to missing edges between two related terms in the co-occurrence graph. \citet{ta2018columbia} remove all concepts with singleton frequency counts below 10. Hence, \normal{the noisy nature of the co-occurrence graph makes it less accurate to embed a term based on its original contexts.
Moreover, when performing the synonym discovery task, users are very likely to issue a query term that does not appear in the given co-occurrence data. We refer to such query terms as Out-of-Vocabulary (OOV). Unlike In-Vocabulary\footnote{Query terms that appear in the given co-occurrence graph are referred to as In-Vocabulary (InV).} query terms, OOV query terms do not have their global contexts readily available in the given graph, which makes synonym discovery even more challenging}. In this paper, to address the above challenges and effectively utilize both the \ul{surf}ace form and the global \ul{con}text information in the privacy-aware clinical data, we propose a novel framework named {\textsf{\textsc{SurfCon}}\xspace} which consists of a bi-level surface form encoding component and a context matching component, both based on neural models. The bi-level surface form encoding component exploits both character- and word-level information to encode a medical term into a vector. It enables us to compute a surface score of two terms based on their encoding vectors. As mentioned earlier, such a surface score works well for detecting synonyms that look similar in surface form. However, it tends to miss synonymous terms that do not look alike. Therefore, we propose the context matching component to model the semantic similarity between terms, which plays a complementary role in synonym discovery. Our context matching component first utilizes the bi-level surface form encoding vector for a term to predict its potential global contexts. Using predicted contexts rather than the raw contexts in the given graph enables us to handle OOV query terms and also turns out to be effective for InV query terms. Then we generate a semantic vector for each term by aggregating the semantic features from predicted contexts using two mechanisms - static and dynamic representation.
Specifically, given term $a$ and term $b$, the dynamic mechanism aims to learn to weigh the importance of individual terms in $a$'s contexts based on their {semantic matching degree} with $b$'s contexts, while the static mechanism assigns equal weights to all terms in one's contexts. The former takes better advantage of individual terms within the contexts and empirically demonstrates superior performance. Our contributions are threefold: \begin{itemize}[leftmargin=*] \item We study the task of synonym discovery under a new setting, i.e., on privacy-aware clinical data, where only a set of medical terms and their co-occurrence statistics are given, and local contexts (e.g., sentences mentioning a term in a corpus) are not available. It is a practical setting given the wide concern about patient privacy regarding access to clinical texts, and it also presents unique challenges to address for effective synonym discovery. \item We propose a novel and effective framework named \textsf{\textsc{SurfCon}}\xspace that can discover synonyms for both In-Vocabulary (InV) and Out-of-Vocabulary (OOV) query terms. \textsf{\textsc{SurfCon}}\xspace considers two complementary types of information {based on neural models} - surface form information and global context information of a term, where the former works well for detecting synonyms that are similar in surface form while the latter can help better find synonyms that do not look alike but are semantically similar. \item We conduct extensive experiments on publicly available privacy-aware clinical data and demonstrate the effectiveness of our framework in comparison with various baselines and our own model variants. \end{itemize} \section{\textsf{\textsc{SurfCon}}\xspace Framework} \label{sec:framework} In this section, we introduce our proposed framework \textsf{\textsc{SurfCon}}\xspace for synonym discovery on privacy-aware clinical data.
\vspace{-5pt} \subsection{Overview} \label{subsec:framework-overview} We observe two important types of information carried in the privacy-aware clinical data: surface form information of a medical term and the global contexts from the given co-occurrence graph. On the one hand, existing approaches \cite{neculoiu2016learning} using character-level features to detect synonyms could work well when synonyms share a high string similarity, but tend to produce false positive predictions (when two terms look similar but are not synonyms, e.g., "hemostatic" and "homeostasis") and false negative predictions (when two terms are synonyms but look very different, e.g., "ascorbic acid" and "vitamin c"). On the other hand, the global contexts of a term under the privacy-aware setting tend to be noisy partly due to the original data pre-processing procedure, which also presents challenges for using them to model the semantic similarity between terms. Thus, a framework that is able to effectively leverage these two types of information needs to be carefully designed. \begin{figure}[t!] \centering \resizebox{\linewidth}{!}{% \includegraphics[width=\linewidth]{framework.pdf}} \vspace{-20pt} \caption{{Framework overview. For each query term, a list of candidate terms will be ranked based on both the surface and context scores.}} \vspace{-15pt} \label{fig:framework-overview} \end{figure} Towards that end, we propose \textsf{\textsc{SurfCon}}\xspace (Figure \ref{fig:framework-overview}) and summarize its high-level ideas as below: \noindent (1) Given a query term (whether being InV or OOV), the {bi-level surface form encoding component} and the context matching component score a candidate term\footnote{Every term in the given co-occurrence graph can be a candidate term.} respectively based on the surface form information and global context information. 
The former enables us to find synonyms that look similar to the query term by considering both character- and word-level information, and the latter complements it by capturing the semantic similarity between terms to better address the false positive and false negative problems mentioned earlier. \noindent (2) Considering that the original global contexts are noisy and that OOV query terms exist, instead of directly leveraging the raw global contexts, the context matching component will first utilize the surface form encoding vector of a term to \textit{predict} its potential global contexts\footnote{For terms in the co-occurrence graph, predicting contexts can be treated as denoising its original global contexts (or edges)}. We then investigate a novel dynamic context matching mechanism (see Section \ref{subsubsec:context-matching} for details) to evaluate if two terms are synonyms based on their predicted contexts. \noindent (3) The two components are combined by a weighted score function, in which parameters are jointly optimized with a widely used ranking algorithm, ListNet \cite{cao2007learning}. At testing time, given a query term, candidate terms are ranked based on the optimized score function. \vspace{-5pt} \subsection{Methodology} Now we describe the two components of \textsf{\textsc{SurfCon}}\xspace: Bi-level Surface Form Encoding and Context Matching in detail. \label{subsec:methodology} \subsubsection{\textbf{Bi-level Surface Form Encoding}} \label{subsubsec:bi-level-encoding} The bi-level surface form encoding of our framework aims to model the similarity between two terms at the surface form level, as we observe that two terms tend to be synonymous if they are very similar in surface forms. This observation is intuitive but works surprisingly well in the synonym discovery task. Driven by this observation, we design the bi-level surface form encoding component so that both character- and word-level information of terms is captured.
Then, a score function is defined to measure the surface form similarity for a pair of terms based on their bi-level encoding vectors. The bi-level encoders are able to encode surface form information of both InV terms and OOV terms. Specifically, as shown in Figure \ref{fig:framework-overview}, given a query term $q$ and a candidate term $c$, we denote their character-level sequences as $x_q=\{x_{q, 1}, ..., x_{q, m_q}\}, x_c=\{x_{c, 1}, ..., x_{c, m_c}\}$, and their word-level sequences as $w_q=\{w_{q, 1}, ..., w_{q, n_q}\}, w_c=\{w_{c, 1}, ..., w_{c, n_c}\}$, where $m_q,n_q,m_c,n_c$ are the lengths of the character- and word-level sequences of the query and candidate terms, respectively. Then we build two encoders $\text{ENC}^{ch}$ and $\text{ENC}^{wd}$ to capture the surface form information at the character- and word-level respectively: \begin{equation} \label{eqn:encoder} \small \begin{aligned} s_q^{ch}&=\text{ENC}^{ch}(x_{q,1},...,x_{q,m_q}), s_q^{wd}=\text{ENC}^{wd}(w_{q,1},...,w_{q,n_q})\\ s_c^{ch}&=\text{ENC}^{ch}(x_{c,1},...,x_{c,m_c}), s_c^{wd}=\text{ENC}^{wd}(w_{c,1},...,w_{c,n_c}) \end{aligned} \end{equation} \noindent where $s_q^{ch}, s_c^{ch}\in \mathbb{R}^{d_c}$ are the character-level embeddings for the query and candidate terms, and $s_q^{wd},s_c^{wd}\in \mathbb{R}^{d_w}$ are the word-level embeddings for the query and candidate terms respectively. Note that there has been a surge of effective encoders that model sequential information at the character or word level, ranging from simple look-up tables (e.g., character n-gram~\cite{hashimoto2017jmt} and Skip-Gram~\cite{mikolov2013distributed}) to complicated neural network architectures (e.g., CNN~\cite{kim2016character}, LSTM~\cite{ballesteros2015improved}, and Transformer~\cite{vaswani2017attention}). For simplicity, we adopt simple look-up tables for both character-level and word-level embeddings.
Instead of randomly initializing them, we borrow pre-trained character n-gram embeddings from \citet{hashimoto2017jmt} and word embeddings from \citet{pennington2014glove}. Our experiments also demonstrate that these simple encoders can well encode surface form information of medical terms for the synonym discovery task. We leave {evaluating} more complicated encoders as our future work. After we obtain the embeddings at both levels, we concatenate them and apply a nonlinear function to get the surface vector $s$ for the query and candidate term. Let us denote such an encoding process as a function $h(\cdot)$ with the input as term $q$ or $c$ and the output as the surface vector $s_q$ or $s_c$: \begin{equation} \begin{aligned} s_q&=h(q)=\text{tanh}( [s_q^{ch},s_q^{wd}] W_s + b_s),\\ s_c&=h(c)=\text{tanh}( [s_c^{ch},s_c^{wd}] W_s + b_s) \end{aligned} \end{equation} \noindent where the surface vectors $s_q, s_c\in \mathbb{R}^{d_s}$, and $W_s \in \mathbb{R}^{(d_c+d_w)\times d_s}, b_s \in \mathbb{R}^{d_s}$ are the weight matrix and bias for a fully-connected layer. Next, we define the surface score for a query term $q$ and a candidate term $c$ to measure the surface form similarity based on their encoding vectors $s_q$ and $s_c$: \begin{equation} \textsf{Surface Score}\,(q, c) = f_s(s_q, s_c) \end{equation} \subsubsection{\textbf{Context Matching}} \label{subsubsec:context-matching} In order to discover synonyms that are not similar in surface form, and also observing that two terms tend to be synonyms if their global contexts in the co-occurrence graph are semantically very relevant, we design the context matching component to capture the semantic similarity of two terms by carefully leveraging their global contexts.
We first illustrate the intuition behind this component using a toy example: \newtheorem{exam}{Example} \begin{exam} \label{example:contecxt-matching} \textbf{\emph{[Toy Example for Illustration.]}} Assume we have a query term \textit{"vitamin c"} and a candidate term \textit{"ascorbic acid"}. The former is connected with two terms \textit{"iron absorption"} and \textit{"vitamin b"} in the co-occurrence graph as global contexts, while the latter has \textit{"fatty acids"} and \textit{"anemia"} as global contexts. \end{exam} \noindent Our context matching component essentially aims to use a term's contexts to represent its semantic meaning, and a novel \textit{dynamic context matching mechanism} is developed to determine the importance of each individual term in one's contexts. For example, \textit{"iron absorption"} is closely related to \textit{"anemia"} since the disease "anemia" is most likely to be caused by iron deficiency. Based on this observation, we aim to increase the relative importance of \textit{"iron absorption"} and \textit{"anemia"} in their respective context sets when representing the semantic meaning of \textit{"vitamin c"} and \textit{"ascorbic acid"}. Therefore, we develop the dynamic context matching mechanism, to be introduced shortly. In order to recover global contexts for OOV terms, and also noticing the noisy nature of the co-occurrence graph mentioned earlier, we propose an \textit{inductive context prediction module} to predict the global contexts for a term based on its surface form information instead of relying on the raw global contexts in the given co-occurrence graph. \noindent \textbf{Inductive Context Prediction Module}. Let us first denote a general medical term as $t$. For a term-term co-occurrence graph, we treat all InV terms as possible context terms and denote them as $\{u_j\}_{j=1}^{|V|}$ where $|V|$ is the total number of terms in the graph.
The inductive context prediction module aims to predict how likely term $u_j$ is to appear in the context of $t$ (denoted as the conditional probability $p\,(u_j|t)$). To learn a good context predictor, we utilize {all} existing terms in the graph as term $t$, i.e., $t \in \{u_i\}_{i=1}^{|V|}$, and the conditional probability becomes $p\,(u_j|u_i)$. Formally, the probability of observing term $u_j$ in the context of term $u_i$ is denoted as: \vspace{-10pt} \begin{equation} p\,(u_j|u_i)= \frac{\text{exp} \, (\nu_{u_j}^T\cdot s_{u_i})} {\sum_{k=1}^{|V|}\text{exp} \, (\nu_{u_k}^T \cdot s_{u_i})} \end{equation} where $s_{u_i}=h(u_i)$ and $h(\cdot)$ is the same encoder function defined in section \ref{subsubsec:bi-level-encoding}. $\nu_{u_j} \in \mathbb{R}^{d_o}$ is the context embedding vector corresponding to term $u_j$ and we let $d_o=d_s$. The predicted distribution $p\,(u_j|u_i)$ is optimized to be close to the empirical distribution $\hat{p}\,(u_j|u_i)$ defined as: \vspace{-10pt} \begin{equation} \hat{p}\,(u_j|u_i)= \frac{w_{ij}}{\sum_{(i,k)\in E} w_{ik}} \end{equation} where $E$ is the set of edges in the co-occurrence graph and $w_{ij}$ is the weight between term $u_i$ and term $u_j$. We adopt the cross entropy loss function for optimization: \begin{equation} \label{eqn:context_loss} L_n= -\sum_{u_i,u_j \in V} \hat{p}(u_j|u_i)\ \text{log} \, (p(u_j|u_i)) \end{equation} When the number of terms in the graph $|V|$ is very large, it is computationally costly to calculate the conditional probability $p\,(u_j|u_i)$, and one can utilize the negative sampling algorithm~\cite{mikolov2013efficient} to train our inductive context predictor efficiently. The loss function Eqn.
\ref{eqn:context_loss} can be modified as: \begin{equation} \log \sigma(\nu_{u_j}^T\cdot s_{u_i}) + \sum_{n=1}^{N_0} E_{u_n \sim P_n(u)}[\log \sigma (-\nu_{u_n}^T \cdot s_{u_i})] \end{equation} \noindent where $\sigma(x)=1/(1+\exp(-x))$ and $u_n$ is a negative sample drawn from the noise distribution $P_n(u)\propto d_{u}^{3/4}$. $N_0$ is the number of negative samples and $d_{u}$ is the degree of term $u$ in the co-occurrence graph. Now, given a term $t$ (either InV or OOV), we can select the top-$K$ terms as its predicted contexts based on the predicted probability distribution $p\,(\cdot|t)$. Next, we describe the dynamic context matching mechanism to model the semantic similarity of two terms based on their predicted contexts. \noindent \textbf{Dynamic Context Matching {Mechanism}}. Inspired by previous works on neighborhood aggregation based graph embedding methods~\cite{hamilton2017inductive, velickovic2017graph}, which generate an embedding vector for an InV node by aggregating features from its neighborhood (contexts), we introduce two semantic vectors, $v_q, v_c \in \mathbb{R}^{d_e}$, for the query term and the candidate term respectively, and learn them by aggregating the feature vectors of their corresponding {top-$K$} predicted contexts from the previous module. Let us define $v_q^i \in \mathbb{R}^{d_e}$ as the feature vector of the $i$-th term in query term $q$'s context and $v_c^j \in \mathbb{R}^{d_e}$ as the feature vector of the $j$-th term in candidate term $c$'s context, and their context sets as $\Phi(q)=\{v_q^i\}_{i=1}^K$, $\Phi(c)=\{v_c^j\}_{j=1}^K$. Essentially, as we aim to capture the semantic meaning of terms, the feature vectors $v_q^i$'s and $v_c^j$'s are expected to contain semantic information. Since all predicted context terms are InV terms (i.e., in the co-occurrence graph), we can adopt widely used graph embeddings, such as LINE(2nd)~\cite{tang2015line}, as their feature vectors.
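To make the inductive context prediction module concrete, the following minimal sketch scores every InV term as a potential context of a (possibly OOV) term and keeps the top-$K$. This is an illustrative NumPy sketch, not the paper's implementation: the surface encoding $s_t$, the context embedding matrix, and all shapes are assumed toy values.

```python
import numpy as np

def predict_topk_contexts(s_t, context_emb, k):
    """Rank all InV terms as potential contexts of a term.

    s_t         : (d,) surface-form encoding h(t) of the (possibly OOV) term
    context_emb : (|V|, d) matrix whose rows are context embeddings nu_u
    k           : number of predicted contexts to keep

    Implements p(u | t) = softmax(nu_u . s_t) over all InV terms and
    returns the indices of the k most probable context terms.
    """
    logits = context_emb @ s_t                 # (|V|,) inner products
    logits = logits - logits.max()             # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
    return np.argsort(-p)[:k], p

# Toy example: 5 InV terms with 4-dimensional encodings.
rng = np.random.default_rng(0)
context_emb = rng.normal(size=(5, 4))
s_t = rng.normal(size=4)
top_k, p = predict_topk_contexts(s_t, context_emb, k=2)
```

In training, $s_t$ would come from the bi-level encoder and the full softmax would be replaced by the negative-sampling objective above; at inference, the same predictor supplies contexts for OOV query terms.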
One naive way to obtain the context semantic vectors $v_q$ and $v_c$ is to average the vectors in their respective context sets. Since such a $v_q$ (or $v_c$) does not depend on the other term, we refer to such vectors as "static" representations for terms. \vspace{-10pt} \begin{figure}[htbp!] \centering \includegraphics[width=0.9\linewidth]{dynamic_matching.pdf} \vspace{-10pt} \caption{Dynamic Context Matching Mechanism.} \vspace{-10pt} \label{fig:dynamic_matching} \end{figure} In contrast to the static approach, we propose the \textit{dynamic context matching mechanism} (as shown in Figure \ref{fig:dynamic_matching}), which weighs each term in the context of $q$ (or $c$) based on its matching degree with terms in the context of $c$ (or $q$), and hence the context semantic vector $v_q$ (or $v_c$) changes \textit{dynamically} depending on which terms it is compared with. More specifically, let us define $g(x, y)=\text{tanh}(xW_my^T)$ as a nonlinear function parameterized with weight matrix $W_m\in \mathbb{R}^{d_e\times d_e}$ to measure the similarity between two row vectors $x$ and $y$. For each context vector $v_q^i$ of the query term, {we calculate its weight based on how it matches with $c$'s contexts overall}: \begin{equation} \textsf{match} \,[v_q^i, \Phi(c)] = \textsf{Pooling}\, [g(v_q^i, v_c^1), ..., g(v_q^i, v_c^K)] \end{equation} For the pooling operation, we empirically choose the \textsf{mean} pooling strategy as it performs better than {alternatives such as \textsf{max} pooling} in our experiments.
Then we normalize the weight of $v_q^i$ as: \begin{equation} \alpha_q^i = \frac{\Large{\textit{e}}^{\;\textsf{match}[v_q^i, \Phi(c)]}}{\sum_{k=1}^K \Large{\textit{e}}^{\;\textsf{match}[v_q^{k}, \Phi(c)]}} \end{equation} Finally, the context semantic vector for the query term $v_q$ is calculated through a weighted combination of $q$'s contexts: \begin{equation} v_q = \sum_{i=1}^K \alpha_q^i \cdot v_q^i \end{equation} Following the same procedure, we can obtain the context semantic vector $v_c$ for the candidate term w.r.t. the query term. Then we define the context score for a query term $q$ and a candidate term $c$ to measure their semantic similarity based on $v_q$ and $v_c$: \begin{equation} \textsf{Context Score} \, (q, c)=f_c(v_q, v_c) \end{equation} \subsection{Model Optimization and Inference} \label{subsec:train-inference} \textbf{Objective Function.} Given a query term $q$ and a candidate term $c$, to capture their similarity based on surface forms and global contexts, we define the final score function as: \begin{equation} \label{eqn:final-score} f(q, c) = (1-\gamma) \cdot f_s(s_q, s_c) + \gamma \cdot f_c(v_q, v_c) \end{equation} \noindent {$f_s(\cdot)$ and $f_c(\cdot)$ are similarity functions between two vectors, e.g., cosine similarity or bilinear similarity.} Now we obtain the recommendation probability of each candidate $t_i \in \{t_1, ..., t_N\}$ given a query $q$: \begin{equation} p(t_i|q)=\frac{\Large{\textit{e}}^{\, f(q, t_i)}}{\sum_{k=1}^{N} \Large{\textit{e}}^{\,f(q, t_k)}} \end{equation} where $N$ is the size of the candidate set. 
Finally, we adopt the ListNet~\cite{cao2007learning} ranking framework which minimizes the cross entropy loss for query term $q$: \vspace{-10pt} \begin{equation} \label{eqn:ranking_loss} L_r= -\sum_{i=1}^{N} p^*(t_i|q) \ \text{log} \, p(t_i|q) \end{equation} where $p^*(t_i|q)$ is the normalized ground-truth distribution over a list of ranking scores $\{r_i\}_{i=1}^N$, where $r_i$ equals $1$ if $q$ and $t_i$ are synonyms and $0$ otherwise. \noindent \textbf{Training}. For efficiency concerns, we adopt a two-phase training strategy: We first train the inductive context prediction module with the loss function $L_n$ (Eqn. \ref{eqn:context_loss}) on the term-term co-occurrence graph, sample the top-$K$ contexts based on the predicted probability distribution, and use them in the context matching component. Then, we train the ranking framework by minimizing the ranking loss $L_r$ (Eqn. \ref{eqn:ranking_loss}). \noindent \textbf{Inference}. At the inference stage, we treat all InV terms as candidates for a given query. Since the dynamic representation mechanism involves pairwise term matching between the contexts of the query term and those of each candidate term and can incur a high computational cost when the candidate set is large, we adopt a two-step strategy: (1) For a given query term, select its top-N high-potential candidates based on the surface form encoding vector and the context semantic vector obtained by the static representation mechanism; (2) Re-rank the selected candidates by applying our \textsf{\textsc{SurfCon}}\xspace framework with the dynamic representation mechanism. \section{Conclusion} \vspace{-2pt} In this paper, we study synonym discovery on privacy-aware clinical data, which is a new yet practical setting and consumes less sensitive information to discover synonyms.
We propose a novel and effective framework named \textsf{\textsc{SurfCon}}\xspace that considers both the surface form information and the global context information, can handle both InV and OOV query terms, and substantially outperforms various baselines on real-world datasets. As future work, we will extend \textsf{\textsc{SurfCon}}\xspace to infer more semantic relationships (besides synonymy) between terms and test it on more real-life datasets. \section{Related Work} \vspace{-2pt} \noindent \textbf{Character Sequence Encoding.} To capture the character-level information of terms, neural network models such as Recurrent Neural Networks and Convolutional Neural Networks can be applied to character sequences~\cite{ballesteros2015improved,kim2016character}. Further, CHARAGRAM~\cite{wieting2016charagram}, FastText~\cite{bojanowski2016enriching}, and CharNGram~\cite{hashimoto2017jmt} have been proposed to represent terms and their morphological variants by capturing shared subword and $n$-gram information. However, modeling character-level sequence information only is less capable of discovering semantically similar synonyms, {and our framework considers global context information to discover those synonyms.} \noindent \textbf{Word and Graph/Network Embedding.} Word embedding methods such as word2vec~\cite{mikolov2013distributed} and GloVe~\cite{pennington2014glove} have been proposed and successfully applied to mining relations of medical phrases~\cite{wang2015medical,pakhomov2016corpus}. More recently, there has been a surge of graph embedding methods that seek to encode structural graph information into low-dimensional dense vectors, such as DeepWalk~\cite{perozzi2014deepwalk} and LINE~\cite{tang2015line}. Most of the embedding methods can only learn embedding vectors for words in the corpus or nodes in the graph, and thus fail to address the OOV issue.
On the other hand, some more recent inductive graph embedding works, such as Planetoid~\cite{yang2016revisiting}, GraphSAGE~\cite{hamilton2017inductive}, and SEANO~\cite{liang2018semi}, could generate embeddings for nodes that are unobserved in the training phase by utilizing their node features (e.g., text attributes). \textit{However, most of them assume the neighborhood of those unseen nodes is known, which is not the case for our OOV issue as the real contexts of an OOV term are unknown.} Since Planetoid~\cite{yang2016revisiting} can generate node embeddings based on node features such as character sequence encoding vectors, it can handle the OOV issue and is chosen as a baseline model. \noindent \textbf{Synonym Discovery.} A variety of methods have been proposed to detect synonyms of medical terms, ranging from utilizing lexical patterns~\cite{weeds2004characterising} and clustering~\cite{matsuo2006graph} to distributional semantics models~\cite{hagiwara2009supervised}. There are some more recent works on automatic synonym discovery~\cite{wang2015medical,qu2017automatic,zhang2019synonymnet, Shen2019SynSetMine}. For example, \citet{wang2015medical} try to learn better embeddings for terms in medical corpora by incorporating their semantic types and then build a linear classifier to decide whether a pair of medical terms are synonyms or not. \citet{qu2017automatic} combine distributional and pattern based methods for automatic synonym discovery. However, many aforementioned models focus on finding synonyms based on raw text information, which is not suitable for our privacy-aware clinical data. In addition, nearly all methods could only find synonyms for terms that appear in the training corpus and thus cannot address the OOV query terms.
\section{Introduction} \label{sec:intro} Clinical texts in Electronic Medical Records (EMRs) are enriched with valuable information including patient-centered narratives, patient-clinician interactions and disease treatment outcomes, which can be especially helpful for future decision making. To extract knowledge from unstructured clinical texts, synonym discovery \cite{wang2015medical} is an important task which can benefit many downstream applications. For example, when a physician issues a query term (e.g., "vitamin C") to find relevant clinical documents, automatically discovering its synonyms (e.g., "c vitamin", "vit c", "ascorbic acid") or even commonly misspelled variations (e.g. "viatmin c") can help to expand the query and thereby enhance the retrieval performance. \begin{figure}[t] \resizebox{\linewidth}{!}{% \includegraphics[width=\linewidth, left]{intro_intuition.pdf}} \vspace{-15pt} \caption{Task illustration: We aim to discover synonyms for a given query term from privacy-aware clinical data by effectively leveraging two important types of information: Surface form and global contexts. \nop{echo in the introduction}} \vspace{-15pt} \label{fig:intro_intuition} \end{figure} For the sake of patient privacy and security, it is usually quite difficult, if not impossible, for medical institutes to grant public access to large-scale raw or even de-identified clinical texts \cite{beam2018clinical}. Consequently, medical terms\footnote{A medical term is a single- or multi-word string (e.g., "Aspirin", "Acetylsalicylic Acid").} and their aggregated co-occurrence counts extracted from raw clinical texts are becoming a popular (although not perfect) substitute for raw clinical texts for the research community to study EMR data~\cite{finlayson2014building, ta2018columbia, beam2018clinical}. 
For example, \citet{finlayson2014building} released millions of medical terms extracted from the clinical texts in Stanford Hospitals and Clinics as well as their global co-occurrence counts, rather than releasing raw sentences/paragraphs/documents from the clinical text corpus. In this work, we refer to the given set of medical terms and their co-occurrence statistics in a clinical text corpus as \textit{privacy-aware} clinical data, and {investigate the synonym discovery task on such data ({Figure \ref{fig:intro_intuition}}): \textit{Given a set of terms extracted from clinical texts as well as their global co-occurrence graph\footnote{where each node is a medical term and each edge between two nodes is weighted by the number of times the two terms co-occur in a given context window.}, recommend a list of synonyms for a query term}. Developing effective approaches under this setting is particularly meaningful, as it would suggest that one can utilize less sensitive information (i.e., co-occurrence statistics rather than raw sentences in clinical texts) to perform the task well}. A straightforward approach to obtaining synonyms is to map the query term to a knowledge base (KB) entity and retrieve the synonyms or aliases stored in the KB. However, it is widely known that KBs are incomplete and outdated, and their coverage of synonyms can be very limited~\cite{wang2015knowledge}. In addition, the informal writing style of clinical texts often introduces surface form variants, layman terms, frequently misspelled words, and locally practiced abbreviations, which should be mined to enrich the synonyms in KBs. Recent works~\cite{wang2015medical, qu2017automatic, zhang2019synonymnet} have focused on automatic synonym discovery from massive text corpora such as Wikipedia articles and PubMed paper abstracts. {When predicting if two terms are synonyms or not, such approaches usually leverage the original sentences (a.k.a.
\textit{local} contexts) mentioning them, and hence do not apply or work well under our privacy-aware data setting where such sentences are unavailable.} {Despite the lack of local contexts, {we observe} two important types of information carried in the privacy-aware data - surface form information and global context information (i.e., co-occurrence statistics).} In this work, we aim to effectively leverage these two types of information for synonym discovery, {as shown in Figure \ref{fig:intro_intuition}}. Some recent works~\cite{neculoiu2016learning, mueller2016siamese} model the similarity between terms at the character level. For example, \citet{mueller2016siamese} learn the similarity between two sequences of characters, which can be applied to discover synonyms that look alike such as "vit c" and "vitamin c". However, we observe two common phenomena that such approaches cannot address well and that would induce false positive and false negative predictions respectively: (1) Some terms are similar in surface form but do not have the same meaning (e.g., "hemostatic" and "homeostasis", where the former refers to the process of stopping bleeding while the latter refers to a constant internal environment in the human body); (2) Some terms have the same meaning but are different in surface form (e.g., "ascorbic acid" and "vitamin c" are the same medicinal product but look different). On the other hand, given a term co-occurrence graph, various distributional embedding methods such as \cite{pennington2014glove, tang2015line, levy2014neural} have been proposed to learn a {distributional} representation (a.k.a. embedding) for each term based on its \textit{global} contexts (i.e., terms connected to it in the co-occurrence graph). The main idea behind such methods is that two terms should have similar embedding vectors if they share a lot of global contexts.
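This idea can be illustrated with a minimal sketch in the style of Levy and Goldberg (2014): convert raw co-occurrence counts to PPMI values and factorize the matrix with a truncated SVD, so that terms sharing many global contexts receive nearby vectors (the toy matrix and embedding dimensionality below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def ppmi_svd_embeddings(counts, dim=2):
    """Embed each term from a symmetric co-occurrence count matrix
    by computing PPMI values and truncating an SVD factorization."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    marg = counts.sum(axis=1)  # per-term marginal counts
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(counts * total / np.outer(marg, marg))
    ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)  # clip to PPMI
    u, s, _ = np.linalg.svd(ppmi)
    return u[:, :dim] * np.sqrt(s[:dim])  # one row vector per term
```

Two terms with identical context rows end up with identical embedding vectors, which is exactly the signal these global context based methods exploit.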
However, we observe that the privacy-aware clinical data tends to be very \textit{noisy} due to the original data processing procedure\footnote{\normal{This tends to be a common issue in many scenarios as raw data has to go through various pre-processing steps for privacy concerns.}}, which presents new challenges for utilizing global contexts to model semantic similarity between terms. For example, \citet{finlayson2014building} prune the edges between two terms co-occurring fewer than 100 times, which can lead to missing edges between two related terms in the co-occurrence graph. \citet{ta2018columbia} remove all concepts with singleton frequency counts below 10. Hence, \normal{the noisy nature of the co-occurrence graph makes it less accurate to embed a term based on its original contexts. Moreover, when performing the synonym discovery task, users are very likely to issue a query term that does not appear in the given co-occurrence data. We refer to such query terms as Out-of-Vocabulary (OOV). Unlike In-Vocabulary\footnote{Query terms that appear in the given co-occurrence graph are referred to as In-Vocabulary (InV).} query terms, OOV query terms do not have their global contexts readily available in the given graph, which makes synonym discovery even more challenging}. In this paper, to address the above challenges and effectively utilize both the \ul{surf}ace form and the global \ul{con}text information in the privacy-aware clinical data, we propose a novel framework named {\textsf{\textsc{SurfCon}}\xspace} which consists of a bi-level surface form encoding component and a context matching component, both based on neural models. The bi-level surface form encoding component exploits both character- and word-level information to encode a medical term into a vector. It enables us to compute a surface score of two terms based on their encoding vectors. As mentioned earlier, such a surface score works well for detecting synonyms that look similar in surface form.
However, it tends to miss synonymous terms that do not look alike. Therefore, we propose the context matching component to model the semantic similarity between terms, which plays a complementary role in synonym discovery. Our context matching component first utilizes the bi-level surface form encoding vector of a term to predict its potential global contexts. Using predicted contexts rather than the raw contexts in the given graph enables us to handle OOV query terms and also turns out to be effective for InV query terms. Then we generate a semantic vector for each term by aggregating the semantic features from predicted contexts using two mechanisms - the static and the dynamic representation mechanism. Specifically, given term $a$ and term $b$, the dynamic mechanism aims to learn to weigh the importance of individual terms in $a$'s contexts based on their {semantic matching degree} with $b$'s contexts, while the static mechanism assigns equal weights to all terms in one's contexts. The former takes better advantage of individual terms within the contexts and empirically demonstrates superior performance. Our contributions are threefold: \begin{itemize}[leftmargin=*] \item We study the task of synonym discovery under a new setting, i.e., on privacy-aware clinical data, where only a set of medical terms and their co-occurrence statistics are given, and local contexts (e.g., sentences mentioning a term in a corpus) are not available. It is a practical setting given the wide concern about patient privacy for access to clinical texts and also presents unique challenges to address for effective synonym discovery. \item We propose a novel and effective framework named \textsf{\textsc{SurfCon}}\xspace that can discover synonyms for both In-Vocabulary (InV) and Out-of-Vocabulary (OOV) query terms.
\textsf{\textsc{SurfCon}}\xspace considers two complementary types of information {based on neural models} - surface form information and global context information of a term, where the former works well for detecting synonyms that are similar in surface form while the latter can help better find synonyms that do not look alike but are semantically similar. \item We conduct extensive experiments on publicly available privacy-aware clinical data and demonstrate the effectiveness of our framework in comparison with various baselines and our own model variants. \end{itemize} \section{Task Setting} \label{task-setting} In this section, we clarify several terminologies used in this paper as well as our problem definition: \noindent \textbf{Privacy-aware Clinical Data.} Electronic medical records (EMRs) typically contain patient medical information such as discharge summaries, treatments, and medical histories. In EMRs, a significant amount of clinical information remains under-tapped in the unstructured clinical texts. However, due to privacy concerns, access to raw or even de-identified clinical texts in large quantities is quite limited. Also, traditional de-identification methods, e.g., removing the 18 HIPAA identifiers~\cite{stubbs2015annotating}, require significant manual annotation effort~\cite{dorr2006assessing}. Moreover, there also exists the risk that de-identified data can be attacked and recovered via re-identification in some cases~\cite{garfinkel2015identification}. Thus, to facilitate research on EMRs, an increasingly popular substitute strategy for releasing raw clinical texts is to extract medical terms and their aggregated co-occurrence counts from the corpus~\cite{beam2018clinical,ta2018columbia, finlayson2014building}. We refer to such data as privacy-aware clinical data in this paper. Converting raw sentences to co-occurrence data protects privacy as original patient records are very unlikely to be recovered.
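As a rough illustration of this conversion, the aggregation step might look as follows (the tokenization into term sequences, the context window, and the pruning threshold are hypothetical simplifications; released datasets such as that of \citet{finlayson2014building} count co-occurrences within temporal bins of patient records instead):

```python
from collections import Counter

def cooccurrence_counts(notes, window=5, min_count=2):
    """Aggregate term-term co-occurrence counts over tokenized notes;
    the raw sentences themselves are discarded, only counts are kept."""
    counts = Counter()
    for terms in notes:
        for i, t in enumerate(terms):
            for u in terms[i + 1 : i + 1 + window]:
                if t != u:
                    counts[tuple(sorted((t, u)))] += 1
    # prune rare pairs, mirroring the thresholding in released datasets
    return {pair: c for pair, c in counts.items() if c >= min_count}
```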
However, the local context information contained in the raw sentences is also lost, which makes various tasks including synonym discovery more challenging under privacy-aware datasets. \noindent \textbf{Medical Term Co-occurrence Graph.} A medical term-term co-occurrence graph is defined as $G$=$(V, E)$, where $V$ is the set of vertices, each representing a medical term extracted from clinical texts. Each vertex has a surface form string (e.g., "vitamin c", "cancer") which is the spelling of the medical term. $E$ is the set of edges, each weighted by how many times two terms co-occur in a certain context window ({e.g., notes from patient records within 1 day}). \noindent \textbf{Medical Term Synonym.} Synonyms of a medical term refer to other medical terms that can be used as its alternative names~\cite{qu2017automatic}. For example, "vit c", "c vitamin" and "ascorbic acid" refer to the same medicinal product, while "Alzheimer's disease" and "senile dementia" represent the same disease. In our dataset, the extracted medical terms are mapped to the Unified Medical Language System (UMLS) \cite{bodenreider2004unified} Concept Unique Identifier (CUI) {by \cite{finlayson2014building}}. Different terms mapping to the same UMLS CUI are treated as synonyms for {model training/development/testing}. \noindent \textbf{Task Definition.} We formally define our task of {synonym discovery on privacy-aware clinical data} as: \textit{Given a medical term co-occurrence graph $G$, for a query term $q$ (which can be either In-Vocabulary or Out-of-Vocabulary), recommend a list of medical terms from $G$ that are likely to be synonyms of $q$. } \section{Experiments} \label{section:exp} Now we evaluate our proposed framework \textsf{\textsc{SurfCon}}\xspace to show the effectiveness of leveraging both surface form information and global context information for synonym discovery. 
\vspace{-10pt} \subsection{Datasets}\label{exp:dataset} \vspace{-2pt} \noindent \textbf{{Medical Term Co-occurrence Graph.}} We adopt publicly available sets of medical terms with their co-occurrence statistics, which were extracted by \citet{finlayson2014building} from 20 million clinical notes collected from Stanford Hospitals and Clinics~\cite{lowe2009stride} since 1995. Medical terms are extracted using an existing phrase mining tool~\cite{lependu2012annotation} by matching with 22 clinically relevant ontologies such as SNOMED-CT and MedDRA. Co-occurrence frequencies are counted based on how many times two terms co-occur in the same temporal \textit{bin} (i.e., a certain timeframe in a patient's records), e.g., 1, 7, 30, 90, 180, 365, and $\infty$-day \textit{bins}. Without loss of generality, we choose the 1-day per-bin and $\infty$-day per-bin\footnote{Per-bin means each unique co-occurring term-term pair is counted at most once for each relevant bin of a patient. We refer readers to \citet{finlayson2014building} for more information.} graphs to evaluate different methods. We first convert the global counts between nodes to PPMI values \cite{levy2014linguistic} and adopt subsampling \cite{mikolov2013distributed} to filter very common terms, such as "medical history", "medication dose", etc. We choose these two datasets {because they have very different connection density, as shown in Table \ref{tab:dataset-statistics}}, and denote them as the {\textbf{1-day} and \textbf{All-day}} datasets. \noindent \textbf{Synonym Label.} \label{synlabel} In the released datasets, \citet{finlayson2014building} provided a term-to-UMLS CUI mapping based on the same 22 ontologies as used when extracting terms. They reduced the ambiguity of a term by suppressing its least likely meaning so as to provide a high-quality mapping.
We utilized such mapping to obtain the synonym labels: Terms mapped to the same UMLS CUI are treated as synonyms, e.g., terms like "c vitamin", "vit c", "ascorbic acid" are synonyms as they are all mapped to the concept "Ascorbic Acid" with ID \text{C0003968}. \noindent \textbf{Query Terms.} Given a medical term-term co-occurrence graph, terms in the graph that can be mapped to UMLS CUIs are treated as potential query terms, and we split all such terms into training, development and testing sets. Here, since all terms appear in the given co-occurrence graph, this testing set is referred to as the \textbf{InV testing set}. We also create an \textbf{OOV testing set}: Under a UMLS CUI, terms not in the co-occurrence graph are treated as OOV query terms and are paired with their synonyms which are in the graph to form positive pairs. We sample 2,000 such OOV query terms for experiments. In addition, since synonyms with different surface forms tend to be more challenging to discover (e.g., "vitamin c" vs. "ascorbic acid"), we also sample a subset named \textbf{Dissim} under both the \text{InV} and \text{OOV testing sets}, where query terms paired with their dissimilar synonyms\footnote{Dissimilarity is measured by Levenshtein edit distance \cite{gomaa2013survey} with a threshold (0.8).} are selected. Statistics of our training/dev/testing sets are given in Table \ref{tab:dataset-statistics}. \input{dataset_statistics.tex} \input{main_results.tex} \vspace{-8pt} \subsection{Experimental Setup} \label{exp:setup} \vspace{-2pt} \subsubsection{Baseline methods.} \label{baseline} We compare \textsf{\textsc{SurfCon}}\xspace with the following 10 methods. {The baselines} can be categorized into three types: (i) Surface form based methods, which focus on capturing the surface form information of terms; (ii) Global context based methods, which try to learn embeddings of terms for synonym discovery; (iii) Hybrid methods, which combine surface form and global context information.
The others are our model variants. \noindent \textbf{Surface form based methods}. (1) \textit{CharNgram}~\cite{hashimoto2017jmt}: We borrow pre-trained character n-gram embeddings from~\citet{hashimoto2017jmt} and take the average of unique n-gram embeddings for each term as its feature, and then train a bilinear scoring function following previous works~\cite{qu2017automatic, zhang2019synonymnet}. (2) \textit{CHARAGRAM} \cite{wieting2016charagram}: Similar to the above, but we further fine-tune the CharNgram embeddings using synonym supervision. (3) \textit{SRN} \cite{neculoiu2016learning}: A Siamese network structure is adopted with a bi-directional LSTM to encode the character sequence of each term, and cosine similarity is used as the scoring function. \noindent \textbf{Global context based methods}. (4) \textit{Word2vec} \cite{mikolov2013distributed}: A popular distributional embedding method. We obtain word2vec embeddings by performing {SVD decomposition over the Shifted PPMI co-occurrence matrix \cite{levy2014neural}}. We treat the embeddings as features and use a bilinear score function for synonym discovery. (5) \textit{LINE(2nd)} \cite{tang2015line}: A widely-adopted graph embedding approach. Similarly, embeddings are treated as features and a bilinear score function is trained to detect synonyms. (6) \textit{DPE-NoP} \cite{qu2017automatic}: DPE is proposed for synonym discovery on text corpora, and consists of a distributional module and a pattern module, where the former utilizes global context information and the latter learns patterns from raw sentences. Since raw texts are unavailable in our setting, we only deploy the distributional module (a.k.a. DPE-NoP in \citet{qu2017automatic}). \noindent \textbf{Hybrid methods}. (7) \textit{Concept Space Model} \cite{wang2015medical}: A medical synonym extraction method that combines word embeddings and heuristic rule-based string features.
(8) \textit{Planetoid} \cite{yang2016revisiting}: An inductive graph embedding method that can generate embeddings for both observed and unseen nodes. We use the bi-level surface form encoding vectors as the input and take the intermediate hidden layer as embeddings. Similarly, a bilinear score function is used for synonym discovery. \noindent \textbf{Model variants}. (9) \textit{\textsf{\textsc{SurfCon}}\xspace (Surf-Only)}: A variant of our framework which only uses the surface score for ranking. (10) \textit{\textsf{\textsc{SurfCon}}\xspace (Static)}: Our framework with static representation mechanism. By comparing these variants, we verify the performance gain brought by modeling global contexts using different matching mechanisms. For baseline methods (1-3 and 8) and our models, we test them under both InV and OOV settings. For the others (4-7), because they rely on embeddings that are only available for InV terms, we only test them under InV setting. \vspace{-5pt} \subsubsection{Candidate Selection and Performance Evaluation.} For evaluating baseline methods and our model, we experiment with two strategies: (1) Random candidate selection. For each query term, we randomly sample 100 non-synonyms as negative samples and mix them with synonyms for testing. This strategy is widely adopted by previous work on synonym discovery for testing efficiency~\cite{wang2015medical, zhang2019synonymnet}. (2) Inference-stage candidate selection. As mentioned in section \ref{subsec:train-inference}, at the inference stage, we first obtain high potential candidates in a lightweight way. Specifically, after the context predictor is pre-trained, for all terms in the given graph as well as the query term, we generate their surface form vector $s$ and context semantic vector $v$ obtained by the static representation. Then we find top 50 nearest neighbors of the query term respectively {based on} $s$ and $v$ using cosine similarity. 
Finally, we apply {our methods and baselines} to re-rank the 100 high potential candidates. {We refer to these two strategies as \textit{random candidate selection} and \textit{inference-stage candidate selection}.} For evaluation, we adopt the popular ranking metric Mean Average Precision, defined as $\textsf{MAP}=\frac{1}{|Q|} \sum_{i=1}^{|Q|}\frac{1}{m_i} \sum_{j=1}^{m_i} \textsf{Precision}(R_{ij})$, where $R_{ij}$ is the set of ranked terms from $1$ to $j$, $m_i$ is the length of the $i$-th list, and $|Q|$ is the number of queries. \vspace{-5pt} \subsubsection{Implementation details} \label{details} Our framework is implemented in PyTorch \cite{paszke2017automatic} with the Adam optimizer \cite{kingma2014adam}. The dimensions of character embeddings ($d_c$), word embeddings ($d_w$), surface vectors ($d_s$), and semantic vectors ($d_e$) are set to 100, 100, 128, and 128, respectively. Early stopping is used when the performance on the dev sets does not increase continuously for 10 epochs. We directly optimize Eqn. \ref{eqn:context_loss} since the number of terms in our corpus is not very large, and set $f_s(\cdot)$ and $f_c(\cdot)$ to be cosine similarity and a bilinear similarity function respectively, based on the model performance on the dev sets. When needed, string similarities are calculated using the Distance package\footnote{https://github.com/doukremt/distance}. Pre-trained CharNgram \cite{hashimoto2017jmt} embeddings are borrowed from the authors\footnote{https://github.com/hassyGo/charNgram2vec}. For CHARAGRAM \cite{wieting2016charagram}, we initialize the n-gram embeddings using pre-trained CharNgram and fine-tune them on our dataset with synonym supervision. We learn LINE(2nd) embeddings \cite{tang2015line} using OpenNE\footnote{https://github.com/thunlp/OpenNE}. Heuristic rule-based matching features of the Concept Space model are implemented according to~\cite{wang2015medical}.
Code, datasets, and more implementation details are available online\footnote{\url{https://github.com/yzabc007/SurfCon}}. \vspace{-5pt} \subsection{Results and Analysis} \label{main-results} \subsubsection{Evaluation with {Random Candidate Selection}} We compare all methods under the random candidate selection strategy, with the results shown in Table \ref{tab:main-results}. \noindent \textbf{(1) Comparing \textsf{\textsc{SurfCon}}\xspace with surface form based methods.} \\ Our model beats all surface form based methods, including strong baselines such as SRN that use complicated sequence models to capture character-level information. This is because: 1) The bi-level encoder of \textsf{\textsc{SurfCon}}\xspace captures surface form information at both the character and word level, while the baselines only consider one of them; 2) \textsf{\textsc{SurfCon}}\xspace captures global context information, which complements surface form information for synonym discovery. In addition, in comparison with CharNgram and CHARAGRAM, our model variant \textsf{\textsc{SurfCon}}\xspace (Surf-Only), which also only uses surface form information, obtains consistently better performance, especially on the OOV Test set. The results demonstrate that adding word-level surface form information is useful for discovering synonyms. \noindent \textbf{(2) Comparing \textsf{\textsc{SurfCon}}\xspace with global context based methods.} \\ \textsf{\textsc{SurfCon}}\xspace substantially outperforms all other global context based methods (Word2vec, LINE(2nd) and DPE-NoP). This is largely due to the usage of surface form information. In fact, as one can see, global context based methods are generally inferior to surface form based methods, partly due to the fact that a large portion of synonyms are similar in surface form, while only a small portion have very different surface forms. Thus, detecting synonyms without leveraging surface information can hardly lead to good results.
Besides, our context matching component conducts context prediction and matching strategies, which takes better advantage of global context information and thus leads to better performance on the synonym discovery task. \noindent \textbf{(3) Comparing \textsf{\textsc{SurfCon}}\xspace with hybrid methods.} We also compare our model with baselines that combine both surface form and global context information. First, \textsf{\textsc{SurfCon}}\xspace is superior to the Concept Space model because the latter simply concatenates distributional embeddings with rule-based string features (e.g., the number of shared words) and applies a logistic regression classifier for classification. Further, \textsf{\textsc{SurfCon}}\xspace also performs better than Planetoid, partly because our framework more explicitly leverages both surface form and global context information to formulate synonym scores, while Planetoid relies on one embedding vector for each term which only uses surface form information as input. \noindent \textbf{(4) Comparing \textsf{\textsc{SurfCon}}\xspace with its variants.} To better understand why \textsf{\textsc{SurfCon}}\xspace works well, we compare it with several variants. Under {both datasets}, \textsf{\textsc{SurfCon}}\xspace (Surf-Only) {already outperforms} all baselines, demonstrating the effectiveness of our bi-level surface form encoding component. With the context matching component in \textsf{\textsc{SurfCon}}\xspace (Static), the performance is further improved, especially under the \textit{InV Test Dissim} setting where synonyms tend to have different surface forms and we observe around 4\% performance gain. Further, by using the dynamic representation in the context matching mechanism, \textsf{\textsc{SurfCon}}\xspace obtains better results, which demonstrates that the dynamic representation is more effective in utilizing context information compared with the static strategy.
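For concreteness, the MAP metric used in these comparisons can be computed with a short routine (this follows the standard reading where precision is evaluated at the ranks of true synonyms and averaged over queries; the helper names are our own):

```python
def average_precision(ranked, relevant):
    """AP for one ranked list: mean of precision@k over ranks of hits."""
    hits, precisions = 0, []
    for k, term in enumerate(ranked, start=1):
        if term in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(results):
    """results: list of (ranked_candidates, synonym_set) pairs, one per query."""
    return sum(average_precision(r, s) for r, s in results) / len(results)
```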
\input{inference_results.tex} \vspace{-5pt} \subsubsection{Evaluation at Inference Stage} To further evaluate the power of our model in real practice, we test its performance at the inference stage as mentioned in section \ref{subsec:train-inference}. Due to space constraints, we only show the comparison in Table \ref{main-results-practical} between \textsf{\textsc{SurfCon}}\xspace and several strong baselines revealed by Table \ref{tab:main-results}. In general, the performance of all methods decreases at the inference stage compared with the random candidate selection setting, because the constructed list of candidates becomes harder to rank since surface form and context information are already used for the construction. For example, many non-synonyms with similar surface forms are often included in the candidate list. Even though the task becomes harder, we still observe that our model outperforms the strong baselines by a large margin (e.g., around 8\% at least) under all settings. \begin{figure}[t!] \centering \resizebox{\linewidth}{!}{ \subfloat{\includegraphics[]{para_gamma_final.pdf}} \subfloat{\includegraphics[]{para_num_contexts_final.pdf}} ~ } \vspace{-10pt} \caption{{Performance w.r.t. (a) the coefficient of the context score $\gamma$ and (b) the number of context terms $K$}.} \label{fig:parameter_sensitivity} \vspace{-16pt} \end{figure} \vspace{-5pt} \subsubsection{Parameter Sensitivity} Here we investigate the effect of two important hyper-parameters: The coefficient $\gamma$, which balances the surface score and the context score, and the number of predicted contexts $K$ used for context matching. As shown in Figure \ref{fig:parameter_sensitivity}(a), the performance of \textsf{\textsc{SurfCon}}\xspace first improves as $\gamma$ increases, which is expected because as more semantic information is incorporated, \textsf{\textsc{SurfCon}}\xspace can detect more synonyms that are semantically similar.
When we continue to increase $\gamma$, the performance begins to decrease, and the reason is that surface form is also an important source of information that needs to be considered. \textsf{\textsc{SurfCon}}\xspace achieves the best performance roughly at $\gamma=0.3$, indicating that surface form information is relatively more helpful for the task than global context information. This also aligns well with our observation that synonyms more often than not have similar surface forms. Next, we show the impact of $K$ in Figure \ref{fig:parameter_sensitivity}(b). In general, when $K$ is small (e.g., $K=10$), the performance is not as good since little global context information is considered. Once $K$ increases to be large enough (e.g., $\geq50$), the performance is not sensitive to the variation under most settings, showing that we can choose a smaller $K$ for computational efficiency but still with good performance. \input{case_study.tex} \vspace{-8pt} \subsection{Case Studies} \vspace{-2pt} We further conduct case studies to show the effectiveness of \textsf{\textsc{SurfCon}}\xspace. Two query terms, {"unable to vocalize"} and {"marijuana"}, are chosen respectively from the InV and OOV test sets, where the former is defined as the inability to produce voiced sound and the latter is a psychoactive drug used for medical or recreational purposes. As shown in Table \ref{tab:case_study}, for the InV query {"unable to vocalize"}, our model can successfully detect its synonyms such as "unable to phonate", which already exists in the labeled synonym set collected based on the term-to-UMLS CUI mapping as discussed in Section \ref{task-setting}. More impressively, our framework also discovers some highly semantically similar terms such as "does not vocalize" and "aphonia", even though some of them are quite different in surface form from the query term. For the OOV query {"marijuana"}, \textsf{\textsc{SurfCon}}\xspace ranks its synonyms "marijuana abuse" and "cannabis" higher in the list.
Note that the other top-ranked terms are also very relevant to "marijuana". \section{\textsf{\textsc{SurfCon}}\xspace Framework} \label{sec:framework} In this section, we introduce our proposed framework \textsf{\textsc{SurfCon}}\xspace for synonym discovery on privacy-aware clinical data. \vspace{-5pt} \subsection{Overview} \label{subsec:framework-overview} We observe two important types of information carried in the privacy-aware clinical data: surface form information of a medical term and the global contexts from the given co-occurrence graph. On the one hand, existing approaches \cite{neculoiu2016learning} using character-level features to detect synonyms could work well when synonyms share a high string similarity, but tend to produce false positive predictions (when two terms look similar but are not synonyms, e.g., "hemostatic" and "homeostasis") and false negative predictions (when two terms are synonyms but look very different, e.g., "ascorbic acid" and "vitamin c"). On the other hand, the global contexts of a term under the privacy-aware setting tend to be noisy partly due to the original data pre-processing procedure, which also presents challenges for using them to model the semantic similarity between terms. Thus, a framework that is able to effectively leverage these two types of information needs to be carefully designed. \begin{figure}[t!] \centering \resizebox{\linewidth}{!}{% \includegraphics[width=\linewidth]{framework.pdf}} \vspace{-20pt} \caption{{Framework overview. 
For each query term, a list of candidate terms will be ranked based on both the surface and context scores.}} \vspace{-15pt} \label{fig:framework-overview} \end{figure} Towards that end, we propose \textsf{\textsc{SurfCon}}\xspace (Figure \ref{fig:framework-overview}) and summarize its high-level ideas as below: \noindent (1) Given a query term (whether being InV or OOV), the {bi-level surface form encoding component} and the context matching component score a candidate term\footnote{Every term in the given co-occurrence graph can be a candidate term.} respectively based on the surface form information and global context information. The former enables us to find synonyms that look similar to the query term by considering both character- and word-level information, and the latter complements it by capturing the semantic similarity between terms to better address the false positive and false negative problem mentioned earlier. \noindent (2) Considering the original global contexts being noisy as well as the existence of OOV query terms, instead of directly leveraging the raw global contexts, the context matching component will first utilize the surface form encoding vector of a term to \textit{predict} its potential global contexts\footnote{For terms in the co-occurrence graph, predicting contexts can be treated as denoising its original global contexts (or edges)}. We then investigate a novel dynamic context matching mechanism (see Section \ref{subsubsec:context-matching} for details) to evaluate if two terms are synonyms based on their predicted contexts. \noindent (3) The two components are combined by a weighted score function, in which parameters are jointly optimized with a widely used ranking algorithm ListNet \cite{cao2007learning}. At testing time, given a query term, candidate terms are ranked based on the optimized score function. 
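The weighted combination in step (3) can be sketched as follows (the additive form with a coefficient $\gamma$ and the use of cosine similarity for both components are simplifying assumptions for illustration; in the full model the component scorers are learned and optimized with ListNet):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(query_vecs, candidates, gamma=0.3):
    """Rank candidates by surface score + gamma * context score.

    query_vecs: (surface_vec, context_vec) of the query term.
    candidates: dict mapping candidate name -> (surface_vec, context_vec).
    """
    qs, qc = query_vecs
    scored = {name: cosine(qs, cs) + gamma * cosine(qc, cc)
              for name, (cs, cc) in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)
```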
\vspace{-5pt} \subsection{Methodology} Now we describe the two components of \textsf{\textsc{SurfCon}}\xspace, Bi-level Surface Form Encoding and Context Matching, in detail. \label{subsec:methodology} \subsubsection{\textbf{Bi-level Surface Form Encoding}} \label{subsubsec:bi-level-encoding} The bi-level surface form encoding of our framework aims to model the similarity between two terms at the surface form level, as we observe that two terms tend to be synonymous if they are very similar in surface forms. Such an observation is intuitive but works surprisingly well in the synonym discovery task. Driven by this observation, we design the bi-level surface form encoding component in a way that captures both character- and word-level information of terms. Then, a score function is defined to measure the surface form similarity for a pair of terms based on their bi-level encoding vectors. The bi-level encoders are able to encode surface form information of both InV terms and OOV terms. Specifically, as shown in Figure \ref{fig:framework-overview}, given a query term $q$ and a candidate term $c$, we denote their character-level sequences as $x_q=\{x_{q, 1}, ..., x_{q, m_q}\}, x_c=\{x_{c, 1}, ..., x_{c, m_c}\}$, and their word-level sequences as $w_q=\{w_{q, 1}, ..., w_{q, n_q}\}, w_c=\{w_{c, 1}, ..., w_{c, n_c}\}$, where $m_q,n_q,m_c,n_c$ are the lengths of the character-level and word-level sequences of the query term and the candidate term, respectively.
Then we build two encoders $\text{ENC}^{ch}$ and $\text{ENC}^{wd}$ to capture the surface form information at the character and word level, respectively: \begin{equation} \label{eqn:encoder} \small \begin{aligned} s_q^{ch}&=\text{ENC}^{ch}(x_{q,1},...,x_{q,m_q}), s_q^{wd}=\text{ENC}^{wd}(w_{q,1},...,w_{q,n_q})\\ s_c^{ch}&=\text{ENC}^{ch}(x_{c,1},...,x_{c,m_c}), s_c^{wd}=\text{ENC}^{wd}(w_{c,1},...,w_{c,n_c}) \end{aligned} \end{equation} \noindent where $s_q^{ch}, s_c^{ch}\in \mathbb{R}^{d_c}$ are the character-level embeddings for the query and candidate terms, and $s_q^{wd},s_c^{wd}\in \mathbb{R}^{d_w}$ are the word-level embeddings for the query and candidate terms respectively. Note that there has been a surge of effective encoders that model sequential information at the character or word level, ranging from simple look-up tables (e.g., character n-gram~\cite{hashimoto2017jmt} and Skip-Gram~\cite{mikolov2013distributed}) to complicated neural network architectures (e.g., CNN~\cite{kim2016character}, LSTM~\cite{ballesteros2015improved}, and Transformer~\cite{vaswani2017attention}). For simplicity, we adopt simple look-up tables for both character-level and word-level embeddings. Instead of randomly initializing them, we borrow pre-trained character n-gram embeddings from \citet{hashimoto2017jmt} and word embeddings from \citet{pennington2014glove}. Our experiments also demonstrate that these simple encoders can encode the surface form information of medical terms well for the synonym discovery task. We leave {evaluating} more complicated encoders as our future work. After we obtain the embeddings at both levels, we concatenate them and apply a nonlinear function to get the surface vector $s$ for the query and candidate term.
Let us denote such encoding process as a function $h(\cdot)$ with the input as term $q$ or $c$ and the output as the surface vector $s_q$ or $s_c$: \begin{equation} \begin{aligned} s_q&=h(q)=\text{tanh}( [s_q^{ch},s_q^{wd}] W_s + b_s),\\ s_c&=h(c)=\text{tanh}( [s_c^{ch},s_c^{wd}] W_s + b_s) \end{aligned} \end{equation} \noindent where the surface vectors $s_q, s_c\in \mathbb{R}^{d_s}$, and $W_s \in \mathbb{R}^{(d_c+d_w)\times d_s}, b_s \in \mathbb{R}^{d_s}$ are weight matrix and bias for a fully-connected layer. Next, we define the surface score for a query term $q$ and a candidate term $c$ to measure the surface form similarity based on their encoding vectors $s_q$ and $s_c$: \begin{equation} \textsf{Surface Score}\,(q, c) = f_s(s_q, s_c) \end{equation} \subsubsection{\textbf{Context Matching}} \label{subsubsec:context-matching} In order to discover synonyms that are not similar in surface form, and also observing that two terms tend to be synonyms if their global contexts in the co-occurrence graph are semantically very relevant, we design the context matching component to capture the semantic similarity of two terms by carefully leveraging their global contexts. We first illustrate the intuition behind this component using a toy example: \newtheorem{exam}{Example} \begin{exam} \label{example:contecxt-matching} \textbf{\emph{[Toy Example for Illustration.]}} Assume we have a query term \textit{"vitamin c"} and a candidate term \textit{"ascorbic acid"}. The former is connected with two terms \textit{"iron absorption"} and \textit{"vitamin b"} in the co-occurrence graph as global contexts, while the latter has \textit{"fatty acids"} and \textit{"anemia"} as global contexts. \end{exam} \noindent Our context matching component essentially aims to use a term's contexts to represent its semantic meaning and a novel \textit{dynamic context matching mechanism} is developed to determine the importance of each individual term in one's contexts. 
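To make the bi-level encoding concrete, the following sketch mirrors $h(\cdot)$ and the surface score in plain Python. It is illustrative only: the pre-trained character n-gram and word look-up tables are replaced by small random tables over a toy vocabulary, mean pooling stands in for the encoders' sequence aggregation, and $f_s$ is taken as cosine similarity (one of the similarity-function choices mentioned later).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the pre-trained look-up tables: small random
# tables over a toy vocabulary, purely for illustration.
d_c, d_w, d_s = 8, 10, 6
char_table = {ch: rng.standard_normal(d_c) for ch in "abcdefghijklmnopqrstuvwxyz "}
word_table = {w: rng.standard_normal(d_w) for w in ["vitamin", "c", "ascorbic", "acid"]}

W_s = rng.standard_normal((d_c + d_w, d_s))
b_s = rng.standard_normal(d_s)

def encode(term):
    """h(.): pool char- and word-level lookups, concatenate, apply the tanh projection."""
    s_ch = np.mean([char_table[ch] for ch in term], axis=0)        # ENC^ch
    s_wd = np.mean([word_table[w] for w in term.split()], axis=0)  # ENC^wd
    return np.tanh(np.concatenate([s_ch, s_wd]) @ W_s + b_s)

def surface_score(q, c):
    """f_s taken as cosine similarity between the two surface vectors."""
    s_q, s_c = encode(q), encode(c)
    return float(s_q @ s_c / (np.linalg.norm(s_q) * np.linalg.norm(s_c)))

print(surface_score("vitamin c", "ascorbic acid"))
```

With the actual pre-trained tables substituted in, the same two calls yield the surface vectors and the surface score defined above.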
For example, \textit{"iron absorption"} is closely related to \textit{"anemia"} since the disease "anemia" is most likely to be caused by the iron deficiency. Based on the observation, we aim to increase the relative importance of \textit{"iron absorption"} and \textit{"anemia"} in their respective context sets when representing the semantic meaning of \textit{"vitamin c"} and \textit{"ascorbic acid"}. Therefore, we develop a novel dynamic context matching mechanism to be introduced shortly. In order to recover global contexts for OOV terms and also noticing the noisy nature of the co-occurrence graph mentioned earlier, we propose an \textit{inductive context prediction module} to predict the global contexts for a term based on its surface form information instead of relying on the raw global contexts in the given co-occurrence graph. \noindent \textbf{Inductive Context Prediction Module}. Let us first denote a general medical term as $t$. For a term-term co-occurrence graph, we treat all InV terms as possible context terms and denote them as $\{u_j\}_{j=1}^{|V|}$ where $|V|$ is the total number of terms in the graph. The inductive context prediction module aims to predict how likely term $u_j$ appears in the context of $t$ (denoted as the conditional probability $p\,(u_j|t)$). To learn a good context predictor, we utilize {all} existing terms in the graph as term $t$, i.e., $t \in \{u_i\}_{i=1}^{|V|}$ and the conditional probability becomes $p\,(u_j|u_i)$. Formally, the probability of observing term $u_j$ in the context of term $u_i$ is denoted as: \vspace{-10pt} \begin{equation} p\,(u_j|u_i)= \frac{\text{exp} \, (\nu_{u_j}^T\cdot s_{u_i})} {\sum_{k=1}^{|V|}\text{exp} \, (\nu_{u_k}^T \cdot s_{u_i})} \end{equation} where $s_{u_i}=h(u_i)$ and $h(\cdot)$ is the same encoder function defined in section \ref{subsubsec:bi-level-encoding}. $\nu_{u_j} \in \mathbb{R}^{d_o}$ is the context embedding vector corresponding to term $u_j$ and we let $d_o=d_s$. 
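A minimal sketch of the inductive context prediction module, with toy random vectors standing in for the trained surface vectors $s_{u_i}=h(u_i)$ and the learned context embeddings $\nu_{u_j}$:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 6, 4                          # toy vocabulary size |V| and dimension d_s = d_o

S  = rng.standard_normal((V, d))     # surface vectors s_{u_i} = h(u_i)
Nu = rng.standard_normal((V, d))     # context embedding vectors nu_{u_j}

def context_distribution(s_t):
    """p(. | t): softmax of nu_j . s_t over all InV terms u_j."""
    logits = Nu @ s_t
    logits -= logits.max()           # subtract max for numerical stability
    e = np.exp(logits)
    return e / e.sum()

def top_k_contexts(s_t, k=3):
    """Predicted global contexts of term t: the k most probable InV terms."""
    return np.argsort(context_distribution(s_t))[::-1][:k]

p = context_distribution(S[0])
print(p.sum())                       # a valid distribution: sums to 1
```

For an OOV query, $s_t$ would simply be computed from its surface form via $h(\cdot)$, which is what makes the prediction inductive.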
The predicted distribution $p\,(u_j|u_i)$ is optimized to be close to the empirical distribution $\hat{p}\,(u_j|u_i)$ defined as: \vspace{-10pt} \begin{equation} \hat{p}\,(u_j|u_i)= \frac{w_{ij}}{\sum_{(i,k)\in E} w_{ik}} \end{equation} where $E$ is the set of edges in the co-occurrence graph and $w_{ij}$ is the weight between term $u_i$ and term $u_j$. We adopt the cross entropy loss function for optimizing: \begin{equation} \label{eqn:context_loss} L_n= -\sum_{u_i,u_j \in V} \hat{p}(u_j|u_i)\ \text{log} \, (p(u_j|u_i)) \end{equation} When the number of terms in the graph $|V|$ is very large, it is computationally costly to calculate the conditional probability $p\,(u_j|u_i)$, and one can utilize the negative sampling algorithm~\cite{mikolov2013efficient} to train our inductive context predictor efficiently. The loss function Eqn. \ref{eqn:context_loss} can be modified as: \begin{equation} \log \sigma(\nu_{u_j}^T\cdot s_{u_i}) + \sum_{n=1}^{N_0} E_{u_n \sim P_n(u)}[\log \sigma (-\nu_{u_n}^T \cdot s_{u_i})] \end{equation} \noindent where $\sigma(x)=1/(1+\exp(-x))$ and $u_n$ is the negative sample drawn from the noise distribution $P_n(u)\propto d_{u}^{3/4}$. $N_0$ is the number of negative samples and $d_{u}$ is the degree of term $u$ in the co-occurrence graph. Now, given a term $t$ (either InV or OOV), we can select the top-$K$ terms as its predicted contexts based on the predicted probability distribution $p\,(\cdot|t)$. Next, we describe the dynamic context matching mechanism to model the semantic similarity of two terms based on their predicted contexts. \noindent \textbf{Dynamic Context Matching {Mechanism}}. 
Inspired by previous works on neighborhood aggregation based graph embedding methods~\cite{hamilton2017inductive, velickovic2017graph}, which generate an embedding vector for an InV node by aggregating features from its neighborhood (contexts), we introduce two semantic vectors for the query term and the candidate term, $v_q, v_c \in \mathbb{R}^{d_e}$, respectively, and learn them by aggregating the feature vectors of their corresponding {top-$K$} predicted contexts from the previous module. Let us define $v_q^i \in \mathbb{R}^{d_e}$ as the feature vector of the $i$-th term in query term $q$'s context and $v_c^j \in \mathbb{R}^{d_e}$ as the feature vector of the $j$-th term in candidate term $c$'s context, and their context sets as $\Phi(q)=\{v_q^i\}_{i=1}^K$, $\Phi(c)=\{v_c^j\}_{j=1}^K$. Essentially, as we aim to capture the semantic meaning of terms, the feature vectors $v_q^i$'s and $v_c^j$'s are expected to contain semantic information. Moreover, all predicted context terms are InV terms (i.e., in the co-occurrence graph), which allows us to adopt widely used graph embeddings, such as LINE(2nd)~\cite{tang2015line}, as their feature vectors. One naive way to obtain the context semantic vectors, $v_q$ and $v_c$, is to average the vectors in their respective context sets. Since such a $v_q$ (or $v_c$) does not depend on the other one, we refer to such vectors as "static" representations for terms. \vspace{-10pt} \begin{figure}[htbp!]
\centering \includegraphics[width=0.9\linewidth]{dynamic_matching.pdf} \vspace{-10pt} \caption{Dynamic Context Matching Mechanism.} \vspace{-10pt} \label{fig:dynamic_matching} \end{figure} In contrast to the static approach, we propose the \textit{dynamic context matching mechanism} (as shown in Figure \ref{fig:dynamic_matching}), which weighs each term in the context of $q$ (or $c$) based on its matching degree with terms in the context of $c$ (or $q$) and hence the context semantic vector representation $v_q$ (or $v_c$) is \textit{dynamically} changing depending on which terms it is comparing with. More specifically, let us define $g(x, y)=\text{tanh}(xW_my^T)$ as a nonlinear function parameterized with weight matrix $W_m\in \mathbb{R}^{d_e\times d_e}$ to measure the similarity between two row vectors $x$ and $y$. For each context vector $v_q^i$ of the query term, {we calculate its weight based on how it matches with $c$'s contexts overall}: \begin{equation} \textsf{match} \,[v_q^i, \Phi(c)] = \textsf{Pooling}\, [g(v_q^i, v_c^1), ..., g(v_q^i, v_c^K)] \end{equation} For the pooling operation, we empirically choose the \textsf{mean} pooling strategy as it performs better than {alternatives such as \textsf{max} pooling} in our experiments. Then we normalize the weight of $v_q^i$ as: \begin{equation} \alpha_q^i = \frac{\Large{\textit{e}}^{\;\textsf{match}[v_q^i, \Phi(c)]}}{\sum_{k=1}^K \Large{\textit{e}}^{\;\textsf{match}[v_q^{k}, \Phi(c)]}} \end{equation} Finally, the context semantic vector for the query term $v_q$ is calculated through a weighted combination of $q$'s contexts: \begin{equation} v_q = \sum_{i=1}^K \alpha_q^i \cdot v_q^i \end{equation} Following the same procedure, we can obtain the context semantic vector $v_c$ for the candidate term w.r.t. the query term. 
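As an illustration, the dynamic context matching mechanism reduces to a few matrix operations. In this sketch the rows of the two context matrices stand in for the (e.g., LINE(2nd)) feature vectors of the top-$K$ predicted contexts, and mean pooling is used as chosen above:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d_e = 3, 5

Phi_q = rng.standard_normal((K, d_e))    # feature vectors of q's top-K contexts
Phi_c = rng.standard_normal((K, d_e))    # feature vectors of c's top-K contexts
W_m   = rng.standard_normal((d_e, d_e))  # parameters of g(x, y) = tanh(x W_m y^T)

def dynamic_vector(Phi_a, Phi_b):
    """Aggregate a's contexts, each weighted by its mean match with b's contexts."""
    G = np.tanh(Phi_a @ W_m @ Phi_b.T)           # g(v_a^i, v_b^j) for all pairs (i, j)
    match = G.mean(axis=1)                       # mean pooling over b's contexts
    alpha = np.exp(match) / np.exp(match).sum()  # normalized weights alpha^i
    return alpha @ Phi_a                         # weighted combination of a's contexts

v_q = dynamic_vector(Phi_q, Phi_c)  # semantic vector of q w.r.t. c
v_c = dynamic_vector(Phi_c, Phi_q)  # semantic vector of c w.r.t. q
```

Note that $v_q$ computed against a different candidate's contexts would in general differ, which is exactly the "dynamic" aspect.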
Then we define the context score for a query term $q$ and a candidate term $c$ to measure their semantic similarity based on $v_q$ and $v_c$: \begin{equation} \textsf{Context Score} \, (q, c)=f_c(v_q, v_c) \end{equation} \subsection{Model Optimization and Inference} \label{subsec:train-inference} \textbf{Objective Function.} Given a query term $q$ and a candidate term $c$, to capture their similarity based on surface forms and global contexts, we define the final score function as: \begin{equation} \label{eqn:final-score} f(q, c) = (1-\gamma) \cdot f_s(s_q, s_c) + \gamma \cdot f_c(v_q, v_c) \end{equation} \noindent {$f_s(\cdot)$ and $f_c(\cdot)$ are similarity functions between two vectors, e.g., cosine similarity or bilinear similarity.} Now we obtain the recommendation probability of each candidate $t_i \in \{t_1, ..., t_N\}$ given a query $q$: \begin{equation} p(t_i|q)=\frac{\Large{\textit{e}}^{\, f(q, t_i)}}{\sum_{k=1}^{N} \Large{\textit{e}}^{\,f(q, t_k)}} \end{equation} where $N$ is the size of the candidate set. Finally, we adopt the ListNet~\cite{cao2007learning} ranking framework which minimizes the cross entropy loss for query term $q$: \vspace{-10pt} \begin{equation} \label{eqn:ranking_loss} L_r= -\sum_{i=1}^{N} p^*(t_i|q) \ \text{log} \, p(t_i|q) \end{equation} where $p^*(t_i|q)$ is the normalized ground-truth distribution over a list of ranking scores $\{r_i\}_{i=1}^N$, where $r_i$ equals $1$ if $q$ and $t_i$ are synonyms and $0$ otherwise. \noindent \textbf{Training}. For efficiency reasons, we adopt a two-phase training strategy: We first train the inductive context prediction module with the loss function $L_n$ (Eqn. \ref{eqn:context_loss}) on the term-term co-occurrence graph, sample the top-$K$ contexts based on the predicted probability distribution, and use them in the context matching component. Then, we train the ranking framework by minimizing the ranking loss $L_r$ (Eqn. \ref{eqn:ranking_loss}). \noindent \textbf{Inference}.
At the inference stage, we treat all InV terms as candidates for a given query. Since the dynamic representation mechanism involves pairwise term matching between the contexts of the query term and those of each candidate term and can have a high computational cost when the candidate set size is large, we adopt a two-step strategy: (1) For a given query term, select its top-N high potential candidates based on the surface form encoding vector and the context semantic vector obtained by the static representation mechanism; (2) Re-rank the selected candidates by applying our \textsf{\textsc{SurfCon}}\xspace framework with the dynamic representation mechanism. \section{Related Work} \vspace{-2pt} \noindent \textbf{Character Sequence Encoding.} To capture the character-level information of terms, neural network models such as Recurrent Neural Networks and Convolutional Neural Networks can be applied on character sequences ~\cite{ballesteros2015improved,kim2016character}. Further, CHARAGRAM~\cite{wieting2016charagram}, FastText~\cite{bojanowski2016enriching}, and CharNGram~\cite{hashimoto2017jmt} are proposed to represent terms and their morphological variants by capturing the shared subwords and $n$-grams information. However, modeling character-level sequence information only is less capable of discovering semantically similar synonyms, {and our framework considers global context information to discover those synonyms.} \noindent \textbf{Word and Graph/Network Embedding.} Word embedding methods such as word2vec~\cite{mikolov2013distributed} and Glove ~\cite{pennington2014glove} have been proposed and successfully applied to mining relations of medical phrases~\cite{wang2015medical,pakhomov2016corpus}. More recently, there has been a surge of graph embedding methods that seek to encode structural graph information into low-dimensional dense vectors, such as Deepwalk~\cite{perozzi2014deepwalk}, LINE~\cite{tang2015line}. 
Most of the embedding methods can only learn embedding vectors for words in the corpus or nodes in the graph, and thus fail to address the OOV issue. On the other hand, some more recent inductive graph embedding works, such as Planetoid~\cite{yang2016revisiting}, GraphSAGE ~\cite{hamilton2017inductive}, and SEANO~\cite{liang2018semi}, could generate embeddings for nodes that are unobserved in the training phase by utilizing their node features (e.g., text attributes). \textit{However, most of them assume the neighborhood of those unseen nodes is known, which is not the case for our OOV issue as the real contexts of an OOV term are unknown.} Since Planetoid~\cite{yang2016revisiting} can generate node embeddings based on node features such as character sequence encoding vectors, it can handle the OOV issue and is chosen as a baseline model. \noindent \textbf{Synonym Discovery.} A variety of methods have been proposed to detect synonyms of medical terms, ranging from utilizing lexical patterns~\cite{weeds2004characterising} and clustering~\cite{matsuo2006graph} to the distributional semantics models~\cite{hagiwara2009supervised}. There are some more recent works on automatic synonym discovery ~\cite{wang2015medical,qu2017automatic,zhang2019synonymnet, Shen2019SynSetMine}. For example, \citet{wang2015medical} try to learn better embeddings for terms in medical corpora by incorporating their semantic types and then build a linear classifier to decide whether a pair of medical terms is synonyms or not. \citet{qu2017automatic} combine distributional and pattern based methods for automatic synonym discovery. However, many aforementioned models focus on finding synonyms based on raw texts information, which is not suitable for our privacy-aware clinical data. In addition, nearly all methods could only find synonyms for terms that appear in the training corpus and, thus cannot address the OOV query terms. 
\section{Task Setting} \label{task-setting} In this section, we clarify the terminology used in this paper as well as our problem definition: \noindent \textbf{Privacy-aware Clinical Data.} Electronic medical records (EMRs) typically contain patient medical information such as discharge summaries, treatments, and medical histories. In EMRs, a significant amount of clinical information remains under-tapped in the unstructured clinical texts. However, due to privacy concerns, access to raw or even de-identified clinical texts in large quantities is quite limited. Also, traditional de-identification methods, e.g., removing the 18 HIPAA identifiers~\cite{stubbs2015annotating}, require significant manual effort for annotation~\cite{dorr2006assessing}. Moreover, there also exists the risk that de-identified data can be attacked and recovered by re-identification in some cases \cite{garfinkel2015identification}. Thus, to facilitate research on EMRs, an increasingly popular substitute strategy for releasing raw clinical texts is to extract medical terms and their aggregated co-occurrence counts from the corpus \cite{beam2018clinical,ta2018columbia, finlayson2014building}. We refer to such data as privacy-aware clinical data in this paper. Converting raw sentences to co-occurrence data protects privacy as original patient records are very unlikely to be recovered. However, the local context information contained in the raw sentences is also lost, which makes various tasks, including synonym discovery, more challenging on privacy-aware datasets. \noindent \textbf{Medical Term Co-occurrence Graph.} A medical term-term co-occurrence graph is defined as $G$=$(V, E)$, where $V$ is the set of vertices, each representing a medical term extracted from clinical texts. Each vertex has a surface form string (e.g., "vitamin c", "cancer"), which is the spelling of the medical term.
$E$ is the set of edges, each weighted by how many times two terms co-occur in a certain context window ({e.g., notes from patient records within 1 day}). \noindent \textbf{Medical Term Synonym.} Synonyms of a medical term refer to other medical terms that can be used as its alternative names~\cite{qu2017automatic}. For example, "vit c", "c vitamin" and "ascorbic acid" refer to the same medicinal product, while "Alzheimer's disease" and "senile dementia" represent the same disease. In our dataset, the extracted medical terms are mapped to the Unified Medical Language System (UMLS) \cite{bodenreider2004unified} Concept Unique Identifier (CUI) {by \cite{finlayson2014building}}. Different terms mapping to the same UMLS CUI are treated as synonyms for {model training/development/testing}. \noindent \textbf{Task Definition.} We formally define our task of {synonym discovery on privacy-aware clinical data} as: \textit{Given a medical term co-occurrence graph $G$, for a query term $q$ (which can be either In-Vocabulary or Out-of-Vocabulary), recommend a list of medical terms from $G$ that are likely to be synonyms of $q$. }
Fort Lauderdale Club de Fútbol, known as Fort Lauderdale CF, is an American professional soccer club based in Fort Lauderdale, Florida, that will play in USL League One, the third tier of American soccer. The club was founded on February 1, 2020, and is the reserve team of the Major League Soccer club Inter Miami CF. History On October 9, 2019, the Major League Soccer club Inter Miami announced that it would field a reserve team in USL League One in 2020. A few months later, on February 1, 2020, the club announced the team's name, Fort Lauderdale Club de Fútbol, and that it would play at Lockhart Stadium, the stadium rebuilt by the first team. See also Inter Miami CF USL League One Fort Lauderdale Strikers USL League One teams Association football clubs established in 2019 Soccer clubs in Florida
\section{Introduction} Research on small particles containing up to a few tens of atoms is largely driven by their novel properties that are significantly affected by (quantum) size effects, particularly in the interplay between structural and electronic degrees of freedom.\cite{clusters} Such clusters, thus, carry the potential of major technological advances for applications exploiting their already exemplified unique optical, magnetic, and chemical properties. Atomically resolved structural information is a key prerequisite towards employing these envisioned functionalities, considering that the latter will be tailored to the atomic scale. In this respect not only the ground state isomer will be of importance, but potentially all energetically low-lying metastable isomers. A materials modeling targeting the identification of such relevant cluster isomers involves the global and local exploration of the corresponding vast configuration space, suitably represented by the high-dimensional potential-energy surface (PES) \cite{wales03} $E(\{{\bf R}_m\})$ where ${\bf R}_m$ is the position of atom $m$ in the cluster. The rapid growth of the number of local PES minima, i.e. metastable isomers, with increasing cluster size quickly limits approaches focusing only on structural motifs provided by chemical intuition. Required are instead more systematic unbiased sampling techniques and, among those (see e.g. Refs. \onlinecite{kirkpatrick83,szu87,deaven95,deaven96,wolf98,goedecker04}), approaches based on the basin-hopping (BH) \cite{li87,wales97,doye98,wales99,wales00} idea are widespread. In this idea the configuration space is explored by performing consecutive jumps from one local PES minimum to another. To achieve this, positions of atom(s) in the cluster are randomly perturbed in a so-called trial move, followed by a local geometry optimization which brings the system again into a local PES minimum. 
\begin{figure} \centering \includegraphics[width=3.8cm,angle=-90]{pics/fig1.eps} \caption{(Color online) Schematic representation of the original and transformed potential energy surface, $E(\{{\bf R}_m\})$ and $\tilde E(\{{\bf R}_m\})$ respectively, as well as of a basin-hopping trial move (see text).} \label{fig1} \end{figure} Rather than exploring $E(\{{\bf R}_m\})$, BH approaches therefore concentrate on the transformed PES $\tilde E(\{{\bf R}_m\})$, where the energy at any point in configuration space is assigned to that of the local minimum obtained by the given geometry optimization technique. This maps the PES onto a set of interpenetrating staircases with plateaus, or basins of attraction, corresponding to the set of configurations which lead to a given minimum after optimization. As apparent from Fig. \ref{fig1}, the resulting PES topography significantly facilitates interbasin transitions, which already accounts for part of the success and efficiency of the BH method. In its classical form, BH employs a Metropolis criterion based on an effective temperature $T_{\rm eff}$ to either accept or reject the jump into the PES minimum reached by the trial move. This generates a canonical ensemble on $\tilde E(\{{\bf R}_m\})$, and thereby introduces both the desired importance sampling of the energetically lowest-lying isomers and the possibility of surmounting barriers on multiple-funnel type PESs.\cite{doye98} Obvious ramifications of this basic acceptance rule are, e.g., to either further promote the downhill driving force to the global minimum by applying a simulated annealing type sequential reduction of $T_{\rm eff}$ during the run, or to extend the importance sampling to all isomers in an energy window above the ground state by unconditionally accepting all trial isomers with energies in a range above the lowest-energy isomer identified at any given moment in the run.
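The basic BH loop sketched above is compact enough to state in a few lines of code. The example below is purely illustrative: a one-dimensional double-well function replaces the first-principles PES, and a crude steepest-descent loop replaces the local geometry optimization.

```python
import math, random

random.seed(0)

def E(x):
    """Toy one-dimensional double-well PES standing in for the first-principles energy."""
    return (x * x - 1.0) ** 2 + 0.2 * x

def relax(x):
    """Crude steepest descent: maps x onto its local minimum, i.e. realizes the transformed PES."""
    for _ in range(500):
        x -= 0.05 * (4.0 * x * (x * x - 1.0) + 0.2)   # x -= step * dE/dx
    return x, E(x)

def basin_hopping(x0, n_steps=50, step=1.5, kT=0.3):
    x, e = relax(x0)
    best_x, best_e = x, e
    for _ in range(n_steps):
        # trial move: random displacement followed by local relaxation
        xt, et = relax(x + random.uniform(-step, step))
        # Metropolis acceptance on the transformed PES
        if et < e or random.random() < math.exp(-(et - e) / kT):
            x, e = xt, et
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

x_min, e_min = basin_hopping(1.5)
print(x_min, e_min)   # the global minimum lies in the left well, near x ~ -1
```

Because every trial structure is locally relaxed before the Metropolis test, the acceptance decision is made on $\tilde E(\{{\bf R}_m\})$ rather than on $E(\{{\bf R}_m\})$ itself.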
When envisioning predictive and material-specific modeling, the accuracy of the PES underlying the sampling is of central importance. Due to the already mentioned intricate coupling of structural and electronic degrees of freedom in small clusters, the nature of the PES must be quantum-mechanical. Compared to simple analytic model potentials, corresponding first-principles electronic structure calculations come at a high computational cost, even when describing electronic exchange and correlation only on the level of density-functional theory (DFT) with semi-local functionals. This dictates utmost efficiency of the employed sampling to reduce the number of required energy and force evaluations to the absolute minimum. Apart from the acceptance criterion, the efficiency of the BH method is predominantly governed by the recipe with which trial moves are generated. Among the plethora of move types suggested in the literature, many contain technical parameters that are unspecified and which one would correspondingly seek to optimize to reduce the computational cost of a first-principles sampling run. Moreover, rather than revealing inefficient settings only {\em a posteriori}, this optimization would best be carried out by monitoring on-the-fly analyzable performance indicators that allow one to adapt an ongoing run. Unfortunately, there are few to no general prescriptions of how to set technical move parameters that do not require detailed system-specific insight. With respect to on-the-fly performance indicators, there exists at best the rule-of-thumb to aim at an overall acceptance of new trial structures of roughly one half \cite{wales97,frenkel02}. However, this rule emerges from the empirical observation that a factor of one half ensures an efficient sampling of canonical ensemble averages and thus need not carry over to the intended goal of searching for the energetically lowest-lying isomers with the least possible number of energy and force evaluations.
A second complication arises from the stochastic nature of the BH algorithm. Any analysis measuring the efficiency of technical BH settings or the reliability of suggested on-the-fly performance indicators therefore necessarily needs to involve an averaging over a sufficiently large number of different BH runs starting from different initial structures and using different random number seeds. This would not be too much of a problem when using numerically undemanding model potentials, but then it would be unclear whether the obtained findings are meaningful for proper quantum-mechanical PESs. A straightforward evaluation based on first-principles energetics, on the other hand, is hitherto computationally involved even when only considering smaller clusters of up to, say, 10 atoms. In this situation, the aim of the present study is to establish a corresponding framework for a systematic performance analysis of first-principles BH sampling runs. An important ingredient herein is the use of a hopping matrix type concept that not only provides a valuable analysis tool, but also helps to bring down the computational cost for the manifold of first-principles BH runs required in the averaging procedure. Using DFT within the generalized gradient approximation to describe the PES, we illustrate the scheme for Si clusters as a system with more directional, covalent bonding and for Cu clusters as representative of a metallic system. As a typical example of move classes involving technical parameters, we concentrate on so-called single-particle and collective moves, in which either a single randomly chosen atom or all atoms in the cluster at once are displaced in a random direction by some prescribed move distance, respectively. For small clusters up to 10 atoms, our analysis indicates that these moves still enable efficient jumps anywhere in configuration space, i.e. between any PES minima, so that the actual BH acceptance criterion becomes less important.
The thereby disentangled influence of move class and acceptance criterion allows us to separately assess the algorithm performance solely with respect to the technical move parameters, here the move distances. The analysis of the obtained results clearly identifies the governing factors and bottlenecks for the sampling efficiency of the investigated small systems, and gives indications on how they scale with increasing cluster size. Apart from providing detailed insights for the specific move classes studied, this stimulates ideas with respect to on-the-fly adaptive settings and establishes a protocol to benchmark more specialized move types. \section{Theory} \subsection{Density-Functional Theory} The underlying PESs are obtained from DFT calculations within the generalized gradient approximation \cite{perdew96} as implemented in the all-electron full-potential code FHI-aims \cite{aims}. In order to suppress a potential complication in the performance analysis due to the spin degrees of freedom all calculations were consistently carried out in a non-spin polarized way. In FHI-aims the Kohn-Sham orbitals are expanded in basis sets consisting of numeric atom-centered orbitals. All calculations reported here were conducted with the so-called ``minimal+$spd$'' basis set. For each considered system we recomputed all stable cluster isomers within an energy range up to 1\,eV above the ground-state, namely those listed in Figs. \ref{fig4}-\ref{fig6} below, also with hierarchically constructed larger basis sets available in FHI-aims. From these calculations we deduce that the relative energies between these isomers are converged to within 10\,meV at the ``minimal+$spd$'' basis set level, which is fully sufficient for the arguments and conclusions put forward below. We also ran several test BH runs with larger basis sets, but never obtained isomers other than those already revealed at the ``minimal+$spd$'' level. 
This suggests that not only the local minima, but also the other parts of the PES are sufficiently described with the employed ``minimal+$spd$'' basis. Local structural optimization is done using the Broyden-Fletcher-Goldfarb-Shanno method \cite{recipes}, relaxing all force components to smaller than $10^{-2}$\,eV/{\AA}. While this tight force criterion typically ensures structural convergence to below $10^{-3}$\,{\AA}, it is virtually impossible to converge the DFT total energies up to the number of digits required to uniquely distinguish different isomers from each other. We therefore use the difference norm of all interatomic distances in the cluster as an additional tool for the comparison of isomer structures. Two isomers $A$ and $B$ are considered to be equivalent if \begin{equation} \frac{ \sum_i \left( d_{A,\{i\}} - d_{B,\{i\}} \right)^2 }{\sum_i \left( d_{A,\{i\}}^2 + d_{B,\{i\}}^2 \right)} < \Delta , \end{equation} where $d_{A,\{i\}}$ and $d_{B,\{i\}}$ are the sorted interatomic distances of the two isomers to compare. The denominator serves as a normalization which yields a dimensionless quantity that is furthermore species- and cluster-size independent. $\Delta$ can be tuned such that all isomers in the energy range of interest are unambiguously distinguished and was taken as $10^{-4}$. In order to check whether the thus identified different isomers are true local minima and not saddle points, they were subjected after the BH run to a vibrational analysis based upon a Hessian matrix obtained by finite differences of the analytical atomic forces when displacing all atoms by $10^{-3}$\,{\AA}. \subsection{Basin-Hopping} The BH runs explore the configuration space through a sequence of jumps from one PES minimum to another.
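The dimensionless difference-norm criterion introduced above translates directly into code. The sketch below uses $\Delta = 10^{-4}$; since it is based solely on sorted interatomic distances, the comparison is invariant under translations and rotations of the cluster.

```python
import numpy as np

DELTA = 1.0e-4   # threshold used in the text

def sorted_distances(R):
    """All interatomic distances of a structure R (n_atoms x 3), sorted ascending."""
    diff = R[:, None, :] - R[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(R), k=1)
    return np.sort(d[iu])

def equivalent(R_a, R_b, delta=DELTA):
    """Dimensionless difference norm of the sorted interatomic distances."""
    d_a, d_b = sorted_distances(R_a), sorted_distances(R_b)
    return ((d_a - d_b) ** 2).sum() / ((d_a ** 2).sum() + (d_b ** 2).sum()) < delta

# regular tetrahedron vs. planar square (both with unit bond length)
tetra = np.array([[0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0],
                  [0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)]])
square = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
print(equivalent(tetra, tetra + 5.0))  # translated copy -> True
print(equivalent(tetra, square))       # different isomer -> False
```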
For this, an initially random cluster structure (created in the spirit of the big-bang method \cite{leary97,yang06}) is subject to so-called trial moves, which correspond to a random structural modification, followed by a local relaxation as depicted in Fig. \ref{fig1}. As representative and widely used move classes we focus in this work on single-particle and collective moves, in which either a single randomly chosen atom or all atoms in the cluster are randomly displaced, respectively. The corresponding displacement vector of atom $m$ is suitably expressed in spherical coordinates as \begin{equation} \Delta {\bf R}_m = r_m {\bf e}(\theta, \phi) \quad , \end{equation} where ${\bf e}(\theta, \phi)$ is a unit vector in the displacement direction defined by the angles $\theta$ and $\phi$ with respect to an arbitrary, but fixed axis. For an unbiased sampling of the direction, $\phi \in [0, 2\pi]$ and $\cos\theta \in [-1,1]$ must be obtained as uniformly distributed random numbers. In contrast, the move distance $r_m$ is {\em a priori} not specified, but will sensitively determine the jumps in configuration space and therewith the algorithmic performance. It therefore provides a nice example of a technical parameter that one would like to optimize for a first-principles sampling run, yet without introducing bias or system-specific insight. It is furthermore {\em a priori} not clear whether it is preferable to focus on one optimum move distance or whether it is possibly advantageous for the overall sampling to include partly shorter and partly longer moves. We study this by drawing the move distances as random numbers distributed around some average value $r_{\rm o}\,a$, where $a$ is the computed dimer bond length and $r_{\rm o}$ correspondingly a less system-dependent unitless quantity.
A preference for one optimum distance can then be evaluated by considering a peaked distribution centered around $r_{\rm o}$, whereas the effect of a wide variation of move distances can be tested with a distribution that allows for a broader range of values. Specifically, we use either a normal distribution (width $0.07\sqrt{r_{\rm o}}$) around $r_{\rm o}$ for the former, or a uniform distribution (width $r_{\rm o}$) centered around $r_{\rm o}$ for the latter. The goal is therefore to assess the dependence of the sampling efficiency on $r_{\rm o}$ and the form of the distribution around it. In all of these cases an additional important factor is to prevent an entropy-driven dissociation of the cluster during the BH run. We achieve this by disregarding trial moves as well as local relaxations that generate loosely connected or partly dissociated structures characterized by an atom having a nearest-neighbor distance larger than twice the dimer bond length. Similarly discarded are moves that place atoms at distances of less than 0.5\,{\AA} from each other. Apart from the move class the second fundamental ingredient that needs to be specified in a BH run is the acceptance criterion according to which a generated trial structure is accepted and replaces the current cluster structure as starting point for the following trial move. In order to introduce a downhill driving force towards the energetically low-lying (and ultimately ground-state) isomers it is clear that a more stable trial structure should be unconditionally accepted. In its classical form, the BH scheme also accepts less stable trial structures according to a Metropolis rule, $\sim \exp(- \Delta \tilde{E} / k_{\rm B} T_{\rm eff})$, where $k_{\rm B}$ is the Boltzmann constant and $\Delta \tilde{E} > 0$ the energy difference to the new trial structure. This introduces another unspecified technical parameter which may crucially affect the algorithmic performance, the effective temperature $T_{\rm eff}$.
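The move construction just described can be summarized in a short Python sketch (illustrative only; reading the quoted widths as the standard deviation of the normal distribution and as the total width of the uniform distribution is our assumption, and all function names are hypothetical):

```python
import math
import random

def random_direction(rng=random):
    """Unit vector e(theta, phi), uniform on the sphere: cos(theta) drawn
    uniformly from [-1, 1], phi uniformly from [0, 2*pi)."""
    cos_t = rng.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

def move_distance(r0, a, distribution="normal", rng=random):
    """Move distance around the average r0 * a (a = dimer bond length)."""
    if distribution == "normal":
        r = rng.gauss(r0, 0.07 * math.sqrt(r0))   # peaked around r0
    else:
        r = rng.uniform(0.5 * r0, 1.5 * r0)       # total width r0, centered on r0
    return max(r, 0.0) * a

def trial_move(coords, r0, a, collective, distribution="normal", rng=random):
    """Displace all atoms (collective) or one random atom (single-particle)."""
    targets = range(len(coords)) if collective else [rng.randrange(len(coords))]
    new = [list(p) for p in coords]
    for m in targets:
        e = random_direction(rng)
        r = move_distance(r0, a, distribution, rng)
        for k in range(3):
            new[m][k] += r * e[k]
    return new
```

In an actual BH run each such trial structure would subsequently be relaxed by the first-principles local optimization, and discarded if it violates the connectivity and minimum-distance constraints described above.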
The original motivation behind this Metropolis rule is that the finite probability to climb uphill enables the algorithm to effectively surmount high-energy barrier regions on multiple-funnel type transformed PESs $\tilde E(\{{\bf R}_m\})$. However, as long as the employed move class enables efficient jumps between all parts of configuration space, this acceptance criterion is only of subordinate importance. As we will see below, this is indeed the case for the small cluster sizes studied here, and we therefore simply accept all generated cluster structures within a predefined energy range of interest above the ground-state isomer. \subsection{Sampling efficiency} The intended performance analysis requires a well-defined measure for the success of a sampling run. A common choice for this in the literature is the number of trial moves until the global minimum has been found for the first time. Here, we adapt this criterion to the stated goal of identifying not only the global minimum, but also all relevant energetically lowest-lying isomers. Correspondingly, the considered indicator of sampling efficiency which we aim to optimize is the number of moves $N$ until all relevant isomers have been found at least once, where of course one needs to define what a relevant isomer is ({\em vide infra}). While certainly a useful measure for the performance of the employed BH moves, it should still be stressed that due to the slightly varying number of geometry steps for the local relaxation of each trial structure, $N$ is only roughly proportional to the total computational cost of the first-principles BH run. Due to the stochastic nature of the BH method, both with respect to the generation of the initial starting structure and the generation of trial structures, $N$ is only a statistically meaningful quantity after averaging over sufficiently many runs.
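The contrast between the classical Metropolis rule and the energy-window criterion adopted here can be made explicit in a minimal sketch (our own illustration; the argument conventions are assumptions, with energies in eV):

```python
import math
import random

def accept_window(e_rel, e_window):
    """Energy-window criterion: unconditionally accept any trial isomer
    within e_window above the ground-state isomer (e_rel = energy relative
    to the ground-state isomer), reject all others."""
    return e_rel <= e_window

def accept_metropolis(delta_e, kT_eff, rng=random):
    """Classical BH rule: downhill moves (delta_e <= 0) are always accepted,
    uphill moves with Boltzmann probability exp(-delta_e / kT_eff)."""
    return delta_e <= 0.0 or rng.random() < math.exp(-delta_e / kT_eff)
```

The window criterion has no free temperature parameter, which is precisely what allows the move parameters to be assessed in isolation in the analysis below.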
Even for the small cluster sizes considered here, this implies having to carry out of the order of 100 different first-principles BH runs to obtain an $N_{\rm av}$ that is converged to within $\pm 1$, and this for each BH setting (e.g. move distance or distribution) one wants to analyze. Since this straightforward approach quickly becomes computationally involved, we instead resort to the concept of a ``hopping matrix'' $h$, which summarizes the transition probabilities between all isomers under the chosen BH settings. Specifically, the matrix element $h_{ij}$ is then the normalized probability to jump from the local minimum $i$ to the local minimum $j$. If all local minima are explicitly accounted for, one obviously has the condition \begin{equation} \sum_j h_{ij} = 1 \quad . \label{eq3} \end{equation} Assuming that the matrix $h_{ij}$ is completely known, a sufficiently large number of sampling runs starting in random isomers can be readily simulated entirely on the basis of these transition probabilities without the need for further first-principles calculations, not to mention that the individual matrix elements, i.e. the transition probabilities, provide valuable insight into the sampling process and efficiency. Nevertheless, with the rapidly growing number of isomers with cluster size this approach merely shifts the computational burden of an increasing number of direct BH runs to the equally expensive computation of an exploding number of hopping matrix elements, i.e. converged transition probabilities. Yet, below we will show that an approximate, but for our purposes sufficient determination of $N_{\rm av}$ is possible by restricting the explicit calculations to a limited number of hopping matrix elements. \begin{figure} \centering \includegraphics[width=3.7cm,angle=-90]{pics/fig2.eps} \caption{(Color online) Schematic illustration of successful, unsuccessful and high-energy trial moves in the BH scheme.
The horizontal dashed (red) lines indicate the targeted energy window entering the acceptance criterion.} \label{fig2} \end{figure} In order to further analyze the obtained performance data, it is useful to disentangle the different possible outcomes of a trial move. First of all, the system might relax back into the structure from which the trial move has been performed so that in terms of isomer information nothing has been gained. Correspondingly, we denote such a move as unsuccessful, cf. Fig. \ref{fig2}, and define the fraction of unsuccessful moves $\alpha_{\rm unsucc.}$ as \begin{equation} \alpha_{\rm unsucc.} \;=\; \frac{N_{\rm unsucc.}}{N} \quad , \label{unsucc} \end{equation} where $N_{\rm unsucc.} < N$ is the number of unsuccessful moves during the run. Even if the trial move leads to a different local minimum, the move might still be rejected due to the acceptance criterion, if the new minimum is higher up in energy. The fraction of moves rejected on this basis is defined as \begin{equation} \alpha_{{\rm high}E} \;=\; \frac{N_{{\rm high}E}}{N} \quad , \label{highE} \end{equation} where $N_{{\rm high}E} < N$ is the corresponding number of rejected moves. Only the remaining fraction \begin{equation} \alpha_{\rm succ.} \;=\; 1 - \alpha_{\rm unsucc.} - \alpha_{{\rm high}E} \quad \label{succ} \end{equation} are successful moves, at least in the sense that they bring the algorithm to a different minimum out of which the next trial move is performed, albeit not necessarily leading to a minimum that had hitherto not yet been sampled. Just as in the case of $N_{\rm av}$, it only makes sense to analyze the fractions $\alpha_{\rm unsucc.,av}$, $\alpha_{{\rm high}E,{\rm av}}$ and $\alpha_{\rm succ.,av}$ once averaged over sufficiently many different BH runs. \section{Performance analysis for small cluster sizes} Our performance analysis concentrates on small clusters formed of Si and of Cu atoms.
Both systems have already been subject to extensive theoretical studies and are therefore natural choices for the intended benchmarking. Extensive work on small silicon clusters has been carried out using both wavefunction-based techniques \cite{raghavachari88, zhu03} and DFT \cite{yoo05,hellmann07}. Databases for small silicon isomers can e.g. be found in Refs. \onlinecite{cambridge_database,hellmann07b}. Recent works on small copper clusters using {\em ab initio} methods are e.g. Refs. \onlinecite{yang06,massobrio98,calaminici00,jug02,yang05}. The choice of these two materials is further motivated by their different chemistry, which can be characterized as more covalent and directional in the case of Si, and more metallic in the case of Cu. We therefore expect the direct comparison of results obtained for Si$_7$ and Cu$_7$ to reflect a possible material-specificity of the findings, while an additional comparison of the results obtained for Si$_7$ and Si$_{10}$ aims at assessing the variation with cluster size in the range where due to the limited dimensionality of the configuration space the BH acceptance criterion does not play much of a role ({\em vide infra}). \subsection{Existence of dominant isomers} \begin{figure*}[ht] \centering \subfigure { \includegraphics[width=4cm, angle=-90]{pics/fig3a.eps} } \subfigure { \includegraphics[width=4cm, angle=-90]{pics/fig3b.eps} } \subfigure { \includegraphics[width=4cm, angle=-90]{pics/fig3c.eps} } \caption{Histograms of the probability with which trial moves end up in the lowest-energy isomers of Si$_7$, Si$_{10}$ and Cu$_7$. The identified isomers are numbered with decreasing stability, with isomer \#1 corresponding to the identified ground-state and those isomers shown with bracketed numbers revealed as unstable by an {\em a posteriori} vibrational analysis (see text).
The histograms comprise all isomers found in an energy range up to 2\,eV above the ground-state isomer, as obtained from long BH runs using single-particle moves and normally distributed move distances around the average values $r_{\rm o} = 1.5$, $2.0$, and $2.5$. The geometric structures behind the truly stable isomers in an energy range up to 1\,eV above the identified ground-state are summarized in Figs. \ref{fig5}-\ref{fig7}. } \label{fig3} \end{figure*} \begin{figure}[ht] \centering \includegraphics[width=3.5cm, angle=-90]{pics/fig4.eps} \caption{Probabilities for the lowest-energy isomers of $\rm Si_7$ as in Fig. \ref{fig3}. Shown is the evolution when binning the histogram entries over consecutive sampling periods containing 50 moves each, using single-particle moves and normally distributed move distances around the average value $r_{\rm o} = 2.5$. Entries for all isomers higher in energy than isomer \#4 are bundled into one entry labeled ``$>$\,\#4''.} \label{fig4} \end{figure} As a prelude to the actual performance analysis we present in Fig. \ref{fig3} the histograms of the number of times with which low-energy isomers were identified in long BH runs for the three systems addressed, i.e. Si$_7$, Si$_{10}$, and Cu$_7$. Each run consisted of several hundred unconditionally accepted moves and was carried out until the shape of the histogram, i.e. the normalized probability with which the different low-energy isomers are identified, was fully converged. In all cases the evolution towards convergence was rather uniform as demonstrated by Fig. \ref{fig4} for Si$_7$, which presents the histogram entries binned over consecutive sampling periods containing 50 moves each. Apparently, the ratios of the histogram entries for each sampling period are roughly the same.
In view of the overall still limited system dimensionality and concomitant small number of low-energy isomers, a natural interpretation for this is that the employed moves enable jumps between any parts of the PES. In this situation, a simple acceptance criterion that unconditionally accepts moves within a pre-defined energy range and rejects all others is then sufficient to separately assess the dependence of the algorithm efficiency on the move parameters. Even though Fig. \ref{fig3} comprises the data obtained using single-particle moves with three quite different move distances it is interesting to observe that some isomers are always sampled much more often than others. For Si$_7$ for example, more than one third of all executed moves in the BH runs ended up in the isomer structure labeled \#4, regardless of the actual move distance employed. In the case of collective moves, the corresponding histograms look qualitatively the same so that the existence of such ``preferred'' isomers, which we will henceforth term dominant isomers, seems even independent of the specific move class employed. In this respect, one should mention that some of the isomers listed in Fig. \ref{fig3} turned out to be unstable in the concluding vibrational analysis and are correspondingly not further considered below. Distinguishing and discarding these structures, which correspond either to flat or saddle-point PES regions, directly in the BH run is unfortunately impossible as it would imply a prohibitive computational cost when performing a vibrational analysis immediately after each trial move. As apparent from Fig. \ref{fig3} the total number of times in which the BH runs end up in such unstable structures is at least not too large, so that the actual computational time wasted is small. The one notable exception is isomer \#4 of $\rm Cu_7$, which exhibits small imaginary eigenmodes, but is sampled about as frequently as the truly stable isomer \#5. 
Since the algorithm thus spends some appreciable time in this basin, we retained isomer \#4 in the ensuing performance analysis despite its instability. \begin{figure} \includegraphics[width=7cm, angle=-90]{pics/fig5.eps} \caption{Identified stable $\rm Si_7$-isomers in the energy range up to 1\,eV above the ground-state. The isomer numbering follows the one of Fig. \ref{fig3} and reflects the decreasing cluster stability as indicated by the stated energies relative to the ground-state isomer \#1.} \label{fig5} \centering \end{figure} \begin{figure} \includegraphics[width=7cm, angle=-90]{pics/fig6.eps} \caption{Identified stable $\rm Si_{10}$-isomers in the energy range up to 1\,eV above the ground-state. The isomer numbering follows the one of Fig. \ref{fig3} and reflects the decreasing cluster stability as indicated by the stated energies relative to the ground-state isomer \#1.} \label{fig6} \centering \end{figure} \begin{figure} \includegraphics[width=7cm]{pics/fig7.eps} \caption{ Identified stable $\rm Cu_7$-isomers in the energy range up to 1.1\,eV above the ground-state. The isomer numbering follows the one of Fig. \ref{fig3} and reflects the decreasing cluster stability as indicated by the stated energies relative to the ground-state isomer \#1. Note that isomer \#4 exhibits small imaginary eigenmodes but is nevertheless retained in the performance analysis, see text.} \label{fig7} \centering \end{figure} One immediate rationalization for the existence of dominant isomers is that their corresponding basin of attraction on the PES is huge and thus hit by the trial moves many times. Inspection of the geometric structures of the lowest-energy isomers for the three systems as summarized in Figs. \ref{fig5} - \ref{fig7} points, however, at a second potential reason. Many of the dominant isomers correspond to rather low-symmetry structures, e.g. isomer \#4 for Si$_7$, isomer \#6 for Si$_{10}$ or isomer \#10 for Cu$_7$. 
In terms of the PES, these low-symmetry structures possess a larger number of local minima than the symmetric ones \cite{wales00}, and it is this multiplicity, and not necessarily only the size of the basin of attraction of each individual minimum, that is responsible for the large number of times with which the BH algorithm yields the corresponding isomer. This relation to the underlying PES shape also motivates why certain isomers are dominant irrespective of the employed move class. Any general-purpose move class that enables unbiased jumps to anywhere on the PES should be similarly affected by a varying size or multiplicity of the different basins of attraction. This is an important point as an at first glance appealing approach to improve the efficiency of BH sampling would be to reduce the number of times that the algorithm always gets stuck in the same dominant isomers and instead aim to increase jumps into the rare minima. Within the understanding of the relation to the PES topology it seems unlikely that this can be realized without either resorting to moves that are specifically tailored to the system at hand or making use of local PES information. At least for the limited isomer number of the small cluster sizes studied here, the main bottleneck of purely stochastic moves is thus that the algorithm will often revisit the same dominant isomers. In this situation, the overall performance is then dictated by the way it can deal with these dominant isomers, e.g. how efficiently it can hop out of them. \subsection{Approximate hopping matrix} On the basis of the histograms presented in Fig. \ref{fig3} we can now specify which of the energetically lowest-lying isomers are the target of the sampling runs. In the general case, this would be dictated by the physics of the problem at hand, e.g. prescribing that the sampling should yield the ground-state isomer, as well as all isomers in a certain energy range above it.
In view of the discussion above, it is clear that the overall sampling performance will in any case be governed by the dominant isomers involved, since the algorithm spends most of its time jumping out of these minima. For the intended performance analysis we therefore choose as the sampling target the identification of all dominant isomers determined in the histogram BH-runs. As the indicator of the sampling efficiency we correspondingly focus on the number of moves $N$ until all of these dominant isomers are found at least once. In the case of Si$_7$ and Si$_{10}$ the dominant isomers are included in an energy range up to 1\,eV above the ground-state as apparent from Figs. \ref{fig3}, \ref{fig5} and \ref{fig6}. In the case of Cu$_7$, this energy range is slightly extended to 1.1\,eV above the ground-state to also include the dominant isomer \#10, cf. Figs. \ref{fig3} and \ref{fig7}. With a thus defined sampling target the BH acceptance criterion employed is to unconditionally accept trial moves that lead into any isomer in the corresponding energy window, and to unconditionally reject any trial move that leads into an isomer that is higher in energy. It would only be necessary to change the latter to some, e.g. Boltzmann weighted, conditional acceptance rule if a multiple-funnel type PES necessitated passages via such higher-energy isomers. However, as discussed above this is not the case for the systems studied here. In terms of the hopping matrix, corresponding energy-window BH runs require only the knowledge of a limited number of hopping matrix elements. Definitely required are the transition probabilities between any of the targeted low-energy isomers. Since trial moves into higher energy isomers are rejected, it suffices in addition to know the overall probability to jump from each one of the low-energy isomers into any of the higher energy ones, without the need to further resolve the latter.
For the example of Si$_7$ the targeted energy window comprises four different isomers, and energy-window BH runs can therefore be simulated on the basis of 20 hopping matrix elements: 16 transition probabilities between any of the four different low-energy isomers, as well as one hopping matrix element per low-energy isomer that describes the sub-summed transition probability to jump out of the isomer into any of the higher energy ones. For a specified BH setting (i.e. fixed move type and fixed technical move parameters) we obtain the required hopping matrix elements by performing a fixed number of trial moves out of each of the low-energy isomers, recording the probabilities with which the moves led into each of the other low-energy isomers or any of the higher-energy ones. After 100 moves these probabilities are converged to within $\pm 0.1$ at a 95\% confidence level, which we found to be sufficient for the conclusions put forward below. With the thus determined hopping matrix, a large number of energy-window BH runs from different starting isomers and with different random number sequences can be quickly simulated without the need for further first-principles calculations. This allows us to arrive at a properly averaged number $N_{\rm av}$ of moves required to determine all low-energy isomers at least once, albeit with the disadvantage that the transition probabilities are only known within the confidence interval of $\pm 0.1$. To account for the latter, we therefore randomly varied the individual hopping matrix elements within this uncertainty range and under the constraint of Eq. (\ref{eq3}). Determining the $N_{\rm av}$ for several thousands of correspondingly created hopping matrices, we finally quote below the average value together with error bars given by the standard deviation.
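The hopping-matrix-based estimate of $N_{\rm av}$ can be sketched as follows (an illustration under our own assumptions: the last entry of each row sub-sums the rejected jumps into higher-energy isomers, and the constraint of Eq. (3) is restored after perturbation by simple row renormalization, which is one possible choice and not necessarily the procedure used in this work):

```python
import random

def perturbed_matrix(h, uncertainty=0.1, rng=random):
    """Vary each hopping matrix element within its statistical uncertainty,
    then renormalize each row so that the sum rule of Eq. (3) is restored."""
    out = []
    for row in h:
        new = [max(0.0, p + rng.uniform(-uncertainty, uncertainty)) for p in row]
        s = sum(new)
        out.append([p / s for p in new])
    return out

def simulate_run(h, rng, max_moves=100000):
    """One simulated energy-window BH run: h[i][j] (j < n) is the probability
    to hop from low-energy isomer i to isomer j; h[i][n] sub-sums rejected
    jumps into higher-energy isomers. Returns the number of moves until all
    n low-energy isomers have been visited at least once."""
    n = len(h)
    current = rng.randrange(n)
    found = {current}
    for move in range(1, max_moves + 1):
        j = rng.choices(range(n + 1), weights=h[current])[0]
        if j < n:                     # accepted hop (possibly back into i)
            current = j
            found.add(j)
        if len(found) == n:
            return move
    return max_moves

def n_average(h, runs=1000, seed=0):
    """Average number of moves to find all low-energy isomers at least once."""
    rng = random.Random(seed)
    return sum(simulate_run(h, rng) for _ in range(runs)) / runs
```

Once the few hopping matrix elements have been obtained from first-principles trial moves, such simulated runs are computationally negligible, so that thousands of perturbed matrices can be averaged over to obtain $N_{\rm av}$ with error bars.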
This remaining uncertainty incurred from the approximate hopping matrix procedure does not affect any of the trend conclusions made below, while the procedure itself leads to quite some reduction in the computational effort: Determining a converged $N_{\rm av}$ for the systems studied here typically required averaging over some hundred BH runs starting from different initial isomers and with different random number sequences. As shown below, in the range of settings studied $N_{\rm av}$ is of the order of 10-40, so that a straightforward determination of $N_{\rm av}$ by averaging over individual first-principles BH runs would require a few thousand trial moves, with a corresponding number of first-principles energy and force evaluations. For the described hopping matrix based approach, however, only 100 moves out of each of the few low-energy isomers need to be done on the basis of first-principles calculations. Since the ensuing hopping matrix based simulations are computationally undemanding, this significantly reduces the overall computational cost and provides furthermore detailed data on the sampling process in the form of the individual hopping matrix elements. \subsection{Dependence on move parameters} \begin{figure*} \centering \subfigure { \includegraphics[width=7cm, angle=-90]{pics/fig8a.eps} } \subfigure { \includegraphics[width=7cm, angle=-90]{pics/fig8b.eps} } \subfigure { \includegraphics[width=7cm, angle=-90]{pics/fig8c.eps} } \caption{(Color online) Performance analysis of BH runs for Si$_7$, Si$_{10}$, and Cu$_7$, using collective moves and a normal distribution for the atomic displacements. Upper panel: Variation of the average number of moves $N_{\rm av}$ required to determine the low-energy dominant isomers with the average move distance $r_{\rm o}$ (see text).
Lower panel: Corresponding variation of the fraction of unsuccessful moves $\alpha_{\rm unsucc.,av}$, of moves into high energy isomers $\alpha_{{\rm high}E,{\rm av}}$, and of successful moves $\alpha_{\rm succ.,av}$, cf. Eqs. (\ref{unsucc}-\ref{succ}). The error bars reflect the uncertainty due to the employed approximate hopping matrix procedure (see text).} \label{fig8} \end{figure*} We begin the analysis with the performance data obtained for collective moves and a normal distribution for the atomic displacements. Figure \ref{fig8} compiles the corresponding results and reveals a similar dependence of $N_{\rm av}$ on the average move distance for the three systems. In all cases, a too small value of $r_{\rm o}$ leads to a large move number required to determine the low-energy isomers. With increasing $r_{\rm o}$ the performance gets better, and goes through an optimum that is more pronounced for Si$_{10}$ than for the two smaller systems. This overall dependence is well rationalized by analyzing the move fractions defined in Eqs. (\ref{unsucc}-\ref{succ}) above. Not surprisingly, the poor performance at too small move distances results from the inability of the algorithm to escape from the present basin of attraction, as reflected by a fraction $\alpha_{\rm unsucc.,av}$ approaching unity, cf. Fig. \ref{fig8}. With increasing move distances, this fraction of unsuccessful moves decreases and the overall performance improves. Interestingly, within the studied range of move distances $\alpha_{\rm unsucc.,av}$ quickly decays to around zero only for Si$_{10}$, whereas for the two smaller systems it seems to level off at a finite value. This behavior arises from the aforementioned multiplicity of some of the dominant isomers.
In terms of the hopping matrix, $\alpha_{\rm unsucc.,av}$ is just the average of the diagonal elements $h_{ii}$ for the different isomers $i$ weighted by the corresponding histogram entries, where $h_{ii}$ gives the probability that a hop out of isomer $i$ has unsuccessfully relaxed back into it. Inspecting these diagonal elements for the different isomers separately we find only the elements of the most symmetric isomers to vanish with increasing move distance. In contrast, for the least symmetric isomers the corresponding hopping matrix elements stay almost constant over the range of move distances studied. The rationale is that by choosing a sufficiently large move distance, the system can be prevented from relaxing back into the previous PES minimum, but not from jumping into another symmetry-equivalent basin of attraction. The value at which $\alpha_{\rm unsucc.,av}$ saturates is therefore system-dependent and governed by the symmetry properties of the dominant isomers in the targeted energy range. This finite energy range of interest, and the correspondingly applied acceptance criterion, introduces a second ruling factor for the overall efficiency of the algorithm. As apparent from Fig. \ref{fig8}, the fraction of rejected moves that has led to isomers outside the targeted energy window rises monotonically with increasing move distance. Naively equating the move distance with the perturbation induced by the trial move, this is somewhat intuitive. In view of the rapidly increasing total number of isomers with system size one may further consider the steeper increase of $\alpha_{{\rm high}E,{\rm av}}$ for Si$_{10}$ as reflecting the increasing fraction of isomers that fall outside the defined low-energy window in this larger system. Even when for instance only focusing on the energy range up to 2\,eV above the identified ground-state isomer, the long BH runs behind the histograms shown in Fig.
\ref{fig3} found only 2 and 4 stable isomers outside the presently targeted low-energy window for Si$_7$ and Cu$_7$, respectively, but already 12 in the case of Si$_{10}$. While the fraction of unsuccessful moves is thus the bottleneck at short move distances, so is the fraction of moves outside the energy window at large distances, and this will become more severe with increasing system size or when reducing the targeted energy range. The variation of the fraction of successful moves $\alpha_{\rm succ.,av}$ with move distance is determined by the opposing trends of $\alpha_{\rm unsucc.,av}$ and $\alpha_{{\rm high}E,{\rm av}}$, cf. Eq. (\ref{succ}), and exhibits a clear correlation with the obtained performance. As obvious from Fig. \ref{fig8}, the average number of moves $N_{\rm av}$ required to find all low-energy isomers is least when the fraction of successful moves is maximized. This is the case when the move distance is large enough to efficiently lead the system out of the present basin of attraction, but not too large to yield a high energy isomer outside the targeted energy window. With the much more pronounced increase of $\alpha_{{\rm high}E,{\rm av}}$ for Si$_{10}$ this gives rise to a narrowly defined range of optimum move distances, which is concomitantly also shifted to smaller values compared to the two smaller systems. As apparent from the error bars in Fig. \ref{fig8} this overall performance behavior and its analysis in terms of the different move fractions $\alpha_{\rm unsucc.,av}$, $\alpha_{{\rm high}E,{\rm av}}$, and $\alpha_{\rm succ.,av}$ is robust against the uncertainty introduced by the approximate hopping matrix procedure. It is furthermore equivalently obtained for the other move schemes investigated, i.e. single-particle vs.\,collective moves involving atomic displacements following a uniform or normal distribution around the average distance $r_{\rm o}$. 
\begin{table} \caption{\label{table1} Lowest obtained average number of moves $N_{\rm av}$ to identify the targeted low-energy isomers of Si$_7$, Cu$_7$, and Si$_{10}$ using different trial move schemes. Quoted are the values together with the corresponding average move distance $r_{\rm o}$ that, within the computed finite resolution, comes closest to the optimum setting. Within the understanding gained from the two smaller systems, the run for Si$_{10}$ using single-particle moves with a uniform distribution was not performed.} \begin{ruledtabular} \begin{tabular}{ll|cc|cc} & & \multicolumn{2}{c}{normal distribution} & \multicolumn{2}{c}{uniform distribution} \\ & & $r_{\rm o}$ & $N_{\rm av}$ & $r_{\rm o}$ & $N_{\rm av}$ \\[0.1ex] \hline single-particle & Si$_7$ & 1.5 & 20 & 1.5 & 31 \\ & Cu$_7$ & 2.0 & 9 & 1.5 & 20 \\ & Si$_{10}$ & 1.5 & 10 & $-$ & $-$ \\[0.5ex] collective & Si$_7$ & 0.75 & 21 & 0.75 & 18 \\ & Cu$_7$ & 0.75 & 9 & 0.75 & 8 \\ & Si$_{10}$ & 0.5 & 10 & 0.5 & 15 \\[0.5ex] \end{tabular} \end{ruledtabular} \end{table} Table \ref{table1} summarizes for the different schemes the obtained lowest values for $N_{\rm av}$ at the move distance that, within the computed finite resolution, comes closest to the optimum setting. Starting with single-particle moves we observe a significantly better performance for displacements that are drawn from a normal distribution peaked around the average value $r_{\rm o}$. This demonstrates that for the systems studied the wide range of move distances featured by the uniform distribution is not advantageous for the sampling. Instead, there is indeed an optimum atomic displacement on which the employed moves should focus. This is consistent with the understanding of the limiting factors at too small and too large displacements developed above, and in this respect we believe this result to be more generally valid.
Our interpretation of the much less pronounced performance difference between the uniform and normal distributions in the case of collective moves, cf. Table \ref{table1}, is correspondingly that even when all atoms are displaced by random distances that are uniformly distributed over a wide range, there is a certain probability that at least one of these distances comes close to the optimum value. Regardless of the other displacements, for the small systems studied this one near-optimum displacement is then sufficient for efficient sampling, as also indicated by the essentially identical performance of single-particle and collective moves obeying a normal distribution. This said, we nevertheless note that another factor entering here is that the optimum $r_{\rm o}$ for collective moves is much shorter, with a concomitant reduction in the width of the employed uniform distribution and therewith of the difference between the two distributions studied. The shorter optimum displacements for collective moves are intuitive: the more atoms are involved, the less each atomic position needs to be disturbed in order to change the geometric configuration significantly. It is, however, intriguing to see that in terms of the dimensionless quantity $r_{\rm o}$ the optimum values obtained for the three investigated systems are rather similar, both for single-particle and for collective moves. In view of the different chemistry of Si and Cu, this suggests that employing the computed dimer bond length $a$ as a natural unit for the move distance is useful for these monoatomic systems.
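The two move classes and two displacement distributions compared above can be sketched as follows. This is a minimal illustration rather than the implementation used in this work: the function and parameter names are invented, the width of the normal distribution and the range of the "wide" uniform distribution are assumptions, and distances are measured in units of the dimer bond length $a$.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_directions(n):
    """Uniformly distributed unit vectors in 3D."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def trial_move(positions, r_o, collective=True, distribution="normal", sigma=0.2):
    """Displace one atom (single-particle) or all atoms at once (collective)
    in a random direction by a random distance with mean r_o, given in units
    of the dimer bond length a.  Width/range parameters are assumptions."""
    pos = positions.copy()
    n_atoms = len(pos)
    n_moved = n_atoms if collective else 1
    if distribution == "normal":
        # distances peaked around r_o (assumed width sigma * r_o)
        d = np.abs(rng.normal(loc=r_o, scale=sigma * r_o, size=n_moved))
    else:
        # "wide" uniform distribution with the same mean r_o
        d = rng.uniform(0.0, 2.0 * r_o, size=n_moved)
    moved = np.arange(n_atoms) if collective else rng.integers(n_atoms, size=1)
    pos[moved] += d[:, None] * random_directions(n_moved)
    return pos

# example: collective move for a 7-atom cluster at the near-optimum r_o = 0.75 a
cluster = rng.normal(scale=1.5, size=(7, 3))  # placeholder geometry, not a real isomer
new_cluster = trial_move(cluster, r_o=0.75)
```

In a full basin-hopping step such a trial geometry would then be locally relaxed and accepted or rejected; only the move generation itself is sketched here.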
While the general philosophy of the present work aims at an optimization of the sampling efficiency, a tentative generalization of our findings would nevertheless be that setting the move distance somewhere short of the dimer bond length for collective moves, or at around 1.5 times the dimer bond length for single-particle moves, is not a bad strategy to achieve already quite decent sampling. In this respect, we also note that the performance variation with $r_{\rm o}$ is in all cases similar to the one illustrated for collective moves with normal distribution in Fig. \ref{fig8}: Over the distance range studied, which was $a$ to 2.5$a$ for single-particle and $a/3$ to $a$ for collective moves, the efficiency of the BH scheme is thus quite robust and varies in most cases much less than an order of magnitude. In light of the discussion concerning the fraction of moves $\alpha_{{\rm high}E,{\rm av}}$ that lead to isomers outside the targeted energy window, we expect this variation to become much more pronounced for larger systems or a reduced energy range of interest. In this situation optimization of the move settings will be crucial, and the observed and intuitive correlation of the overall performance with the fraction of successful moves may then suitably be exploited to analyze and possibly even adapt the settings of an on-going run. However, as illustrated by the data in Fig. \ref{fig8}, aiming at an absolute value for the ratio of accepted trial structures, like the empirical factor of one half to achieve good sampling of canonical ensemble averages \cite{wales97,frenkel02}, does not seem to be the right approach. Even though in Fig. \ref{fig8} $\alpha_{\rm succ.,av}$ is at optimum move distance indeed about 50\% for Si$_7$ and Si$_{10}$, it is about 70\% in the case of Cu$_7$.
Aiming at about 50\% in the latter case would instead result in a move distance that is too short ($0.5a$), at a performance that is by a factor of 2-3 worse than at optimum settings, cf. Fig. \ref{fig8}. On the contrary, we consistently observe for all studied systems, move types, and displacement distributions that the best performance is reached when the ratio of accepted trial structures is largest. This suggests that algorithms aiming to maximize $\alpha_{\rm succ.,av}$, instead of achieving a preset target value, are the right approach when adapting move settings on the fly.

\section{Conclusions}

In conclusion, we have presented a systematic performance analysis of first-principles basin-hopping runs, with the target of identifying all low-energy isomers of small atomic clusters within a defined energy range. As representative and widely employed general-purpose move classes we have focused on single-particle and collective moves, in which one atom or all atoms in the cluster at once are displaced in a random direction by some prescribed move distance, respectively. For the systems Si$_7$, Cu$_7$, and Si$_{10}$ studied, our analysis shows that there is indeed an optimum move distance and that it is not advantageous for the overall sampling to include partly shorter and partly longer moves. The governing factors leading to this optimum move distance are the inability to escape from the basin of attraction of the present configuration at too short distances and the increased probability of ending up in high-energy isomers at too large distances. Despite the distinctly different chemistry of Si and Cu, the obtained optimum move distance is similar in all cases: roughly 0.75 times the dimer bond length for collective moves, or around 1.5 times the dimer bond length for single-particle moves.
This suggests the dimer bond length as a useful natural unit for these monoatomic systems, and as a simple rule of thumb that setting the move distance to the mentioned values should enable relatively decent sampling. This is furthermore supported by the observation of only moderate variations of the overall efficiency over quite a range of move distances away from the optimum values. From our analysis we expect this variation to become more pronounced with increasing system size or when reducing the targeted energy window. With the then increased necessity to optimize the move settings, a possibility to adapt the latter already during an on-going run would be to exploit the confirmed correlation between sampling performance and fraction of accepted trial structures. The latter quantity is a performance indicator measurable on the fly, which, according to our data, algorithms adapting the move settings should strive to maximize rather than drive toward a prescribed target value. However, for larger systems these ideas require further scrutiny. For the small cluster sizes studied here, the sampling problem is still very modest and the employed single-particle or collective moves enable efficient jumps anywhere in configuration space, as also reflected by the essentially identical performance of the two move classes at optimized settings. With increasing system size this is unlikely to hold, and the actual BH acceptance criterion above the targeted energy window will start to play a role in tackling the concomitantly developing multiple-funnel potential-energy surfaces. While the size range investigated here, up to 10 (or slightly more) atoms, might not yet be too challenging from a sampling point of view, it is certainly a range that can no longer be reliably covered by resorting to chemical intuition and testing for usual-suspect structures.
This holds in particular for systems exhibiting strong Jahn-Teller distortions \cite{gehrke08} and when aiming to identify not only the ground state, but all low-energy isomers. Furthermore, the delicate quantum interplay between structural and electronic degrees of freedom in this size range dictates an energetics based on computationally intense first-principles calculations. In this respect, the observed performance of the BH algorithm employing two simple, general-purpose move classes is reassuring. For all three systems studied, the low-energy isomers in the range up to about 1\,eV above the ground state are, at near-optimum settings, identified with a number of trial moves that is perfectly manageable on present-day capacity compute infrastructures. With the still limited number of metastable structures even for the Si$_{10}$ cluster, this algorithmic performance is bound by frequent revisits to a few dominant isomers. Tracing the latter back to the size or multiplicity of the corresponding basins of attraction on the potential-energy surface, it seems unlikely that the performance may be significantly improved by other move classes, unless specifically tailoring the latter to the system at hand or making use of local PES information. Nevertheless, when assessing such more specialized move types (also in view of the much more demanding size range beyond ten atoms), the evaluation should be based on a performance analysis protocol as presented in this work.

\section{Acknowledgements}

Funding within the MPG Innovation Initiative ``Multiscale Materials Modeling of Condensed Matter'' is gratefully acknowledged. We are indebted to Dr. Volker Blum and the FHI-aims development team for useful discussions and technical support.
ON THE COVER: The Tarter and Swartz Saloon, located on the southwest corner of Magnolia Street and Franklin Avenue, was one of Pinedale's earliest business establishments. Pictured here are locals at the saloon, from left to right, "Daddy" Landers, Rudolph Swartz in door, unidentified, and Alex Price Sr. in 1905. Like many saloons around the West at this time, it was sometimes referred to as the "Bucket of Blood." (Courtesy of Paul Allen.)

# Pinedale

# Ann Chambers Noble

Copyright © 2008 by Ann Chambers Noble

9781439636442

Published by Arcadia Publishing
Charleston, South Carolina

Printed in the United States of America

Library of Congress Catalog Card Number: 2008926271

For all general information contact Arcadia Publishing at:
Telephone 843-853-2070
Fax 843-853-0044
E-mail sales@arcadiapublishing.com

For customer service and orders:
Toll-Free 1-888-313-2665

Visit us on the Internet at www.arcadiapublishing.com

To the "old-timers" in Pinedale who continue to help me document the area's history, and to my husband, David, my best editor.

# Table of Contents

Title Page
Copyright Page
Dedication
ACKNOWLEDGMENTS
INTRODUCTION
One - THE EARLY YEARS
Two - SUSTAINING A FRONTIER TOWN
Three - BUILDING A TOWN
Four - SERVING OUR COUNTRY
Five - THE TOWN MODERNIZES
INDEX

# ACKNOWLEDGMENTS

Photographs in this volume were gathered from a variety of sources, including the Sublette County Historical Society. Thanks to Laurie Hartwig and Millie Pape for help with this collection and Clint Gilchrist, who gathered many of these photographs on behalf of the Sublette County Historic Preservation Board. I am also thankful to the families who dug through trunks, cabinets, and closets to share their personal albums.
These include Jack Doyle with the Thurston and Leita Doyle photographs, Ralph and Charlotte Faler, Paul and Bette Hagenstein, Dorothy and Cindy Noble, Ruth Noble, Pat and Ben Pearson, Tom and Shiree Prather, Ruth Shriver, Erma Steele Shriver, Mary Ellen Steele, and Dave Takacs. Special thanks to Sue Sommers for her help with scanning and, particularly, editorial assistance. And to my husband, David, for his continued support with my history projects.

# INTRODUCTION

John F. Patterson, known as the founder of Pinedale, proposed establishing a town in the Green River Valley along Pine Creek in western Wyoming. He offered to build and stock a general store if local ranchers Charles A. Petersen and Robert O. Graham each donated five acres for the town site. The three gentlemen agreed to this plan, a surveyor was hired, and the town of Pinedale, named after the post office on Petersen's ranch, became a town on paper owned by these three men. The ranchers' property line would become Pine Street. Founders Day was September 26, 1904, when the first town plat was drawn on a piece of yellow cloth, showing blocks, lots, and streets. Free town lots were offered to early settlers, one of whom was C. Watt Brandon, John Patterson's nephew. Brandon built a newspaper office for his paper, the Pinedale Roundup, which also housed the new post office. More than 100 years later, the newspaper is still in print. The tiny town provided important services for small yet thriving industries in the area. This included supplying provisions for the early "tie hacks" living in mountain camps. "Tie hacks," the men who cut the trees and shaped railroad ties from them, came from around the world for this work. The Union Pacific Railroad expanded its tracks through Wyoming in the early years of the 20th century, creating a demand for ties to support the new rails.
The ties came from pine trees in the mountains surrounding Pinedale that were cut and hewn before being floated down the flooded springtime mountain streams to Green River, Wyoming, where they were initially gathered before being sent to the working railhead. Tourism has always been a source of economic support for Pinedale. Even before Pinedale was founded, "dudes," or guests, paid to be assisted in visiting the mountains. From the late 1800s to the present, tourists have come to enjoy horse-pack trips, fishing, and hunting in the beautiful mountains surrounding Pinedale. The town also became a destination, creating a demand for hotels and later motels. As the town celebrated its heritage with parades, rodeos, and Rendezvous, visitors were often included. Cattle ranching has long supported Pinedale. Great herds of Herefords and Black Angus have roamed the Green River Valley on area ranches, some established before Pinedale. Ranchers and cowboys, along with their families, were among the early settlers of the town. They have long been the pillars of the community, supporting the town economically, socially, and politically. In 1912, Pinedale was incorporated, gaining the popular claim to fame as being the farthest incorporated town from a railroad in the United States. It is nearly 110 miles to Rock Springs, Wyoming, the closest railroad. Ripley's Believe It or Not apparently checked it out and found it accurate. It became a slogan for the town for decades, used on town letterhead and tourist information, as well as the front-page flag for the Pinedale Roundup. Despite the community's isolation, its locals have always been connected to world events. When the United States entered World War I in 1917, Pinedale's young men volunteered to serve in France with the Machine Gun Company, 3rd Wyoming Infantry. Two Pinedale servicemen lost their lives in France in that war.
The body of Sidney Edwards was brought back to Pinedale and reinterred in the Pinedale Cemetery in August 1921. Clifford Phillips remains buried in France. The town's American Legion Post 47 is named in honor of Phillips and Edwards. With help from World War I veterans, Pinedale slowly grew into a community during the 1920s. The town council concerned itself with building streets and sidewalks throughout the decade. Once they were built, the council solicited townspeople to assist in planting trees along the streets, then keeping them watered. Lighting the streets was also a priority at this time, made possible by a small electric light plant. The town nearly doubled in size as new stores, hotels, and other businesses were built along with modest homes. A big victory for the area came in February 1921, when Sublette County was formed via a bill passed by the Wyoming State Legislature and signed by Gov. Robert Carey. A few months later, Pinedale was chosen as the county seat in a heated and controversial election that created hard feelings with the other town in the new county, Big Piney. A decade later, the town was impacted by economic hard times brought on by the Great Depression of the 1930s. Pres. Franklin D. Roosevelt's New Deal programs helped the community when federal aid came with the Civil Works Administration. This program assisted with work on Pinedale streets and the construction of a storage dam near Fremont Lake, approximately three miles north of Pinedale. Additional aid came when the Works Progress Administration provided funds that constructed and improved the water and sewer system, developed the airport, and supplied the labor for a new brick schoolhouse. The largest federal New Deal program in the area was a Civilian Conservation Corps (CCC) camp built on the south shores of Fremont Lake. Several young men from the local area worked at the CCC camp, but their numbers were small compared to the hundreds brought in from around the country. 
Pinedale was again directly impacted by world events when the nation went to war in December 1941. World War II was felt close to home, and the townspeople raised money through war bonds, collected materials for recycling, and took war precautions. Young men, and some women, from Pinedale and the surrounding area joined the military services in great numbers and fought for their country around the world. Two young Pinedale men lost their lives in this war. S.Sgt. Ralph Wenz lost his life in Alaska when his bomber crashed on December 21, 1943, and S.Sgt. Boyd Skinner was killed in action at Iwo Jima on March 10, 1945. In the morning of Memorial Day 1949, the Pinedale airport was formally dedicated as the Ralph Wenz Field, while in the afternoon, the town park was dedicated as the Boyd Skinner Park after the fallen veterans. The postwar boom enjoyed throughout the country in the 1950s was also experienced in Pinedale. During this decade, the town again improved its infrastructure, renovated some of the streets and sidewalks, and expanded the fire department. Private industry also contributed to the town's growth at this time. Telephones were converted to the dial system in 1955; and after a tragic fire, the electric plant was expanded and improved. The electric company also brought television in 1957. The highways were open year-round for the first time, resulting in an expanded Pinedale school district as more of the surrounding rural school districts closed and sent their children to Pinedale. The highways were not traveled extensively, especially in the winter; but rather, the community looked to itself for its needs. The town established its first public medical clinic, built a new high school, added kindergarten to the elementary school, and initiated a county library during the 1950s. The community also built an outdoor swimming pool and ice-skating rink. 
The post–World War II era is also referred to as the cold war, a time when the United States was involved in "police actions" around the world, notably in Korea and Vietnam. Young men from Pinedale again put on military uniforms and answered their country's call to serve. Paying the ultimate price was Mike Wilson, a Pinedale High School graduate from the class of 1968. He was killed in action in Vietnam in June 1969. His final resting place is the Pinedale Cemetery. The 1960s will long be remembered in the United States as a time of social unrest and transition. Voices for social and political change were heard in Pinedale but did not seem to impact those making a living from the land and the limited economy of Sublette County. Pinedale streets were not the setting for protests and marches, but rather continued to be used for cattle drives, homecoming parades, and raceways for chariot and cutter races. Throughout its history, Pinedale citizens have always worked and played hard. Hundreds of people turned out to participate in or to watch rodeos, parades, chariot races, ski-joring, dogsled races, and other events sponsored at the summer and winter carnivals. Nearby Fremont Lake has always been a popular gathering place for Pinedale citizens to relax and enjoy themselves. The deep-blue glacial lake has drawn folks for picnics, boating, water-skiing, and, of course, fishing, even in wintertime. Early photographs of picnickers are often set at Fremont Lake—on the ice and snow. Ice fishing has been popular, despite the subzero weather known for most of the winter months. Another popular gathering spot for the townspeople has been the local ski hill, White Pine. Located about 12 miles from town, White Pine was built on Fortification Mountain at Surveyor Park. In September 1939, a cable tow arrived and, by the first snows, was operating, taking skiers up the hill. The CCC workers further expanded the resort by clearing runs and then using the logs to build a lodge. 
Potluck dinners were hosted for years in this lodge by the locals, who enjoyed their meals after a few runs on the ski hill. Pinedale has long had a sense of its own unique history and heritage, and has found special ways to celebrate it. One especially important commemoration is Rendezvous, beginning in 1936 and held every July, when the community reenacts the early fur-trade era between Native Americans and fur-trading companies. The original Rendezvous of the 1830s were held near Pinedale. Activities during Rendezvous include rodeos, parades, and picnics. While enjoyed by the locals, Rendezvous has also been an important tourist attraction. Pinedale has always experienced relatively small growth. It is a community that economically survived throughout the first half of the 20th century by supporting industry in the area, especially agriculture and tourism. The town's fierce isolation created hardy and independent citizens who were forced to be self-sufficient. Self-reliant Pinedale citizens took care of themselves and one another. Despite the isolation, though, Pinedale people have always served their country when called. People who have survived here were the true rugged individuals identified with the American West, who were also outstanding American citizens. The Pinedale of the past, though, began to change profoundly starting in the late 1990s, with the exploration and mining of natural gas in giant gas fields south of town. The community grapples with rapid change, especially with the influx of new people into Pinedale and the surrounding area who are trying to take advantage of the numerous jobs made available by minerals extraction. Perhaps that is why there is a growing appreciation and an occasional yearning for how life was in Pinedale during its isolated, quiet years during the first half of its history.

# One

# THE EARLY YEARS

Charles A. Petersen is seen here with his family at his first cabin on Pine Creek in the late 1890s.
He would later donate five acres of his ranch land to help create the town of Pinedale. The family's last two children, of eight, were born on this ranch. Petersen claimed he had the first baby in the town, though it was before Founder's Day and south of the town site. (Courtesy of Sublette County Historical Society.) Children pose for a school picture at the Charles A. Petersen ranch in 1903. Children attending this school were from the Cantlin, Bloom, Bayer, Allen, Sweeney, and Petersen families. Frank E. McGrew, not pictured, was their teacher. McGrew, referred to by the locals as "Professor," became the first teacher in Pinedale when the town built its first school in 1904. (Courtesy of Paul Allen.) Pinedale's first saloon was located on the Petersen ranch, pictured here. The large man standing is Charles Petersen. This was the original building that housed the first Pinedale Post Office, until Petersen went into the saloon business; at which time he had the post office turned over to Celia Graham, who took the office into her home. (Courtesy of Sublette County Historical Society.) These freight wagons to Pinedale began their trip in Rock Springs, Wyoming, the nearest railhead town, located 100 miles away. Four and six horses are pulling these wagons. Larger freight wagons often used 12 horses pulling four wagons at a time with loads up to 20,000 pounds. This was the only available means to bring supplies to Pinedale in the early 1900s. (Courtesy of Sublette County Historical Society.) Boots Williams stands on the left next to an unidentified man in front of Pinedale's first building, the Franklin Mercantile Company, owned by John F. Patterson, the town's founder. The photograph was taken in 1905, when the town was only a year old. (Courtesy of Sublette County Historical Society.) C. Watt and Mayme Brandon stand in front of their Pinedale Roundup Building on July 4, 1905. 
Brandon came to Pinedale at the request of his uncle, John Patterson, to start a newspaper. The first issue of his paper, the Pinedale Roundup, rolled off the press on September 8, 1904, a few weeks before the town's Founder's Day. (Courtesy of Sublette County Historical Society.) Early Pinedale settlers Bunch Glover and Mrs. Jack Reynolds travel in a two-horse open sleigh in the early 1900s. Because of the long winters, sleighs were a necessary form of transportation. (Courtesy of Sublette County Historical Society.) This eight-horse freight train is pulling three wagons plus the cooster. It is on Franklin Street, in front of the Pinedale Roundup Building and the Woodman Hall, about 1908. The freighter's "home on the road" was the cooster, set up much like a sheepherder's wagon. A trip from Rock Springs to Pinedale usually took two weeks, weather and floods cooperating! (Courtesy of Sublette County Historical Society.) Fremont Lake was a popular gathering place year-round. This group of Pinedale locals was fishing and picnicking on January 17, 1904. From left to right are Elsie Winn Faler (Mrs. Ralph Faler), a Mrs. Winn, Beulah Montrose, Phil Burch, Nettie Hoff, Jennie Faler, Alice Montrose, Bunch Glover, Dr. J. W. Montrose, Lena Edmunson, Bert Clark Sr., Lee Edmunson, Frank McGrew, and Ralph Faler. (Courtesy of Sublette County Historical Society.) On July 17, 1910, a Community Church building was dedicated on the corner of Mill Street and Maybel Avenue, facing west on land donated by John F. and Maybel Patterson. The Congregationalists were largely the builders of this church, but it was initially dedicated as a Community Church to be used by different denominations. (Courtesy of Ellen Cole.) This early photograph of Pinedale looks south down Franklin Avenue in 1906. Buildings on the left (east) side of the street are the Pinedale Roundup Building and post office, Woodman Hall (the first two-story building in town), and the schoolhouse. 
On the west side, from left to right, are Sturdevant's drugstore, Franklin Mercantile Company or Patterson Store, and the Patterson home. (Courtesy of Paul Allen.) The Pines Hotel, often referred to as the Fardy Hotel, was the first building facing Pine Street. Under construction here in 1913, it is pictured with a typical "freight train" of the early 20th century pulling onto the street. (Courtesy of Sublette County Historical Society.) Winters are long in Pinedale. Battling subzero temperatures and heavy snows are a way of life. Pictured here are early settlers clearing spring snows on the road to Pinedale from Rock Springs. Prior to World War II, Pinedale residents were isolated in the community throughout most of the winter. (Courtesy of Sublette County Historical Society.) Until 1919, eighth grade was the highest level taught at the Pinedale School. This photograph shows the first eighth-grade graduating class in front of the Woodman Hall in 1911. At left is a Mr. Webber, the teacher. The boys in the car's front seat are (from left to right) Frank Allen, Ira Bourm, and Jim Landers. In the back seat are Tru (Gertrude) Allen, Bertha Cantlin, Lee Wright, and Jane Jones. (Courtesy of the Paul Allen Collection.) This south facing view of Main Street in Pinedale was taken in 1918. The official street name is Franklin Avenue, named by town founder John F. Patterson's oldest son. Note the early electric lines on the right-hand side. Electricity was available in Pinedale as early as September 1904. Only a few hundred people lived here at this time, yet there are several cars. (Courtesy of Sublette County Historical Society.) An unidentified rider pauses in front of the Pinedale Inn on Franklin Street in the 1920s. Western writer Zane Grey stayed here on his visits to Pinedale. Also pictured to the right of the inn are the telephone company; the State Bank of Pinedale; and the Jones, Son, and Company General Mercantile. 
Note the automobiles in the street with the horseback rider. (Courtesy of Sublette County Historical Society.) In 1916, the Bourm Hotel, operated by Henry Clodius, was commonly known as the Pinedale Inn. The hotel is the two-story building next to the telephone company and the State Bank of Pinedale on south Franklin Street. Note the early electric lines and trademark pine trees along Pine Creek. (Courtesy of Sublette County Historical Society.) Madelyn and Frances Wilson play in front of their home in Pinedale in the 1920s. Despite the isolation of the community, note the fashionable clothes and hairstyles the girls are wearing. (Courtesy of David Takacs.) Wilson Hall was built in 1923 by John and Dave Wilson on the south side of Pine Street between Fremont and Sublette Avenues. This was an important community gathering place for movies, plays, dances, fund-raisers, and similar events for decades. The Wilson home is directly behind the hall. (Courtesy of David Takacs.) Children from grades one to eight were educated in this schoolhouse from 1912 to 1923. Only town and nearby ranch children were able to attend in winter because the heavy snows made travel difficult. The white clapboard building was the town's second schoolhouse. The original school was built in 1904 but was too small for the growing town by 1912. (Courtesy of Sublette County Historical Society.) Fremont Lake is frozen solid usually from December until May every year. One popular activity throughout the years has been ice-skating, though clearing snow from the ice is usually required. Pictured above are two local girls taking a break from their ice-skating excursion on the lake in the 1920s. (Courtesy David Takacs.) Dave Wilson is seen at left cutting ice out of Half Moon Lake. Ice blocks cut from lakes or creeks during the winter were stored in "icehouses," usually insulated with sawdust, to be used in the summer. The blocks of ice were the only form of refrigeration for the early settlers. 
(Courtesy of David Takacs.) The photograph on this postcard, featuring children on and around a small burro, was likely a typical sight. The community was home to many young families, and everyone used animals for work, transportation, and, occasionally, pets. (Courtesy of Sublette County Historical Society.) This line up of workhorses is pulling a building to another location. It was common for the early, simple, log buildings to be moved. Schoolhouses may have been the most frequently moved structures, especially on ranches, where they were moved to be closest to the most children. (Courtesy of Mike and Ruth Noble.) This was the first hotel in Pinedale, built in 1904 by E. N. Sprague and located on the town's main street, at that time Franklin Avenue. Later owners John W. and Minnie Bloom referred to it as the Bloom Hotel until they renamed their establishment the Old Trails Hotel in the 1920s, as it appears here. (Courtesy of Sublette County Historical Society.) The Fardy Hotel, formally dubbed the Pines Hotel, was owned and operated by Gus and Ida Caviter Fardy until Gus's untimely death in 1931, at which time Ida Fardy took over the hotel, restaurant, and bar. Her frugal business practices kept her in business during the Great Depression of the 1930s. She was long remembered for her generosity toward those in the community most in need. (Courtesy of Sublette County Historical Society.) Jones, Son, and Company was formerly the Franklin Mercantile and was the town's first store. The Pinedale Roundup of June 22, 1911, reported: "The dance given by Jones, Son & Co. last Saturday evening in the new store building was well attended and a fine time had by all, many were unable to attend on account of high water, weather conditions and the short notice." (Courtesy of Dorothy Noble.) This 1920s scene of Pine Street features the Pines Hotel on the right. Note the automobile on the right-hand side of the street, while the left side still has hitching posts for horses. 
Pine trees planted by the townspeople along the street were thriving. (Courtesy of Dorothy Noble.) Arthur Faler and his wife, Christine Petersen, stand behind their four children in this family portrait, taken in 1915. The Falers were among the earliest settlers in the area. The elk head on the wall came from an elk Arthur killed; the mount was later given as collateral to Billie Postel for hay. Apparently, Faler never went back for the mount with money to pay for the hay. (Courtesy of Sublette County Historical Society.) The hunter, identified only as Boulsby, shows off his elk trophy and gun. The trophy heads were coveted but so too was the meat from the animal. A large kill could sustain a family during the winter. (Courtesy of Sublette County Historical Society.) Road building and improvements were ongoing for the early Pinedale settlers. Pictured here is an early Fresno scraper at work in 1920. Horse-pulled Fresnos were used to scrape and move dirt. (Courtesy of Ralph and Charlotte Faler.) River crossings have long been a challenge for travelers, including those in automobiles. This traveler is being assisted with a pull across the Hoback River north of Pinedale at the V-V Ranch. The dudes of Pinedale resident Thomas Lars Clementsen are on their way to Yellowstone National Park in 1915. (Courtesy of Pat and Ben Pearson.) Fremont Lake's most famous boat was the Laura E., owned by John H. ("Beer Jack") Anderson, the mayor of Rock Springs and a saloon owner. It is seen here floating into the peaceful Box Bay on Fremont Lake. The Laura E. was hauled from Rock Springs by a wagon and six horses in May 1914 and was launched with ceremony. Built by the Brooks Boat Company of Michigan, it was 30 feet long, had a 9-foot beam, and was mostly enclosed, with an elegant wood-and-glass cabin. It was powered by a 2-cylinder, 14-horsepower engine. The mayor planned to move the city government to the boat during the hot summer months, which is not believed to have happened. 
Wyoming photographer Joseph E. Stimpson took this photograph. (Courtesy of Ann Chambers Noble.) Ralph R. Doubleday was a famous rodeo photographer, capturing action shots of the sport from 1910 to 1965. He would get in the arena with the rodeo contestants to obtain his pictures. He captured this photograph at a rodeo in Pinedale in 1915. "Riding them straight up" is what he called this picture. The rider looks sharp with his angora chaps, white shirt, and tie! (Courtesy of Ralph and Charlotte Faler.) In this 1915 Pinedale rodeo picture, the cowboys are "earing 'im down." This was a rodeo practice of twisting a blindfolded horse's ear, or even biting it, to subdue the animal. This usually brought the horse to a kneeling position, enabling a cowboy to mount before his ride. (Courtesy of Ralph and Charlotte Faler.) Less Faler is photographed here riding a bucking horse in 1919. "Breaking horses" required the rider to stay on until the horse quit bucking. Eventually, the horse would, hopefully, be able to work with a cowboy rather than just give him a ride. (Courtesy of Ralph and Charlotte Faler.) A popular entry was the Mess Wagon Race, captured here by rodeo photographer Ralph R. Doubleday at a 1915 Pinedale rodeo. These competitors were required to bring in their wagons, light fires, prepare and eat dinner, set up bedrolls, go to "sleep," and then pack up and leave as fast as possible to beat their competitor. (Courtesy of Ralph and Charlotte Faler.) Vint Faler poses with one of his life's prized possessions, Jim Baker's rifle. Baker, a 19th-century frontiersman, mountain man, and government scout, gave the gun to Faler in 1885 when Faler was only a boy. Faler moved to the Pinedale area in 1889 with his family. The Falers were among the earliest white settlers in the area. (Courtesy of Ralph and Charlotte Faler.) Vint Faler's freight team was a familiar sight along the Wind River Mountains throughout the early 20th century. 
Faler's jerk-line team delivered loads from Rock Springs and Green River to South Pass and Pinedale. Faler and his horses retired from freighting when gasoline-operated vehicles replaced them. (Courtesy of Ralph and Charlotte Faler.) # Two # SUSTAINING A FRONTIER TOWN Early ranchers used horses for all their work. Taming wild or young horses was an ongoing chore for the cowboy. This work was referred to as "breaking" a horse to ride or work. It took great skill, and this talent became the basis of the rodeo. Shown here is an area cowboy in the 1920s on a "buckin' horse" at a local rodeo. (Courtesy of Sublette County Historical Society.) Felling trees was the first step in the work of tie hacks and loggers. Pictured here is a forest harvested for logs. It was common to harvest logs in the winter and then gather them in springtime. Note the high tree stumps in this photograph, indicating a winter cutting. (Courtesy of Sublette County Historical Society.) Cy Kelly is seen here driving an ox team pulling large logs to town for a construction project. Sawmills were set up in Pinedale and the surrounding areas during the early settlement years, enabling the town to provide lumber for its own building projects. (Courtesy of Sublette County Historical Society.) Men came from around the world to cut the pine trees in the Wyoming mountains. The trees they cut were hewn before floating down rivers to be used to lay the tracks for the transcontinental railroad. These workers became known as tie hacks. Pictured here is lunchtime for the tie hacks around 1904 in Kendall, north of Pinedale. (Courtesy of Sublette County Historical Society.) The Union Pacific expanded its railroad service in Wyoming in the early years of the 20th century, creating a demand for ties to support the new rails. In springtime, the rivers around Pinedale were full of freshly hewn logs, making their way to new rail lines across southern Wyoming. 
Pictured here are rail ties headed down river to the town of Green River. (Courtesy of Sublette County Historical Society.) Irv Lozier, owner of the Box R Ranch, is shown at left in 1904. Lozier's dude ranch is located in Cora, north of Pinedale. Lozier and his outfit took guests into the Wind River Mountains on horse-pack trips for fishing and hunting expeditions. (Courtesy of Irv Lozier.) Guests coming to the Box R Ranch often traveled from far away to enjoy a hunting expedition in western Wyoming. Pictured below is a four-horse wagon bringing dudes on the last leg of their trip to the Box R Ranch from Opal, the railhead west of Kemmerer, in September 1903. (Courtesy of Irv Lozier.) From left to right, Box R guests Hubert Litchfield Jr., H. Sampson Jr., and C. A. Comstock relax at the ranch in October 1904. The gentlemen came to the area for an elk-hunting trip, which was successful. Guests often stayed several weeks. (Courtesy of Irv Lozier.) Western painter Carl Rungius spent time in the mountains around Pinedale based out of the Box R Ranch. Rungius often painted from photographs he took while on trips with the Loziers. Rungius later made this September 1904 pack-trip photograph, taken in Green River Canyon, into a painting. (Courtesy of Irv Lozier.) Snows come early in western Wyoming. Irv Lozier rides his horse through belly-deep snow near his Box R Ranch, leading hunting guests. (Courtesy of Irv Lozier.) Guests at the Box R Ranch camp in a few feet of snow while on a hunting trip in 1904 at Heart Lake in the Wind River Mountains. Snowstorms were common on hunting expeditions in western Wyoming. (Courtesy of Irv Lozier.) Irv Lozier packs out an elk head and rack for one of his guests, Henry Sampson Jr., in October 1904. Hunters came from around the world for big-game hunting trips at the Box R Ranch, near Pinedale. (Courtesy of Irv Lozier.) Irv Lozier brings out a string of guests and packhorses from an elk- and moose-hunting trip in 1904. 
Hunters were usually successful in getting their trophy animals in the Wind River Mountains at this time. (Courtesy of Irv Lozier.) Horse-pack trips into the mountains surrounding Pinedale, particularly in the Wind River Range, have long been a popular activity for locals and tourists alike. Pictured here is a horse-pack string in the high country around 1930. (Courtesy of Sublette County Historical Society.) Horses were able to get riders and their gear high into the Wind River Mountains. Pictured here are packhorses resting at a high mountain lake in 1930. (Courtesy of Sublette County Historical Society.) Walt McPherson (left) and Carroll Richard Noble are seen here enjoying the Wind River Mountain high country. They built the raft for their fishing expedition, with plans to "catch the big one!" Many visits to the mountains were also successful fishing trips. (Courtesy of Mike and Ruth Noble.) A Mr. Basham from Missouri is pictured here with his mountain sheep trophy in the early 1910s. The Wind River Mountains are home to several big-game animals, such as the mountain sheep, and the range has long been a popular hunting area. (Courtesy of Mary Ellen Steele.) Pinedale rancher Frank Steele (right) is leading his hunting friend from Missouri, a Mr. Basham, back to the Steele ranch after a successful mountain sheep–hunting trip in the Wind River Mountains in the early 1910s. Both men's packhorses are carrying their hunting trophies. (Courtesy of Mary Ellen Steele.) Horses were used to navigate the difficult mountain terrain of the Wind River Mountains. Photographed here is Pinedale rancher Frank Steele with his friend, a Mr. Basham. Their packhorses are carrying their successful mountain sheep trophies. (Courtesy of Mary Ellen Steele.) Fremont Lake, located three miles north of Pinedale, has long been a popular place for locals and tourists to relax and visit. Pictured here is a fishing party in 1915, with the ladies showing off their day's fishing catch. 
(Courtesy of Sublette County Historical Society.) The General John C. Fremont excursion boat offered locals and tourists cruises around Fremont Lake starting in 1911. In October 1912, the big boat sank while it was tied up near Box Bay when a small boat tied to its side wore a hole in it. It was pulled to shore and repaired but later sank again, this time in deeper water southeast of Box Bay, where it remains. (Courtesy of Sublette County Historical Society.) Pinedale resident Elmer Faler sits with his fishing trophies in the 1940s. Faler's German Brown trout were caught near the power plant on Pine Creek. (Courtesy of Ralph and Charlotte Faler.) It was not unusual for women to homestead in Wyoming. This is Freida Noble Hittle's homestead cabin in Bondurant, north of Pinedale. Note the outhouse in back. It was common to see automobiles at various ranch homes in the early 20th century, but most of the ranch work was done with horses. (Courtesy of Mike and Ruth Noble.) The James and Minerva J. Westfall family proudly poses in front of their cabin on their homestead north of Pinedale in the 1890s. Note the baby antelope in front and next to the young man. This structure has already had an addition put on the back and has glass-paned windows. (Courtesy of Mike and Ruth Noble.) Pictured here in the late 1920s at the James Noble Ranch north of Pinedale is Shoshone chief Neep-a-Water (kneeling, front), surrounded by his family from the Wind River Reservation. James Noble stands on the far right next to his wife, Pauline. Also pictured are seven unidentified people, Nels Jorgensen, and Ted Wineman and his wife from Pennsylvania. (Courtesy of Mike and Ruth Noble.) James Noble (left), Pauline Rahm Noble (center), and Shoshone chief Neep-a-Water (right) were photographed in 1928 in front of the Noble cattle herd in Cora. The cattle are Black Angus and would become part of the oldest continuous Black Angus herd in Wyoming. The span of the Wind River Mountains is visible from the ranch. 
(Courtesy of Mike and Ruth Noble.) Cattle from local ranches grazed in the high mountains north of Pinedale in the summer and on the open mesa south of town in the spring. Gathering the cattle took numerous cowboys, such as the group pictured here, to cover the miles of ranch terrain. (Courtesy of Sublette County Historical Society.) Carl Jorgensen, wearing angora chaps on the left, was one of the early Pinedale cattle ranchers. He is seen here on horseback pushing a cow and her calf, likely to summer range. (Courtesy of Sublette County Historical Society.) Arthur Faler stands with his wolf kill in the early 1900s. Wolves were killed in the area to enable cattle and sheep ranching to survive. At various times, the United States and Wyoming state governments paid hunters to kill the predators. Ranchers also formed bounty associations and paid dues to fund predator control. (Courtesy of Sublette County Historical Society.) Pinedale-area trappers display their harvest in the early 1900s. Trappers were paid by the pelt, bringing a welcome income where paychecks were hard to get. (Courtesy of Sublette County Historical Society.) Frank Steele is pictured here around 1912. Steele came to the area in the late 1880s as a baby when his parents, Ed and Emma P. Steele, homesteaded in the Boulder area, east of Pinedale. Frank later moved to the west side of Pinedale, where several of his family members homesteaded on the New Fork River. (Courtesy of Erma Shriver.) "Look at the camera," the boy in the saddle seems to be saying to his younger brother sitting behind him. The boys, James Richard and Carroll Richard Noble, lived on a Pinedale-area ranch. They are pictured here returning from a hunting expedition in the early 1900s. Hanging on their saddle with their shotgun appear to be a badger and two sage grouse. (Courtesy of Pat and Ben Pearson.) Many children were born and raised in Pinedale and on area ranches. 
Pictured here in the early 1930s are, from left to right, Mike Noble, Bill Thompson, and the Feltner children—Wayne, Juana, and Elma. The baby is not identified. (Supporting the baby is a mother behind the horse.) The patient horse, Old Pig, seems content with her load. (Courtesy of Pat and Ben Pearson.) A rancher drives his team of horses through several feet of snow to feed his cattle. Snow depths between six and eight feet were common in the early 20th century. The snow often came early and stayed late, covering the ground from October until May. (Courtesy of Sublette County Historical Society.) Members of the Wilson family are bundled against the cold temperatures. Winter travel by horse and sleigh could be quite frigid. (Courtesy of Dave Takacs.) Common forms of winter transportation in the early 20th century were dogsleds and skis. Seen here are ranchers returning on skis with supplies or mail pulled on sleds by their dogs. (Courtesy of Sublette County Historical Society.) The dog's load is a lucky rider in this winter photograph. The other person pictured is using skis, which were usually homemade. Leather straps connected the skier's boots to long, wide boards. (Courtesy of Sublette County Historical Society.) Grace Alexander and Clara Alexander are attending to the dishes at their home on the upper New Fork River in 1907. The Alexanders were one of the earliest families to homestead in the Upper Green River Valley. (Courtesy of Sublette County Historical Society.) An early rancher and his team of horses plow this field to establish a hay meadow. At Pinedale's high altitude, starting at 7,175 feet, area ranchers are able to harvest only one hay crop annually. With the long winter, getting the most out of the crop was critical. (Courtesy of Sublette County Historical Society.) Mary Ellen Steele watches over the Steele family cattle in November 1947 on their New Fork River homestead, located west of Pinedale in view of the Wind River Mountain range. 
Like many ranchers in the area, the Steeles raised Hereford cattle. (Courtesy of Mary Ellen Steele.) Pinedale townspeople often helped area ranchers with their brandings. Pictured here are several cowboys and helpers at the Noble Angus Ranch in Cora, west of Pinedale. The pay for the hard day's work was a big dinner prepared by the ranch wife. (Courtesy of Mike and Ruth Noble.) Mike Noble (center) is getting ready to grab the roped calf for branding. Area ranchers took turns helping one another at brandings, which required many extra workers. The brands put on the cattle were often handed down through the family for generations and were a source of pride for the ranchers. (Courtesy of Mike and Ruth Noble.) Carroll Noble, on horseback, and his son Mike are trying to sort their Black Angus cattle in the corral. The animals were often unruly after a summer on the open range. It took an astute cowboy to work with the cattle. (Courtesy of Mike and Ruth Noble.) A single horse is pulling a wagon and dump rake with hay crewmembers across the Green River. Rivers could be challenging to cross with all the haying equipment, even in the low-flow, late-summer waters. Note the rubber tire dangling inside the large metal wheel. This helped prevent hay from tangling between the rake teeth and wheel. (Courtesy of Mary Ellen Steele.) A Pinedale-area rancher is bringing in the hay crop in this photograph. The two horses are pushing a sweep, which gathers windrows of hay into big piles. The piles were then pushed up the beaver slide, a large log slide, into the stack. Ranchers often made their own haying equipment. (Courtesy of Erma Shriver.) Ranchers are making hay on the Steele Ranch in August 1958. The driver to the left is sitting on a tractor modified to work as a sweep, which brings piles of hay to the base of the slide. In the middle is the plunger truck, which is responsible for pushing the hay up the beaver slide, visible on the far right edge of the photograph. 
(Courtesy of Mary Ellen Steele.) Working on the stack, these two men, appropriately called "stackers," are waiting for their next load of hay, which is being pushed up the beaver slide by horses driving the plunger. The stackers' job required careful placement of the hay on the stack to ensure it would stay. Stackers often took great pride in how their stacks looked when completed. (Courtesy of Erma Shriver.) In this Pinedale haying photograph, again on the Steele Ranch in 1958, a load of hay is pushed up the beaver slide onto the stack with a truck. Note the man working as the stacker near the top of the haystack. (Courtesy of Mary Ellen Steele.) Harold Sanborn was a Denver photographer who documented Colorado and Wyoming from the 1920s until the 1960s. Many of his photographs were made into postcards. This Sanborn photograph of the "Wind River Range From Pinedale, Wyoming" was taken in the 1930s and has his trademark name and photograph number in the bottom right corner. This particular postcard was sent by Nean Noble to her oldest son, Sgt. Carroll L. "Mike" Noble, who was in the army during World War II in the Philippines. "Dear Mikey," the message reads, "I thot this was a good mountain scene. Love from all, Nean." (Courtesy of Mike and Ruth Noble.) An early Pinedale rancher is bringing in the hay with his sweep. Dogs often accompanied the ranchers in all their work, as shown in this picture. (Courtesy of Sublette County Historical Society.) The driver of this sweep sits on a simple metal seat jutting from the back of the equipment. Ranchers claimed these were very comfortable. Well-trained horses maneuvered equipment around the field efficiently. (Courtesy of Sublette County Historical Society.) Frank Steele and family friend Leonna Mae Allen visit in front of his ranch house in the 1950s. Note Steele's modified tractor. It was common for ranchers to cut down an old car and move the axles closer together. 
These modified vehicles were able to turn more quickly, making them more useful for ranch work. (Courtesy of Erma Shriver.) Ranch children started working in the hay fields at young ages. Donald Shriver sits in the driver's seat while Fred Shriver sits on the left next to Frank Shriver during a 1950s haying season. The Shriver family purchased a ranch west of Pinedale in 1944 and always used vehicles in their operation; their neighbors were often still using horses. (Courtesy of Erma Shriver.) Fred Shriver is pictured here harvesting meadow foxtail seeds on August 3, 1960, in fields west of Pinedale. Shriver purchased a combine for the harvesting when the prices were good for the new seed. He usually yielded 200 pounds of seed per acre. (Courtesy of Erma Shriver.) # Three # BUILDING A TOWN A wolf pup is standing on the hood of a Ford Model T parked in front of an early U.S. Forest Service office in Pinedale. The U.S. Forest Service has had a presence in Pinedale since the spring of 1905 when Supervisor Zeph Jones moved the office onto two lots donated by the town. Jones would serve as Pinedale's first mayor from 1912 to 1913. (Courtesy of Sublette County Historical Society.) A major event for the usually quiet and isolated town occurred on July 8, 1926, when the crown prince of Sweden visited Pinedale. Townspeople and folks from the surrounding communities came to greet the royalty. Pictured here is the welcoming crowd for the prince gathered along Franklin Avenue at the intersection of Pine Street. (Courtesy of Paul Allen.) This early parade down Pine Street features a posse of horses and riders in front of a few automobiles during the 1930s. Parades were a common part of the town's Fourth of July celebrations. (Courtesy of David Takacs.) Prior to World War II, most of the people living in and around Pinedale did not own cars. Transportation services, therefore, were important. Pictured here is Walter Scott, owner of the Scott Stage Company, in his vehicle. 
Scott made regular trips to and from Rock Springs. (Courtesy of Jack Doyle.) Art Doyle sits on his tractor, attached to the road grader on the left, which he used to maintain the road from Pinedale to Rock Springs throughout the 1920s. Note the tractor's metal wheels. Doyle lost his leg at 16 years of age while working cattle in Mexico. In his later years, he would tell people that he climbed into a haystack, fell asleep, and a pig came and ate his leg off. (Courtesy of Jack Doyle.) This Sanborn photograph captures Pinedale with automobiles and horse-drawn wagons in the late 1930s. The photographer is standing on the east hill to take this panorama shot of the town, looking west. On the left is the county building, with several businesses and homes visible on the right side of Pine Street. The hill in the middle of Pine Street would later be leveled. (Courtesy of Albert "Sunny" and Fanny Korfanta.) The second two-story building in Pinedale, pictured here, was the Masonic Temple, built in 1928 and home to Franklin Lodge 31 A.F. and A.M. This building, located on the southeast corner of Tyler Avenue and Magnolia Street, was built by the Wilson brothers. The group was organized in 1911 and originally met in the Woodman Hall. (Courtesy of Sublette County Historical Society.) The Sublette County building is seen here shortly after its completion in 1931. Louis H. Hennick had donated the land for the site. The two-story building, a modified Colonial design, housed the county courtroom, judge's chamber, jury room, and county attorney and clerk of court offices. The sheriff's, treasurer's, and assessor's offices were also located here, along with the jail. (Courtesy of the Sublette County Historical Society.) It is believed that every youngster who grew up in Pinedale between the 1930s and the 1950s had Madge Funk for a teacher. Well remembered for her musical training, Funk is seen here during the 1930s in front of the schoolhouse with her young rhythm band, in costumes. 
(Courtesy of Ralph and Charlotte Faler.) The Pinedale High School girls' basketball team is pictured here in 1932. From left to right are (sitting) Nadine Mortimer, June Healey, and Rita Faler; (standing) Alice Sargent, Ruth Jones, Eunice Healey, Eloise Westley, and Francis Bloom. (Courtesy of Erma Shriver.) Young adolescents appear mischievous and happy in this class photograph. The students stand in front of the Pinedale School building in the spring of 1932; their teacher, May Sommers, is in the back row. Students came from the Swartz, Clark, Rahm, Cantleberry, Westley, Montimore, Nelson, Ervin, Adrey, Holt, Easton, Cooper, Dalley, Cantlin, and Healey families. (Courtesy of Erma Shriver.) The Wilson Photo Studio, along with an apartment, was an addition to the family's soda pop factory. When the building contained only the soda pop factory, begun in 1927, the children referred to it as the "Pop House." Barrels of unmixed soda pop hung from the ceiling on chains. The children would "ride" the barrels, swinging back and forth until the ingredients were thoroughly blended. (Courtesy of David Takacs.) Alyna (left) and Dave Wilson enjoy their "rock garden" behind the Wilson Photo Studio and their home in 1937. Their proud garden creation worked well for the short summers and high altitude. (Courtesy of Dave Takacs.) Pinedale's main street, Pine Street, is seen here in the 1930s. Note the many pine trees planted throughout town by volunteers under the leadership of C. C. Feltner, including the landmark tree in the middle of Pine Street. The Wyoming Highway Department permanently removed the tree in the middle of the street when the road was paved in the late 1950s, but a few cars almost removed it several times prior. (Courtesy of Sublette County Historical Society.) Fred and Jay Mollring opened their Pioneer Pinedale General Store in the early 1920s after buying out Jones, Son, and Company. 
In addition to groceries, dry goods, and clothing, the Mollring store sold ranch equipment and supplies, hoping cattle prices would be good enough for ranchers to pay off their charged goods once a year. (Courtesy of Paul Allen.) The cornerstone for St. Andrew's in the Pines Episcopal Church was laid on May 4, 1938. The log structure was located on the south side of Pine Street, along the banks of Pine Creek. In 1953, the whole building was lifted and placed on top of a basement, adding space for church work and socialization. When the congregation outgrew this building, it was moved to private property across the street. (Courtesy of Paul Allen.) Paul Hagenstein Sr. built the Pinedale Garage in 1935 and ran the business until 1944. The business offered Conoco gasoline, Pennzoil, and Goodyear tires. The building stands on the southeast corner of Pine Street and Lake Avenue. (Courtesy of Sublette County Historical Society.) The Fardy Hotel, originally called the Pines Hotel, is seen here in the 1940s after a major expansion. Located on the corner of Pine Street and Maybel Avenue, the original hotel was enlarged to include a restaurant, bar, barbershop, and offices. It housed numerous travelers as well as ranch children who attended high school but lived too far away for a daily commute to school. (Courtesy of Sublette County Historical Society.) There is not much traffic on Pine Street in this 1930s photograph. Note the pine tree in the middle of the street at the top of the street's knoll (the street would later be leveled out). The car on the right is parked in front of the Pinedale Roundup Building on Tyler Street. (Courtesy of Ann Chambers Noble.) The Pinedale School, seen here on the right, housed all the grades, from elementary to high school, when it opened in 1937. It was only the grade school by the time it closed in 1987; the high school moved to a new building in the 1960s. The old school was built on property donated by Louis H. 
Hennick and was located south of the courthouse, pictured on the left. (Courtesy of Paul Hagenstein.) The Pinedale second graders of 1940 pose here for their concert featuring the Virginia reel. Their proud teacher, Marilyn Summers Jensen, was a lifelong teacher and resident. (Courtesy of Ralph and Charlotte Faler.) Posing on the front steps of the schoolhouse is a Pinedale elementary class in 1940–1941. With improved roads and modes of transportation, some rural school districts were closing. The Daniel, Cora, Boulder, Eastfork, Bondurant, and Bronx school districts began busing their children to Pinedale for school starting in the 1940s. (Courtesy of Paul Allen.) The Pinedale High School football team in 1942 is pictured with its coach, a Mr. Wright. The Wranglers won 7 out of 10 games that year, their most successful season during the four years Pinedale participated in the sport. The boys played teams from Reliance, Diamondville, Jackson, Eden, and Big Piney. (Courtesy of Mike and Ruth Noble.) Pool halls were popular gathering places in Pinedale. Early residents Ralph Faler and the Hoff brothers are among those seen here in the Bayer Pool Hall on Franklin Street. Owner Allie Bayer offered soft drinks and billiards, but not alcohol, from 1920 until 1933 due to Prohibition. Gambling, however, was legal throughout this time. (Courtesy of Harold Faler.) Lester Faler stands behind the bar at his business, the MF Corral, named for his partner Judson (Mac) McCormick and himself. The MF Corral originally opened in the old Wilson Hall on Pine Street in the 1930s, as seen in this photograph. It would later move to a new location farther west along Pine Street. (Courtesy of Ralph and Charlotte Faler.) Judson (Mac) McCormick stands on the left next to his longtime business partner, Lester Faler. The two men, seen here in their older years, ran one of Pinedale's most popular gathering places, the MF Corral. The bar was in business on Pine Street for decades. 
(Courtesy of Ralph and Charlotte Faler.) The long winters have never been a deterrent for enjoying the great outdoors. Pictured here are early downhill skiers. After hiking up the mountain, they used a single pole to steer and slow down during their descent. From left to right are unidentified, Ed Hicks, Sam Hicks, Buzz Farwell, and Jack Hicks. (Courtesy of Swede McAlister and Jonita Sommers.) On January 5, 1940, the Surveyor Park Ski Area officially opened 12 miles north of Pinedale at Fortification Mountain. Skiers were pulled to the top of the hill, as shown here, by a cable tow powered by a Chevrolet motor mounted on a concrete slab. The local Civilian Conservation Corps, operating from a camp near the ski hill in the 1930s, cleared the trees from the hillside for the ski runs. (Courtesy of David Takacs.) Frances Wilson stands beside a feed horse on the Eklund Ranch, east of Pinedale. The feed sleigh could carry up to a ton of hay, which was scattered on the snow for the cattle and horses. The hay first had to be loaded onto the sleigh, then off again around the field, all by hand with a pitchfork. (Courtesy of Dave Takacs.) Jack Funk is seen here at Christmas enjoying a drink in the company of his canine friend, Rusty, in the 1930s. Funk, a government predator trapper, moved to Pinedale in 1929 or 1930. It is believed that he was the model for the main character in Zane Grey's book Man of the Forest. (Courtesy of Paul Allen.) James Jorgensen (left) and Carroll Richard Noble show off their elk. This hunting trip in 1928 was "on the Rim," or the area near Fall Creek north of Pinedale, which boasted a well-known elk habitat. Note the angora chaps on both hunters. (Courtesy of Pat and Ben Pearson.) A local predator hunter displays his jackrabbit harvest in the 1940s. Residents were known to use the meat and pelts from these hunts. (Courtesy of Ralph and Charlotte Faler.) This elderly fisherman caught his giant trout on a fly rod near Pinedale. 
Fishing has always been a popular sport in the area, and the catch has often been a welcome meal. (Courtesy of Sublette County Historical Society.) Val McAlister, pictured below on the left with an unidentified friend, shows off the day's catch. The successful Pinedale fishermen are at Shoal Lake, a popular fishing place for locals. (Courtesy of Ralph and Charlotte Faler.) The Sublette County Historical Society sponsored the first Rendezvous reenactment in 1936 to commemorate the 100th anniversary of the original fur-trade rendezvous held near Pinedale. The early reenactments, such as the one photographed here, were held in Daniel, west of Pinedale, near the original rendezvous site. (Courtesy of Sublette County Historical Society.) Fur-trade rendezvous reenactments were elaborate affairs, complete with teepees and two- and four-horse Conestoga wagons. Actors donned costumes depicting the mountain men, Native Americans, and early white missionaries. Pictured here is an early reenactment at the Daniel Rendezvous grounds. (Courtesy of Sublette County Historical Society.) Daniel, west of Pinedale, was the site of the original Rendezvous in the 1820s and 1830s; the last one was held in 1840. The actors here are performing on the same ground where the original events took place more than 100 years before. (Courtesy of Ralph and Charlotte Faler.) Pinedale residents and area ranchers were the featured actors in the Rendezvous reenactments. In addition to providing their own horses, they made their own costumes. Photographed here is the cast proceeding to their performance. (Courtesy of Ralph and Charlotte Faler.) On the night of March 11, 1939, a fire started at the Murphy Warehouse and Lumber Company. When it was over, the new drugstore, the Elk Café, and the power plant had all gone up in flames, along with the warehouse. Fire suppression was hampered by frozen fire hydrants. It was the worst fire in Pinedale's history. 
(Both, courtesy of Albert "Sunny" and Fanny Korfanta.) # Four # SERVING OUR COUNTRY Frank Allen stands at left with a buddy in France during World War I in 1917. Allen volunteered to serve with the U.S. Army, as did nearly every young man from the Pinedale area. (Courtesy of Sublette County Historical Society.) Recruitment for local soldiers began in Pinedale in April 1917 immediately after America's entrance into World War I, or the Great War as it was referred to at the time. In a front-page news story, Capt. H. H. Waugh of Kemmerer called for volunteers and the formation of a machine gun company in this section of the state. It would be known as the Machine Gun Company, 3rd Wyoming Infantry. On August 3, 1917, a caravan of automobiles, pictured here, was secured to take the young volunteers to Rock Springs, where they would catch the train to Cheyenne and then report for duty. The Pinedale Roundup reported of the departing event: "They were assigned to the capacity of each car and the good byes were said with many a broken voice, and a touch of sadness overspread the populace who had turned out to wish them God speed and a safe return as the order to proceed came." (Courtesy of Sublette County Historical Society.) Young volunteers lined up in front of town hall prior to departing for service in the U.S. Army in 1917. Pinedale was the headquarters for the machine gun company formed from volunteers in the western part of Wyoming. (Courtesy of Sublette County Historical Society.) This photograph, printed on a postcard, is dated August 26, 1917, and was sent by Lt. Jess Miller to Charles F. Patterson, Pinedale's mayor and editor of the Pinedale Roundup, after Miller's arrival at Fort D. A. Russell in Cheyenne. The message reads, in part, "Dear Friend Charlie—We are getting along nicely all the men coming 'out of it' in fine shape." (Courtesy of Paul Allen.) 
Pinedale native Sidney Edwards was killed in a hospital in France when an airplane shelled the building on July 15, 1918. He was in the hospital recovering from appendicitis. His body was exhumed from its initial grave in France and was brought back to Pinedale for interment in the Pinedale Cemetery in August 1921. (Courtesy of Helen Stout.) The last passage in Clifford Phillip's diary, which he faithfully kept throughout the war, was written by his brother Mason. Dated July 28, 1918, it read: "Clifford was killed by sniper." Clifford, pictured here, was buried in France. Pinedale's Phillips-Edwards American Legion Post 47 was named in honor of the two local men who lost their lives during World War I. (Courtesy of American Legion Post 47.) One of the most popular and successful New Deal programs during the Great Depression was the Civilian Conservation Corps (CCC), which combined work relief with the preservation of natural resources. One of the first CCC camps in the country was Camp Fremont (known as F-13), which opened in 1933 on the south shore of Fremont Lake. (Courtesy of Paul Allen.) Civilian Conservation Corps recruits came to Pinedale from around the country and lived in army barracks that were quickly built. When the camp closed in 1942, these buildings would be relocated to Pinedale and the surrounding area after being sold for a nominal fee. (Courtesy of Sublette County Historical Society.) Capt. Byron H. Lytle was one of several commanding officers at Camp Fremont. Under his charge during his years at Pinedale were hundreds of CCC recruits. (Courtesy of Sublette County Historical Society.) The water tower, shown on the left, supplied the mess hall and washrooms at Camp Fremont. The U.S. Army built and operated the Civilian Conservation Corps camps around the nation. Camp Fremont was one of more than 1,500 camps. By the end, more than 2.5 million men and 8,000 women were put to work. (Courtesy of Sublette County Historical Society.) 
The inside of the army-style barrack at Camp Fremont is photographed above shortly before a visit from the commanding officer. (Courtesy of Sublette County Historical Society.) This photograph captured the inside of the barrack at Camp Fremont with no visit from the commanding officer scheduled in the near future. (Courtesy of Sublette County Historical Society.) When the young men arrived, such as this group, mostly from Ohio, it was not uncommon for them to be overwhelmed by the higher altitude. Once the adjustment was made, they were put to hard work, which was extensive and varied, including insect control, building projects, stream improvement, and fighting forest fires. (Courtesy of Sublette County Historical Society.) Part of their regular routine while at camp was staying fit with calisthenics. Many of the young recruits were from cities and had little or no experience with outdoor work. A Pinedale man hired to work as a foreman for the young recruits commented years later, "They were good kids, but dumb! They didn't know anything." Most had never held a shovel or ax before joining. (Courtesy of Sublette County Historical Society.) Assisting the agriculture industry was a CCC priority. Camp Fremont enrollees built stock bridges, such as this one, in the Wyoming National Forest. The CCC workers also constructed stock fences and holding pens in the forests. (Courtesy of Sublette County Historical Society.) One of the larger projects was road building under the direction of Pinedale local Jack Funk. Pictured here are CCC enrollees building the road to Half Moon Lake from Pinedale. They constructed roads from Kendall to Green River Lake, from LaBarge to Star Valley, and one alongside Fremont Lake. They also improved Skyline Drive and the road from Pinedale to Big Piney. (Courtesy of Sublette County Historical Society.) Recruits at Willow Creek are taking a lunch break. 
CCC enrollees working from Camp Fremont came from New York, California, Ohio, Virginia, Illinois, Kentucky, and Wyoming. Many enrollees at Camp Fremont developed a lasting love for the country, making the area a vacation destination throughout their lives because of their CCC experience. (Courtesy of Sublette County Historical Society.) Construction work completed by Camp Fremont recruits included repairing or building telephone lines, electrical lines, drift fences, and an office building on south Franklin Street in Pinedale in 1933. This would be used as the headquarters for the Fremont Ranger District of the Wyoming National Forest. The young men also built the ranger station at Kendall, pictured here, and at Willow Creek. (Courtesy of Sublette County Historical Society.) Several spike camps worked out of Camp Fremont, including Big Piney, Dutch Joe, Newfork, Green River, Cottonwood Creek, Snider Basin, LaBarge Creek, and Granite Creek. The spike camps, photographed here, were temporary and enabled workers to be closer to their job sites. (Courtesy of Sublette County Historical Society.) Camp Fremont's Dutch Joe spike camp, photographed here, was near the Utah/Wyoming border, indicating how far the Pinedale headquarters extended its work. Projects undertaken by this camp included construction of stock driveways, cattle bridges, and corrals for sheep moving between national forests and private land. (Courtesy of Sublette County Historical Society.) During the early years of the Civilian Conservation Corps, the camps in Wyoming closed for the harsh winters, and enrollees were sent to Oregon and California to work. By 1935, however, men were working year-round at Camp Fremont, earning bragging rights when they did not miss a day's work despite the cold. Photographed here is Camp Fremont operating during the winter. (Courtesy of Sublette County Historical Society.) Camp Fremont is seen here from the Fremont Lake outlet. 
The CCC recruits completed work at the lake, including building campgrounds, a dock, and a boathouse for the U.S. Forest Service. A dock at the inlet on the upper end of the lake was never finished due to the camp's sudden closure in 1942. Recruits also constructed fish-rearing ponds at Newfork and Fremont Lakes, and then planted the fish in area lakes. (Courtesy of Ralph Wenz.) With the bombing of Pearl Harbor on December 7, 1941, the United States suddenly entered World War II. Young men and women from across the country put on military uniforms and served their country around the world. Young Pinedale residents, such as Harold Faler, photographed at left, were among those marching off to war. Faler served in the U.S. Navy. (Courtesy of Sublette County Historical Society.) In this photograph below, Pinedale natives J. D. Wilson (left) and Clem "Bud" Skinner are enjoying some rest and relaxation while in the service. Wilson writes, "Clem and I at "Sweets" in Oakland. I wish he was here to go on more of those liberties." Pinedale's servicemen kept in touch with newsletters written, printed, and mailed by school superintendent Lawrence "Pops" Trenary and Lillian Allen. (Courtesy of Sublette County Historical Society.) Later World War II observances in Pinedale would be enhanced by personal stories from residents who were present at historic events. Navy officer Jeff Kaul, pictured here, and army private Bruz (Starling Oris) Bryant were at Pearl Harbor on the fateful morning of December 7, 1941. Hayden H. Huston was a chief petty officer who witnessed the surrender of the Japanese on August 14, 1945, marking the end of hostilities. (Courtesy of Wilma Kaul.) World War II was the first war that encouraged women to join the military. Young women from Pinedale answered the call, including Wilma Kaul, who served as a WAVE (Women Accepted for Volunteer Emergency Service), the women's branch of the navy. 
She would later serve as the Pinedale selective service clerk during the Vietnam War. (Courtesy of Wilma Kaul.) Pinedale resident S.Sgt. Ralph Wenz, pictured here, was killed in the line of duty when his bomber crashed in the wilds of Alaska on December 21, 1943. Pinedale's municipal airport was named in his memory on Memorial Day 1949. (Courtesy of Wenz family.) S.Sgt. Boyd Skinner was killed in action at Iwo Jima on March 10, 1945. The Pinedale native, pictured here, enlisted in the marines in July 1940, following his graduation from Pinedale High School with the class of 1940. The town park at the south end of Franklin Avenue is dedicated to him. (Courtesy of Sublette County Historical Society.) # Five # THE TOWN MODERNIZES Fremont Lake is famous for its large mackinaw fish. From left to right, fishermen Monte Wight, unidentified, and Tom Astle show off their mackinaw catch on February 17, 1963. The gentlemen are standing in front of a snow plane, commonly used for ice fishing. Snow planes, so called because of the large propeller, were earthbound, slowly cruising over the snow on large metal skis. (Courtesy of Paul Allen.) Harold Sanborn captured this picture of Pinedale's business district, looking east down Pine Street, sometime in the late 1940s. Originally, the town's business district was on Franklin Avenue, but by the 1920s, it had shifted to Pine Street. The main highway into Pinedale was changed from Franklin Avenue to Pine Street, leading many businesses to change their locations, too. (Courtesy of Carol Artes.) The winters have always been long in Pinedale, but the heaviest snows on record for the area came during the 1940s. Photographed here is a local looking east down Pine Street one winter afternoon in the late 1940s. Note the snowbanks along the street sides. It has always been a challenge keeping the streets clear of snow. (Courtesy of Paul Allen.) Showing their hometown patriot pride was common for Pinedale citizens. 
Photographed here is a 1950s parade down Pine Street with the lead rider carrying the American flag. Parades were common in town, especially on the Fourth of July. (Courtesy of Paul Allen.) Children often participated in parades, especially for the school's homecoming celebration and during Rendezvous. This group of young people is heading down Pine Street for Rendezvous during the 1960s. Note the many bikes replacing the horses common in earlier parades. (Courtesy of Paul Allen.) Bud Skinner proudly shows off his Pinedale letterman's sweater as he stands in front of the Pinedale Drug Store in the 1950s. Athletes earned the sweater for their team participation. The drugstore was a popular place for high school students to come for lunch, which was usually a milk shake. (Courtesy of David Takacs.) Pinedale's first and only stoplight is pictured here. It was installed at the intersection of Pine Street and Tyler Avenue in 1960. The location was chosen to assist children crossing Pine Street from the school on south Tyler Avenue. After only a short time, the stoplight was deemed unnecessary and was removed. It was placed in storage, where it remained for decades. (Courtesy of Sublette County Historical Society.) Cattle drives through the middle of town have long been common in Pinedale. This Hereford cattle herd is heading east along Pine Street around 1960. While amusing for tourists, the locals were sometimes inconvenienced by the traffic holdup. (Courtesy of Paul Allen.) The Hereford cattle drive continues through town, heading east, around 1960. Using roads such as Pine Street and the highways worked well for cowboys moving cattle. With no feed available on the roads, the cattle were more likely to keep moving. (Courtesy of Paul Allen.) Fremont Lake was a favorite scenic subject for photographer Harold Sanborn. In this picture, he captures the lake from the outlet on the south side in the 1930s. In the foreground, the rock piles are at the dam. 
Peeking over the lake ridge in the background are the Wind River Mountains. (Courtesy of Albert "Sunny" and Fanny Korfanta.) Harold Sanborn, a Denver photographer who documented Colorado and Wyoming, came to Pinedale every decade of his career from the 1920s until the 1960s. Many of his photographs were made into postcards. Sanborn captures post–World War II Pinedale in this photograph of Pine Street, looking west. The businesses on the north side of the street are, from left to right, the Fardy Hotel, Faler's Market, Mobilgas, Pinedale Cash Store, Elks Café, and Pinedale Drug Store. (Courtesy of Albert "Sunny" and Fanny Korfanta.) Ice-skating races for boys and girls were held at the public rink located east of the Sublette County building throughout the 1950s, often as part of the winter carnival. Racers dashed madly around the track, which was marked off with coffee cans set up on the ice. Silver dollars were given as prizes. (Courtesy of Paul Allen Collection.) Downhill skiing was a popular sport for many of Pinedale's young people. Pictured here are members of the Pinedale Ski Club, a group that was active throughout the 1960s. In addition to recreational skiing, the club was active in hosting ski races to which skiers were invited from other Wyoming towns as well as Utah and Idaho. (Courtesy of Sublette County Historical Society.) The Pinedale High School cheerleaders for 1964–1965 pose for their team picture. Their uniforms reflected the Western cowboy culture of the town. (Courtesy Ralph and Charlotte Faler.) The 1947 Pinedale High School basketball team poses for their team photograph. Their coach, Albert "Sunny" Korfanta, stands on the right in the second row. The team played 8 conference and 10 nonconference games before 4 district competitions. Their 3 state tournament competitions were against Tensleep, Moorcroft, and Glenrock. (Courtesy Ralph and Charlotte Faler.)
Harold Sanborn photographed this "Picturesque Signboard at Pinedale, Wyoming" on one of his trips through the area. The town's business people recognized the importance of tourists for their economy and worked diligently throughout the decades to promote the area. A tourist information center opened on June 1, 1958. The main arrow pointing to "Hunting—Fishing—Water Sports" is directing traffic to Fremont Lake. (Courtesy of Sublette County Historical Society.) Hunting mountain sheep is most challenging, especially with the animals' ability to climb quickly up rocky slopes, making them difficult to catch. Despite this challenge, these three hunters were successful in 1950. From left to right are Mike Noble, Morris Nesmith, and Albert "Sunny" Korfanta with their mountain sheep on the porch at the Noble Ranch. (Courtesy of Mike and Ruth Noble.) Fishing is likely the most favorite outdoor pastime for Pinedale residents and visitors. Pictured at right are three generations of Kauls. From left to right are Floyd Jr., Floyd Sr., and young Alan with their day's catch from a local stream in the late 1950s. (Courtesy of Wilma Kaul.) Local geese hunters show off their success in this early 1960s photograph below. The hunters are, from left to right, Wyoming Game and Fish game warden Duane Hyde, Bill Faler, Tom Astle, and Ralph Faler. (Courtesy of Ralph and Charlotte Faler.) Bette and Paul Hagenstein raise their hands in cheer at a Rendezvous reenactment. The Hagensteins were being recognized for their years of service to the Sublette County Historical Society and the Rendezvous celebration. The success of Rendezvous in Pinedale for decades was the result of people like the Hagensteins who donated hours of their time and talent. (Courtesy of Paul and Bette Hagenstein.) The Mad Hatters sang at numerous events around town throughout the 1960s, including weddings, funerals, Rendezvous, and banquets. 
Pictured here from left to right are members Donna Sievers, Miriam Kerback, and Bette Hagenstein at the homecoming parade in 1967. They became famous when the governor invited them to Cheyenne to sing Kerback's song, "Wyoming," as part of the state's 75th birthday celebration in 1965. (Courtesy of Paul and Bette Hagenstein.) Paul Allen stands next to his daughter Beverly after a fishing trip. It would appear that the young lady was more successful than her father! A lifetime Pinedale resident, Allen left a historical photographic legacy by recording current events and re-photographing old-timers' picture collections. (Courtesy of Paul Allen.) When the State Bank of Pinedale closed on December 31, 1934, Pinedale was left without a bank for decades. By the early 1960s, a strong area economy was sustaining the small community, so locals made an effort to establish a bank in Pinedale again. The new bank would be known as the First National Bank of Pinedale, pictured here as it appeared in the 1960s. (Courtesy of First National Bank of Pinedale.) Opening-day ceremonies for the First National Bank of Pinedale were held on April 4, 1963, at 210 West Pine Street. Attending the festivities are, from left to right, Beryl Fullerton, Cecil Shaw, Vernon T. Delgado, Joe Hicks, Ross Copenhaver, Harvey Taylor, George Mill, Robert W. Sievers, and Charlie Fisher. All of these gentlemen were on the bank's board of directors, except for Wyoming state treasurer Copenhaver and state superintendent of public instruction Shaw. (Courtesy of First National Bank of Pinedale.) Town locals take a break from their work for a photograph in the 1960s. The well-known men are, from left to right, Whitey Kape, Ted Weideranders, Bill Williams, and Charlie McAlister. The pine trees in the background line Pine Creek, which runs through town. (Courtesy of Ralph and Charlotte Faler.) This 1960s view of Pine Street is looking west from the junction of Pine Street and Fremont Lake Road. 
Town businesses had expanded to this end of town by this time. Harold Faler opened his new IGA supermarket in March 1961 on the north side of Pine Street, while the Sundance Motel, pictured on the left, was constructed on the southwest corner of Pine Street and Sublette Avenue. (Courtesy of Sublette County Historical Society.) Fr. Charles Bartek (left) stands with Harry Warinner in the July 1956 Rendezvous reenactment. Father Bartek, the local Catholic priest, played Fr. Pierre Jean De Smet for many years in the Rendezvous. Warinner portrayed a Shoshone Native American. (Courtesy of Sublette County Historical Society.) The Rendezvous reenactments were put on hold during World War II, but after the war, they came back to bigger crowds than before. To accommodate the spectators, reenactment planners moved the event to the rodeo grounds in Pinedale. Photographed here is the show at the rodeo grounds in 1960. (Courtesy of Sublette County Historical Society.) James Harrower was the first Rendezvous reenactment director and held the job for decades. He used a script written by Mary A. Scott. Harrower, photographed here, also played the part of fur trader Robert Campbell in the show. (Courtesy of Sublette County Historical Society.) This 1960s photograph of the Rendezvous reenactment indicates the size and number of participants. Many townspeople took part in the event and so did several ranch families from surrounding communities. Dozens of horses and numerous wagons were also pressed into service for the impressive celebration of the height of the mountain-man era. (Courtesy of Sublette County Historical Society.) It was common for the town to close Pine Street in the wintertime in order to run cutter races. The cutters were often steel barrels attached to iron runners pulled by two horses. Cutter-racing competition was fierce among locals and area ranchers throughout the 1950s and 1960s. 
Above, Albert "Sunny" Korfanta is keeping tabs on the competition as he whips his team. Below, a Mr. Farwell and a Mr. Wilson vie against each other for the lead. (Both, courtesy of Albert "Sunny" and Fanny Korfanta.) Ski-joring was a popular sport in Pinedale. Photographed here are two competitors racing down Pine Street during Pinedale's winter carnival in the 1950s. Skiers were pulled behind horses in the race to be the fastest. Albert "Sunny" Korfanta, the town's pharmacist, is the racer on the right behind the Palomino horse. (Courtesy of Albert "Sunny" and Fanny Korfanta.) Snow planes were common in the Upper Green River Valley from the late 1940s through the early 1960s. Mike Noble stands in front of his snow plane in this picture. During the winter of 1951–1952, Noble courted Ruth Phillips, a schoolteacher in an isolated ranch on Willow Creek, using his snow plane. Snow planes were replaced in the late 1960s with the more reliable and versatile snowmobiles. (Courtesy of Mike and Ruth Noble.) Harv Atwood was photographed here riding bareback on a bucking bronco in a Pinedale rodeo in the 1960s. Note the cigarette in his mouth. The ride does not appear to be hindering his ability to smoke! (Courtesy of Paul Allen.) Bob Penton was showing off his roping ability in this photograph at a Pinedale rodeo in 1956. From his horse, he is attempting to rope a calf at full run. Penton was usually successful at his rodeo work. (Courtesy of Paul Allen.) Identical signs with messages on the front and the back were placed at each end of town in 1963. A photograph caption in the April 11, 1963, Pinedale Roundup noted wryly, "visitors to Pinedale should now be well informed as to what town they are about to enter.... These very attractive signs will no doubt be very impressive to strangers and will cause some to remember Pinedale as the town with the big log signs.... Congratulations City Fathers!" 
The signs were constructed by the Town of Pinedale under the direction of Mayor Jim Harrower, and the State Highway Department helped install them. (Both, courtesy of Paul Allen.) # INDEX Allen, Frank Allen, Paul Anderson, Les Astle, Tom Atwood, Harv Bartek, Fr. Charles Bayer, Allie Bloom, John W. and Minnie Brandon, C. Watt and Mayme Bryant, Bruz Community Church Delgado, Vernon T. Doyle, Art Edwards, Sidney Faler, Arthur Faler, Bill Faler, Elmer Faler, Harold Faler, Lester Faler, Ralph Faler, Vint Fardy, Gus and Ida Feltner, C. C. Funk, Jack Funk, Madge General John C. Fremont Glover, Bunch Hagenstein, Paul Jr, and Bette Hagenstein, Paul Sr Harrower, James Hennick, Louis H. Hittle, Freida Noble Huston, Hayden H. Hyde, Duane Jones, Zeph Jorgensen, Carl Jorgensen, James Kaul, Floyd Kaul, Jeff Kaul, Wilma Kelly, Cy Kerback, Miriam Korfanta, Albert "Sunny," Laura E. Lozier, Irv Lytle, Byron H. McAlister, Val McCormick, Judson (Mac) McGrew, Frank E. McPherson, Walt Mollring, Fred and Jay Montrose, Dr. J. W. Noble, Carroll Richard Noble, James Noble, Mike Patterson, Charles F. Patterson, John F. Penton, Bob Petersen, Charles A. Phillips, Clifford Scott, Walter Shriver, Donald Shriver, Frank Shriver, Fred Sievers, Donna Skinner, Boyd Skinner, Bud St. Andrew's in the Pines Episcopal Church Steele, Frank Steele, Mary Ellen Sprague, E. N. Warinner, Harry Wenz, Ralph Westfall, James and Minerva J. Wight, Monte Wilson, Dave Wilson, Frances Wilson, John Find more books like this at www.imagesofamerica.com Search for your hometown history, your old stomping grounds, and even your favorite sports team.
Q: Definition of energy in relation to thermal energy

Energy is often defined as the capacity of a system to do work. How does this definition apply to thermal energy? Work is done by displacing an object under a force. But under what force is displacement occurring on the microscopic level?

A: Take a large container of air. Plug the opening with a stopper, with a weight sitting on top of the stopper. The work done could be to move the weight upwards due to an increase in pressure; then work equals force times distance. The pressure is due to the motion of the gas filling the container.

A: Thermal energy can and does do work. Thermal energy is the energy possessed by a system due to the motion of particles within that system. Thermal energy is a type of energy, where indeed "energy" can be defined as "the capacity or ability to do work". And yes, mechanical work is the movement of an object due to an applied force. Therefore, can thermal energy be described as the ability/capacity of a system to cause movement of an object due to an "applied force"? The answer is yes. Consider a system consisting of a cylinder with a piston inside it, with a certain gas enclosed between the cylinder and piston. As stated before, this gas has energy due to the motion of the gas molecules. If we heat this gas, we increase the thermal energy and consequently the piston moves. Clearly, the energy of the gas has done work on the piston. This is an example of a "force (manifested by an increase in pressure) causing a displacement", as you stated.
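The piston argument can be made quantitative. For an ideal gas heated at constant pressure (the piston's weight fixes the pressure), the work done on the piston is W = P·ΔV, and the ideal gas law gives P·ΔV = nRΔT. A minimal sketch, with made-up illustrative numbers rather than anything from the answers above:

```python
# Work done by an ideal gas expanding at constant pressure (isobaric process).
# Illustrative numbers only -- a sketch of the piston argument above.

R = 8.314  # universal gas constant, J/(mol*K)

def isobaric_work(n_mol, t_initial_k, t_final_k):
    """W = P * dV; for an ideal gas at constant pressure, P * dV = n * R * dT."""
    return n_mol * R * (t_final_k - t_initial_k)

# Heat 1 mol of gas in the cylinder from 300 K to 400 K:
work_joules = isobaric_work(1.0, 300.0, 400.0)
print(f"Work done on the piston: {work_joules:.1f} J")  # 831.4 J
```

The pressure itself never appears in the final expression because, at constant pressure, the volume change is exactly nRΔT/P.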
Carroll Hospital Presents Awards to Team Members for their Outstanding Care

Carroll Hospital recognized several team members with awards during December. The recipients were honored for their compassionate and exceptional patient care, which exemplifies the hospital's SPIRIT values. The honorees included: Registered nurse Melanie Morrison, team leader in The Family Birthplace, who was presented with Carroll Hospital's November DAISY Award. A nurse at the hospital for 25 years, Morrison was recognized for her patience, attentiveness and overall compassion toward a patient who was a first-time mom, as she was preparing to give birth in The Family Birthplace. Since the patient's mother could not be with her, Morrison stood in as a supportive figure for the patient during her delivery. Registered nurse Brittany Weatherly, oncology, at the William E. Kahlert Regional Cancer Center, was presented with the Dragonfly Award. A nurse at the hospital since 2018, Weatherly was recognized for her exceptional care when she went out of her way to provide a patient with a resource to help him understand his family member's illness and the best way to provide care. Registered nurse Kerry Byrne, intermediate care unit, has worked at the hospital for four years. She was also honored with the Dragonfly Award. She went over and above to help brighten the day of a patient, who had very few visitors, by giving her a personal "spa day" during her hospital stay. Community members and hospital employees are encouraged to nominate a hospital team member who has demonstrated a special dedication and devotion to his/her job in a variety of ways. For more information on how to nominate a Carroll Hospital team member, contact the Human Resources Department at 410-871-6833 or visit Lifebridgehealth.org/recognition.
The Tiratoio degli Angeli was an old building of the Arte della Lana (the wool guild) in Florence, located on today's Via degli Alfani. Having fallen into disuse, it was demolished in the 17th century, and the Palazzo Guidi Raggio, which still stands, was built on its site.

History and description

The production of wool cloth, once one of the most profitable trades in Florence, required, among its various stages, that the cloth be laid out in covered, airy terraces where, properly stretched ("tirati"), it could dry after the dyeing and washing operations. For these operations the Arte della Lana owned several large purpose-built structures known as "tiratoi" (stretching sheds). Five main ones have been counted in Florence: that of the Grazie, on whose site the Chamber of Commerce now stands; that of the Pergola; that of the Angeli on Via degli Alfani; and that of the Uccello in Piazza di Cestello, which was later replaced by the only such building still in existence, the Tiratoio di San Frediano in Piazza del Tiratoio.

The Tiratoio degli Angeli owed its name to the monastery of Santa Maria degli Angeli facing it, and stood in an area where the guild owned numerous buildings; the nearby house of the Arte della Lana at the Canto alla Catena is one example. The Giugni, owners of the adjoining palazzo, purchased the building and the surrounding land toward the end of the seventeenth century, presumably on the occasion of the marriage of Niccolò Giugni and Luisa Giraldi (1691), at a time when the family had invested heavily in enlarging and enriching its residence. The present building, however, was erected only well into the nineteenth century.

Bibliography

, p. 80; , n. 34.

Related entries

Palazzo Guidi Raggio

External links

Claudio Paolini, entry in the Repertorio delle architetture civili di Firenze of Palazzo Spinelli (texts released under the GFDL).

Vanished architecture of Florence
Satellite Docking System Market Research Report 2022 - Global Forecasts to 2032 with Analysis of Satellite Docking System Suppliers and Enabling Solution Providers

Wednesday, January 25, 2023 at 9:29am UTC

Dublin, Jan. 25, 2023 (GLOBE NEWSWIRE) -- The "Satellite Docking System Market - A Global and Regional Analysis: Focus on Service Type, End User, Spacecraft Type, and Country - Analysis and Forecast, 2022-2032" report has been added to ResearchAndMarkets.com's offering.

The global satellite docking system market is estimated to reach $1,011.34 million in 2032, up from $66.5 million in 2022, a growth rate of 31.3% during the forecast period 2022-2032. Growth in the global satellite docking system market is expected to be driven by the enforcement of regulations on space sustainability and an increase in in-orbit services.

Market Lifecycle Stage

Over the past few years, the number of satellites launched by commercial satellite operators has been increasing drastically. As per the publisher's space database, the global satellite launch forecast estimates 45,131 satellites to be launched within the 2022-2032 timeline. Of these 45,131 satellites, 95% are expected to operate in low Earth orbit (LEO). This indicates that over 95% of satellites will operate in a single orbital segment, leading to growing congestion, which in turn heightens the perceived risk of collision and the concern over space debris. In addition, commercial satellite operators are opting for life extension programs to keep their existing satellites alive in space for longer periods. This helps them reduce satellite operating costs and increase the revenue earned from an existing satellite. In-orbit servicing also helps remove active space debris, keeping space debris-free and sustainable and reducing the risk of collision.
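As a side check (this arithmetic is mine, not the publisher's): the stated 31.3% growth rate is simply the compound annual growth rate implied by the two endpoint figures over the ten-year forecast window.

```python
# Sanity-check the report's growth-rate claim: market value growing from
# $66.5M (2022) to $1,011.34M (2032) over 10 years.

def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1.0 / years) - 1.0

rate = cagr(66.5, 1011.34, 10)
print(f"Implied CAGR: {rate:.1%}")  # about 31.3%, matching the report
```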
Given these circumstances, the need for satellite docking systems is very high at this point and is expected to persist. The global satellite docking system market is seeing rising investment across all satellite platforms, which in turn drives investment in docking system technology. The major challenge in developing satellite docking systems is that there is no standardization of design or mechanism among satellite docking manufacturers. This leads each developer to integrate its own docking system into its in-orbit service vehicle, creating incompatibility with the docking systems integrated into target satellites. In addition, different small satellite operators are building different types of satellites. Although many satellites use standardized satellite platforms, they are customized products with unique configurations, so docking solution requirements will also vary from operator to operator. This will create a need for a variety of satellite docking solutions, both for target satellites and for service satellites. Service Type: Based on service type, the satellite docking system market is expected to be dominated by the refueling service segment during the forecast period. End User: Based on end user, the global satellite docking system market is expected to be dominated by commercial end users. Region: North America is expected to account for the highest share (78%) of the satellite docking system market by value in 2021, owing to a significant number of companies based in the region. How can this report add value to an organization? Growth/Marketing Strategy: The global satellite docking system market has seen major development activities by key players operating in the market, such as business expansions, contracts, mergers, partnerships, collaborations, and joint ventures.
The favored strategy among these companies has been contracts to strengthen their position in the satellite docking system market. For instance, in September 2022, ClearSpace signed a contract of about $2.32 million with the U.K. Space Agency to perform a feasibility study for a mission to remove derelict objects from low Earth orbit (LEO). Furthermore, in May 2022, Starfish Space collaborated with Benchmark Space Systems to develop advanced precision on-orbit refueling and docking capabilities. To optimize spacecraft control accuracy, Starfish is integrating and testing its CEPHALOPOD RPOD software with Benchmark's non-toxic hydrogen peroxide-fueled Halcyon thruster for manoeuvres. Competitive Strategy: Key players in the global satellite docking system market analyzed and profiled in the study include satellite docking system manufacturers that offer docking systems and enabling capabilities. Moreover, the study provides a detailed competitive benchmarking of the players operating in the global satellite docking system market, which offer various in-space services such as in-orbit refueling, active space debris removal, and the inspection, repair, and replacement of defective devices. Additionally, coverage of comprehensive competitive strategies such as contracts, partnerships, agreements, acquisitions, and collaborations will aid the reader in understanding the untapped revenue pockets in the market. Recent Developments in the Global Satellite Docking System Market In November 2022, Starfish Space announced that its Otter Pup satellite with high-performance low-thrust electric propulsion, which includes a satellite docking system, is planned to launch in the spring of 2023 to dock with another satellite in the fall of 2023. In October 2022, High Earth Orbit (HEO) Robotics collaborated with Satellogic to integrate Satellogic's growing satellite constellation and high-resolution satellite imagery with HEO's flyby inspection and computer-vision capabilities. In September 2022, Astroscale Holdings, Inc.
received funding of $1.79 million from the U.K. Space Agency to develop technologies and capabilities for Cleaning Outer Space Mission through Innovative Capture, which consists of Astroscale's robotic debris capture capabilities and rendezvous and proximity operations to remove space debris and defunct satellites. In May 2022, Momentus Inc. signed a partnership with SpaceX for the integration of its Vigoride Orbital Transfer Vehicle and customer payloads on Falcon 9, which would be used for the Transporter-5 mission. In April 2022, Lockheed Martin Corporation released an open-source Augmentation System Port Interface (ASPIN), a non-proprietary interface standard to support on-orbit servicing and mission augmentation. It uses the Mission Augmentation Port (MAP) interface standard, published online, which provides a mechanical interface design for docking spacecraft to one another. In March 2022, Rogue Space Systems Corporation announced that Seldor Capital is their first institutional investor, which helps the company to scale up its engineering and operations teams.
Report Attribute | Details
Forecast Period | 2022-2032
Estimated Market Value (USD) in 2022 | $66.5 Million
Forecasted Market Value (USD) by 2032 | $1,011.34 Million
Compound Annual Growth Rate | 31.3%
Regions Covered | Global
Key Market Players and Competition Synopsis
The companies that are profiled have been selected based on inputs gathered from primary experts and analysis of the company's coverage, product portfolio, and market penetration. Some prominent established names in this market are:
Satellite Docking System Suppliers:
Altius Space Machines, Inc.
Astroscale Holdings, Inc.
ClearSpace
Orbit Fab, Inc.
Rogue Space Systems Corporation
Starfish Space
Enabling Solution Providers:
D-Orbit SpA
Momentus Inc.
Obruta Space Solutions Corp.
Orbit Recycling
Tethers Unlimited, Inc.
In-Orbit Services Sector: Overview
Refueling
Inspection, Repair, Replacement, and De-orbiting
Growing Space Situational Awareness Services Market
Startups and Investment Landscape
Business Dynamics
Growing Demand for Sustainable Space Operations
Growing Demand for Optimizing Satellite Operation Cost
Lack of Industry-Wide Standardization of Docking Solutions
Reduction in Manufacturing and Launch Costs Impacting the Financial Viability of In-Orbit Services
Investments, Business Expansions, and Mergers and Acquisitions
Partnerships, Collaborations, Agreements, and Contracts
Opportunity for Software Solutions for Rendezvous/Proximity Operations
Evolution of Standardized Satellite Platforms
Enabling Capabilities for Rendezvous/Proximity Operations
For more information about this report visit https://www.researchandmarkets.com/r/lbvmqq-docking?w=12
About ResearchAndMarkets.com ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.
Global Satellite Docking System Market
CONTACT: ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,472
/* jshint browser:true */

var util = require('util');
var oauthUtil = require('./oauth_util');
var Bluebird = require('bluebird');
var BaseBrowserFlow = require('./base_browser_flow');
var OauthError = require('./oauth_error');

/**
 * An Oauth flow that runs in the browser and requests user authorization by
 * popping up a window and prompting the user.
 * @param {Object} options See `BaseBrowserFlow` for options.
 * @constructor
 */
function PopupFlow(options) {
  BaseBrowserFlow.call(this, options);
  this._authorizationPromise = null;
}

util.inherits(PopupFlow, BaseBrowserFlow);

PopupFlow.prototype.startAuthorization = function(authUrl, state) {
  var me = this;
  var popup, popupTimer, listener;

  function cleanup() {
    if (popup && popupTimer) {
      clearInterval(popupTimer);
      popupTimer = null;
    }
    if (listener) {
      window.removeEventListener('message', listener, false);
      listener = null;
    }
  }

  me._authorizationPromise = new Bluebird(function(resolve, reject) {
    listener = function(event) {
      var receivedUrl;
      try {
        receivedUrl = event.data.receivedUrl;
      } catch (e) {}
      if (receivedUrl) {
        // Every request should have a unique `state` parameter.
        // We can key off of that to determine whether this request was
        // intended for this window.
        var params = oauthUtil.parseOauthResultFromUrl(receivedUrl);
        if (params.state === state) {
          state = null;  // don't ever respond to again
          cleanup();
          if (params.error) {
            reject(new OauthError(params));
          } else {
            resolve(params);
          }
        }
      }
    };
    window.addEventListener('message', listener, false);

    popup = window.open(authUrl, 'asana_oauth', me._popupParams(800, 600));

    // Detect popup blocking and fail.
    if (!popup) {
      cleanup();
      reject(new OauthError({
        'error': 'access_denied',
        'error_description': 'The popup window containing the ' +
            'authorization UI was blocked by the browser.'
      }));
      return;
    }

    // Detect popup closure (which may not be handled by the content, because
    // it may never load) and fail. If the popup posts a message to us, we
    // SHOULD get that message before it closes and this interval fires,
    // but just in case we wait for two successive intervals.
    var seenClosed = false;
    popupTimer = setInterval(function() {
      if (popup.closed) {
        if (seenClosed) {
          cleanup();
          reject(new OauthError({
            'error': 'access_denied',
            'error_description': 'The popup window containing the ' +
                'authorization UI was closed by the user.'
          }));
        } else {
          seenClosed = true;
        }
      }
    }, 500);
  });

  return Bluebird.resolve();
};

PopupFlow.prototype.finishAuthorization = function() {
  return this._authorizationPromise;
};

PopupFlow.prototype._popupParams = function(popupWidth, popupHeight) {
  var left = window.screenX || window.screenLeft || 0;
  var top = window.screenY || window.screenTop || 0;
  var width = window.outerWidth || document.documentElement.clientWidth;
  var height = window.outerHeight || document.documentElement.clientHeight;
  var popupLeft = Math.max(left, Math.round(left + (width - popupWidth) / 2));
  var popupTop = Math.max(top, Math.round(top + (height - popupHeight) / 2.5));
  return util.format(
      'left=%d,top=%d,' +
      'width=%d,height=%d,' +
      'dialog=yes,dependent=yes,scrollbars=yes,location=yes',
      popupLeft, popupTop, popupWidth, popupHeight);
};

PopupFlow.runReceiver = function() {
  window.addEventListener('load', function() {
    var currentUrl = window.location.href;
    oauthUtil.removeOauthResultFromCurrentUrl();
    var opener = window.opener;
    if (window.parent !== window.top) {
      opener = opener || window.parent;
    }
    // Check the resolved `opener` (which may be the parent frame), not just
    // `window.opener`, so the framed case also posts its message.
    if (opener) {
      console.log('Posting message', currentUrl, window.location.origin);
      opener.postMessage({
        receivedUrl: currentUrl
      }, window.location.origin);
      window.close();
    } else {
      console.log('No opener found for this window, not sending message');
    }
  }, false);
};

module.exports = PopupFlow;
{ "redpajama_set_name": "RedPajamaGithub" }
5,357
It will be interesting to see if the FedEx driver delivering to our home on Tuesday will voluntarily make note of this "error in judgment" … or if the little off-roading incident will go unmentioned? I'll need to check on things when the snow melts, but perhaps someone reading this knows how it is best handled? I think I'll archive a couple of security photos to my blog just in case there is an issue with the lawn repair … of course they may say I should have put in driveway markers?
{ "redpajama_set_name": "RedPajamaC4" }
2,168
The Lochaber man behind celtic tunes in ITV drama Sanditon by Louise Glen August 28, 2019, 6:54 am Ewen Henderson on the set of ITV drama production Sanditon. In a drama based on Jane Austen's unfinished work and set in a Regency seaside town in England, the Highland connection is not immediately obvious. But viewers of ITV's new period drama Sanditon are being treated to some Celtic tunes, thanks to Lochaber musician Ewen Henderson. The front man of traditional group Manran was involved in recording music for the period drama, and even made a short appearance on screen. The idea of having Highland music on the production is not as unusual as it may first seem, as Mr Henderson explains: "It might just have been a happy coincidence that Highland music was used in Sanditon, but traditional tunes were popular in the Regency period in the south. The music really seems to work as a backdrop for Sanditon. "I wasn't aware of this Regency period fashion for traditional Highland music until I was researching for the recordings, and I found that Nathaniel Gow, the son of the very famous Highland musician Neil Gow, was one of many band members from the north who were popular in the Regency ballrooms in London and the south of England at the time. "So it seems appropriate that traditional music and Gaelic culture are at the centre of the new drama." Inspired by Jane Austen's unfinished final novel, Sanditon is the story of young, penniless and beautiful Charlotte Heywood as she navigates the social life of a developing Regency seaside town at the forefront of the economic changes of the age. Mr Henderson continued: "I was delighted to be asked to record the music for the production by its composer Ruth Barrett. I am very thankful for the opportunity to work on something so exciting. "At first I was only involved in the music for a few of the ballroom scenes and in a tavern. "But it must have fitted well with what the producers wanted, as I was then used for the whole of the series."
Mr Henderson, who was also called upon to be an extra on set in the production, continued: "It was very exciting to get dressed up in Regency clothes and be part of the production. "It's completely different from the day job of music and performing – there is a lot of waiting around for things to happen, and to be called onto the set. But on the second day of filming there were five traditional musicians there – so it was good craic."
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,484
Q: Continuous Delivery - Acceptance test stored alongside artifacts? I'm looking into migrating towards a CD pipeline with a compile stage and acceptance stages. Currently I have acceptance tests in my repository, living alongside my service code. After the compile stage is successful and some form of artifact has been pushed to my repository, I'm trying to figure out the best strategy for the next stage. I really want to keep my acceptance tests in the same repository as my service code, because I want to maintain a quick feedback loop when writing a new test and implementing the solution. Is it bad practice to store the acceptance test code (C# dll(s)) alongside the build artifacts, then retrieve and execute them against the service code I have just deployed? A: @adm, it is not entirely a bad or a good practice. Where acceptance tests are stored is usually decided by the size of your repository. If you have a sizeable repository, it might be a good idea to break it down. Otherwise, both test resources and build outputs can be stored in the same repository, and that is definitely not bad practice. One disadvantage, however, is that if you have lockdown/freeze mechanisms in your codebase, you might not be able to update the test cases during the lockdown/freeze period.
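Not from the original thread, but a minimal Python sketch of how such an acceptance stage could work: the test bundle is published next to the service artifact under the same version, so the stage always runs the tests built from the same commit as the service it just deployed. All paths, naming conventions, and the runner command here are assumptions for illustration only.

```python
from pathlib import PurePosixPath


def acceptance_bundle_path(repo_root: str, service: str, version: str) -> str:
    # The test DLLs are published alongside the service build artifact under
    # the same version number (hypothetical repository layout).
    return str(PurePosixPath(repo_root) / service / version
               / f"{service}.AcceptanceTests.{version}.zip")


def acceptance_stage_command(repo_root: str, service: str, version: str,
                             environment_url: str) -> list:
    # Hypothetical runner invocation: fetch the bundle, then point the tests
    # at the environment that was just deployed.
    bundle = acceptance_bundle_path(repo_root, service, version)
    return ["run-acceptance-tests", bundle, "--target", environment_url]


print(acceptance_stage_command("/artifacts", "orders", "1.4.2",
                               "https://staging.example.com"))
```

Because both artifacts share a version, a frozen service build keeps its matching frozen tests — which is exactly the lockdown trade-off the answer mentions.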
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,822
Coventry Transport Museum Coventry Festival of Motoring 'will be bigger and better' in new home MOVING to Stoneleigh Park will give Coventry's Festival of Motoring space to get bigger and better in the future, according to organisers. Sam Dimmer, Digital Development Editor The popular festival – which regularly drew huge crowds to the War Memorial Park – was cancelled this year because it had outgrown the venue. Festival organisers at Coventry Transport Museum had started looking for a new venue around the time the cancellation was announced and weeks later a deal was struck. Now, days after the Telegraph revealed that Stoneleigh Park would be the festival's new home, Coventry Transport Museum chief executive Gary Hall has heaped praise on the venue ahead of the switch. "The key thing is the large indoor space that gives us the ability to put on the festival whatever the weather," Mr Hall said. "The festival is very susceptible to the weather. In 2008 we had to cancel the event because of it. "Some 600 cars take part in the drive around Coventry and Warwickshire whatever the weather but we want as many people as possible to come and enjoy it, even if it is wet. "It will be a bigger and better show. We will be able to bring in a number of different, interesting elements." It's another coup for Stoneleigh Park's interim chief executive Ian Pegler, who announced plans to revive the Royal Show last month. He said: "It's great news. We are just three miles away from Coventry – literally just down the road. "Stoneleigh Park is the perfect setting for this iconic, well-loved festival that proudly celebrates Coventry and Warwickshire's rich engineering history." The new event will include a showcase of transport-related technological and manufacturing innovations in the region, organised in conjunction with the Imagineering Foundation.
Bob Shanks, chairman of the foundation, said: "We have engaged companies, as well as many of our local universities, all of whom are looking forward to having the chance to display their work."
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
388
The September Fashion Issue of Z!nk Magazine is my return to print media, since the days when I published a zine. For those of you who missed this phenom, it's kind of a xeroxed equivalent to what people now refer to as blogging. Except the subject matter was more indie-authentic, and in my opinion less narcissistic than today's teen self-publishers. In the news section I wrote about the IFB Conference. When I attended the conference, it was squeaky-clean. The panel discussions were the main attraction, but they seemed largely didactic. I received free pressed powder from Bare Minerals, which I wear sometimes as blush. It must've been dark in there because the makeup artist selected a color that's 2 shades darker than my actual skin tone. Also, I was interested in The Bloggers Tool Kit, but I wasn't allowed to see one. I was so proud to have my name on the masthead: a first! Here's the article I wrote about Gucci Sustainable Soles, eco-friendly shoes. I want a pair, please!
{ "redpajama_set_name": "RedPajamaC4" }
4,863
package edu.ucdenver.ccp.datasource.fileparsers.geneontology;

/*
 * #%L
 * Colorado Computational Pharmacology's common module
 * %%
 * Copyright (C) 2012 - 2015 Regents of the University of Colorado
 * %%
 * Redistribution and use in source and binary forms, with or without modification,
 * are permitted provided that the following conditions are met:
 * 
 * 1. Redistributions of source code must retain the above copyright notice, this
 *    list of conditions and the following disclaimer.
 * 
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 * 
 * 3. Neither the name of the Regents of the University of Colorado nor the names of its
 *    contributors may be used to endorse or promote products derived from this software without
 *    specific prior written permission.
 * 
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
 * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
 * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
 * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 * OF THE POSSIBILITY OF SUCH DAMAGE.
 * #L%
 */

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.log4j.Logger;

import edu.ucdenver.ccp.common.file.CharacterEncoding;
import edu.ucdenver.ccp.common.file.reader.Line;
import edu.ucdenver.ccp.common.string.StringConstants;
import edu.ucdenver.ccp.datasource.fileparsers.SingleLineFileRecordReader;
import edu.ucdenver.ccp.datasource.identifiers.DataSourceIdentifier;
import edu.ucdenver.ccp.datasource.identifiers.impl.bio.GeneOntologyID;

/**
 * This class is used to parse Gene Ontology gene-association.xxxx files
 * 
 * @author Bill Baumgartner
 */
public class GeneAssociationFileParser extends SingleLineFileRecordReader<GeneAssociationFileData> {

	private final static String COMMENT_INDICATOR = StringConstants.EXCLAMATION_MARK;

	private static final String GENE_ASSOCIATION_FILE_PREFIX = "gene_association.";

	private final String speciesKey;

	@SuppressWarnings("unused")
	private static Logger logger = Logger.getLogger(GeneAssociationFileParser.class);

	public GeneAssociationFileParser(File file, CharacterEncoding encoding) throws IOException {
		super(file, encoding, COMMENT_INDICATOR);
		this.speciesKey = extractSpeciesKey(file.getName());
	}

	private String extractSpeciesKey(String fileName) {
		if (!fileName.startsWith(GENE_ASSOCIATION_FILE_PREFIX))
			// include the offending file name in the message (the original format string
			// ended at "observed: " without ever inserting it)
			throw new RuntimeException(String.format("Expected file name to start with %s but instead observed: %s",
					GENE_ASSOCIATION_FILE_PREFIX, fileName));
		return fileName.substring(fileName.lastIndexOf("."));
	}

	@Override
	public String getDataSpecificKey() {
		return speciesKey;
	}

	@Override
	protected GeneAssociationFileData parseRecordFromLine(Line line) {
		return GeneAssociationFileData.parseLineFromFile(line);
	}

	/**
	 * Returns a mapping from GO term to the gene IDs that have been annotated to it
	 * 
	 * @param geneAssociationFile
	 * @return
	 */
	public static Map<GeneOntologyID, Set<DataSourceIdentifier<?>>> getGoTermID2GeneIDsMap(File geneAssociationFile,
			CharacterEncoding encoding, int maxGoTerm2GeneIDLinkThreshold) {
		Map<GeneOntologyID, Set<DataSourceIdentifier<?>>> goTermID2GeneIDsMap = new HashMap<GeneOntologyID, Set<DataSourceIdentifier<?>>>();
		GeneAssociationFileParser parser = null;
		try {
			parser = new GeneAssociationFileParser(geneAssociationFile, encoding);
		} catch (IOException ioe) {
			ioe.printStackTrace();
			throw new RuntimeException(ioe);
		}
		while (parser.hasNext()) {
			GeneAssociationFileData dataRecord = parser.next();
			GeneOntologyID goTermID = dataRecord.getGoTermID();
			DataSourceIdentifier<?> geneID = dataRecord.getGeneID();
			if (goTermID2GeneIDsMap.containsKey(goTermID)) {
				goTermID2GeneIDsMap.get(goTermID).add(geneID);
			} else {
				Set<DataSourceIdentifier<?>> geneIDs = new HashSet<DataSourceIdentifier<?>>();
				geneIDs.add(geneID);
				goTermID2GeneIDsMap.put(goTermID, geneIDs);
			}
		}
		/*
		 * filter out any terms that have greater than maxGoTerm2GeneIDLinkThreshold genes linked to
		 * them
		 */
		Set<GeneOntologyID> goTermIDs = goTermID2GeneIDsMap.keySet();
		Set<GeneOntologyID> goTermIDsToRemove = new HashSet<GeneOntologyID>();
		for (GeneOntologyID goTermID : goTermIDs) {
			Set<DataSourceIdentifier<?>> linkedGeneIDs = goTermID2GeneIDsMap.get(goTermID);
			if (linkedGeneIDs.size() > maxGoTerm2GeneIDLinkThreshold) {
				goTermIDsToRemove.add(goTermID);
			}
		}
		for (GeneOntologyID goTermID : goTermIDsToRemove) {
			goTermID2GeneIDsMap.remove(goTermID);
		}
		return goTermID2GeneIDsMap;
	}
}
{ "redpajama_set_name": "RedPajamaGithub" }
1,618
KPM Berlin links from its pages to other sites on the Internet. The following applies to all of these links: KPM Berlin expressly declares that it has no influence on the design and content of the linked pages. For this reason, we hereby expressly dissociate ourselves from all contents of all third-party sites linked from www.kpm-berlin.com and do not adopt these contents as our own. This declaration applies to all displayed links and to all contents of the pages to which the links lead.
{ "redpajama_set_name": "RedPajamaC4" }
5,157
New Jersey Dunkin' Donuts settles personal injury lawsuit for $522,000 Published Tue, Sep 15 2015 11:59 AM EDT Updated Wed, Sep 16 2015 11:49 AM EDT Anita Balakrishnan @MsABalakrishnan NJ woman wins $522k settlement with Dunkin' A New Jersey woman won a $522,000 settlement in her case against a local Dunkin' Donuts, after sustaining injuries from tripping over a spike in the parking lot, her attorney told CNBC Tuesday. Maria Marsala was carrying a tray of hot coffee when she encountered a dislodged curb stop outside a Dunkin' Donuts in Highland Park, New Jersey, in January of 2012, court documents said. She tripped over the misplaced metal rebar spike, burning herself with coffee and suffering back and shoulder injuries that required surgery, according to her attorney, Ed Rebenack. The case, which also named the property owner of the strip mall that contains several other stores, was slated to go to trial in late September. Read More McDonald's conditions are hazardous, workers claim New York's NBC 4 first reported the story. Dunkin' Donuts did not immediately respond to CNBC's request for comment. The suit against the fast-food breakfast and treats chain is far from the first to highlight the perils of a steaming morning pick-me-up. Both McDonald's and Starbucks, for example, have faced lawsuits in the past over burns from spilled coffee, with mixed results. "Basic standards for parking lot maintenance are for the protection and safety of the general public," Rebenack told CNBC. "Allowing a metal spike to protrude out of the asphalt clearly violates these standards. Although it is never a replacement for health, Ms. Marsala is hopeful that the settlement will serve to remind business owners that their customers' safety should always be a priority." Read More Fast Food Is Fattening? Bizarre Class Action Suits
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
135
Q: node.js Internals: How can I find out where `process.binding('eval')` gets defined?

1. How can I find out where in the C++ source code of node.js the JavaScript object gets defined which I can access through process.binding('eval')? I already found out that it's in /src/node_script.cc in this special case, but how can I know where to find a given module just from a look at the /src/ directory overview? I don't want to step through all the files in /src/ in order to look for a module.
2. Where can I find some in-depth information about the internals of process.binding()?

Thanks.

A: I was looking for the same myself today. I cannot guarantee that there isn't more to it, but this is what I discovered. src/node_extensions.h contains a list of built-in modules, defined like: ITEM(node_module_name) where module_name is the name of the module (obviously). You can find out which file defines that module by searching for the file that has a line starting with NODE_MODULE(node_module_name, — so, to find the file that defines the 'evals' module for process.binding:

$ grep "NODE_MODULE(node_evals" src/*.cc
src/node_script.cc:NODE_MODULE(node_evals, node::InitEvals)
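The grep lookup above can be automated. Here is a small illustrative Python sketch (not from the original answer) that scans a Node source directory for NODE_MODULE(...) registrations and maps each module name to its defining file:

```python
import re
from pathlib import Path

# Matches registrations such as: NODE_MODULE(node_evals, node::InitEvals)
NODE_MODULE_RE = re.compile(r'NODE_MODULE\((\w+)\s*,')


def find_module_definitions(src_dir):
    """Map each built-in module name to the .cc file that defines it."""
    definitions = {}
    for path in Path(src_dir).glob('*.cc'):
        for match in NODE_MODULE_RE.finditer(path.read_text(errors='ignore')):
            definitions[match.group(1)] = path.name
    return definitions

# e.g. find_module_definitions('node/src')
# would include an entry like {'node_evals': 'node_script.cc'}
```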
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,076
In this doc, we'll set up a PostgreSQL server, then we'll connect it to Databox and confirm that the connection is working. Finally, we will create a databoard visualizing the data. All of this without a single line of code – except for the PostgreSQL query, of course. Go to the Available data sources option and find the PostgreSQL tile. Hover over it with your mouse and click the 'Connect' button that slides up into view. Enter your connection data in the popup and click the 'Activate' button. The default port 5432 is fine in most cases. Tada! After you have saved your custom query, you should see the data in the table. If not, check if the right data source and metric are selected. In our example it's the 'My PostgreSQL' data source and the '└ posts|name' metric, because we're pushing posts by names. The date range should be set to 'Today,' to see the latest data.
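The doc never shows the query itself. Purely as an illustration of the "posts by name" metric above — the posts table and name column are assumptions, not part of the original — such a custom query might look like:

```sql
-- Hypothetical example: count posts per author name.
-- Table and column names are assumptions for illustration only.
SELECT name,
       COUNT(*) AS posts
FROM posts
GROUP BY name
ORDER BY posts DESC;
```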
{ "redpajama_set_name": "RedPajamaC4" }
5,459
Potasznia is a part of the village of Krasiejów in Poland, located in Opole Voivodeship, in Opole County, in Gmina Ozimek. In the years 1975–1998 Potasznia was administratively part of Opole Voivodeship. References Krasiejów
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,092
import csv
import json
import re

CRS_RE = re.compile(r'[A-Z]{3}')


class Station(dict):
    def __init__(self, code, name) -> None:
        self['stationCode'] = code
        self['stationName'] = name

    def __hash__(self) -> int:
        return hash((self['stationCode'], self['stationName']))

    def __eq__(self, __o: object) -> bool:
        return (type(__o) == Station
                and self['stationCode'] == __o['stationCode']
                and self['stationName'] == __o['stationName'])

    def __repr__(self) -> str:
        return '{' + self['stationName'] + ', ' + self['stationCode'] + '}'


stations = set()
for letter in ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
               'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']:
    data = json.load(open(f'tmp/{letter}.json'))
    for station in data:
        if CRS_RE.match(station[0]) and station[10] != '':
            stations.add(Station(station[0], station[1]))

with open('stations.csv', 'w', newline='') as out_file:
    writer = csv.DictWriter(out_file, fieldnames=['stationName', 'stationCode'])
    writer.writeheader()
    sorted_stations = sorted(list(stations), key=lambda s: s['stationName'])
    writer.writerows(sorted_stations)

print('done')
{ "redpajama_set_name": "RedPajamaGithub" }
1,616
Q: How to use fetch() in react front-end to get data from express back-end? I'm trying to make a full-stack web-app using react and express. It's going pretty well atm but here's my problem: So I have express running in back-end. All paths are used by react router except for '/api'. At the '/api/blogposts' path my server.js send the results of a query I made to the mySQL server. (I've checked it and this part works. If I browse to /api/blogposts my browser shows a json with the contents of my blogposts table). My problem is with getting it to show in my react front-end. I'm trying to use fetch() but it doesn't work. Here's my code for the component that is supposed to fetch the blogposts: import React from 'react'; import './Blogposts.css'; import SingleBpost from '../SingleBpost/SingleBpost.js'; class Blogposts extends React.Component { constructor(props) { super(props); this.state = { receivedPosts: [] }; } async getBpostsFromServer() { const response = await fetch("/api/blogposts"); let myPosts = await response.json(); this.setState({receivedPosts: myPosts}); } componentDidMount() { this.getBpostsFromServer(); } render() { console.log(this.state.receivedPosts); return( <div id="Blogposts"> <SingleBpost title="OwO" date="18/12/2021" author="Kepos Team" body="Hello, this is a test for the blogposts!" /> </div> ); } } export default Blogposts; Just to clarify the {this.state.generateBlogpost()} in the render method is just to check if I can get the data for now. Once this works I will feed it into another component's props like this: <SingleBpost title={this.state.generateBlogpost().title} date={this.state.generateBlogpost().date} author={this.state.generateBlogpost().author} body={this.state.generateBlogpost().body} /> Anyways: does anyone know why this doesn't work? I've tried a few things but I just can't get it to work. What am I doing wrong? Thanks in advance for any help! 
A: You need to set the state of the receivedPosts variable inside the fetch function, like this: this.setState({receivedPosts: results}); Also, you can call the generateBlogpost() function when the Blogposts component loads by adding the following method: componentDidMount() { this.generateBlogpost(); }

A: This part is useless: .then((results) => { this.state.receivedPosts = results; }); return this.state.receivedPosts; — never assign to this.state directly, because React won't re-render; instead you should use setState({receivedPosts: data.data})
# Math Help - General Question about Differential Equations

1. ## General Question about Differential Equations

My question is about differential equations as a university course.

Most colleges seem to have students take differential equations after a third class in calculus (multi-variable calculus) and then proceed to take analysis after that (assuming students keep taking math courses). Is it possible and/or unusual to take analysis before differential equations? I only ask because my college seems to allow either to be taken first. Is one better to do before the other?

Thanks

2. Originally Posted by billa
My question is about differential equations as a university course. Most colleges seem to have students take differential equations after a third class in calculus (multi-variable calculus) and then proceed to take analysis after that (assuming students keep taking math courses). Is it possible and/or unusual to take analysis before differential equations? I only ask because my college seems to allow either to be taken first. Is one better to do before the other? Thanks

Analysis doesn't really use any concepts from Differential Equations so, no, there is no reason why you can't take Analysis before Differential Equations. (I would, however, strongly recommend taking Linear Algebra, as well as Multi-variable Calculus, before taking Differential Equations.)

3. You can certainly take analysis before differential equations (I took them at the same time). However, depending on your background, it may be worth taking an introductory course in mathematical logic (some discrete math classes cover such topics). Real analysis is the rigorous development of the foundations of calculus through proof, and previous experience in formal logic is definitely helpful.
\section{Introduction} The statistical behaviour of Selmer groups of Jacobians of families of algebraic curves is a topic that has seen many advances in recent years. In \cite{BS-2selmerellcurves}, Bhargava and Shankar determined the average size of the $2$-Selmer group of the family of elliptic curves in short Weierstrass form when ordered by height, showing that it is equal to $3$. Bhargava and Gross \cite{Bhargava-Gross-hyperellcurves} generalized their results to the family of hyperelliptic curves of genus $g$ with a marked rational Weierstrass point. Poonen and Stoll \cite{PoonenStoll-Mosthyperellipticnorational} used the latter to prove that for each $g\geq 3$, a positive proportion of such hyperelliptic curves have exactly one rational point, and this proportion tends to $1$ as $g$ tends to infinity. See \cite{Shankar-2selmerhypermarkedpoints}, \cite{ShankarWang-hypermarkednonweierstrass} for similar results for families of hyperelliptic curves with other types of marked points and \cite{BS-3Selmer}, \cite{BS-4Selmer},\cite{BS-5Selmer}, \cite{Thorne-Romano-E8} for analogous results for $n$-Selmer groups of (hyper)elliptic curves with $n\geq 3$. \subsection{Statement of results} This paper is a contribution to the arithmetic statistics of non-hyperelliptic genus-$3$ curves. Such curves are canonically embedded in $\P^2$ as smooth plane quartics. Let $X$ be a (smooth, projective, geometrically connected) genus-3 curve over $\mathbb{Q}$ that is not hyperelliptic and $P\in X(\mathbb{Q})$ a marked rational point. We say $P$ is a \define{hyperflex} if $4P$ is a canonical divisor or equivalently, the tangent line at $P$ in the canonical embedding meets $X$ only at $P$. 
Any pair $(X,P)$ with $P$ a hyperflex is isomorphic to a pair $(C_b,P_{\infty})$ where $C_b$ is the projective completion of the plane curve \begin{equation}\label{equation: e6 family beginning paper} y^3 = x^4+(p_2x^2+p_5x+p_8)y+p_6x^2+p_9x+p_{12}, \end{equation} where $b = (p_2,\dots,p_{12}) \in \mathbb{Q}^6$, and where $P_{\infty}$ is the unique point at infinity. Pairs $(C_b,P_{\infty})$ given by Equation (\ref{equation: e6 family beginning paper}) are isomorphic if and only if the coefficients are related by a substitution $(p_i) \mapsto (\lambda^ip_i)$ for some $\lambda\in \mathbb{Q}^{\times}$, which explains the subscripts of the coefficients. Call such an equation \define{minimal} if $p_i \in \mathbb{Z}$ and the following two conditions are satisfied: \begin{itemize} \item There exists no prime $q$ such that $q^i$ divides $p_i$ for all $i \in \{2,5,6,8,9,12\}$. \item Either we have $p_5 >0$, or we have $p_5=0$ and $p_9 \geq 0$. \end{itemize} Then any pair $(X,P)$ arises from a unique minimal equation. Write $\sh{E} \subset \mathbb{Z}^6$ for the subset of integers $(p_2,p_5,p_6,p_8,p_9,p_{12})$ such that Equation (\ref{equation: e6 family beginning paper}) defines a smooth curve $C_b$, and write $\sh{E}_{\min} \subset \sh{E}$ for the subset for which the equation is minimal. For $b \in \sh{E}$, write $J_b$ for the Jacobian variety of $C_b$, a principally polarized abelian threefold over $\mathbb{Q}$. For $b\in \sh{E}$ we define the \define{height} of $b$ by the formula $$\mathrm{ht}(b) \coloneqq \max_i |p_i(b)|^{72/i}.$$ Note that for any $a >0$, the set $\{b \in \sh{E} \mid \mathrm{ht}(b) <a \}$ is finite. Our first main theorem concerns the average size of the $2$-Selmer group of $J_b$. In what follows, we let $\mathcal{F}$ be either $\sh{E}_{\min}$ or a subset of $\sh{E}$ defined by finitely many congruence conditions (see \S\ref{section: proof of main theorems}). 
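To make the equivalence of coefficients explicit (a standard weighted-homogeneity computation, recorded only for the reader's convenience): giving $x$ and $y$ the weights $3$ and $4$, substituting $(x,y) \mapsto (\lambda^{3}x,\lambda^{4}y)$ in Equation (\ref{equation: e6 family beginning paper}) and dividing both sides by $\lambda^{12}$ yields
$$ y^3 = x^4+(\lambda^{-2}p_2x^2+\lambda^{-5}p_5x+\lambda^{-8}p_8)y+\lambda^{-6}p_6x^2+\lambda^{-9}p_9x+\lambda^{-12}p_{12}, $$
i.e.\ the same equation with coefficients $(\lambda^{-i}p_i)$; replacing $\lambda$ by $\lambda^{-1}$ recovers the substitution $(p_i) \mapsto (\lambda^{i}p_i)$ above. The exponents $72/i$ in the height are chosen so that $\mathrm{ht}$ scales uniformly under this action: $|\lambda^{i}p_i|^{72/i} = |\lambda|^{72}\,|p_i|^{72/i}$.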
\begin{theorem}[Theorem \ref{theorem: main theorem}]\label{theorem: first main theorem intro} When ordered by height, the average size of the $2$-Selmer group $\Sel_2J_b$ for $b\in \mathcal{F}$ is bounded above by $3$. More precisely, we have \begin{equation*} \limsup_{a\rightarrow \infty} \frac{ \sum_{b\in \mathcal{F},\; \mathrm{ht}(b)<a }\# \Sel_2J_b }{\# \{b \in \mathcal{F}\mid \mathrm{ht}(b) < a\}} \leq 3. \end{equation*} \end{theorem} We expect that the limit exists and equals $3$, see the discussion of Step $(3)$ in \S\ref{subsection: intro methods}. Thorne \cite{Thorne-E6paper} has proved that the average size of the $2$-Selmer set of $C_b$ (a pointed subset of $\Sel_2 J_b$) for $b\in \sh{E}$, when ordered by height, is finite. From this he deduces that a positive proportion of members of the family of affine curves $C^{\circ}_b$ for $b\in \sh{E}$, obtained from $C_b$ by removing the point at infinity, have integral points everywhere locally but no integral points globally. Theorem \ref{theorem: first main theorem intro} provides an explicit estimate on the size of the full $2$-Selmer group, not just the $2$-Selmer set. We therefore obtain more Diophantine consequences. For example, Bhargava and Shankar observed that bounding the $2$-Selmer group gives an upper bound on the average rank of elliptic curves. In our case we can bound the average of the Mordell--Weil rank $\rk(J_b)$ of $J_b$, the rank of the finitely generated abelian group $J_b(\mathbb{Q})$. Using the inequalities $2 \rk(J_b) \leq 2^{\rk(J_b)} \leq \# \Sel_2 J_b $, we obtain: \begin{corollary} The average rank $\rk(J_b)$ for $b\in \mathcal{F}$ is bounded above by $3/2$. \end{corollary} Another corollary is a bound on the number of rational points of $C_b$ for $b\in \sh{E}$, in the spirit of \cite[Corollary 1.4]{Bhargava-Gross-hyperellcurves}. Write $\delta$ for the proportion of curves in $\mathcal{F}$ satisfying Chabauty's condition, namely $\rk(J_b) \leq \text{genus}(C_b)-1 = 2$. 
Then Theorem \ref{theorem: first main theorem intro} implies that $$ \delta+ (1-\delta)\cdot 2^3 \leq 3, $$ since each curve contributes at least $2^{\rk(J_b)} \geq 1$ to the average of $\# \Sel_2 J_b$, and each curve failing Chabauty's condition contributes at least $2^3$; rearranging $8 - 7\delta \leq 3$ gives $\delta \geq 5/7$. A computation shows that at least $85.7\%$ of curves in our family have good reduction at $7$, and for such curves we have $\#C_b(\mathbb{F}_7)\leq 22$. Stoll's refined bound \cite[Corollary 6.7]{Stoll-Twists} on the Chabauty method implies: \begin{corollary} A majority (in fact at least $61\%$) of curves $C_b$ for $b\in \sh{E}$ have at most $26$ rational points. \end{corollary} Our second main result shows that the Chabauty method at the prime $2$ implies that a positive proportion of curves in our family have only one rational point, using the methods of Poonen and Stoll \cite{PoonenStoll-Mosthyperellipticnorational}. \begin{theorem}[Theorem \ref{theorem: poonen stoll analogue}]\label{theorem: intro poonen stoll} A positive proportion of curves $C_b$ for $b\in\sh{E}$ have only one rational point. More precisely, the quantity \begin{equation*} \liminf_{a\rightarrow \infty} \frac{ \# \{b \in \sh{E} \mid \mathrm{ht}(b)<a, \, C_b(\mathbb{Q})=\{P_{\infty}\} \} }{\# \{b \in \sh{E} \mid \mathrm{ht}(b) < a \}} \end{equation*} is strictly positive. \end{theorem} \subsection{Methods}\label{subsection: intro methods} Bhargava and his collaborators have developed a general strategy for obtaining statistical results on $2$-Selmer groups of families of curves (and many other arithmetic objects). Roughly speaking, the proofs of these theorems have the following structure. For a family of curves $\mathcal{F}$ of interest, one hopes to find a representation $V$ of a reductive group $G$ over $\mathbb{Q}$ so that the $2$-Selmer groups of (the Jacobians of) the curves in $\mathcal{F}$ can be embedded in the set of $G(\mathbb{Q})$-orbits of $V(\mathbb{Q})$. Moreover, after fixing integral structures on $G$ and $V$, orbits corresponding to $2$-Selmer elements should have integral representatives.
If the representation $V$ is coregular (meaning that $V\mathbin{/\mkern-6mu/} G \coloneqq \Spec \mathbb{Q}[V]^G$ is isomorphic to affine space) and satisfies some additional properties, then Bhargava's orbit-counting techniques allow us to count integral orbits in $V$ and sieve out those orbits not corresponding to $2$-Selmer elements. In \cite{Bhargava-Gross-hyperellcurves}, Bhargava and Gross studied the $2$-Selmer group of odd hyperelliptic curves of genus $g$ in this way, using the representation of $\SO_{2g+1}$ on the space of traceless, self-adjoint $(2g+1)\times (2g+1)$-matrices. Our proof of Theorem \ref{theorem: first main theorem intro} has the same structure, although most of the proofs of the individual parts are very different in nature. We now explain in steps how we $(1)$ find the representation $(G,V)$, $(2)$ prove that $2$-Selmer elements admit integral representatives and $(3)$ count integral orbits. For Step $(1)$, we follow the approach taken by Thorne \cite{Thorne-thesis} using a combination of Vinberg theory and the Grothendieck--Brieskorn correspondence. Given a (split, adjoint) simple algebraic group $H$ over $\mathbb{Q}$ with Lie algebra $\lieh$, there exists an involution $\theta\colon H\rightarrow H$, uniquely defined up to conjugation by an element of $H(\mathbb{Q})$, with the property that the group $G \coloneqq \left(H^{\theta}\right)^{\circ}$ is split and that the $G$-representation $V \coloneqq \lieh^{d\theta = -1}$ has good invariant-theoretic properties. We call such $\theta$ a \define{stable involution}. (In \S\ref{section: setup} we make an explicit choice for such an involution in the $E_6$ case, after having fixed a pinning of $H$.) Write $B \coloneqq V\mathbin{/\mkern-6mu/} G = \Spec \mathbb{Q}[V]^G$ and $\pi: V\rightarrow V\mathbin{/\mkern-6mu/} G$ for the canonical projection. Vinberg theory shows that $B$ is isomorphic to $\mathbb{A}^{\rank H}$, so $V$ is coregular. 
The theory of the Kostant section shows that $\pi$ has a section $\sigma : B \rightarrow V$ and for each $b\in B(\mathbb{Q})$ we call $\sigma(b) \in V(\mathbb{Q})$ the `distinguished orbit' or `reducible orbit' (playing a role analogous to that of reducible binary quartic forms in \cite{BS-2selmerellcurves}). Taking a transverse slice to the $G$-action on $V$ at a subregular nilpotent element of $V$ defines a closed subscheme $C^{\circ} \rightarrow V$. The restriction of $\pi$ to $C^{\circ}$ defines a family of curves $C^{\circ} \rightarrow B$. If $H$ is simply laced (so of type $A_n, D_n$ or $E_n$), Thorne \cite[Theorem 3.8]{Thorne-thesis} shows that the fibre $C^{\circ}_0$ above $0 \in B$ is a simple curve singularity of the same type as $H$ and $C^{\circ} \rightarrow B$ is a semi-universal deformation of its central fibre. Moreover in each case there exists a natural compactification $C \rightarrow B$ of the family $C^{\circ} \rightarrow B$, and if the fibre $C_b$ above a point $b\in B(\mathbb{Q})$ is smooth then he shows that there is a natural Galois equivariant isomorphism $J_b[2] \simeq Z_G(\sigma(b))$ where $J_b$ is the Jacobian of $C_b$. This last isomorphism, combined with the well-known interpretation of the $G(\mathbb{Q})$-orbits on $V_b(\mathbb{Q})$ in terms of the Galois cohomology of $Z_G(\sigma(b))$ (Lemma \ref{lemma: AIT}), gives the link between the $2$-Selmer group of $J_b$ and the orbits of the representation $V$. If $H$ is of type $A_{2g}$, the singularity is of the form $(y^2 = x^{2g+1})$ and the family $C \rightarrow B$ is isomorphic to the family of odd hyperelliptic curves of genus $g$ considered by Bhargava and Gross. If $H$ is of type $E_6$, the singularity is of the form $(y^3 = x^4)$ and if we write $B = \Spec \mathbb{Q}[p_2,p_5,p_6,p_8,p_9,p_{12}]$ then the family $C \rightarrow B$ is isomorphic to the family given by Equation (\ref{equation: e6 family beginning paper}). 
For Step $(2)$, we follow the same strategy as \cite{Thorne-Romano-E8} where the authors prove a similar result for a different representation in their study of the $3$-Selmer groups of genus-$2$ curves. It turns out that proving that a $G(\mathbb{Q}_p)$-orbit has an integral representative amounts to proving that a certain object, consisting of a reductive group over $\mathbb{Q}_p$ with extra data, extends to an object over $\mathbb{Z}_p$ (see Proposition \ref{proposition: G-orbits in terms of groupoids}). We achieve this by deforming to the case of square-free discriminant and using a general result on extending reductive group schemes over open dense subschemes of regular arithmetic surfaces (Lemma \ref{lemma: extend objects complement codimension 2}). In \cite{Thorne-Romano-E8} the authors use the Mumford representation to perform this step explicitly. Here we complete the deformation step by exploiting properties of the compactified Jacobian of the $E_6$ curve singularity $(y^3 = x^4)$ in the sense of Altman and Kleiman \cite{AltmanKleiman-CompactifyingThePicardScheme}, and by using Bertini theorems over $\mathbb{Q}_p$ and $\mathbb{F}_p$. A crucial ingredient is the fact that the total space of the relative compactified Jacobian of the semi-universal deformation of the singularity is nonsingular. The techniques applied here work verbatim for any of the families described in Step $(1)$ where the centre of the simply connected group of the corresponding Dynkin diagram has odd order, namely $A_{2n}, E_6$ and $E_8$. (This condition ensures that $C \rightarrow B$ has geometrically integral fibres, which leads to a good theory of the compactified Picard scheme.) This provides a way of proving the existence of integral representatives in many of the previously considered cases in the literature. (Our method only works for sufficiently large primes $p$ but this does not cause any problems in the counting step.) 
It should be straightforward to make this strategy work for all the families of Step $(1)$. For Step $(3)$, we follow the ideas of Bhargava closely, about which we will make two remarks. First of all, because we cannot prove a uniformity estimate like \cite[Theorem 2.13]{BS-2selmerellcurves}, we only obtain an upper bound in our estimates on integral orbits. We expect that similar uniformity estimates hold in our case, which would allow us to use the so-called square-free sieve to show that the average size of the $2$-Selmer group of $J_b$ is in fact equal to $3$. Secondly, the substantial work of `cutting off the cusp' has already been done in \cite{Thorne-E6paper} so counting integral orbits is a formal matter for us given the robustness of Bhargava's counting techniques. Why are we able to estimate the size of the full $2$-Selmer group and not just the $2$-Selmer set as in \cite{Thorne-E6paper}? Apart from a way of constructing integral representatives mentioned above, this is based on the following novelty. Thanks to \cite{thorne-planequarticsAIT}, we have a way of embedding the full $2$-Selmer group of curves in the orbits of our representations. But the construction does not make it clear that the $2$-Selmer group has in its image the reducible orbit. Controlling this is crucial for our counting techniques since we only count irreducible orbits in Step $(3)$. We prove that in fact the identity element of the $2$-Selmer group is mapped to the reducible orbit, using the following strategy. Fix $b\in B(\mathbb{Q})$ such that $C_b$ is smooth with Jacobian $J_b$, and let $G^{sc}$ denote the simply connected cover of $G$. It turns out that proving this statement for $C_b$ amounts to proving that the simply connected centralizer $\mathcal{U}\coloneqq Z_{G^{sc}}(\sigma(b))$ of $\sigma(b)$ is isomorphic to a subgroup $\mathcal{H}$ of the Mumford theta group related to a certain canonical line bundle on $J_b$. 
The strategy to prove that $\mathcal{U}$ and $\mathcal{H}$ are isomorphic is inspired by the following simple observation: let $C$ be a smooth projective geometrically connected genus-$g$ curve over $\mathbb{Q}$ with Jacobian $J_C$ and let $Z$ be a finite group scheme over $\mathbb{Q}$ that satisfies $Z_{\overbar{\mathbb{Q}}} \simeq \left(\mathbb{Z}/2\mathbb{Z}\right)^{2g}$. If there exists a $Z$-torsor $\widetilde{C} \rightarrow C$ such that $\widetilde{C}$ is geometrically connected, then $Z \simeq J[2]$ as finite group schemes over $\mathbb{Q}$. In our case roughly the same principles apply. The finite \'etale $\mathbb{Q}$-groups $\mathcal{U}$ and $\mathcal{H}$ are central extensions of $J_b[2]$ by $\{\pm 1\}$ which lie in the same isomorphism class over $\overbar{\mathbb{Q}}$. By constructing a $\mathcal{U}$-torsor and $\mathcal{H}$-torsor arising from the properties of our representation and the geometry of Mumford theta groups respectively, we realize $\mathcal{U}$ and $\mathcal{H}$ as quotients of the \'etale fundamental group of the open curve $C^{\circ}_{b,\overbar{\mathbb{Q}}}$ with respect to some rational basepoint (we actually have to take a tangential basepoint following \cite{Deligne-droiteprojective}). We then show that this \'etale fundamental group essentially has only one quotient with the required group-theoretic properties, so $\mathcal{U}$ and $\mathcal{H}$ both inherit the same Galois action from this fundamental group. This proves that $\mathcal{U}$ and $\mathcal{H}$ are isomorphic over $\mathbb{Q}$, proving the required statement. This argument proves a conjecture of Thorne \cite[Conjecture 4.16]{Thorne-thesis} in the $E_6$ case. 
We note that although our construction of orbits (in particular Theorem \ref{theorem: inject 2-descent into orbits}) is very much based on ideas developed in \cite{thorne-planequarticsAIT}, we have phrased the proofs in a way independent of that paper because of some simplifications in the argument and the more general form that we prove. To prove Theorem \ref{theorem: intro poonen stoll} we merely have to adapt certain arguments of \cite{PoonenStoll-Mosthyperellipticnorational} in a straightforward way. One of the crucial ingredients in their argument for hyperelliptic curves is an equidistribution result for $2$-Selmer elements under the mod $2$ reduction of the logarithm map. Since we only obtain an upper bound in Theorem \ref{theorem: first main theorem intro}, we only obtain an `at most equidistribution' result (Theorem \ref{theorem: equidistribution selmer}) but this is enough for our purposes. The results of this paper will be used in forthcoming work \cite{Laga-F4paper} (which was in fact the main motivation for this paper) where we consider the subfamily of curves defined by setting $p_5=p_9=0$ in Equation (\ref{equation: e6 family beginning paper}). The Jacobian of a curve in this subfamily splits as a product of an elliptic curve and a Prym variety, which in that case is a $(1,2)$-polarized abelian surface. Studying the Lie algebra embedding $F_4\subset E_6$ leads to estimates of Selmer groups of these Prym surfaces, which provides evidence for the heuristics of Poonen and Rains \cite{PoonenRains-maximalisotropic} in the case of non-principally polarized abelian varieties. \subsection{Organization} We now describe the organization of the paper. In \S\ref{section: setup} we define the group $G$ and the representation $V$ of $G$ whose orbits we study as a central topic of this paper. We review some properties of this representation and make the connection to the family of curves with Equation (\ref{equation: e6 family beginning paper}). 
In \S\ref{section: orbit parametrization}, we construct orbits associated with $2$-Selmer elements. In \S\ref{section: integral representatives} we prove that orbits coming from $2$-Selmer elements admit integral representatives away from small primes. In \S\ref{section: counting}, we employ Bhargava's orbit-counting techniques to give the estimates we need in order to prove Theorem \ref{theorem: first main theorem intro}. In \S\ref{section: proof of main theorems} we combine all of these ingredients and prove Theorem \ref{theorem: first main theorem intro}. Finally in \S\ref{section: applications to rational points} we prove Theorem \ref{theorem: intro poonen stoll}. \subsection{Acknowledgements} I thank my supervisor Jack Thorne for suggesting the problem, providing many useful suggestions and his constant encouragement. I also want to thank Marius Leonhardt, Davide Lombardo and Beth Romano for their comments on an earlier draft of this paper. Finally I wish to thank Bjorn Poonen for sharing with me the proof of Lemma \ref{lemma: sqfree disc implies regular node}. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 714405). \subsection{Notation and conventions} For a field $k$ we write $k^s$ for a fixed separable closure and $\Gamma_k = \Gal(k^s/k)$ for its absolute Galois group. We will often use the equivalence of categories between finite \'etale group schemes over $k$ (called finite $k$-groups) and finite groups with a continuous $\Gamma_k$-action. As such we may identify a finite $k$-group with its set of $k^s$-points. We define a \define{lattice} to be a finitely generated free $\mathbb{Z}$-module $\Lambda$ together with a symmetric and positive-definite bilinear form $(\cdot,\cdot)\colon \Lambda\times \Lambda \rightarrow \mathbb{Z}$. 
We write $\Lambda^{\vee}\coloneqq \{\lambda\in \Lambda \otimes \mathbb{Q} \mid (\lambda, \Lambda) \subset \mathbb{Z}\}$ for the \define{dual lattice} of $\Lambda$, which is naturally identified with $\Hom(\Lambda,\mathbb{Z})$. We say $\Lambda$ is a \define{root lattice} if $(\lambda,\lambda)$ is an even integer for all $\lambda\in \Lambda$ and the set $$\{ \alpha \in \Lambda \mid (\alpha,\alpha) =2 \}$$ generates $\Lambda$. If $\Phi \subset \mathbb{R}^n$ is a simply laced root system then $\Lambda= \mathbb{Z}\Phi$ is a root lattice. In that case we define the type of $\Lambda$ to be the Dynkin type of $\Phi$. If $S$ is a scheme, an \define{\'etale sheaf of root lattices} $\Lambda$ over $S$ is defined as a locally constant \'etale sheaf of finite free $\mathbb{Z}$-modules together with a bilinear pairing $\Lambda\times \Lambda \rightarrow \mathbb{Z}$ such that for every geometric point $\bar{s}$ of $S$ the stalk $\Lambda_{\bar{s}}$ is a root lattice. In that case $\Aut(\Lambda)$ is a finite \'etale $S$-group. If $X$ is a scheme over $S$ and $T\rightarrow S$ a morphism we write $X_T$ for the base change of $X$ to $T$. If $T = \Spec A$ is an affine scheme we also write $X_A$ for $X_T$. If $G$ is a smooth group scheme over $S$ then we write $\mathrm{H}^1(S,G)$ for the set of isomorphism classes of \'etale sheaf torsors under $G$ over $S$, which is a pointed set coming from non-abelian \v{C}ech cohomology. If $S = \Spec R$ we write $\mathrm{H}^1(R,G)$ for the same object. If $G\rightarrow S$ is a group scheme acting on $X\rightarrow S$ and $x \in X(T)$ is a $T$-valued point, we write $Z_G(x) \rightarrow T$ for the centralizer of $x$ in $G$. 
It is defined by the following pullback square: \begin{center} \begin{tikzcd} Z_G(x) \arrow[d] \arrow[r ] & T \arrow[d] \\ G\times_S X \arrow[r] & X\times_S X \end{tikzcd} \end{center} Here $G\times_S X \rightarrow X \times_S X$ denotes the action map and $T \rightarrow X\times_S X$ denotes the composite of $x$ with the diagonal $X\rightarrow X \times_S X$. If $x$ is an element of a Lie algebra $\lieh$ then we write $\mathfrak{z}_{\lieh}(x)$ for the centralizer of $x$ in $\lieh$, a subalgebra of $\lieh$. A \define{$\mathbb{Z}/2\mathbb{Z}$-grading} on a Lie algebra $\lieh$ over a field $k$ is a direct sum decomposition $$\lieh = \bigoplus_{i\in \mathbb{Z}/2\mathbb{Z}} \lieh(i) $$ of linear subspaces of $\lieh$ such that $[h(i),h(j)] \subset \lieh(i+j)$ for all $i,j \in \mathbb{Z}/2\mathbb{Z}$. This is equivalent to giving a $\mu_2$-action on $\lieh$ by considering the $(\pm 1)$-part of such an action. If $2$ is invertible in $k$ then giving a $\mathbb{Z}/2\mathbb{Z}$-grading is equivalent to giving an involution of $\lieh$. We call a triple $(X,H,Y)$ an \define{$\liesl_2$-triple} of a Lie algebra $\lieh$ if $X,Y,H$ are nonzero elements of $\lieh$ satisfying the following relations: \begin{equation*} [H,X] = 2X , \quad [H,Y] = -2Y ,\quad [X,Y] = H . \end{equation*} If $V$ is a vector space over a field $k$ we write $k[V]$ for the graded algebra $\Sym(V^{\vee})$. Then $V$ is naturally identified with the $k$-points of the scheme $\Spec k[V]$, and we call this latter scheme $V$ as well. If $G$ is a group scheme over $k$ we write $V \mathbin{/\mkern-6mu/} G\coloneqq \Spec k[V]^G$ for the \define{GIT quotient} of $V$ by $G$. 
\begin{table} \centering \begin{tabular}{|c | c | c |} \hline Symbol & Definition & Reference in paper \\ \hline $H$ & Split adjoint group of type $E_6$ & \S\ref{subsection: a stable grading} \\ $T$ & Split maximal torus of $H$ & \S\ref{subsection: a stable grading} \\ $\theta$ & Stable involution of $H$ & \S\ref{subsection: a stable grading} \\ $G$ & Fixed points of $\theta$ on $H$ & \S\ref{subsection: a stable grading} \\ $V$ & $(-1)$-part of action of $\theta$ on $\lieh$ & \S\ref{subsection: a stable grading}\\ $B$ & GIT quotient $V\mathbin{/\mkern-6mu/} G$ & \S\ref{subsection: a stable grading} \\ $\Delta \in \mathbb{Q}[B]$ & Discriminant polynomial & \S\ref{subsection: a stable grading} \\ $\pi\colon V \rightarrow B$ & Invariant map & \S\ref{subsection: a stable grading} \\ $\sigma\colon B \rightarrow V$ & Kostant section & \S\ref{subsection: distinguished orbit} \\ $C^{\circ} \rightarrow B$ & Family of affine curves & \S\ref{subsection: a family of curves} \\ $C \rightarrow B$ & Family of projective curves & \S\ref{subsection: a family of curves} \\ $J \rightarrow B^{\rs}$ & Jacobian variety of $C^{\rs} \rightarrow B^{\rs}$ & \S\ref{subsection: a family of curves} \\ $p_2,\dots ,p_{12}$ & Invariant polynomials of $G$-action on $V$ & \S\ref{subsection: a family of curves} \\ $A \rightarrow B^{\rs}$ & Centralizer of $\sigma|_{B^{\rs}}$ in $H$ & \S\ref{subsection: a family of curves}\\ $\Lambda \rightarrow B^{\rs}$ & Character group scheme of the torus $A\rightarrow B^{\rs}$& \S\ref{subsection: a family of curves}\\ $\mathscr{H} \rightarrow B^{\rs}$ & Subgroup of Mumford Theta group & \S\ref{subsection: Mumford theta groups} \\ $\mathscr{U} \rightarrow B^{\rs}$ & Centralizer of $\sigma|_{B^{\rs}}$ in $G^{sc}$ & \S\ref{subsection: comparting two central extensions} \\ $N$ & Sufficiently large integer &\S\ref{subsection: integral structures} \\ $S$ & $\mathbb{Z}[1/N]$ &\S\ref{subsection: integral structures} \\ $\underline{H}, \underline{G}, \underline{V}$ & 
Extensions of above objects over $\mathbb{Z}$ &\S\ref{subsection: integral structures} \\ $\mathcal{C} \rightarrow \underline{B}$ & Extension of $C \rightarrow B$ over $\mathbb{Z}$ &\S\ref{subsection: integral structures} \\ $\mathcal{J} \rightarrow \underline{B}_S^{\rs}$ & Jacobian of $\mathcal{C}^{\rs}_S \rightarrow \underline{B}^{\rs}_S$ &\S\ref{subsection: integral structures} \\ $\bar{\mathcal{J}} \rightarrow \underline{B}_S$ & Compactification of $\mathcal{J} \rightarrow \underline{B}_S^{\rs}$ & \S\ref{subsection: compactifications} \\ \hline \end{tabular} \caption{Notation used throughout the paper} \label{table 1} \end{table} \section{Setup} \label{section: setup} \subsection{Definition of the representation}\label{subsection: a stable grading} Let $H$ be a split adjoint semisimple group of type $E_6$ over $\mathbb{Q}$. We suppose that $H$ comes with a pinning $(T,P,\{X_{\alpha}\})$. So $T \subset H$ is a split maximal torus (which determines a root system $\Phi(H,T) \subset X^*(T)$), $P\subset H$ is a Borel subgroup containing $T$ (which determines a root basis $\Delta_{H} \subset \Phi(H,T)$) and $X_{\alpha}$ is a generator for each root space $\lieh_{\alpha}$ for $\alpha \in \Delta_{H}$. The group $H$ is of dimension $78$. Write $\check{\rho} \in X_*(T)$ for the sum of the fundamental coweights with respect to $\Delta_{H}$, characterised by the property that $(\alpha\circ \check{\rho})(t) = t$ for all $\alpha \in \Delta_{H}$. Write $\zeta\colon H\rightarrow H$ for the unique nontrivial automorphism preserving the pinning: it is an involution inducing the order-$2$ symmetry of the Dynkin diagram of $E_6$. Let $\theta \coloneqq \zeta \circ \Ad(\check{\rho}(-1)) = \Ad(\check{\rho}(-1)) \circ\zeta$. 
Then $\theta$ defines an involution of $\lieh$ and thus by considering $(\pm1)$-eigenspaces it determines a $\mathbb{Z}/2\mathbb{Z}$-grading $$\lieh = \lieh(0) \oplus \lieh(1).$$ Let $G \coloneqq H^{\theta}$ be the centralizer of $\theta$ in $H$ and let $V\coloneqq \lieh(1)$: the space $V$ defines a representation of $G$ by restricting the adjoint representation. If we write $\bigg$ for the Lie algebra of $G$ then $V$ is a Lie algebra representation of $\bigg = \lieh(0)$. The pair $(G,V)$ is the central object of study of this paper. The results of \cite{Reeder-torsion} applied to the Kac diagram of $\theta$ \cite[\S7.1; Table 3]{GrossLevyReederYu-GradingsPosRank} show that $G$ is isomorphic to $\PSp_8$ and $V$ is the unique irreducible $42$-dimensional subrepresentation of $\wedge^4(8)$, where $(8)$ denotes the defining representation of $\Sp_8$. The following proposition summarizes some properties of the representation $V$. In particular, it shows that regular semisimple orbits over algebraically closed fields are well understood. For a field $k/\mathbb{Q}$ and $v\in V(k)$, we say $v$ is \define{regular, nilpotent, semisimple} respectively if it is so when considered as an element of $\lieh(k)$. \begin{proposition} \label{prop : graded chevalley} Let $k/\mathbb{Q}$ be a field. The following properties are satisfied: \begin{enumerate} \item $V_k$ satisfies the Chevalley restriction theorem: if $\mathfrak{a} \subset V_k$ is a Cartan subalgebra, then the map $N_{G}(\mathfrak{a}) \rightarrow W_{\mathfrak{a}} \coloneqq N_{H}(\mathfrak{a})/Z_{H}(\mathfrak{a})$ is surjective, and the inclusions $\mathfrak{a} \subset V_k \subset \lieh_k$ induce isomorphisms $$\mathfrak{a}\mathbin{/\mkern-6mu/} W_{\mathfrak{a}} \simeq V_k\mathbin{/\mkern-6mu/} G \simeq \lieh_k \mathbin{/\mkern-6mu/} H .$$ In particular, the quotient is isomorphic to affine space. \item Suppose that $k$ is separably closed and let $x,y\in V(k)$ be regular semisimple elements. 
Then $x$ is $G(k)$-conjugate to $y$ if and only if $x,y$ have the same image in $V\mathbin{/\mkern-6mu/} G$. \item Let $\Delta \in \mathbb{Q}[V]^{G}$ be the restriction of the Lie algebra discriminant of $\lieh$ to the subspace $V$. Then for all $x\in V(k)$, $x$ is regular semisimple if and only if $\Delta(x) \neq 0$, if and only if the $G$-orbit of $x$ is closed in $V_k$ and the stabilizer $Z_{G}(x)$ is finite. \end{enumerate} \end{proposition} \begin{proof} These are classical results in the invariant theory of graded Lie algebras due to Vinberg and Kostant--Rallis; we refer to \cite[\S2]{Thorne-thesis} for precise references. Note that the discriminant of a Lie algebra is by definition the image of the product of all the roots in a fixed Cartan subalgebra under the Chevalley isomorphism. \end{proof} We note that Cartan subalgebras of $\lieh$ contained in $V$ do exist: we will construct a family of tori $A \rightarrow B^{\rs}$ in \S\ref{subsection: a family of curves} whose Lie algebras provide such examples. We write $B \coloneqq V\mathbin{/\mkern-6mu/} G = \Spec \mathbb{Q}[V]^{G}$ and $\pi\colon V \rightarrow B$ for the natural quotient map. We have a $\mathbb{G}_m$-action on $V$ given by $\lambda \cdot v = \lambda v$ and there is a unique $\mathbb{G}_m$-action on $B$ such that $\pi$ is $\mathbb{G}_m$-equivariant. \subsection{The distinguished orbit}\label{subsection: distinguished orbit} We describe a section of the GIT quotient $\pi\colon V \rightarrow B$ whose construction is originally due to Kostant. Let $E \coloneqq \sum_{\alpha \in \Delta_{H}} X_{\alpha} \in \lieh$. Then $E\in \lieh(1)$ is regular and nilpotent. By \cite[Lemma 2.14 and Lemma 2.17]{Thorne-thesis} there exists a unique normal $\liesl_2$-triple $(E,X,F)$ containing $E$. By definition, this means that $(E,X,F)$ is an $\liesl_2$-triple with the additional property that $X\in \lieh(0)$ and $F \in \lieh(1)$. 
We define the affine linear subspace $\kappa \coloneqq \left(E +\mathfrak{z}_{\lieh}(F) \right) \cap V \subset V$. \begin{proposition}\label{proposition: Kostant section E6} \begin{enumerate} \item The composite map $\kappa \hookrightarrow V\rightarrow B$ is an isomorphism. \item $\kappa$ is contained in the open subscheme of regular elements of $V$. \item The morphism $G \times \kappa \rightarrow V, (g,v) \mapsto g\cdot v$ is \'etale. \end{enumerate} \end{proposition} \begin{proof} Parts 1 and 2 are \cite[Lemma 3.5]{Thorne-thesis}; the last part is \cite[Proposition 3.4]{Thorne-thesis}, together with the fact that $G\times \kappa$ and $V$ have the same dimension. \end{proof} Write $\sigma\colon B \rightarrow V$ for the inverse of $\pi|_{\kappa}$. We call $\sigma$ the \define{Kostant section} for the group $H$. It determines a distinguished orbit over $\mathbb{Q}$ for every $b\in B(\mathbb{Q})$ in the representation $V$, playing a role analogous to that of reducible binary quartic forms as studied in \cite{BS-2selmerellcurves}. It will be used to organize the set of rational orbits with fixed invariants. \subsection{A family of curves}\label{subsection: a family of curves} We introduce a family of curves and relate it to stabilizers of regular semisimple elements in the representation $V$. We say an element $b\in B$ is \define{regular semisimple} if it has nonzero discriminant and we write $B^{\rs}\subset B$ for the open subscheme of regular semisimple elements of $B$, the complement of the discriminant locus in $B$. For a $B$-scheme $U$ we write $U^{\rs}$ for the restriction to the regular semisimple locus. For example, if $k/\mathbb{Q}$ is a field and $v\in V(k)$, then $v\in V^{\rs}(k)$ if and only if $v$ is regular semisimple in the sense of \S\ref{subsection: a stable grading} by Part 3 of Proposition \ref{prop : graded chevalley}. The following straightforward lemma shows that for $v\in V^{\rs}(k)$ the isomorphism class of $Z_G(v)$ only depends on $\pi(v)$.
\begin{lemma}\label{lemma: centralizers with same invariants isomorphic} Let $S$ be a scheme and $v,v' \colon S \rightarrow V^{\rs}$ be morphisms such that $\pi(v) = \pi(v')$. Then $Z_G(v) \simeq Z_G(v')$ as group schemes over $S$. \end{lemma} \begin{proof} This follows from the fact that $v,v'$ are \'etale locally $G$-conjugate and that $Z_G(v)$ is abelian, see \cite[Part 2 of Proposition 4.1]{Thorne-thesis}. \end{proof} We define $A$ as the centralizer $Z_{H}(\sigma|_{B^{\rs}})$, a maximal torus of $H_{B^{\rs}} = H \times B^{\rs}$. (Recall that by our conventions, the centralizer of a $B^{\rs}$-point of $V$ is a group scheme over $B^{\rs}$.) This defines for every field $k/\mathbb{Q}$ and $b\in B^{\rs}(k)$ a maximal torus $A_b$ in $H_k$. We write $\Lambda$ for the character group $X^*(A)$ of $A$, an \'etale sheaf of $E_6$ root lattices over $B^{\rs}$. \begin{lemma}\label{lemma: centralizer kostant same as mod 2 root lattice} The involution $\theta$ restricts to the inversion map on $A$, so $Z_{G}(\sigma|_{B^{\rs}}) = A[2]$. Moreover we have a natural isomorphism $ Z_G(\sigma|_{B^{\rs}}) \simeq \Lambda/2\Lambda$ of group schemes over $B^{\rs}$. \end{lemma} \begin{proof} The first claim follows from the fact that $\theta$ is a stable involution and can be deduced from \cite[Lemma 2.21]{Thorne-thesis}. To prove that $A[2] \simeq \Lambda/2\Lambda$, we note that the Cartier dual of $A[2]$ equals $\Lambda/2\Lambda$. The pairing on $\Lambda$ defines an injective map $\Lambda\rightarrow \Lambda^{\vee}$ whose image has index $3$, so its mod $2$ reduction is an isomorphism. This proves that $Z_{G}(\sigma|_{B^{\rs}})= A[2]$ is naturally isomorphic to the Cartier dual of $\Lambda^{\vee}/2\Lambda^{\vee}$, which is $\Lambda/2\Lambda$. 
\end{proof} We note that since $\Lambda$ is an \'etale sheaf of $E_6$ root lattices over $B^{\rs}$, $\Lambda/2\Lambda$ is a finite \'etale group scheme over $B^{\rs}$ and the pairing $(,)$ on $\Lambda$ induces a pairing $\Lambda/2\Lambda \times \Lambda/2\Lambda \rightarrow \{\pm 1\} , (\lambda,\mu) \mapsto (-1)^{(\lambda,\mu)}$, where we view $\{\pm 1 \}$ as a constant group scheme over $B^{\rs}$. We define the morphism of $B^{\rs}$-schemes $q_{\Lambda}\colon \Lambda/2\Lambda \rightarrow \{ \pm 1 \}$ by sending an $S$-point $\lambda$ to $(-1)^{(\lambda,\lambda)/2} $. Then $q_{\Lambda}$ is a quadratic form on $\Lambda/2\Lambda$, in the sense that $q_{\Lambda}(\lambda+\mu) = (-1)^{(\lambda,\mu)} q_{\Lambda}(\lambda) q_{\Lambda}(\mu) $ for all $S$-points $\lambda,\mu$. The following important proposition gives a connection between the representation $V$ and a family of algebraic curves parametrized by $B$. \begin{proposition}\label{proposition: bridge jacobians root lattices} We can choose polynomials $p_2, p_5, p_6, p_8, p_9, p_{12} \in \mathbb{Q}[V]^{G}$ with the following properties: \begin{enumerate} \item Each polynomial $p_i$ is homogeneous of degree $i$ and $\mathbb{Q}[V]^{G} \simeq \mathbb{Q}[p_2, p_5, p_6, p_8, p_9, p_{12}]$. Consequently, there is an isomorphism $B\simeq \mathbb{A}^6_{\mathbb{Q}}$. \item Let $C^{\circ} \rightarrow B$ be the family of affine curves given by the equation \begin{equation}\label{equation : E6 family middle of paper} y^3 = x^4+y(p_2x^2+p_5x+p_8)+p_6x^2+p_9x+p_{12}. \end{equation} Let $C\rightarrow B$ be the completion of $C^{\circ} \rightarrow B$ inside $\P^2_{B}$. If $k/\mathbb{Q}$ is a field and $b\in B(k)$, then $C_b$ is smooth if and only if $b\in B^{\rs}(k)$. \item Let $J \rightarrow B^{\rs}$ be the relative Jacobian of its smooth part \cite[\S9.3; Theorem 1]{BLR-NeronModels}. 
Then there is an isomorphism $\Lambda/2\Lambda \simeq J[2] $ of finite \'etale group schemes over $B^{\rs}$ that sends the pairing on $\Lambda/2\Lambda$ to the Weil pairing $J[2] \times J[2] \rightarrow \{ \pm 1\}$. \item There exists an isomorphism $Z_{G}(\sigma|_{B^{\rs}}) \simeq J[2]$ of finite \'etale group schemes over $B^{\rs}$. \end{enumerate} \end{proposition} \begin{proof} Part 1 follows from the isomorphism $\mathbb{Q}[V]^G\simeq \mathbb{Q}[\lieh]^{H}$ of Proposition \ref{prop : graded chevalley} and the well-known description of the invariant polynomials of the adjoint action of $H$ on $\lieh$; see for example \cite[Theorem 3.5]{Panyushev-Invarianttheorythetagroups}. Part 2 follows from \cite[Theorem 3.8, case $E_6$]{Thorne-thesis} and \cite[Corollary 3.16]{Thorne-thesis}, together with the fact that $C_b$ is always smooth at the point at infinity. Part 3 follows from \cite[Corollary 4.12]{Thorne-thesis}. Finally, Part 4 follows from combining Part 3 with Lemma \ref{lemma: centralizer kostant same as mod 2 root lattice}. \end{proof} For the remaining part of this paper we fix a choice of polynomials $p_2, p_5, p_6, p_8, p_9, p_{12} \in \mathbb{Q}[V]^{G}$ satisfying the conclusions of Proposition \ref{proposition: bridge jacobians root lattices}. Recall that we have defined a $\mathbb{G}_m$-action on $B$ which satisfies $\lambda \cdot p_i = \lambda^ip_i$. The assignment $\lambda \cdot (x,y) := (\lambda^3 x,\lambda^4 y)$ defines a $\mathbb{G}_m$-action on $C$ such that the morphism $C\rightarrow B$ is $\mathbb{G}_m$-equivariant. \subsection{Further properties of $J[2]$}\label{subsection: further properties of J[2]} We give some additional properties of the group scheme $J[2] \rightarrow B^{\rs}$, which by Proposition \ref{proposition: bridge jacobians root lattices} we may identify with $\Lambda/2\Lambda \rightarrow B^{\rs}$. Before we state them, we recall some definitions and set up notation. 
Recall from \S\ref{subsection: a stable grading} that $T$ is a split maximal torus of $H$. Let $\liet$ be its Lie algebra and $\Lambda_T$ its character group. Write $W\coloneqq N_{H}(T)/T$ for the Weyl group of $H$ with respect to $T$, a constant group scheme over $\mathbb{Q}$. Part 1 of Proposition \ref{prop : graded chevalley} implies that the natural map $B=V\mathbin{/\mkern-6mu/} G \rightarrow \lieh\mathbin{/\mkern-6mu/} H$ is an isomorphism. Write $\liet \rightarrow \liet \mathbin{/\mkern-6mu/} W \simeq \lieh \mathbin{/\mkern-6mu/} H \simeq B$ for the composite of the natural projection map $\liet \rightarrow \liet \mathbin{/\mkern-6mu/} W$, the Chevalley isomorphism $\liet \mathbin{/\mkern-6mu/} W \simeq \lieh \mathbin{/\mkern-6mu/} H$ and the inverse of this isomorphism. Restricting to regular semisimple elements defines a finite \'etale cover $f\colon\liet^{\rs} \rightarrow B^{\rs}$ with Galois group $W$. \begin{proposition}\label{proposition: monodromy of J[2]} We have the following: \begin{enumerate} \item The finite \'etale group scheme $\Lambda/2\Lambda\rightarrow B^{\rs}$ becomes trivial after the base change $f\colon\liet^{\rs} \rightarrow B^{\rs}$, where it is isomorphic to the constant group scheme $\Lambda_T/2\Lambda_T$. The monodromy action is given by the natural action of $W$ on $\Lambda_T/2\Lambda_T$. \item The only section of $\Lambda/2\Lambda\rightarrow B^{\rs}$ is the zero section. \item If $q\colon \Lambda/2\Lambda \rightarrow \{\pm 1\}$ is a $B^{\rs}$-morphism such that $q(\lambda+\mu) = (-1)^{(\lambda,\mu)}q(\lambda)q(\mu)$ for all $S$-points $\lambda,\mu$ of $\Lambda/2\Lambda$, then $q = q_{\Lambda}$. \item The only automorphism of the $B^{\rs}$-group scheme $\Lambda/2\Lambda$ fixing the pairing $\Lambda/2\Lambda\times \Lambda/2\Lambda \rightarrow \{\pm 1\}, (\lambda,\mu) \mapsto (-1)^{(\lambda,\mu)}$ is the identity.
\end{enumerate} \end{proposition} \begin{proof} The first claim follows from the fact that the torus $A \rightarrow B^{\rs}$ is isomorphic to the constant torus $T\times \liet^{\rs} \rightarrow \liet^{\rs}$ after pulling back along $f$, with monodromy given by the action of $W$ on $T$. Indeed, a straightforward adaptation of Lemma \ref{lemma: centralizers with same invariants isomorphic} to the case of the adjoint action of $H$ on $\lieh$ shows that if $x,x'\colon S \rightarrow \lieh^{\rs}$ are $S$-points which agree after composing with $\lieh^{\rs} \rightarrow \lieh\mathbin{/\mkern-6mu/} H \simeq B$, then $Z_{H}(x) \simeq Z_{H}(x')$ as group schemes over $S$. (Here $\lieh^{\rs}\subset \lieh$ denotes the subset of regular semisimple elements.) In particular, we can apply this to the $\liet^{\rs}$-points $i\colon \liet^{\rs} \rightarrow \lieh^{\rs}$ (where $i$ denotes the inclusion) and $\sigma \circ f$ (where $\sigma$ denotes the Kostant section). Comparing their centralizers, we obtain an isomorphism $T\times \liet^{\rs} \simeq A_{\liet^{\rs}}$. Since this isomorphism is induced by \'etale locally conjugating $i$ and $\sigma\circ f$ by elements of $H$, the monodromy action is indeed given by the natural action of $W$ on $T$. For the second claim, it suffices to prove that the only $W$-invariant element of $\Lambda_T/2\Lambda_T$ is the identity. This is an easy exercise in the combinatorics of the root lattice of type $E_6$. For the third claim, note that the $B^{\rs}$-scheme of quadratic refinements of the non-degenerate pairing $\Lambda/2\Lambda\times \Lambda/2\Lambda\rightarrow \{\pm 1\}$ is a torsor for the group $\Lambda/2\Lambda \rightarrow B^{\rs}$ by \cite[\S1]{GrossHarris-theta}. Since the latter group scheme has only one $B^{\rs}$-point by the second claim, the third claim follows. Finally, we treat the fourth claim. By the previous claim, such an isomorphism must preserve the quadratic form $q_{\Lambda}$.
So it suffices to prove that every automorphism of $\Lambda_T/2\Lambda_T$ preserving the quadratic form $q(\lambda)= (-1)^{(\lambda,\lambda)/2}$ and commuting with every element of $W$ is the identity. But since the natural map $W \rightarrow \Aut(\Lambda_T/2\Lambda_T,q)$ is an isomorphism \cite[Remark 4.3.4]{Lurie-minisculereps} and the centre of the Weyl group of $E_6$ is trivial, the proposition follows. \end{proof} For later purposes, it is useful to know that the isomorphism $\Lambda/2\Lambda \simeq J[2]$ intertwines certain quadratic forms on both sides, as we now explain. On the one hand, in \S\ref{subsection: a family of curves} we have defined a quadratic form $q_{\Lambda} \colon \Lambda/2\Lambda \rightarrow \{ \pm 1\}$ satisfying $q_{\Lambda}(\lambda+\mu) = (-1)^{(\lambda,\mu)} q_{\Lambda}(\lambda) q_{\Lambda}(\mu) $ for all $\lambda,\mu$. On the other hand, we can use the theory of theta characteristics to define a quadratic form on $J[2]$, as follows. (We refer the reader to \cite{GrossHarris-theta} for basics on theta characteristics.) For every field $k/\mathbb{Q}$ and every $b\in B^{\rs}(k)$ the curve $C_{b}$ has a marked point $P_{\infty}$ which is a hyperflex in the canonical embedding. This implies that $4P_{\infty}$ is a canonical divisor, so $\kappa_b = 2P_{\infty}$ is a theta characteristic. The following well-known result of Mumford \cite{Mumford-thetacharacteristicsalgebraiccurve} shows that to this data we can associate a quadratic form. To state it in a general set-up, let $X/k$ be a smooth projective curve with Jacobian variety $J_{X}$. We define for a divisor $D$ on $X$ the quantity $h^0(D) \coloneqq \dim_k \mathrm{H}^0(X,\O_{X}(D))$. \begin{lemma}\label{lemma: mumford construction quadratic form to theta char} Let $\kappa$ be a divisor on $X$ such that $2\kappa$ is canonical. 
Then the map $q_{\kappa}\colon J_{X}[2] \rightarrow \{\pm 1\}$ defined by \begin{displaymath} q_{\kappa}(\omega) \coloneqq (-1)^{h^0(\kappa + \omega) +h^0(\kappa) } \end{displaymath} is a quadratic refinement of the Weil pairing: for all $\omega, \eta \in J_{X}[2]$, we have $q_{\kappa}(\omega+\eta) = e_2(\omega,\eta)q_{\kappa}(\omega)q_{\kappa}(\eta)$, where $e_2 \colon J_{X}[2]\times J_{X}[2] \rightarrow \{\pm 1\} $ denotes the Weil pairing. \end{lemma} We apply the above construction to the fibres of $C^{\rs} \rightarrow B^{\rs}$ and the theta characteristic $\kappa = 2P_{\infty}$. In fact by \cite[Theorem 1]{Mumford-thetacharacteristicsalgebraiccurve} this procedure can be globalized: we obtain a quadratic form $q_{\kappa} \colon J[2] \rightarrow \{\pm1\}$ refining the Weil pairing $e_2 \colon J[2]\times J[2] \rightarrow \{\pm 1\}$. \begin{proposition}\label{prop: quadratic forms identified} Under the isomorphism $\Lambda/2\Lambda \simeq J[2]$ of Proposition \ref{proposition: bridge jacobians root lattices}, the quadratic forms $q_{\Lambda}$ and $q_{\kappa}$ are identified. \end{proposition} \begin{proof} Write $q\colon \Lambda/2\Lambda \rightarrow \{\pm 1\}$ for the composite of $q_{\kappa}$ with the above isomorphism. It suffices to prove that $q_{\Lambda}$ and $q$ are equal. Since both $q_{\Lambda}$ and $q$ are quadratic refinements of the same pairing on $\Lambda/2\Lambda$ by Proposition \ref{proposition: bridge jacobians root lattices}, this follows from Part 3 of Proposition \ref{proposition: monodromy of J[2]}. \end{proof} The following lemma relates the bitangents of a curve in our family with the $2$-torsion of the Jacobian and will be useful in \S\ref{section: applications to rational points}. Recall that $\Gamma_k$ denotes the absolute Galois group of a field $k$. \begin{lemma}\label{lemma: bitangents and 2-torsion} Let $k/\mathbb{Q}$ be a field and $b\in B^{\rs}(k)$. 
Let $\mathcal{B}$ be the set of bitangents of $C_b$ over $k^s$ different from the line at infinity in Equation (\ref{equation : E6 family middle of paper}), equipped with its natural $\Gamma_k$-action. If $\Gamma_k$ acts transitively on $\mathcal{B}$, then $J_b[2](k)=0$. \end{lemma} \begin{proof} It is well-known \cite[\S4]{GrossHarris-theta} that bitangents of $C_b$ correspond to odd theta characteristics of $C_b$; this correspondence identifies the line at infinity with $2P_{\infty}$. (Recall that $P_{\infty}$ denotes the unique point at infinity of $C_b$.) The assignment $\kappa \mapsto \kappa - 2P_{\infty}$ defines a $\Gamma_k$-equivariant bijection from the set of theta characteristics to the set of $2$-torsion points on $J_b$. Moreover under the identification $J_b[2] \simeq \Lambda_b/2\Lambda_b$ from Proposition \ref{proposition: bridge jacobians root lattices} which identifies the quadratic forms $q_{\kappa_b}$ and $q_{\Lambda_b}$ (Proposition \ref{prop: quadratic forms identified}), the set of odd theta characteristics is mapped bijectively to the zero set of the quadratic form $q_{\Lambda_b}$ on $\Lambda_b/2\Lambda_b$. The proof now follows from Lemma \ref{lemma: root lattice transitive zero set no invariants} below. \end{proof} \begin{lemma}\label{lemma: root lattice transitive zero set no invariants} Let $\Lambda$ be a root lattice of type $E_6$ with quadratic form $q\colon \Lambda/2\Lambda \rightarrow \{\pm 1\},\, \lambda \mapsto (-1)^{(\lambda,\lambda)/2}$. Let $G$ be a subgroup of $\Aut\left(\Lambda/2\Lambda, q \right)$ such that $G$ acts transitively on the set $\{v\in \Lambda/2\Lambda \mid v\neq 0 , \, q(v) =1 \}$. Then $\left(\Lambda/2\Lambda \right)^G=\{0\}$. \end{lemma} \begin{proof} Suppose that $v \in \Lambda/2\Lambda$ is a nonzero element fixed by every element of $G$. The assumptions on $G$ imply that $q(v)=-1$. For $i \in \{0,1\}$ define $$ S_i \coloneqq \{w \in \Lambda/2\Lambda \mid (v,w)=i \}. 
$$ Then $\Lambda/2\Lambda = S_0\sqcup S_1$ and each $S_i$ is stable under $G$. We claim that both $S_0$ and $S_1$ contain nonzero elements on which $q$ takes the value $1$. This would prove the lemma, since it contradicts the transitivity of $G$ on such elements. To prove the claim, note that the group $\Aut\left(\Lambda/2\Lambda,q\right)$ acts transitively on the set of non-zero elements of $\Lambda/2\Lambda$ on which $q$ takes the value $-1$, since every such element is the image of a root in $\Lambda$. So it suffices to prove the claim for a single $v$, in which case it can easily be checked explicitly. \end{proof} \subsection{The discriminant polynomial}\label{subsection: discriminant polynomial} We compare the discriminant $\Delta\in \mathbb{Q}[B]$, which is defined using Lie theory, with the discriminant of a plane quartic curve. We keep the notation of the beginning of \S\ref{subsection: further properties of J[2]}. Recall that $\Delta$ is defined as the image of $\prod_{\alpha} d\alpha\in \mathbb{Q}[\liet]^W$ under the chain of isomorphisms $\mathbb{Q}[\liet]^W \rightarrow \mathbb{Q}[\lieh]^{H} \rightarrow \mathbb{Q}[V]^{G} = \mathbb{Q}[B]$, where $\alpha \in \Phi(H,T)$ runs over the set of roots of $H$. Since $\Phi(H,T)$ has $72$ elements, $\Delta$ is homogeneous of degree $72$. \begin{lemma}\label{lemma: discriminant geometrically irreducible} For every field $k/\mathbb{Q}$, $\Delta$ is irreducible in $k[B]$. \end{lemma} \begin{proof} It suffices to prove that we cannot partition $\Phi(H,T)$ into two nonempty $W$-invariant subsets, which is true since $W$ acts transitively on $\Phi(H,T)$. \end{proof} Now let $R$ be any ring and $F\in R[x,y,z]$ be a homogeneous polynomial of degree $4$. In \cite[Definition 2.2]{Saito-Discriminanthypersurfacevendim}, the (divided) discriminant $\disc(F)\in R$ is defined. It is an integral polynomial in the coefficients of $F$ and $\disc(F) \in R^{\times}$ if and only if the plane quartic $(F=0) \subset \P^2_{R}$ is smooth over $R$.
It satisfies the transformation properties $\disc(\lambda F) = \lambda^{27} \disc(F)$ and $\disc(F((x,y,z)\cdot A)) = (\det A)^{36} \disc(F)$ for every $\lambda\in R$ and $A \in \Mat_3(R)$ (Equations (2.2.3), (2.2.4) in loc. cit.). We define $\Delta_0\in\mathbb{Q}[B]$ as the discriminant of the (homogenized) polynomial appearing in Equation (\ref{equation : E6 family middle of paper}): \begin{equation}\label{equation: definition discriminant Delta0} \Delta_0 \coloneqq \disc(y^3z-x^4-(p_2x^2z+p_5xz^2+p_8z^3)y-(p_6x^2z^2+p_9xz^3+p_{12}z^4)). \end{equation} \begin{proposition}\label{proposition: discriminant Delta and Delta0 agree} The polynomials $\Delta$ and $\Delta_0$ agree up to an element of $\mathbb{Q}^{\times}$. \end{proposition} \begin{proof} Since $\Delta$ and $\Delta_0$ have the same vanishing locus (Part 2 of Proposition \ref{proposition: bridge jacobians root lattices}) and $\Delta$ is irreducible (Lemma \ref{lemma: discriminant geometrically irreducible}), it suffices to prove that $\Delta_0$ is weighted homogeneous of degree $72$ in the variables $p_2,\cdots,p_{12}$. Write $F_B\in \mathbb{Q}[B][x,y,z]$ for the polynomial appearing in the right-hand side of Equation (\ref{equation: definition discriminant Delta0}). Using the transformation properties of $\disc$ we obtain \begin{align*} \Delta_0(\lambda \cdot b )= \disc(\lambda^{12}F_B(\lambda^{-3}x,\lambda^{-4}y,z)) = \lambda^{12\cdot 27-7\cdot 36}\disc(F_B) = \lambda^{72}\Delta_0(b), \end{align*} as desired. \end{proof} \section{Orbit parametrization}\label{section: orbit parametrization} The purpose of this section is to prove that for each $b\in B^{\rs}(\mathbb{Q})$, we can construct a natural injection $\Sel_2J_b \hookrightarrow G(\mathbb{Q})\backslash V_b(\mathbb{Q})$, see Corollary \ref{corollary: Sel2 embeds}.
In \cite{thorne-planequarticsAIT}, such an embedding was already constructed, but it is crucial to know that the distinguished orbit $G(\mathbb{Q})\cdot \sigma(b)$ lies in the image of this embedding and to have a more general version for the purposes of constructing integral representatives (Theorem \ref{theorem: inject 2-descent into orbits}). The technical input is an isomorphism between two central extensions (Proposition \ref{propostion: 2 central extensions coincide}), established in \S\ref{subsection: comparting two central extensions}. The reader is advised to read \S\ref{subsection: Mumford theta groups}, take Corollary \ref{corollary: commutative diagram corollary central extensions} on faith and jump straight to \S\ref{subsection: twisting and embedding the selmer group}. \subsection{Mumford theta groups}\label{subsection: Mumford theta groups} In this subsection, given a curve with a rational theta characteristic, we construct a finite subgroup $\mathcal{H}$ of a certain Mumford theta group and realize this group as the group of deck transformations of a covering of schemes. A general reference is \cite[Chapters 6, 11]{BirkenhakeLange-CAV}. Let $k/\mathbb{Q}$ be a field and $X/k$ a smooth projective geometrically integral curve of genus $g\geq 2$. Write $J_{X}$ for its Jacobian variety and $J_{X}^{g-1}$ for the $J_{X}$-torsor of line bundles of degree $g-1$ on $X$. The variety $J_{X}^{g-1}$ has a distinguished divisor $W_{g-1}$ given by the image of the Abel--Jacobi map $X^{g-1} \rightarrow J_{X}^{g-1}$, called the \define{theta divisor}. For an element $a\in J_{X}(k)$ (respectively $a\in J_{X}^{g-1}(k)$), we write $t_a$ for the translation map $t_a\colon J_{X}\rightarrow J_{X}$ (respectively $t_a\colon J_{X} \rightarrow J_{X}^{g-1}$). We say a line bundle $\kappa \in J_{X}^{g-1}(k^s)$ (or any divisor representing it) is a \define{theta characteristic} if $\kappa^{\otimes 2}$ is isomorphic to the canonical bundle.
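As a numerical cross-check on the two sides of Proposition \ref{prop: quadratic forms identified} (theta characteristics of a genus-$3$ curve on one side, the quadratic form $q_{\Lambda}$ on the mod-$2$ reduction of the $E_6$ root lattice on the other), the following short script verifies that both produce the counts $28$ and $36$. The Gram matrix below is the standard Bourbaki presentation of the $E_6$ root lattice, supplied here for illustration and not taken from the text; this is a sanity check, not part of any proof.

```python
from itertools import product

# Gram matrix of the E6 root lattice in the standard Bourbaki labeling
# (chain 1-3-4-5-6, with node 2 attached to node 4).
GRAM = [
    [ 2,  0, -1,  0,  0,  0],
    [ 0,  2,  0, -1,  0,  0],
    [-1,  0,  2, -1,  0,  0],
    [ 0, -1, -1,  2, -1,  0],
    [ 0,  0,  0, -1,  2, -1],
    [ 0,  0,  0,  0, -1,  2],
]

def norm(v):
    return sum(GRAM[i][j] * v[i] * v[j] for i in range(6) for j in range(6))

# q(v) = (-1)^{(v,v)/2} is well defined on Lambda/2Lambda since the form is even.
counts = {1: 0, -1: 0}
for v in product(range(2), repeat=6):
    counts[(-1) ** (norm(v) // 2)] += 1

# Theta characteristics of a genus-g curve: 2^{2g} in total,
# 2^{g-1}(2^g - 1) odd and 2^{g-1}(2^g + 1) even.
g = 3
odd, even = 2 ** (g - 1) * (2 ** g - 1), 2 ** (g - 1) * (2 ** g + 1)

print(counts, odd, even)  # {1: 28, -1: 36} 28 36
```

The $28$ classes with $q_{\Lambda} = 1$ correspond to the $28$ odd theta characteristics ($2P_{\infty}$ itself maps to $0$, and the remaining $27$ to the nonzero classes matching the bitangents of Lemma \ref{lemma: bitangents and 2-torsion}), while the $36$ classes with $q_{\Lambda} = -1$ are the reductions of the $36$ pairs of roots $\pm\alpha$.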
Suppose that $\kappa \in J_{X}^{g-1}(k)$ is a $k$-rational theta characteristic. In this case $\sh{M} = \O_{J_{X}}(t_{\kappa}^{*}W_{g-1})$ is a symmetric line bundle. We define the \define{Mumford theta group} $G(\sh{M}^2)$ of $\sh{M}^2$ to be the set $$\left\{ (\omega,\phi) \mid \omega \in J_{X}[2](k^s),\phi \colon \sh{M}^2 \xrightarrow{\sim} t_{\omega}^*\sh{M}^2 \right\}$$ with multiplication given by $(\omega,\phi)\cdot (\tau,\psi) = (\omega+\tau, t^*_{\omega}\psi \circ \phi).$ This group admits a natural $\Gamma_k$-action and fits into a central extension $$1\rightarrow \mathbb{G}_{m,k} \rightarrow G(\sh{M}^2) \rightarrow J_{X}[2] \rightarrow 1.$$ The next lemma follows from the definition of the Weil pairing. \begin{lemma} Let $\omega,\tau\in J_{X}[2](k^s)$, and let $\tilde{\omega},\tilde{\tau}$ be lifts of these elements to $G(\sh{M}^2)(k^s)$. Then $\tilde{\omega}\tilde{\tau}\tilde{\omega}^{-1}\tilde{\tau}^{-1} = e_2(\omega,\tau)$, where $e_2\colon J_{X}[2] \times J_{X}[2] \rightarrow \{\pm 1\}$ denotes the Weil pairing on $J_{X}[2]$. \end{lemma} Since $\sh{M}$ is symmetric, there exists a unique isomorphism $f\colon\sh{M}\xrightarrow{\sim}[-1]^*\sh{M}$ that is the identity on the fibre $\sh{M}_0$ above $0 \in J_{X}$. For every $\omega\in J_{X}[2]$, we thus obtain an isomorphism $f_{\omega}\colon \sh{M}_{\omega} \xrightarrow{\sim} \sh{M}_{-\omega} = \sh{M}_{\omega}$, hence a scalar $q_{\sh{M}}(\omega) \in \left(k^s\right)^{\times}$. Since $[-1]^*f \circ f = \Id_{\sh{M}}$, we see that $q_{\sh{M}}(\omega) = \pm 1$. The next lemma shows that $q_{\sh{M}}$ is a quadratic refinement of the Weil pairing $e_2$.
\begin{lemma}\label{lemma: quadratic form mumford theta group is in fact a quadratic form} The map $q_{\sh{M}}$ agrees with the quadratic form $q_{\kappa}$ from Lemma \ref{lemma: mumford construction quadratic form to theta char}: for every $\omega \in J_{X}[2](k^s)$ we have $$q_{\sh{M}}(\omega) = (-1)^{h^0(\omega+\kappa)+h^0(\kappa)},$$ where $h^0(D) = \dim_k \mathrm{H}^0(X,\O_{X}(D))$. Consequently for every $\omega,\tau\in J_{X}[2](k^s)$ we have \begin{equation*} q_{\sh{M}}(\omega+\tau) = e_2(\omega,\tau) q_{\sh{M}}(\omega) q_{\sh{M}}(\tau). \end{equation*} \end{lemma} \begin{proof} By \cite[Proposition 2 of \S2]{Mumford-eqdefAVs} we have $q_{\sh{M}}(\omega) = (-1)^{m_{\omega+\kappa}(W_{g-1}) + m_{\kappa}(W_{g-1})}$, where $m_x(D)$ denotes the multiplicity of a divisor $D$ at a point $x$. By Riemann's singularity theorem, the multiplicity of the theta divisor $W_{g-1}$ at a point $a\in J^{g-1}_X$ is exactly $h^0(a)$. Combining the last two sentences proves the first identity. The second one follows from the first one and Lemma \ref{lemma: mumford construction quadratic form to theta char}. \end{proof} Following Mumford \cite[Definition above Proposition 3 of \S2]{Mumford-eqdefAVs}, the quadratic form $q_{\sh{M}}$ allows us to define the subgroup $\mathcal{H} \subset G(\sh{M}^2)$ as $$\mathcal{H} \coloneqq \left\{\widetilde{\omega} \in G(\sh{M}^2) \mid \widetilde{\omega}^2 = q_{\sh{M}}(\omega) \right\}.$$ (Here we write $\omega$ for the projection of $\widetilde{\omega}$ in $J_{X}[2]$.) Lemma \ref{lemma: quadratic form mumford theta group is in fact a quadratic form} implies that $\mathcal{H}$ is indeed a subgroup and it inherits a $\Gamma_k$-action since $\sh{M}$ is defined over $k$. It fits into the central extension \begin{equation*} 1\rightarrow \{\pm1 \} \rightarrow \mathcal{H} \rightarrow J_{X}[2] \rightarrow 1. \end{equation*} We now show how we can realize $\mathcal{H}$ as the Galois group of a covering space of schemes. 
This approach is certainly not new but we have been unable to find an adequate reference for it. First we recall for an invertible sheaf $\sh{L}$ on $J_{X}$ its associated $\mathbb{G}_m$-torsor $\Gmtorsor{\sh{L}} \rightarrow J_{X}$, the complement of the zero section in the total space of $\sh{L}$. For a scheme $S$ over $\Spec k$, the $S$-points of $\Gmtorsor{\sh{L}}$ are given by pairs $(x,\alpha)$ where $x: S\rightarrow J_{X}$ is an $S$-valued point of $J_{X}$ and $\alpha$ is an isomorphism $\O_S \xrightarrow{\sim} x^*\sh{L}$. We now define the morphism $p\colon \Gmtorsor{\sh{M}^2} \rightarrow \Gmtorsor{\sh{M}}$ which will be the desired $\mathcal{H}$-torsor and sits in the following commutative diagram: \begin{center} \begin{tikzcd} \Gmtorsor{\sh{M}^2} \arrow[d] \arrow[r, "p"] & \Gmtorsor{\sh{M}} \arrow[d] \\ J_{X} \arrow[r, "\times 2"] & J_{X} \end{tikzcd} \end{center} First we choose a rigidification of $\sh{M}$ i.e. an isomorphism $\sh{M}_0 \simeq k$. (The morphism we will construct depends on this choice but this does not cause any problems.) This induces rigidifications of the line bundles $[2]^*\sh{M}$ and $\sh{M}^4$ and there is a unique isomorphism $F\colon [2]^*\sh{M} \xrightarrow{\sim} \sh{M}^4$ afforded by the theorem of the cube which respects these rigidifications. Given a pair $(x,\alpha)$ corresponding to an $S$-valued point of $\Gmtorsor{\sh{M}^2}$, consider the tensor square $\alpha^{\otimes2}$ of $\alpha$, which is an isomorphism $\alpha^{\otimes 2}\colon \O_S \xrightarrow{\sim} x^*\sh{M}^4$. Pulling back $F$ along $x$ defines an isomorphism $$\left([2]\circ x\right)^*\sh{M} = x^*\left([2]^*\sh{M}\right) \simeq x^*\sh{M}^4. $$ Composing $\alpha^{\otimes2}$ with the inverse of this isomorphism defines an isomorphism $\beta\colon \O_S \xrightarrow{\sim} \left([2]\circ x\right)^*\sh{M}$. We define $p$ on $S$-points of $\Gmtorsor{\sh{M}^2}$ by sending the pair $(x,\alpha)$ to the pair $([2]\circ x,\beta)$ via the procedure just described. 
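Before stating the torsor property of $p$, it may help to see the group-theoretic shape of $\mathcal{H}$ in a small explicit model. The script below builds the standard finite Heisenberg-type extension of $(\mathbb{Z}/2\mathbb{Z})^{6}$ by $\{\pm1\}$ (so $g = 3$) from a chosen symplectic splitting, and checks that commutators of lifts compute a symplectic pairing while squaring computes a quadratic refinement of it, exactly as for $\mathcal{H} \subset G(\sh{M}^2)$. The model and its cocycle are our own illustrative choices: they are not canonically identified with the theta group of any particular curve, and the quadratic form below lies in the Arf class opposite to that of $q_{\sh{M}}$ for our family, which does not affect the structural checks.

```python
from itertools import product

# Elements are (s, a, b) with s in {1, -1} central and a, b in F_2^3.
# Multiplication uses the cocycle (-1)^{b . a'} attached to a symplectic
# splitting; these conventions are illustrative, not the paper's.

def dot(x, y):
    return sum(u * v for u, v in zip(x, y)) % 2

def add(x, y):
    return tuple((u + v) % 2 for u, v in zip(x, y))

def mul(g, h):
    (s, a, b), (t, c, d) = g, h
    return (s * t * (-1) ** dot(b, c), add(a, c), add(b, d))

def inv(g):
    s, a, b = g
    return (s * (-1) ** dot(a, b), a, b)

zero = (0, 0, 0)
vecs = list(product(range(2), repeat=3))
heis = [(s, a, b) for s in (1, -1) for a in vecs for b in vecs]  # order 128

def q(a, b):                 # a quadratic form on F_2^6 (of Arf class 0)
    return (-1) ** dot(a, b)

def pairing(g, h):           # symplectic pairing of the images in F_2^6
    (_, a, b), (_, c, d) = g, h
    return (-1) ** ((dot(b, c) + dot(d, a)) % 2)

def comm(g, h):              # g h g^{-1} h^{-1}
    return mul(mul(g, h), mul(inv(g), inv(h)))

# Squaring computes q on the image, so this model plays the role of the
# subgroup called H in the text (for this choice of quadratic form):
assert all(mul(g, g) == (q(g[1], g[2]), zero, zero) for g in heis)

# Commutators of lifts recover the pairing, as in the Weil-pairing lemma:
assert all(comm(g, h) == (pairing(g, h), zero, zero) for g in heis for h in heis)
print("ok")
```

In particular the extension is non-split as an extension of groups: a splitting $J_{X}[2] \rightarrow \mathcal{H}$ would force all commutators of lifts, and hence the Weil pairing, to be trivial.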
\begin{proposition}\label{proposition: V(M) admits a H-torsor} The morphism $p\colon \Gmtorsor{\sh{M}^2}\rightarrow \Gmtorsor{\sh{M}}$ has the natural structure of a right $\mathcal{H}$-torsor. \end{proposition} \begin{proof} For the proof of this proposition it will be useful to give a different interpretation of $\mathcal{H}$. For any $(\omega,\phi) \in G(\sh{M}^2)$, there is a unique $\Phi \colon [2]^*\sh{M} \rightarrow [2]^*\sh{M}$ such that the following diagram commutes: \begin{center} \begin{tikzcd} \sh{M}^4 \arrow[r, "\phi^{\otimes2}"] \arrow[d, "F^{-1}"] & t_{\omega}^*\sh{M}^4 \arrow[d, "t_{\omega}^*F^{-1}"] \\ {[2]^*\sh{M}} \arrow[rd, "\Phi", dashed] & {t^*_{\omega}[2]^*\sh{M}} \arrow[d, "\simeq"] \\ & {[2]^*\sh{M}} \end{tikzcd} \end{center} Here $t_{\omega}^*[2]^*\sh{M}\simeq [2]^*\sh{M}$ is the canonical isomorphism. The morphism $\Phi$ does not depend on the choice of rigidification of $\sh{M}$. Then \cite[Proposition 6 of \S2]{Mumford-eqdefAVs} shows that $(\omega,\phi)$ lies in the subgroup $\mathcal{H}$ of $G(\sh{M}^2)$ if and only if $\Phi$ is the identity. Using this fact we can define the action of $\mathcal{H}$ on $\Gmtorsor{\sh{M}^2}$ as follows. Take a pair $(x,\alpha)$ corresponding to an $S$-valued point of $\Gmtorsor{\sh{M}^2}$ and an $S$-valued point $(\omega,\phi) \in \mathcal{H}$. We define $$(x,\alpha) \cdot (\omega,\phi) := (t_{\omega}\circ x, x^*\phi \circ \alpha).$$ One readily checks that this is a well-defined right action of $\mathcal{H}$ on $\Gmtorsor{\sh{M}^2}$ which is $\Gamma_k$-equivariant. The different interpretation of $\mathcal{H}$ shows that the action commutes with $p$. Moreover, it acts simply transitively on the geometric fibres of $p$.
\end{proof} We specialize the above construction to our situation of interest: for each $b\in B^{\rs}(k)$, the theta characteristic $\kappa = 2P_{\infty}$ on $C_b$ defines a central extension of finite group schemes over $k$: \begin{equation} \label{equation: subgroup mumford theta extension} 1 \rightarrow \{\pm 1 \} \rightarrow \mathcal{H}_b \rightarrow J_b[2] \rightarrow 1. \end{equation} We can globalize this to the family of smooth projective curves $C^{\rs}\rightarrow B^{\rs}$. Indeed, recall that $J\rightarrow B^{\rs}$ denotes the relative Jacobian of this family. Since $C^{\rs} \rightarrow B^{\rs}$ has a section $P_{\infty}$, the scheme $J$ parametrizes rigidified line bundles on $C^{\rs} \rightarrow B^{\rs}$ \cite[Theorem 9.2.5]{Kleiman-PicardScheme}. We can define a line bundle $\sh{M}$ on $J$ using the relative theta divisor (see the proof of \cite[\S9.4; Proposition 4]{BLR-NeronModels} for its construction). By adapting the definition of the theta group $G(\sh{M}^2)$ to the relative situation (see \cite[Expos\'e 7; Definition 3.1]{Pinceauxcourbesgenresdeux}), we obtain a $B^{\rs}$-group scheme $\sh{G}(\sh{M}^2)$ sitting in an exact sequence of smooth group schemes (Proposition 3.2 of loc. cit.): $$ 1\rightarrow \mathbb{G}_{m,B^{\rs}} \rightarrow \sh{G}(\sh{M}^2) \rightarrow J[2] \rightarrow 1. $$ By the same procedure as at the beginning of this section, we obtain a quadratic form $q_{\sh{M}}\colon J[2] \rightarrow \{\pm 1\}$ and we define $\mathscr{H}$ as the kernel of the group homomorphism $\sh{G}(\sh{M}^2) \rightarrow \mathbb{G}_{m,B^{\rs}}, \widetilde{\omega} \mapsto q_{\sh{M}}(\omega) \widetilde{\omega}^2$. It sits in an exact sequence of finite \'etale group schemes \begin{align*} 1\rightarrow \{\pm 1\} \rightarrow \mathscr{H} \rightarrow J[2] \rightarrow 1 \end{align*} which for each $k$-point $b$ of $B^{\rs}$ specializes to the exact sequence (\ref{equation: subgroup mumford theta extension}).
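We record for later use the standard compatibility between $q_{\sh{M}}$ and the commutator pairing (see \cite{Mumford-eqdefAVs}): for $\omega, \omega' \in J[2]$ with arbitrary lifts $\widetilde{\omega}, \widetilde{\omega}' \in \sh{G}(\sh{M}^2)$, the commutator $[\widetilde{\omega}, \widetilde{\omega}']$ is a scalar independent of the choice of lifts and computes the Weil pairing $e_2$ on $J[2]$; moreover $q_{\sh{M}}$ is a quadratic refinement of this pairing, in the sense that
$$[\widetilde{\omega}, \widetilde{\omega}'] = e_2(\omega,\omega') = q_{\sh{M}}(\omega + \omega')\, q_{\sh{M}}(\omega)\, q_{\sh{M}}(\omega').$$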
Once a rigidification for $\sh{M}$ is chosen (which is possible since $B^{\rs}$ has trivial Picard group), we can define a morphism $p\colon \Gmtorsor{\sh{M}^2} \rightarrow \Gmtorsor{\sh{M}}$, and the same argument as in the proof of Proposition \ref{proposition: V(M) admits a H-torsor} shows that $p$ acquires the structure of an $\mathscr{H}$-torsor. \subsection{Comparing two central extensions}\label{subsection: comparting two central extensions} In this section we compare $\mathscr{H}$ with a finite \'etale group scheme coming from the representation theory of the pair $(G,V)$. The consequences of this comparison that will be used later in the paper are summarized in Corollary \ref{corollary: commutative diagram corollary central extensions}. Recall from \S\ref{subsection: a stable grading} that the group $G$ is a split simple group over $\mathbb{Q}$ isomorphic to $\PSp_8$. Write $G^{sc} \rightarrow G$ for its simply connected cover. We have an exact sequence $$1\rightarrow \{\pm 1\} \rightarrow G^{sc} \rightarrow G \rightarrow 1. $$ In \S\ref{subsection: a family of curves} we have defined a family of maximal tori $A\rightarrow B^{\rs}$ in $H$ with the property that $A \cap G_{B^{\rs}} = A[2]$. By Lemma \ref{lemma: centralizer kostant same as mod 2 root lattice} there is a natural isomorphism of $B^{\rs}$-group schemes $A[2] \simeq \Lambda/2\Lambda$. Taking the pullback of the inclusion $\Lambda/2\Lambda \hookrightarrow G_{B^{\rs}} = G\times B^{\rs}$ along the morphism $G_{B^{\rs}}^{sc} \rightarrow G_{B^{\rs}}$ yields a commutative diagram with exact rows \begin{center} \begin{tikzcd} 1 \arrow[r] & \{\pm1\} \arrow[r] & G_{B^{\rs}}^{sc} \arrow[r] & G_{B^{\rs}} \arrow[r] & 1 \\ 1 \arrow[r] & \{\pm1\} \arrow[r] \arrow[u, "="] & \mathscr{U} \arrow[r] \arrow[u] & {\Lambda/2\Lambda } \arrow[r] \arrow[u] & 1 \end{tikzcd} \end{center} where the right-hand square is cartesian.
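Concretely, the pullback description of $\mathscr{U}$ reads as follows: for a $B^{\rs}$-scheme $S$,
$$\mathscr{U}(S) = \left\{(\widetilde{g},\lambda) \in G^{sc}_{B^{\rs}}(S) \times \left(\Lambda/2\Lambda\right)(S) \ \middle|\ \widetilde{g} \text{ maps to the image of } \lambda \text{ in } G_{B^{\rs}}(S)\right\},$$
and the subgroup $\{\pm 1\}$ consists of the pairs $(\pm 1, 0)$.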
The finite \'etale group scheme $\mathscr{U} \rightarrow B^{\rs}$ is a central extension of $\Lambda/2\Lambda$ by $\{\pm 1\}$. It is isomorphic to $Z_{G^{sc}}(\sigma|_{B^{\rs}})$, the simply connected centralizer of the Kostant section. On the other hand, in the previous subsection we have defined a group scheme $\mathscr{H}$, a subgroup of a Mumford theta group, which fits in the exact sequence of \'etale group schemes $$1\rightarrow \{\pm 1\} \rightarrow \mathscr{H} \rightarrow J[2] \rightarrow 1.$$ The following proposition is a central technical result of this paper. It lifts the isomorphism $\Lambda/2\Lambda \rightarrow J[2]$ obtained in \cite[Corollary 4.12]{Thorne-thesis} to an isomorphism between the nonabelian groups $\mathscr{U}$ and $\mathscr{H}$. \begin{proposition}\label{propostion: 2 central extensions coincide} There exists a unique isomorphism $\mathscr{U}\simeq\mathscr{H}$ of group schemes over $B^{\rs}$ that preserves the subgroup $\{\pm 1\}$ and such that the induced isomorphism $\Lambda/2\Lambda\simeq J[2]$ coincides with the one from Proposition \ref{proposition: bridge jacobians root lattices}. \end{proposition} Since $\mathscr{U}$ and $\mathscr{H}$ are finite \'etale, by \cite[Tag \href{https://stacks.math.columbia.edu/tag/0BQM}{0BQM}]{stacksproject} it suffices to prove the statements over the generic point $\eta$ of $B^{\rs}$. This we will achieve at the end of this section after some preparatory lemmas. Write $\mathcal{H}$ and $\mathcal{U}$ for the generic fibres of $\mathscr{H}$ and $\mathscr{U}$ respectively. We write $k$ for the function field of $\eta$ with separable closure $k^s$ and absolute Galois group $\Gamma_k$. We choose a geometric generic point $\bar{\eta}\colon \Spec k^s\rightarrow B^{\rs}$ over $\eta$. We first prove that such an isomorphism exists when we forget the $\Gamma_k$-action. 
\begin{lemma}\label{lemma: 2 central extensions are isomorphic a abstract groups} There is an isomorphism of groups $\mathcal{U}_{\bar{\eta}}\simeq \mathcal{H}_{\bar{\eta}}$ compatible with the central extensions. \end{lemma} \begin{proof} By \cite[Theorem 2.4.1]{Lurie-minisculereps}, central extensions of $\Lambda_{\bar{\eta}}/2\Lambda_{\bar{\eta}}$ by $\{\pm 1\}$ as abstract groups are classified by quadratic forms on $\Lambda_{\bar{\eta}}/2\Lambda_{\bar{\eta}}$. According to \cite[Proposition A.2]{thorne-planequarticsAIT}, the quadratic form corresponding to $\mathcal{U}_{\bar{\eta}}$ is given by the standard quadratic form $\Lambda_{\bar{\eta}}/2\Lambda_{\bar{\eta}} \rightarrow \{\pm 1\} \colon \lambda \mapsto (-1)^{(\lambda,\lambda)/2}$. By Proposition \ref{prop: quadratic forms identified} and Lemma \ref{lemma: quadratic form mumford theta group is in fact a quadratic form}, this coincides with the quadratic form corresponding to $\mathcal{H}_{\bar{\eta}}$ transported along the isomorphism $\Lambda_{\bar{\eta}}/2\Lambda_{\bar{\eta}} \simeq J_{\bar{\eta}}[2]$. \end{proof} For the rest of this section we fix abstract groups $\widetilde{\mathrm{V}}$ and $\mathrm{V}$ and a central extension \begin{displaymath} 1 \rightarrow \{\pm 1 \} \rightarrow \widetilde{\mathrm{V}} \rightarrow \mathrm{V} \rightarrow 1 \end{displaymath} that is isomorphic to the central extension $\mathcal{U}_{\bar{\eta}}$ of $\Lambda_{\bar{\eta}}/2\Lambda_{\bar{\eta}}$ by $\{\pm1\}$. (We hope that the group $\mathrm{V}$, which is only used in \S\ref{subsection: comparting two central extensions}, will not be confused with the representation $V$.) This extension comes with a quadratic form $q\colon \mathrm{V} \rightarrow \{ \pm 1 \}$ defined by $q(v) = \widetilde{v}^2$ where $\widetilde{v}$ is a lift of $v$ to $\widetilde{\mathrm{V}}$. It will be useful to give a presentation of the group $\widetilde{\mathrm{V}}$.
Let $e_1,\dots,e_6$ be a basis for the $\mathbb{F}_2$-vector space $\mathrm{V}$, which we assume satisfies $q(e_1) = -1$. If we choose a lift $\widetilde{e}_i \in \widetilde{\mathrm{V}}$ of $e_i$, a presentation for $\widetilde{\mathrm{V}}$ is given as follows: \begin{itemize} \item The generators are given by the symbols $\widetilde{e}_i$ for $i = 1,\dots,6$. \item The relations are given by (we set $-1 \coloneqq \widetilde{e}_1^2$): \begin{displaymath} \begin{cases} (-1)^2 = 1,\\ \widetilde{e}_i^2 = q(e_i),\\ [\widetilde{e}_i,-1] = 1,\\ [\widetilde{e}_i,\widetilde{e}_j]=q(e_i)q(e_j)q(e_i+e_j). \end{cases} \end{displaymath} \end{itemize} The proof of the following lemma is purely group-theoretic. \begin{lemma}\label{lemma: group theory with central extension and the free group} Let $F_6$ be the free group on six generators and $f\colon F_6 \rightarrow \mathrm{V} $ a surjective homomorphism. If $\widetilde{f},\widetilde{f}'$ are two surjective homomorphisms $F_6 \rightarrow \widetilde{\mathrm{V}}$ lifting $f$, then there exists a unique isomorphism $\phi \colon \widetilde{\mathrm{V}} \rightarrow \widetilde{\mathrm{V}}$ such that $\widetilde{f}' = \phi \widetilde{f}$. \end{lemma} \begin{proof} To prove the lemma it suffices to prove that $\ker \widetilde{f} = \ker \widetilde{f}'$. Since any two lifts of an element of $\mathrm{V}$ to an element of $\widetilde{\mathrm{V}}$ differ by an element of $\{ \pm 1\}$, there exists a function $\chi: F_6 \rightarrow \{\pm 1 \}$ such that $\widetilde{f}'(g) = \chi(g) \widetilde{f}(g)$ for all $g \in F_6$. Since the subgroup $\{\pm 1\}$ is central in $\widetilde{\mathrm{V}}$, $\chi$ is a homomorphism of groups. So it will be enough to show that $\ker(\chi \widetilde{f}) = \ker \widetilde{f}$ for every character $\chi: F_6 \rightarrow \{ \pm 1\}$, where now $\widetilde{f}$ is a preferred choice of lifting of $f$. We make this choice as follows. Choose generators $g_1,\dots,g_6$ of $F_6$ and let $e_i = f(g_i)$.
We may assume that $q(e_1) = -1$ (after replacing $g_1,\dots,g_6$ by another set of free generators of $F_6$ if necessary; this is possible since $q$ is not identically $1$). Choose an element $\widetilde{e}_i \in \widetilde{\mathrm{V}}$ lying above $e_i$. We define $\widetilde{f}\colon F_6 \rightarrow \widetilde{\mathrm{V}}$ by sending $g_i$ to $\widetilde{e}_i$. Then the presentation of $\widetilde{\mathrm{V}}$ given above implies that the kernel of $\widetilde{f}$ is generated by the following words: \begin{displaymath} \begin{cases} g_1^4,\\ g_i^2Q(g_i), \\ [g_i,g_1^2], \\ [g_i,g_j]Q(g_i)Q(g_j)Q(g_ig_j). \end{cases} \end{displaymath} Here we set $Q(g) \coloneqq g_1^2$ if $q(f(g)) = -1$ and $Q(g) \coloneqq 1$ otherwise. Since every such word has trivial image under every character $\chi: F_6 \rightarrow \{ \pm 1\}$, we see that the kernel of $\chi \widetilde{f}$ is generated by the same words. This concludes the proof of the lemma. \end{proof} We now investigate the structure of the \'etale fundamental group of the affine curve $C^{\circ}_{\eta} = C_{\eta} \setminus \{P_{\infty}\}$ where $P_{\infty}$ is the marked $k$-rational point at infinity. Choose an isomorphism between $k[[t]]$ and the completed local ring of $C_{\eta}$ at $P_{\infty}$, and write $\Spec k[[t]] \rightarrow C_{\eta}$ for the induced map on schemes. Let $y\colon \Spec k((t)) \rightarrow C^{\circ}_{\eta}$ be the restriction of this map to $C^{\circ}_{\eta}$. Let $\Omega$ be a separable closure of $k((t))$ and let $\bar{y}\colon\Spec \Omega \rightarrow C^{\circ}_{\eta}$ be the induced geometric point lying above $y$. The geometric point $\bar{y}$ will serve as our basepoint of $C^{\circ}_{\bar{\eta}}$, and is sometimes called a \emph{tangential basepoint}, following \cite[\S15]{Deligne-droiteprojective}. We write $\pi_1(C^{\circ}_{\bar{\eta}},\bar{y})$ for the \'etale fundamental group of $C^{\circ}_{\bar{\eta}}$ with respect to the geometric point $\bar{y}$. It is isomorphic to the profinite completion of the free group on six generators, and acquires a natural continuous $\Gamma_{k((t))}$-action since $\bar{y}$ comes from a $k((t))$-rational point.
The natural map $\Gamma_{k((t))} \rightarrow \Gamma_k$ has a splitting (since $\Omega = \cup_{n\geq 1} k^s((t^{1/n}))$ because $k$ has characteristic $0$), so the group $\pi_1(C^{\circ}_{\bar{\eta}},\bar{y})$ also has a continuous $\Gamma_k$-action. We will construct homomorphisms from $\pi_1(C^{\circ}_{\bar{\eta}},\bar{y})$ into various groups by considering torsors over $C^{\circ}_{\eta}$ under these groups. The following lemma, which follows from the definition of the \'etale fundamental group, explains how this works. \begin{lemma}\label{lemma: constructing torsors} Let $\mathcal{G}$ be a finite $k$-group equipped with the discrete topology. Let $T$ be a scheme over $k$ and $T\rightarrow C^{\circ}_{\eta}$ a right $\mathcal{G}$-torsor. Let $\bar{t}\colon \Spec\Omega \rightarrow T$ be a geometric point above $\bar{y}$. Then we can associate to this data a continuous homomorphism $\phi_{\bar{t}}\colon \pi_1(C^{\circ}_{\bar{\eta}},\bar{y}) \rightarrow \mathcal{G}_{k^s}$. It is surjective if and only if $T$ is geometrically connected. Let $\bar{t}'$ be another geometric point of $T$ above $\bar{y}$. Then $\bar{t}' = \bar{t}\cdot h$ for some $h \in \pi_1(C^{\circ}_{\bar{\eta}},\bar{y})$ and $\phi_{\bar{t}'}$ is given by the composition of $\phi_{\bar{t}}$ with conjugation by $\phi_{\bar{t}}(h)$. \end{lemma} Let $C'\rightarrow C_{\eta}$ be the $J_{\eta}[2]$-torsor given by pulling back the multiplication-by-$2$ map $J_{\eta} \xrightarrow{\times 2} J_{\eta}$ via the Abel--Jacobi map with respect to the point $P_{\infty}$. The origin of $J_{\eta}$ defines an obvious $k$-rational point of $C'$ above $P_{\infty}$. Define $T_1$ as the restriction of $C'$ to $C^{\circ}_{\eta}$. Then the $k((t))$-rational point $y\colon \Spec k((t)) \rightarrow C^{\circ}_{\eta}$ lifts to a $k((t))$-rational point $t_1\colon \Spec k((t)) \rightarrow T_1$.
Using Lemma \ref{lemma: constructing torsors} we obtain a continuous $\Gamma_k$-equivariant homomorphism $\pi_1(C^{\circ}_{\bar{\eta}},\bar{y}) \rightarrow J_{\eta}[2]$. On the other hand, we define the $\Lambda_{\eta}/2\Lambda_{\eta}$-torsor $T_2 \rightarrow C^{\circ}_{\eta}$ as follows: recall from \cite[Proposition 3.6]{Thorne-thesis} that $C^{\circ}_{\eta}$ can be realized as a closed subscheme of $V_{\eta}$. We know the action map $G_{\eta} \rightarrow V_{\eta} : g\mapsto g\cdot \sigma(\eta)$ is \'etale, and in fact a torsor under the group $Z_{G_{\eta}}(\sigma(\eta))$. Taking the pullback along $C^{\circ}_{\eta} \rightarrow V_{\eta}$ and transporting the torsor structure along the isomorphism $Z_{G_{\eta}}(\sigma(\eta)) \simeq \Lambda_{\eta}/2\Lambda_{\eta}$ defines a $\Lambda_{\eta}/2\Lambda_{\eta}$-torsor $T_2$ such that the following diagram is commutative. (This diagram already appears right above Theorem 4.2 in \cite{Thorne-thesis}.) \begin{center} \begin{tikzcd} T_2 \arrow[d] \arrow[r] & G_{\eta} \arrow[d] \\ C^{\circ}_{\eta} \arrow[r] & V_{\eta} \end{tikzcd} \end{center} In the proof of \cite[Theorem 4.15]{Thorne-thesis}, Thorne shows: \begin{lemma}\label{lemma: Thorne thesis torsor extends and is iso} The torsor $T_2 \rightarrow C^{\circ}_{\eta}$ extends to a $\Lambda_{\eta}/2\Lambda_{\eta}$-torsor $\widetilde{C} \rightarrow C_{\eta}$. Moreover, the pushout of the torsor $\widetilde{C}$ along the isomorphism $\Lambda_{\eta}/2\Lambda_{\eta} \simeq J_{\eta}[2]$ from Proposition \ref{proposition: bridge jacobians root lattices} is isomorphic to $C'\rightarrow C_{\eta}$. \end{lemma} So again there exists a point $t_2\colon \Spec k((t)) \rightarrow T_2$ lifting the point $y$, which we will fix. We obtain a continuous $\Gamma_k$-equivariant homomorphism $\pi_1(C^{\circ}_{\bar{\eta}},\bar{y}) \rightarrow \Lambda_{\eta}/2\Lambda_{\eta}$. \begin{lemma}\label{lemma: existence of non-abelian torsors} We have the following. 
\begin{enumerate} \item There exists an $\mathcal{H}$-torsor $\widetilde{T}_1 \rightarrow C^{\circ}_{\eta}$ which factors as $\widetilde{T}_1 \rightarrow T_1 \rightarrow C^{\circ}_{\eta}$. The $k$-scheme $\widetilde{T}_1 $ is geometrically connected. Moreover there exists a $\Gamma_k$-equivariant continuous homomorphism $\pi_1(C^{\circ}_{\bar{\eta}} , \bar{y}) \rightarrow \mathcal{H}$ lifting the morphism $\pi_1(C^{\circ}_{\bar{\eta}} , \bar{y}) \rightarrow J_{\eta}[2]$. \item There exists a $\mathcal{U}$-torsor $\widetilde{T}_2 \rightarrow C^{\circ}_{\eta}$ which factors as $\widetilde{T}_2 \rightarrow T_2 \rightarrow C^{\circ}_{\eta}$. The $k$-scheme $\widetilde{T}_2$ is geometrically connected. Moreover there exists a $\Gamma_k$-equivariant continuous homomorphism $\pi_1(C^{\circ}_{\bar{\eta}} , \bar{y}) \rightarrow \mathcal{U}$ lifting the morphism $\pi_1(C^{\circ}_{\bar{\eta}} , \bar{y}) \rightarrow \Lambda_{\eta}/2\Lambda_{\eta}$. \end{enumerate} \end{lemma} \begin{proof} For Part 1, recall from \S\ref{subsection: Mumford theta groups} that there exist $k$-schemes $\Gmtorsor{\sh{M}}$ and $\Gmtorsor{\sh{M}^2}$ together with an $\mathcal{H}$-torsor $p\colon \Gmtorsor{\sh{M}^2}\rightarrow \Gmtorsor{\sh{M}}$ and a commutative diagram \begin{center} \begin{tikzcd} \Gmtorsor{\sh{M}^2} \arrow[d] \arrow[r, "p"] & \Gmtorsor{\sh{M}} \arrow[d] \\ J_{\eta} \arrow[r, "\times 2"] & J_{\eta} \end{tikzcd} \end{center} Let $i\colon C^{\circ}_{\eta} \rightarrow J_{\eta}$ be the Abel--Jacobi map with respect to the point $P_{\infty}$. It follows from \cite[Exercise 10 of Chapter 11]{BirkenhakeLange-CAV} that the pullback of the line bundle $\sh{M}$ along $i$ is trivial. In other words, the map $i\colon C^{\circ}_{\eta} \rightarrow J_{\eta}$ lifts to a map $\widetilde{i}\colon C^{\circ}_{\eta} \rightarrow \Gmtorsor{\sh{M}}$.
Taking the pullback of the torsor $p$ along $\widetilde{i}$ defines an $\mathcal{H}$-torsor $\widetilde{T}_1 \rightarrow C^{\circ}_{\eta}$ which factors as $\widetilde{T}_1 \rightarrow T_1 \rightarrow C^{\circ}_{\eta}$ compatible with the torsor structures. Let $\bar{t}_1 \colon \Spec \Omega \rightarrow T_1$ be a geometric point above $t_1$. Choose a geometric point $\bar{t}_1'\colon \Spec \Omega \rightarrow \widetilde{T}_1 $ lying above $\bar{t}_1$. By Lemma \ref{lemma: constructing torsors} it determines a continuous homomorphism $\phi \colon \pi_1(C^{\circ}_{\bar{\eta}},\bar{y}) \rightarrow \mathcal{H}_{k^s}$, whose projection to $J_{\eta}[2]$ gives the previously constructed morphism $\pi_1(C^{\circ}_{\bar{\eta}},\bar{y})\rightarrow J_{\eta}[2]$. Changing $\bar{t}_1'$ means conjugating $\phi$ by an element of the form $\phi(h)$ where $h\in \pi_1(C^{\circ}_{\bar{\eta}},\bar{y})$ lies in the image of the map $\pi_1(\left(T_1\right)_{k^s},\bar{t}_1) \rightarrow \pi_1(C^{\circ}_{\bar{\eta}},\bar{y})$. But in that case $\phi(h) \in \{\pm 1\}$, so $\phi(h)$ lies in the centre of $\mathcal{H}_{k^s}$. We conclude that the homomorphism $\phi$ is independent of the choice of $\bar{t}_1'$, hence is $\Gamma_k$-equivariant. Moreover, the image of $\phi$ is a subgroup of $\mathcal{H}_{k^s}$ whose projection to $J_{\bar{\eta}}[2]$ is surjective. Since the $\{\pm 1\}$-extension $\mathcal{H}_{k^s} \rightarrow J_{\bar{\eta}}[2]$ is not split, this implies that $\phi$ itself must be surjective. Hence $\widetilde{T}_1$ is geometrically connected, which completes the proof of Part 1 of the lemma. For Part 2, we complete the diagram in the definition of $T_2$ to the following diagram \begin{center} \begin{tikzcd} \widetilde{T}_2 \arrow[d] \arrow[r] & G^{sc}_{\eta} \arrow[d] \\ T_2 \arrow[d] \arrow[r] & G_{\eta} \arrow[d] \\ C^{\circ}_{\eta} \arrow[r] & V_{\eta} \end{tikzcd} \end{center} where both squares are cartesian and $G^{sc}_{\eta} \rightarrow G_{\eta}$ is the natural projection.
Since $G^{sc}_{\eta} \rightarrow V_{\eta}$ is a $Z_{G^{sc}}(\sigma(\eta))$-torsor, the morphism $\widetilde{T}_2 \rightarrow C^{\circ}_{\eta}$ is a $\mathcal{U}$-torsor. An argument similar to that of Part 1 shows that this data defines a homomorphism $ \pi_1(C^{\circ}_{\bar{\eta}} , \bar{y}) \rightarrow \mathcal{U}_{k^s} $ which is independent of any choices, surjective and $\Gamma_k$-equivariant. \end{proof} We have completed all the preparations for the proof of Proposition \ref{propostion: 2 central extensions coincide}, which we give now. Ignoring the dotted arrow, Lemma \ref{lemma: existence of non-abelian torsors} implies the existence of the following diagram, commutative by Lemma \ref{lemma: Thorne thesis torsor extends and is iso}: \begin{center} \begin{tikzcd} {\pi_1(C^{\circ}_{\bar{\eta}},\bar{y})} \arrow[r] \arrow[rd] & \mathcal{H} \arrow[r] \arrow[d, "\Psi", dashed] & {J_{\eta}[2]} \arrow[d, "\simeq"] \\ & \mathcal{U} \arrow[r] & \Lambda_{\eta}/2\Lambda_{\eta} \end{tikzcd} \end{center} By Lemma \ref{lemma: 2 central extensions are isomorphic a abstract groups}, we are in the situation of Lemma \ref{lemma: group theory with central extension and the free group}, so there exists a unique isomorphism $\Psi\colon \mathcal{H}_{k^s} \rightarrow \mathcal{U}_{k^s}$ such that the above diagram with the dotted arrow added is commutative. (Lemma \ref{lemma: group theory with central extension and the free group} can be applied even with $F_6$ replaced by the profinite completion of the free group on six generators, since we are dealing with finite quotients here.) Since the maps $\pi_1(C^{\circ}_{\bar{\eta}},\bar{y}) \rightarrow \mathcal{H}$ and $\pi_1(C^{\circ}_{\bar{\eta}},\bar{y}) \rightarrow \mathcal{U}$ are $\Gamma_k$-equivariant, the uniqueness of $\Psi$ implies that $\Psi$ is $\Gamma_k$-equivariant as well. This proves that $\mathcal{U}$ and $\mathcal{H}$ are isomorphic.
To prove uniqueness, we note that the scheme of isomorphisms $\mathcal{U} \simeq \mathcal{H}$ compatible with the central extensions is a torsor under the group $\left(\Lambda_{\eta}/2\Lambda_{\eta}\right)^{\vee} \simeq \Lambda_{\eta}/2\Lambda_{\eta}$, by \cite[Lemma 2.4]{thorne-planequarticsAIT}. Since $\Lambda_{\eta}/2\Lambda_{\eta}$ does not have any non-identity $k$-rational points by Proposition \ref{proposition: monodromy of J[2]}, this completes the proof of Proposition \ref{propostion: 2 central extensions coincide}. \begin{corollary}\label{corollary: commutative diagram corollary central extensions} There is a commutative diagram of $B^{\rs}$-group schemes with exact rows \begin{center} \begin{tikzcd} 1 \arrow[r] & \{\pm1\} \arrow[r] & G_{B^{\rs}}^{sc} \arrow[r] & G_{B^{\rs}} \arrow[r] & 1 \\ 1 \arrow[r] & \{\pm1\} \arrow[r] \arrow[u, "="] & \mathscr{H} \arrow[r] \arrow[u] & {J[2]} \arrow[r] \arrow[u] & 1 \end{tikzcd} \end{center} enjoying the following properties: \begin{enumerate} \item The rightmost vertical arrow equals the composite of the isomorphism $J[2] \simeq Z_{G}(\sigma|_{B^{\rs}})$ from Proposition \ref{proposition: bridge jacobians root lattices} with the inclusion $Z_{G}(\sigma|_{B^{\rs}}) \hookrightarrow G_{B^{\rs}}$. \item The right-hand square is cartesian. \end{enumerate} \end{corollary} \subsection{Embedding the Selmer group}\label{subsection: twisting and embedding the selmer group} We start with a well-known lemma which provides the link between the rational orbits of our representations and \'etale cohomology. Its proof will be postponed to the proof of Proposition \ref{proposition: G-orbits in terms of groupoids} and is largely formal. The case of a field is treated in \cite[Proposition 1]{BhargavaGross-AIT} and the more general case is based on the same idea. Recall that for a $\mathbb{Q}$-algebra $R$ and an element $b\in B(R)$ we write $V_b$ for the pullback of the morphism $\pi\colon V \rightarrow B$ along $b$. 
\begin{lemma}\label{lemma: AIT} Let $R$ be a $\mathbb{Q}$-algebra. If $b\in B^{\rs}(R)$ then there is a canonical bijection of sets $$ G(R)\backslash V_b(R) \simeq \ker\left(\mathrm{H}^1(R,Z_{G}(\sigma(b)) ) \rightarrow \mathrm{H}^1(R,G)\right).$$ The distinguished orbit $G(R)\cdot \sigma(b)$ corresponds to the trivial element in $\mathrm{H}^1(R,Z_G(\sigma(b) ))$. \end{lemma} The bijection is given by sending the orbit $G(R)\cdot v$ to the isomorphism class of the $Z_{G}(\sigma(b))$-torsor $\{g\in G \mid g\cdot v = \sigma(b) \} \rightarrow \Spec R$. \begin{lemma}\label{lemma: H^1(R,Sp) is trivial} Let $R$ be a $\mathbb{Q}$-algebra such that every locally free $R$-module of constant rank is free. For each $n\geq 1$ write $\Sp_{2n}$ for the split symplectic group over $\mathbb{Q}$ of rank $n$. Then the pointed set $\mathrm{H}^1(R,\Sp_{2n})$ is trivial for all $n\geq 1$. In particular, the pointed set $\mathrm{H}^1(R,G^{sc})$ is trivial. \end{lemma} \begin{proof} Since $G\simeq \PSp_8$ we have $G^{sc}\simeq \Sp_8$ so it suffices to prove the first part. The set $\mathrm{H}^1(R,\Sp_{2n})$ is in canonical bijection with the set of isomorphism classes of pairs $(M,b)$, where $M$ is a projective $R$-module of rank $2n$ and $b: M\times M \rightarrow R$ is an alternating perfect pairing. Our assumptions imply that $M$ is free and the proof of \cite[Corollary 3.5]{Milnor-SymmetricBilinearForms} shows that any two alternating perfect pairings on $M$ are isomorphic. \end{proof} We now piece all the ingredients obtained so far together and deduce our first main result. \begin{theorem}\label{theorem: inject 2-descent into orbits} Let $R$ be a $\mathbb{Q}$-algebra such that every locally free $R$-module of constant rank is free and $b\in B^{\rs}(R)$. Then there is a canonical injection $\eta_b \colon J_b(R)/2J_b(R) \hookrightarrow G(R)\backslash V_b(R)$ compatible with base change on $R$. Moreover, the map $\eta_b$ sends the identity element to the orbit of $\sigma(b)$. 
\end{theorem} \begin{proof} By Corollary \ref{corollary: commutative diagram corollary central extensions}, we have a commutative diagram with exact rows of group schemes over $R$ (we continue to write $G$ and $G^{sc}$ for the base change of these $\mathbb{Q}$-groups to $R$): \begin{center} \begin{tikzcd} 1 \arrow[r] & \mu_2 \arrow[r] & G^{sc} \arrow[r] & G \arrow[r] & 1 \\ 1 \arrow[r] & \mu_2 \arrow[r] \arrow[u, "="'] & \mathscr{H}_b \arrow[r] \arrow[u] & J_b[2] \arrow[r] \arrow[u] & 1 \end{tikzcd} \end{center} Moreover by Lemma \ref{lemma: AIT}, the kernel of the map of pointed sets $\mathrm{H}^1(R,J_b[2]) \rightarrow \mathrm{H}^1(R,G)$ induced by the rightmost vertical map of the diagram is in canonical bijection with the set of $G(R)$-orbits in $V_b(R)$. Given $A\in J_b(R)$ we define $\eta_b(A) \in \mathrm{H}^1(R,J_b[2])$ as the image of $A$ under the $2$-descent map $J_b(R)/2J_b(R) \hookrightarrow \mathrm{H}^1(R,J_b[2])$, given by the isomorphism class of the $J_b[2]$-torsor $[2]^{-1}\left(A\right)$. To prove that $\eta_b(A)$ defines a $G(R)$-orbit in $V_b(R)$ we need to show that its class is killed under the map $\mathrm{H}^1(R,J_b[2]) \rightarrow \mathrm{H}^1(R,G)$. Using the triviality of $\mathrm{H}^1(R,G^{sc})$ by Lemma \ref{lemma: H^1(R,Sp) is trivial} and the above commutative diagram, it is enough to show that $\eta_b(A)$ lies in the image of the map $\mathrm{H}^1(R,\mathscr{H}_b) \rightarrow \mathrm{H}^1(R,J_b[2])$. Recall from \S\ref{subsection: Mumford theta groups} that we have a commutative diagram \begin{center} \begin{tikzcd} \Gmtorsor{\sh{M}^2_b} \arrow[d] \arrow[r, "p"] & \Gmtorsor{\sh{M}_b} \arrow[d] \\ J_b \arrow[r, "\times 2"] & J_b \end{tikzcd} \end{center} where the vertical arrows are $\mathbb{G}_m$-torsors and where $p$ is an $\mathscr{H}_b$-torsor. Since $\mathrm{H}^1(R,\mathbb{G}_m)$ is trivial, the point $A\in J_b(R)$ lifts to a point $\widetilde{A} \in \Gmtorsor{\sh{M}_b}(R)$. 
Then the fibre of $p$ above $\widetilde{A}$ will be an $\mathscr{H}_b$-torsor lifting $\eta_b(A)$. This concludes the first part of the theorem. The definition of $\eta_b$ shows that it sends the identity element of $J_b(R)/2J_b(R)$ to the identity element of $\mathrm{H}^1(R,J_b[2])$. By Lemma \ref{lemma: AIT} this corresponds to the orbit of $\sigma(b)$, proving the second part of the theorem. \end{proof} \begin{corollary}\label{corollary: Sel2 embeds} Let $b\in B^{\rs}(\mathbb{Q})$ and write $\Sel_2 J_b$ for the $2$-Selmer group of $J_b$ over $\mathbb{Q}$. Then the injection $J_b(\mathbb{Q})/2J_b(\mathbb{Q}) \hookrightarrow G(\mathbb{Q})\backslash V_b(\mathbb{Q})$ of Theorem \ref{theorem: inject 2-descent into orbits} extends to an injection $$\Sel_2 J_b \hookrightarrow G(\mathbb{Q})\backslash V_b(\mathbb{Q}).$$ \end{corollary} \begin{proof} We have a commutative diagram for every place $v$: \begin{center} \begin{tikzcd} {J_b(\mathbb{Q})/2J_b(\mathbb{Q})} \arrow[d] \arrow[r, "\delta"] & {\mathrm{H}^1(\mathbb{Q},J_b[2])} \arrow[r] \arrow[d] & {\mathrm{H}^1(\mathbb{Q},G)} \arrow[d] \\ {J_b(\mathbb{Q}_v)/2J_b(\mathbb{Q}_v)} \arrow[r, "\delta_v"] & {\mathrm{H}^1(\mathbb{Q}_v,J_b[2])} \arrow[r] & {\mathrm{H}^1(\mathbb{Q}_v,G)} \end{tikzcd} \end{center} To prove the corollary it suffices to prove that $2$-Selmer elements in $\mathrm{H}^1(\mathbb{Q},J_b[2])$ are killed under the natural map $\mathrm{H}^1(\mathbb{Q},J_b[2]) \rightarrow \mathrm{H}^1(\mathbb{Q},G)$. By definition, an element of $\Sel_2 J_b$ consists of a class in $\mathrm{H}^1(\mathbb{Q},J_b[2])$ whose restriction to $\mathrm{H}^1(\mathbb{Q}_v,J_b[2])$ lies in the image of $\delta_v$ for every place $v$. So by Theorem \ref{theorem: inject 2-descent into orbits} the image of such an element in $\mathrm{H}^1(\mathbb{Q}_v,G)$ is trivial for every $v$. 
Since the restriction map $\mathrm{H}^2(\mathbb{Q},\mu_2) \rightarrow \prod_{v} \mathrm{H}^2(\mathbb{Q}_v,\mu_2)$ has trivial kernel by the Hasse principle for the Brauer group, the kernel of $\mathrm{H}^1(\mathbb{Q},G) \rightarrow \prod_{v} \mathrm{H}^1(\mathbb{Q}_v,G)$ is trivial too: a locally trivial class in $\mathrm{H}^1(\mathbb{Q},G)$ has locally trivial, hence trivial, image under the connecting map $\mathrm{H}^1(\mathbb{Q},G) \rightarrow \mathrm{H}^2(\mathbb{Q},\mu_2)$, so it lies in the image of $\mathrm{H}^1(\mathbb{Q},G^{sc})$, which is trivial by Lemma \ref{lemma: H^1(R,Sp) is trivial}. The result follows. \end{proof} \section{Integral orbit representatives}\label{section: integral representatives} In this section, we introduce integral structures for the pair $(G,V)$ and prove that for large primes $p$, the image of the map from Theorem \ref{theorem: inject 2-descent into orbits} applied to $R = \mathbb{Q}_p$ lands in the orbits which admit a representative in $\mathbb{Z}_p$. See Theorem \ref{theorem: integral representatives exist} for a precise statement. In \S\ref{subsection: integral structures}, we extend our constructions over $\mathbb{Z}[1/N]$ for some sufficiently large integer $N$. In \S\ref{subsection: some groupoids} and \S\ref{subsection: compactifications} we introduce the necessary technical background for the proof of Theorem \ref{theorem: integral representatives exist}. In \S\ref{subsection: case of square-free discriminant} we prove the case of square-free discriminant. In \S\ref{subsection: proof of theorem integral representatives} we combine all the above ingredients to prove Theorem \ref{theorem: integral representatives exist} in full generality. Finally in \S\ref{subsection: integrality, a global corollary}, we deduce an integrality result for orbits over $\mathbb{Q}$ (as opposed to orbits over $\mathbb{Q}_p$). \subsection{Integral structures}\label{subsection: integral structures} The pair $(G,V)$ naturally extends to a pair $(\underline{G},\underline{V})$ over $\mathbb{Z}$ with similar properties.
Indeed, our choice of pinning of $H$ in \S\ref{subsection: a stable grading} determines a Chevalley basis of $\lieh$, hence a $\mathbb{Z}$-form $\underline{\mathfrak{h}}$ of $\lieh$ (in the sense of \cite{Borel-propertieschevalley}) with adjoint group $\underline{H}$, a split semisimple group of type $E_6$ over $\mathbb{Z}$. The $\mathbb{Z}$-lattice $\underline{V} = V\cap \underline{\mathfrak{h}}$ is admissible; define $\underline{G}$ as the Zariski closure of $G$ in $\GL(\underline{V})$. The $\mathbb{Z}$-group scheme $\underline{G}$ has generic fibre $G$ and acts faithfully on the free $\mathbb{Z}$-module $\underline{V}$ of rank $42$. The automorphism $\theta\colon H \rightarrow H$ of \S\ref{subsection: a stable grading} extends by the same formula to an automorphism $\underline{H}\rightarrow \underline{H}$, still denoted by $\theta$. We have $\underline{H}^{\theta}_{\mathbb{Z}[1/2]}=\underline{G}_{\mathbb{Z}[1/2]}$ and $\underline{G}_{\mathbb{Z}[1/2]}$ is a split reductive group of type $C_4$ over $\mathbb{Z}[1/2]$. The main properties and constructions obtained so far extend over $\mathbb{Z}[1/N]$ for some sufficiently large integer $N$, as we will now explain. After rescaling the polynomials $p_2,\dots,p_{12}\in \mathbb{Q}[V]^{G}$ fixed in \S\ref{subsection: a family of curves} using the $\mathbb{G}_m$-action on $V$ we can assume they lie in $\mathbb{Z}[\underline{V}]^{\underline{G}}$. Write $\underline{B} \coloneqq \Spec\mathbb{Z}[p_2,\dots,p_{12}]$ and write $\pi\colon \underline{V} \rightarrow \underline{B}$ for the corresponding morphism which extends the morphism $V\rightarrow B$ on $\mathbb{Q}$-fibres, already denoted by $\pi$. Recall that $\Delta \in \mathbb{Q}[V]^{G}$ is the Lie algebra discriminant of $\lieh$, a $G$-invariant polynomial of degree $72$. We can assume, again after suitably rescaling $p_2,\dots,p_{12}$ using the $\mathbb{G}_m$-action, that $\Delta\in \mathbb{Z}[V]^{G}$.
We define $\underline{B}^{\rs} \coloneqq \Spec \mathbb{Z}[p_2,\dots,p_{12}][\Delta^{-1}]$. We extend the family of curves given by Equation (\ref{equation : E6 family middle of paper}) to the family $\mathcal{C} \rightarrow \underline{B}$ given by that same equation. Let us call a positive integer $N$ \define{good} if the following properties are satisfied (set $S \coloneqq \mathbb{Z}[1/N]$): \begin{enumerate} \item Each prime dividing the order of the Weyl group (so an element of $\{2,3,5\}$) is a unit in $S$. \item The discriminant locus $\{\Delta = 0\}_S \rightarrow \Spec S$ has geometrically integral fibres. Moreover $\Delta$ and $\Delta_0$ (which by formula (\ref{equation: definition discriminant Delta0}) defines an element of $\mathbb{Z}[\underline{B}]$) are equal up to a unit in $\mathbb{Z}[1/N]$. \item The morphism $\mathcal{C}_S\rightarrow \underline{B}_S$ is flat and proper with geometrically integral fibres. It is smooth exactly above $\underline{B}_S^{\rs}$. \item $S[\underline{V}]^{\underline{G}} = S[p_2,p_5,p_6,p_8,p_9,p_{12}]$. The Kostant section extends to a section $\sigma\colon \underline{B}_S \rightarrow \underline{V}^{\reg}$ of $\pi$ satisfying the following property: for any $b\in \underline{B}(\mathbb{Z}) \subset \underline{B}_S(S)$, we have $\sigma(N\cdot b) \in \underline{V}(\mathbb{Z})$. \item There exist open subschemes $\underline{V}^{\rs} \subset \underline{V}^{\reg} \subset \underline{V}_S$ such that if $S\rightarrow k$ is a map to a field and $v\in \underline{V}(k)$, then $v$ is regular if and only if $v\in \underline{V}^{\reg}(k)$, and $v$ is regular semisimple if and only if $v\in \underline{V}^{\rs}(k)$. Moreover, $\underline{V}^{\rs}$ is the open subscheme defined by the nonvanishing of the discriminant polynomial $\Delta$ in $\underline{V}_S$. \item The action map $\underline{G}_S \times \underline{B}_S \rightarrow \underline{V}^{\reg}, (g,b) \mapsto g\cdot \sigma(b)$ is \'etale and its image contains $\underline{V}^{\rs}$. 
\item Let $\mathcal{J} \rightarrow \underline{B}_S^{\rs}$ denote the relative Jacobian of $\mathcal{C}_S \rightarrow \underline{B}_S^{\rs}$. Then there is an isomorphism $\mathcal{J}[2] \simeq \Lambda/2\Lambda$ of \'etale sheaves on $\underline{B}_S^{\rs}$ whose restriction to $B^{\rs}$ is the isomorphism of Proposition \ref{proposition: bridge jacobians root lattices}. It intertwines the natural pairings on both sides. \item The group schemes $\mathscr{H}$ and $\mathscr{U}$ over $B^{\rs}$ have natural extensions to finite \'etale group schemes over $\underline{B}^{\rs}_S$ and there exists an isomorphism between the two extending the isomorphism from Proposition \ref{propostion: 2 central extensions coincide}. \end{enumerate} \begin{proposition} There exists a good integer $N$. \end{proposition} \begin{proof} This follows from the principle of spreading out. It suffices to consider each property in the above list separately. As an example we will treat Properties $3$, $4$ and $5$ in more detail, leaving the others to the reader. For Property $3$, we first choose an $N$ such that $\mathcal{C}_S \rightarrow \underline{B}_S$ is flat and proper. By \cite[Théorème 12.2.1(x)]{EGAIV-3} the locus where the fibres are geometrically integral is an open subscheme of $\underline{B}_S$. Since the fibres of $C\rightarrow B$ are geometrically integral (use the contracting $\mathbb{G}_m$-action and the geometric integrality of the central fibre), this subscheme equals $B$ over $\mathbb{Q}$. By spreading out, we can enlarge $N$ so that this subscheme is the whole of $\underline{B}_S$. Moreover the locus of $\underline{B}_S$ above which the morphism $\mathcal{C}_S\rightarrow \underline{B}_S$ is smooth is an open subscheme which coincides with the open subscheme $\underline{B}_S^{\rs}$ after base change to $\mathbb{Q}$ by Part 2 of Proposition \ref{proposition: bridge jacobians root lattices}. 
Again by spreading out, we can enlarge $N$ such that these two open subschemes coincide over $S$. For Property $4$, note that $\mathbb{Z}[1/2][\underline{V}]^{\underline{G}}$ is a finitely generated $\mathbb{Z}[1/2]$-algebra by \cite[Theorem 2]{Seshadri-GeometricReductivityArbitaryBase} and the fact that $\underline{G}$ is reductive over $\mathbb{Z}[1/2]$. Moreover it contains the subring $\mathbb{Z}[1/2][p_2,\dots,p_{12}]$. Since this inclusion of finitely generated $\mathbb{Z}[1/2]$-algebras is an equality after tensoring with $\mathbb{Q}$, the same holds after tensoring with $\mathbb{Z}[1/N]$ for some even $N$. The claim about the Kostant section follows from considering the denominators of the morphism $\sigma\colon B \rightarrow V$ and spreading out. Finally we consider Property $5$. We will construct open subschemes $\underline{\mathfrak{h}}^{\rs}_S\subset \underline{\mathfrak{h}}^{\reg}_S\subset \underline{\mathfrak{h}}_S$ with similar properties; the subschemes $\underline{V}^{\rs}\subset \underline{V}^{\reg}\subset \underline{V}_S$ will be obtained by restricting them to $\underline{V}_S$. Let $Z\rightarrow \underline{\mathfrak{h}}$ be the universal centralizer of the adjoint action of $\underline{H}$ on $\underline{\mathfrak{h}}$, so $Z = Z_{\underline{H}}(\Id_{\underline{\mathfrak{h}}})$. If $k$ is any field and $x\in \underline{\mathfrak{h}}(k)$ then by definition $x$ is regular if and only if the dimension of $Z_x$ equals $\rk H =6$. By \cite[Théorème 13.1.3]{EGAIV-3} and the fact that the dimension of a group scheme can be computed at the identity, the function $x\mapsto \dim Z_x$ is upper-semicontinuous on $\underline{\mathfrak{h}}$. So the locus $\underline{\mathfrak{h}}^{\reg}$ where the fibre has dimension $6$ is an open subscheme of $\underline{\mathfrak{h}}$. Let $Z^{\reg} \rightarrow \underline{\mathfrak{h}}^{\reg}$ be the restriction of $Z$ to $\underline{\mathfrak{h}}^{\reg}$. 
By \cite[Remark 4.4.2]{Riche-KostantSectionUniversalCentralizer}, the morphism $Z^{\reg}_S \rightarrow \underline{\mathfrak{h}}^{\reg}_S$ is smooth for some $N$. In that case the locus $\underline{\mathfrak{h}}^{\rs}_S$ where the fibres are tori is an open subscheme of $\underline{\mathfrak{h}}^{\reg}_S$ \cite[Exposé X; Corollaire 4.9]{SGA3-TomeII}, as required. The statement about the discriminant locus follows from spreading out. \end{proof} We henceforth fix a good integer $N$ for the remainder of this paper. We can then extend our previous results to $S$-algebras rather than $\mathbb{Q}$-algebras. We mention in particular: \begin{proposition}\label{proposition: inject 2-descent orbits spreading out} Let $R$ be an $S$-algebra and $b\in \underline{B}^{\rs}(R)$. Suppose that every locally free $R$-module of constant rank is free. Then there is an injective map $$\eta_b \colon \mathcal{J}_b(R)/2\mathcal{J}_b(R) \rightarrow \underline{G}(R)\backslash \underline{V}_b(R)$$ which is compatible with base change on $R$. Moreover it sends the identity element of $\mathcal{J}_b(R)/2\mathcal{J}_b(R)$ to the orbit of $\sigma(b)$. \end{proposition} We are now ready to state the main theorem of this section whose proof will be given at the end of \S\ref{subsection: proof of theorem integral representatives}. Write $\sh{E}_p$ for the set of all $b\in \underline{B}(\mathbb{Z}_p)$ which lie in $B^{\rs}(\mathbb{Q}_p)$. It consists of those $b\in \underline{B}(\mathbb{Z}_p)$ with nonzero discriminant. \begin{theorem}\label{theorem: integral representatives exist} Let $p$ be a prime not dividing $N$. Then for any $b\in \sh{E}_p$ the image of the map $$J_b(\mathbb{Q}_p)/2J_b(\mathbb{Q}_p) \rightarrow G(\mathbb{Q}_p)\backslash V_b(\mathbb{Q}_p)$$ from Theorem \ref{theorem: inject 2-descent into orbits} is contained in the image of the map $\underline{V}(\mathbb{Z}_p) \rightarrow G(\mathbb{Q}_p)\backslash V(\mathbb{Q}_p)$. 
\end{theorem} \subsection{Some groupoids}\label{subsection: some groupoids} In this section we follow \cite[\S4.3]{Thorne-Romano-E8} and define some groupoids which will be a convenient way to think about orbits in our representation and a crucial ingredient for the proof of Theorem \ref{theorem: integral representatives exist}. Throughout this section we fix a scheme $X$ over $S = \mathbb{Z}[1/N]$. Before we define the groupoids we need to define the outer isomorphism scheme, a technical complication which arises because the group $\underline{H}$ has outer automorphisms. Let $H', H''$ be reductive group schemes over $X$ whose geometric fibres are adjoint semisimple of Dynkin type $E_6$. (See \cite[Definition 3.1.1]{Conrad-reductivegroupschemes} for the definition of a reductive group scheme over a general base.) Since $H'$ is \'etale locally isomorphic to $H''$, the scheme of isomorphisms of reductive $X$-groups $\Isom_X(H',H'')$ is an $\Aut_X(H')$-torsor. Define $\Out_X(H',H'')$ as the push-out of this torsor under the map $\Aut_X(H')\rightarrow \Out_X(H')$. It is the quotient of $\Isom_X(H',H'')$ by the action of $H'$. Since $\Out_X(H')$ is a finite \'etale group scheme of order $2$ \cite[Theorem 7.1.9(2)]{Conrad-reductivegroupschemes}, the $X$-scheme $\Out_X(H',H'')$ is finite \'etale of order $2$ as well. We define the groupoid $\GrLie_X$ whose objects are triples $(H',\chi',\theta')$ where \begin{itemize} \item $H'$ is a reductive group scheme over $X$ whose geometric fibres are adjoint semisimple of Dynkin type $E_6$. \item $\chi'$ is a section of the $X$-scheme $\Out_X(\underline{H}_X,H')$. \item $\theta'\colon H' \rightarrow H'$ is an involution of reductive $X$-group schemes such that for each geometric point $\bar{x}$ of $X$ there exists a maximal torus $A_{\bar{x}}$ of $H'_{\bar{x}}$ such that $\theta'$ acts as $-1$ on $X^*(A_{\bar{x}})$. 
\end{itemize} A morphism $(H',\chi',\theta') \rightarrow (H'',\chi'',\theta'')$ in $\GrLie_X$ is given by an isomorphism $\phi \colon H'\rightarrow H''$ such that $\phi\circ \chi'=\chi''$ and $\phi \circ \theta' = \theta'' \circ \phi$. The triple $(\underline{H}_S,[\Id_{\underline{H}_S}],\theta_S)$ of \S\ref{subsection: integral structures} defines an object of $\GrLie_S$ by \cite[Corollary 14]{GrossLevyReederYu-GradingsPosRank}. We note that there is a natural notion of base change and the groupoids $\GrLie_X$ form a stack over the category of schemes over $S$ in the \'etale topology. \begin{proposition}\label{proposition: G-torsors in terms of groupoids} Let $X$ be an $S$-scheme. The assignment $(H',\chi',\theta')\mapsto \Isom((\underline{H}_X,[\Id],\theta_X),(H',\chi',\theta'))$ defines a bijection between: \begin{itemize} \item The isomorphism classes of objects in $\GrLie_X$. \item The set $\mathrm{H}^1(X,\underline{G})$. \end{itemize} \end{proposition} \begin{proof} We first prove that every two triples $(H',\chi',\theta'), (H'',\chi'',\theta'')$ in $\GrLie_X$ are \'etale locally isomorphic. The proof of this fact given below is very similar to the proof of \cite[Lemma 2.3]{Thorne-Romano-E8}; we reproduce it here for convenience. The question being \'etale local on $X$, we may assume that $H' = H''$ and $\chi'=\chi''$. Let $T$ denote the $X$-scheme of elements $h\in H'$ such that $\Ad(h)\circ \theta' = \theta''$; it is a closed subscheme of $H'$ that is $X$-smooth by \cite[Proposition 2.1.2]{Conrad-reductivegroupschemes}. Since smooth surjective morphisms have sections \'etale locally, it suffices to prove that $T\rightarrow X$ is surjective. Since the construction of $T$ is compatible with base change we may assume that $X=\Spec k$ where $k$ is an algebraically closed field. By assumption, there exist maximal tori $A',A''\subset H'$ on which $\theta', \theta''\in H'(k)$ act through $-1$. 
Using the conjugacy of maximal tori, we may assume that $A'=A''$, so $\theta' = a\cdot \theta''$ for some $a\in A'(k)$. Writing $a=b^2$ for some $b\in A'(k)$ (possible since $k$ is algebraically closed), we see that $\theta' = b\cdot b\cdot \theta'' = b\cdot \theta'' \cdot b^{-1}$. Therefore $\theta'$ is $H'(k)$-conjugate (even $A'(k)$-conjugate) to $\theta''$, as desired. We now claim that $\Aut((\underline{H}_S,[\Id],\theta_S)) = \underline{G}_S$, which would prove the proposition by \'etale descent. Indeed, $\Aut((\underline{H}_S,[\Id],\theta_S))$ consists of inner automorphisms of $\underline{H}_S$ commuting with $\theta$. Since $\underline{H}_S$ is adjoint, these are precisely the elements of $\underline{G}_S$. \end{proof} We define the groupoid $\GrLieE_X$ whose objects are $4$-tuples $(H',\chi',\theta',\gamma')$ where $(H',\chi',\theta')$ is an object of $\GrLie_X$ and $\gamma'\in \lieh'$ (the Lie algebra of $H'$) satisfying $\theta'(\gamma') = -\gamma'$. A morphism $(H',\chi',\theta',\gamma')\rightarrow (H'',\chi'',\theta'',\gamma'')$ in $\GrLieE_X$ is given by an isomorphism $\phi \colon H' \rightarrow H''$ defining a morphism in $\GrLie_X$ and mapping $\gamma'$ to $\gamma''$. We define a map $\GrLieE_X \rightarrow \underline{B}(X)$ (where $\underline{B}(X)$ is seen as a discrete category) as follows. For an object $(H',\chi',\theta',\gamma')$ in $\GrLieE_X$, choose a faithfully flat extension $X'\rightarrow X$ such that there exists an isomorphism $\phi\colon (H',\chi',\theta')_{X'} \rightarrow (\underline{H}_X,[\Id],\theta_X)_{X'}$ in $\GrLie_{X'}$. We define the image of the object $(H',\chi',\theta',\gamma')$ under the map $\GrLieE_X \rightarrow \underline{B}(X)$ to be $\pi(\phi(\gamma'))$. This procedure is independent of the choice of $\phi$ and $X'$ and by flat descent defines an element of $\underline{B}(X)$. For $b\in \underline{B}(X)$ we write $\GrLieE_{X,b}$ for the full subcategory of objects of $\GrLieE_{X}$ mapping to $b$ under this map. 
Recall that for $b\in \underline{B}(X)$, $\underline{V}_b$ denotes the fibre of the map $\pi\colon \underline{V} \rightarrow \underline{B}$ above $b$. \begin{proposition}\label{proposition: H1 of stabilizer and GrLieE} Let $X$ be an $S$-scheme and let $b \in \underline{B}_S(X)$. Then the assignment $$\mathcal{A} \mapsto \Isom((\underline{H}_X,[\Id],\theta_X,\sigma(b)),\mathcal{A})$$ defines a bijection between: \begin{itemize} \item Isomorphism classes of objects in $\GrLieE_{X,b}$ that are \'etale locally isomorphic to $(\underline{H}_X,[\Id],\theta_X,\sigma(b))$. \item The set $\mathrm{H}^1(X,Z_{\underline{G}_S}(\sigma(b)))$. \end{itemize} If $b \in \underline{B}_S^{\rs}(X)$, then every object of $\GrLieE_{X,b}$ is \'etale locally isomorphic to $(\underline{H}_X,[\Id],\theta_X,\sigma(b))$. \end{proposition} \begin{proof} Since $\Aut((\underline{H}_S,[\Id],\theta_S,\sigma(b))) = Z_{\underline{G}_S}(\sigma(b))$, the first statement follows from \'etale descent. For the second statement, it suffices to prove that every object $(H',\chi',\theta',\gamma')$ is \'etale locally isomorphic to $(\underline{H}_X,[\Id],\theta_X,\sigma(b))$ when $b\in \underline{B}_S^{\rs}(X)$. We may reduce to the case that $(H',\chi',\theta')=(\underline{H}_X,[\Id],\theta_X)$ by Proposition \ref{proposition: G-torsors in terms of groupoids}. By Property 6 of \S\ref{subsection: integral structures} (which is a spreading out of Proposition \ref{proposition: Kostant section E6} over $S$), the action map $\underline{G}_X \times \underline{B}^{\rs}_X \rightarrow \underline{V}^{\rs}_X$ is \'etale and surjective. Therefore it has sections \'etale locally, hence $\gamma'$ is \'etale locally $\underline{G}$-conjugate to $\sigma(b)$. \end{proof} The following proposition gives an interpretation of the (not necessarily regular semisimple) $G$-orbits of $V$ in terms of the groupoids $\GrLieE_X$ and $\GrLie_X$. 
\begin{proposition}\label{proposition: G-orbits in terms of groupoids} Let $X$ be an $S$-scheme and let $b\in \underline{B}(X)$. The following sets are in canonical bijection: \begin{itemize} \item The set of $\underline{G}(X)$-orbits on $\underline{V}_{b}(X)$. \item Isomorphism classes of objects $(H',\chi',\theta',\gamma')$ in $\GrLieE_{X,b}$ such that $(H',\chi',\theta') \simeq (\underline{H}_S,\chi,\theta_S)_X$ in $\GrLie_X$. \end{itemize} Consequently if $b\in \underline{B}_S^{\rs}(X)$, then the following sets are in canonical bijection: \begin{itemize} \item The set of $\underline{G}(X)$-orbits on $\underline{V}_{b}(X)$. \item The kernel of the map $\mathrm{H}^1(X,Z_{\underline{G}_S}(\sigma(b)))\rightarrow \mathrm{H}^1(X,\underline{G})$. \end{itemize} \end{proposition} \begin{proof} For the first part, we construct an explicit bijection between these two sets. If $v\in \underline{V}_{b}(X)$ is a representative of a $\underline{G}(X)$-orbit, we associate to $v$ the object $ (\underline{H}_X,[\Id],\theta_X,v)$ of $\GrLieE_{X,b}$. Changing $v$ by a $\underline{G}(X)$-conjugate does not change the isomorphism class of this object, so this association is well-defined. Conversely, if $(H',\chi',\theta',\gamma')$ is an object of $\GrLieE_{X,b}$ and $\phi \colon (H',\chi',\theta') \rightarrow (\underline{H}_S,[\Id],\theta_S)_X$ an isomorphism in $\GrLie_X$, we associate to it the element $v=\phi(\gamma') \in \underline{V}_{b}(X)$. Changing the isomorphism $\phi$ does not change the $\underline{G}(X)$-conjugacy class of $v$. The second part follows from combining the first part with Propositions \ref{proposition: G-torsors in terms of groupoids} and \ref{proposition: H1 of stabilizer and GrLieE}. \end{proof} \begin{remark} The groupoids $\GrLie_X$ and $\GrLieE_X$ for varying $X$ are stacks in the \'etale topology over $S$, and one can show that $\GrLie \simeq \left[ S/\underline{G}_S \right] $ and $\GrLieE \simeq \left[ \underline{V}_S/\underline{G}_S \right]$. 
We will not need these facts in what follows. \end{remark} \subsection{The compactified Jacobian}\label{subsection: compactifications} Recall that $\mathcal{J}\rightarrow \underline{B}_S^{\rs}$ denotes the relative Jacobian of the family of smooth curves $\mathcal{C}^{\rs}_S \rightarrow \underline{B}_S^{\rs}$. The morphism $\mathcal{J} \rightarrow \underline{B}_S^{\rs}$ is proper and smooth. In this section we introduce a compactification of this abelian scheme over $\underline{B}_S$. The reader not interested in the details of the construction may simply take on faith its properties, which are summarized in Corollary \ref{corollary: good compactifications exist}. We start with some generalities on torsion-free rank $1$ sheaves. By a \define{curve} over a field $k$ we mean a finite type scheme over $k$ such that every irreducible component has dimension $1$. \begin{definition} Let $X$ be an integral projective curve over an algebraically closed field $k$. We say a coherent sheaf $I$ on $X$ is \define{torsion-free rank $1$} if it satisfies the following two conditions: \begin{enumerate} \item For each $p \in X$ the $\O_{X,p}$-module $I_p$ is torsion-free. \item If $\eta \in X$ is the generic point then we have an isomorphism $I_{\eta} \simeq \O_{X,\eta}$ of $\O_{X,\eta}$-modules. \end{enumerate} \end{definition} If $X$ is smooth then every torsion-free rank $1$ sheaf is invertible, but for non-smooth $X$ this need not be the case. For example, if $X$ is the projective closure of the plane curve $(y^2 = x^3)$ then the ideal sheaf of the origin is a torsion-free rank $1$ sheaf which is not invertible. The above definition can be generalized to a family of curves. \begin{definition} Let $\mathcal{X} \rightarrow T$ be a flat projective morphism whose geometric fibres are integral curves. 
A locally finitely presented $\O_{\mathcal{X}}$-module $I$ is \define{$T$-relatively torsion-free rank $1$} if the following conditions are satisfied: \begin{enumerate} \item The sheaf $I$ is flat over $T$. \item For every geometric point $t$ of $T$ the sheaf $I_t$ is torsion-free rank $1$ on $\mathcal{X}_t$. \end{enumerate} \end{definition} We apply the above definitions to our situation of interest. The morphism $\mathcal{C}_S \rightarrow \underline{B}_S$ is flat, projective and its geometric fibres are integral curves. The Euler characteristic of the structure sheaf of the geometric fibres is constant, equal to $1-3 = -2$. The point at infinity defines a section $P_{\infty}\colon \underline{B}_S \rightarrow \mathcal{C}_S$ whose image lands in the smooth locus of the morphism. Let $F$ be the functor sending a $\underline{B}_S$-scheme $T$ to the set \begin{align*} \left\{(I,\phi) \mid I \text{ is } T\text{-relatively torsion-free rank }1 \text{ on } \mathcal{C}_T \rightarrow T ,\, \phi\colon \left(P_{\infty,T}\right)^*I\simeq \O_T \right\} /\simeq . \end{align*} Here we require $\phi$ to be an isomorphism of $\O_T$-modules, and we say two pairs $(I,\phi)$ and $(I',\phi')$ are isomorphic if there exists an isomorphism of $\O_{\mathcal{C}_T}$-modules $I\simeq I'$ identifying $\phi$ with $\phi'$. Let $F^0$ be the subfunctor of $F$ consisting of those torsion-free rank $1$ sheaves with Euler-characteristic $-2$ in each fibre. Altman and Kleiman \cite[Theorem 8.1]{AltmanKleiman-CompactifyingThePicardScheme} have shown that $F^0$ is representable. \begin{definition} We call the scheme $\bar{\mathcal{J}} \rightarrow \underline{B}_S$ representing the functor $F^0$ the \define{compactified Jacobian} of the family $\mathcal{C}_S \rightarrow \underline{B}_S$. 
\end{definition} By \cite[Theorem 8.5]{AltmanKleiman-CompactifyingThePicardScheme} the morphism $\bar{\mathcal{J}} \rightarrow \underline{B}_S$ is projective\footnote{There are several nonequivalent definitions of a projective morphism but in this case they all agree, see \cite[Tag \href{https://stacks.math.columbia.edu/tag/0B45}{0B45}]{stacksproject}.}. Moreover since every torsion-free rank $1$ sheaf on a smooth curve is invertible, the restriction of $\bar{\mathcal{J}}$ to $\underline{B}_S^{\rs}$ is isomorphic to $\mathcal{J}$. The fibres of $\mathcal{C}_S \rightarrow \underline{B}_S$ have only planar singularities; we may therefore appeal to \cite[Theorem 9]{AltmanKleimanSteven-IrreducibilityCompactifiedJacobian} to obtain the following good properties of $\bar{\mathcal{J}}$: \begin{proposition}\label{proposition: compactified Jacobian props AIK} The morphism $\bar{\mathcal{J}} \rightarrow \underline{B}_S$ is flat and its geometric fibres are integral of dimension $3$. \end{proposition} The crucial additional property of $\bar{\mathcal{J}}$, which follows from the fact that $C\rightarrow B$ is a semi-universal deformation of its central fibre, is the following. \begin{proposition}\label{proposition: smoothness compactified jacobian} For every geometric point $\Spec k \rightarrow \Spec S = \Spec \mathbb{Z}[1/N]$, the scheme $\bar{\mathcal{J}}_k$ is smooth. \end{proposition} \begin{proof} By \cite[Corollary B.2]{FantechiGottschevStraten-EulerNumberCompactifiedJacobian}, $\bar{\mathcal{J}}_k$ is smooth in a neighbourhood of the fibre above $0 \in \underline{B}_k$. (In loc. cit. it is assumed that the characteristic of the base field is $0$ but the proof given works for any algebraically closed field of characteristic not dividing $N$.) To see that $\bar{\mathcal{J}}_k$ is smooth everywhere, we use the contracting $\mathbb{G}_m$-action. 
Recall that we have defined a $\mathbb{G}_{m,k}$-action on $\mathcal{C}_k \rightarrow \underline{B}_k$ in \S\ref{subsection: a family of curves}. By functoriality this induces a $\mathbb{G}_{m,k}$-action on $\bar{\mathcal{J}}_k$ such that the morphism $\bar{\mathcal{J}}_k \rightarrow \underline{B}_k$ is $\mathbb{G}_{m,k}$-equivariant. If $Z$ is the singular locus of $\bar{\mathcal{J}}_k$ then $Z$ is a closed subscheme which is invariant under the action of $\mathbb{G}_{m,k}$. Since the closure of every orbit of $\underline{B}_k$ contains $0\in \underline{B}_k$, this subscheme must intersect the fibre above $0\in \underline{B}_k$ nontrivially, if it is nonempty. We conclude that $Z$ is empty and $\bar{\mathcal{J}}_k$ is smooth, as required. \end{proof} \begin{remark} Although the total space $\bar{\mathcal{J}}_k $ is smooth, the morphism $\bar{\mathcal{J}}_k\rightarrow \underline{B}_k$ will not be smooth over points which do not lie in $\underline{B}_k^{\rs}$. \end{remark} For later reference, we summarize the relevant properties of $\bar{\mathcal{J}}$ in the following corollary. \begin{corollary}\label{corollary: good compactifications exist} The morphism $\bar{\mathcal{J}} \rightarrow \underline{B}_S$ constructed above is flat and projective and its restriction to $\underline{B}^{\rs}_S \subset \underline{B}_S$ is isomorphic to $\mathcal{J} \rightarrow \underline{B}^{\rs}_S$. The morphism $\bar{\mathcal{J}} \rightarrow \Spec S$ is smooth with geometrically integral fibres. For every geometric point $\Spec k \rightarrow \Spec S$, $\mathcal{J}_k$ is dense in $\bar{\mathcal{J}}_k$ and the locus of $\bar{\mathcal{J}}_k$ where the morphism $\bar{\mathcal{J}}_k \rightarrow \underline{B}_k$ is smooth is an open subset whose complement has codimension at least two in $\bar{\mathcal{J}}_k$. 
\end{corollary} \begin{proof} The first sentence follows from Proposition \ref{proposition: compactified Jacobian props AIK} and the definition of $\bar{\mathcal{J}} \rightarrow \underline{B}_S$. The smoothness of $\bar{\mathcal{J}} \rightarrow \Spec S$ follows from Proposition \ref{proposition: smoothness compactified jacobian} and the flatness of $\bar{\mathcal{J}} \rightarrow \Spec S$. The integrality of the geometric fibres of $\bar{\mathcal{J}}\rightarrow \Spec S$ follows from the smoothness of $\bar{\mathcal{J}} \rightarrow \Spec S$, the irreducibility of the fibres of $\bar{\mathcal{J}} \rightarrow \underline{B}_S$ and Lemma \ref{lemma: irreducibility fibres} below. Moreover since $\mathcal{J}_k$ and $\bar{\mathcal{J}}_k$ are both irreducible of the same dimension, $\mathcal{J}_k$ is dense in $\bar{\mathcal{J}}_k$. Finally we prove the claim about the smooth locus of the morphism $\bar{\mathcal{J}}_k \rightarrow \underline{B}_k$; for the remainder of the proof we denote this morphism by $\phi$. Let $Z \subset \bar{\mathcal{J}}_k$ denote the (reduced) closed subscheme where $\phi$ fails to be smooth. The smoothness of $\mathcal{J}_k \rightarrow \underline{B}^{\rs}_k$ shows that $Z$ is supported above the complement of $\underline{B}^{\rs}_k$ in $\underline{B}_k$. Moreover since the fibres of $\phi$ are geometrically integral they intersect $Z$ in a proper closed subset. Combining these two facts shows that $Z$ has codimension at least two in $\bar{\mathcal{J}}_k$. \end{proof} \begin{lemma}\label{lemma: irreducibility fibres} Let $f\colon X\rightarrow Y$ be a flat morphism of schemes which is locally of finite presentation. Suppose that $Y$ and the fibres of $f$ are irreducible. Then $X$ is irreducible. \end{lemma} \begin{proof} Since $f$ is open, this follows from \cite[Tag \href{https://stacks.math.columbia.edu/tag/004Z}{004Z}]{stacksproject}. 
\end{proof} \subsection{The case of square-free discriminant}\label{subsection: case of square-free discriminant} In this section we prove Theorem \ref{theorem: integral representatives exist} in the case of square-free discriminant. We closely follow the section of the same name in \cite[\S5.1]{Thorne-Romano-E8}. We start with some preparatory lemmas. The first two lemmas are very similar to \cite[Lemmas 5.2 and 5.3]{Thorne-Romano-E8}; their proofs will be omitted. \begin{lemma}\label{lemma: trivial kernel of H1(R,G)->H1(K,G)} Let $R$ be a Noetherian regular integral domain with fraction field $K$ such that every locally free $R$-module of finite rank is free. Then the map $\mathrm{H}^1(R,\underline{G}) \rightarrow \mathrm{H}^1(K,\underline{G})$ has trivial kernel. \end{lemma} \begin{lemma}\label{lemma: injective H^1 for quasifinite etale gp scheme} Let $X$ be a Dedekind scheme (i.e. a regular integral one-dimensional Noetherian scheme) with function field $K$. Let $\Gamma$ be a quasi-finite \'etale commutative group scheme over $X$. Suppose that $\Gamma$ is a N\'eron model of its generic fibre: for every \'etale morphism $U\rightarrow X$ with $U$ a Dedekind scheme with function field $K(U)$, we have $\Gamma(U) = \Gamma(K(U))$. Then the map $\mathrm{H}^1(X,\Gamma) \rightarrow \mathrm{H}^1(K,\Gamma)$ is injective. \end{lemma} The following lemma is a special case of a result proven by Poonen and Stoll concerning hypersurfaces of arbitrary degree and dimension. \begin{lemma}\label{lemma: sqfree disc implies regular node} Let $R$ be a discrete valuation ring in which $N$ is a unit. Let $K = \Frac R$ and let $\ord_K: K^{\times} \twoheadrightarrow \mathbb{Z}$ be the normalized discrete valuation. Let $b\in \underline{B}(R)$ and suppose that $\ord_K \Delta(b) = 1$. Then $\mathcal{C}_b$ is regular and its special fibre contains a unique singularity, which is a node. 
\end{lemma} \begin{proof} Recall from \S\ref{subsection: discriminant polynomial} that $\Delta_0\in \mathbb{Z}[\underline{B}]$ denotes the (divided) discriminant of a plane quartic curve. (It was originally defined as an element of $\mathbb{Q}[B]$ but by the same formula it defines an element of $\mathbb{Z}[\underline{B}]$.) Proposition \ref{proposition: discriminant Delta and Delta0 agree} and our assumptions on $N$ imply that $\Delta(b)$ and $\Delta_0(b)$ agree up to an element of $\mathbb{Z}[1/N]^{\times}$. So $\ord_K\Delta_0(b)=1$. The lemma now follows from the main result of \cite{PoonenStoll-Hypersurfacesdiscriminantuniformizer}. \end{proof} \begin{lemma}\label{lemma: squarefree disc properties of elements of V} Let $R$ be a discrete valuation ring with residue field $k$ in which $N$ is a unit. Let $\bar{k}$ be an algebraic closure of $k$. Let $K = \Frac R$ and let $\ord_K: K^{\times} \twoheadrightarrow \mathbb{Z}$ be the normalized discrete valuation. Let $x\in \underline{V}(R)$ with $b=\pi(x)\in \underline{B}(R)$ and suppose that $\ord_K \Delta(b)=1$. Then the reduction $x_k$ of $x$ in $\underline{V}(k)$ is regular and $\underline{G}(\bar{k})$-conjugate to $\sigma(b)_k$. In addition the $R$-group scheme $Z_{\underline{G}}(x)$ is quasi-finite \'etale and has special fibre of order $2^5$. \end{lemma} \begin{proof} We are free to replace $R$ by a discrete valuation ring $R'$ containing $R$ such that any uniformizer in $R$ is also a uniformizer in $R'$. Therefore we may assume that $R$ is complete and $k$ algebraically closed. Let $x_k=y_s+y_n$ be the Jordan decomposition of $x_k\in \underline{V}(k)$ as a sum of its semisimple and nilpotent parts. Let $\underline{\mathfrak{h}}_{0,k}=\mathfrak{z}_{\underline{\mathfrak{h}}}(y_s)$ and $\underline{\mathfrak{h}}_{1,k}=\image(\Ad(y_s))$. 
Then $\underline{\mathfrak{h}}_{k}=\underline{\mathfrak{h}}_{0,k}\oplus \underline{\mathfrak{h}}_{1,k}$, where $\Ad(x_k)$ acts nilpotently on $\underline{\mathfrak{h}}_{0,k}$ and invertibly on $\underline{\mathfrak{h}}_{1,k}$. By Hensel's lemma, this decomposition lifts to an $\Ad(x)$-invariant decomposition of free $R$-modules $\underline{\mathfrak{h}}_R = \underline{\mathfrak{h}}_{0,R}\oplus \underline{\mathfrak{h}}_{1,R}$, where $\Ad(x)$ acts topologically nilpotently on $\underline{\mathfrak{h}}_{0,R}$ and invertibly on $\underline{\mathfrak{h}}_{1,R}$. We claim that there exists a unique closed subgroup $L\subset \underline{H}_R$ with Lie algebra $\underline{\mathfrak{h}}_{0,R}$ such that $L$ is $R$-smooth with connected fibres. The uniqueness follows from \cite[Exp. XIV, Proposition 3.12]{SchemasenGroupesII}. To show existence, choose a regular semisimple element $\bar{r}$ of the reductive Lie algebra $\mathfrak{z}_{\underline{\mathfrak{h}}}(y_s)$ and an arbitrary lift $r\in \underline{\mathfrak{h}}_{0,R}$. The centralizer $\mathfrak{z}_{\underline{\mathfrak{h}}}(r)$ is a Cartan subalgebra of $\underline{\mathfrak{h}}_R$ whose reduction to $\underline{\mathfrak{h}}_k$ contains $y_s$. Since $k$ is algebraically closed, the algebra $\mathfrak{z}_{\underline{\mathfrak{h}}}(r)$ is split, so there exists an element $y_{s,R}\in \mathfrak{z}_{\underline{\mathfrak{h}}}(r)$ lifting $y_s$ such that $\mathfrak{z}_{\underline{\mathfrak{h}}}(y_{s,R})=\underline{\mathfrak{h}}_{0,R}$. Then $L= Z_{\underline{H}}(y_{s,R})$ is $R$-smooth, has Lie algebra $\underline{\mathfrak{h}}_{0,R}$, and has connected fibres by \cite[Theorem 3.14]{Steinberg-Torsioninreductivegroups}. The construction shows that $L_k=Z_{\underline{H}}(y_s)$. Lemma \ref{lemma: sqfree disc implies regular node} shows that the curve $\mathcal{C}_{b,k}$ has a unique nodal singularity. 
Therefore, by \cite[Corollary 3.16]{Thorne-thesis},\footnote{The proof of that corollary only depends on \cite[\S6.6]{Slodowy-simplesingularitiesalggroups} so is valid in any characteristic which is very good for $\lieh$, i.e.\ different from $2,3,5$; see the remark at the end of \cite[\S6.6]{Slodowy-simplesingularitiesalggroups}.} the derived group of $L$ has type $A_1$ and the centre $Z(L)$ of $L$ has rank $5$. Moreover, by \cite[Lemma 2.5]{Thorne-thesis}, the restriction $\theta_L$ of $\theta$ to $L$ is a stable involution, in the sense that for each geometric point of $\Spec R$ there exists a maximal torus of $L$ on which $\theta$ acts as $-1$. There is an isomorphism $L/Z(L)\simeq \PGL_2$ inducing an isomorphism $\underline{\mathfrak{h}}_{0,R}^{\der}\simeq \underline{\mathfrak{h}}_{0,R}/\mathfrak{z}(\underline{\mathfrak{h}}_{0,R}) \simeq \liesl_{2,R}$ under which $\theta_L$ corresponds to the involution $\xi = \Ad\left(\text{diag}(1,-1) \right)$. (The isomorphism $\underline{\mathfrak{h}}_{0,R}^{\der}\simeq \underline{\mathfrak{h}}_{0,R}/\mathfrak{z}(\underline{\mathfrak{h}}_{0,R})$ exists by our assumptions on $N$, since the residue characteristic of $R$ does not divide $N$, and by the same logic as the proof of Proposition \ref{proposition: G-torsors in terms of groupoids} any two stable involutions on $\liesl_{2,R}$ are \'etale locally conjugate.) The claims in the lemma now follow easily from explicit calculations in $\liesl_{2,R}$. Indeed, to show that $x_k$ is regular it suffices to show that $y_n$ is regular nilpotent in $\mathfrak{z}_{\underline{\mathfrak{h}}}(y_s)=\underline{\mathfrak{h}}_{0,k}$. Let $x'$ denote the projection of $x$ in $\underline{\mathfrak{h}}_{0,R}^{\der}$. The image of $x'$ under the isomorphism $\underline{\mathfrak{h}}_{0,R}^{\der}\rightarrow \liesl_{2,R}$ corresponds to an element of the form $$\begin{pmatrix} 0 & a \\ b & 0 \end{pmatrix}$$ with $\ord_K(ab)=1$. Therefore the reduction of $x'$ in $\liesl_{2,k}$ is regular nilpotent, as desired.
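To make the final step explicit: since $\ord_K(ab)=1$, exactly one of $a$, $b$ lies in the maximal ideal of $R$; say it is $b$, the other case being symmetric. The reduction of $x'$ in $\liesl_{2,k}$ is then
$$\begin{pmatrix} 0 & \bar{a} \\ 0 & 0 \end{pmatrix}, \qquad \bar{a}\in k^{\times},$$
a nonzero nilpotent element of $\liesl_{2,k}$, and every nonzero nilpotent element of $\liesl_{2,k}$ is regular.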
We show that $x_k$ is $\underline{G}(k)$-conjugate to $\sigma(b)_k$. By \cite[Corollary 2.6 and Theorem 2.20]{Levy-Vinbergtheoryposchar} (which extends Vinberg theory to good positive characteristic), the semisimple parts of $x_k$ and $\sigma(b)_k$ are $\underline{G}(k)$-conjugate. Moreover both $x_k$ and $\sigma(b)_k$ are regular. Therefore it suffices to prove that $L^{\theta}(k)$ acts transitively on the regular nilpotent elements of $\underline{\mathfrak{h}}_{0,k}^{\theta=-\Id}$. Since $\underline{H}$ is adjoint, the character group of $Z(L)$ is the $E_6$ root lattice modulo the span of a root. Therefore $Z(L)$ has connected fibres. It follows that the exact sequence $$ 1\rightarrow Z(L_k) \rightarrow L_k \rightarrow \PGL_{2,k} \rightarrow 1 $$ induces a surjection $L_k^{\theta} \rightarrow \PGL_{2,k}^{\xi}$. Since $\PGL_{2,k}^{\xi}$ acts transitively on the regular nilpotents of $\liesl_{2,k}^{\xi=-\Id}$, the statement for $L_k$ follows. Finally, by \cite[Proposition 2.8]{Thorne-thesis}, $Z_{\underline{G}}(x)_k=Z(L_k)[2]$. Since $Z(L_k)$ is connected of rank $5$, it follows that $Z_{\underline{G}}(x)_k$ has order $2^5$. \end{proof} In the next proposition we use a slight abuse of notation: for any $b\in \underline{B}(R)$ with $\Delta(b)\neq 0$ we write $\mathcal{J}_b$ (which is a priori only defined if $\Delta(b) \in R^{\times}$) for the $K$-scheme $\mathcal{J}_{b_K}$. \begin{proposition}\label{prop: integral reps squarefree discr} Let $R$ be a discrete valuation ring in which $N$ is a unit. Let $K = \Frac R$ and let $\ord_K: K^{\times} \twoheadrightarrow \mathbb{Z}$ be the normalized discrete valuation. Let $b\in \underline{B}(R)$ and suppose that $\ord_K \Delta(b)\leq 1$. Then: \begin{enumerate} \item If $x\in \underline{V}_b(R)$, then $Z_{\underline{G}}(x)(K) = Z_{\underline{G}}(x)(R)$.
\item The natural map $\alpha\colon\underline{G}(R)\backslash \underline{V}_b(R) \rightarrow \underline{G}(K)\backslash \underline{V}_b(K)$ is injective and its image contains $\eta_b\left(\mathcal{J}_b(K)/2\mathcal{J}_b(K)\right)$. \item If further $R$ is complete and has finite residue field then the image of $\alpha$ equals $\eta_b\left(\mathcal{J}_b(K)/2\mathcal{J}_b(K)\right)$. \end{enumerate} \end{proposition} The proof is very similar to the proof of \cite[Proposition 5.4]{Thorne-Romano-E8}, where an analogous result for the stable $3$-grading on $E_8$ is proved. \begin{proof} If $\hat{R}$ is the completion of $R$ with fraction field $\hat{K}$, we have the equality $\underline{G}(\hat{K})=\underline{G}(\hat{R})\underline{G}(K)$ \cite[Th\'eor\`eme 3.2]{Nisnevich-Espaceshomogenesprincipaux}. We may therefore assume that $R$ is complete. If $\ord_K \Delta(b)=0$, $\mathcal{J}_b$ is smooth over $R$ and $\mathcal{J}_b(K)=\mathcal{J}_b(R)$. Since $Z_{\underline{G}}(x)$ is finite \'etale over $R$, the first part follows. By Proposition \ref{proposition: G-orbits in terms of groupoids} and Lemma \ref{lemma: injective H^1 for quasifinite etale gp scheme}, $\alpha$ is injective. Proposition \ref{proposition: inject 2-descent orbits spreading out} implies that $\eta_b\colon \mathcal{J}_b(K)/2\mathcal{J}_b(K)\rightarrow \underline{G}(K)\backslash \underline{V}_b(K)$ factors through $\underline{G}(R)\backslash \underline{V}_b(R)$, so the second part follows. If the residue field $k$ is finite, the pointed sets $\mathrm{H}^1(R, \underline{G})$ and $\mathrm{H}^1(R,\mathcal{J}_b)$ are trivial by \cite[III.3.11(a)]{milne-etalecohomology} and Lang's theorem. The third part then follows from the fact that the $2$-descent map $ \mathcal{J}_b(R)/2\mathcal{J}_b(R)\rightarrow \mathrm{H}^1(R,\mathcal{J}_b[2])$ is an isomorphism. We now assume that $\ord_K \Delta(b)=1$. 
Lemma \ref{lemma: sqfree disc implies regular node} implies that $\mathcal{C}_b/R$ is regular and has a unique singularity, which is a node. Write $\mathscr{J}_b$ for the N\'eron model of $\mathcal{J}_b$. The results of \cite[Chapter 9]{BLR-NeronModels} (in particular Theorem 1 of \S9.5 and Example 8 of \S9.2 therein) imply that $\mathscr{J}_b$ is a smooth group scheme over $R$ with connected fibres and that the special fibre of $\mathscr{J}_b$ is an extension of a $2$-dimensional abelian variety by a rank $1$ torus. The quasi-finite \'etale commutative group scheme $\mathscr{J}_b[2]$ has generic fibre of order $2^6$ and special fibre of order $2^5$. We claim that the map $\underline{G} \rightarrow \underline{V}_b^{\reg},\, g \mapsto g\cdot \sigma(b)$ is a torsor for the \'etale group scheme $Z_{\underline{G}}(\sigma(b))$. Since this map is smooth (Property 6 of \S\ref{subsection: integral structures}) and surjective in the generic fibre (Proposition \ref{prop : graded chevalley}), it suffices to prove that it is surjective in the special fibre. Since every closed point of $\underline{V}_{b,k}^{\reg}$ lifts to an element of $\underline{V}_b(R')$ for some finite extension $R \subset R'$ of ramification index $1$, this follows from Lemma \ref{lemma: squarefree disc properties of elements of V} applied to $R'$. We now prove the first part. Since $x$ is \'etale locally $\underline{G}$-conjugate to $\sigma(b)$ over $R$ by the previous paragraph, it suffices to consider the case $x=\sigma(b)$. We show that the isomorphism $Z_{\underline{G}}(\sigma(b))_K \simeq \mathcal{J}_b[2] $ of (a $\mathbb{Z}[1/N]$-analogue of) Proposition \ref{proposition: bridge jacobians root lattices} extends to an isomorphism $Z_{\underline{G}}(\sigma(b)) \simeq \mathscr{J}_b[2] $. Indeed, by the N\'eron mapping property the former isomorphism extends to an open immersion $Z_{\underline{G}}(\sigma(b))\rightarrow \mathscr{J}_b[2] $ of separated quasi-finite \'etale group schemes over $R$. 
Since the special fibre of $Z_{\underline{G}}(\sigma(b))$ has order $2^5$ by Lemma \ref{lemma: squarefree disc properties of elements of V}, this is an isomorphism. Part 1 then follows from the equality $\mathscr{J}_b[2](K) = \mathscr{J}_b[2](R)$. To prove the remaining parts, note that the surjectivity of $\underline{G} \rightarrow \underline{V}_b^{\reg},\, g \mapsto g\cdot \sigma(b)$ implies that (in the notation of \S\ref{subsection: some groupoids}) every object of $\GrLieE_{R,b}$ is \'etale locally isomorphic to $(\underline{H}_R,[\Id]_R,\theta_R,\sigma(b))$. By Propositions \ref{proposition: H1 of stabilizer and GrLieE} and \ref{proposition: G-orbits in terms of groupoids}, the $\underline{G}(R)$-orbits of $\underline{V}_b(R)$ are in bijection with the kernel of the map $\mathrm{H}^1(R, Z_{\underline{G}}(\sigma(b)))\rightarrow \mathrm{H}^1(R,\underline{G})$. Since the map $\mathrm{H}^1(R, Z_{\underline{G}}(\sigma(b)))\rightarrow \mathrm{H}^1(K, Z_{\underline{G}}(\sigma(b)))$ is injective (using the isomorphism $Z_{\underline{G}}(\sigma(b))\simeq \mathscr{J}_b[2]$ and Lemma \ref{lemma: injective H^1 for quasifinite etale gp scheme}), the map $\underline{G}(R)\backslash \underline{V}_b(R) \rightarrow \underline{G}(K)\backslash \underline{V}_b(K)$ is injective too. To show that the image of $\underline{G}(R)\backslash \underline{V}_b(R) \rightarrow \underline{G}(K)\backslash \underline{V}_b(K)$ contains $\eta_b\left(\mathcal{J}_b(K)/2\mathcal{J}_b(K)\right)$, note that we have an exact sequence of smooth group schemes \begin{align*} 0 \rightarrow \mathscr{J}_b[2]\rightarrow \mathscr{J}_b \xrightarrow{\times 2} \mathscr{J}_b \rightarrow 0, \end{align*} since $\mathscr{J}_b$ has connected fibres.
This implies the existence of a commutative diagram: \begin{center} \begin{tikzcd} \mathscr{J}_b(R)/2\mathscr{J}_b(R) \arrow[d] \arrow[r , "="] & \mathcal{J}_b(K)/2\mathcal{J}_b(K) \arrow[d] \\ {\mathrm{H}^1(R,\mathscr{J}_b[2])} \arrow[r] & {\mathrm{H}^1(K,\mathcal{J}_b[2])} \end{tikzcd} \end{center} It therefore suffices to prove that every element in the image of the map $\mathscr{J}_b(R)/2\mathscr{J}_b(R) \rightarrow \mathrm{H}^1(R,\mathscr{J}_b[2])$ has trivial image in $\mathrm{H}^1(R,\underline{G})$. This follows from the injectivity of the map $\mathrm{H}^1(R,\underline{G}) \rightarrow \mathrm{H}^1(K,\underline{G})$ (Lemma \ref{lemma: trivial kernel of H1(R,G)->H1(K,G)}). If $R$ has finite residue field then Lang's theorem implies that $\mathrm{H}^1(R,\underline{G}) = \{1\}$. In this case the $\underline{G}(R)$-orbits on $\underline{V}_b(R)$ are in bijection with $\mathrm{H}^1(R,\mathscr{J}_b[2])$. The triviality of $\mathrm{H}^1(R,\mathscr{J}_b)$ (again by Lang's theorem) shows that $\mathrm{H}^1(R,\mathscr{J}_b[2])$ is in bijection with $\mathscr{J}_b(R)/2\mathscr{J}_b(R) = \mathcal{J}_b(K)/2\mathcal{J}_b(K)$. This proves Part 3, completing the proof of the proposition. \end{proof} The following corollary considers arbitrary Dedekind schemes. Since such schemes need not satisfy the conditions of Theorem \ref{theorem: inject 2-descent into orbits} (they can carry nontrivial vector bundles), we must switch our focus from orbits to groupoids, in the language of \S\ref{subsection: some groupoids}. \begin{corollary}\label{corollary: int reps sqfree, general dedekind scheme} Let $X$ be a Dedekind scheme in which $N$ is a unit, and let $K$ be its function field. For every closed point $p$ of $X$ write $\ord_{p} \colon K^{\times} \twoheadrightarrow \mathbb{Z}$ for the normalized discrete valuation of $p$. Let $b\in \underline{B}(X)$ be such that $\ord_{p}(\Delta(b))\leq 1$ for all $p$.
Let $P\in \mathcal{J}_b(K)/2\mathcal{J}_b(K)$ and let $\eta_b(P)\in G(K) \backslash V_b(K)$ be the corresponding orbit from Theorem \ref{theorem: inject 2-descent into orbits}. Then the object of $\GrLieE_{K,b}$ corresponding to $\eta_b(P)$ using Proposition \ref{proposition: G-orbits in terms of groupoids} uniquely extends to an object of $\GrLieE_{X,b}$. \end{corollary} \begin{proof} By the same logic as the proof of Proposition \ref{prop: integral reps squarefree discr}, the morphism $\underline{G} \rightarrow \underline{V}_b^{\reg},\, g \mapsto g\cdot \sigma(b)$ is a torsor for the \'etale group scheme $Z_{\underline{G}}(\sigma(b))$ and the isomorphism $Z_{\underline{G}}(\sigma(b))_K \simeq \mathcal{J}_b[2] $ of Proposition \ref{proposition: bridge jacobians root lattices} extends to an isomorphism $Z_{\underline{G}}(\sigma(b)) \simeq \mathscr{J}_b[2]$, where $\mathscr{J}_b \rightarrow X$ denotes the N\'eron model of $\mathcal{J}_b$. So every object of $\GrLieE_{X,b}$ is \'etale locally isomorphic to $(\underline{H}_X,[\Id]_X,\theta_X,\sigma(b))$. Therefore by Proposition \ref{proposition: H1 of stabilizer and GrLieE} the set of isomorphism classes of objects in $\GrLieE_{X,b}$ is in bijection with the pointed set $\mathrm{H}^1(X,\mathscr{J}_b[2])$. Let $\mathcal{A}\in \mathrm{H}^1(K,\mathcal{J}_b[2])$ be the class corresponding to $\eta_b(P)$ under Proposition \ref{proposition: G-orbits in terms of groupoids}. It suffices to prove that $\mathcal{A}$ uniquely lifts under the natural map $\mathrm{H}^1(X,\mathscr{J}_b[2]) \rightarrow \mathrm{H}^1(K,\mathcal{J}_b[2])$. The equality $\mathscr{J}_b(X)=\mathcal{J}_b(K)$ implies that the $2$-descent map $\mathcal{J}_b(K)/2\mathcal{J}_b(K)\rightarrow \mathrm{H}^1(K,\mathcal{J}_b[2])$ factors through $\mathrm{H}^1(X,\mathscr{J}_b[2])\rightarrow \mathrm{H}^1(K,\mathcal{J}_b[2])$, so $\mathcal{A}$ indeed lifts. 
The uniqueness follows from the injectivity of the map $\mathrm{H}^1(X,\mathscr{J}_b[2]) \rightarrow \mathrm{H}^1(K,\mathcal{J}_b[2])$ (Lemma \ref{lemma: injective H^1 for quasifinite etale gp scheme}). \end{proof} \subsection{The proof of Theorem \ref{theorem: integral representatives exist}}\label{subsection: proof of theorem integral representatives} We now treat the general case. We will do this by deforming to the case of square-free discriminant, with the help of the following Bertini type theorem over $\mathbb{Z}_p$. \begin{proposition}\label{proposition: Bertini type theorem} Let $p$ be a prime number. Let $\mathcal{Y} \rightarrow \mathbb{Z}_p$ be a smooth, quasiprojective morphism of relative dimension $d\geq 1$ with geometrically integral fibres. Let $\mathcal{D} \subset \mathcal{Y}$ be an effective Cartier divisor. Assume that $\mathcal{Y}_{\mathbb{F}_p}$ is not contained in $\mathcal{D}$ (i.e. $\mathcal{D}$ is horizontal) and that $\mathcal{D}_{\mathbb{Q}_p}$ is reduced. Let $P\in \mathcal{Y}(\mathbb{Z}_p)$ be a section such that $P_{\mathbb{Q}_p} \not\in \mathcal{D}_{\mathbb{Q}_p}$. Then there exists a closed subscheme $\mathcal{X} \hookrightarrow \mathcal{Y}$ containing the image of $P$ satisfying the following properties. \begin{itemize} \item $\mathcal{X} \rightarrow \mathbb{Z}_p$ is smooth of relative dimension $1$ with geometrically integral fibres. \item $\mathcal{X}_{\mathbb{F}_p}$ is not contained in $\mathcal{D}$ and the (scheme-theoretic) intersection $\mathcal{X}_{\mathbb{Q}_p} \cap \mathcal{D}_{\mathbb{Q}_p}$ is reduced. \end{itemize} \end{proposition} \begin{proof} If $d=1$ we can take $\mathcal{X} = \mathcal{Y}$ and there is nothing to prove. Thus for the rest of the proof we may assume that $d\geq 2$. Fix a locally closed embedding $\mathcal{Y} \subset \P_{\mathbb{Z}_p}^n$. We will induct on $d$ by finding a suitable hypersurface section using Bertini theorems over $\mathbb{F}_p$ and $\mathbb{Q}_p$. 
By combining \cite[Theorem 1.2]{Poonen-BertiniTheoremsFiniteFields} and \cite[Theorem 1.1]{CharlesPoonen}, we see that there exists a hypersurface $H$ in $\P^n_{\mathbb{F}_p}$ such that the (scheme-theoretic) intersection $\mathcal{Y}_{\mathbb{F}_p} \cap H$ is smooth, geometrically irreducible of codimension $1$ in $\mathcal{Y}_{\mathbb{F}_p}$, contains the point $P_{\mathbb{F}_p}$ and is not contained in $\mathcal{D}$. We will lift this hypersurface to a hypersurface in $\P^n_{\mathbb{Z}_p}$ with similar properties, as follows. Let $M$ be the projective space over $\mathbb{Q}_p$ parametrizing hypersurfaces of degree $\deg H$ in $\P^n_{\mathbb{Q}_p}$ containing the point $P_{\mathbb{Q}_p}$. By the classical Bertini theorem over $\mathbb{Q}_p$, there exists an open dense subscheme $U$ of $M$ such that every hypersurface $H'$ in $U$ has the property that $H' \cap \mathcal{Y}_{\mathbb{Q}_p}$ is smooth, geometrically irreducible of codimension $1$ and that $H' \cap \mathcal{D}_{\mathbb{Q}_p}$ is reduced. The subset of $M(\mathbb{Q}_p)$ consisting of hypersurfaces whose reduction mod $p$ is $H$ is an open $p$-adic ball in $M(\mathbb{Q}_p)$. Consequently, it intersects $U(\mathbb{Q}_p)$ nontrivially, since an open $p$-adic ball in a projective space over $\mathbb{Q}_p$ cannot be contained in a proper Zariski closed subset. So there exists a hypersurface $\mathcal{H} \subset \P^n_{\mathbb{Z}_p}$ lifting $H$ such that $\mathcal{H}_{\mathbb{Q}_p} \in U(\mathbb{Q}_p)$. By \cite[Theorem 22.6]{Matsumura-CommutativeRingTheory}, the scheme $\mathcal{Y}\cap \mathcal{H}$ is flat over $\mathbb{Z}_p$. It follows that the morphism $\mathcal{Y} \cap \mathcal{H}\rightarrow \mathbb{Z}_p$ is smooth with geometrically integral fibres. By construction the special fibre of $\mathcal{Y} \cap \mathcal{H}$ is not contained in $\mathcal{D}$ and the generic fibre of $\mathcal{H} \cap \mathcal{D}$ is reduced.
The proposition now follows by replacing $\mathcal{Y}$ by $\mathcal{Y}\cap \mathcal{H}$ and induction on the relative dimension of $\mathcal{Y} \rightarrow \mathbb{Z}_p$. \end{proof} We come back to our situation of interest. Recall from $\S 4.1$ that $\sh{E}_p$ denotes the subset of elements of $\underline{B}(\mathbb{Z}_p)$ of nonzero discriminant. \begin{corollary}\label{corollary: deform the point in the jacobian general case integral} Let $p$ be a prime not dividing $N$. Let $b\in \sh{E}_p$ and $P \in J_b(\mathbb{Q}_p)$. Then there exists a morphism $\mathcal{X}\rightarrow\mathbb{Z}_p$ which is of finite type, smooth of relative dimension $1$ and has geometrically integral fibres, together with a point $x \in \mathcal{X}(\mathbb{Z}_p)$ satisfying the following properties. \begin{enumerate} \item There exists a morphism $\widetilde{b}\colon \mathcal{X} \rightarrow \underline{B}_{\mathbb{Z}_p}$ with the property that $\widetilde{b}(x) = b$ and that the discriminant $\Delta(\widetilde{b})$, seen as a map $\mathcal{X} \rightarrow \mathbb{A}^1_{\mathbb{Z}_p}$, is not identically zero on the special fibre and is square-free on the generic fibre of $\mathcal{X}$. \item Write $\mathcal{X}^{\rs}$ for the open subscheme of $\mathcal{X}$ where $\Delta(\widetilde{b})$ does not vanish. Then there exists a morphism $\widetilde{P}\colon \mathcal{X}^{\rs} \rightarrow \mathcal{J}$ lifting the morphism $ \mathcal{X}^{\rs} \rightarrow \underline{B}_{\mathbb{Z}_p}^{\rs}$ satisfying $\widetilde{P}(x_{\mathbb{Q}_p}) = P$. \end{enumerate} \end{corollary} \begin{proof} We apply Proposition \ref{proposition: Bertini type theorem} with $\mathcal{Y} = \bar{\mathcal{J}}_{\mathbb{Z}_p}$, the compactified Jacobian introduced in \S\ref{subsection: compactifications}. We define $\mathcal{D}$ to be the pullback of the discriminant locus $\{ \Delta = 0 \} \subset \underline{B}_{\mathbb{Z}_p}$ under the morphism $\bar{\mathcal{J}}_{\mathbb{Z}_p} \rightarrow \underline{B}_{\mathbb{Z}_p}$. 
Since the latter morphism is proper, we can extend $P \in J_b(\mathbb{Q}_p)$ to an element of $\bar{\mathcal{J}}_b(\mathbb{Z}_p)$, still denoted by $P$. We claim that the triple $(\mathcal{Y}, \mathcal{D},P)$ satisfies the assumptions of Proposition \ref{proposition: Bertini type theorem}. Indeed, the properties of $\mathcal{Y}$ follow from Corollary \ref{corollary: good compactifications exist}. Moreover $\mathcal{Y}_{\mathbb{F}_p}$ is not contained in $\mathcal{D}$ since $\Delta$ is nonzero mod $p$ by our assumptions on $N$ in \S\ref{subsection: integral structures}. Since $\bar{\mathcal{J}}_{\mathbb{Q}_p} \rightarrow \underline{B}_{\mathbb{Q}_p}$ is smooth outside a subset of codimension $2$ in $\bar{\mathcal{J}}_{\mathbb{Q}_p}$ and $\{\Delta = 0\}_{\mathbb{Q}_p} \subset B_{\mathbb{Q}_p}$ is reduced, the scheme $\mathcal{D}_{\mathbb{Q}_p}$ is reduced too. Finally $P_{\mathbb{Q}_p}\not \in \mathcal{D}_{\mathbb{Q}_p}$ since $b$ has nonzero discriminant. We obtain a closed subscheme $\mathcal{X} \hookrightarrow \bar{\mathcal{J}}_{\mathbb{Z}_p}$ satisfying the conclusion of Proposition \ref{proposition: Bertini type theorem}. Write $x\in \mathcal{X}(\mathbb{Z}_p)$ for the section corresponding to $P$, $\widetilde{b}$ for the restriction of $\bar{\mathcal{J}}_{\mathbb{Z}_p} \rightarrow \underline{B}_{\mathbb{Z}_p}$ to $\mathcal{X}$ and $\widetilde{P}$ for the restriction of the inclusion $\mathcal{X} \hookrightarrow \bar{\mathcal{J}}_{\mathbb{Z}_p}$ to $\mathcal{X}^{\rs}$. We claim that the tuple $(\mathcal{X},x,\widetilde{b},\widetilde{P})$ satisfies the conclusion of the corollary. This follows readily from Proposition \ref{proposition: Bertini type theorem}, except the statement that the discriminant map $\mathcal{X} \rightarrow \mathbb{A}^1_{\mathbb{Z}_p}$ is square-free on the generic fibre. 
This statement is equivalent to the pullback of the discriminant locus $\{\Delta = 0\} \subset B_{\mathbb{Q}_p}$ along $\widetilde{b}_{\mathbb{Q}_p}\colon \mathcal{X}_{\mathbb{Q}_p} \rightarrow B_{\mathbb{Q}_p}$ being reduced. Since this pullback is $\mathcal{X}_{\mathbb{Q}_p} \cap \mathcal{D}_{\mathbb{Q}_p}$, which is reduced by Proposition \ref{proposition: Bertini type theorem}, the statement is true and the corollary follows. \end{proof} We have now made all the preparations for the proof of Theorem \ref{theorem: integral representatives exist}, which we give next. We keep the notation from this section and assume that we have made a choice of $(\mathcal{X},x,\widetilde{b},\widetilde{P})$ satisfying the conclusion of Corollary \ref{corollary: deform the point in the jacobian general case integral}. The strategy is to extend the orbit $\eta_b(P)$ (which corresponds to the point $x_{\mathbb{Q}_p}$) to larger and larger subsets of $\mathcal{X}$. Let $y \in \mathcal{X}$ be a closed point of the special fibre with nonzero discriminant which has an affine open neighbourhood containing $x_{\mathbb{Q}_p}$. Let $R$ be the semi-local ring of $\mathcal{X}$ at $x_{\mathbb{Q}_p}$ and $y$. Since every projective module of constant rank over a semi-local ring is free \cite{Hinohara-projmodulessemilocalring}, we can apply Theorem \ref{theorem: inject 2-descent into orbits} to obtain an element of $G(R)\backslash V_{\widetilde{b}}(R)$. We can spread this out to an element of $G(U_1)\backslash V_{\widetilde{b}}(U_1)$, where $U_1$ is an open subset of $\mathcal{X}$ containing $x_{\mathbb{Q}_p}$ and $y$. Under the correspondence of Proposition \ref{proposition: G-orbits in terms of groupoids}, this corresponds to an object $(H_1,\chi_1,\theta_1,\gamma_1)$ of $\GrLieE_{U_1,\widetilde{b}}$ whose pullback along the point $x_{\mathbb{Q}_p} \in U_1(\mathbb{Q}_p)$ corresponds to the orbit $\eta_b(P)$. Let $U_2 = \mathcal{X}_{\mathbb{Q}_p}$.
By Corollary \ref{corollary: int reps sqfree, general dedekind scheme}, the restriction of the object $(H_1,\chi_1,\theta_1,\gamma_1)$ to $U_1\cap U_2$ extends to an object $(H_2,\chi_2,\theta_2,\gamma_2)$ of $\GrLieE_{U_2,\widetilde{b}}$. We can glue these two objects to obtain an object $(H_0,\chi_0,\theta_0,\gamma_0)$ of $\GrLieE_{U_0,\widetilde{b}}$, where $U_0 = U_1 \cup U_2$. We observe that the complement of $U_0$ is a union of finitely many closed points since the special fibre of $\mathcal{X}$ is irreducible. By Lemma \ref{lemma: extend objects complement codimension 2} below, we can extend $(H_0,\chi_0,\theta_0,\gamma_0)$ to an object $(H_3,\chi_3,\theta_3,\gamma_3) \in \GrLieE_{\mathcal{X},\widetilde{b}}$. Let $(H_4,\chi_4,\theta_4,\gamma_4)\in \GrLieE_{\mathbb{Z}_p,b}$ denote the pullback of the previous object along the point $x\colon \Spec \mathbb{Z}_p \rightarrow \mathcal{X}$. Since $\mathrm{H}^1(\mathbb{Z}_p,\underline{G}) = \{1\}$, Proposition \ref{proposition: G-orbits in terms of groupoids} implies that $(H_4,\chi_4,\theta_4,\gamma_4)$ determines an element of $\underline{G}(\mathbb{Z}_p)\backslash \underline{V}_b(\mathbb{Z}_p)$ mapping to $\eta_b(P)$ under the natural map $\underline{G}(\mathbb{Z}_p) \backslash \underline{V}_b(\mathbb{Z}_p) \rightarrow G(\mathbb{Q}_p)\backslash V_b(\mathbb{Q}_p)$. This completes the proof of Theorem \ref{theorem: integral representatives exist}. \begin{lemma}\label{lemma: extend objects complement codimension 2} Let $X$ be an integral regular scheme of dimension $2$, and let $U\subset X$ be an open subset whose complement has dimension $0$. If $b\in \underline{B}_S(X)$, then restriction induces an equivalence of categories $\GrLieE_{X,b} \rightarrow \GrLieE_{U,b|_U}$. \end{lemma} \begin{proof} We will use the following fact \cite[Lemme 2.1(iii)]{ColliotTheleneSansuc-Fibresquadratiques} repeatedly: if $Y$ is an affine $X$-scheme of finite type, then restriction of sections $Y(X)\rightarrow Y(U)$ is bijective. 
To prove essential surjectivity, let $(H',\chi',\theta',\gamma')$ be an object of $\GrLieE_{U,b|_U}$. By \cite[Th\'eor\`eme 6.13]{ColliotTheleneSansuc-Fibresquadratiques} and Proposition \ref{proposition: G-torsors in terms of groupoids}, $(H',\chi',\theta')$ extends to an object $(H'',\chi'',\theta'')$ of $\GrLie_{X}$. If $Y$ is the closed subscheme of $\lieh''$ of elements $\gamma$ satisfying $\theta''(\gamma)=-\gamma$ and mapping to $b$ in $\underline{B}(X)$, then $Y$ is affine and of finite type over $X$. It follows from the fact above that $\gamma'$ lifts to an element $\gamma''\in \lieh''(X)$ and that $(H'',\chi'',\theta'',\gamma'')$ defines an object of $\GrLieE_{X,b}$. Since the scheme of isomorphisms $\Isom_{\GrLieE}(\mathcal{A},\mathcal{A}')$ between two objects of $\GrLieE_{X,b}$ is $X$-affine, full faithfulness follows again from the above fact. \end{proof} \subsection{A global consequence}\label{subsection: integrality, a global corollary} Recall that $\sh{E}_p = \underline{B}(\mathbb{Z}_p) \cap B^{\rs}(\mathbb{Q}_p)$. Define $\sh{E} \coloneqq \underline{B}(\mathbb{Z}) \cap B^{\rs}(\mathbb{Q})$. We state the following corollary, whose proof is completely analogous to the proof of \cite[Corollary 5.8]{Thorne-Romano-E8} and uses the fact that $\underline{G}$ has class number $1$ (Proposition \ref{proposition: tamagawa}). \begin{corollary}\label{corollary: weak global integral representatives} Let $b_0 \in \sh{E}$. Then for each prime $p$ dividing $N$ we can find an open compact neighbourhood $W_p$ of $b_0$ in $\sh{E}_p$ and an integer $n_p\geq 0$ with the following property. Let $M = \prod_{p | N} p^{n_p}$. Then for all $b\in \sh{E} \cap \left(\prod_{p| N} W_p \right)$ and for all $y \in \Sel_2(J_{M\cdot b})$, the orbit $\eta_{M\cdot b}(y) \in G(\mathbb{Q}) \backslash V_{M\cdot b}(\mathbb{Q})$ contains an element of $\underline{V}_{M\cdot b}(\mathbb{Z})$.
\end{corollary} This statement about integral representatives will be strong enough to obtain the main theorems in \S\ref{section: proof of main theorems}. \section{Counting}\label{section: counting} In this section we will apply the counting techniques of Bhargava to provide estimates for the integral orbits of bounded height in the representation $(\underline{G},\underline{V})$. \subsection{Heights and measures} \label{subsection: heights and measures} In this section we introduce measures on various spaces and study the relations between them. The results are used in the calculations of \S\ref{section: proof of main theorems}. Recall that $\underline{B} = \Spec \mathbb{Z}[p_2,p_5,p_6,p_8,p_9,p_{12}]$ and we have a $\mathbb{G}_m$-equivariant morphism $\pi \colon \underline{V} \rightarrow \underline{B}$. For any $b\in B(\mathbb{R})$ we define the \define{height} of $b$ by the formula $$\mathrm{ht}(b) \coloneqq \sup |p_i(b)|^{72/i}.$$ We have $\mathrm{ht}(\lambda\cdot b) = |\lambda|^{72} \mathrm{ht}(b)$ for all $\lambda \in \mathbb{R}^{\times}$ and $b\in B(\mathbb{R})$. We define $\mathrm{ht}(v) = \mathrm{ht}(\pi(v))$ for any $v\in V(\mathbb{R})$. Note that for each $a\in \mathbb{R}_{>0}$ the set of elements of $\underline{B}(\mathbb{Z})$ of height less than $a$ is finite. Let $\omega_{G}$ be a generator for the one-dimensional $\mathbb{Q}$-vector space of left-invariant top differential forms on $G$ over $\mathbb{Q}$. It is uniquely determined up to an element of $\mathbb{Q}^{\times}$ and it determines Haar measures $dg$ on $G(\mathbb{R})$ and $G(\mathbb{Q}_p)$ for each prime $p$. \begin{proposition}\label{proposition: tamagawa} \begin{enumerate} \item $\underline{G}$ has class number $1$: $\underline{G}(\mathbb{A}^{\infty}) = \underline{G}(\mathbb{Q})\underline{G}(\widehat{\mathbb{Z}})$. 
\item The product $\vol\left(\underline{G}(\mathbb{Z})\backslash \underline{G}(\mathbb{R}) \right) \cdot \prod_p \vol\left(\underline{G}(\mathbb{Z}_p)\right)$ converges absolutely and equals $2$, the Tamagawa number of $G$. \end{enumerate} \end{proposition} \begin{proof} The group $\underline{G}$ is the Zariski closure of $G$ in $\GL(\underline{V})$ and $G$ contains a maximal $\mathbb{Q}$-split torus consisting of diagonal matrices of $\GL(\underline{V})$. Therefore $\underline{G}$ has class number $1$ by \cite[Theorem 8.11, Corollary 2]{PlatonovRapinchuk-Alggroupsandnumbertheory}. So the product in the second part equals the Tamagawa number $\tau(G)$ of $G\simeq \PSp_8$. Now use the identities $\tau(\PSp_8)=2\tau(\Sp_8)$ \cite[Theorem 2.1.1]{Ono-relativetheorytamagawa} and $\tau(\Sp_8)=1$ (because $\Sp_8$ is simply connected). \end{proof} Let $\omega_V$ be a generator for the free rank one $\mathbb{Z}$-module of left-invariant top differential forms on $\underline{V}$. Then $\omega_V$ is uniquely determined up to sign and it determines Haar measures $dv$ on $V(\mathbb{R})$ and $V(\mathbb{Q}_p)$ for every prime number $p$. We define the form $\omega_B = dp_2 \wedge dp_5 \wedge dp_6 \wedge dp_8 \wedge dp_9 \wedge dp_{12}$ on $\underline{B}$. It defines measures $db$ on $B(\mathbb{R})$ and $B(\mathbb{Q}_p)$ for every prime $p$. \begin{lemma}\label{lemma: the constants W0 and W} There exists a constant $W_0\in \mathbb{Q}^{\times}$ with the following properties: \begin{enumerate} \item Let $\underline{V}(\mathbb{Z}_p)^{\rs} \coloneqq \underline{V}(\mathbb{Z}_p)\cap V^{\rs}(\mathbb{Q}_p)$ and define a function $m_p\colon \underline{V}(\mathbb{Z}_p)^{\rs} \rightarrow \mathbb{R}_{\geq 0}$ by the formula \begin{equation}\label{equation: def mp(v)} m_p(v) \coloneqq \sum_{v' \in \underline{G}(\mathbb{Z}_p)\backslash\left( G(\mathbb{Q}_p)\cdot v\cap \underline{V}(\mathbb{Z}_p) \right)} \frac{\#Z_{\underline{G}}(v)(\mathbb{Q}_p) }{\#Z_{\underline{G}}(v')(\mathbb{Z}_p) }.
\end{equation} Then $m_p(v)$ is locally constant. \item Let $\underline{B}(\mathbb{Z}_p)^{\rs} \coloneqq \underline{B}(\mathbb{Z}_p)\cap B^{\rs}(\mathbb{Q}_p)$ and let $\psi_p\colon \underline{V}(\mathbb{Z}_p)^{\rs} \rightarrow \mathbb{R}_{\geq 0}$ be a bounded, locally constant function which satisfies $\psi_p(v) = \psi_p(v')$ when $v,v'\in \underline{V}(\mathbb{Z}_p)^{\rs}$ are conjugate under the action of $G(\mathbb{Q}_p)$. Then we have the formula \begin{equation}\label{equation: p-adic property W0} \int_{v\in \underline{V}(\mathbb{Z}_p)^{\rs}} \psi_p(v) d v = |W_0|_p \vol\left(\underline{G}(\mathbb{Z}_p)\right) \int_{b\in \underline{B}(\mathbb{Z}_p)^{\rs}} \sum_{v\in G(\mathbb{Q}_p)\backslash \underline{V}_b(\mathbb{Z}_p) } \frac{m_p(v)\psi_p(v)}{\# Z_{\underline{G}}(v)(\mathbb{Q}_p)} d b . \end{equation} \item Let $U_0\subset G(\mathbb{R})$ and $U_1\subset B^{\rs}(\mathbb{R})$ be open subsets such that the product morphism $\mu\colon U_0 \times U_1 \rightarrow V(\mathbb{R})^{\rs}$, given by $(g,b) \mapsto g\cdot \sigma(b)$, is injective. Then we have the formula \begin{equation}\label{equation: archimedean property W_0} \int_{v\in \mu\left(U_0\times U_1 \right)} dv = |W_0|_{\infty} \int_{g\in U_0} dg \int_{b\in U_1} db. \end{equation} \end{enumerate} \end{lemma} \begin{proof} The proof is identical to the proof of \cite[Proposition 3.3]{Romano-Thorne-ArithmeticofsingularitiestypeE}. Here we use the fact that the sum of the degrees of the invariants equals the dimension of the representation: $2+5+6+8+9+12 = 42 = \dim_{\mathbb{Q}}V$. \end{proof} We henceforth fix a constant $W_0 \in \mathbb{Q}^{\times}$ satisfying the properties of Lemma \ref{lemma: the constants W0 and W}. \subsection{Counting integral orbits}\label{subsection: counting with no congruence} In this section we count integral orbits in the representation $\underline{V}$.
For any $\underline{G}(\mathbb{Z})$-invariant subset $X\subset \underline{V}(\mathbb{Z})$, define $$N(X,a) \coloneqq \sum_{\substack{v\in \underline{G}(\mathbb{Z})\backslash X \\ \mathrm{ht}(v)<a}} \frac{1}{\# Z_{\underline{G}}(v)(\mathbb{Z})}.$$ Let $k$ be a field of characteristic not dividing $N$. We say an element $v\in \underline{V}(k)$ is \define{$k$-reducible} if it has zero discriminant or if it is $\underline{G}(k)$-conjugate to the Kostant section $\sigma(\pi(v))$, and \define{$k$-irreducible} otherwise. We say an element $v\in \underline{V}(k)$ is \define{$k$-soluble} if it has nonzero discriminant and lies in the image of the map $\eta_b\colon \mathcal{J}_b(k)/2\mathcal{J}_b(k) \rightarrow \underline{G}(k)\backslash \underline{V}_b(k)$ from Theorem \ref{theorem: inject 2-descent into orbits} where $b = \pi(v)$. For any $X\subset \underline{V}(\mathbb{Z})$ write $X^{irr}\subset X$ for the subset of $\mathbb{Q}$-irreducible elements. Write $V(\mathbb{R})^{sol} \subset V(\mathbb{R})$ for the subset of $\mathbb{R}$-soluble elements. Recall that we have fixed a constant $W_0\in\mathbb{Q}^{\times}$ in \S\ref{subsection: heights and measures}. \begin{theorem}\label{theorem: counting R-soluble elements, no congruence} We have \begin{displaymath} N(\underline{V}(\mathbb{Z})^{irr} \cap V(\mathbb{R})^{sol},a) = \frac{|W_0|}{8}\vol\left(\underline{G}(\mathbb{Z})\backslash G(\mathbb{R})\right) \vol\left(\left\{b \in B(\mathbb{R}) \mid \mathrm{ht}(b) < a \right\} \right)+ o\left(a^{7/12}\right). \end{displaymath} \end{theorem} It will suffice to prove the following proposition. Recall that there exist $\mathbb{G}_m$-actions on $V$ and $B$ such that the morphism $\pi\colon V \rightarrow B$ is $\mathbb{G}_m$-equivariant, giving actions of $\mathbb{R}_{>0}$ on $V(\mathbb{R})$ and $B(\mathbb{R})$. 
\begin{proposition}\label{prop: counting sections} Let $U\subset B^{\rs}(\mathbb{R})$ be a connected open semialgebraic subset stable under the action of $\mathbb{R}_{>0}$ and let $s: U \rightarrow V^{\rs}(\mathbb{R})$ be a semialgebraic $\mathbb{R}_{>0}$-equivariant section of $\pi$ such that $s(U) \cap \{v\in V(\mathbb{R}) \mid \mathrm{ht}(v) = 1\}$ is a bounded subset of $V(\mathbb{R})$. Then $$N(G(\mathbb{R})\cdot s(U) \cap \underline{V}(\mathbb{Z})^{irr},a) = \frac{|W_0|}{\#Z_{G}(v_0)(\mathbb{R})}\vol\left(\underline{G}(\mathbb{Z})\backslash G(\mathbb{R})\right) \vol\left(\left\{b \in U \mid \mathrm{ht}(b) < a \right\} \right) +o\left(a^{7/12} \right),$$ where $v_0$ is any element of $s(U)$. \end{proposition} \begin{proof}[Proof that Proposition \ref{prop: counting sections} implies Theorem \ref{theorem: counting R-soluble elements, no congruence}] Arguing exactly as in \cite[\S1.9]{Thorne-E6paper}, we can find connected semialgebraic open subsets $L_i \subset \{b\in B^{\rs}(\mathbb{R}) \mid \mathrm{ht}(b) = 1\}$ and semialgebraic sections $s_i\colon L_i \rightarrow V(\mathbb{R})$ for $i=1,\dots,r$ such that, if $U_i \coloneqq \mathbb{R}_{>0} \cdot L_i$ and we continue to write $s_i$ for the unique extension of $s_i$ to an $\mathbb{R}_{>0}$-equivariant map $U_i \rightarrow V(\mathbb{R})$, then $$V^{\rs}(\mathbb{R}) = \bigcup_{i=1}^r G(\mathbb{R}) \cdot s_i(U_i).$$ Each $U_i$ is connected and the set $V(\mathbb{R})^{sol} \subset V^{\rs}(\mathbb{R})$ is open and closed by Lemma \ref{lemma: soluble elements open and closed}. So the image $s_i(U_i)$ either consists only of $\mathbb{R}$-soluble elements or contains no $\mathbb{R}$-soluble elements at all. Therefore by replacing $r$ by a smaller integer, we may write $V(\mathbb{R})^{sol} = \bigcup_{i=1}^r G(\mathbb{R}) \cdot s_i(U_i)$. Note that if $b\in B^{\rs}(\mathbb{R})$, the number of $G(\mathbb{R})$-orbits on $V_b(\mathbb{R})^{sol}$ equals $\# J_b(\mathbb{R})/2J_b(\mathbb{R})$. 
Moreover the quantity $\#\left(J_b(\mathbb{R})/2J_b(\mathbb{R})\right)/\#J_b[2](\mathbb{R})$ is independent of $b$, and equals $1/8$; indeed, for any real abelian threefold $A$ we have $A(\mathbb{R})\simeq (\mathbb{R}/\mathbb{Z})^3\times (\mathbb{Z}/2\mathbb{Z})^r$ for some $0\leq r\leq 3$, hence $\#A(\mathbb{R})/2A(\mathbb{R}) = 2^r$ and $\#A[2](\mathbb{R}) = 2^{3+r}$. Theorem \ref{theorem: counting R-soluble elements, no congruence} then follows from the inclusion-exclusion principle applied to the decomposition $V(\mathbb{R})^{sol} = \bigcup_{i=1}^r G(\mathbb{R}) \cdot s_i(U_i)$, together with Proposition \ref{prop: counting sections} applied to (the connected components of) $U_I=\pi\left( \bigcap_{i\in I} G(\mathbb{R})\cdot s_i(U_i) \right)$ for every $I\subset \{1,\dots,r\}$. \end{proof} \begin{lemma}\label{lemma: soluble elements open and closed} The subset $V(\mathbb{R})^{sol}\subset V^{\rs}(\mathbb{R})$ is open and closed in the Euclidean topology. \end{lemma} \begin{proof} We first prove that for each $b\in B^{\rs}(\mathbb{R})$, we can find an open connected neighbourhood $U\subset B^{\rs}(\mathbb{R})$ of $b$ and a partition $W_1 \sqcup \dots \sqcup W_n $ of $V(\mathbb{R})_U$ ($=$ the subset of $V(\mathbb{R})$ mapping to $U$) such that: \begin{enumerate} \item For all $i$, $W_i$ is open and closed in $V(\mathbb{R})_U$ and stable under the action of $G(\mathbb{R})$. \item For all $i$, if two elements $v, v' \in W_i$ have the same image in $U$, then $v$ and $v'$ are $G(\mathbb{R})$-conjugate. \end{enumerate} Indeed, Lemma \ref{lemma: AIT} implies that $V_b(\mathbb{R})$ consists of finitely many $G(\mathbb{R})$-orbits; let $v_1, \dots, v_n \in V_b(\mathbb{R})$ be a system of representatives. Similarly the space $V(\mathbb{R})$ contains finitely many $G(\mathbb{R})$-conjugacy classes of Cartan subalgebras; let $\mathfrak{c}_1,\dots,\mathfrak{c}_k$ be a system of representatives. 
Then every $v\in V^{\rs}(\mathbb{R})$ is $G(\mathbb{R})$-conjugate to an element of $\mathfrak{c}_j^{\rs}(\mathbb{R})$ for some unique $j$, and two elements of $\mathfrak{c}_j^{\rs}(\mathbb{R})$ are $G(\mathbb{R})$-conjugate if and only if they are conjugate under the finite group $N_{G}(\mathfrak{c}_j)(\mathbb{R})$. So after conjugation we may assume that there exists a function $f\colon \{1,\dots,n \} \rightarrow \{1,\dots,k \}$ such that $v_i \in \mathfrak{c}_{f(i)}^{\rs}(\mathbb{R})$ for all $i=1,\dots,n$. For each $j$ write $\pi_j\colon \mathfrak{c}_j^{\rs}(\mathbb{R}) \rightarrow B^{\rs}(\mathbb{R})$ for the $\mathbb{R}$-points of the quotient map. Then $\pi_j$ is a proper local homeomorphism and $N_{G}(\mathfrak{c}_j)(\mathbb{R})$ acts on its fibres. By \cite[Proposition 9.3.9]{RealAlgebraicgeometry} we can find a semialgebraic connected open subset $U\subset B^{\rs}(\mathbb{R})$ containing $b$ and semialgebraic sections $s_i\colon U \rightarrow \mathfrak{c}_{f(i)}^{\rs}(\mathbb{R})$ such that for each $j$, every $v\in \mathfrak{c}_j^{\rs}(\mathbb{R})$ with $\pi_j(v)\in U$ is $G(\mathbb{R})$-conjugate to an element of $s_i(U)$ for some unique $i$ with $f(i)=j$. If we set $W_i = G(\mathbb{R})\cdot s_i(U) $ then the $W_i$ form a partition of $V(\mathbb{R})_U $ with the required properties. Next one can similarly show that for every $b\in B^{\rs}(\mathbb{R})$, there exists an open neighbourhood $U\subset B^{\rs}(\mathbb{R})$ of $b$ such that the family of compact Lie groups $J(\mathbb{R}) \rightarrow B^{\rs}(\mathbb{R})$ is trivialized above $U$, as well as the finite groups $\mathrm{H}^1(\mathbb{R},J_b[2])$ and $\mathrm{H}^1(\mathbb{R},J_b)[2]$. Suppose moreover that we further shrink $U$ such that there exists a partition $V(\mathbb{R})_U = W_1\sqcup \dots \sqcup W_n$ with the properties as above. 
Then the map $V(\mathbb{R})_U \rightarrow \mathrm{H}^1(\mathbb{R},J_b[2])$, obtained from Lemma \ref{lemma: AIT} and by identifying $\mathrm{H}^1(\mathbb{R},J_{b'}[2])$ with $\mathrm{H}^1(\mathbb{R},J_{b}[2])$ for each $b'\in U$, is constant on each $W_i$. Combining the previous paragraphs shows that for every $v\in V^{\rs}(\mathbb{R})$ that is $\mathbb{R}$-soluble (resp. not $\mathbb{R}$-soluble), there exists an open neighbourhood $W\subset V^{\rs}(\mathbb{R})$ of $v$ such that every element of $W$ is $\mathbb{R}$-soluble (resp. not $\mathbb{R}$-soluble). This completes the proof. \end{proof} So to prove Theorem \ref{theorem: counting R-soluble elements, no congruence} it remains to prove Proposition \ref{prop: counting sections}. The proof of this proposition is the same as that of \cite[Theorem 3.1]{Thorne-E6paper}, except that one systematically uses multisets and keeps track of the stabilizers as in \cite[\S10]{Bhargava-Gross-hyperellcurves}. (See the proof of \cite[Theorem 6.6]{Laga-F4paper} for a detailed exposition of such an orbit-counting result in a very similar set-up.) We note that `cutting off the cusp' has been carried out in \cite{Thorne-E6paper}. The only missing ingredient is Proposition \ref{proposition: estimates on red and bigstab}, whose proof we give below. To state the proposition we first introduce some notation. Let $\alpha_0\in \Phi(H,T)$ be the highest root of $H$ with respect to the root basis fixed in \S\ref{subsection: a stable grading}. Let $a_0\in X^*(T^{\theta})$ be the restriction of $\alpha_0$ to $T^{\theta}$. Then $a_0$ is a weight for the $T^{\theta}$-action on $V$. If $v\in V$ we can decompose $v$ into eigenvectors $\sum_a v_a$ where $a$ runs over the weights of $T^{\theta}$ on $V$ and $T^{\theta}$ acts on $v_a$ via $a$. Write $V(a_0)$ for the subset of $v\in V$ with the property that $v_{a_0}=0$. We call $V(a_0)$ the \define{cuspidal region}. 
Thorne has proven in \cite[\S2.3]{Thorne-E6paper} that the number of irreducible integral points in the cuspidal region is negligible. By an identical argument to \cite[\S10.7]{Bhargava-Gross-hyperellcurves} (see also the discussion after \cite[Lemma 6.17]{Laga-F4paper}), Lemma \ref{lemma: red and bigstab mod p} below implies that the number of $\mathbb{Q}$-reducible elements in the main body is negligible. It also implies the following proposition, which will be used in the proof of Theorem \ref{theorem: main theorem}. \begin{proposition}\label{proposition: estimates on red and bigstab} Let $V^{bigstab}$ denote the subset of $\mathbb{Q}$-irreducible elements $v\in \underline{V}(\mathbb{Z})$ with $\#Z_G(v)(\mathbb{Q}) >1$. Then $N(V^{bigstab},a) = o(a^{7/12})$. \end{proposition} Let $N$ be the integer of \S\ref{subsection: integral structures} and let $p$ be a prime not dividing $N$. We define $V_p^{red}\subset \underline{V}(\mathbb{Z}_p)$ to be the set of vectors whose reduction mod $p$ is $\mathbb{F}_p$-reducible. We define $V_p^{bigstab} \subset \underline{V}(\mathbb{Z}_p)$ to be the set of vectors $v\in \underline{V}(\mathbb{Z}_p)$ such that $p \mid \Delta(v)$ or the image of $v$ in $\underline{V}(\mathbb{F}_p)$ has nontrivial stabilizer in $\underline{G}(\mathbb{F}_p)$. \begin{lemma}\label{lemma: red and bigstab mod p} We have $$\lim_{Y\rightarrow +\infty} \prod_{N<p<Y} \int_{V_p^{red}} dv = 0,$$ and $$\lim_{Y\rightarrow +\infty} \prod_{N<p<Y} \int_{V_p^{bigstab}} dv = 0.$$ \end{lemma} \begin{proof} The proof is very similar to the proof of \cite[Proposition 6.9]{Thorne-Romano-E8}. We only treat the case of $V_p^{bigstab}$, the case of $V_p^{red}$ being analogous and treated in detail in \cite[\S10.7]{Bhargava-Gross-hyperellcurves}. Let $p$ be a prime not dividing $N$. 
We have the formula $$ \int_{V_p^{bigstab}} dv = \frac{1}{\#\underline{V}(\mathbb{F}_p)}\# \{v\in \underline{V}(\mathbb{F}_p) \mid \Delta(v) = 0 \text{ or } Z_{\underline{G}}(v)(\mathbb{F}_p) \neq 1 \} .$$ Since $\{ \Delta = 0 \}$ is a hypersurface we have $$\frac{1}{\#\underline{V}(\mathbb{F}_p)}\# \{v\in \underline{V}(\mathbb{F}_p) \mid \Delta(v) = 0\} = O(p^{-1}). $$ If $v\in \underline{V}^{\rs}(\mathbb{F}_p)$ then $\#Z_{\underline{G}}(v)(\mathbb{F}_p)$ depends only on $\pi(v)$ by (the $\mathbb{Z}[1/N]$-analogue of) Lemma \ref{lemma: centralizers with same invariants isomorphic}. Moreover by Proposition \ref{proposition: G-orbits in terms of groupoids} and Lang's theorem we have $\#\underline{V}^{\rs}(\mathbb{F}_p) = \#\underline{G}(\mathbb{F}_p) \#\underline{B}^{\rs}(\mathbb{F}_p) $. So to prove the lemma it suffices to prove that there exists a $0< \delta <1$ such that $$ \frac{1}{\# \underline{B}^{\rs}(\mathbb{F}_p)}\#\{b\in \underline{B}^{\rs}(\mathbb{F}_p) \mid Z_{\underline{G}}(\sigma(b))(\mathbb{F}_p) \neq 1 \} \rightarrow \delta $$ as $p \rightarrow +\infty$. We will achieve this using the results of \cite[\S9.3]{Serre-lecturesonNx(p)}. Recall from \S\ref{subsection: a stable grading} that $T$ is a split maximal torus of $H$ with Lie algebra $\liet$ and Weyl group $W$. These objects spread out to objects $\underline{T}, \underline{H},\underline{\mathfrak{t}}$ over $\mathbb{Z}$. In \S\ref{subsection: further properties of J[2]} we have defined a $W$-torsor $f\colon \liet^{\rs}\rightarrow B^{\rs}$ which extends to a $W$-torsor $\underline{\mathfrak{t}}_S^{\rs} \rightarrow \underline{B}_S^{\rs}$, still denoted by $f$. The group scheme $J[2] \rightarrow \underline{B}^{\rs}_S$ is trivialized along $f$ and the monodromy action is given by the natural action of $W$ on $\Lambda_T/2\Lambda_T$ by the same logic as Proposition \ref{proposition: monodromy of J[2]}. 
Let $C\subset W$ be the subset of elements of $W$ which fix some nonzero element of $\Lambda_T/2\Lambda_T$. Then \cite[Proposition 9.15]{Serre-lecturesonNx(p)} implies that $$ \frac{1}{\# \underline{B}^{\rs}(\mathbb{F}_p)}\#\{b\in \underline{B}^{\rs}(\mathbb{F}_p) \mid Z_{\underline{G}}(\sigma(b))(\mathbb{F}_p) \neq 1 \} = \frac{\#C}{\#W}+O(p^{-1/2}). $$ To finish the proof it suffices to show that $C \neq W$. Let $w_{cox}\in W$ be a Coxeter element. Then the determinant of $1-w_{cox}$ on $\Lambda_T$ is \cite[Theorem 10.6.1]{Carter-SimpleGroupsLieType1972}: $$ \prod_{j} \left(1-e^{2\pi i (\deg(p_j)-1)/12} \right) = \Phi_{12}(1)\Phi_3(1)=3.$$ (Here $\Phi_n$ denotes the $n$-th cyclotomic polynomial; the exponents $\deg(p_j)-1$ run over $1,4,5,7,8,11$, of which $1,5,7,11$ contribute the factor $\Phi_{12}(1)=1$ and $4,8$ the factor $\Phi_3(1)=3$.) Since this determinant is odd, $1-w_{cox}$ acts invertibly on $\Lambda_T/2\Lambda_T$, so a Coxeter element does not fix any nonzero vector of $\Lambda_T/2\Lambda_T$. This implies that $w_{cox} \not\in C$, as desired. \end{proof} \subsection{Counting with congruence conditions}\label{subsection: congruence conditions} We now introduce variants of Theorem \ref{theorem: counting R-soluble elements, no congruence} by imposing certain congruence conditions. We start with a version which involves only finitely many such congruence conditions. Let $M$ be a positive integer and $w\colon \underline{V}(\mathbb{Z}/M\mathbb{Z}) \rightarrow \mathbb{R}$ a function. For any $\underline{G}(\mathbb{Z})$-invariant subset $X\subset \underline{V}(\mathbb{Z})$ we write $$N_w(X,a) \coloneqq \sum_{\substack{v\in \underline{G}(\mathbb{Z})\backslash X \\ \mathrm{ht}(v)<a}} \frac{w\left(v \mod M \right) }{\# Z_{\underline{G}}(v)(\mathbb{Z})}.$$ We write $\mu_w$ for the average of $w$ with respect to the uniform measure on $\underline{V}(\mathbb{Z}/M\mathbb{Z})$. The following theorem follows from the proof of Theorem \ref{theorem: counting R-soluble elements, no congruence} in the same way as \cite[\S2.5]{BS-2selmerellcurves}. Recall that we have fixed a constant $W_0\in\mathbb{Q}^{\times}$ in \S\ref{subsection: heights and measures}. 
\begin{theorem}\label{theorem: counting R-soluble elements, finite congruence} We have \begin{displaymath} N_w(\underline{V}(\mathbb{Z})^{irr} \cap V(\mathbb{R})^{sol},a) = \mu_w\frac{|W_0|}{8}\vol\left(\underline{G}(\mathbb{Z})\backslash G(\mathbb{R})\right) \vol\left(\left\{b \in B(\mathbb{R}) \mid \mathrm{ht}(b) < a \right\} \right)+ o\left(a^{7/12}\right). \end{displaymath} \end{theorem} We now consider the situation where we impose infinitely many congruence conditions; this is needed to sieve out those orbits not corresponding to $2$-Selmer elements. Suppose we are given for each prime $p$ a $\underline{G}(\mathbb{Z}_p)$-invariant function $w_p\colon \underline{V}(\mathbb{Z}_p) \rightarrow [0,1]$ with the following properties: \begin{itemize} \item The function $w_p$ is locally constant outside the closed subset $\{v\in \underline{V}(\mathbb{Z}_p) \mid \Delta(v) = 0\} \subset \underline{V}(\mathbb{Z}_p)$. \item For $p$ sufficiently large, we have $w_p(v) = 1$ for all $v \in \underline{V}(\mathbb{Z}_p)$ such that $p^2 \nmid \Delta(v)$. \end{itemize} In this case we can define a function $w\colon \underline{V}(\mathbb{Z}) \rightarrow [0,1]$ by the formula $w(v) = \prod_{p} w_p(v)$ if $\Delta(v) \neq 0$ and $w(v) = 0$ otherwise. Call a function $w\colon \underline{V}(\mathbb{Z}) \rightarrow [0,1]$ defined by this procedure \define{acceptable}. For any $\underline{G}(\mathbb{Z})$-invariant subset $X\subset \underline{V}(\mathbb{Z})$ we define \begin{equation} N_w(X,a) \coloneqq \sum_{\substack{v\in \underline{G}(\mathbb{Z})\backslash X \\ \mathrm{ht}(v)<a}} \frac{w(v)}{\# Z_{\underline{G}}(v)(\mathbb{Z})}. \end{equation} The proof of the following inequality is standard. (Details can be found in the first part of the proof of \cite[Theorem 2.21]{BS-2selmerellcurves}.) 
\begin{theorem}\label{theorem: counting infinitely many congruence conditions} If $w\colon \underline{V}(\mathbb{Z}) \rightarrow [0,1]$ is an acceptable function we have \begin{displaymath} N_w(\underline{V}(\mathbb{Z})^{irr}\cap V(\mathbb{R})^{sol} ,a) \leq \frac{|W_0|}{8} \left(\prod_p \int_{\underline{V}(\mathbb{Z}_p)} w_p(v) d v \right) \vol\left(\underline{G}(\mathbb{Z}) \backslash G(\mathbb{R}) \right)\vol\left(\{b\in B(\mathbb{R}) \mid \mathrm{ht}(b) < a \} \right) + o(a^{7/12}). \end{displaymath} \end{theorem} \begin{remark} If we were able to prove a so-called uniformity estimate, similar to \cite[Theorem 2.13]{BS-2selmerellcurves}, bounding the error term occurring in Theorem \ref{theorem: counting R-soluble elements, no congruence}, then we could strengthen the above inequality to an actual equality, which would lead to an equality in Theorem \ref{theorem: main theorem}. \end{remark} To count $2$-Selmer elements in $\sh{E}_{\min}$ we will require a slight variant of the above theorem. We write $B(\mathbb{R})_{\min}\subset B(\mathbb{R})$ for the subset of elements $b$ satisfying the following condition: either $p_5(b)>0$, or $p_5(b)=0$ and $p_9(b)\geq 0$. For any $X \subset V(\mathbb{R})$ we write $X_{\min}\subset X$ for the subset of elements whose image under $\pi$ lies in $B(\mathbb{R})_{\min}$. \begin{theorem}\label{theorem: counting infinitely many congruence conditions minimal} Let $w\colon \underline{V}(\mathbb{Z}) \rightarrow [0,1]$ be an acceptable function satisfying $w(v) = w(-v)$ for all $v\in \underline{V}(\mathbb{Z})$. Then we have \begin{displaymath} N_w(\underline{V}(\mathbb{Z})^{irr}\cap V(\mathbb{R})^{sol}_{\min} ,a) \leq \frac{|W_0|}{8} \left(\prod_p \int_{\underline{V}(\mathbb{Z}_p)} w_p(v) d v \right) \vol\left(\underline{G}(\mathbb{Z}) \backslash G(\mathbb{R}) \right)\vol\left(\{b\in B(\mathbb{R})_{\min} \mid \mathrm{ht}(b) < a \} \right) + o(a^{7/12}). 
\end{displaymath} \end{theorem} \begin{proof} This can be proved by adapting the counting arguments in \S\ref{subsection: counting with no congruence}, but we can deduce it easily from Theorem \ref{theorem: counting infinitely many congruence conditions}. Indeed, observe that $p_i(-b) = (-1)^ip_i(b)$ for any $b\in B(\mathbb{R})$. So, away from elements $v$ with $p_5(v) = p_9(v) = 0$, we see that every $\underline{G}(\mathbb{Z})$-orbit in $\underline{V}(\mathbb{Z})\cap V(\mathbb{R})_{\min}$ gives rise to exactly two $\underline{G}(\mathbb{Z})$-orbits in $\underline{V}(\mathbb{Z})$, namely the orbits of $v$ and $-v$. Moreover an element $v\in V(\mathbb{R})$ is $\mathbb{R}$-soluble if and only if $-v$ is. Since the number of $\underline{G}(\mathbb{Z})$-orbits in $\underline{V}(\mathbb{Z})$ whose invariants $p_5$ and $p_9$ vanish is $o(a^{7/12})$, we see that \begin{align*} N_w(\underline{V}(\mathbb{Z})^{irr}\cap V(\mathbb{R})^{sol} ,a) = 2N_w(\underline{V}(\mathbb{Z})^{irr}\cap V(\mathbb{R})^{sol}_{\min} ,a)+o\left(a^{7/12} \right). \end{align*} The theorem now follows from the equality $\vol\left(\{b\in B(\mathbb{R}) \mid \mathrm{ht}(b) <a \} \right) = 2 \vol\left(\{b\in B(\mathbb{R})_{\min} \mid \mathrm{ht}(b) <a \} \right) $. \end{proof} \section{Proof of the main theorem}\label{section: proof of main theorems} In this section we prove the first main theorem stated in the introduction. Recall that we write $\sh{E}$ for the set of elements $b\in \underline{B}(\mathbb{Z})$ of nonzero discriminant. We write $\sh{E}_{\min} \subset \sh{E}$ for the subset of $b\in \sh{E}$ such that: \begin{itemize} \item No prime $q$ has the property that $q^i$ divides $p_i(b)$ for all $i\in \{2,5,6,8,9,12 \}$. \item Either $p_5(b)>0$, or $p_5(b) =0$ and $p_9(b) \geq 0$. 
\end{itemize} The set $\sh{E}_{\min}$ is in canonical bijection with the set of isomorphism classes of pairs $(X,P)$ where $X/\mathbb{Q}$ is a smooth, geometrically connected and projective curve of genus $3$ which is not hyperelliptic and $P \in X(\mathbb{Q})$ is a marked hyperflex point (this follows from \cite[Lemma 4.1]{Thorne-E6paper}). We recall that we have defined a height function $\mathrm{ht}$ for $\sh{E}$ in \S\ref{subsection: heights and measures}. We say a subset $\mathcal{F}\subset \sh{E}$ is defined by \define{finitely many congruence conditions} if $\mathcal{F}$ is the preimage of a subset of $\underline{B}(\mathbb{Z}/N\mathbb{Z})$ under the reduction map $\sh{E} \rightarrow \underline{B}(\mathbb{Z}/N\mathbb{Z})$ for some $N\geq 1$. \begin{theorem}\label{theorem: main theorem} Let $\mathcal{F}\subset \sh{E}$ be a subset defined by finitely many congruence conditions or $\mathcal{F} = \sh{E}_{\min}$. Then we have \begin{equation*} \limsup_{a\rightarrow \infty} \frac{ \sum_{b\in \mathcal{F},\; \mathrm{ht}(b)<a }\# \Sel_2J_b }{\# \{b \in \mathcal{F} \mid \mathrm{ht}(b) < a \}} \leq 3. \end{equation*} \end{theorem} The proof is along the same lines as the discussion in \cite[\S7]{Thorne-Romano-E8}. We will assume that $\mathcal{F} = \sh{E}_{\min}$, the other case being very similar. We first prove a `local' result. Recall that $\sh{E}_p$ is the set of elements $b\in \underline{B}(\mathbb{Z}_p)$ of nonzero discriminant, and define $\sh{E}_{p,\min} \subset \sh{E}_p$ to be the subset of those $b$ that do not lie in $p\cdot \underline{B}(\mathbb{Z}_p)$. (Recall that there is a $\mathbb{G}_m$-action on $\underline{B}$ which satisfies $\lambda\cdot p_i = \lambda^i p_i$.) \begin{proposition}\label{proposition: local result of main theorem} Let $b_0 \in \sh{E}_{\min}$. Then we can find for each prime $p$ dividing $N$ an open compact neighbourhood $W_p$ of $b_0$ in $\sh{E}_p$ such that the following condition holds. 
Let $\sh{E}_W = \sh{E} \cap \left(\prod_{p | N} W_p \right)$, and let $\sh{E}_{W,\min} = \sh{E}_W \cap \sh{E}_{\min}$. Then we have \begin{equation*} \limsup_{a\rightarrow \infty} \frac{ \sum_{b\in \sh{E}_{W,\min},\; \mathrm{ht}(b)<a }\# \Sel_2J_b }{\# \{b \in \sh{E}_{W,\min} \mid \mathrm{ht}(b) < a \}} \leq 3. \end{equation*} \end{proposition} \begin{proof} Choose the sets $W_p$ and integers $n_p\geq 0$ for $p| N$ satisfying the conclusion of Corollary \ref{corollary: weak global integral representatives}. We assume after shrinking the $W_p$ that they satisfy $W_p \subset \sh{E}_{p,\min}$. If $p$ does not divide $N$, set $W_p = \sh{E}_{p,\min}$ and $n_p = 0$. Let $M = \prod_{p} p^{n_p}$. For $v\in \underline{V}(\mathbb{Z})$ with $\pi(v) = b$, define $w(v) \in \mathbb{Q}_{\geq 0}$ by the following formula: \begin{displaymath} w(v) = \begin{cases} \left( \sum_{v'\in \underline{G}(\mathbb{Z})\backslash \left( \underline{G}(\mathbb{Q})\cdot v \cap \underline{V}(\mathbb{Z}) \right)} \frac{\# Z_{\underline{G}}(v')(\mathbb{Q})}{\# Z_{\underline{G}}(v')(\mathbb{Z})} \right)^{-1} & \text{if }b\in p^{n_p}\cdot W_p \text{ and } G(\mathbb{Q}_p)\cdot v \in \eta_{b}(J_b(\mathbb{Q}_p)/2J_b(\mathbb{Q}_p)) \text{ for all }p, \\ 0 & \text{otherwise.} \end{cases} \end{displaymath} Define $w'(v)$ by the formula $w'(v) = \#Z_{\underline{G}}(v)(\mathbb{Q}) w(v)$. Corollary \ref{corollary: Sel2 embeds} and Corollary \ref{corollary: weak global integral representatives} imply that if $b\in M \cdot \sh{E}_{W,\min}$, non-identity elements in the $2$-Selmer group of $J_b$ correspond bijectively to $G(\mathbb{Q})$-orbits in $V_b(\mathbb{Q})$ that intersect $\underline{V}(\mathbb{Z})$ nontrivially, that are $\mathbb{Q}$-irreducible and that are soluble at $\mathbb{R}$ and $\mathbb{Q}_p$ for all $p$. 
In other words, we have the formula: \begin{equation}\label{equation: selmer count vs orbit count} \sum_{\substack{b \in \sh{E}_{W,\min} \\ \mathrm{ht}(b) <a}}\left( \#\Sel_2(J_b)-1 \right) = \sum_{\substack{b \in M\cdot\sh{E}_{W,\min} \\ \mathrm{ht}(b) <M^{72}a}}\left( \#\Sel_2(J_b)-1 \right) = N_{w'}(\underline{V}(\mathbb{Z})^{irr}\cap V(\mathbb{R})^{sol}_{\min} ,M^{72}a). \end{equation} Proposition \ref{proposition: estimates on red and bigstab} implies that \begin{equation}\label{equation: compare w and w'} N_{w'}(\underline{V}(\mathbb{Z})^{irr}\cap V(\mathbb{R})^{sol}_{\min} ,M^{72}a) = N_{w}(\underline{V}(\mathbb{Z})^{irr}\cap V(\mathbb{R})^{sol}_{\min},M^{72}a) + o(a^{7/12}). \end{equation} It is more convenient to work with $w(v)$ than with $w'(v)$ because $w(v)$ is an acceptable function in the sense of \S\ref{subsection: congruence conditions}. Indeed, for $v\in \underline{V}(\mathbb{Z}_p)$ with $\pi(v)=b$, define $w_p(v) \in \mathbb{Q}_{\geq 0}$ by the following formula \begin{displaymath} w_p(v) = \begin{cases} \left( \sum_{v'\in \underline{G}(\mathbb{Z}_p)\backslash \left( \underline{G}(\mathbb{Q}_p)\cdot v \cap \underline{V}(\mathbb{Z}_p) \right)} \frac{\# Z_{\underline{G}}(v')(\mathbb{Q}_p)}{\# Z_{\underline{G}}(v')(\mathbb{Z}_p)} \right)^{-1} & \text{if }b\in p^{n_p}\cdot W_p \text{ and } G(\mathbb{Q}_p)\cdot v \in \eta_{b}(J_b(\mathbb{Q}_p)/2J_b(\mathbb{Q}_p) ), \\ 0 & \text{otherwise.} \end{cases} \end{displaymath} Then an argument identical to \cite[Proposition 3.6]{BS-2selmerellcurves} shows that $w(v) =\prod_p w_p(v)$ for all $v\in\underline{V}(\mathbb{Z})$. The remaining properties for $w(v)$ to be acceptable follow from Part 1 of Lemma \ref{lemma: the constants W0 and W} and Proposition \ref{prop: integral reps squarefree discr}. 
From Lemma \ref{lemma: the constants W0 and W} we obtain the formula \begin{equation}\label{equation: mass formula w} \int_{v\in \underline{V}(\mathbb{Z}_p)} w_p(v) d v = |W_0|_p \vol\left(\underline{G}(\mathbb{Z}_p) \right) \int_{b \in p^{n_p}\cdot {W_p}} \frac{\#J_b(\mathbb{Q}_p)/2J_b(\mathbb{Q}_p)}{\#J_b[2](\mathbb{Q}_p)}d b. \end{equation} Using the equality $\#J_b(\mathbb{Q}_p)/2J_b(\mathbb{Q}_p) = |1/8|_p \#J_b[2](\mathbb{Q}_p)$ which holds for all $b\in \sh{E}_p$, we see that the integral on the right hand side equals $|1/8|_p\vol(p^{n_p}\cdot W_p)=|1/8|_pp^{-n_p\dim_{\mathbb{Q}}V} \vol(W_p)$. Combining the identities (\ref{equation: selmer count vs orbit count}) and (\ref{equation: compare w and w'}) shows that \begin{align*} \limsup_{a\rightarrow +\infty} a^{-7/12} \sum_{\substack{b \in \sh{E}_{W,\min} \\ \mathrm{ht}(b) <a}}\left( \#\Sel_2(J_b)-1 \right) & = \limsup_{a\rightarrow +\infty} a^{-7/12}N_w(\underline{V}(\mathbb{Z})^{irr}\cap V(\mathbb{R})^{sol}_{\min} ,M^{72}a). \end{align*} By Theorem \ref{theorem: counting infinitely many congruence conditions minimal}, this is in turn less than or equal to \begin{displaymath} \frac{|W_0|}{8} \left(\prod_p \int_{\underline{V}(\mathbb{Z}_p)} w_p(v) d v \right) \vol\left(\underline{G}(\mathbb{Z}) \backslash G(\mathbb{R}) \right) 2^5M^{42}. \end{displaymath} Using (\ref{equation: mass formula w}) this simplifies to \begin{displaymath} \vol\left(\underline{G}(\mathbb{Z})\backslash \underline{G}(\mathbb{R}) \right) \prod_p \vol\left(\underline{G}(\mathbb{Z}_p)\right) 2^5\prod_{p} \vol(W_p). \end{displaymath} On the other hand, an elementary sieving argument shows that \begin{displaymath} \lim_{a\rightarrow +\infty} \frac{\# \{b \in \sh{E}_{W,\min} \mid \mathrm{ht}(b) < a \}}{a^{7/12}} = 2^5\prod_p \vol(W_p). 
\end{displaymath} We conclude that \begin{displaymath} \limsup_{a\rightarrow \infty} \frac{ \sum_{b\in \sh{E}_{W,\min},\; \mathrm{ht}(b)<a } \left(\# \Sel_2J_b-1 \right) }{\# \{b \in \sh{E}_{W,\min} \mid \mathrm{ht}(b) < a \}} \leq \vol\left(\underline{G}(\mathbb{Z})\backslash \underline{G}(\mathbb{R}) \right) \cdot \prod_p \vol\left(\underline{G}(\mathbb{Z}_p)\right). \end{displaymath} Since the Tamagawa number of $\underline{G}$ is $2$ (Proposition \ref{proposition: tamagawa}), the proposition follows. \end{proof} To deduce Theorem \ref{theorem: main theorem} from Proposition \ref{proposition: local result of main theorem}, choose for each $i\geq 1$ sets $W_{p,i} \subset\sh{E}_p$ (for $p$ dividing $N$) such that if $W_i = \sh{E} \cap \left( \prod_{p | N} W_{p,i} \right)$, then $W_i$ satisfies the conclusion of Proposition \ref{proposition: local result of main theorem} and we have a countable partition $\sh{E}_{\min} = \sh{E}_{W_1,\min} \sqcup \sh{E}_{W_2,\min} \sqcup \cdots$. By an argument identical to the proof of Theorem 7.1 in \cite{Thorne-Romano-E8}, we see that for any $\varepsilon >0$, there exists $k\geq 1$ such that \begin{displaymath} \limsup_{a\rightarrow +\infty} \frac{ \sum_{\substack{b \in \sqcup_{i\geq k} \sh{E}_{W_i,\min} , \mathrm{ht}(b) < a }} \left(\#\Sel_2J_b -1\right) }{ \# \{b \in \sh{E}_{\min} \mid \mathrm{ht}(b) < a \} }<\varepsilon. \end{displaymath} This implies that \begin{align*} \limsup_{a\rightarrow +\infty} \frac{ \sum_{\substack{b\in \sh{E}_{\min} , \mathrm{ht}(b) < a }} \left(\#\Sel_2J_b -1 \right) }{ \# \{b \in \sh{E}_{\min} \mid \mathrm{ht}(b) < a \} } &\leq 2 \limsup_{a\rightarrow +\infty}\frac{\# \{b \in \sqcup_{i<k} \sh{E}_{W_i,\min} \mid \mathrm{ht}(b) < a \} }{ \# \{b \in \sh{E}_{\min} \mid \mathrm{ht}(b) < a \} } +\varepsilon \\ &\leq 2+\varepsilon. \end{align*} Since the above inequality is true for any $\varepsilon >0$, we conclude the proof of Theorem \ref{theorem: main theorem}. 
\section{Applications to rational points} \label{section: applications to rational points} The aim of the last section of this paper is to prove the following concrete consequence of Theorem \ref{theorem: main theorem}. Recall that for each $b\in \sh{E}$ we have a smooth projective curve $C_b/\mathbb{Q}$ with marked rational point $P_{\infty} \in C_b(\mathbb{Q})$. \begin{theorem}\label{theorem: poonen stoll analogue} A positive proportion of curves $C_b$ for $b$ in $\sh{E}$ have only one rational point. More precisely, the quantity \begin{equation*} \liminf_{a\rightarrow \infty} \frac{ \# \{b \in \sh{E} \mid \mathrm{ht}(b)<a ,\, C_b(\mathbb{Q})=\{P_{\infty}\} \} }{\# \{b \in \sh{E} \mid \mathrm{ht}(b) < a \}} \end{equation*} is strictly positive. \end{theorem} The proof will be given at the end of this section. We will achieve this by building on the work of Poonen and Stoll \cite{PoonenStoll-Mosthyperellipticnorational} where they prove the corresponding result for odd hyperelliptic curves. We advise the reader to consult the introduction of that paper where the strategy of the proof is carefully explained. We start by introducing some notation from \cite{PoonenStoll-Mosthyperellipticnorational}. \begin{itemize} \item For a field $k$ and integer $g \geq 1$ we let $\P$ be the usual map $k^g \setminus \{0 \} \rightarrow \P^{g-1}(k)$. We write $\rho$ for the reduction map $\P^{g-1}(\mathbb{Q}_p) = \P^{g-1}(\mathbb{Z}_p) \rightarrow \P^{g-1}(\mathbb{F}_p)$ or for the composition $\mathbb{Q}_p^g \setminus \{0\} \xrightarrow{\P} \P^{g-1}(\mathbb{Q}_p) \xrightarrow{\rho} \P^{g-1}(\mathbb{F}_p) $. If $T$ is a subset of a set $S$ and $f$ is a function defined only on $T$, then $f(S)$ means $f(T)$. \item If $A$ is an abelian variety over $\mathbb{Q}_p$ of dimension $g$ we write $\log$ for the logarithm homomorphism $A(\mathbb{Q}_p) \rightarrow \mathrm{H}^0(A,\Omega^1_{A/\mathbb{Q}_p})^{\vee} \simeq \mathbb{Q}_p^g,$ see \cite[\S4]{PoonenStoll-Mosthyperellipticnorational}. 
The map $\log$ is a local isomorphism with kernel $A(\mathbb{Q}_p)_{tors}$, the torsion points of $A(\mathbb{Q}_p)$. The image of $\log$ is a lattice in $\mathbb{Q}_p^g$, so after choosing an appropriate basis of $1$-forms, $\log$ becomes a surjective homomorphism $A(\mathbb{Q}_p) \rightarrow \mathbb{Z}_p^g$. \item We define $\rho\log$ as the composition of $\log\colon A(\mathbb{Q}_p) \rightarrow \mathbb{Z}_p^g$ with the partially defined map $\rho\colon \mathbb{Z}_p^g \dashrightarrow \P^{g-1}(\mathbb{F}_p)$. The map $\rho \log$ is defined on $A(\mathbb{Q}_p) \setminus A(\mathbb{Q}_p)_{tors}$. \item If $A$ is an abelian variety over $\mathbb{Q}$ we have the $2$-Selmer group $\Sel_2A$ associated to $A$, which comes with a homomorphism $\Sel_2 A \rightarrow A(\mathbb{Q}_2)/2A(\mathbb{Q}_2)$. Write $\sigma$ for the composite of the latter homomorphism with the mod $2$ reduction of the logarithm map $\log \otimes \mathbb{F}_2 \colon A(\mathbb{Q}_2)/2A(\mathbb{Q}_2) \rightarrow \mathbb{F}_2^g $: it defines a homomorphism $\sigma \colon \Sel_2 A \rightarrow \mathbb{F}_2^g$. \end{itemize} Recall that we have defined the abelian scheme $J \rightarrow B^{\rs}$ as the Jacobian of the family of smooth projective curves $C^{\rs}\rightarrow B^{\rs}$ in \S\ref{subsection: a family of curves}. \begin{proposition}\label{proposition: generic manin mumford} Let $k$ be a field of characteristic zero with separable closure $k^s$ and let $\Spec k \rightarrow B^{\rs}$ be a map to the generic point of $B^{\rs}$. Let $X/k$ be the curve corresponding to this map, with marked point $P_{\infty} \in X(k)$. Let $J_X$ be the Jacobian variety of $X$. Use the point $P_{\infty}$ to embed $X$ in $J_X$. Let $J_X(k^s)_{tors}$ denote the torsion points in $J_X(k^s)$. Then we have $X(k^s)\cap J_X(k^s)_{tors} = \{0\}$. 
\end{proposition} \begin{proof} We may assume that $k = \mathbb{C}(p_2,p_5,p_6,p_8,p_9,p_{12})$ and $X$ is given by the projective closure of the equation $y^3=x^4+(p_2x^2+p_5x+p_8)y+p_6x^2+p_9x+p_{12}$. Since this equation is the versal deformation of the singularity $y^3=x^4$, \cite[Theorem 1(2)]{Wajnryb-monodromygroupplanecurvesingularity} shows that the monodromy group contains $\ker\left(\Sp_6(\mathbb{Z}) \rightarrow \Sp_6(\mathbb{Z}/2\mathbb{Z}) \right)$. Suppose $P \in X(k^s)$ is a torsion point of exact order $n>1$. By an argument identical to \cite[Theorem 7.1]{PoonenStoll-Mosthyperellipticnorational}, using the monodromy action and the fact that $X$ is not hyperelliptic, we may assume that $n=2$ or $4$. If $n=4$ then $3P-3P_{\infty}$ is linearly equivalent to $Q-P_{\infty}$ for some $Q\in X(k^s)$ different from $P_{\infty}$, again using the monodromy action. So $Q+2P_{\infty} \sim 3P$ and the line bundle $\mathcal{O}(Q+2P_{\infty})$ has at least $2$ independent global sections. Since the divisor $4P_{\infty}$ is canonical, Riemann-Roch implies that $2P_{\infty}-Q$ is linearly equivalent to an effective divisor. This shows that $Q = P_{\infty}$, contradicting our previous assumptions. If $n=2$ then $2P-2P_{\infty}$ is a principal divisor, again a contradiction. We have obtained a contradiction in all cases, proving the proposition. \end{proof} For every prime $p$ we obtain a family of $p$-adic Lie groups $J(\mathbb{Q}_p)\rightarrow B^{\rs}(\mathbb{Q}_p)$. As before we define $\sh{E}_p = \underline{B}(\mathbb{Z}_p)\cap B^{\rs}(\mathbb{Q}_p)$. We define a measure on $\sh{E}_p$ by restricting the measure on $\underline{B}(\mathbb{Z}_p) = \mathbb{Z}_p^6$ defined in \S\ref{subsection: heights and measures}.
Following \cite[\S8.2]{PoonenStoll-Mosthyperellipticnorational}, we say $U \subset \sh{E}_p$ is a \define{congruence class} if $U$ is the preimage of a subset of $\underline{B}(\mathbb{Z}_p/p^e\mathbb{Z}_p)$ under the reduction map $\underline{B}(\mathbb{Z}_p) \rightarrow \underline{B}(\mathbb{Z}_p/p^e\mathbb{Z}_p)$ for some $e\geq 1$. We say a congruence class $U$ is \define{trivializing} if $J(\mathbb{Q}_p) \rightarrow B^{\rs}(\mathbb{Q}_p)$ can be trivialized above $U$, in the sense of \cite[Definition 8.1]{PoonenStoll-Mosthyperellipticnorational}. The following equidistribution result is a crucial ingredient in the proof of Theorem \ref{theorem: poonen stoll analogue} and readily follows from the proof of Theorem \ref{theorem: main theorem}. (See \cite[Theorem 12.4]{Bhargava-Gross-hyperellcurves} for more details.) \begin{theorem}\label{theorem: equidistribution selmer} Let $U \subset \sh{E}_2$ be a trivializing congruence class. For any $w\in \mathbb{F}_2^3$, the average of $\#\{s\in \Sel_2J_b\setminus \{0\} \mid \sigma(s)=w \}$, as $b$ varies in $\sh{E}\cap U$, is bounded above by $1/4$. \end{theorem} As for the average size of the $2$-Selmer group, we only obtain an upper bound, but this will be enough for our purposes. Let $Z\subset \sh{E}_p$ be the subset of $b\in \sh{E}_p$ such that $C_b(\mathbb{Q}_p)\cap J_b(\mathbb{Q}_p)_{tors}\neq \{0\}$, where $C_b$ is embedded in $J_b$ via the Abel-Jacobi map with basepoint $P_{\infty}$. \begin{lemma}\label{lemma: density nontrivial torsion is zero} The set $Z$ is closed in $\sh{E}_p$ and of measure zero. Moreover the set of all $b\in \sh{E}$ such that $b \in Z$ has density zero. \end{lemma} \begin{proof} The first part follows from the previous proposition in the same way as \cite[Proposition 8.5]{PoonenStoll-Mosthyperellipticnorational} follows from \cite[Theorem 7.1]{PoonenStoll-Mosthyperellipticnorational}.
The second part follows from the first part in a similar way as \cite[Corollary 8.6]{PoonenStoll-Mosthyperellipticnorational} follows from \cite[Proposition 8.5]{PoonenStoll-Mosthyperellipticnorational}. \end{proof} \begin{lemma}\label{lemma: rho locally constant in trivializing congruence class} Let $U\subset \sh{E}_p$ be a trivializing congruence class. Let $Z$ be as in Lemma \ref{lemma: density nontrivial torsion is zero}. Then $\rho \log C_b(\mathbb{Q}_p)$ in $\P^2(\mathbb{F}_p)$ is locally constant as $b$ varies in $U\setminus Z$. \end{lemma} \begin{proof} The proof is very similar to that of \cite[Proposition 8.7]{PoonenStoll-Mosthyperellipticnorational}; we sketch the details. Let $U' = U\setminus Z$. Choose an isomorphism of $p$-adic analytic manifolds $J(\mathbb{Q}_p)_{U'}\simeq \mathbb{Z}_p^3\times F\times U'$ over $U'$, where $F$ is a finite group. We have a chain of analytic maps of $p$-adic manifolds \begin{align*} C(\mathbb{Q}_p)_{U'} \rightarrow J(\mathbb{Q}_p)_{U'} \xrightarrow{\log} \mathbb{Z}_p^3\times U' \twoheadrightarrow \mathbb{Z}_p^3 \dashrightarrow \P^2(\mathbb{Q}_p) \xrightarrow{\rho} \P^{2}(\mathbb{F}_p), \end{align*} except that the dashed arrow is only defined on $\mathbb{Z}_p^3 \setminus \{0\}$. The inverse image of $0 \in \mathbb{Z}_p^3$ in $C(\mathbb{Q}_p)_{U'}$ is $P_{\infty,U'}$, the section at infinity. Since the latter is a smooth divisor on $C(\mathbb{Q}_p)_{U'}$, the composition $C(\mathbb{Q}_p)_{U'} \setminus P_{\infty,U'} \rightarrow \P^{2}(\mathbb{F}_p)$ extends to a continuous map $e\colon C(\mathbb{Q}_p)_{U'} \rightarrow \P^2(\mathbb{F}_p)$. By continuity the fibres of $e$ are open and closed. So are their images in $U'$, since $C\rightarrow B$ is flat and proper. Thus for each $c\in \P^{2}(\mathbb{F}_p)$, the set of $b\in U'$ such that $c\in e(C_b(\mathbb{Q}_p))$ is open and closed. By considering intersections and complements of such sets, we see that $e(C_b(\mathbb{Q}_p))$ is locally constant as $b$ varies in $U'$.
The lemma follows from the equality $\rho\log(C_b(\mathbb{Q}_p)) = e(C_b(\mathbb{Q}_p))$. \end{proof} \begin{proposition}\label{proposition: existence good curve} There exists an element $b\in \sh{E}$ such that $b \in \sh{E}_2 \setminus Z$ and $\#\rho \log C_b(\mathbb{Q}_2) = 2$. \end{proposition} \begin{proof} We choose $b\in \sh{E}$ such that $C_b$ is isomorphic to the projective closure of the smooth curve $y^3+y = x^4+x+1$. Let $\mathcal{X}/\mathbb{Z}_2$ be the projective curve over $\mathbb{Z}_2$ given by the latter equation. Then $\mathcal{X}$ has good reduction at $2$ and $\#\mathcal{X}(\mathbb{F}_2)=1$. Let $\mathscr{J}/\mathbb{Z}_2$ be the Jacobian of $\mathcal{X}$. We have $\mathscr{J}[2](\overbar{\mathbb{F}}_2) = 0$ because $\mathcal{X}_{\mathbb{F}_2}$ is up to substitution given by the supersingular normal form of \cite[Proposition 2.1]{Nart-nonhyperellipticcharacteristictwo}, so $\mathscr{J}[2](\mathbb{F}_2)$ is trivial too. To determine $\mathscr{J}[2](\mathbb{Q}_2)$, we explicitly compute the bitangents of $\mathcal{X}_{\mathbb{Q}_2}$ different from the line at infinity. They are of the form $y = ax+b$ for some $a,b\in \overbar{\mathbb{Q}}_2$. We solve the equation $x^4+x+1-(ax+b)^3-(ax+b) = (x^2+cx+d)^2$, where $c,d\in \overbar{\mathbb{Q}}_2$. Then $c$ and $d$ are polynomials in $a$ and $b$, and we are left with two polynomial conditions in $a$ and $b$. The resultant of these two polynomials with respect to the variable $b$ is up to a constant equal to \begin{dmath*} 4096+12288 a-126976 a^3+110592 a^6-165888 a^7-40704 a^9+70656 a^{10}-34560 a^{11}+17280 a^{15}+1344 a^{18}+480 a^{19}+a^{27}. \end{dmath*} A calculation in \texttt{Magma} \cite{MAGMA} shows that this polynomial is irreducible over $\mathbb{Q}_2$, so the absolute Galois group of $\mathbb{Q}_2$ acts transitively on these $27$ bitangents. Thus Lemma \ref{lemma: bitangents and 2-torsion} implies that $\mathscr{J}[2](\mathbb{Q}_2) = 0$.
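The elimination step behind the displayed resultant is a routine coefficient comparison; we record it here as a sketch for the reader's convenience (the normalization agrees with the resultant displayed above only up to the constant already mentioned there).

```latex
Expanding both sides of
$x^4+x+1-(ax+b)^3-(ax+b) = (x^2+cx+d)^2$
and comparing coefficients of $x^3$, $x^2$, $x^1$ and $x^0$ gives
\begin{align*}
2c &= -a^3, & c^2+2d &= -3a^2b, \\
2cd &= 1-a-3ab^2, & d^2 &= 1-b-b^3.
\end{align*}
The first two equations determine $c = -\tfrac{1}{2}a^3$ and
$d = -\tfrac{3}{2}a^2b-\tfrac{1}{8}a^6$; substituting these into the last
two equations yields the two polynomial conditions in $a$ and $b$, and the
resultant of the resulting pair with respect to $b$ recovers, up to a
constant, the degree-$27$ polynomial displayed above.
```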
By \cite[Lemma 10.1]{PoonenStoll-Mosthyperellipticnorational} we see that the image of $\mathscr{J}(\mathbb{Q}_2)$ under the logarithm map with respect to a $\mathbb{Z}_2$-basis of $\mathrm{H}^0(\mathscr{J},\Omega^1_{\mathscr{J}/\mathbb{Z}_2})$ is $\left(2\mathbb{Z}_2\right)^3$. We can compute the logarithm map explicitly on $\mathcal{X}(\mathbb{Q}_2)$ as follows. Since every element of $\mathcal{X}(\mathbb{Q}_2)$ reduces to the point at infinity $P_{\infty}$, the set $\mathcal{X}(\mathbb{Q}_2)$ consists of a single residue disk around $P_{\infty}$. Homogenizing the above equation and setting $y$ equal to $1$ gives the equation \begin{equation*} z+z^3 = x^4+xz^3+z^4. \end{equation*} The point $P_{\infty}$ now corresponds to the point $(0,0)$ and $x$ is a uniformizer at $(0,0)$. The map $Q \mapsto x(Q)$ defines a homeomorphism $\mathcal{X}(\mathbb{Q}_2) \simeq 2\mathbb{Z}_2$. Taking the derivative of the above equation leads us to define \begin{equation*} \omega_1 = \frac{dx}{3z^2+1-3z^2x-4z^3}. \end{equation*} Moreover we set $\omega_2 = x\omega_1$ and $\omega_3 = z\omega_1$. Then $\{\omega_1,\omega_2,\omega_3\}$ forms a basis for the $\mathbb{Z}_2$-module $\mathrm{H}^0(\mathcal{X},\Omega^1_{\mathcal{X}/\mathbb{Z}_2})$. The logarithm map on $\mathcal{X}(\mathbb{Q}_2)$ is given by explicitly integrating these $1$-forms. A computation reveals that \begin{align*} z = x^4-x^{12}+O(x^{13}), \\ \omega_1 = \left(1-3x^8+3x^9+ O(x^{12}) \right) dx. \end{align*} Here each $\omega_i$ has a power series expansion with coefficients in $\mathbb{Z}_2$. This implies that the logarithm map, using the uniformizer $x$ and the differentials $\omega_i$, is explicitly given by \begin{align*} x \mapsto \left(x-\frac{x^9}{3}+\frac{3x^{10}}{10}+O(x^{13}), \frac{x^2}{2}-\frac{3x^{10}}{10}+O(x^{11}), \frac{x^5}{5}+O(x^{13}) \right). \end{align*} This description shows that $\rho \log \mathcal{X}(\mathbb{Q}_2)=\{(1:1:0),(1:0:0)\}$.
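The last claim can be checked by tracking $2$-adic valuations in the displayed expansion; the following sketch records the bookkeeping.

```latex
Write $x = 2^k u$ with $k \geq 1$ and $u \in \mathbb{Z}_2^{\times}$. Since the
omitted terms have strictly larger valuation, the three coordinates of the
logarithm have $2$-adic valuations
\begin{equation*}
v_2\!\left(x-\tfrac{x^9}{3}+\cdots\right) = k, \qquad
v_2\!\left(\tfrac{x^2}{2}-\cdots\right) = 2k-1, \qquad
v_2\!\left(\tfrac{x^5}{5}+\cdots\right) = 5k.
\end{equation*}
If $k=1$, the first two valuations equal $1$ and the third equals $5$, so
dividing by $2$ and reducing modulo $2$ gives the point
$(u:u^2:0) = (1:1:0)$ in $\P^2(\mathbb{F}_2)$. If $k \geq 2$ then
$k < 2k-1 < 5k$, so the first coordinate strictly dominates and reduction
gives $(1:0:0)$. Both cases occur, whence
$\rho\log\mathcal{X}(\mathbb{Q}_2)=\{(1:1:0),(1:0:0)\}$.
```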
Moreover, the last power series has no roots in $2\mathbb{Z}_2$ apart from $0$ by Newton polygon considerations. This implies that $\mathcal{X}(\mathbb{Q}_2) \cap \mathscr{J}(\mathbb{Q}_2)_{tors} = \{0\}$, hence $b$ does not lie in $Z$. \end{proof} We are now ready to prove Theorem \ref{theorem: poonen stoll analogue}. Let $U\subset \sh{E}_2$ be a trivializing congruence class containing an element $b_0\in \sh{E}$ satisfying the conclusion of Proposition \ref{proposition: existence good curve}. Shrink $U$ using Lemma \ref{lemma: rho locally constant in trivializing congruence class} so that the set $\rho\log C_b(\mathbb{Q}_2) \subset \P^2(\mathbb{F}_2)$ is constant for all $b\in U' = U\setminus Z$, say equal to $I$. Then \cite[Corollary 6.3]{PoonenStoll-Mosthyperellipticnorational} shows that $C_b(\mathbb{Q}) = \{P_{\infty}\}$ for all $b\in \sh{E} \cap U'$ with the property that the map $\sigma\colon \Sel_2 J_b \rightarrow \mathbb{F}_2^3$ is injective and $I \cap \P\sigma(\Sel_2 J_b) = \emptyset$. By Theorem \ref{theorem: equidistribution selmer} and Lemma \ref{lemma: density nontrivial torsion is zero}, the proportion of $b\in \sh{E} \cap U$ satisfying these conditions is at least $1-1/4-\#I/4=1/4>0$. This proves the theorem. \begin{bibdiv} \begin{biblist} \bib{AltmanKleimanSteven-IrreducibilityCompactifiedJacobian}{inproceedings}{ author={Altman, Allen~B.}, author={Iarrobino, Anthony}, author={Kleiman, Steven~L.}, title={Irreducibility of the compactified {J}acobian}, date={1977}, booktitle={Real and complex singularities ({P}roc. {N}inth {N}ordic {S}ummer {S}chool/{NAVF} {S}ympos. {M}ath., {O}slo, 1976)}, pages={1\ndash 12}, review={\MR{0498546}}, } \bib{AltmanKleiman-CompactifyingThePicardScheme}{article}{ author={Altman, Allen~B.}, author={Kleiman, Steven~L.}, title={Compactifying the {P}icard scheme}, date={1980}, ISSN={0001-8708}, journal={Adv.
in Math.}, volume={35}, number={1}, pages={50\ndash 112}, url={https://doi.org/10.1016/0001-8708(80)90043-2}, review={\MR{555258}}, } \bib{MAGMA}{article}{ author={Bosma, Wieb}, author={Cannon, John}, author={Playoust, Catherine}, title={The {M}agma algebra system. {I}. {T}he user language}, date={1997}, ISSN={0747-7171}, journal={J. Symbolic Comput.}, volume={24}, number={3-4}, pages={235\ndash 265}, url={http://dx.doi.org/10.1006/jsco.1996.0125}, note={Computational algebra and number theory (London, 1993)}, review={\MR{MR1484478}}, } \bib{RealAlgebraicgeometry}{book}{ author={Bochnak, Jacek}, author={Coste, Michel}, author={Roy, Marie-Fran\c{c}oise}, title={Real algebraic geometry}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]}, publisher={Springer-Verlag, Berlin}, date={1998}, volume={36}, ISBN={3-540-64663-9}, url={https://doi.org/10.1007/978-3-662-03718-8}, note={Translated from the 1987 French original, Revised by the authors}, review={\MR{1659509}}, } \bib{Bhargava-Gross-hyperellcurves}{inproceedings}{ author={Bhargava, Manjul}, author={Gross, Benedict~H.}, title={The average size of the 2-{S}elmer group of {J}acobians of hyperelliptic curves having a rational {W}eierstrass point}, date={2013}, booktitle={Automorphic representations and {$L$}-functions}, series={Tata Inst. Fundam. Res. Stud. Math.}, volume={22}, publisher={Tata Inst. Fund. Res., Mumbai}, pages={23\ndash 91}, review={\MR{3156850}}, } \bib{BhargavaGross-AIT}{incollection}{ author={Bhargava, Manjul}, author={Gross, Benedict~H.}, title={Arithmetic invariant theory}, date={2014}, booktitle={Symmetry: representation theory and its applications}, series={Progr. 
Math.}, volume={257}, publisher={Birkh\"{a}user/Springer, New York}, pages={33\ndash 54}, url={https://doi.org/10.1007/978-1-4939-1590-3_3}, review={\MR{3363006}}, } \bib{BirkenhakeLange-CAV}{book}{ author={Birkenhake, Christina}, author={Lange, Herbert}, title={Complex abelian varieties}, edition={Second}, series={Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, publisher={Springer-Verlag, Berlin}, date={2004}, volume={302}, ISBN={3-540-20488-1}, url={https://doi.org/10.1007/978-3-662-06307-1}, review={\MR{2062673}}, } \bib{BLR-NeronModels}{book}{ author={Bosch, Siegfried}, author={L\"{u}tkebohmert, Werner}, author={Raynaud, Michel}, title={N\'{e}ron models}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]}, publisher={Springer-Verlag, Berlin}, date={1990}, volume={21}, ISBN={3-540-50587-3}, url={https://doi.org/10.1007/978-3-642-51438-8}, review={\MR{1045822}}, } \bib{Borel-propertieschevalley}{incollection}{ author={Borel, Armand}, title={Properties and linear representations of {C}hevalley groups}, date={1970}, booktitle={Seminar on {A}lgebraic {G}roups and {R}elated {F}inite {G}roups ({T}he {I}nstitute for {A}dvanced {S}tudy, {P}rinceton, {N}.{J}., 1968/69)}, series={Lecture Notes in Mathematics, Vol. 
131}, publisher={Springer, Berlin}, pages={1\ndash 55}, review={\MR{0258838}}, } \bib{BS-4Selmer}{unpublished}{ author={Bhargava, Manjul}, author={Shankar, Arul}, title={The average number of elements in the 4-{S}elmer groups of elliptic curves is 7}, date={2013}, note={Arxiv Preprint, available at \url{https://arxiv.org/abs/1312.7333v1}}, } \bib{BS-5Selmer}{unpublished}{ author={Bhargava, Manjul}, author={Shankar, Arul}, title={The average size of the 5-{S}elmer group of elliptic curves is 6, and the average rank is less than 1}, date={2013}, note={Arxiv Preprint, available at \url{https://arxiv.org/abs/1312.7859v1}}, } \bib{BS-2selmerellcurves}{article}{ author={Bhargava, Manjul}, author={Shankar, Arul}, title={Binary quartic forms having bounded invariants, and the boundedness of the average rank of elliptic curves}, date={2015}, ISSN={0003-486X}, journal={Ann. of Math. (2)}, volume={181}, number={1}, pages={191\ndash 242}, url={https://doi.org/10.4007/annals.2015.181.1.3}, review={\MR{3272925}}, } \bib{BS-3Selmer}{article}{ author={Bhargava, Manjul}, author={Shankar, Arul}, title={Ternary cubic forms having bounded invariants, and the existence of a positive proportion of elliptic curves having rank 0}, date={2015}, ISSN={0003-486X}, journal={Ann. of Math. (2)}, volume={181}, number={2}, pages={587\ndash 621}, url={https://doi.org/10.4007/annals.2015.181.2.4}, review={\MR{3275847}}, } \bib{Carter-SimpleGroupsLieType1972}{book}{ author={Carter, Roger~W.}, title={Simple groups of {L}ie type}, publisher={John Wiley \& Sons, London-New York-Sydney}, date={1972}, note={Pure and Applied Mathematics, Vol. 28}, review={\MR{0407163}}, } \bib{Conrad-reductivegroupschemes}{incollection}{ author={Conrad, Brian}, title={Reductive group schemes}, date={2014}, booktitle={Autour des sch\'{e}mas en groupes. {V}ol. {I}}, series={Panor. Synth\`eses}, volume={42/43}, publisher={Soc. Math. 
France, Paris}, pages={93\ndash 444}, review={\MR{3362641}}, } \bib{CharlesPoonen}{article}{ author={Charles, Fran\c{c}ois}, author={Poonen, Bjorn}, title={Bertini irreducibility theorems over finite fields}, date={2016}, ISSN={0894-0347}, journal={J. Amer. Math. Soc.}, volume={29}, number={1}, pages={81\ndash 94}, url={https://doi.org/10.1090/S0894-0347-2014-00820-1}, review={\MR{3402695}}, } \bib{ColliotTheleneSansuc-Fibresquadratiques}{article}{ author={Colliot-Th\'{e}l\`ene, J.-L.}, author={Sansuc, J.-J.}, title={Fibr\'{e}s quadratiques et composantes connexes r\'{e}elles}, date={1979}, ISSN={0025-5831}, journal={Math. Ann.}, volume={244}, number={2}, pages={105\ndash 134}, url={https://doi.org/10.1007/BF01420486}, review={\MR{550842}}, } \bib{Deligne-droiteprojective}{incollection}{ author={Deligne, P.}, title={Le groupe fondamental de la droite projective moins trois points}, date={1989}, booktitle={Galois groups over {${\bf Q}$} ({B}erkeley, {CA}, 1987)}, series={Math. Sci. Res. Inst. Publ.}, volume={16}, publisher={Springer, New York}, pages={79\ndash 297}, url={https://doi.org/10.1007/978-1-4613-9649-9_3}, review={\MR{1012168}}, } \bib{SchemasenGroupesII}{book}{ author={Demazure, M.}, author={Grothendieck, A.}, title={Sch\'{e}mas en groupes. {II}: {G}roupes de type multiplicatif, et structure des sch\'{e}mas en groupes g\'{e}n\'{e}raux}, series={S\'{e}minaire de G\'{e}om\'{e}trie Alg\'{e}brique du Bois Marie 1962/64 (SGA 3). Lecture Notes in Mathematics, Vol. 
152}, publisher={Springer-Verlag, Berlin-New York}, date={1970}, review={\MR{0274459}}, } \bib{SGA3-TomeII}{article}{ author={Demazure, M.}, author={Grothendieck, A.}, title={Sch\'{e}mas en groupes ({SGA} 3). {T}ome {II}}, date={1962}, journal={Lecture Notes in Mathematics}, volume={152}, } \bib{FantechiGottschevStraten-EulerNumberCompactifiedJacobian}{article}{ author={Fantechi, B.}, author={G\"{o}ttsche, L.}, author={van Straten, D.}, title={Euler number of the compactified {J}acobian and multiplicity of rational curves}, date={1999}, ISSN={1056-3911}, journal={J. Algebraic Geom.}, volume={8}, number={1}, pages={115\ndash 133}, review={\MR{1658220}}, } \bib{GrossHarris-theta}{incollection}{ author={Gross, Benedict~H.}, author={Harris, Joe}, title={On some geometric constructions related to theta characteristics}, date={2004}, booktitle={Contributions to automorphic forms, geometry, and number theory}, publisher={Johns Hopkins Univ. Press, Baltimore, MD}, pages={279\ndash 311}, review={\MR{2058611}}, } \bib{EGAIV-3}{article}{ author={Grothendieck, A.}, title={\'{E}l\'{e}ments de g\'{e}om\'{e}trie alg\'{e}brique. {IV}. \'{E}tude locale des sch\'{e}mas et des morphismes de sch\'{e}mas. {III}}, date={1966}, ISSN={0073-8301}, journal={Inst. Hautes \'{E}tudes Sci. Publ. Math.}, number={28}, pages={255}, url={http://www.numdam.org/item?id=PMIHES_1966__28__255_0}, review={\MR{217086}}, } \bib{Hinohara-projmodulessemilocalring}{article}{ author={Hinohara, Yukitoshi}, title={Projective modules over semilocal rings}, date={1962}, ISSN={0040-8735}, journal={Tohoku Math. J. (2)}, volume={14}, pages={205\ndash 211}, url={https://doi.org/10.2748/tmj/1178244175}, review={\MR{180580}}, } \bib{Kleiman-PicardScheme}{incollection}{ author={Kleiman, Steven~L.}, title={The {P}icard scheme}, date={2005}, booktitle={Fundamental algebraic geometry}, series={Math. Surveys Monogr.}, volume={123}, publisher={Amer. Math.
Soc., Providence, RI}, pages={235\ndash 321}, review={\MR{2223410}}, } \bib{Laga-F4paper}{unpublished}{ author={Laga, Jef}, title={Arithmetic statistics of {P}rym surfaces}, date={2020}, note={Preprint, available at \url{https://www.dpmms.cam.ac.uk/~jcsl5/}}, } \bib{Levy-Vinbergtheoryposchar}{article}{ author={Levy, Paul}, title={Vinberg's {$\theta$}-groups in positive characteristic and {K}ostant-{W}eierstrass slices}, date={2009}, ISSN={1083-4362}, journal={Transform. Groups}, volume={14}, number={2}, pages={417\ndash 461}, url={https://doi.org/10.1007/s00031-009-9056-y}, review={\MR{2504929}}, } \bib{Lurie-minisculereps}{article}{ author={Lurie, Jacob}, title={On simply laced {L}ie algebras and their minuscule representations}, date={2001}, ISSN={0010-2571}, journal={Comment. Math. Helv.}, volume={76}, number={3}, pages={515\ndash 575}, url={https://doi.org/10.1007/PL00013217}, review={\MR{1854697}}, } \bib{Matsumura-CommutativeRingTheory}{book}{ author={Matsumura, Hideyuki}, title={Commutative ring theory}, series={Cambridge Studies in Advanced Mathematics}, publisher={Cambridge University Press, Cambridge}, date={1986}, volume={8}, ISBN={0-521-25916-9}, note={Translated from the Japanese by M. Reid}, review={\MR{879273}}, } \bib{Milnor-SymmetricBilinearForms}{book}{ author={Milnor, John}, author={Husemoller, Dale}, title={Symmetric bilinear forms}, publisher={Springer-Verlag, New York-Heidelberg}, date={1973}, note={Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 73}, review={\MR{0506372}}, } \bib{milne-etalecohomology}{book}{ author={Milne, James~S.}, title={\'{E}tale cohomology}, series={Princeton Mathematical Series}, publisher={Princeton University Press, Princeton, N.J.}, date={1980}, volume={33}, ISBN={0-691-08238-3}, review={\MR{559531}}, } \bib{Mumford-eqdefAVs}{article}{ author={Mumford, D.}, title={On the equations defining abelian varieties. {I}}, date={1966}, ISSN={0020-9910}, journal={Invent. 
Math.}, volume={1}, pages={287\ndash 354}, url={https://doi.org/10.1007/BF01389737}, review={\MR{204427}}, } \bib{Mumford-thetacharacteristicsalgebraiccurve}{article}{ author={Mumford, David}, title={Theta characteristics of an algebraic curve}, date={1971}, ISSN={0012-9593}, journal={Ann. Sci. \'{E}cole Norm. Sup. (4)}, volume={4}, pages={181\ndash 192}, url={http://www.numdam.org/item?id=ASENS_1971_4_4_2_181_0}, review={\MR{292836}}, } \bib{Nisnevich-Espaceshomogenesprincipaux}{article}{ author={Nisnevich, Yevsey~A.}, title={Espaces homog\`enes principaux rationnellement triviaux et arithm\'{e}tique des sch\'{e}mas en groupes r\'{e}ductifs sur les anneaux de {D}edekind}, date={1984}, ISSN={0249-6291}, journal={C. R. Acad. Sci. Paris S\'{e}r. I Math.}, volume={299}, number={1}, pages={5\ndash 8}, review={\MR{756297}}, } \bib{Nart-nonhyperellipticcharacteristictwo}{article}{ author={Nart, Enric}, author={Ritzenthaler, Christophe}, title={Non-hyperelliptic curves of genus three over finite fields of characteristic two}, date={2006}, ISSN={0022-314X}, journal={J. Number Theory}, volume={116}, number={2}, pages={443\ndash 473}, url={https://doi.org/10.1016/j.jnt.2005.05.014}, review={\MR{2195934}}, } \bib{Ono-relativetheorytamagawa}{article}{ author={Ono, Takashi}, title={On the relative theory of {T}amagawa numbers}, date={1965}, ISSN={0003-486X}, journal={Ann. of Math. (2)}, volume={82}, pages={88\ndash 111}, url={https://doi.org/10.2307/1970563}, review={\MR{177991}}, } \bib{Panyushev-Invarianttheorythetagroups}{article}{ author={Panyushev, Dmitri~I.}, title={On invariant theory of {$\theta$}-groups}, date={2005}, ISSN={0021-8693}, journal={J. Algebra}, volume={283}, number={2}, pages={655\ndash 670}, url={https://doi.org/10.1016/j.jalgebra.2004.03.032}, review={\MR{2111215}}, } \bib{Poonen-BertiniTheoremsFiniteFields}{article}{ author={Poonen, Bjorn}, title={Bertini theorems over finite fields}, date={2004}, ISSN={0003-486X}, journal={Ann. of Math. 
(2)}, volume={160}, number={3}, pages={1099\ndash 1127}, url={https://doi.org/10.4007/annals.2004.160.1099}, review={\MR{2144974}}, } \bib{PoonenRains-maximalisotropic}{article}{ author={Poonen, Bjorn}, author={Rains, Eric}, title={Random maximal isotropic subspaces and {S}elmer groups}, date={2012}, ISSN={0894-0347}, journal={J. Amer. Math. Soc.}, volume={25}, number={1}, pages={245\ndash 269}, url={https://doi.org/10.1090/S0894-0347-2011-00710-8}, review={\MR{2833483}}, } \bib{PlatonovRapinchuk-Alggroupsandnumbertheory}{book}{ author={Platonov, Vladimir}, author={Rapinchuk, Andrei}, title={Algebraic groups and number theory}, series={Pure and Applied Mathematics}, publisher={Academic Press, Inc., Boston, MA}, date={1994}, volume={139}, ISBN={0-12-558180-7}, note={Translated from the 1991 Russian original by Rachel Rowen}, review={\MR{1278263}}, } \bib{PoonenStoll-Mosthyperellipticnorational}{article}{ author={Poonen, Bjorn}, author={Stoll, Michael}, title={Most odd degree hyperelliptic curves have only one rational point}, date={2014}, ISSN={0003-486X}, journal={Ann. of Math. (2)}, volume={180}, number={3}, pages={1137\ndash 1166}, url={https://doi.org/10.4007/annals.2014.180.3.7}, review={\MR{3245014}}, } \bib{PoonenStoll-Hypersurfacesdiscriminantuniformizer}{unpublished}{ author={Poonen, Bjorn}, author={Stoll, Michael}, title={The valuation of the discriminant of a hypersurface}, date={2020}, note={Preprint, available at \url{http://math.mit.edu/~poonen/papers/discriminant.pdf}}, } \bib{Reeder-torsion}{article}{ author={Reeder, Mark}, title={Torsion automorphisms of simple {L}ie algebras}, date={2010}, ISSN={0013-8584}, journal={Enseign. Math. 
(2)}, volume={56}, number={1-2}, pages={3\ndash 47}, url={https://doi.org/10.4171/LEM/56-1-1}, review={\MR{2674853}}, } \bib{Riche-KostantSectionUniversalCentralizer}{article}{ author={Riche, Simon}, title={Kostant section, universal centralizer, and a modular derived {S}atake equivalence}, date={2017}, ISSN={0025-5874}, journal={Math. Z.}, volume={286}, number={1-2}, pages={223\ndash 261}, url={https://doi.org/10.1007/s00209-016-1761-3}, review={\MR{3648498}}, } \bib{GrossLevyReederYu-GradingsPosRank}{article}{ author={Reeder, Mark}, author={Levy, Paul}, author={Yu, Jiu-Kang}, author={Gross, Benedict~H.}, title={Gradings of positive rank on simple {L}ie algebras}, date={2012}, ISSN={1083-4362}, journal={Transform. Groups}, volume={17}, number={4}, pages={1123\ndash 1190}, url={https://doi.org/10.1007/s00031-012-9196-3}, review={\MR{3000483}}, } \bib{Romano-Thorne-ArithmeticofsingularitiestypeE}{article}{ author={Romano, Beth}, author={Thorne, Jack~A.}, title={On the arithmetic of simple singularities of type {$E$}}, date={2018}, ISSN={2522-0160}, journal={Res. Number Theory}, volume={4}, number={2}, pages={Art. 21, 34}, url={https://doi.org/10.1007/s40993-018-0110-5}, review={\MR{3787911}}, } \bib{Thorne-Romano-E8}{article}{ author={Romano, Beth}, author={Thorne, Jack~A.}, title={E8 and the average size of the 3-{S}elmer group of the {J}acobian of a pointed genus-2 curve}, date={2020}, journal={Proceedings of the London Mathematical Society}, eprint={https://londmathsoc.onlinelibrary.wiley.com/doi/pdf/10.1112/plms.12388}, url={https://londmathsoc.onlinelibrary.wiley.com/doi/abs/10.1112/plms.12388}, } \bib{Saito-Discriminanthypersurfacevendim}{article}{ author={Saito, Takeshi}, title={The discriminant and the determinant of a hypersurface of even dimension}, date={2012}, ISSN={1073-2780}, journal={Math. Res. 
Lett.}, volume={19}, number={4}, pages={855\ndash 871}, url={https://doi.org/10.4310/MRL.2012.v19.n4.a10}, review={\MR{3008420}}, } \bib{Pinceauxcourbesgenresdeux}{book}{ author={Szpiro, L.}, author={Beauville, A.}, author={math{\'e}matique~de France, Soci{\'e}t{\'e}}, title={S{\'e}minaire sur les pinceaux de courbes de genre au moins deux}, publisher={Societ{\'e} math{\'e}matique de France}, note={Ast\'{e}risque No. 86 (1981) (1981)}, review={\MR{642675}}, } \bib{Serre-lecturesonNx(p)}{book}{ author={Serre, Jean-Pierre}, title={Lectures on {$N_X (p)$}}, series={Chapman \& Hall/CRC Research Notes in Mathematics}, publisher={CRC Press, Boca Raton, FL}, date={2012}, volume={11}, ISBN={978-1-4665-0192-8}, review={\MR{2920749}}, } \bib{Seshadri-GeometricReductivityArbitaryBase}{article}{ author={Seshadri, C.~S.}, title={Geometric reductivity over arbitrary base}, date={1977}, ISSN={0001-8708}, journal={Advances in Math.}, volume={26}, number={3}, pages={225\ndash 274}, url={https://doi.org/10.1016/0001-8708(77)90041-X}, review={\MR{466154}}, } \bib{Shankar-2selmerhypermarkedpoints}{article}{ author={Shankar, Ananth~N.}, title={2-{S}elmer groups of hyperelliptic curves with marked points}, date={2019}, ISSN={0002-9947}, journal={Trans. Amer. Math. 
Soc.}, volume={372}, number={1}, pages={267\ndash 304}, url={https://doi.org/10.1090/tran/7546}, review={\MR{3968769}}, } \bib{Slodowy-simplesingularitiesalggroups}{book}{ author={Slodowy, Peter}, title={Simple singularities and simple algebraic groups}, series={Lecture Notes in Mathematics}, publisher={Springer, Berlin}, date={1980}, volume={815}, ISBN={3-540-10026-1}, review={\MR{584445}}, } \bib{stacksproject}{misc}{ author={{Stacks Project Authors}, The}, title={\textit{Stacks Project}}, date={2018}, note={\url{https://stacks.math.columbia.edu}}, } \bib{Steinberg-Torsioninreductivegroups}{article}{ author={Steinberg, Robert}, title={Torsion in reductive groups}, date={1975}, ISSN={0001-8708}, journal={Advances in Math.}, volume={15}, pages={63\ndash 92}, url={https://doi.org/10.1016/0001-8708(75)90125-5}, review={\MR{354892}}, } \bib{Stoll-Twists}{article}{ author={Stoll, Michael}, title={Independence of rational points on twists of a given curve}, date={2006}, ISSN={0010-437X}, journal={Compos. Math.}, volume={142}, number={5}, pages={1201\ndash 1214}, url={https://doi.org/10.1112/S0010437X06002168}, review={\MR{2264661}}, } \bib{ShankarWang-hypermarkednonweierstrass}{article}{ author={Shankar, Arul}, author={Wang, Xiaoheng}, title={Rational points on hyperelliptic curves having a marked non-{W}eierstrass point}, date={2018}, ISSN={0010-437X}, journal={Compos. 
Math.}, volume={154}, number={1}, pages={188\ndash 222}, url={https://doi.org/10.1112/S0010437X17007515}, review={\MR{3719247}}, } \bib{Thorne-thesis}{article}{ author={Thorne, Jack~A.}, title={Vinberg's representations and arithmetic invariant theory}, date={2013}, ISSN={1937-0652}, journal={Algebra Number Theory}, volume={7}, number={9}, pages={2331\ndash 2368}, url={https://doi.org/10.2140/ant.2013.7.2331}, review={\MR{3152016}}, } \bib{Thorne-E6paper}{article}{ author={Thorne, Jack~A.}, title={{$E_6$} and the arithmetic of a family of non-hyperelliptic curves of genus 3}, date={2015}, journal={Forum Math. Pi}, volume={3}, pages={e1, 41}, url={https://doi.org/10.1017/fmp.2014.2}, review={\MR{3298319}}, } \bib{thorne-planequarticsAIT}{article}{ author={Thorne, Jack~A.}, title={Arithmetic invariant theory and 2-descent for plane quartic curves}, date={2016}, ISSN={1937-0652}, journal={Algebra Number Theory}, volume={10}, number={7}, pages={1373\ndash 1413}, url={https://doi.org/10.2140/ant.2016.10.1373}, note={With an appendix by Tasho Kaletha}, review={\MR{3554236}}, } \bib{Wajnryb-monodromygroupplanecurvesingularity}{article}{ author={Wajnryb, Bronislaw}, title={On the monodromy group of plane curve singularities}, date={1979/80}, ISSN={0025-5831}, journal={Math. Ann.}, volume={246}, number={2}, pages={141\ndash 154}, url={https://doi.org/10.1007/BF01420166}, review={\MR{564684}}, } \end{biblist} \end{bibdiv} \begin{footnotesize} \textsc{Jef Laga }\; \texttt{jcsl5@cam.ac.uk} \newline \textsc{Department of Pure Mathematics and Mathematical Statistics, Wilberforce Road, Cambridge, CB3 0WB, UK} \end{footnotesize} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
7,658
Kruševo (cyr. Крушево) – wieś w Serbii, w okręgu raskim, w mieście Novi Pazar. W 2011 roku liczyła 425 mieszkańców. Przypisy Miejscowości w okręgu raskim
{ "redpajama_set_name": "RedPajamaWikipedia" }
286
Q: Why do we specify a delegate along with an event, why not just use an event in C#? Why do we specify a delegate along with the event, why not just use event in C#? For instance, I have the following code: class Shop { internal delegate void _EventHandler(object sender, GoodsInfoEventArgs e); internal event _EventHandler GoodsArrived; public void BringGoods(string goods) { if (GoodsArrived != null) { GoodsArrived(this, new GoodsInfoEventArgs(goods)); } } } Why would not those who have developed C# implement the events in the following a way: class Shop { internal event _EventHandler GoodsArrived; public void BringGoods(string goods) { if (GoodsArrived != null) { GoodsArrived(this, new GoodsInfoEventArgs(goods)); } } } I mean without the delegate. I am aware of how the event works in C#. It will just call all subscribed delegates in case there are any and it will be equal null otherwise. And also I am aware about the difference between the event and the delegate. The event allows only addition or subtraction of delegates, but does not allow to change the pointer (we can perform += and -= operations on event, but we can not perform the = operation on event), while delegate allows all addition, subtraction and assign operations be performed. So, the event is a wrapper around the delegate and the wrapper allows to control in which way the delegate can change. All that being said I do not understand the reasoning behind making it required to have a delegate defined in every place in program where we define an event. In case you do not understand a part in my question, please, ask me about it and I will provide more info. Thank you. I am sorry for confusion. I meant, why not use something like this: internal event _EventHandler(object sender, GoodsInfoEventArgs e) GoodsArrived; ? A: As you mention in your question, the event provides the mechanism so subscribe and unsubscribe. 
The delegate, however, is required to define the signature of the methods that can handle the event. There is a default signature (void, with sender and args parameters), but you can use another type for the event arguments or, for instance, omit the sender parameter. In addition, there is a pre-defined delegate EventHandler that you can use, so you don't have to create the delegate yourself if you stick with the default signature and use EventArgs as the arguments:

internal event EventHandler GoodsArrived;

If you want to provide a custom class for the event arguments, you can use the generic version of the delegate:

public class MyEventArgs : EventArgs
{
    // ...
}

internal event EventHandler<MyEventArgs> GoodsArrived;

As for your update: if you use one of the out-of-the-box EventHandler delegates, you are almost there. You have a standard signature and do not need to create a delegate yourself. The decision to base events on delegates provides a lot of flexibility that you might or might not need. You can define events that use just the right set of parameters, and you can re-use existing delegates for multiple events. The default EventHandler implementations simplify things a lot and support you in defining an event that conforms to best practices (without taking away the flexibility). From my point of view and experience, it would not add much to define the signature directly at the event. Just use an out-of-the-box EventHandler delegate; you can immediately recognize the signature and do not have to define the delegate yourself.
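For reference, this is how the Shop class from the question can look when it relies on the built-in generic delegate instead of a custom one (an illustrative sketch, assuming the same GoodsInfoEventArgs class as in the question):

class Shop
{
    // EventHandler<TEventArgs> is the delegate type shipped with the framework,
    // so no separate delegate declaration is needed here.
    internal event EventHandler<GoodsInfoEventArgs> GoodsArrived;

    public void BringGoods(string goods)
    {
        // Copy to a local variable so the null check and the invocation
        // see the same subscriber list.
        var handler = GoodsArrived;
        if (handler != null)
        {
            handler(this, new GoodsInfoEventArgs(goods));
        }
    }
}

Subscribers still attach with shop.GoodsArrived += (sender, e) => { ... }; exactly as before.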
\section{Introduction} \subsection{Background} Let $M$ denote the centered Hardy-Littlewood maximal operator on $\mathbb{R}^d$, i.e. for $f \in L^1_{loc}(\mathbb{R}^d)$, \begin{equation}\label{Intro_max} Mf(x) = \sup_{r >0} \frac{1}{m(B_r(x))} \int_{B_r(x)} |f(y)|\,\text{\rm d}y\,, \end{equation} where $B_r(x)$ is the ball centered at $x$ with radius $r$ and $m(B_r(x))$ is its $d$-dimensional Lebesgue measure. One of the classical results in harmonic analysis states that $M:L^p(\mathbb{R}^d) \to L^p(\mathbb{R}^d)$ is a bounded operator for $1<p \leq \infty$. For $p=1$ we have $M: L^1(\mathbb{R}^d) \to L^{1,\infty}(\mathbb{R}^d)$ bounded. In 1997, Kinnunen \cite{Ki} showed that $M: W^{1,p}(\mathbb{R}^d) \to W^{1,p}(\mathbb{R}^d)$ is bounded for $1 < p \leq \infty$, and that was the starting point of the study of the regularity of maximal operators acting on Sobolev functions. This result was later extended to multilinear, local and fractional contexts in \cite{CM, KL, KiSa}. Due to the lack of reflexivity of $L^1$, results for $p=1$ are subtler. For instance, in \cite[Question 1]{HO}, Haj\l asz and Onninen asked whether the operator $f \mapsto |\nabla Mf|$ is bounded from $W^{1,1}(\mathbb{R}^d)$ to $L^1(\mathbb{R}^d)$. Progress on this question (and its variant for BV-functions) has so far been restricted to dimension $d=1$. \smallskip Let $\widetilde{M}$ denote the uncentered maximal operator (defined similarly to \eqref{Intro_max}, with the supremum taken over all balls containing the point $x$ in their closure). Refining the work of Tanaka \cite{Ta}, Aldaz and P\'{e}rez L\'{a}zaro \cite{AP} showed that if $f$ is of bounded variation then $\widetilde{M}f$ is absolutely continuous and \begin{equation}\label{Intro_AP} {\rm Var\,} \widetilde{M}f \leq {\rm Var\,} f, \end{equation} where ${\rm Var\,} f$ denotes the total variation of $f$. Observe that inequality \eqref{Intro_AP} is sharp.
More recently, Kurka \cite{Ku} considered the centered maximal operator in dimension $d=1$ and proved that \begin{equation}\label{Intro_Ku} {\rm Var\,} Mf \leq 240,004\, {\rm Var\,} f. \end{equation} It is currently unknown if one can bring down the value of this constant to $C=1$ in the centered case. Other interesting works related to this theory are \cite{ACP, CFS, CS, HM, Lu1, St}. \subsection{Discrete setting} In this paper we consider issues of similar flavor, now in the discrete setting. Let us start with some definitions. \smallskip We denote a vector $\vec{n} \in \mathbb{Z}^d$ by $\vec{n} = (n_1, n_2, \ldots, n_d)$. For a function $f:\mathbb{Z}^{d}\rightarrow \mathbb{R}$ we define its $\ell^{p}$-norm as usual: \begin{equation}\label{Intro_l_p_norm} \|f\|_{\ell^{p}{( \mathbb{Z}^{d})}}= \left(\sum_{\vec n\in \mathbb{Z}^{d}} {|f(\vec n)|^{p}}\right)^{1/p}, \end{equation} if $1\leq p<\infty$, and \begin{equation*} \|f\|_{\ell^{\infty}{(\mathbb{Z}^{d})}}= \sup_{\vec n\in\mathbb{Z}^{d} }{|f(\vec n)|}. \end{equation*} We define its total variation ${\rm Var\,} f$ by $$ {\rm Var\,} f= \sum_{i=1}^d \sum_{\vec n \in \mathbb{Z}^d} \big| f(\vec n+\vec e_{i})-f(\vec n)\big|, $$ where $\vec e_{i}=(0,0,\ldots,1,\ldots,0)$ is the canonical $i$-th basis vector. Also, we say that a function $f:\mathbb{Z}^{d}\to\mathbb{R}$ is a {\it delta function} if there exist $\vec p\in\mathbb{Z}^{d}$ and $k\in\mathbb{R}$ such that $$ f(\vec p)=k\ \ \ \text{and}\ \ \ f(\vec n)=0 \ \ \forall\ \vec n \in\mathbb{Z}^{d}\setminus\{\vec p\}.
$$ \subsubsection{A sharp inequality in dimension one} For $f:\mathbb{Z}\to \mathbb{R}$ we define its centered Hardy-Littlewood maximal function $Mf :\mathbb{Z} \to \mathbb{R}^+$ as \begin{equation*} Mf(n) = \sup_{r \in \mathbb{Z}^+} \frac{1}{(2r+1)} \sum_{k=-r}^r |f(n+k)|, \end{equation*} while the uncentered maximal function $\widetilde{M}f :\mathbb{Z} \to \mathbb{R}^+$ is given by \begin{equation*} \widetilde{M}f(n) = \sup_{r,s \in \mathbb{Z}^+} \frac{1}{(r +s +1)} \sum_{k=-r}^s |f(n+k)|. \end{equation*} In \cite{BCHP}, Bober, Carneiro, Hughes and Pierce proved the following inequalities \begin{equation}\label{obj 0} {\rm Var\,} \widetilde Mf \leq {\rm Var\,} f \leq 2\|f\|_{\ell^{1}(\mathbb{Z})} \end{equation} and \begin{equation}\label{obj 1} {\rm Var\,} Mf\leq \left(2+\frac{146}{315}\right) \|f\|_{\ell^{1}(\mathbb{Z})}. \end{equation} The leftmost inequality in \eqref{obj 0} is the discrete analogue of \eqref{Intro_AP}. The rightmost inequality in \eqref{obj 0} is simply the triangle inequality. Both inequalities in \eqref{obj 0} are in fact sharp (e.g. equality is attained if $f$ is a delta function). On the other hand, inequality \eqref{obj 1} is not optimal, and it was asked in \cite{BCHP} whether the sharp constant for \eqref{obj 1} is in fact $C=2$. Our first result answers this question affirmatively, also characterizing the extremal functions. \begin{theorem}\label{lim d=1 C=2} Let $f:\mathbb{Z}\to\mathbb{R}$ be a function in $\ell^{1}(\mathbb{Z}).$ Then \begin{equation}\label{main theo cent d=1} {\rm Var\,} Mf\leq 2\,\|f\|_{\ell^{1}(\mathbb{Z})}, \end{equation} and the constant $C=2$ is the best possible. Moreover, the equality is attained if and only if $f$ is a delta function. \end{theorem} \noindent {\sc{Remark:}} In \cite{Te}, Temur proved the analogue of \eqref{Intro_Ku} in the discrete setting, i.e. \begin{equation}\label{Te_Var} {\rm Var\,} Mf \leq C \ {\rm Var\,} f \end{equation} with constant $C=(72000)2^{12}+4$. 
This inequality is qualitatively stronger than \eqref{main theo cent d=1} (in fact, ${\rm Var\,} f$ should be seen as the natural object to be on the right-hand side), but it does not imply \eqref{main theo cent d=1}. Since ${\rm Var\,} f\leq 2\|f\|_{\ell^{1}(\mathbb{Z})}$ by the triangle inequality, inequality \eqref{main theo cent d=1} suggests that it may be possible to prove \eqref{Te_Var} with constant $C=1$, but this is currently an open problem. \subsubsection{Sharp inequalities in higher dimensions} We now aim to extend Theorem \ref{lim d=1 C=2} to higher dimensions. In order to do so, we first recall the notion of maximal operators associated to regular convex sets, as considered in \cite{CH}. \smallskip Let $\Omega\subset \mathbb R^{d}$ be a bounded open convex set with Lipschitz boundary, such that $\vec 0\in {\rm int}(\Omega)$ and $\pm \vec e_{i} \in \overline\Omega$ for $1 \leq i \leq d$. For $r>0$ we write \begin{equation*} \overline\Omega_{r}(\vec{x}_{0}) =\big\{\vec{x} \in\mathbb R^{d}; \, r^{-1}(\vec{x}-\vec{x}_{0})\in \overline{\Omega}\big\}, \end{equation*} and for $r=0$ we consider \begin{equation*} \overline\Omega_{0}(\vec{x}_{0}) =\{\vec{x}_{0}\}. \end{equation*} Whenever $\vec{x}_{0}=\vec 0$ we shall write $\overline\Omega_{r}=\overline\Omega_{r}\big(\vec{0}\big)$ for simplicity. This object plays the role of the ``ball of center $\vec x_{0}$ and radius $r$'' in our maximal operators below. For instance, to work with regular $\ell^{p}$-balls, one should consider $\Omega=\Omega_{\ell^p}=\{\vec x\in\mathbb{R}^{d}; \|\vec x\|_{p}<1 \}$, where $\|\vec x\|_{p}=(|x_{1}|^{p}+|x_{2}|^{p}+\ldots+|x_{d}|^{p})^{\frac{1}{p}} $ for $\vec x=(x_{1},x_{2},\ldots,x_{d})\in \mathbb{R}^{d}$ and $1 \leq p < \infty$, and $\|\vec x\|_{\infty}=\max\{|x_1|, |x_2|, \ldots, |x_d|\}$. \smallskip Given $f:\mathbb{Z}^d \to \mathbb{R}$, we denote by $A_{r}f(\vec n)$ the average of $|f|$ over the $\Omega$-ball of center $\vec n$ and radius $r$, i.e.
\begin{equation*} A_{r}f(\vec n) = \frac{1}{N(r)}\,\sum_{\vec m\in \overline\Omega_{r} }|f(\vec n+\vec m)|, \end{equation*} where $N(\vec x,r)$ is the number of lattice points in the set $\overline\Omega_{r}(\vec x)$\ (and $N(r):=N(\vec 0,r)$). We denote by $M_{\Omega}$ the discrete centered maximal operator associated to $\Omega$, \begin{equation}\label{Intro_disc_Omega_cent} M_{\Omega}f(\vec n)=\sup_{r\geq 0}A_{r}f(\vec n)=\sup_{r\geq 0} \frac{1}{N(r)}\,\sum_{\vec m\in \overline\Omega_{r} }|f(\vec n+\vec m)|, \end{equation} and we denote by $\widetilde{M}_{\Omega}$ its uncentered version \begin{equation}\label{Intro_disc_Omega_uncent} \widetilde{M}_{\Omega}f(\vec n)=\sup_{\overline\Omega_{r}(\vec x_{0}) \owns \vec n}\,A_{r}f(\vec x_{0}) =\sup_{\overline\Omega_{r}(\vec x_{0}) \owns \vec n}\, \frac{1}{N(\vec x_{0},r)}\,\sum_{\vec m\in \overline\Omega_{r}(\vec x_{0}) }|f(\vec m)|. \end{equation} It should be understood throughout the rest of the paper that we always consider $\Omega$-balls with at least one lattice point. These convex $\Omega$-balls have roughly the same behavior as the regular Euclidean balls from the geometric and arithmetic points of view, in the sense that, for large radii, the number of lattice points inside the $\Omega$-ball is roughly equal to the volume of the $\Omega$-ball (see \cite[Chapter VI \S 2, Theorem 2]{Lang}). \smallskip Given $1\leq p< \infty$ and $f\in \ell_{loc}^{1}(\mathbb{Z}^{d})$, we denote by $M_{p}$ the discrete centered maximal operator associated to $\Omega_{\ell^p}$, \begin{equation*} M_{p}f(\vec n)=M_{\Omega_{\ell^p}}f(\vec n), \end{equation*} and for $p=\infty$ we denote $$ Mf(\vec n)=M_{\Omega_{\ell^\infty}}f(\vec n). $$ Analogously, we denote by $\widetilde M_{p}f$ and $\widetilde Mf$ the uncentered versions of the discrete maximal operators associated to $\Omega_{\ell^p}$, for $1 \leq p \leq \infty$.
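Before proceeding, it is instructive to record the basic example behind the sharpness statements of this paper. If $\delta_{0}:\mathbb{Z}\to\mathbb{R}$ denotes the delta function with $\delta_{0}(0)=1$, then in dimension $d=1$ the supremum defining the centered maximal function is attained at the radius $r=|n|$, so that
$$
M\delta_{0}(n)=\frac{1}{2|n|+1} \ \ \forall\ n\in\mathbb{Z},
$$
and the total variation telescopes,
$$
{\rm Var\,} M\delta_{0}=2\sum_{n\geq 0}\left(\frac{1}{2n+1}-\frac{1}{2n+3}\right)=2=2\,\|\delta_{0}\|_{\ell^{1}(\mathbb{Z})},
$$
which is exactly the equality case described in Theorem \ref{lim d=1 C=2}.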
Note that in dimension $d=1$ we have $M_{p}=M \ \text{and}\ \widetilde M_{p}=\widetilde M$ for all $1\leq p\leq \infty$. \smallskip In \cite{CH}, Carneiro and Hughes showed that, for any regular set $\Omega$ as above and $f:\mathbb{Z}^{d}\to\mathbb{R}$, there exist constants $C(\Omega,d)$ and $\widetilde C(\Omega,d)$ such that \begin{equation}\label{obj 2} {\rm Var\,} M_{\Omega}f\leq C(\Omega,d)\|f\|_{\ell^{1}(\mathbb{Z}^{d})} \end{equation} and \begin{equation}\label{obj 3} {\rm Var\,} \widetilde M_{\Omega}f\leq \widetilde C(\Omega,d)\|f\|_{\ell^{1}(\mathbb{Z}^{d})}. \end{equation} Inequalities \eqref{obj 2} and \eqref{obj 3} were extended to a fractional setting in \cite[Theorem 3]{CMa}. Here we extend Theorem \ref{lim d=1 C=2} to higher dimensions in two distinct ways. We find the sharp form of \eqref{obj 2} when $d\geq 1$ and $\Omega=\Omega_{\ell^1}$ (i.e. the rhombus), and the sharp form of \eqref{obj 3} when $d\geq 1$ and $\Omega=\Omega_{\ell^\infty}$ (i.e. regular cubes). As we shall see below, we use different techniques in the proofs of these two extensions, taking into consideration the geometry of the chosen sets $\Omega$. \smallskip For $d\geq1$ and $k\geq 0$ we denote $N_{1,d}(k)=\big|\overline{(\Omega_{\ell^1})_{k}}\big|=\big|\{\vec x\in\mathbb{Z}^{d}; \|\vec x\|_{1}\leq k\}\big|$. Here is our next result. \begin{theorem}\label{main theo cent} Let $d\geq2$ and $f:\mathbb{Z}^{d}\to\mathbb{R}$ be a function in $\ell^{1}(\mathbb{Z}^{d})$. Then \begin{equation}\label{eq main theo cent} {\rm Var\,} M_{1}f\leq 2d\left(1+\sum_{k\geq 1}\frac{(N_{1,d-1}(k)-N_{1,d-1}(k-1))}{N_{1,d}(k)}\right)\|f\|_{\ell^{1}(\mathbb{Z}^{d})}=:C(d)\|f\|_{\ell^{1}(\mathbb{Z}^{d})} \end{equation} and this constant $C(d)$ is the best possible. Moreover, the equality is attained if and only if $f$ is a delta function.
\end{theorem} \noindent {\sc{Remark:}} Note that $C(d)<\infty$, because there exists a constant $C$ such that $$ N_{1,d}(k)=Ck^{d}+O(k^{d-1}), $$ where $C=m(\Omega_{\ell^1})$ (see \cite[Chapter VI \S 2, Theorem 2]{Lang}). Then, for sufficiently large $k$ we have $$ \frac{N_{1,d-1}(k)-N_{1,d-1}(k-1)}{N_{1,d}(k)}\sim \frac{1}{k^{2}}. $$ In particular, for $d=2$ we obtain $$C(2)=4+8\sum_{k\geq 1}\frac{1}{k^{2}+(k+1)^{2}}.$$ \smallskip Our proof of Theorem \ref{main theo cent} is the natural extension of the proof of Theorem \ref{lim d=1 C=2}, but we decided to present Theorem \ref{lim d=1 C=2} separately since it contains the essential idea with fewer technical details. The next result is the sharp version of \eqref{obj 3} for the discrete uncentered maximal operator with respect to cubes (i.e. $\ell^{\infty}$-balls). Its proof follows a different strategy from Theorems \ref{lim d=1 C=2} and \ref{main theo cent}. \begin{theorem}\label{main theo noncent} Let $d\geq 1$ and $f:\mathbb{Z}^{d}\to\mathbb{R}$ be a function in $\ell^{1}(\mathbb{Z}^{d}).$ Then \begin{equation}\label{eq noncent d>1} {\rm Var\,} \widetilde Mf\leq 2d\left(1+\sum_{k\geq 1}\frac{1}{k}\left(\left(\frac{2}{k+1}+\frac{2k-1}{k}\right)^{d-1}-\left(\frac{2k-1}{k}\right)^{d-1}\right)\right)\|f\|_{\ell^{1}(\mathbb{Z}^{d})}=:\widetilde C(d)\|f\|_{\ell^{1}(\mathbb{Z}^{d})}, \end{equation} and the constant $\widetilde C(d)$ is the best possible. Moreover, the equality is attained if and only if $f$ is a delta function.
We define $$ X^{-}=\{n\in\mathbb{Z}; Mf(n)\geq Mf(n+1)\}\ \ \text{and}\ \ X^{+}=\{n\in\mathbb{Z}; Mf(n+1)>Mf(n)\}. $$ \noindent Then we have \begin{eqnarray}\label{suma} {\rm Var\,} Mf&=&\sum_{n\in\mathbb{Z}}|Mf(n)-Mf(n+1)|\nonumber\\ &=&\sum_{n\in X^{-}}Mf(n)-Mf(n+1)+\sum_{n\in X^{+}}Mf(n+1)-Mf(n)\nonumber\\ &\leq&\sum_{n\in X^{-}}A_{r_{n}}f(n)-A_{r_{n}+1}f(n+1)+\sum_{n\in X^{+}}A_{r_{n+1}}f(n+1)-A_{r_{n+1}+1}f(n). \end{eqnarray} Given $p\in\mathbb{Z}$ fixed, we want to evaluate the maximal contribution of $f(p)$ to the right-hand side of \eqref{suma}. \smallskip \noindent{\it Case 1:} If $n\in X^{-}$ and $n\geq p$. In this situation we have that the contribution of $f(p)$ to $A_{r_{n}}f(n)-A_{r_{n}+1}f(n+1)$ is $0$ (if $p<n-r_{n}$) or $\frac{1}{2r_{n}+1}-\frac{1}{2r_{n}+3}$ (if $n-r_{n}\leq p$). In the second case we have \begin{eqnarray*} \frac{1}{2r_{n}+1}-\frac{1}{2r_{n}+3}&=&\frac{2}{(2r_{n}+1)(2r_{n}+3)}\\ &\leq& \frac{2}{(2(n-p)+1)(2(n-p)+3)}\\ =&=&\frac{1}{2(n-p)+1}-\frac{1}{2(n-p)+3}. \end{eqnarray*} The equality is attained if and only if $r_{n}=n-p$. \smallskip \noindent {\it Case 2:} If $n\in X^{+}$ and $n\geq p$. Now we have that the contribution of $f(p)$ to $A_{r_{n+1}}f(n+1)-A_{r_{n+1}+1}f(n)$ is non-positive (if $p<n+1-r_{n+1}$) or $\frac{1}{2r_{n+1}+1}-\frac{1}{2r_{n+1}+3}$ (if $n+1-r_{n+1}\leq p$). In the second case we have \begin{eqnarray*} \frac{1}{2r_{n+1}+1}-\frac{1}{2r_{n+1}+3}&=&\frac{2}{(2r_{n+1}+1)(2r_{n+1}+3)}\\ &\leq& \frac{2}{(2(n+1-p)+1)(2(n+1-p)+3)}\\ &=&\frac{1}{2(n+1-p)+1}-\frac{1}{2(n+1-p)+3}\\ &<&\frac{1}{2(n-p)+1}-\frac{1}{2(n-p)+3}. \end{eqnarray*} \smallskip \noindent {\it Case 3:} If $n\in X^{-}$ and $n<p$. In this situation we have that the contribution of $f(p)$ to $A_{r_{n}}f(n)-A_{r_{n}+1}f(n+1)$ is non-positive (if $p>n+r_{n}$) or $\frac{1}{2r_{n}+1}-\frac{1}{2r_{n}+3}$ (if $n+r_{n}\geq p$). 
In the second case we have \begin{eqnarray*} \frac{1}{2r_{n}+1}-\frac{1}{2r_{n}+3}&=&\frac{2}{(2r_{n}+1)(2r_{n}+3)}\\ &\leq& \frac{2}{(2(p-n)+1)(2(p-n)+3)}\\ &=&\frac{1}{2(p-n)+1}-\frac{1}{2(p-n)+3}\\ &<&\frac{1}{2(p-n-1)+1}-\frac{1}{2(p-n-1)+3}. \end{eqnarray*} \smallskip \noindent {\it Case 4:} If $n\in X^{+}$ and $n<p$. Now we have that the contribution of $f(p)$ to $A_{r_{n+1}}f(n+1)-A_{r_{n+1}+1}f(n)$ is either $0$ (if $p>n+1+r_{n+1}$) or $\frac{1}{2r_{n+1}+1}-\frac{1}{2r_{n+1}+3}$ (if $n+1+r_{n+1}\geq p$). In the second case we have \begin{eqnarray*} \frac{1}{2r_{n+1}+1}-\frac{1}{2r_{n+1}+3}&=&\frac{2}{(2r_{n+1}+1)(2r_{n+1}+3)}\\ &\leq& \frac{2}{(2(p-n-1)+1)(2(p-n-1)+3)}\\ &=&\frac{1}{2(p-n-1)+1}-\frac{1}{2(p-n-1)+3}. \end{eqnarray*} The equality is achieved if and only if $r_{n+1}=p-n-1$. \smallskip \noindent {\it Conclusion:} Therefore the contribution of $f(p)$ to the right-hand side of \eqref{suma} is bounded by $$ \sum_{n\geq p}\frac{1}{2(n-p)+1}-\frac{1}{2(n-p)+3}+\sum_{n<p}\frac{1}{2(p-n-1)+1}-\frac{1}{2(p-n-1)+3}=2. $$ As $p$ is an arbitrary point in $\mathbb{Z}$, this establishes \eqref{main theo cent d=1}. If $f$ is a delta function we can easily see that $$ {\rm Var\,} Mf =2\|f\|_{\ell^{1}(\mathbb{Z})}. $$ On the other hand, given a function $f:\mathbb{Z}\to\mathbb{R}$ such that ${\rm Var\,} Mf=2\|f\|_{\ell^{1}(\mathbb{Z})}$ and $f\geq0$, let us define $P=\{t\in\mathbb{Z}; f(t)\neq 0\}$. Then $$ {\rm Var\,} Mf=2\sum_{t\in P}f(t), $$ and, given $t_{1}\in P$, the contribution of $f(t_{1})$ to \eqref{suma} is 2. Therefore, by the previous analysis we note that for all $n\geq t_{1}$ we must have that $n\in X^{-}$ and $r_{n}=n-t_{1}$. If we take $t_{2}\in P$ the same should happen, which implies that $t_{1}=t_{2}$ and therefore $P=\{t_{1}\}$. This proves that $f$ is a delta function and the proof is concluded. 
\section{Proof of Theorem \ref{main theo cent}} \subsection{Preliminaries} Since $f\in \ell^{1}(\mathbb{Z}^{d})$, we have that there exists $r_{\vec n}\in\mathbb{Z}$ such that $M_{1}f(\vec n)=A_{r_{\vec n}}f(\vec n)$. For all $\vec m=(m_{1},m_{2},\dots,m_{d})\in\mathbb{Z}^{d}$ we define $$ |\vec m|_{1}=\sum_{i=1}^{d}|m_{i}|, $$ and for $1\leq j\leq d$, we define $$ I_{j}=\{l\subset\mathbb{Z}^{d};\ l\ \text{is a line parallel to the vector}\ \vec e_{j} \}, $$ $$ X_{j}^{-}=\{\vec n\in\mathbb{Z}^{d}; M_{1}f(\vec n)\geq M_{1}f(\vec n+\vec e_{j})\}\ \ \text{and}\ \ X_{j}^{+}=\{\vec n\in\mathbb{Z}^{d}; M_{1}f(\vec n+\vec e_{j})>M_{1}f(\vec n)\}. $$ We then have \begin{eqnarray}\label{suma d>1} {\rm Var\,} M_{1}f&=&\sum_{\vec n\in\mathbb{Z}^{d}}\sum_{j=1}^{d}|M_{1}f(\vec n)-M_{1}f(\vec n+\vec e_{j})|\nonumber\\ &=&\sum_{j=1}^{d}\sum_{l\in I_{j}}\sum_{\vec n\in l\cap X_{j}^{-}}M_{1}f(\vec n)-M_{1}f(\vec n+\vec e_{j})+\sum_{j=1}^{d}\sum_{l\in I_{j}}\sum_{\vec n\in l\cap X_{j}^{+}}M_{1}f(\vec n+\vec e_{j})-M_{1}f(\vec n)\nonumber\\ &\leq&\sum_{j=1}^{d}\sum_{l\in I_{j}}\sum_{\vec n\in l\cap X_{j}^{-}}A_{r_{\vec n}}f(\vec n)-A_{r_{\vec n}+1}f(\vec n+\vec e_{j})\\ &&\ \ \ \ +\sum_{j=1}^{d}\sum_{l\in I_{j}}\sum_{\vec n\in l\cap X_{j}^{+}}A_{r_{\vec n+\vec e_{j}}}f(\vec n+\vec e_{j})-A_{r_{\vec n+\vec e_{j}}+1}f(\vec n)\nonumber. \end{eqnarray} Fixed a point $\vec p=(p_{1},p_{2},\dots,p_{d})\in\mathbb{Z}^{d}$, we want to evaluate the maximal contribution of $f(\vec p)$ to the right-hand side of \eqref{suma d>1}. \subsection{Auxiliary results} We now prove the following lemma of arithmetic character, which will be particularly useful in the rest of the proof. \begin{lemma}\label{fund lemma} If $d\geq1$, then \begin{equation}\label{Ineq_Fund_Lem} N_{1,d}(k)^{2}> {N_{1,d}(k+1)}N_{1,d}(k-1) \ \ \forall \ k\geq1. \end{equation} \end{lemma} \begin{proof} We prove this via induction. 
For $d=1$ we have that $N_{1,1}(k)=2k+1$, therefore $$ N_{1,1}(k)^{2}=4k^{2}+4k+1>(2k+3)(2k-1)=N_{1,1}(k+1)N_{1,1}(k-1). $$ Since $N_{1,d}(k)=\big|\{(x_{1},\dots,x_{d})\in\mathbb{Z}^{d};|x_{1}|+ \dots +|x_{d}|\leq k\}\big|$, fixing the value of the last variable, we can verify that \begin{equation}\label{recurrencia larga} N_{1,d}(k)=N_{1,d-1}(k)+2\sum_{j=0}^{k-1}N_{1,d-1}(j). \end{equation} Now, let us assume that the result is true for $d$, i.e. \begin{equation}\label{hip} N_{1,d}(k)^{2}> {N_{1,d}(k+1)}N_{1,d}(k-1) \ \ \forall \ k\geq1. \end{equation} We want to prove that this implies that the result is also true for $d+1$. For simplicity we denote $g(k):=N_{1,d}(k)\ \ \text{and}\ \ f(k):=N_{1,d+1}(k) \ \ \text{for all}\ k\geq 0$. Thus by \eqref{hip} we have that \begin{equation}\label{consec_hip} \frac{g(1)}{g(0)}> \frac{g(2)}{g(1)}>\dots>\frac{g(k)}{g(k-1)}>\frac{g(k+1)}{g(k)}>\dots \end{equation} and by \eqref{recurrencia larga} we have that $$ f(k)=g(k)+2\sum_{j=0}^{k-1}g(j) \ \ \ \ \forall\ k\geq 0. $$ The latter implies that $$ f(k+1)-f(k)=g(k+1)+g(k)\ \ \ \ \forall\ k\geq 0. $$ Therefore, by \eqref{consec_hip}, we obtain that $$ \frac{g(k+1)}{g(k)}> \frac{g(k+2)+g(k+1)}{g(k+1)+g(k)} $$ and $$ \frac{g(k+1)+2\sum_{j=1}^{k}g(j)}{g(k)+2\sum_{j=1}^{k}g(j-1)}> \frac{g(k+1)}{g(k)}. $$ Combining these inequalities we arrive at $$ \frac{f(k+1)}{f(k)}\geq \frac{g(k+1)+2\sum_{j=1}^{k}g(j)}{g(k)+2\sum_{j=1}^{k}g(j-1)} > \frac{g(k+1)}{g(k)}> \frac{g(k+2)+g(k+1)}{g(k+1)+g(k)}=\frac{f(k+2)-f(k+1)}{f(k+1)-f(k)}, $$ and hence $$ \frac{f(k+1)-f(k)}{f(k)}> \frac{f(k+2)-f(k+1)}{f(k+1)}. $$ This implies that $$ \frac{f(k+1)}{f(k)}> \frac{f(k+2)}{f(k+1)}\ \ \forall \ k\geq 0, $$ which establishes the desired result. \end{proof} \begin{corollary}\label{consec_lem} If $d\geq1$, we have that \begin{equation}\label{prop theo 2} \frac{1}{N_{1,d}(k)}-\frac{1}{N_{1,d}(k+1)}> \frac{1}{N_{1,d}(k+1)}-\frac{1}{N_{1,d}(k+2)} \ \ \forall \ k\geq0. 
\end{equation} \end{corollary} \begin{proof} We notice that \eqref{prop theo 2} is equivalent to $$ \frac{N_{1,d}(k+1)}{N_{1,d}(k)}+\frac{N_{1,d}(k+1)}{N_{1,d}(k+2)}>2. $$ This follows from Lemma \ref{fund lemma} and the arithmetic mean - geometric mean inequality because $$ \frac{N_{1,d}(k+1)}{N_{1,d}(k)}+\frac{N_{1,d}(k+1)}{N_{1,d}(k+2)}> \frac{N_{1,d}(k+2)}{N_{1,d}(k+1)}+\frac{N_{1,d}(k+1)}{N_{1,d}(k+2)}\geq 2. $$ \end{proof} \subsection{Proof of Theorem \ref{main theo cent}} Let us simplify notation by writing $N_{1}(k):=N_{1,d}(k)$. Given $1\leq j\leq d$, using Corollary \ref{consec_lem} we make the following observations. \smallskip \noindent{\it Case 1:} If $\vec n\in X_{j}^{-}$ and $n_{j}\geq p_{j}$. In this situation we have that the contribution of $f(\vec p)$ to $A_{r_{\vec n}}f(\vec n)-A_{r_{\vec n}+1}f(\vec n+\vec e_{j})$ is non-positive (if $|\vec n-\vec p|_{1}>r_{\vec n}$) or $\frac{1}{N_{1}(r_{\vec n})}-\frac{1}{N_{1}(r_{\vec n}+1)}$ (if $|\vec n-\vec p|_{1}\leq r_{\vec n}$). In the second case we have \begin{eqnarray*} \frac{1}{N_{1}(r_{\vec n})}-\frac{1}{N_{1}(r_{\vec n}+1)} &\leq&\frac{1}{N_{1}(|\vec n-\vec p|_{1})}-\frac{1}{N_{1}(|\vec n-\vec p|_{1}+1)}\\ &=&\frac{1}{N_{1}(|\vec n-\vec p|_{1})}-\frac{1}{N(|\vec n+\vec e_{j}-\vec p|_{1})}. \end{eqnarray*} The equality is attained if and only if $r_{\vec n}=|\vec n-\vec p|_{1}$. \smallskip \noindent{\it Case 2:} If $\vec n\in X_{j}^{+}$ and $n_{j}\geq p_{j}$. Now we have that the contribution of $f(\vec p)$ to $A_{r_{\vec n+\vec e_{j}}}f(\vec n+\vec e_{j})-A_{r_{\vec n+\vec e_{j}}+1}f(\vec n)$ is non-positive (if $|\vec n+\vec e_{j}-\vec p|_{1}>r_{\vec n+\vec e_{j}}$) or $\frac{1}{N_{1}(r_{\vec n+\vec e_{j}})}-\frac{1}{N_{1}(r_{\vec n+\vec e_{j}}+1)}$ (if $|\vec n+\vec e_{j}-\vec p|_{1}\leq r_{\vec n+\vec e_{j}}$). 
In the second case we have \begin{eqnarray*} \frac{1}{N_{1}(r_{\vec n+\vec e_{j}})}-\frac{1}{N_{1}(r_{\vec n+\vec e_{j}}+1)}&\leq&\frac{1}{N_{1}(|\vec n+\vec e_{j}-\vec p|_{1})}-\frac{1}{N_{1}(|\vec n+\vec e_{j}-\vec p|_{1}+1)}\\ &=&\frac{1}{N_{1}(|\vec n-\vec p|_{1}+1)}-\frac{1}{N_{1}(|\vec n-\vec p|_{1}+2)}\\ &<&\frac{1}{N_{1}(|\vec n-\vec p|_{1})}-\frac{1}{N_{1}(|\vec n-\vec p|_{1}+1)}\\ &=&\frac{1}{N_{1}(|\vec n-\vec p|_{1})}-\frac{1}{N(|\vec n+\vec e_{j}-\vec p|_{1})}. \end{eqnarray*} \smallskip \noindent{\it Case 3:} If $\vec n\in X_{j}^{-}$ and $n_{j}< p_{j}$. In this situation we have that the contribution of $f(\vec p)$ to $A_{r_{\vec n}}f(\vec n)-A_{r_{\vec n}+1}f(\vec n+\vec e_{j})$ is non-positive (if $|\vec n-\vec p|_{1}>r_{\vec n}$) or $\frac{1}{N_{1}(r_{\vec n})}-\frac{1}{N_{1}(r_{\vec n}+1)}$ (if $|\vec n-\vec p|_{1}\leq r_{\vec n}$). In the second case we have \begin{eqnarray*} \frac{1}{N_{1}(r_{\vec n})}-\frac{1}{N_{1}(r_{\vec n}+1)} &\leq&\frac{1}{N_{1}(|\vec p-\vec n|_{1})}-\frac{1}{N_{1}(|\vec p-\vec n|_{1}+1)}\\ &<&\frac{1}{N_{1}(|\vec p-\vec n-\vec e_{j}|_{1})}-\frac{1}{N_{1}(|\vec p-\vec n|_{1})}. \end{eqnarray*} \smallskip \noindent{\it Case 4:} If $\vec n\in X_{j}^{+}$ and $n_{j}< p_{j}$. Now we have that the contribution of $f(\vec p)$ to $A_{r_{\vec n+\vec e_{j}}}f(\vec n+\vec e_{j})-A_{r_{\vec n+\vec e_{j}}+1}f(\vec n)$ is non-positive (if $|\vec p-\vec n-\vec e_{j}|_{1}>r_{\vec n+\vec e_{j}}$) or $\frac{1}{N_{1}(r_{\vec n+\vec e_{j}})}-\frac{1}{N_{1}(r_{\vec n+\vec e_{j}}+1)}$ (if $|\vec p-\vec n-\vec e_{j}|_{1}\leq r_{\vec n+\vec e_{j}}$). In the second case we have \begin{eqnarray*} \frac{1}{N_{1}(r_{\vec n+\vec e_{j}})}-\frac{1}{N_{1}(r_{\vec n+\vec e_{j}}+1)}&\leq&\frac{1}{N_{1}(|\vec p-\vec n-\vec e_{j}|_{1})}-\frac{1}{N_{1}(\vec p-\vec n-\vec e_{j}|_{1}+1)}\\ &=&\frac{1}{N_{1}(|\vec p-\vec n-\vec e_{j}|_{1})}-\frac{1}{N_{1}(|\vec p-\vec n|_{1})}. 
\end{eqnarray*} The equality is achieved if and only if $r_{\vec n+\vec e_{j}}=|\vec p-\vec n-\vec e_{j}|_{1}$. \smallskip \noindent{\it Conclusion:} Given a line $l$ in the lattice, we define the distance from $\vec p$ to $l$ by $$ d(l,\vec p)=\min\{|\vec m-\vec p|_{1};\,\vec m\in l\}. $$ If the direction of $l$ is the same as the direction of $\vec e_{j}$, by intersecting $l$ with the hyperplane $H_{j}=\{\vec z\in\mathbb{Z}^{d}; z_{j}=p_{j}\}$ we obtain the point that realizes the distance from $p$ to $l$. By the previous analysis we have that the contribution of $f(\vec p)$ to $$ \sum_{\vec n\in l\cap X_{j}^{-}}A_{r_{\vec n}}f(\vec n)-A_{r_{\vec n}+1}f(\vec n+\vec e_{j})+\sum_{\vec n\in l\cap X_{j}^{+}}A_{r_{\vec n+\vec e_{j}}}f(\vec n+\vec e_{j})-A_{r_{\vec n+\vec e_{j}}+1}f(\vec n) $$ is less than or equal to \begin{equation}\label{cent peso maximo} \frac{2}{N_{1,d}(d(l,\vec p))}. \end{equation} As $p$ belongs to $d$ lines of the lattice, given $k\in \mathbb{N}$ there exist $d(N_{1,d-1}(k)-N_{1,d-1}(k-1))$ lines such that $d(l,\vec p)=k$. Thus the contribution of $f(\vec p)$ to the right-hand side of \eqref{suma d>1} is less than or equal to $$ \left(2d+\sum_{k\geq 1}\frac{2d(N_{1,d-1}(k)-N_{1,d-1}(k-1))}{N_{1,d}(k)}\right), $$ and as a consequence of this we obtain the desired inequality. \smallskip If $f$ is a delta function, then there exist $\vec y\in\mathbb{Z}^{d}$ and $k\in\mathbb{R}$ such that $$ f(\vec y)=k \ \ \text{and}\ \ \ f(\vec x)=0\ \ \forall \ \vec x\in\mathbb{Z}^{d}\setminus\{y\}. $$ Considering the contribution of $|f(\vec y)|$ to a line $l$ in the lattice $\mathbb{Z}^{d}$ we have equality in \eqref{cent peso maximo}, and hence in \eqref{eq main theo cent}. On the other hand, let us assume that $f:\mathbb{Z}^{d}\to\mathbb{R}$ is a nonnegative function that verifies the equality in \eqref{eq main theo cent}. 
We define $P=\{\vec t\in\mathbb{Z}^{d}; f(\vec t)\neq 0\}$ and then $$ {\rm Var\,} M_{1}f=\left(2d+\sum_{k\geq 1}\frac{2d(N_{1,d-1}(k)-N_{1,d-1}(k-1))}{N_{1,d}(k)}\right)\sum_{\vec t\in P}f(\vec t). $$ Therefore, given $\vec s=(s_{1},s_{2},\dots,s_{d})\in P$ and a line $l$ in the lattice, the contribution of $f(\vec s)$ to $l$ in \eqref{cent peso maximo} must be $\frac{2}{N_{1,d}(d(l,\vec s))}$ by the previous analysis. Then, if there exists $\vec u\in P\setminus\{\vec s\}$, the contribution of $f(\vec u)$ to $l$ in \eqref{suma d>1} must also be $ \frac{2}{N_{1,d}(d(l,\vec u))}$. Assume without loss of generality that $s_{d}>u_{d}$ and consider the line $l=\{(s_{1},s_{2},\dots,s_{n-1},x); x\in\mathbb{Z}\}$. As we have equality in \eqref{eq main theo cent}, given $\vec n\in l$ such that $n_{d}\geq s_{d}$, we need to have that $\vec n\in X^{-}_{j}$ and $|\vec n-\vec s|_{1}=r_{\vec n}=|\vec n-\vec u|_{1}$, which gives us a contradiction. Thus $f$ must be a delta function. \section{Proof of Theorem \ref{main theo noncent}} \subsection{Preliminaries} As before we start noticing that, since $f\in \ell^{1}(\mathbb{Z}^{d})$, for each $\vec n \in \mathbb{Z}^d$ there exist $r_{\vec n}\in\mathbb{R}^{+}$ and $c_{\vec n}\in\mathbb{R}^{d}$ such that $\vec n\in c_{\vec n}+Q(r_{\vec n})$ and $\widetilde Mf(\vec n)=A_{r_{\vec n}}f( c_{\vec n})$, where $Q_{r_{\vec n}}=\{m\in\mathbb{Z}^{d}; |m|_{\infty}\leq r_{\vec n}\}=\{m\in\mathbb{Z}^{d},\max\{|m_{1}|,\ldots,|m_{d}|\}\leq r_{\vec n}\}$. We now introduce the local maxima and minima of a discrete function $g:\mathbb{Z} \to \mathbb{R}$.\footnote{The local extrema are defined slightly differently in \cite{BCHP, CH}, but used with the meaning stated here.} We say that an interval $[n,m]$ is a {{\it string of local maxima}} of $g$ if $$g(n-1) < g(n) = \ldots = g(m) > g(m+1).$$ If $n = -\infty$ or $m = \infty$ (but not both simultaneously) we modify the definition accordingly, eliminating one of the inequalities. 
The rightmost point $m$ of such a string is a {\it right local maximum} of $g$, while the leftmost point $n$ is a {\it left local maximum} of $g$. We define {\it string of local minima}, {\it right local minimum} and {\it left local minimum} analogously. \smallskip Given a line $l$ in the lattice $\mathbb{Z}^{d}$ parallel to $\vec e_{d}$ there exists $n'\in\mathbb{Z}^{d-1}$ such that $l=\{(n',m); m\in\mathbb{Z}\}$. Let us assume that $\widetilde{M}f(n',x)$ is not constant as function of $x$ (otherwise the variation of the maximal function over this line will be zero). Let $\{[a_j^-,a_j^+]\}_{j \in \mathbb{Z}}$ and $\{[b_j^-,b_j^+]\}_{j \in \mathbb{Z}}$ be the ordered strings of local maxima and local minima of $\widetilde{M}f(n',x)$ (we allow the possibilities of $a_j^{-}$ or $b_j^{-} = - \infty$ and $a_j^{+}$ or $b_j^{+} = \infty$), i.e. \begin{equation}\label{Sec4_sequence} \ldots < a_{-1}^- \leq a_{-1}^+ < b_{-1}^- \leq b_{-1}^+ < a_0^- \leq a_0^+ < b_0^-\leq b_0^+ < a_1^- \leq a_1^+ < b_1^- \leq b_1^+ < \ldots \end{equation} This sequence may terminate in one or both sides and we adjust the notation and the proof below accordingly. Note that we have at least one string of local maxima since $\widetilde{M}f(\vec n) \to 0$ as $|\vec n|_{\infty} \to \infty$, therefore, if the sequence terminates in one or both sides, it must terminate in a string of local maxima. The variation of the maximal function in $l$ is given by \begin{equation}\label{sum min max} 2\sum_{j\in\mathbb{Z}}\widetilde Mf(n',a^{+}_{j})-\widetilde Mf(n',b^{-}_{j})\leq 2\sum_{j\in\mathbb{Z}}A_{r_{(n',a_{j}^+)}}f( c_{(n',a_{j}^+)})-A_{r_{(n',a_{j}^+)}+|a^{+}_{j}-b^{-}_{j}|}f( c_{(n',a_{j}^+)}). \end{equation} \smallskip We now prove an auxiliary lemma. \begin{lemma}\label{a lo mas un} Given $\vec q\in\mathbb{Z}^{d}$ and a line $l$ in the lattice $\mathbb{Z}^{d}$. 
There exists at most one string of local maxima of $\widetilde Mf$ in $l$ such that there exists $\vec n$ in the string whose contribution of $f(\vec q)$ to $A_{r_{\vec n}}f(c_{\vec n})$ is positive. \end{lemma} \begin{proof} Assume without loss of generality that $l=\{(m_{1},m_{2},\dots,m_{d-1}, x); \, x\in\mathbb{Z}\}=\{(m',x);\,x \in\mathbb{Z}\}.$ Consider a string of local maxima of $\widetilde Mf$ in $l$ \begin{equation}\label{string lem proof} \widetilde Mf(m',a-1) < \widetilde Mf(m',a) = \ldots = \widetilde Mf(m',a+n) > \widetilde Mf(m',a+n+1). \end{equation} Let $$ \widetilde Mf(m',a+i)=A_{r_{(m',a+i)}}f( c_{(m',a+i)})\ \ \forall\ 0\leq i\leq n. $$ Given $\vec q=(q_{1},q_{2},\dots,q_{d})\in\mathbb{Z}^{d}$, a necessary condition for the contribution of $f(\vec q)$ to $ A_{r_{(m',a+i)}}f( c_{(m',a+i)})$ to be positive for some $i$ is that $a-1< q_{d}< a+n+1$ (otherwise this would violate one of the endpoint inequalities in \eqref{string lem proof}). The result follows from this observation. \end{proof} \subsection{Proof of Theorem \ref{main theo noncent}} Given $\vec p\in\mathbb{Z}^{d}$ and a line $l$ in the lattice $\mathbb{Z}^{d}$, we define $d(l,\vec p)= \min\{|\vec p-\vec m|_{\infty};\, \vec m\in l\}$ and $d(l,\vec p)_{+}=\max\{1,d(l,\vec p)\}$. As consequence of Lemma \ref{a lo mas un}, given $\vec p=(p_{1},p_{2},\dots,p_{d-1},p_{d})\in\mathbb{Z}^{d}$ and a line $l=\{(n_{1},n_{2},\dots,n_{d-1},x)\in\mathbb{Z}^{d}; \, x\in\mathbb{Z}\}$ such that $\big|\{i\in\{1,2,\dots,d-1\};|n_{i}-p_{i}|=d(l,\vec p)\}\big|=j$, the contribution of $f(\vec p)$ to the right-hand side of \eqref{sum min max} is less than or equal to \begin{equation}\label{dist line} \frac{2}{(d(l,\vec p)+1)^{j}(d(l,\vec p))_{+}^{d-j}}. \end{equation} In fact, if an $\ell^\infty$-cube contains $\vec p$ and a point in $l$ then it must have side at least $d(l,\vec p)$, and it must contain $(d(l,\vec p)+1)$ lattice points in each direction $\vec e_i$ for $i$ such that $|n_{i}-p_{i}|=d(l,\vec p)$. 
In the other $d-j$ directions the cube contains at least $d(l,\vec p)$ lattice points. This leads to \eqref{dist line}. \smallskip If equality in \eqref{dist line} is attained for a point $\vec p$ and a line $l$, then there is a point $\vec q \in l$ that realizes the distance to $\vec p$, belongs to a string of local maxima of $\widetilde Mf$ in $l$, and is such that $\vec p\in c_{\vec q}+Q(r_{\vec q})$. Moreover, this string of local maxima must be unique, otherwise $f(\vec p)$ would also have a negative contribution coming from a string of local minima in \eqref{sum min max}. In particular this implies that $\widetilde Mf(\vec p)\geq \widetilde Mf(\vec n)$ for all $\vec n\in l$. If we fix a point $\vec p$ and assume that equality in \eqref{dist line} is attained {\it for all lines} $l$ in our lattice, then $\widetilde Mf(\vec p)\geq \widetilde Mf(\vec n)$ for all $\vec n\in \mathbb{Z}^d$. \smallskip Therefore, since $\vec p$ belongs to $d$ lines of the lattice $\mathbb{Z}^{d}$, and since for given $k\in\mathbb{N}$ and $j\in\{1,2,\dots,d-1\}$ there exist $2^{j}{{d-1 \choose j}}(2(k-1)+1)^{d-1-j}$ lines $l=\{(n_{1},n_{2},\dots,n_{d-1},x);\,x\in\mathbb{Z}\}$ such that $d(l,\vec p)=k$ and $\big|\{i\in\{1,2,\dots,d-1\};|n_{i}-p_{i}|=k\}\big|=j$, the contribution of $f(\vec p)$ to the variation of the maximal function in $\mathbb{Z}^{d}$ is less than or equal to \begin{eqnarray*} &&2d+d\sum_{k\geq 1}\sum_{j=1}^{d-1}2^{j}{d-1 \choose j}(2k-1)^{d-1-j}\frac{2}{(k+1)^{j}\,k^{d-j}}\\ &&=2d+\sum_{k\geq 1}\frac{2d}{k}\sum_{j=1}^{d-1}{d-1\choose j}\left(\frac{2}{k+1}\right)^{j}\left(\frac{2k-1}{k}\right)^{d-1-j}\\ &&=2d+\sum_{k\geq 1}\frac{2d}{k}\left(\left(\frac{2}{k+1}+\frac{2k-1}{k}\right)^{d-1}-\left(\frac{2k-1}{k}\right)^{d-1}\right). \end{eqnarray*} This concludes the proof of \eqref{eq noncent d>1}.
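As a numerical aside (not part of the proof, and with function names of our own choosing), both the line-counting factor and the truncated series above can be checked directly. For $d=1$ the $j$-sum is empty and the constant reduces to $2$, while for $d=2$ the series telescopes, since the bracket equals $2/(k+1)$, giving the exact value $12$.

```python
import math

def lines_at_distance(d, k, j):
    """For j >= 1: number of lines l parallel to e_d with d(l, p) = k whose
    distance is attained in exactly j of the d-1 transverse coordinates,
    i.e. 2^j * C(d-1, j) * (2(k-1)+1)^(d-1-j).  The j = 0 value counts the
    lines with d(l, p) <= k-1, so summing j = 0,...,d-1 gives (2k+1)^(d-1),
    the total number of such lines with d(l, p) <= k (binomial identity)."""
    return 2**j * math.comb(d - 1, j) * (2 * (k - 1) + 1) ** (d - 1 - j)

def variation_constant(d, kmax=100_000):
    """Truncated evaluation of the constant
    C(d) = 2d + sum_{k>=1} (2d/k) * [ (2/(k+1) + (2k-1)/k)^(d-1)
                                      - ((2k-1)/k)^(d-1) ]
    appearing in the bound Var(Mf) <= C(d) * ||f||_1 derived above.
    The summand is O(1/k^2), so the truncation error is O(1/kmax)."""
    total = 2.0 * d
    for k in range(1, kmax + 1):
        b = (2.0 * k - 1.0) / k
        a = 2.0 / (k + 1) + b
        total += (2.0 * d / k) * (a ** (d - 1) - b ** (d - 1))
    return total
```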
\smallskip If $f$ is a delta function, with $f(\vec n)=0$ for all $\vec n\in\mathbb{Z}^{d}\setminus\{\vec p\}$ for some $\vec p\in\mathbb{Z}^{d}$, it is easy to see that we have equality in \eqref{dist line} for the contribution of $|f(\vec p)|$ to all lines $l$, which implies equality in \eqref{eq noncent d>1}. On the other hand, let us assume that $f:\mathbb{Z}^{d}\to\mathbb{R}$ is a nonnegative function that attains the equality in \eqref{eq noncent d>1}. We define $P=\{\vec t\in\mathbb{Z}^{d};\, f(\vec t)\neq 0\}$ and thus $$ {\rm Var\,}\widetilde Mf=\left(2d+\sum_{k\geq 1}\frac{2d}{k}\left(\left(\frac{2}{k+1}+\frac{2k-1}{k}\right)^{d-1}-\left(\frac{2k-1}{k}\right)^{d-1}\right)\right)\sum_{\vec t\in P}f(\vec t). $$ Then, given $\vec s\in P$, if there exists $\vec u\in P\setminus\{\vec s\}$, we consider a line $l$ in the lattice $\mathbb{Z}^{d}$ such that $\vec s\in l$ and $\vec u\notin l$. By the previous analysis, the contribution of $f(\vec s)$ to $l$ must be $2$, $\widetilde Mf(\vec s)=f(\vec s)$ belongs to the unique string of local maxima of $\widetilde Mf$ in $l$, and the right-hand side of \eqref{sum min max} must be $2f(\vec s)$. Therefore the contribution of $f(\vec u)$ to the line $l$ is $0$, so $f(\vec u)$ does not provide the maximal contribution predicted in \eqref{dist line}; hence equality in \eqref{eq noncent d>1} cannot be attained. We conclude that $f$ must be a delta function. \section*{Acknowledgments} \noindent I am deeply grateful to my advisor Emanuel Carneiro for encouraging me to work on this problem, for all the fruitful discussions and for his guidance throughout the preparation of this paper. I would like to thank Renan Finder and Esteban Arreaga for all the interesting discussions related to the proof of Lemma \ref{fund lemma}. I also want to thank Mateus Sousa for a careful review of this paper. The author also acknowledges support from CAPES-Brazil.
Prophetic Voices Why Britain? A brief history Kathleen Raine 1908 – 2003 Kathleen Raine was a poet, a critic, a scholar and more than that. In the re-emergence of Albion as a mythic idea she is important because of her love for William Blake and her understanding of where Britain stood during its troubled twentieth century; two world wars followed by the emergence of hope later in the century. She often referred to Blake as her master and famously quoted, 'In time of trouble, I kept the divine vision.' In her 1973 autobiography 'Farewell Happy Fields' she portrayed her beloved Northumberland as Eden and told of how she was able to see many of Shakespeare's plays when she was growing up. It is said that she knew much of the Bible by heart. 'Poetry is the very essence of life' was another of her famous quotes and she believed that poets belonged to another plane of existence; a higher world. Later formal education took place at Girton College, Cambridge, where she met Dr Jacob Bronowski and Humphrey Jennings, and she began to turn away from the natural sciences towards poetry, and endowed her thought with a Platonic vision of Goodness, Beauty and Truth. Subsequently she met Ted Hughes, Rosamond Lehmann, Wendell Berry, T.S. Eliot and John Taverner, expanding her circle of friends continuously. In 1981, at the age of 73, she co-founded the Temenos Academy Review with Keith Critchlow, Brian Keeble and Philip Sherrard to feature and promote the truth of the Perennial Wisdom 'as it has always been in the several spiritual civilisations.' This review publication contained lectures and poetry, narrative and book reviews and was published to a remarkably high standard. At the time it said, 'The Temenos Academy Review is making history.
It will be a record for the future of the leading and seminal thought of this time in England and the English-speaking world, and of something of the rich contribution from other traditions especially those of the Orient – which are now bringing about a new renaissance in the West. We are for this time the successor of The Yellow Book of the Pre-Raphaelites and the Criterion edited by T.S. Eliot between the First and Second World Wars.' Later, in 1990, they initiated the Temenos Academy of Integral Studies promoting a universalist philosophy. At the time Kathleen spoke of Temenos standing for the treasure of 'things new and old' which are always timely because they belong to the unchanging nature of things. She grasped the importance of declaring 'the learning of the Imagination' anchored in the Perennial Philosophy. This stream led her to spend time in New York with the Lindisfarne Association, founded by William Irwin Thompson, where according to Kathleen the two currents of the 'New Age and Perennial Philosophy' did not find a successful accord. She agreed with Yeats that poetry and religion is the same thing and was inspired in later life to reach out to H.R.H. Prince of Wales saying, 'Anything I can do for him, I will do.' He returned the help by patronage of the Temenos Academy writing in 'Lighting A Candle' (reflections, memories and tributes to Kathleen Raine): 'She did her utmost to re-awaken Albion "sunk in deadly sleep" and to remind us that what Blake wished to bring about was nothing less than a reversal of the premises of materialism; not that people should be a little more spiritual and a little more imaginative but that we should understand that the cosmos is not a mechanism but a living, sacred universe and that "Everything that lives is holy."' Lighting A Candle / Kathleen Raine and Temenos (Temenos Academy 2008) The Collected Poems of Kathleen Raine Ed. 
Brian Keeble (Golgonooza Press 2000) Blake and Tradition (RKP 1969) Blake and the New Age (George Allen and Unwin 1979) Yeats, The Initiate (George Allen and Unwin 1987) Seeing God Everywhere; Essays on Nature and the Sacred (World Wisdom 2004) Farewell Happy Fields (Hamilton/G.Braziller 1974) (Pt 1 autobiography) The Land Unknown (Hamilton/G.Braziller 1975) (Pt 2 autobiography) The Lion's Mouth (Hamilton/G.Braziller 1977) (Pt 3 autobiography) www.temenosacademy.org Obituary in The Guardian
Mischa Barton's neighbors told police the actress threatened to kill herself, 911 audio reveals Feb 09, 2017 | 11:44 AM Audio from 911 calls during Mischa Barton's neighborhood disturbance last month reveals her neighbors told police the actress was threatening to take her own life. At least two people called the police to report Barton was hysterically crying in her backyard shortly before she willingly checked into a hospital for mental evaluation, according to audio obtained by TMZ. "My back downstairs neighbor is hysterically crying in the backyard and says she's going to kill herself," a female neighbor told police in her phone call. "She's screaming and she keeps saying 'I just want to die.'" Barton, 31, has since claimed her alarming behavior in the early hours of Jan. 26 was the result of her being unknowingly slipped GHB — commonly known as a "date rape drug" — the night before. She told People magazine in a statement that hospital staffers told her she had been given the substance. The female neighbor who called the police told the dispatcher that it's not uncommon for authorities to visit Barton's home. "They go see her at least every few months because she loses it," she said. A male neighbor, meanwhile, reiterated that Barton was making alarming claims in what appeared to be a separate call. "She's saying that 'It's all over,' that 'Everything is done,' and then just wailing," the man told authorities. Barton — best known as a star on former Fox teen drama "The O.C." — went to the hospital once the paramedics came to her home. The hospitalization comes more than seven years after the actress was put on an involuntary psychiatric hold following an apparent freak-out in July 2009. Barton was photographed alongside an unidentified man in West Hollywood earlier this week, marking the first time she's been spotted publicly since last month's incident.
\section{\label{sec1}Introduction} The in-medium nucleon-nucleon interaction has been an object of intensive theoretical and experimental research in modern nuclear physics over the last few decades, see~\cite{EoS} for a review. The main finding was a softening of the nuclear equation of state at densities reached in intermediate energy nucleus-nucleus collisions, which was consistent with a variety of phenomenological~\cite{pheno} and microscopic~\cite{micro} models. In addition, the empirical saturation of the proton-nucleus optical potential turned out to be consistent with heavy-ion theoretical studies \cite{cass1}. While the bare antinucleon-nucleon ($\overline{N}N$) interaction has been actively studied, see Refs.~\cite{lear} and references therein, empirical information on the in-medium interactions of antinucleons is still very poor. Antiproton production has been investigated theoretically in reactions induced by protons \cite{cass2a} and heavy ions in the SIS-energy region \cite{cass2}, where some data on antiprotons were available. Complementary studies of antiproton annihilation in nuclei~\cite{oset} and antiprotonic atoms~\cite{gal} provided further insight into the optical potential at very low energies, however, with rather big uncertainties in the nuclear interior due to the strong annihilation cross section at the surface of the nucleus. In the near future the FAIR facility intends to study the still controversial and empirically less known high energy domain of the (anti)nuclear interactions in more detail than before. For instance, the nuclear equation of state for strangeness degrees of freedom and also the in-medium antinucleon-nucleon interaction are some of the key projects~\cite{panda_big}. They are relevant for the formation of exotic (anti)matter systems such as double-strange hypernuclei and $\overline{\Lambda}$-hypernuclei in antiproton-induced reactions in the $\overline{\mbox P}$ANDA~experiment at FAIR~\cite{panda}.
Microscopic Brueckner-Hartree-Fock calculations of in-medium $\overline{N}N$ scattering have been carried out in~\cite{abhf}. On the other hand, relativistic hadrodynamics (RHD) provides a complementary theoretical background for phenomenological models. It is based on relativistic mean-field (RMF) theory, which is a well-established tool for infinite and finite nuclear systems~\cite{wal74}. However, as already shown many years ago~\cite{cass2}, there are still unresolved problems in RMF models when applying them to antiproton-nucleus scattering and to heavy-ion collisions. By just imposing G-parity arguments, as in microscopic models~\cite{heidenbauer,abhf}, RMF models do not describe the experimental data~\cite{cass2,larionov,mishustin}. This incompatibility of mean-field models with G-parity symmetry has also been seen in recent transport studies~\cite{larionov}, where the antinucleon-meson couplings had to be strongly decreased by hand in order to reproduce the empirical data. In this work we address the question of why conventional RMF models fail to describe antiproton-nucleus Dirac phenomenology. To be more specific, our studies are based on the non-linear derivative (NLD) model \cite{nld} of RMF. The NLD model describes simultaneously the density dependence of the nuclear equation of state and the energy dependence of the proton-nucleus optical potential. The latter feature is missing in standard RMF models. Then, applying the G-parity transformation, it is shown that the real part of the proton \textit{and simultaneously} the real part of the antiproton optical potential are reproduced fairly well in comparison with phenomenological studies. We finally make predictions for the depth of the real part of the antiproton optical potential and estimate its imaginary part at low energies and at energies relevant for the forthcoming experiments at FAIR.
\section{\label{sec2}NLD formalism} The NLD approach~\cite{nld} to nuclear matter is based essentially on the Lagrangian density of RHD~\cite{wal74}. It describes the interaction of nucleons through the exchange of auxiliary meson fields (Lorentz-scalar, $\sigma$, and Lorentz-vector meson fields $\omega^{\mu}$)~\cite{dbhf} \begin{equation} \mathcal{L} = \mathcal{L}_{Dirac} + \mathcal{L}_{mes} + \mathcal{L}_{int} \;. \label{NDC-free} \end{equation} The Lagrangian in Eq.~(\ref{NDC-free}) consists of the free Lagrangians for the Dirac field $\Psi$ and for the meson fields $\sigma$ and $\omega^{\mu}$. The isovector meson $\rho$ is not considered here, for simplicity. In conventional RHD the interaction Lagrangian ${\cal L}_{int}$ contains meson fields which couple to the Dirac field via the corresponding Lorentz-density operators $g_{\sigma}\overline{\Psi}\Psi\sigma$ and $-g_{\omega}\overline{\Psi}\gamma^{\mu}\Psi\omega_{\mu}$ for the scalar and vector parts, respectively. Such interactions describe rather successfully the saturation properties of nuclear matter, but they miss the energy dependence of the mean field. A possible solution to this problem has been proposed in~\cite{cass2a} where the momentum-dependent phenomenological form factors were introduced. In~\cite{nld} this idea has been generalized in a manifestly covariant way. In particular, the symmetrized interaction in the NLD model is given by \begin{align} {\cal L}_{int} & = \frac{g_{\sigma}}{2} \left[ \overline{\Psi} \, \stackrel{\leftarrow}{{\cal D}} \Psi\sigma +\sigma\overline{\Psi} \, \stackrel{\rightarrow}{{\cal D}} \Psi \right] - \frac{g_{\omega}}{2} \left[ \overline{\Psi} \, \stackrel{\leftarrow}{{\cal D}} \gamma^{\mu}\Psi\omega_{\mu} +\omega_{\mu}\overline{\Psi}\gamma^{\mu} \, \stackrel{\rightarrow}{{\cal D}} \Psi \right] \;. \label{NDC} \end{align} The interaction between the Dirac and the meson fields has a similar functional form as in standard RHD \cite{wal74}. 
However, now new operators ${\cal D}$ acting on the nucleon fields appear, which are non-linear functionals of partial derivatives \begin{equation} \stackrel{\rightarrow}{{\cal D}} := \exp{\left(\frac{-v^{\beta}i\stackrel{\rightarrow}{\partial}_{\beta}+m}{\Lambda}\right)} ~,~ \stackrel{\leftarrow}{{\cal D}} := \exp{\left(\frac{i\stackrel{\leftarrow}{\partial}_{\beta}v^{\beta}+m}{\Lambda}\right)} \;. \label{ope} \end{equation} In Eq.~(\ref{ope}) $v^{\beta}$ denotes a dimensionless auxiliary $4$-vector and $\Lambda$ stands for the cut-off parameter. The latter has been adjusted to the saturation properties of nuclear matter~\cite{nld}. In the limiting case of $\Lambda\rightarrow\infty$ the standard Walecka model is recovered. The NLD Lagrangian $\mathcal{L}$ is a functional not only of $\Psi$, $\overline{\Psi}$ and their first derivatives, but of all higher order covariant derivatives of $\Psi$ and $\overline{\Psi}$ as well. For such a generalized functional the Euler-Lagrange equations take the form \cite{nld} \begin{align} \frac{\partial{\cal L}}{\partial\phi} - \partial_{\alpha_{1}}\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}}\phi)} &+ \partial_{\alpha_{1}}\partial_{\alpha_{2}}\frac{\partial{\cal L}}{\partial(\partial_{\alpha_{1}}\partial_{\alpha_{2}}\phi)} + \cdots + \nonumber \\ &(-)^{n}\partial_{\alpha_{1}}\partial_{\alpha_{2}}\cdots\partial_{\alpha_{n}} \frac{\partial{\cal L}} {\partial(\partial_{\alpha_{1}}\partial_{\alpha_{2}}\cdots\partial_{\alpha_{n}}\phi)}= 0 \;. \label{Euler} \end{align} Contrary to the standard form of the Euler-Lagrange equations, an infinite series of terms ($n\rightarrow\infty$) proportional to higher order derivatives of the Dirac field $(\phi=\Psi,\overline{\Psi})$ now appears. These terms can be evaluated by a Taylor expansion of the non-linear derivative operators~(\ref{ope}).
As shown in \cite{nld}, in nuclear matter the infinite series of terms can be resummed exactly and the following Dirac equation is obtained \begin{equation} \left[ \gamma_{\mu}(i\partial^{\mu}-\Sigma^{\mu}) - (m-\Sigma_{s}) \right]\Psi = 0\;, \label{Dirac} \end{equation} with Lorentz-vector and Lorentz-scalar self-energies defined as follows \begin{equation} \Sigma^{\mu} = g_{\omega}\omega^{\mu}\, e^{\frac{-v^{\beta}i\stackrel{\rightarrow}{\partial}_{\beta}+m}{\Lambda}} ~,~ \Sigma_{s} = g_{\sigma}\sigma\, e^{\frac{-v^{\beta}i\stackrel{\rightarrow}{\partial}_{\beta}+m}{\Lambda}} \;. \label{Sigma} \end{equation} The Proca and Klein-Gordon equations for the meson fields can also be derived \begin{align} \partial_{\mu}F^{\mu\nu} + m_{\omega}^{2}\omega^{\nu} &= \frac{1}{2}g_{\omega} \left[ \overline{\Psi}e^{\frac{i\stackrel{\leftarrow}{\partial}_{\beta}v^{\beta}+m}{\Lambda}} \gamma^{\nu}\Psi + \overline{\Psi}\gamma^{\nu}e^{\frac{-v^{\beta}i\stackrel{\rightarrow}{\partial}_{\beta}+m}{\Lambda}} \Psi \right], \label{omega_meson} \\ \partial_{\mu}\partial^{\mu}\sigma + m_{\sigma}^{2}\sigma &= \frac{1}{2}g_{\sigma} \left[ \overline{\Psi}e^{\frac{i\stackrel{\leftarrow}{\partial}_{\beta}v^{\beta}+m}{\Lambda}} \Psi + \overline{\Psi}e^{\frac{-v^{\beta}i\stackrel{\rightarrow}{\partial}_{\beta}+m}{\Lambda}} \Psi \right] \;, \label{sigma_meson} \end{align} with the field tensor $F^{\mu\nu}=\partial^{\mu}\omega^{\nu}-\partial^{\nu}\omega^{\mu}$. The meson field equations~(\ref{omega_meson}) and~(\ref{sigma_meson}) show a similar form as in the linear Walecka model of RHD, except for the highly non-linear behavior of the source terms, which generate self-consistent couplings between the meson-field equations. Applying the usual RMF approximation to the idealized system of infinite nuclear matter, the Dirac equation (\ref{Dirac}) maintains its original form.
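As an illustrative aside (our notation, not taken from Ref.~\cite{nld}): acting on a plane wave $\Psi\propto e^{-ip\cdot x}$ with $v^{\beta}=(1,\vec 0\,)$, every derivative in the operator of Eq.~(\ref{ope}) is replaced by a component of $p^{\beta}$, so the Taylor series collapses to the scalar factor $e^{-(E-m)/\Lambda}$ that multiplies the self-energies in the RMF limit. A short numerical check of this resummation:

```python
import math

def resummed_factor(E, m=939.0, Lam=770.0, order=40):
    """On a plane wave, i*d_beta -> p_beta, so the exponent of the operator
    in Eq. (ope) becomes the pure number x = -(E - m)/Lam for v = (1, 0).
    Returns the Taylor series of exp(x), truncated at the given order, next
    to the closed-form factor exp(-(E - m)/Lam).  Energies in MeV."""
    x = -(E - m) / Lam
    series = sum(x ** n / math.factorial(n) for n in range(order))
    return series, math.exp(x)
```

For kinetic energies of a few GeV the factor drops well below unity, which is the origin of the suppression of the NLD self-energies with energy discussed below.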
However, we have to distinguish between nucleons ($N$) forming the nuclear matter and antinucleons ($\overline{N}$) which interact with the nuclear matter. For the description of antiparticles we require G-parity invariance of the Dirac equation and then follow the standard procedure of applying a G-parity transformation ${\rm G}={\rm C}e^{i\pi I_{2}}$ to the negative energy states, where $I_{2}$ is the operator associated with the 2nd component of the isospin "vector" and C is the charge conjugation operator. The invariance of the Dirac equation under charge conjugation requires that the auxiliary vector $v^{\beta}$ must be odd under C-parity transformation. With our choice of $v^{\beta}=(1,\vec{0}\;)$ for positive energy solutions~\cite{nld} this results in $v^{\beta}=(-1,\vec{0}\;)$ for the charge conjugated Dirac field. This leads to the following Dirac equations for nucleons \begin{eqnarray} \left[ \gamma_{\mu}(i\partial^{\mu}-\Sigma^{\mu}) - (m-\Sigma_{s}) \right]\Psi_{N} & = & 0 \label{Dirac_p} \end{eqnarray} and antinucleons \begin{eqnarray} \left[ \gamma_{\mu}(i\partial^{\mu}+\Sigma^{\mu}) - (m-\Sigma_{s}) \right]\Psi_{\overline{N}} & = & 0 \label{Dirac_pbar} \end{eqnarray} interacting with nuclear matter, where $\Psi_{N}=\Psi^{+}$ and $\Psi_{\overline{N}}=\Psi_{C}$ denote the positive energy and the charge conjugated Dirac fields, respectively. The nucleon and antinucleon self-energies entering Eqs.~(\ref{Dirac_p}) and~(\ref{Dirac_pbar}) are the same \begin{eqnarray} \Sigma_{v}\equiv \Sigma^{0} &=& g_{\omega}\omega_{0}e^{-\frac{E-m}{\Lambda}}\;, \nonumber \\ \Sigma_{s} &=& g_{\sigma}\sigma e^{-\frac{E-m}{\Lambda}} \;. \label{SelfenNM} \end{eqnarray} However, note the opposite signs in the Lorentz-vector interactions in Eqs.~(\ref{Dirac_p}) and (\ref{Dirac_pbar}). 
Furthermore, the single particle energies $E$ have to be obtained from the in-medium mass-shell conditions, which are different for nucleons ($N$) and antinucleons ($\overline{N}$) \begin{equation} E_{N}(p) = \sqrt{p^{2}+m^{*2}}+\Sigma_{v}~,~~ E_{\overline{N}}(p) = \sqrt{p^{2}+m^{*2}}-\Sigma_{v} \;. \label{mass-shel} \end{equation} The in-medium (or effective) Dirac mass in Eq.~(\ref{mass-shel}) is given by $m^{*}=m-\Sigma_{s}$. Note that $m^{*}$ depends explicitly on the particle momentum. Again, in the limiting case of $\Lambda\rightarrow\infty$, the exponential factor is equal to unity and the equations reduce to those of the Walecka model. In the NLD model the cut-off parameter $\Lambda$ is of natural size, i.e., of the typical hadronic mass scale of this problem. In the following, $\Lambda=770$~MeV is chosen, as in the original work \cite{nld}. In nuclear matter the NLD equations of motion for $\omega$ and $\sigma$ simplify to standard algebraic equations \begin{equation} m_{\omega}^{2}\omega^{0} = g_{\omega}\rho_{v} ~,~ m_{\sigma}^{2}\sigma = g_{\sigma}\rho_{s} \; \label{mesonsNM} \end{equation} with the corresponding density sources $\rho_{v} = \langle \overline{\Psi}_{N} \gamma^{0} e^{-\frac{E-m}{\Lambda}}\Psi_{N}\rangle$ and $\rho_{s} = \langle \overline{\Psi}_{N} e^{-\frac{E-m}{\Lambda}}\Psi_{N}\rangle$. The vector density $\rho_{v}$ is not related to the conserved nucleon density $\rho$. The latter has to be derived from a generalized Noether theorem \cite{nld} and reads \begin{align} J^{0} \equiv \rho = \langle \overline{\Psi}_{N}\gamma^{0}\Psi_{N} \rangle \label{rhoBarOld} + \frac{g_{\omega}}{\Lambda} \langle \overline{\Psi}_{N}\gamma^{0}e^{-\frac{E-m}{\Lambda}} \Psi_{N} \rangle \omega_{0} - \frac{g_{\sigma}}{\Lambda} \langle \overline{\Psi}_{N}e^{-\frac{E-m}{\Lambda}} \Psi_{N} \rangle \sigma \quad . \end{align} The meson-nucleon couplings $g_{\omega}$ and $g_{\sigma}$ can be taken from any linear Walecka model, e.g., \cite{wal74}, as has been done here.
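For illustration, the self-consistency between the cut-off factor $e^{-(E-m)/\Lambda}$ and Eq.~(\ref{mass-shel}), where the self-energies depend on $E$ while $E$ in turn depends on the self-energies, can be solved by a simple fixed-point iteration. The following sketch treats $g_{\omega}\omega_{0}$ and $g_{\sigma}\sigma$ as given inputs (placeholder values, not solutions of Eq.~(\ref{mesonsNM})):

```python
import math

def single_particle_energy(p, gw_omega0, gs_sigma, m=939.0, Lam=770.0,
                           antinucleon=False, tol=1e-9, max_iter=1000):
    """Solve the in-medium mass-shell condition of Eq. (mass-shel) by
    fixed-point iteration: the self-energies carry the cut-off factor
    exp(-(E - m)/Lam), and E = sqrt(p^2 + m*^2) +/- Sigma_v depends on
    them.  gw_omega0 and gs_sigma stand for g_w*omega_0 and g_s*sigma
    (illustrative inputs).  All quantities in MeV."""
    sign = -1.0 if antinucleon else 1.0
    E = math.sqrt(p * p + m * m)  # free-particle starting guess
    for _ in range(max_iter):
        cutoff = math.exp(-(E - m) / Lam)
        sigma_v = gw_omega0 * cutoff
        sigma_s = gs_sigma * cutoff
        m_star = m - sigma_s
        E_new = math.sqrt(p * p + m_star * m_star) + sign * sigma_v
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    raise RuntimeError("fixed-point iteration did not converge")
```

Note that for $p=0$ and equal scalar and vector inputs the scalar attraction and vector repulsion cancel in the nucleon case, so $E_{N}(0)=m$ independently of the cut-off factor.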
Moreover, we use the same coupling constants for both nucleon and antinucleon interactions. \section{Results and Discussion} \begin{figure*}[t] \begin{center} \includegraphics[width=0.7\linewidth]{fig1.eps} \end{center} \vspace{-0.2cm} \caption{Kinetic energy dependence of the scalar and vector Lorentz-components of the antinucleon self-energy in nuclear matter at densities of $\rho=\rho_{\rm sat}$ (left), $\rho=2\rho_{\rm sat}$ (middle) and $\rho=3\rho_{\rm sat}$ (right) using the linear Walecka model (dashed lines), the linear Walecka model with rescaled couplings with the factor $\xi=0.25$~\cite{larionov} (dash-dotted) and the NLD approach (solid lines). } \label{fig1} \end{figure*} We have applied both the NLD and the conventional linear Walecka models to nuclear matter at various baryon densities and also at various nucleon and antinucleon energies relative to matter at rest. At first, we discuss the self-energies, which are real quantities in RMF. Then we focus our study on the energy and density dependencies of the optical potential, first for in-medium proton interactions, and then for the antiproton case. Fig.~\ref{fig1} shows the Lorentz-scalar and Lorentz-vector components of the antinucleon self-energy in nuclear matter, $\Sigma_{s}$ and $\Sigma_{v}$, as a function of the kinetic energy at three baryon densities. The antinucleon kinetic energy is calculated relative to the potential depth of the nuclear matter at rest, i.e., $E_{kin}=E_{\overline{N}}-m=\sqrt{p^{2}+m^{*2}}-\Sigma_{v}-m$. The NLD calculations show an explicit energy dependence for both components of the antinucleon self-energy. In particular, the self-energies decrease with increasing energy, for all baryon densities. On the other hand, with rising baryon density they increase only moderately at each energy. The saturation in energy and density results from the non-linear interaction, as discussed in detail in Ref.~\cite{nld}. 
In the linear Walecka model the Lorentz-vector self-energy grows strongly with increasing density, while the Lorentz-scalar component saturates. Both components in the standard RMF are energy independent. \begin{figure*}[t] \begin{center} \includegraphics[width=0.7\linewidth]{fig1a.eps} \end{center} \vspace{-0.2cm} \caption{Density dependence of the scalar and vector Lorentz-components of the antinucleon self-energy in nuclear matter at energies of $E_{\rm kin}= 0.5$ GeV (left), $E_{\rm kin}= 1$ GeV (middle) and $E_{\rm kin}= 2$ GeV (right) using the linear Walecka model (dashed lines), the linear Walecka model with rescaled couplings with the factor $\xi=0.25$~\cite{larionov} (dash-dotted) and the NLD approach (solid lines).} \label{fig1a} \end{figure*} For antinucleon interactions in nuclear matter the mean-field potential consists of the sum of the scalar and vector self-energies. At vanishing momentum and at saturation density the linear Walecka model leads to a value of $-\Sigma_{v}-\Sigma_{s}\approx -700$~MeV, which is too deep according to phenomenology~\cite{data1,data2}. This feature has always been a critical problem in standard RMF models. Even the inclusion of non-linear self-interactions of the $\sigma$ field (and possibly of the $\omega$ field)~\cite{boguta} does not improve the result, since non-linear self-interactions become pronounced only above the saturation density. On the other hand, the NLD model reduces the depth of the potential at zero momentum considerably, by almost a factor of two. The particular difference of the potential depth at vanishing momentum between conventional RMF and NLD is not a trivial issue. The consequences of such an energy and density behavior will be discussed below when considering the optical potentials.
As discussed in Ref.~\cite{larionov}, in order to reproduce the data from antiproton-induced reactions, the antinucleon-meson coupling constants of the Walecka model have to be rescaled by a factor of $\xi\simeq 0.2-0.3$. Fig.~\ref{fig1} also shows the calculations in the linear Walecka model, but with couplings rescaled by a factor of $\xi=0.25$ (dash-dotted curves in Fig.~\ref{fig1}). Indeed, as one can see in Fig.~\ref{fig1}, the rescaled Walecka model~\cite{larionov} reproduces the NLD results on average. However, the former fails to reproduce the energy dependence and, in particular, the density dependence, as demonstrated in Fig.~\ref{fig1a}. In Fig.~\ref{fig1a} the density dependence (at various fixed kinetic energies) of the antinucleon self-energies is displayed. The NLD self-energies saturate with density and energy in accordance with microscopic Dirac-Brueckner studies, as discussed in detail in~\cite{nld}. In the conventional Walecka model the vector self-energy diverges with increasing density, leading to a too strong repulsion at high densities. In fact, this repulsive effect is softened to a large extent in the rescaled model; however, the linearly divergent behavior of the vector self-energy still remains. The NLD calculations agree (on average) with the rescaled Walecka model only around the saturation density and at kinetic energies around $1$~GeV. The very different energy behavior of the self-energies between the NLD and linear Walecka models influences the Schr\"odinger-equivalent optical potential. In general, it is extracted from (anti)proton-nucleus scattering and is therefore suited for comparisons between theory and empirical studies.
Its real part is given by \begin{equation} \mathfrak{Re} U_{\rm opt} = \pm\frac{E}{m} \Sigma_{v} - \Sigma_{s} + \frac{1}{2m} \left( \Sigma^{2}_{s} - \Sigma_{v}^{2}\right) \:, \label{U_opt} \end{equation} where $E$ is the energy of an (anti)nucleon with bare mass $m$ inside nuclear matter at a fixed baryon density and the upper (lower) sign holds for nucleons (antinucleons). At first we consider the proton-nucleus optical potential. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{fig2.eps} \end{center} \vspace{-0.2cm} \caption{Energy dependence of the Schr\"{o}dinger equivalent proton optical potential at saturation density $\rho_{sat}=0.16~fm^{-3}$. Theoretical calculations in the linear Walecka model (dashed) and the NLD approach (solid) are compared to Dirac phenomenology \protect\cite{hama}. } \label{fig2} \end{figure} Fig.~\ref{fig2} shows the real part of the optical potential according to Eq.~(\ref{U_opt}) as a function of the nucleon kinetic energy $E_{kin}=E-m=\sqrt{p^{2}+m^{*2}}+\Sigma_{v}-m$. The linear Walecka model (dashed curve) predicts the behavior of the optical potential versus energy only qualitatively, and strongly diverges with increasing kinetic energy of the nucleon. It does not reproduce the empirical saturation at higher energies. This problem is well known in RMF and has already attracted much attention in the past~\cite{typel}. Of course, the main reason for such a strong deviation is the missing energy dependence of the self-energy in the standard RMF. As discussed in detail in Ref.~\cite{nld}, the NLD approach resolves this issue of RMF models. The solid curve in Fig.~\ref{fig2} corresponds to the NLD calculations and describes the data very well. On the other hand, the interaction of an antinucleon at a given momentum relative to nuclear matter at rest is quite different from the proton-nucleus interaction: the sign of the Lorentz-vector self-energy changes in Eq.~(\ref{U_opt}).
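A minimal numerical transcription of Eq.~(\ref{U_opt}) reads as follows (illustrative only; the self-energy values fed in are placeholders, not NLD output):

```python
def re_u_opt(sigma_s, sigma_v, E, m=939.0, antinucleon=False):
    """Schroedinger-equivalent optical potential of Eq. (U_opt):
    Re U_opt = +/- (E/m)*Sigma_v - Sigma_s + (Sigma_s^2 - Sigma_v^2)/(2m),
    with the upper sign for nucleons and the lower sign for antinucleons
    (G-parity flips the Lorentz-vector term).  All quantities in MeV."""
    sign = -1.0 if antinucleon else 1.0
    return sign * (E / m) * sigma_v - sigma_s + (sigma_s**2 - sigma_v**2) / (2.0 * m)
```

With equal scalar and vector self-energies the nucleon potential vanishes at $E=m$, while the antinucleon potential is $-2\Sigma_{v}$, which makes the G-parity doubling of the attraction explicit.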
Therefore, in the linear Walecka model the real part of the optical potential is again a linear function of the energy, as in the nucleon case, but now it diverges to $-\infty$ (see Fig.~\ref{fig3}, dashed curve). \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{fig3.eps} \end{center} \vspace{-0.2cm} \caption{Same as Fig.~\protect\ref{fig2}, but for the antiproton case. The filled box at vanishing momentum represents empirical data extrapolated to saturation density \protect\cite{data1}. The second filled area at kinetic energies between $1000$ and $2500$~MeV is taken from transport calculations on antiproton-nucleus reactions \protect\cite{larionov}.} \label{fig3} \end{figure} Such a prediction is in contradiction with calculations using dispersion relations \cite{cass2}. In fact, by fitting the imaginary part of the antinucleon-nucleus optical potential to the total proton-antiproton cross section, its real part decreases with increasing energy. Furthermore, existing information from heavy-ion collisions~\cite{cass2} and from reactions induced by protons~\cite{cass2} and antiprotons~\cite{larionov} gives clear evidence for a considerable reduction of the antiproton-nucleus optical potential with rising energy. As has been discussed in Ref.~\cite{larionov}, the transport theoretical description of antiproton-nucleus data is not possible within the conventional Walecka model, unless one rescales the antinucleon-meson coupling constants by a phenomenological factor of $\xi\approx 0.2$. This is not compatible with G-parity arguments and suggests a strong violation of charge conjugation symmetry in the nuclear medium~\cite{mishustin}, even though charge conjugation is an exact symmetry of the strong interaction. On the contrary, the NLD calculations (solid curve in Fig.~\ref{fig3}) predict a completely different behavior as compared to the Walecka model.
It results in a much softer potential at vanishing momentum and a much stronger decrease of the real part of the optical potential $\mathfrak{Re}U_{\rm opt}$ with increasing energy. Due to the large annihilation cross section, experimental data at low energies can be obtained only at very low densities $\rho\simeq (0.005\div 0.02) \rho_{\rm sat}$ close to the nuclear surface~\cite{data1,data2}, while empirical information at saturation density is obtained by extrapolation only. At these low densities the NLD model leads to values of $\mathfrak{Re} U_{opt}\simeq -(6\div 50)$~MeV, which still seem too deep with respect to the data~\cite{data1,data2}. At the density of interest $\rho=\rho_{\rm sat}$ the NLD model predicts a rather soft potential, which is much closer to extrapolated data~\cite{data1,data2} and dispersion relations~\cite{cass2} (filled box at zero kinetic energy in Fig.~\ref{fig3}). A comparison between our model and phenomenological antiproton-nucleus reactions at higher energies seems more meaningful. In fact, with increasing energy the annihilation cross section drops strongly and it is supposed that the antiprotons may penetrate deeper inside the nuclear interior, and thus densities close to $\rho_{\rm sat}$ can be tested. The second filled area in Fig.~\ref{fig3} shows the empirical optical potential as extracted from the transport theoretical analyses in Ref.~\cite{larionov,larionov2}. In this energy region the comparison between NLD results and transport calculations (which use conventional RMF, but with largely reduced antinucleon-meson couplings) turns out fairly well. Our results are also in qualitative agreement with the analysis of Ref.~\cite{Zhang}, where a strong decrease of $\mathfrak{Re}U_{\rm opt}$ with increasing energy is obtained.
Interestingly, the antinucleon optical potentials $\mathfrak{Re}U_{\rm opt}$ strongly differ at zero momentum between NLD and standard RMF, while in the nucleon case (see Fig.~\ref{fig2}) no differences were visible. By considering the fields at the same baryon density and at zero momentum one would naively expect a similar potential depth for both models. Indeed, the non-linear effects start to dominate above the saturation density~\cite{nld}. However, the observed difference at zero momentum comes from the in-medium dependence of the (anti)nucleon single-particle energy. At fixed saturation density the energy shift, caused by the difference (proton-nucleus) or sum (antiproton-nucleus) of two large fields, varies strongly between the two models. However, small shift variations in the energy affect the NLD self-energies, due to their pronounced energy dependence. On the other hand, the fields of the linear Walecka model are not influenced, due to their energy independence. This feature becomes more pronounced with increasing density, as seen in Fig.~\ref{fig1} (middle and right panels). In terms of the optical potentials the interpretation is similar. In the proton-nucleus case (Fig.~\ref{fig2}) the slopes between both models at vanishing momentum are essentially the same. Therefore the in-medium energy shift is of minor relevance and there is no gap between both potentials. In the antiproton-nucleus case (Fig.~\ref{fig3}) the gap is much more pronounced due to the quite different slopes between the NLD and linear Walecka optical potentials. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{fig4.eps} \end{center} \vspace{-0.2cm} \caption{Energy dependence of the imaginary part of the antinucleon optical potential at low density, as indicated. The theoretical result, extracted from the dispersion relation (see Eq.~(\protect\ref{DispRel})) in the NLD approach (solid curves) is compared to experimental data (symbols) taken from \protect\cite{data2}.
The filled area indicates the model changes by varying the density from $0.005$ up to $0.02$ (relative to $\rho_{sat}$). The inserted panel shows the same quantity but at the fixed saturation density $\rho_{sat}=0.16~fm^{-3}$, again in comparison to data (symbols), which are extrapolated to $\rho_{sat}$.} \label{fig4} \end{figure} For a complete description of in-medium antiproton interactions the imaginary part of the optical potential is also needed. Since RMF does not provide the imaginary part of the self-energies, we estimate $\mathfrak{Im}U_{\rm opt}$ using the dispersion relation~\cite{Disp} \begin{equation} \mathfrak{Im}U_{\rm opt}(p) = -\frac{2p}{\pi} \; {\cal P} \int_{0}^{\infty} \frac{\mathfrak{Re}U_{\rm opt}(p^{\prime \;})}{p^{\prime\;2}-p^{2}} dp^{\prime} \:, \label{DispRel} \end{equation} where $p\equiv |\vec{p}\;|$ is the antiparticle momentum and ${\cal P}$ denotes the principal value. The real part $\mathfrak{Re}U_{\rm opt}$ is taken from the NLD model. The results are shown in Fig.~\ref{fig4}. The insert in Fig.~\ref{fig4} shows $\mathfrak{Im}U_{\rm opt}$ versus the kinetic in-medium energy $E_{kin}$ at saturation density. At vanishing kinetic energy the imaginary part of the optical potential is rather large ($\simeq -200$~MeV) and consistent with empirical information (error bars)~\cite{data2}. At high beam energies $\mathfrak{Im}U_{\rm opt}$ starts to decrease again, but remains rather strong. The present estimation seems to be in line with the empirical study of Ref.~\cite{Zhang}, where $\mathfrak{Im}U_{\rm opt}=-135$~MeV is essentially independent of the energy in the range from $180$~MeV up to $\simeq 2$~GeV. According to Refs.~\cite{data1,data2} antiprotons penetrate the nuclear surface up to densities of $\rho\simeq (0.005-0.02)\rho_{\rm sat}$ before annihilation. Therefore, we calculate $\mathfrak{Im}U_{\rm opt}$ at these low densities, as shown in the main panel of Fig.~\ref{fig4}.
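Eq.~(\ref{DispRel}) is straightforward to evaluate numerically once the principal value is removed by subtraction, using ${\cal P}\int_{0}^{\infty} dp^{\prime}/(p^{\prime\,2}-p^{2})=0$. A sketch with a toy Lorentzian real part (an assumption standing in for the NLD $\mathfrak{Re}U_{\rm opt}$, chosen because its transform is known in closed form, $\mathfrak{Im}U = U_{0}\Lambda p/(\Lambda^{2}+p^{2})$):

```python
import math

U0, LAM = -100.0, 500.0   # toy depth (MeV) and momentum scale (MeV/c); assumptions

def re_u(p):
    """Toy Lorentzian real part, standing in for the NLD Re U_opt."""
    return U0 * LAM**2 / (LAM**2 + p**2)

def im_u(p, p_max=5.0e4, n=200_000):
    """Eq. (DispRel) by trapezoid rule; subtracting re_u(p) removes the
    principal-value singularity since PV int_0^inf dp'/(p'^2-p^2) = 0."""
    h = p_max / n
    g_p = re_u(p)
    total = 0.0
    for i in range(n + 1):
        pp = i * h
        w = 0.5 if i in (0, n) else 1.0          # trapezoid end weights
        if abs(pp - p) < 1e-9:                   # removable singularity:
            f = -U0 * LAM**2 / (LAM**2 + p**2)**2  # use the analytic limit g'(p)/(2p)
        else:
            f = (re_u(pp) - g_p) / (pp**2 - p**2)
        total += w * f
    return -2.0 * p / math.pi * total * h

p0 = 300.0
exact = U0 * LAM * p0 / (LAM**2 + p0**2)   # closed-form transform of the toy model
print(im_u(p0), exact)                      # both close to -44 MeV
```

The small residual discrepancy comes from truncating the integral at a finite `p_max`; the attractive (negative) imaginary part mirrors the qualitative behaviour discussed above.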
The filled area indicates the model calculations for matter densities $\rho\simeq (0.005-0.02)\rho_{sat}$. The solid curve shows the model result at an average density of $\rho \simeq 0.01~\rho_{sat}$. As one can see, the NLD model reproduces the data~\cite{data2} fairly well also at these low densities. \section{Summary and Outlook} In summary, the NLD model, which incorporates on a mean-field level non-linear effects in baryon density and simultaneously in single-particle energy, has been applied to nucleon and antinucleon interactions in nuclear matter. We have shown that due to the explicit energy dependence of the self-energies the proton-nucleus optical potential is very well reproduced. At the same time, the NLD model predicts a much softer real part of the antiproton optical potential at low energies as compared to the Walecka model. We also find a strong decrease of the optical potential with increasing energy. These results are remarkably consistent with available information from reactions involving heavy-ion and (anti)proton beams and other studies based on dispersion-theoretical approaches. A comparison with the conventional Walecka model has shown that the main effect responsible for a description of the in-medium (anti)nucleon optical potential originates from the energy dependence of the mean-field, which is absent in standard RMF models. We further estimated the imaginary part of the antiproton optical potential within the NLD model using a dispersion relation. The results were in qualitative agreement with the low-density data and empirical extrapolations at saturation density. We, therefore, conclude that RMF models may remain a very useful theoretical tool for the description and analysis of the antinucleon interactions in the nuclear medium.
The energy dependence of the real and imaginary parts of the antinucleon optical potential, studied in this work, is expected to be important at energies relevant for the $\overline{\mbox P}$ANDA experiment at FAIR. The nuclear compression due to the strong attractive antinucleon mean-field, which significantly differs between the Walecka and the NLD models, and also the very different energy behavior between them will affect the dynamics of antiproton-nucleus reactions. Thus, we expect various observable phenomena at FAIR as important probes for the NLD predictions. One example is the fragmentation of the excited and radially expanding residual nuclei, where the energy transferred to the radial expansion is expected to depend on the degree of compression. Strangeness production of $s=-1$ and especially $s=-2$ hyperons, such as cascade ($\Xi$) particles, is expected to be medium dependent, in particular close to threshold energies. The associated formation of single-$\Lambda$ and, in particular, of double-strange hypernuclei is thus also expected to be model dependent. As an outlook, we stress the importance and relevance of our results for the future activities at FAIR. \section*{Acknowledgements} This work was supported by HIC for FAIR, DFG through TR16 and by BMBF. We acknowledge useful discussions with A.~Larionov.
Q: How do you get complete Kwalitee output for a Perl module before uploading it? After I upload a module to PAUSE I can go sometime later to cpants.cpanauthors.org or metacpan.org and see a bunch of Kwalitee output and a Kwalitee score. How are you supposed to get this same information beforehand? For instance, I have the issue: meta_yml_has_licence Define the license if you are using in Build.PL. If you are using MakeMaker (Makefile.PL) you should upgrade to ExtUtils::MakeMaker version 6.31. I recently upgraded to Perl v5.26.1 and I see I have a Test::Kwalitee module. Am I supposed to roll my own tester using this module, or is there something else that I am missing? A: After spending some time writing a conditional t/kwalitee.t test using Test::Kwalitee and Module::CPANTS::Analyse I stumbled upon the Ubuntu package libapp-cpants-lint-perl which installs the command cpants_lint. To use run: cpants_lint --verbose ModuleName.tar.gz on the *.tar.gz file you plan to upload to PAUSE. Alternately, I found you can install App::CPANTS::Lint which installs cpants_lint.pl which is pretty much the same.
### sdnr1's blog

By sdnr1, history, 3 years ago

Suppose you want to solve a problem in which you have 3 types of queries in a grid of size N × N:
1. Insert a 1 in the grid at any position
2. Remove a 1 from any position in the grid
3. Count the number of 1s in a subgrid (i.e. any rectangle inside the grid).
Initially the grid is empty and there are Q queries.

This can be solved easily by using a 2D BIT. But the conventional 2D BIT has space complexity O(N^2). So if N <= 10^5, this won't work. Hence a compressed version of the 2D BIT is required. This problem can be solved with an Implicit Treap along with a BIT, but the implementation would be too complex. Here is an easy way to solve such a problem.

In this implementation an Order Statistics Tree (read about it here) is embedded at each node in a BIT. It only works if a 2D BIT has to be implemented for a grid of binary numbers (a grid filled with only 1 or 0). The update() function has been broken into 2 functions: insert() (to insert a 1 in the grid at a given point) and remove() (to remove a 1 from the grid). The query() function counts the number of 1s in the subgrid from (1, 1) to any given position in the grid.

    #include <bits/stdc++.h>
    #include <ext/pb_ds/assoc_container.hpp>
    #include <ext/pb_ds/tree_policy.hpp>
    #define mp make_pair
    using namespace std;
    using namespace __gnu_pbds;
    typedef pair<int, int> pii;
    typedef tree<pii, null_type, less<pii>, rb_tree_tag, tree_order_statistics_node_update> OST;

    const int N = 100001;

    OST bit[N];

    void insert(int x, int y)
    {
        for(int i = x; i < N; i += i & -i)
            bit[i].insert(mp(y, x));
    }

    void remove(int x, int y)
    {
        for(int i = x; i < N; i += i & -i)
            bit[i].erase(mp(y, x));
    }

    int query(int x, int y)
    {
        int ans = 0;
        for(int i = x; i > 0; i -= i & -i)
            ans += bit[i].order_of_key(mp(y+1, 0));
        return ans;
    }

Time complexity: O(Q log^2(N))
Space complexity: O(Q log(N))

Problems: Anton and Permutation, DISTNUM

PS: Suggestions are welcome. Please notify if there are any mistakes.

Comments (+48):

» 3 years ago: Could you please add a few practice questions. Thanks for the post!

» » 3 years ago: Problem and first place where I've seen the described technique being used (link).

» 3 years ago: Auto comment: topic has been updated by sdnr1 (previous revision, new revision, compare).

» 4 weeks ago: Tried solving this Problem, using a simple 2D Fenwick tree with map<int, int> as the tree. Also with the order statistics implementation, but both give TLE. Whereas using a Merge Sort tree gives AC. (A merge sort tree is basically a segment tree where each node keeps a sorted array of the interval it manages.) Aren't all of these supposed to be O(log^2(N)) per query? Is there a large constant for the BIT and Order Statistics tree? [2D BIT attempt] [OST attempt] [Segment Tree AC]

» » 4 weeks ago: Your 2D BIT implementation looks O(log^3(n)) since you have two loops in the BIT code, plus another log factor from accessing the map. You could switch to a hashmap (you'd have to write your own hash function), it may (or may not) be faster in practice. Your OST code should be O(log^2(n)) per query, but I think the issue is that the input to the problem can have values like L1 = 0 which makes your BIT code loop indefinitely. Try setting the maximum N to 100010 or so, and add +2 to every incoming value (pairs and queries) (the +2 is because we want L1 - 1 > 0 as well).

» » » 4 weeks ago: I had tried unordered_map (with a custom hash) during the contest yesterday, but that too gave TLE. Also, I had posted the same comment on another blog here, where I realised the same thing: my implementation of the BIT wants 1-based indexing. I tried +1 to every input with N set to 100005, but I don't see why +2 and N set to 100010 would do anything different? Oh, we want L1 - 1 > 0 because we subtract those while getting a rectangle not starting at (0, 0). Cool, I'll try it now.

» » » » 4 weeks ago: This is giving WA for some reason. The only thing I can think of is that the OST keeps unique values, and the problem doesn't mention that all input points will be distinct.

» » » » 4 weeks ago: One more thing: with 1-based indexing the highest value is now N, so in your insert code the condition i < N isn't really correct anymore. Since we add +2, that loop should run with i <= 100002. Finally, the problem doesn't say anything about the points being distinct/unique. So instead of a set of pairs {y, x}, you might want to use a map that maps y to some counter. Then you don't have to store the x or worry about duplicate points. EDIT: As in: using OST = tree<int, int, less<int>, rb_tree_tag, tree_order_statistics_node_update>;

» » » » » 4 weeks ago: Okay, but how to get the count of all y <= B in this case?
There are a number of different Ubuntu options available, depending on the use to which you'll be putting it. LibreOffice in an office productivity suite, and contains the Microsoft Office equivalents of Word, Excel etc. LibreOffice is now the leading open-source office productivity suite. LibreOffice has been created by the former developers of OpenOffice, another free, open-source office suite package. The developers left the OpenOffice project when Oracle tried to commercialise it. Once the driving forces behind OpenOffice jumped ship, Oracle tried to give the project back to the community but it looked like it was too late. Fortunately it was taken over and revived by Apache. These are just a selection of the wealth of open-source and free software solutions which are available for you. Depending on your needs, you may find that the software listed above doesn't offer what you're looking for, but there's probably something out there somewhere. Tagged community, free, linux, microsoft, office, open, open-source, software, source, windows.
{ "redpajama_set_name": "RedPajamaC4" }
8,562
La Coupe de Serbie-et-Monténégro de football (en ) est une compétition de football à élimination directe organisée par la Fédération de Serbie-et-Monténégro de football. Elle est créée en 1992 pour remplacer la Coupe de Yougoslavie et disparaît en 2006 à la suite de la séparation de la Serbie et du Monténégro. Jusqu'en 2003, il s'agissait de la coupe nationale de la République fédérale de Yougoslavie avant que le pays ne change de nom à partir de cette date. Elle est remplacée par la suite par la Coupe de Serbie et la Coupe du Monténégro. Histoire Palmarès Bilan par club Notes et références Compétition de football en Serbie-et-Monténégro Serbie-et-Monténégro
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,610
{"url":"https:\/\/search.r-project.org\/CRAN\/refmans\/dae\/html\/designAnatomy.html","text":"designAnatomy {dae} R Documentation\n\n## Given the layout for a design, obtain its anatomy via the canonical analysis of its projectors to show the confounding and aliasing inherent in the design.\n\n### Description\n\nComputes the canonical efficiency factors for the joint decomposition of two or more structures or sets of mutually orthogonally projectors (Brien and Bailey, 2009; Brien, 2017; Brien, 2019), orthogonalizing projectors in a set to those earlier in the set of projectors with which they are partially aliased. The results can be summarized in the form of a decomposition table that shows the confounding between sources from different sets. For examples of its use also see the vignette daeDesignNotes.pdf.\n\n### Usage\n\ndesignAnatomy(formulae, data, keep.order = TRUE, grandMean = FALSE,\northogonalize = \"hybrid\", labels = \"sources\",\nmarginality = NULL, check.marginality = TRUE,\nwhich.criteria = c(\"aefficiency\",\"eefficiency\",\"order\"),\naliasing.print = FALSE,\nomit.projectors = c(\"pcanon\", \"combined\"), ...)\n\n### Arguments\n\n formulae An object of class list whose components are of class formula. Usually, the terms in a single formula have the same status in the allocation of factors in the design. For example, all involve only factors that were allocated, or all involve factors that were recipients of allocated factors. The names of the components are used to identify the sources in the summary.pcanon object. They will also be used to name the terms, sources and marginality lists in the pcanon.object. data A data.frame contains the values of the factors and variables that occur in formulae. keep.order A logical indicating whether the terms should keep their position in the expanded formula projector, or reordered so that main effects precede two-factor interactions, which precede three-factor interactions and so on. 
grandMean A logical indicating whether the projector for the grand mean is to be included for each structure. orthogonalize A character vector indicating the method for orthogonalizing a projector to those for terms that occurred previously in a single formula. Three options are available: hybrid; differencing; eigenmethods. The hybrid option is the most general and uses the relationships between the projection operators for the terms in the formula to decide which projectors to substract and which to orthogonalize using eigenmethods. The differencing option subtracts, from the current projector, those previously orthogonalized projectors for terms whose factors are a subset of the current projector's factors. The eigemethods option recursively orthogonalizes the projects using an eigenanalysis of each projector with previously orthogonalized projectors. If a single value is given, it is used for all formulae. labels A character nominating the type of labels to be used in labelling the projectors, and which will be used also in the output tables, such the tables of the aliasing in the structure. The two alternatives are terms and sources. Terms have all factors\/variables in it separated by colons (:). Sources have factors\/variables in them that represent interactions separated by hashes (#); if some factors are nested within others, the nesting factors are surrounded by square brackets ([ and ]) and separated by colons (:). If some generalized, or combined, factors have no marginal terms, the constituent factors are separated by colons (:) and if they interact with other factors in the source they will be parenthesized. marginality A list that can be used to supply some or all of the marginality matrices when it is desired to overwrite calculated marginality matrices or when they are not calculated. If the list is the same length as the formulae list, they will be associated in parallel with the components of formulae, irrespective of the naming of the two lists. 
If the number of components in marginlaity is less than the number of components in formulae then both lists must be named so that those in the marginality list can be matched with those in the formulae list. Each component of the marginality list must be either NULL or a square matrix consisting of zeroes and ones that gives the marginalites of the terms in the formula. It must have the row and column names set to the terms from the expanded formula, including being in the same order as these terms. The entry in the ith row and jth column will be one if the ith term is marginal to the jth term i.e. the column space of the ith term is a subspace of that for the jth term and so the source for the jth term is to be made orthogonal to that for the ith term. Otherwise, the entries are zero. A row and column should not be included for the grand mean even if grandMean is TRUE. check.marginality A logical indicating whether the marginality matrix, when it is supplied, is to be checked against that computed by pstructure.formula. It is ignored when orthogonalize is set to eigenmethods. which.criteria A character vector nominating the efficiency criteria to be included in the summary of aliasing between terms within a structure. It can be none, all or some combination of aefficiency, mefficiency, sefficiency, eefficiency, xefficiency, order and dforthog \u2013 for details see efficiency.criteria. If none, no summary is printed. aliasing.print A logical indicating whether the aliasing between sources is to be printed. omit.projectors A character vector of the types of projectors to omit from the returned pcanon object. If pcanon is included in the vector then the projectors in these objects will be replaced with a numeric containing their degrees of freedom. If combined is included in the vector then the projectors for the combined decomposition will be replaced with a numeric containing their degrees of freedom. 
If none is included in the vector then no projectors will be omitted. ... further arguments passed to terms.\n\n### Details\n\nFor each formula supplied in formulae, the set of projectors is obtained using pstructure; there is one projector for each term in a formula. Then projs.2canon is used to perform an analysis of the canonical relationships between two sets of projectors for the first two formulae. If there are further formulae, the relationships between its projectors and the already established decomposition is obtained using projs.2canon. The core of the analysis is the determination of eigenvalues of the product of pairs of projectors using the results of James and Wilkinson (1971). However, if the order of balance between two projection matrices is 10 or more or the James and Wilkinson (1971) methods fails to produce an idempotent matrix, equation 5.3 of Payne and Tobias (1992) is used to obtain the projection matrices for their joint decompostion.\n\n### Value\n\nA pcanon.object.\n\nChris Brien\n\n### References\n\nBrien, C. J. (2017) Multiphase experiments in practice: A look back. Australian & New Zealand Journal of Statistics, 59, 327-352.\n\nBrien, C. J. (2019) Multiphase experiments with at least one later laboratory phase . II. Northogonal designs. Australian & New Zealand Journal of Statistics, accepted for publication.\n\nBrien, C. J. and R. A. Bailey (2009). Decomposition tables for multitiered experiments. I. A chain of randomizations. The Annals of Statistics, 36, 4184 - 4213.\n\nJames, A. T. and Wilkinson, G. N. (1971) Factorization of the residual operator and canonical decomposition of nonorthogonal factors in the analysis of variance. Biometrika, 58, 279-294.\n\nPayne, R. W. and R. D. Tobias (1992). General balance, combination of information and the analysis of covariance. 
Scandinavian Journal of Statistics, 19, 3-23.\n\ndesignRandomize, designLatinSqrSys, designPlot,\npcanon.object, p2canon.object, summary.pcanon, efficiencies.pcanon, pstructure ,\nprojs.2canon, proj2.efficiency, proj2.combine, proj2.eigen, efficiency.criteria, in package dae,\neigen.\n\nprojector for further information about this class.\n\n### Examples\n\n## PBIBD(2) from p. 379 of Cochran and Cox (1957) Experimental Designs.\n## 2nd edn Wiley, New York\nPBIBD2.unit <- list(Block = 6, Unit = 4)\nPBIBD2.nest <- list(Unit = \"Block\")\ntrt <- factor(c(1,4,2,5, 2,5,3,6, 3,6,1,4, 4,1,5,2, 5,2,6,3, 6,3,4,1))\nPBIBD2.lay <- designRandomize(allocated = trt,\nrecipient = PBIBD2.unit,\nnested.recipients = PBIBD2.nest)\n\n##obtain combined decomposition and summarize\nunit.trt.canon <- designAnatomy(formulae = list(unit=~ Block\/Unit, trt=~ trt),\ndata = PBIBD2.lay)\nsummary(unit.trt.canon, which.criteria = c(\"aeff\",\"eeff\",\"order\"))\nsummary(unit.trt.canon, which.criteria = c(\"aeff\",\"eeff\",\"order\"), labels.swap = TRUE)\n\n## Three-phase sensory example from Brien and Payne (1999)\n## Not run:\ndata(Sensory3Phase.dat)\nEval.Field.Treat.canon <- designAnatomy(formulae = list(\neval= ~ ((Occasions\/Intervals\/Sittings)*Judges)\/Positions,\nfield= ~ (Rows*(Squares\/Columns))\/Halfplots,\ntreats= ~ Trellis*Method),\ndata = Sensory3Phase.dat)\nsummary(Eval.Field.Treat.canon, which.criteria =c(\"aefficiency\", \"order\"))\n\n## End(Not run)\n\n\n[Package dae version 3.2-13 Index]","date":"2022-12-06 17:18:12","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, 
\"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6169230937957764, \"perplexity\": 1944.1918242496652}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-49\/segments\/1669446711111.35\/warc\/CC-MAIN-20221206161009-20221206191009-00596.warc.gz\"}"}
null
# Textile Nature

Textile Techniques and Inspiration from the Natural World

Anne Kelly

## Acknowledgements

I would like to thank the artists and students named in the text for their contributions (see featured artists here for their details) and the institutions and organizations mentioned in the book for sharing their collections and images. Thank you also to Rachel Whiting, photographer, to Tina Persaud and Kristy Richardson at Batsford for their support of this project, and to family and friends for their unwavering good humour.

## Contents

Acknowledgements
Introduction
1 Drawing from nature
2 Planting in cloth
3 Taking flight
4 Working with green spaces
5 Nature in context
Conclusion
Featured artists
Further reading
Suppliers
Picture credits
Index

## Introduction

'There are three principal means of acquiring knowledge... observation of nature, reflection and experimentation. Observation collects facts; reflection combines them and experimentation verifies the result of that combination.'

_Denis Diderot, French philosopher_

_Bay Tree and Shoe_ by Anne Kelly.

Artists, designers and makers have always been influenced by nature, and textile artists even more so. An early series of my textile work was based on drawings of plants from the garden and, as my pieces have grown in size and complexity, I've kept and valued a connection with the natural world. Through my teaching and travels with my work, I've observed that practitioners of all ages and abilities share a huge common love of nature and textiles, which has inspired me to write this book. The journey will take you from its starting points through to making and exhibiting, looking at inspirational examples from around the world, and help you to 'grow your own' work and connect with green spaces. I believe that observing nature can provide you with a wealth of information and resources for making unique and meaningful work.
**Chapter 1: Drawing from Nature** will bring you closer to the natural world, and will show you how to organise your resources to generate creativity and enhance your studio practice. Looking at sketchbooks, folding books, a 'nature table' and recording information in different formats will enable you to make work using a variety of art and textile techniques.

**Chapter 2: Planting in Cloth**. Any student of textile design will recognise the staying power of the plant and floral image. Whether looking at single or many species, pattern, leaf or bud, plants provide endless possibilities for print, stitch, dye or construction. Simplifying your designs and making a printing block can enable you to use one design in a variety of ways.

Pages from the author's sketchbook.

In **Chapter 3: Taking Flight**, bird and insect motifs, and how to make and use them in your work, are the focus. These subjects are increasingly popular in all areas of textile work, and we will be looking at the style and substance involved in creating and representing them. The context for creating birds and insects will also be examined, as will three-dimensional design.

**Chapter 4: Working with Green Spaces** continues the theme of connecting with your local environment and making the most of the resources available. My work and other artists' residencies in gardens in the UK will be explored, and we will look at taking your own work beyond your locality.

**Chapter 5: Nature in Context** looks at how subjects from nature can be used symbolically or as a jumping-off point for further ideas. I look at the work of practitioners who subvert the themes of nature to send a message, as well as examine some ideas for working when travelling.

My aim in this book is to enable textile artists and students of all ages to find inspiration and ideas.

## Drawing from nature

'Great art picks up where nature ends.'

_Marc Chagall_

_Baroque Ceiling_ by Anne Kelly.
### Drawing from observation

Nature really is the best teacher. It is much more difficult to make up or imagine an object from the natural world than it is to observe it from life. Creating a studio or work environment where inspiring objects are close to hand greatly helps with the creative process. The traditional 'nature table' has been a starting point for many students and echoes the cabinet of curiosities dating from medieval times. I have recreated a contemporary version of it in my studio.

### A nature table in my studio

A nature table is a common element in many of our childhood memories. It is used in primary schools to introduce children to elements of the natural world. A range of found seasonal items, such as leaves, acorns, feathers, shells and plants, can be placed together, and the table allows children to identify and become familiar with seasonal objects and to handle them. Many artists employ a similar method to display and work from inspiring items collected in their daily life and travels.

My studio is in my garden, not far from the house but far enough away to be able to observe the changing seasons and foliage up close. The windows give sufficient light to work without artificial light most days. I enjoy collecting unusual objects with students who use my studio and I have assembled some of these in a display with seasonal flowers and plants. I'm drawn to vintage fabric and ephemera, so these are included. This provides several opportunities – to be able to draw from objects, either singly or in groups. The objects themselves are interesting enough, but the patterns and structures found in them can also be used in a variety of ways.

The nature table in the author's studio.

### What you can do with one plant...

_Bay_ embroidery by Anne Kelly.

_Bay_ is from an early series of embroideries based on the plants in my garden. I chose plants that I could see from my studio and embellished them with colours and textures that fit the theme.
Taking the bay leaf (or any garden plant) as a starting point, you can create a range of work using a series of different techniques.

• **Stitch** (appliqué and embroider): using a variety of coloured fabrics, cut out a leaf shape from one of the fabrics and appliqué it onto a background, stitching in the details of the veins and embellishing it with a mixture of hand and machine stitching.

• **Print:** make a template from the shape of the leaf and use it to create a repeat print design. Shown right is a Gelli© print, taken from an actual bay leaf. Although it is possible to make monoprints without Gelli© plates, these do give a crisp and clean print. Roll the Gelli© plate with acrylic paint and a roller/brayer and lay your leaves into the painted surface. Remove gently and place your fabric over the plate. Roll with a dry roller/brayer and remove the print.

• **Dye:** mix up a variety of green dyes to paint the leaf outline, shape and detail onto fabric. Then overstitch and paint in. I use non-toxic Brusho© dyes, which are fine for work that will not be worn or washed. The colours are not as vibrant as Procion© dyes, but they are naturalistic and good for painting on cloth. I mix the quantity I need into paint wells in a palette and use the colours required, mixing them as I go.

• **Weave:** use the leaf shapes to create a Japanese-inspired screen. Cut bay-leaf shapes out of felt, tracing around the shape of the leaf, and using an over/under weaving method, work into a cut fabric background, arranging in a pattern of your choice.

### Emma Nishimura

Looking at leaves, either singly or in groups, can be rewarding as you discover their intricate and individual structure. I was inspired to make a miniature screen after seeing the work of Emma Nishimura at the World of Threads Festival of Contemporary Fiber Arts near Toronto, Canada. She created a beautiful sculpture, _Shifting Views_, using small leaf shapes, wire rods and soil to create a field of reeds.
Detail from _Shifting Views_ by Emma Nishimura.

Emma says about her work: _'The silhouette of a mountain and a cloudy sky: a photographic record of the present, at the site of the past. When viewed from afar, this piece evokes the memory of a landscape, yet upon closer investigation, it's actually a field made up of hundreds of reeds. A study in duality and all that lies between, the past and the present, the concrete and the intangible, the fleeting and the grounded, this piece is part of my ongoing exploration into the myriad layers contained within a story. Working with an image taken from the landscape in which my grandparents lived during their internment as Japanese Canadians in the Second World War, a vista has been re-created and a memory rooted. Yet from all angles, this piece offers only a shifting, fractured reading; nothing is complete or whole, just as no story is ever fully experienced, told or remembered.'_

### Going large

Often you will want to start small, as in the samples here, but a lot can be gained from supersizing your work, even in the early stages of exploration. This will drive you towards studying the details of your subject, perhaps using a magnifying glass or photographic enlargement. It can also produce some dramatic and stunning pieces, as in the work of Pauline Verrinder and Meredith Woolnough, shown here.

### Pauline Verrinder

Pauline is a well-established tutor and embroiderer, based near Cambridge, England. I was fortunate enough to meet her when teaching in Bedfordshire. She was making leaves, embroidered on a stretched scrim support and embellished on the sewing machine. The delicate veins of the leaves come through in her work and, when finished, the leaf is a strong enough shape and form to be presented by itself.

Pauline describes her piece: _'Natural forms are the inspiration used for the wired leaf shown in this image. Shapes are formed using cotton-covered wire.
The leaf shape is applied to framed cotton scrim using free-machine embroidery set to zigzag (width on 4). Edges and veins of the leaf are covered closely with zigzag stitch using a cotton thread. Using the same machine setting, I randomly scribble on the scrim to give a distressed look to the leaf. To finish, the formed shape can be coloured with silk paint or Dye-Na-Flow© and heat set.'_

_Leaf_ sample for workshop by Pauline Verrinder.

### Meredith Woolnough

_Scribbly Gum Leaf_, embroidered textile by Meredith Woolnough.

Meredith is an artist working in New South Wales, Australia. She writes about her work: _'I have always found inspiration in the natural world. I am lucky enough to live close to both coastal and bush land environments so I get to visit various habitats frequently. Exploring, collecting and drawing makes up a large part of the fieldwork aspect of my practice and I like to research any plant or animal thoroughly so I understand it before I translate it into stitches. I am also a keen scuba diver so I love to explore the world below the waves as well. I have always been fascinated by the structure of things, from the hard shapes of coral colonies to the minute arterial veins in leaves. I like to draw parallels between the growth and life systems of various organisms in my work, commenting upon the interconnectedness of all living things.

My process involves using a domestic sewing machine as an unconventional drawing tool. I employ a similar process to traditional machine darning or the more modern name "freehand machine embroidery", where the feed teeth are turned off, giving you complete control over how you move the base fabric under the needle. I use a water-soluble base fabric to create my work and once my embroidered design is complete I simply wash it all away in hot water to leave my skeleton of stitches behind.'_

### Sampling nature

While you are studying your subject, don't just think about its form – consider its texture too.
You could utilize some of the many traditional embroidery stitches for this, both from your own culture and others. Think outside the box if you can. Do you have any fabrics or other bits and bobs that could be incorporated to convey texture and form?

### Tunbridge Wells Embroiderers' Guild

In her seminal book _The Constance Howard Book of Stitches_, the well-known author and tutor Constance Howard worked in co-operation with the Tunbridge Wells Embroiderers' Guild. The book is a wonderful compendium of sample stitches and instruction, but also, more importantly, detailed interpretation. A more recent project undertaken by the guild has been an A–Z of techniques. Many of the stitches are based on natural motifs: plants, grasses and landscape. The project also examines tones, textures and threads as well as the more practical concerns of needles and techniques. I was fortunate enough to be able to look at these actual boards, through the loan of their Chairman Carole Barter, and to photograph them afresh for this book.

### Lizzie George

Lizzie is an artist and teacher living in a small village in West Sussex, England. She is inspired by the local landscape and nature and produced this canvas of plant forms for a fund-raising exhibition for a local hospice. The plant forms are appliquéd and machine embroidered with couching on linen union. The work was then stretched onto canvas.

_Wildflower Sampler_, mixed-media embroidery on linen union by Lizzie George.

### Coei O

A former student from Hong Kong who went on to study at Central St Martins in London, Coei O set herself the task of making a sample sketchbook with stitch examples to plan one of her coursework projects. Her samples are beautifully worked and show how well-presented and defined stitching can become an artwork in itself. They are primarily based around plant and flower motifs and are echoed in the design for her final piece.

Pages from Coei O's sketchbook.
### The landscape of home

Background for _Sparrow Stories_ by Anne Kelly – see here.

Our connection to the landscape surrounding us helps to shape our identity from an early age. I live in Kent, in the south-east of England, where the landscape is a mixture of rolling hills, downland, forest and chalky seacoast. Representing all of this in one textile would be impossible, so when instructing my students I often advise them to focus on an aspect of the landscape that particularly interests them. The piece above, for example, which was for an RSPB centre, focuses on birds and trees.

A student, Susannah Fenn, was learning how to use a free-motion embroidery foot (see sample, right). She was working on a large drawing of a crow on linen union and using black cotton on the top and bobbin of the machine to create a delicate line. The range of marks she has created is remarkable for a first attempt and emphasises the link between drawing and stitch, a theme that I will be returning to throughout this book.

Student sample of free-motion stitching on linen union.

### Cas Holmes

In her folding books, Cas uses scraps of fabric and paper, Japanese tissue and paste to create translucent surfaces. She then draws and 'stitch-sketches' over them, often recording snippets of her surroundings – unconventional observations such as dead birds and the detritus of polluting humans. This emphasises her connection to nature. Cas says: _'We take for granted the diverse flora and fauna our seasonally changing landscape and varied habitat gives us on such a small island. This piece reflects the transient change of the Norfolk coast and fenlands. Printing directly with foliage and referencing patterns in the wet earth, I am looking "under and above the soil" for inspiration.'_

_Fenland_, mixed-media textile by Cas Holmes.

### Carol Naylor

_Radiant Light_, embroidered textile by Carol Naylor.
Carol is an experienced and well-known machine embroiderer, and has made the interpretation of landscape her own area of expertise. Her textiles are densely worked, always studied directly from life and very personal. Carol says:

_'I look, I draw, I select and I translate. Sometimes I simply rely on the memory of shapes or colours observed. It can be enough to launch a series of works. I constantly revisit my sketchbooks, and one drawing can be reinterpreted in many ways._

_My abiding passion is landscape. What really interests me is the way in which the surface of the land changes wherever I am. I look constantly for evidence of both the underlying structures within natural land formations, and the surface patterns and textures created by seasonal changes and cultivation..._

_I draw and develop ideas from first-hand resources, exploring qualities of light and shade, line and colour. I work mainly with heavyweight threads that are too thick to go through the eye of the needle, so these go underneath on the bobbin (spool). I turn the piece over and work from the back of the canvas. The top light, normal thread then "couches" the heavy thread underneath so that when I turn to the front again I have created long lines rather than individual stitches. It's just like drawing with the sewing-machine needle, providing the marks that a pencil or pen would make, and the richly coloured threads offer a wide and exciting palette.'_

### Melvyn Evans

_Landscape with Blackbird_, linocut by Melvyn Evans.

Melvyn is a designer and printmaker who lives near Sevenoaks, in Kent, England. His work follows in the tradition of inspirational mid-century designer/printmakers such as Eric Ravilious, who simplified and condensed the landscape into recognisable elements. Melvyn uses lino blocks, which will be looked at here. He says: _'I'm fascinated by connections between aural traditions and the British landscape. There is a sense of prehistory in old place names and early monuments.
I generally start with small drawings and some of these ideas I scale up into larger drawings. I'm very interested in composition, creating a flow through the image. Once I'm happy with the drawing, I reverse it and transfer it onto the lino ready to start cutting. I use a separate lino block for each colour, the colour separation being worked out at the drawing stage.

_For me there is a very close relationship between the printmaking process and drawing, in that I am asking a limited number of colours to achieve a desired effect without the use of a key block. My prints rely instead on a balance of shapes and tones worked out through repeated drawings._

_As this series has developed I have used texture to soften the graphic look that is so characteristic of linocuts and to impart a more painterly quality to the final image. The use of texture also introduces an element of chance.'_

### Flocked lino samples

Flocked lino cut (centre) and prints (left and right) by Anne Kelly.

Taking inspiration from the graphic impact of printmaking, I have returned to a more traditional method of printing on fabric. Flocked lino is a wonderful material as it gives you defined shapes, and the carved block can be mounted onto plywood to create a more permanent printing block. Start by drawing your image onto the flocked lino, using a white-coloured pencil or crayon. Remember that your image will be reversed and that you are cutting out the negative shapes and leaving the positive lines. I have chosen a simple flower motif to print onto fabric. I use standard lino cutting tools to carve into the block. As well as creating a textured look, flocked lino is easy to cut so it is a great choice for beginners. Once I have printed my sample, using a roller and acrylic paint or thick fabric paint, I will embellish the piece and use it in a larger work. The graphic potential of the flocking itself can be inspirational.
### Justine Head

Drawing for _Poppy_ design, pencil on paper and print on adhesive fabric (top) by Justine Head.

Justine is a foundation and degree tutor, former tutor at London College of Fashion and a fashion-trained textile artist. She recently took a sabbatical from teaching and moved to Le Marais in Paris, a hub of creativity and innovation in the city. She was commissioned by The Collection, an interior design shop in Paris, to produce the floral stickers, and some were used in the Christian Lacroix hotel, Le Petit Moulin. These were taken from her drawings of real flowers and are simplified forms and powerful statements in their own right. They are laser-cut from original drawings onto adhesive fabric and then peeled and positioned into place. Her designs were some of the first to be created using this now very popular technique.

### Organic shapes

Organic shapes are all around us, from the undulations of hills and valleys to the wondrous curves of leaves and petals and even right down to the details of individual cells. A collaboration with printmaker Jenifer Newson looked at blood cell structure under the microscope. I made a series of hanging panels, each based on different cell names, and exhibited these in my solo show at Farnham Maltings in Surrey, England. More recently I decided to return to the theme with a new series of work.

#### Lymph nodes

I was drawn to the shapes of these lymph nodes following some investigations and found their complexity intriguing. Using a combination of canvas and net, I created a dyed and hand-coloured background. I then drew the shapes of the nodes over the top of the background, using a fine line waterproof pen. I finally used free-motion embroidery over the drawings, using a darker thread. The pieces were suspended in panels in a similar way to the previous series.

_Red Blood Cells_ (left) and _Lymph Nodes_ (right), mixed-media textiles by Anne Kelly.

### Kim Thittichai

_Stones_, mixed-media textile by Kim Thittichai.
Kim is an internationally known artist, author and tutor. Her books are renowned for explaining in clear terms the complexities of working with textiles that respond to heat interventions. As a brand ambassador for a leading manufacturer of fusible interfacings and soluble fabrics, she is an expert in their application. One of Kim's favourite locations is New Zealand, and she has made work reflecting her affinity with the landscape. She says:

_'Stones was created on a course with Gwen Hedley that was run by the Textile Study Group. First of all the group were asked to make a few basic, small drawings – nothing too frightening. We then made printing blocks from one of the drawings. I chose my stones drawing; I like strong, simple shapes. After making the printing blocks the group printed up their fabrics, taking the colours from the drawings. I used viscose satin and cotton organdie. Having made several prints using the same colour tones I was able to choose the best three prints to create my sample. I decided to lay printed cotton organdie over viscose satin and cut sections out. I then cut other sections out from my third piece of fabric to apply shapes onto my sample. I used Bondaweb© on the back of the small pieces and ironed them in place. I then used backstitch to define the lines of the print and the shapes, and used colours that were in the work.'_

## Planting in cloth

'...everything in nature is coloured.'

_Paul Cézanne_

_Old Colonial Bird Tree_, mixed-media textile by Anne Kelly.

### Leaves and trees

Reflections of nature can start with the basic outlines and silhouettes that we see around us – leaves and trees are a good place to begin. This chapter gets underway with a look at how several artists – including myself – have made use of leaves and leaf shapes, all in very different ways.

Detail of _William Morris Trees_, mixed-media textile by Anne Kelly.
#### _William Morris Trees_

In autumn 2013 I was approached by the arts coordinator at the Tunbridge Wells Hospital in Kent, England, about exhibiting in one of their corridor galleries. The multi-faith centre at the hospital was also identified as an area in need of some artwork and I was commissioned to create two hanging panels for the room. The hospital is surrounded by trees; some older and some newly planted. The coloured glass in the centre is blue and green, so I wanted to incorporate this scheme into my design.

I started with two large drawings of wooded areas, taken from the ground looking up. I used a background of William Morris printed upholstery fabric, found in a charity shop. I then drew my designs directly onto the fabric and used a combination of florals and plain designs to appliqué the tree shapes onto the background. I overstitched the edges of the trees and branches and added leaves in a mixture of colours. There are also birds perched in and around the branches – these were designed on separate pieces of fabric and added into the piece. I used a simple stencil motif of a tree and an Indian tree printing block to make a border for the top and bottom of each piece. The borders were added after the main piece was finished. Each piece was backed with a vintage Sanderson upholstery fabric of oak leaves and hung from metal rods. As the pieces do not contain specific religious imagery or iconography, they appeal to visitors of all faiths and none.

_William Morris Trees_ hanging in situ at Tunbridge Wells Hospital multi-faith centre.

### Working with plants

Plant motifs surround us and it is inspiring to look directly at plants and their shapes in nature, as Alice Fox and Helen Ott have done in the following works. Other artists, such as Hillary Fayle and Carmen Li, have taken one step closer to nature and work directly with plant material itself.

### Alice Fox

Alice has a unique, almost forensic approach to her work.
I taught alongside her in North Wales and saw first hand how she sets up a studio/laboratory space, full of exciting experiments and projects. Her relationship to the natural world is implicit. She says:

_'I have always been fascinated by the natural world and the detail of organic things. My practice brings together recording, collecting and interaction with the landscape. The work that I produce celebrates and carries an essence of what I experience in the natural world. I aim to draw the viewers in, invite them to look closer and notice things they might otherwise have overlooked. I am concerned with embodiment of the landscape rather than direct representation. Each piece can be seen as a small record of a walk: a journey or moment from a journey. The works I produce are contemplative and quiet, but look closely and you'll discover there is complex activity; patterns can appear both random and organised. Look again and there is something new to discover.'_

The piece reproduced here is _Leaf Lexicon_, a series of impressions of leaves taken from 'A Language of Leaves', works loosely based on thoughts about asemic writing and the forms that leaves make when they fall and are arranged on the ground. Asemic writing can suggest meaning but is open to the viewer's interpretation.

_Leaf Lexicon_, mixed-media textile by Alice Fox.

### Hillary Fayle

Hand-stitched leaf by Hillary Fayle.

American textile artist Hillary Fayle describes her process: _'I use found botanical material such as leaves, seedpods and branches to explore human connection to the physical world. By combining these organic objects with the rich traditions of needlecraft, I bind nature and the human touch. This gentle but intricate stitch work communicates the idea that our relationship with the natural world is both tenuously fragile and infinitely complex. The objects that I find in the physical world serve as a great inspiration to me.
Perfect in their imperfection, they tell me stories of their existence. I regard these leaves, twigs, stones, bones, feathers and all that may have happened to them; the events that had to unfold to lead them into my hands. As I decide how I will interact with each, I relate its history, real or imagined.'_

### Carmen Li

Hand-stitching sample, embroidery cotton and beads on calico by Carmen Li.

This leaf is a student piece, exploring different ways to decorate and embellish a leaf outline on fabric. Carmen started with a closely observed drawing of a maple leaf. She drew the outline onto fabric and embellished the main areas of the leaf with a variety of stitches and beading. The tonal values of using different shades of green make it unique and add interest to the overall piece.

### Helen Ott

Helen is a local textile artist and student who has lived in Japan and has a great appreciation of Japanese fabric and stitch: _'I have been collecting small precious pieces of old/antique Japanese fabrics for years, fascinated by how they feel and look: soft, faded, worn, treasured. I sew my pictures by hand, using Japanese threads and sometimes having to stabilise the fabric carefully if it is too delicate to handle. We lived in Tokyo for some years and I was lucky enough to meet various Japanese artists and be guided to study others who work with textiles. My recent pieces have all been heavily influenced by the Sensei, the concept of Wabi Sabi and the effective simplicity of Japanese design.'_

In her piece _Anemones_, Helen uses the subtle variations in tone to advantage, capturing the fragility and unique shape of these delicate flowers.

_Anemones_, Japanese fabric appliqué and hand-stitch by Helen Ott.

### Wildflowers

Wildflowers are not only an attractive subject but also a very topical one – as the world starts to realise that these very special plants are under threat, efforts are being made to protect their ever-dwindling habitats.
The threat of loss makes these flowers all the more poignant.

_Wildflower Book Box_, mixed-media textile and archival card box by Anne Kelly.

### _Wildflower Book Box_

I made this constructed book box as a response to an open call for submissions to 'Alternative Stories', an altered-book exhibition. Using _British Wildflowers_, an old but beautifully illustrated paperback, I covered the outside and inside of the box with its pages, and used the cover on the inside lid, along with an original stitched piece of work to line the bottom. The exhibition was a focal point in the upstairs galleries at the Beaney in Canterbury, England, and ran for several months. There was an inspiring range of work and very good feedback from the public.

Artist Karen Gardner's review of the piece said: _'This is a piece about nostalgia. How we cherish and memorialise on intimate levels. Anne Kelly's creation is a "turned inside out" book, and the book is John Hutchinson's_ British Wild Flowers, vol. 2. _If hearts and flowers are the icons of sentimentality, here is the exception – a cerebral depiction of flowers, just as they are portrayed in this well-known botanist's book.'_

#### Vintage books: herbals

Old books of wildflowers and herbals are a great resource. Often they show simplified images and diagrams of flowers that are useful for drawing and designing motifs. I keep a look out for these in charity shops and at booksellers where they are often overlooked. The books show a diverse approach to representing flowers and identifying their characteristics.

Vintage books from the author's collection.

#### Transfers and collage

There are many ways of using floral imagery and transferring it onto fabric. For example, the use of T-shirt transfers on fabric has become much easier as the technology has improved. Transfers have lost their stiff and waxy feel and they are much thinner and 'invisible'. I like to use them and cut out shapes to add into other pieces.
You can use them in any inkjet printer. This is the method:

1 Choose an image that you would like to transfer.

2 Place the image in the scanning/photocopying area of your printer and load the transfer paper (check which side up the paper goes in). Also note that you will have to reverse any writing on the image using a program on your computer, as the image will come out back to front once transferred.

3 Copy the image and print it out, wait until the ink is dry and then follow the instructions on the package for ironing the transfer onto fabric. It is crucial that the iron is the right temperature and that you protect your ironing board and iron; baking parchment is good for this.

4 Your transfers can now be embellished and overstitched as part of a larger piece of work.

_Wildflower Sampler_, mixed-media textile collage by Anne Kelly.

### Louise Pettifer

_Rose Garden_, mixed media by Louise Pettifer.

Louise was the artist-in-residence at Sissinghurst Castle Garden in Kent, England. She worked on a series of flower pieces connected to the garden, using a combination of techniques developed since her training as a textile designer. She says: _'I start by choosing a plant or flower to study. Once I have made my drawings and I'm back in the studio I can begin one of my layered works. I cut several lino blocks and create a unique one-off print (a monoprint) by printing them in several layers. The next step is preparing the collage papers, using a range of painting and printmaking techniques. Once I am happy with the colours and textures of the papers, I cut them by hand, following the outlines of my original drawings. The cutting is a delicate procedure, which takes many hours and a steady hand. I usually add a layer of line drawing, using inks, and finally, apply the paper cut elements to the surface.'_

These drawings of roses are delicate and beautifully cut, and could equally be printed onto fabric.
Louise's method of working in layers could be used for sketchbook work as well as finished pieces.

Paper and card cut outs for _Rose Garden_.

_Roses I_, mixed-media textile by Val Holmes.

### Val Holmes

Val is one of the UK's best-known textile artists and writers on textiles. Now living and working in France, she teaches and exhibits on both sides of the Channel. She says: _'Gardens and landscapes are a big source of inspiration for my work, and my own garden, now full of roses, is an important source. Having seen a professional garden where the roses were allowed to run wild, I now (almost) do that with most of mine, and the results are pretty good. The image_ Roses I _is a monoprint worked with Manutex and Procion dye on glass and printed onto calico. It is then embroidered by machine. The second image_ Roses _is the second print off the same monoprint, which has been embroidered a little less. The aim in my work has always been to work towards a level of abstraction – abstract realism if you like, with lyricism. Using monoprints as part of this process allows me to dye work that is almost automatically less exact than if I were to paint it. I can then experiment with the same or very similar image in a number of ways, thus pushing further my learning process.'_

_Roses_, mixed-media textile by Val Holmes.

#### Incorporating flowers

Working with flowers and floral motifs as the central image can be inspiring, but inserting flowers into a composition or using them to build up a background can also enrich your work. Here I have integrated embroidered flowers and printed and metallic fabric into a base image. I have then 'drawn' over it with free-motion stitching to add flowers and a bird.

Detail from _Natural History Waistcoat_ by Anne Kelly.

Drawing flowers in stitch can also be a natural extension of sketchbook work. In this detail from _Suburban Gardens_ I have used layers of floral imagery in the background and then free-motion stitched over the top.
Some hand stitching is also added for depth.

Detail from _Suburban Gardens_ by Anne Kelly.

### _Wildflower Tea-cloth Sketchbook_

After my residency at Sussex Prairie Garden in West Sussex, England, I wanted to record my time there with a cumulative piece, based on sketchbook studies from the garden. I had a collection of tea cloths, which I had patched together and used for covering the 'wagon studio' at the garden. I decided that with their delicate floral borders and lacy edges these would make an ideal backdrop for my drawings. I traced the drawings from the sketchbook onto the back of the tea cloths, and used free-motion embroidery to outline them. When I had completed the series, I added birds, also taken from the sketchbook. Some small embroidered pieces of cloth and shisha, taken from an Indian textile, completed the piece. I backed the piece with a vintage furnishing fabric. It was the centrepiece of my work at 'Cross-Pollination', a group exhibition featuring artists-in-residence at three different gardens, and as part of the Chelsea Fringe. I also used photographs taken from the sketchbook and piece to create a small photo picture book.

_Wildflower Tea-cloth Sketchbook_, mixed-media textile by Anne Kelly.

On a much smaller scale, textile work can represent a meadow or grouping of flowers beautifully.

The photo picture book created using photographs from the sketchbook.

### Emily Notman

_Meadow Brooch_, mixed-media textile by Emily Notman.

Emily makes beautifully dense fabric collages and pieces, like this _Meadow Brooch_, which often incorporate plants and flowers as part of a larger composition. Emily has written about her work: _'I create bespoke textile installations mixing media and building up tactile, delicate surfaces. I work and rework my pieces with paint, dyes, bleach and ink, burning and layering until finishing it with hand stitch.
I find beauty in flaky walls, overgrown buildings and encrusted surfaces (this is something I re-create in my work). My pieces evolve and grow with time, incorporating history with layering, and sometimes the tiniest mark or stitch changes a piece dramatically – it's this detail that excites me. The diversity of stitch is key in my work, from loopy, loose hand stitch to fine, subtle rows of machine embroidery. I have a range of yarns, wools, machine threads and embroidery cottons to work with. I work with the chunky thick yarns first, building on the marks made with paint. I then decorate with delicate fine loops. A piece could then be entrapped with netting or lace, which then acts as another surface to build on.'_

### Mid-century influences

Looking back at historical designs and cloth work is an established practice for designers on the lookout for inspiration. I have a fondness for mid-century design, due to my upbringing and the decor and surroundings around me during my childhood in Canada. I am delighted to see it make a comeback and that it is widely celebrated and given a contemporary twist.

Allotment flowers (sweetpeas).

### Pattern and print

Textile pattern and print is dominated by floral images. From early experiments in block printing to today's digital technology, flowers are everywhere.

### Melanie Bowles

Slow Grow was an innovative project by Melanie Bowles, Senior Lecturer at Chelsea College of Arts and co-author of _Digital Textile Design_ and _Print, Make, Wear_. She is also the co-director of the creative enterprise The People's Print with Dr Emma Neuberg. Melanie describes the process: _'Slow Grow creates a design model for the wearer to be at the centre of the design process, creating a textile and garment unique to them, reflecting their character and environment. Slow Grow follows the journey of grower to wearer (Mary). Mary is passionate about her Fulham allotment, and growing her flowers and produce is important to her well-being and happiness.
Mary's creativity is translated into a sweetpea design, which is printed and made into a shirt. Mary is central to the design process, and the aim is to create a slow fashion piece – a long-life garment that is unique to the wearer. Slow Grow encourages participatory design by engaging the wearer in the journey of creating her own printed textile and garment, placing her at the centre of the design process, working with concepts of slow design and emotionally durable design, and using local digital print bureaus and dressmaking patterns for production.'_

Melanie Bowles' sweetpea design (digital print on fabric).

### Alison Milner

Alison Milner's textile bag design for the Rathfinny Estate.

Alison describes herself as 'a designer of two-dimensions for three-dimensions', working from the south coast of the UK. Her work incorporates transfers onto a variety of surfaces as well as using print processes. She taps into a 'mid-century' aesthetic, with a contemporary edge: _'I was asked to develop some products for the shop at Rathfinny Estate, Alfriston, in East Sussex. The shop was not yet bought and the sparkling wine vineyard was only just being planted, so it was really interesting to see everything happening. I was asked to develop a whole range of products... I started by designing twelve "iconic images" chosen in collaboration with the owners to represent the vineyard. Six of the images were photographic and six graphic. We also chose six colours, based on the colours of the landscape and wild flowers. The two images used on the bags pictured were a stylised kidney vetch (actually yellow but printed in poppy red) and a skylark (printed in pale sky blue). Kidney vetch is the sole food of a rare blue butterfly found on the estate. We sourced a very nice juco (jute and cotton mix) bag that is made in India. The images on the bag are silkscreen printed, so my graphics were ideal for that.'_

### Maxine Sutton

_Mustard_, mixed-media textile by Maxine Sutton.
Maxine is a textile artist based in Margate, Kent, England, who designs and makes handmade interior products. She trained in fine art and was influenced by American and British abstraction. She comments: _'I love the idea of the workshop household and believe that the handmade object creates layers of significance and forms a part of personal and family narratives, making links and connections through generations. My practice rests on a strong belief in the importance of our connection to materials... Using hand-making techniques such as screen printing, Irish machine and hand embroidery with other traditional needlework processes, I aim to create accessible artworks and functional objects in which the tangible material qualities of the work will communicate on many levels. I continue to explore the interplay between printed and embroidered textures, colour, mark, drawn and stitched lines. I often play with imagery and ideas springing from our relationship with familiar domestic objects and environments, everyday pastimes and the meaning of "home" and home-making activities. Abstracted and illustrative forms are hand drawn, paper cut, found or sometimes photographic. Screen-printed surfaces are layered and collaged with appliquéd and needle-punched techniques; embroidered lines and densely embroidered areas create further layers, detail and texture.'_

### Nancy Nicholson

Embroidered bird design by Nancy Nicholson.

I'm drawn to Nancy's intricately designed work, as she uses plants, flowers and birds in many of her pieces. Nancy trained in fine art textiles at the Royal College of Art in London, and her recent work uses her own designs as well as taking inspiration from her late mother Joan Nicholson's work, produced in the 1960s and 1970s. Nancy says: _'My work stems from a love, instilled in me by my mother, of the ultimate pleasure of making something slowly and beautifully by hand.
I believe we want to slow down from our quick-fix, quick-thrill, immediate-satisfaction outlook these days. We are learning it is very pleasurable to spend some time over what we do and gain an enjoyment which would otherwise be missed... It takes time and patience to do something well.'_

### Floral collage – 'Vintage Flower Garden' workshops

I've enjoyed teaching at the Fibreworks in Chipping Norton, in Oxfordshire, England. It is a small and vibrant shop with a workshop studio above it. It is a hub in the village and they run a Fibre Festival there each summer. I have organised several workshops and a favourite has been a 'Vintage Flower Garden' theme. We are able to dip into a lovely collection of floral fabrics at the studio, but I always encourage students to bring in a good selection of their own. Previously embroidered pieces of work, such as vintage teacloths, hankies, napkins and tablecloths, can provide excellent backgrounds to work on. We start with a simple outline drawing, to identify key areas and the composition of the piece. Students then choose a background fabric and start to cut out floral elements from their chosen fabrics. Using a combination of lace, floral and translucent fabrics, they are then able to layer them onto the background. When these are bonded to the background using iron-on bonding fabric, the students can concentrate on the other elements of the image. A house, a tree or plants can provide a focal point for the work. The example shown here demonstrates the type of thing that can be achieved using this technique. The final stage is to embellish the piece using buttons, ribbons and hand stitching.

### Lucy Shaw

Lucy has produced a lovely house collage, made from a long-treasured collection of Liberty printed fabric. Using this as her background range enabled her to make connections between different colours and prints. I was delighted to see the result, with her delicate hand stitching and embellishing.
House collage by Lucy Shaw, made using Liberty fabric.

Selection of vintage embroidery samples from the author's collection.

### Folk art

The term 'folk art' is used to cover a wide range of styles from all over the world. It usually embraces bold images of familiar objects in bright colours that lift the spirits. Flowers, birds and plants are a recurring theme. The simple motifs, often combined with decorative elements, are easy to work with, and pieces are as fun to make as they are to view.

_Folk_, mixed-media textile by Anne Kelly.

### Stencils, pattern and print

Looking at common garden plants and birds can be a useful place to find motifs and inspiration. Use your motifs to make prints, which can add depth and texture to your work and are a good way of breaking up the surface of your composition. Choose a few interestingly shaped plants or animals, ensuring that they have a strong linear structure. These will be easier to draw and separate when designing your stencils. Birds and insects are also good choices.

Paper bird stencils made by Anne Kelly.

Tree and flower stencils made by Anne Kelly.

Stencils are the simplest and most versatile method of printing, as they can be used for hand printing and also screen printing. The durability of stencils made from plain cartridge paper is underestimated – they can be very strong, and the closer the stencil is to the fabric, the more accurate the print will be. This is the method:

1 Start with an image of a flower or tree that you would like to use. I chose a Shaker-inspired motif of a tree (centre), and drew it onto the paper.

2 Using a sharp scalpel and cutting board, cut out the shapes of your design.

3 Position your stencil over the fabric you have chosen to print on. Use masking tape if you wish to secure the stencil. Using fabric or acrylic paint and a small sponge, dab (do not paint) the colour through the holes in the image. Leave to dry.
4 When it is dry, you can iron the stencil between clean paper and reuse it – the paint left on will only make it stronger.

This stencil was used in my commissioned piece _William Morris Trees_.

#### Indian block prints

On the _William Morris Trees_ commission, I also used carved wooden printing blocks. These blocks are widely available and are useful for creating motifs and backgrounds in your work. Particularly effective over already printed and dyed fabrics, they can add colour and texture to mixed-media pieces. For the clearest, sharpest prints, place a padded cloth or towel under your printing surface. Use a sponge to apply your chosen paint to the block, then press down with a slight rocking motion onto your fabric and remove the block.

Hand-carved wood printing blocks from China and India.

### Carolyn Forster

A hand and machine-stitched quilt provided inspiration for Carolyn Forster's _Antique Flowerpot Quilt_ design.

Carolyn is a British quilter and writer, well known for her books on the subject. She describes her inspiration and the technique that she used for her _Antique Flowerpot Quilt_, adapting her design from an antique quilt (above) seen at an exhibition: _'The quilt that most attracted me was the one I named "the flower pot quilt". Ironically the faded, threadbare pots could hardly be seen from a distance. Despite the strength and vibrancy of the quilt it was its worn-out and faded quality, and also the softness created through years of use and laundering, that appealed to me. When I came to making my own version I chose to try and reflect that by choosing a very soft palette of fabrics, so that you have to really look at the quilt. I have joined some of the small pieces together to create fabric large enough to cut the bold shapes from. I selected various different background fabric squares for the appliqué to add some interest and help make the placement of these background fabrics interesting and less predictable.
Another point that attracted me was that although all of the elements of the design are big and bold, the flower centres themselves are tiny. The maker used reverse appliqué to cut through the red of the flowers and insert a small piece of the turquoise for the centres. I chose to use a method of freezer-paper appliqué to make my top, although the large simple shapes could easily be sewn using needle turn. The quilt is very densely and finely quilted with a variety of designs, even over the appliqué pieces. I am hoping I have managed to create that warmth and softness from the outset without having to wait a hundred years first.'_

### Dyeing with plants

Hand-dyed French marigold wool, dyed and spun by the Sussex Guild of Weavers at Sussex Prairie Garden.

Deborah Barker's hedgerow dyeing samples.

At Sussex Prairie Garden in southern England, the owners have encouraged local members of the Sussex Guild of Weavers to use the plants from the garden to make dyes for the wool they spin. When I was artist-in-residence, they used French marigolds. The vibrant and bright colour stood out once the wool was dried and spun.

#### Ditchling Museum of Art and Craft

Ditchling Museum in Sussex, England describes itself as follows: _'The museum holds an internationally important collection of work by the artists and craftspeople who were drawn to the village. Being able to see special objects and works of art and craft in the village where they were made is a rare opportunity. It offers a unique way to consider how the objects were made and who they were made for. The impact of the many artists and craftspeople who came to live and work in Ditchling from the beginning of the twentieth century onwards established this village as one of the most important places for the visual arts and crafts in Britain.'_

Part of their collection contains work from the weaver Ethel Mairet. The museum also has links to the local Plumpton College and offers workshops in natural dyeing.
They have started to grow a garden for natural dyes and will be adding to it as the project progresses. Dyer Deborah Barker, who attended a workshop there, said: _'I saw that Ditchling Museum was offering a hedgerow dye workshop run by dyer and weaver Jenny KilBride... Jenny is the daughter of Valentine KilBride, who worked as a weaver and dyer in Ethel Mairet's dye workshop in Ditchling in the first half of the twentieth century._

_Eight of us gathered excitedly around the stoves as Jenny produced the dye materials, which included rowanberries, blackberries, golden rod and dahlia flowers. It had been a grey autumn day, but as we took dyed skeins of wool out into the museum courtyard to dry on the chestnut pale fencing, the sky brightened and we were rewarded with the picture of the rich luminescence of the plant-dyed wool in the late autumn sunlight.'_

Ditchling Museum garden, showing plants grown for dyeing workshops.

'Dyeing is an art; the moment science dominates it, it is an art no longer, and the craftsman must go back to the time before science touched it, and begin all over again.'

_Ethel Mairet,_ A Book on Vegetable Dyes _(1916)_

### _Supporting Statements_

I was delighted to be involved with this project, in conjunction with Nell Mellerick, the artist-in-residence at Hospice in the Weald in Pembury, Kent, England. The piece consisted of three long textile hangings, which were hung from ceiling to floor. This collaborative artwork was made especially for the Tenth Palliative Care Congress in Harrogate and exhibited there. Every panel was made up from individual patients' art pieces. Each patient was encouraged to explore a self-portrait through a botanical representation of themselves, with text about their favourite memory of it. Nell said: _'This piece echoes the benefit of patients having access to the arts. It supports their creative needs, gives them a sense of purpose and boosts self-esteem when they have had so many losses through their disease.
My role as Creative Artist provides an open session of craftwork, group art sessions, and one-to-one sessions working on memory projects or exploring creative expression.'_

I have previously been involved in fundraising for the hospice and was looking forward to working on this group collaboration, _Supporting Statements_, with Nell and her patients. The theme reflects the care and encouragement that the hospice provides. The piece expresses reflective and hopeful qualities and provides a colourful backdrop for their self-portraits and journeys.

Detail from _Supporting Statements_, mixed-media textile hangings by Anne Kelly with Nell Mellerick and Hospice in the Weald.

### Making a collage

Method for making a paper/fabric collage:

1 Start with a plain, lightweight cloth background. Choose a motif that is simple but striking to create your image.

2 Cut out pieces of fabric into the shapes that you are going to use for your imagery.

3 Make a mixture of 50 per cent PVA to 50 per cent water to use as a binder to layer paper and fabric together. If using this method, your layers need to be thin, so that the glue dries quickly.

4 Brush the mixture over and under all of your layers – you can top it with white tissue paper if you like.

5 When it is dry, you can make tree shapes and leaves from fabric, which can be tacked in place on top.

By using simple shapes and colours I created a folk-art style image, which was overstitched with running stitch and embellished with buttons (shown below).

Tree collage, mixed-media textile embroidery by Anne Kelly.

Tree collage, mixed-media textile by Anne Kelly.

## Taking flight

## 'I hope you love birds too...'

## _Emily Dickinson_

Detail of _Butterflies_, mixed-media textile by Judith Mundwiler.

### Birds and insects

I am often asked 'Why birds?'. They are included in many of my pieces and also feature as subjects in their own right. The answer is simple.
Birds are everywhere and co-exist with us in a rapidly changing world; they are symbols of the soul and of the imagination, and at a deep level they seem to call to me. Insects, particularly butterflies, are favoured by artists for their symbolic value as well as for their beauty and charm. In this chapter birds and insects will be interpreted by a variety of artists and makers.

_Owl_, mixed-media textile mounted on wood by Anne Kelly.

Detail from _Birds_ embroidery by Anne Kelly.

When I started participating in group shows and open studios, I made small objects to exhibit. I'm drawn to brooches and like wearing them, so I made two kinds using insects for inspiration. I devised a way of trapping free-motion embroidered moths, stitched onto maps, in between two layers of Perspex.

Detail from an untitled collage of mixed-media textile by Anne Kelly.

The Perspex had the function of magnifying the stitched moth, which was effective. My second brooch design involved recycling old thread spools, made from heavy card. I had a collection of these, some of which had writing on them. I glued them together to create a base on which to apply free-machine embroidered beetles, calling them _Thread Beetles_. I then applied a brooch pin to the back of each piece. Recently I've returned to this theme and made small 'nature block' brooches of moths and butterflies.

Brooches and small embroidery, mixed media on Perspex and textile by Anne Kelly.

_Butterfly_, mixed-media textile on wood by Anne Kelly.

_Butterfly Book_, mixed-media textile by Anne Kelly.

#### Mini books

As well as making small objects, I have always liked putting together small books, with images and themes that can be captured in a hand-held format. I used a transfer method with T-shirt transfers from vintage imagery (see here) to create the images on the pages of the book.

_Moth Folding Book_, mixed-media on canvas with stitch by Anne Kelly.
I like to use calico or canvas for my mini books because it has a good weighty feel to it. In addition to using T-shirt transfers for the pages I often use stamps, especially number and alphabet stamps – the latter can be used for adding words. I colour some pages with dye and embellish with stitch. My favourite book formats are the standard book form (shown above) and the concertina form, as in the moth book (shown right). For a standard book I simply bind the pages together with a piece of canvas folded over on the edge. Instructions for making a book like the one shown on the right are given here.

### An insect collaboration

Judith Mundwiler from Switzerland and Gabi Mett from Germany have worked together for many years. I was impressed by their collaborative work, which was displayed at the Festival of Quilts. They describe their collaboration and this series of work as follows: _'We have held exhibitions, written a book and designed courses together. Since our first exhibition, we have always strived to find linking points to connect our artistic personalities. We planned a presentation of our work at the Festival of Quilts in the UK (Birmingham) under the theme "Short Stories – Between the Lines", whilst keeping our individual styles and ideas. We kept in touch to discuss our work and plan further steps. One main point was that all the artwork had to be foldable. Our wish to exchange material as we did in our first joint exhibition was put into concrete terms during this process. We had bought two books from an antiquarian bookshop. One was a reference book on grasses and the other was a book on butterflies. These two books were dismantled and each of us got one half of each book to process further. The butterfly illustrations were especially inspiring._

_At this time, Gabi Mett was working on a series of artworks in which old sewing accessories were integrated. In this case it was paper bags that originally served as templates to tag linen in stores.
In combination with the butterfly illustrations and the names of these insects, as well as thoughts on their disappearance at the present time, her work paid homage to these very special marvels. Judith Mundwiler has been working for some time with used teabags and eco-dyeing. In this work she combined these materials to draw attention to the beautiful butterfly drawings. She also references the extinction of insects with the title_ Are They Still Here? _This artwork is sewn and embroidered by hand.'_

_Are They Still Here?_, mixed-media textile by Gabi Mett.

#### A buzz about bees

Bees are very much in the news worldwide, and our relationship to them is one of interdependence. When I was artist-in-residence at Sussex Prairies Garden, I decided to make a series of work looking at the different types of bees there are in British gardens. I started with an appliquéd and patched background, to which I added free-machined bees and a title for each type of bee (for example, garden, meadow and so on). The garden has an ongoing project with Sussex University using Warré hives, which give the bees a more natural home than the National hives more commonly used. A fascination with insects is also evident in many other contemporary artists' work.

_Three Bees_, mounted mixed-media textiles by Anne Kelly.

_Bee_, embroidery on vintage table linen by Anne Kelly.

#### Lesley Coates

Lesley Coates is a talented student who enjoys working in a wide variety of media. Here she has drawn and coloured a moth design on calico and embroidered one version.

_Moths_ by Lesley Coates.

#### Jane Nicholas

Jane makes striking and detailed stumpwork insects. Based in Australia, she sells her kits and teaches internationally. She writes: _'While I find all aspects of that curious form of raised embroidery known as stumpwork appealing, nothing captures the imagination more than the idea of stumpwork insects.
With their beautiful shapes, appealing symmetry and often jewel-like colouration, stumpwork is the ideal technique to choose if you wish to embroider a lifelike insect. The unique characteristic of stumpwork – embroidered detached shapes with wired edges – is ideal for working the elytra and wings, which can be embellished in a myriad of ways. The wired edges allow for the wings and elytra to be curved and lifted away from the body. This, together with a variety of raised surface embroidery techniques for working bodies and legs, makes for a very realistic interpretation of these fascinating creatures._

_A specimen box is the perfect way to "collect" stumpwork insects – it provides for the fascination of research, the challenge of interpretation and the joy of stitching. When a fellow embroiderer, with a passion for insects, suggested that I work a specimen box, the seed was sown. What started as a collection of assorted insects in one specimen box rapidly morphed into a series of boxes, containing collections of insects grouped according to order and sub-order. So far, I have embroidered specimen boxes of dragonflies, butterflies, moths and beetles. I am currently exploring the world of cicadas and stick insects – a wonderful challenge!'_

_Beetles_, stumpwork embroidery by Jane Nicholas.

#### Hand-stitched butterfly

This intricately designed and drawn student's butterfly has been realised in stitch, finely and with love. It is an example of sustained and neat lines of hand stitching coming together to create a multi-layered and beautiful object. The vibrant colours reflect the student's travels through Central America.

Hand-embroidered butterfly on digitally printed fabric.

#### Inspired by Louise Bourgeois

Smaller pieces are an effective way of isolating insects, as can be seen in this embroidered folding book cover. Free-motion embroidery was used for the piece.
Louise Bourgeois was a French-American sculptor, printmaker and artist who created political and symbolic artworks. I was very moved by an exhibition of her textile work at Hauser & Wirth Gallery in London a few years ago, and I was inspired to create two works 'in homage' to it (shown opposite). Both feature insects, but use them in different ways. _My LB_ is a piece using fragments of illustrations taken from my Bernina sewing-machine manual, combined with butterflies and illustrations of trees. The separate pieces were patched and appliquéd together. The whole piece was then bound, backed and overstitched. The title is a play on the name 'Louise Bourgeois' but also an ode to my 'lovely Bernina', using vintage images of sewing machines.

Folding sketchbook and cover, mixed-media textile by Anne Kelly.

Detail of _My LB_, mixed-media textile by Anne Kelly.

The second piece, _Baby Spider_, takes the motif of the spider sculpture from Louise Bourgeois's well-known _Maman_. I have used the silhouette of a spider set against a background made from knitwear and pieces of fabric. I also hand stitched a quote from her about fear: 'Once I was beset by anxiety but I pushed the fear away by studying the sky, determining when the moon would come out and where the sun would appear in the morning.'

_Baby Spider_, mixed-media textile by Anne Kelly.

### Birds

Bird from Anne Kelly's nature-block series, mixed media on wood.

#### Nature blocks

Often a new process is born from a desire to use materials that appear around you. I had a supply of recycled wood offcuts that I decided to use for mounting work on, similar to a small icon. I made backgrounds using transferred and reclaimed pages from natural-history books. I then mounted painted and stitched birds over the top and laminated the pieces with water-based varnish. I produced a series of birds and butterflies using this method.
Observing birds from life can be challenging, but it makes for a much more realistic impression.

### Anna Dickerson

Anna is a painter and former artist-in-residence at London Zoo. Her colourful and detailed paintings of garden birds are luminous, and I was keen to discover her process for drawing them, as she works from life wherever possible. This makes her work distinctive and expressive. She has a bird feeding station outside her kitchen window, and she has developed a relationship with the birds which allows her to observe them almost unnoticed. Anna works quickly in a range of materials, including pencil, coloured crayon and shellac inks, which allow her to block in the background as well as to capture each bird's movements. Her sketchbook studies show the information that she is able to gather before painting her intricate finished pieces.

Garden bird sketches, pencil and shellac ink on paper, by Anna Dickerson.

#### Garden birds

After working with Anna drawing birds at her studio, I decided to create a series of mixed-media works incorporating paintings of garden birds. I started with a background made from pieces of fabric that reflected the garden, either printed or embroidered, and stitched them together. Over the base layer I incorporated my paintings of garden birds on calico. I like to use paint with a soft, non-plastic finish, like good-quality acrylics or casein tempera. At this stage, I also added small remnants of lace and fabric, to provide focal points for the work.

Bird Handkerchief series, mixed-media textile by Anne Kelly.

#### Students' birds

I teach mixed-media courses as well as textiles, and one of our projects was to draw and design birds to fit into a Joseph Cornell-type constructed box. One of my students, Angela Kent, decided to draw and colour a bird on fabric, mounting it onto newspaper. She then stitched around the edge with blanket stitch, embellishing it further.

_Bird_, mixed media on textile by Angela Kent.
Mixed-media collage on paper by Anne Kelly and _Bird_ , mixed-media textile by Helen Ott.

### Nicola Jarvis

_Bird_ , embroidered textile by Nicola Jarvis.

Nicola is an artist and senior tutor at the Royal School of Needlework in London. She has made a series of work inspired by William Morris, which was exhibited at the William Morris Gallery in London and toured William Morris houses around the UK. Nicola says: _'When I was a child we lived on the edge of a town overlooking open fields and I would amuse myself in the long, rambling garden of my home. The changing colours and atmosphere of this place became a wondrous backdrop to days spent poking about in flower beds, peering into shrubs and trees to look at insects and spy on birds' nests. I was mesmerised by the myriad hues of plants and flowers, watching the light intensifying on their textures or casting silhouettes against the sky. This was where I developed a fascination and sensitivity for nature that continues to feed my practice today._

_I always explore initial ideas through drawing when beginning a design project or embarking on an artwork. Drawing is central to my practice and a new project will begin with selecting a natural form, for example, a flower for the compelling colour of its petals, some interestingly shaped leaves or the irresistible patterns of a bird. An endless stream of stimuli will attract my eye toward something, which is then clarified in the act of drawing when the eye, imagination and hand collaborate. In capturing and processing these visual and tactile qualities, I represent them in a way that exposes much potential for design. I am continually developing my personal culture of drawing and embroidery, and examine this through the design processes and aesthetics of specific artists and designers from various historic periods.
I try to understand nature by examining it closely and then use my imagination to develop ideas.'_

#### Embroidered birds

I enjoy working on vintage and reclaimed fabrics, so when I decided to create a small series of bird embroideries for a solo exhibition, I repurposed some table linen and old fabrics for the project. I made some simple drawings that I transferred to the fabric, then stitched along the outlines and added detailing where required. As you can see in the example shown left, I deliberately kept the informal, sketchy look of the drawings to maintain the lively, gleeful feeling of the originals. Sometimes artists can try too hard to create 'perfect' images and overwork them, taking all the life out of them in the process. Try not to worry about getting it right or wrong; just concentrate on expressing what you feel and what you want to convey.

Bird series, hand stitching on vintage textile by Anne Kelly.

### Suzette Smart

Making birds in three dimensions can be challenging, but it is an effective way of capturing their characteristics. Suzette is a textile artist and tutor who crafts beautiful machine-embroidered pieces and three-dimensional birds. She describes them as follows: _'Through the textures and patterns found in the stitch, each little bird reflects the landscape in which it travels. To complete its story, props such as stitched postcards and words are then added.'_

How to make a three-dimensional bird, by Suzette Smart:

1 With no set pattern, layer small pieces of scrim and voiles onto stiffened fabric and stitch down with free-machine embroidery. The stitched fabric should be eclectic, with vermicelli, flowers and words. You will need to make a piece of embroidery around A4 in size.

2 Draw outlines for two wings, two sides and two gussets onto card and cut out. Move the templates around the embroidery to find the perfect pieces.

3 Position and stitch the wings onto the sides, and add buttons or sequins for the eyes.
Then you can begin to hand stitch the pieces together, starting at the top of the bird.

4 For the legs and feet, you will need one length of wire. This is stitched securely into place through each side of the bottom seams. Now shape both ends of the wire into bird's feet, add a little stuffing and finish stitching your seams together. Finally, hold the feet down with the palm of your hand and gently bend to find the bird's standing position.

Mixed-media embroidery bird by Suzette Smart.

### Catherine Frere-Smith

Catherine Bennett (designing under her maiden name Frere-Smith) is a textile designer and artist, designing printed textiles and embroidered sculptures inspired by her childhood upbringing in rural Kent, England. She is greatly inspired by nature, and during her time studying at Chelsea College of Art in London it became the main focus of her work.

Mixed-media textile blue tit by Catherine Frere-Smith.

Catherine Frere-Smith's Final Collection BA exhibition, Chelsea College of Art.

### Karen Suzuki

Karen's city pigeons are so expressive and have great character. She says: _'I use animal forms to explore the possibilities presented by combining and reworking fabrics. Rather than a strictly representational approach, I aim to capture something of the animal's character. I tend to work with urban creatures, especially city pigeons, which attract me by the tenacity with which they survive the hardships of their city existence. My materials and process-based approach is entirely worked using my version of traditional hand-sewing techniques, building up surfaces, on top of a cotton base, from small pieces of altered textiles and other media, sewn together vigorously with freeform stitching.
I use freely stitched-together pieces of existing textiles alongside pieces that have been worked using embroidery techniques such as pulled thread and appliqué, or reweaving elements back into the fabrics._

_I sometimes layer textiles of differing opacity and transparency to give depth and cohesion to the surfaces, and to create a kind of history for the object from materials that have gradually accumulated and adhered to the surfaces over time. These energetic processes are also intended to give each creature a sense of uniqueness, vitality and spontaneity. I have also experimented with incorporating elements that are not conventional textiles, such as altered food packaging. My aim for the work is to develop this idea further, finding a way to better express the idea of the fragility and complexity of how animals exist in an urban environment.'_

Mixed-media textile city pigeons by Karen Suzuki.

Vintage bird books from the author's collection.

#### Vintage bird books

I have started a collection of vintage natural-history books, mostly found in second-hand bookshops or received as gifts. I am drawn to the highly accurate, often delicate and descriptive illustrations they contain and, as most of the illustrations are out of copyright due to age, they are ideal for using as transfers. You can see them in many of my pieces.

Mixed-media folding book by Anne Kelly.

#### Papier mâché birds

These little birds were made as a contribution to my community installation piece _Sparrow Stories_ by artist Jenifer Newson. They were added to the piece and pinned on the surface. They are beautifully hand painted and worked well with the textile background.

Papier mâché birds made for _Sparrow Stories_ by Jenifer Newson.

#### _An Ark for Birds and Moths_

I had made some flat pieces combining collage and stitch for my nature blocks (see here), using recycled wood offcuts.
I found an old handmade Noah's Ark in a local charity shop and was inspired to create _An Ark for Birds and Moths_. It was a large piece and needed a lot of cleaning and preparation. When I had prepared the surface, I chose images and fragments of fabric with birds and moths to laminate onto the ark. I used some stitched pieces and printed panels to cover larger areas. As a finishing touch, I used lace and ribbon to edge and highlight aspects of the ark – including the windows.

_An Ark for Birds and Moths_ , mixed media on wood by Anne Kelly.

### Alternative views

#### _Woodland Walks_ backpack

I was given this damaged canvas rucksack and decided to rescue and embellish it with woodland scenes and images from a vintage woodland history book. I used the edges of an old table napkin, with lace and linen, as part of the background. The heavily embroidered pieces have woodland animals on them, done with free-motion embroidery. It was first exhibited at the Prague Patchwork Meeting.

_Woodland Walks_ backpack, mixed media on canvas by Anne Kelly.

Detail from _Woodland Walks_ backpack, mixed media on canvas.

### Leisa Rich

Leisa is a Canadian-born, American-based artist. She describes the technique behind her impressive wall installation _Mass Hysteria_ : _'In 1971 my mother taught me to sew. One of our sessions covered "darning". Using a Bernina 807, she demonstrated how to drop the feed dogs, change the foot to a darning foot, and move the material around in various ways in order to repair a tear in a skirt, a hole in a sock, to fix anything that needed it. It didn't take me long to figure out that I could use this technique to "draw". I remember creating a little dog on a scrap of cotton. This kicked off my fascination with using a sewing machine to embroider, a technique that has since been coined as free-motion machine embroidery. Pretty much any sewing machine has this capability, although some do it more easily than others.
In addition, there are now multitudes of papers, dissolvable glues, heat transfers and surfaces that can be stitched on, washed away, stitched over and layered, using decorative threads, embroidery cotton in the bobbin, elastics and more. The technique is now limitless!_

Mass Hysteria _is done on clear vinyl in similar free-motion machine embroidery fashion. Each bird attaches to the wall using small straight pins and can be formed into a myriad of configurations. I still use that same Bernina 807 that I first learned on back in 1971!_

Mass Hysteria _is my personal reflection on the challenge of watching my mother face dementia. These thoughts became birds ... careening wildly in Hitchcockian attack mode, or tethered like kites so they won't get away.'_

_Mass Hysteria_ , mixed-media textile by Leisa Rich.

### Lesley Patterson-Marx

_Songbird Harmonica Book_ , a limited-edition reproduction of the original book by Lesley Patterson-Marx.

Lesley produces a diverse range of work in many media. I was drawn to her use of found objects and stitch, evident in this _Songbird Harmonica Book_ : _'I was looking at an old harmonica and realised that, with its two cover plates, this musical instrument had the potential to become a book. The plates weren't too interesting, though, so I went in search of a unique harmonica and found the perfect one: an antique instrument called The Songbird. The name itself inspired the content of the piece. I created the pages, which fold up like an accordion between the two covers, from antique sheet music. To make the pages more durable, I backed them with a porous fabric called tarlatan, which I frequently use in printmaking. I collaged images of songbirds on top of the sheet music and sewed the pages together.
The result is an object that looks like a harmonica at first glance, but then opens to reveal an unexpected surprise of colourful songbirds.'_

### Working from life

#### Booth Museum of Natural History, Brighton, England

Many cities and towns around the world contain natural-history museums, and these are a wonderful resource for textile artists working on themes from nature. The Booth Museum in Brighton, England specialises in birds and butterflies, so it is of special interest to me. As the description on the website says: _'The Booth Museum was founded in 1874 by naturalist and collector Edward Thomas Booth. The Victorians were passionate about natural history and Edward Booth's particular interest was ornithology, the study of birds. During his lifetime he collected a huge variety of stuffed British birds and was a pioneer of the environmental type of display called the "diorama", displaying birds in their natural habitat. It was this collection of over 300 cases (with the proviso that the dioramas should not be altered) that launched the opening of the museum under Brighton civic ownership in 1891. In 1971 the Booth became a Museum of Natural History.'_

Gulls series in exhibition at Harbour Gallery, Jersey, mixed media on canvas by Anne Kelly.

I was interested in making some studies of seagulls and headed for this museum, as it is possible to get very close to the dioramas and the birds inside them. I drew them onto brown paper with ink (as shown right), and then transferred the drawings onto a mixed-media collage of fabric and paper that had been dyed and laminated onto canvas. I then stitched the gulls with free-motion stitching. The series was exhibited at the Harbour Gallery on the island of Jersey as part of a nature-themed exhibition. I also gave a series of workshops there based on seabirds, where Jenny Mahy, one of the students, created the piece shown opposite, above.

Gull drawing by Anne Kelly.
_Seagulls_ by Jenny Mahy, from the course at Harbour Gallery, Jersey.

### Lindsay Taylor

Lindsay is a textile artist and tutor based on the Isle of Wight, England. She was commissioned to make a piece of art inspired by one of the Dutch masters hanging in the Wallace Collection in London. Instantly she was drawn to a 17th-century painting by Jan Weenix, _Flowers on a Fountain with a Peacock_. Her challenge was the peacock, with all his splendid feathers. She says: _'Like any creative venture that takes the natural world as its cue, my work is ever-evolving. My interest in three-dimensional forms has prompted me to extend my skills and seek out and explore new techniques and materials._

_Taking the abundant beauty and untamed, intricate shapes of the natural world as my inspiration, I work predominantly in three dimensions, weaving and winding hand-dyed natural fabrics into organic forms. My studio is located at the edge of a large forest on the Isle of Wight, an ideal environment for any artist fascinated by the native plants and flora that inhabit Britain's woodland and rural landscapes. Here my textile work takes shape. To work these transformations I use a variety of techniques. These include: free-hand machine embroidery, traditional hand embroidery, painting, dyeing, quilting, moulding, felting, sculpting, beading, trapunto, wiring and appliqué._

_My materials are chosen carefully for their texture, credibility and aesthetic appeal.'_

_Peacock_ , mixed-media embroidery by Lindsay Taylor.

### _Sparrow Stories_

For a recent solo show, I exhibited a series of new autobiographical pieces (see the _Aprons_ series) and wanted to include a community collaboration, so I decided to make a piece that would help to raise funds for the RSPB. I was made aware of the worldwide decline of the sparrow population, which disturbed me as I had always taken these small birds for granted.
I made a densely embroidered background of trees and birds, and invited contributors to donate stories, pictures, poems and even three-dimensional models of sparrows. The response was tremendous and we even arranged to have a sparrow soundtrack playing at the exhibition opening. A representative from the RSPB was present and helped with the fundraising and with providing information on the organisation and their projects. During the exhibition, I held a workshop with a community group who made stitched pieces to contribute to the piece. I was delighted to be invited to take the piece to the RSPB education centre on Hampstead Heath near London, where it was displayed for eight months.

_Sparrow Stories_ by Anne Kelly, a community collaborative piece, embroidered background with mixed-media pieces pinned to it.

_Sparrow_ , mixed-media embroidery on paper by Anne Kelly.

## Working with green spaces

## 'So still were the big woods where I sat, sound might not yet have been born.'

## Emily Carr

Pages from the author's sketchbook.

### _The Natural History of the Garden_ at Sussex Prairie Garden

View into the garden at Sussex Prairie Garden.

I was artist-in-residence at Sussex Prairie Garden, an RHS Partner Garden nestled in the South Downs near Henfield in West Sussex, England. The McBride family had transformed their farm into a wild-planted prairie space in keeping with the design ideals of internationally renowned garden designer Piet Oudolf, with whom they had worked. The garden specialises in the planting of sweeps of perennials and exhibits the work of artists, sculptors and craftspeople. I wanted to create an installation that would engage with and inspire the visiting public. I decided to use my love of folk art and created templates of birdhouses, trees and plants to make hardboard shapes, which were then covered with recycled fabric. I made bunting to go across the top of the wall with the title of the piece, _The Natural History of the Garden_.
I left a space in the centre of the installation for a set of 'washing lines' on which to peg contributions to the wall display. I had asked students and the public via social media to send me JPEGs of plants, animals, poems, photographs and any references to the garden. There were some wonderful responses and the installation grew throughout my residency.

Detail of wall installation and bunting by Anne Kelly at Sussex Prairie Garden.

We allocated a weekend in the summer to be a fundraising event for the RSPB and asked their community fundraising team, whom I had worked with previously, to help to organise a series of activities, including a nature trail for visitors. As part of the weekend, I conducted workshops with children. I also taught a series of adult courses at the garden, exploring different aspects of the space and referencing plants in different formats. I kept a sketchbook of my time at the garden and set myself the challenge of making a drawing every time I visited it. I also organised a system whereby visitors to the garden could contribute to the wall piece by helping themselves to images, fabric and embroidery cotton for stitching. They were able to sit and stitch, and also to take the work away to finish when they wanted to.

Community collaboration installation (below) and detail (left) at Sussex Prairie Garden.

#### Covered Wagon studio

I was keen to establish a workspace and studio during my residency at Sussex Prairie Garden. I wanted it to be a welcoming space that guests to the garden would be able to 'dip into' when visiting. I collaborated with the owners of the garden, who had found a 'wagon'-shaped growing frame, which I could use as the main structure.

Covered Wagon studio, Sussex Prairie Garden.

I had some help from the volunteers at the garden to reinforce the structure with bamboo poles and string laid across the wagon. These were later used for hanging drying fabric. I placed some work tables under and near the structure.
I then made a large cover for the whole piece out of recycled embroidered tablecloths and teacloths. This would eventually be disassembled into separate pieces that I used in _Wildflower Teacloth Sketchbook_. The studio was used throughout the season and acted as a good backdrop for displaying work. One of the workshops I gave at the garden concentrated on folding books (see instructions here).

Wallhanging, Sussex Prairie Garden.

#### Making a folding book

Here are some simple instructions for making a folded book:

1 First, select the papers and fabric you would like to use for your book, assembling them in a long rectangular shape that will later be folded.

2 Choose a strong but lightweight fabric (calico or light canvas is ideal) and bond the cut-out paper pieces and the fabric together using a PVA glue mixture diluted 2:1 with water. Hanging your rectangular piece to dry will ensure it doesn't stick to surfaces. You can continue to use this mixture to add further images, drawings and thin fabric scraps.

3 When your piece is completely dry, you can iron it into folded sections. I've chosen leaves, using real, printed and drawn leaves alongside leaf skeletons as my subject. I've used a combination of machine and hand stitching to embellish the piece, and I've also added some words to focus on and make the design more interesting. This piece was auctioned by Workshop on the Web founder Maggie Grey to support the Teenage Cancer Trust.

Shed Studio during the Cross-Pollination at the Chelsea Fringe event.

Wakehurst Place book, mixed-media textile on card by Anne Kelly.

### Cross-Pollination

Following on from my residency, I was keen to establish links with artists-in-residence at other gardens. Louise Pettifer was artist-in-residence at Sissinghurst Castle garden in Kent, England. She trained as a textile designer and produces layered plant collages, using a technique similar to the method I use for making hand-cut stencils for printing (see here).
She draws the plants and cuts them out of thin card or paper. Sections of the plants are layered over each other and the whole image is then added to a monoprinted background. She reproduces these originals as prints, which can then be transferred onto a range of surfaces, including textiles.

_Cottage Garden, September_ , monoprint and cut paper by Louise Pettifer.

### Rosie MacCurrach

_Laurel and Pear Tree,_ etching on paper by Rosie MacCurrach.

Rosie trained as a textile designer – her drawings embrace this and are multi-tonal and layered. They perfectly reflect the changing seasons and activities at the famous garden of Great Dixter in East Sussex, England, and are reminiscent of the great illustrators of the 1930s and 1940s such as Edward Ardizzone and Eric Ravilious. Capturing atmosphere and light in her work and creating small vignettes of her surroundings, Rosie's works can tell us a lot about working in garden surroundings. In early 2015 we exhibited together at the Crowborough Centre in East Sussex, England, and provided statements about our residencies as well as examples of work produced there. We also participated in the 2015 Chelsea Fringe, a charity running garden-themed events in areas around the Chelsea Flower Show in London.

### A natural place

Green spaces inspire, and working from nature first hand is the best way to learn.

#### Royal School of Needlework

The Royal School of Needlework (RSN) is a forward-looking charity dedicated to teaching, practising and promoting the art of hand embroidery. It runs courses in hand embroidery for all levels, from beginner classes to degree programmes, in all techniques. It has an Embroidery Studio for work on both the conservation and restoration of historical textiles and for creating new commissions.

Blackwork thistle by Royal School of Needlework Certificate student Yuliya Klem.

Silk shaded floral bouquet, English twentieth century, Royal School of Needlework Collection.
Silk shaded _Linum perenne_ (perennial or blue flax) by Royal School of Needlework Certificate student Kaoru Ozaki.

The RSN says of the Inspired by the Garden (2015) exhibition: _'Almost since the start of embroidery, capturing flowers and the natural world has been an irresistible subject for stitch. Embroidery lends itself perfectly to capturing the textures, colours, shapes and movement of nature, and on show were beautiful pieces of work including traditional floral interpretations and a host of more unusual embroidery subjects, from vegetables and fruit to fungi._

_The exhibition featured historic work from the RSN collection together with current embroideries by RSN students and tutors – all inspired by the natural world using a wide variety of stitched techniques. Historical pieces date from the eighteenth century and the exhibition comes right up to date with pieces submitted for the RSN Degree, Certificate and Diploma courses.'_

This piece was a wedding gift, combining elements of the English countryside, plants and birds.

_English Country Ducks_ by Anne Kelly.

#### _Marshlands – New to Old_

After my first collaboration with the RSPB, _Sparrow Stories_ , which was installed at The Hive education centre on Hampstead Heath, I was invited to exhibit at the RSPB Rainham Marshes nature reserve in Essex, in southern England. I drew on the link between the marshland at Rainham and the marshland habitat in New Brunswick, Canada, where I trained as an artist. I made four panels called _Marshlands – New to Old_ , which were displayed at the Purfleet Hide. I used a simple background to mount drawings of marshland birds from both countries, plants and plant outlines. The bottom edge of each piece was embroidered with plant stems that extended upwards into the main body of the work. I also included some large insect motifs. The whole piece was overstitched with large machined running stitch, in a wave-like pattern.
I added a bright green vintage fabric border as a contrast to the dark outlines of the work.

_Marshlands – New to Old_ by Anne Kelly, in situ at RSPB Rainham Marshes, Essex.

#### Sketchbook work on site

The power of the sketchbook as a resource cannot be overstated. Use it for experimentation, for getting down ideas, expressing feelings, collecting references, mark making and whatever else inspires you. Above all, use it when working outside or when researching at museums or attending workshops. I like to sketch in my Shed Studio, where I have set up a nature table (see here) that I can use as reference in my own work or when teaching. The studio is in the middle of the garden and reflects the green spaces outside through its many windows. The proximity of the plants, birds and outdoor space makes it a peaceful and contemplative place to work. It has also been a feature of our local open-studio organisation, South East Open Studios, for over ten years, and has hosted a number of guest artists as well as a core group of artists connected through thematic approaches.

Student sketchbook, Shed Studio, coloured pencil on paper.

#### Karl Simmons

Sketchbook pages, mixed media on paper by Karl Simmons.

Karl is a painter based in London. On his travels around the UK and abroad he keeps detailed sketchbooks, which he uses to paint from. They contain fascinating records of his thought processes and the drawings he uses as source material for his work. They have a magical and light touch, which imbues his larger paintings, and they are works of art in themselves. They have been used as teaching tools for his students.

### Jennifer Collier

Jennifer explains her work as re-making household objects from stitched recycled papers. She says: _'My practice focuses on creating work from paper; by bonding, waxing, trapping and stitching I produce unusual paper "fabrics", which are used to explore the "remaking" of household objects.
The papers are treated as if cloth, with the main technique employed being stitch; a contemporary twist on traditional textiles. The papers themselves serve as both the inspiration and the media for my work, with the narrative of the books and papers suggesting the forms. I tend to find items and then investigate a way in which they can be reused and transformed, giving new life to things that would otherwise go unloved or be thrown away.'_

I have chosen to highlight Jennifer's birdhouses, which show the delicate selection of imagery that she uses in her work. Jennifer has led the way in the upcycling revolution in art and craft; a veteran maker working with vintage material, she investigates the themes of reuse and recycling. Every exquisite detail is made, folded and manipulated from paper. Once books, maps, envelopes, wallpaper or scrap, the paper is transformed into textural forms. Like cloth, it is stitched to construct two- or three-dimensional objects, decorative and functional: lampshades, cameras, tools and furniture. The origin of the paper often provides a starting point for the artwork, the narrative of the books and papers suggesting idea and form. Jennifer's work centres around domestic objects made entirely from paper: upholstered chairs, kitchen utensils and garden tools hanging in their shed invite you in. There are references to fairytales, films, literature, music and nursery rhymes – the layers of paper and meaning together build the narrative.

_Birdhouses_ , mixed-media textile construction by Jennifer Collier.

### Jane Churchill

_Jessie Ellman's Collection of Years 1916,_ hand-cut moths made from vintage paper in museum box by Jane Churchill.

Jane is a Sussex-based artist who trained as a set designer. Her work encompasses the theatrical but is also heavily influenced by natural history, memory and, more recently, the personal histories of the First World War.
Jane says: _'This box is part of my immersive installation_ Degrees of Separation, _which tells glimpses of Will and Jessie's love story, of loss and connection through the First World War. My mixed-media work explores the boundaries between truth and fiction, created artwork and artefact._

_In July 1917 Lieut. W.G. Hicks was killed near Arras in France. He left behind his fiancée Jessie Ellman. This museum box of moths, which was inspired by Victorian natural history collections, is part of a larger piece based on Jessie's feelings about the extent of the casualties in the First World War, each moth representing a fallen soldier.'_

## Nature in context

## 'Tell your own story, and you will be interesting.'

## _Louise Bourgeois_

Hanging by Anne Kelly from Covered Wagon studio, Sussex Prairie Garden.

Images of nature can be used at face value for purely aesthetic reasons, but some artists like to dig deeper and load their art with symbolic meanings and resonances. In this chapter, I will look at the work of artists who use nature and natural imagery as a conduit for making a personal statement about issues that concern them. I will also look at the Goldsmiths College collection of textiles, which is a useful resource for study and research. The chapter will conclude with some examples of textiles influenced by nature from different parts of the world.

### Women's work

_There Are No Words_..., hand-stitched textile by Caren Garfen.

### Caren Garfen

The intricately and beautifully stitched work of Caren Garfen belies a much more serious and arguably more subversive purpose. I was drawn to her sampler _There Are No Words..._ with its use of animals and plants. Caren says: _'When I create an artwork I endeavour to understand the nature of gender inequality by examining women's societal roles in the 21st century, although my most recent works have been site responsive.
These have necessitated looking into the past to reveal interesting facts about women's lives in both the 1800s and, in the case of the sampler,_ There Are No Words to Embroider That Single Desolating Fact, _the 1500s. My work can be seen as a feminist campaign; the making becomes a spirited action which pursues a political or social end. My meticulous and time-consuming hand-stitched words are chosen to raise people's awareness of issues concerning females today. These art pieces are contemporary banners documenting women's lives in the present and from the past. Textiles are of particular importance in enabling me to put my ideas across. Textiles are something that everyone can relate to – they are part of our everyday lives._

_All of my work commences with a flat sheet of fabric which is eventually manipulated into a finished artwork, taking the form of everyday domestic objects such as bedding, tea towels or a kitchen-paper towel roll. The viewer can recognise these immediately, but on closer examination they will find that there is more to them than meets the eye. In creating_ There Are No Words... _it was important to re-create something that resonated with the National Trust property Newark Park in Gloucestershire. I wanted to embellish the piece with symbolism and coded messages and felt the best way forward was to painstakingly hand stitch a sampler. The building was originally a hunting lodge so it was essential to add motifs to emphasise this. Thus deer, foxes, birds and trees were sewn into this work. I use humour too by combining handmade and hand-stitched labels with either silkscreen printed or stitched imagery. I am interested in how a play on words can be extracted from a sentence, which is thereby transformed to reveal a greater truth.
Although we have made a good deal of progress since women first got the vote, it seems to me that we still have a long way to go, in terms of our working lives, childcare and self-image, and I will continue to highlight all of these issues, and others, in future artworks.'_ #### _Independent Minded Women_ In this embroidery mounted onto canvas, I am expressing some thoughts about my family history by recycling a piece made by my grandmother. She was a great needlewoman, and made work in all media: lacemaking, embroidery, quilting, knitting, crocheting and rug-hooking. I chose to use the embroidery of thistles to create almost a 'family tree' with mentions of four generations embroidered over the top. I was happy to include other fragments of embroidery, print and some fabric brought back from Africa by my daughter Ruth. Both of my formidable grandmothers get a mention. It is a piece that, although explicit with its wording, hopefully also makes you think again. _Independent Minded Women_ , mixed-media textile on canvas by Anne Kelly. ### Lynn Setterington _Passing Down_ , mixed-media textile by Lynn Setterington. Lynn is an internationally recognised artist working in the textiles arena. Celebrations of the ordinary and overlooked are key themes in Lynn's work. She became known for her use of kantha embroidery in the early 1990s and has since gone on to devise and instigate a large number of social engagement projects and textile collaborations with diverse groups. Lynn says: 'Passing Down _was a collaborative project with poet Helen Clare to create a large-scale embroidered poem as part of the Manchester Science Festival 2008. The final poem is configured as a tree of life/family tree made up of series of quotes and statements from the participating groups. Individual leaves were stitched by students from the Embroidery Programme at Manchester Metropolitan University to add different voices to the narrative. 
The cloth is on display at Nowgen, a genetics research centre in Grafton Street, Manchester, next to Manchester Royal Infirmary.'_

### Amelia Scott

I was invited to put together an exhibition for the corridor galleries at our local Tunbridge Wells Hospital in Kent. I wanted to reflect some of the history of the hospital and its role in our town, although it is a relatively new building. Amelia Scott was an eminent philanthropist and supporter of women's suffrage and was vice president of the Tunbridge Wells branch of the National Union of Women's Suffrage Societies. I was able to view some of Amelia's original letters and documents at The Women's Library Reading Room, which is now housed in the LSE Library. Amelia was active in all aspects of women's work in Tunbridge Wells during the First World War, including the establishment of the Soldiers' Central Laundry. She was awarded the Gold Palm Order of the Crown, an extremely prestigious award, in 1929, by the King of the Belgians, for her work with Belgian refugees. I wanted to reflect the surroundings of the hospital, which Amelia helped to support on this site, by using birds from the area. There are also plants and maps from the town, marking the areas where Amelia and her sister lived and worked. There are small references to her work in the First World War – poppies, nurse's uniform and her leaflet bag.

_Amelia Scott's Birds_, mixed-media textile by Anne Kelly.

_Amelia Scott's Birds_ in situ at Tunbridge Wells Hospital.

#### Goldsmiths Textile Collection and Constance Howard Gallery

The Goldsmiths Textile Collection and the Constance Howard Gallery are located at Deptford Town Hall, part of Goldsmiths, University of London. The textile collection illustrates the history of textiles at Goldsmiths from the 1940s to the present day and includes works by alumni of Goldsmiths and other textile artists, as well as ethnographic and historical textiles and dress.
The collection is used by a wide variety of researchers from different disciplines including visual arts, anthropology, history and design. Students, academic staff and researchers, both from within Goldsmiths and outside, together with members of the public use the collections. There is a reference library which, in addition to books, holds journals and an extensive collection of pamphlets and exhibition catalogues. The Constance Howard Gallery holds exhibitions of textiles. These include pieces from the collection or from associated research projects, or work by textile artists, students or alumni. Purse made from apple seeds, held at Goldsmiths Textile Collection, Library, Goldsmiths, University of London. Small straw woven sewing case from Africa, held at Goldsmiths. Purse decorated with peacock feathers, held at Goldsmiths. _Moth Bag_ , mixed-media embroidered textile on bag form by Anne Kelly. _Primitive Bag_ , mixed-media embroidered textile pieces appliquéd on bag form by Anne Kelly. #### Nature bags Inspired by the collection at the Constance Howard Gallery, I made some bags using natural themes. I upcycled readymade bags and stitched embroidered panels onto the existing shapes. I aimed to match the images with the shape of the bags, so that they enhanced the style and form of the bags. ### Travel in textiles #### Russia Other sources of inspiration can be taken from the interpretation of nature. In October 2013, I visited Moscow and the wonderful All Russian Museum there. It has a large collection of folk art – from wooden painted sleds to the embroidered panels, which particularly interested me. These reinforced my love of traditional pattern, embodied in folk and naïve art. The predominant colour of these samples was red, which I had been exploring through my own work at that point. The patterns included elements of the natural world, which is another recurring theme in my work. 
I made some initial sketches using pencils and a fine line pen, taking note of the colours involved. I then added some watercolour to the drawings the same day, back at my hotel. I wanted to use a limited palette to reinforce the colour schemes that I associated with this style of folk art and to capture the mood and feeling of the museum, through sketching. When I had finished my preliminary sketches, I started to plan a series of work based on my visit and the embroidery that I had seen at the museum. I like to use fabric that relates to the place that I am working on and has a connection to it. I chose to use as a base fabric and background an old evening dress from Eastern Europe with a satin finish. I had decided to work on a triptych and to create a top and middle section and shoe section from the imagery and newer textiles that I had collected on my visit. I began by covering the base with a sheer nylon curtain fabric, with a slight pinky tinge. This immediately neutralised the background without completely covering the outline of the dress. It also provided a 'blank canvas' to work on. Once I was back in the studio, I then turned my attention to the patterns themselves, and made some transfers of sections of my manipulated photographs onto printable canvas. These sections were incorporated into the background and made into separate motifs, like the butterfly and flowers as part of the tops. As the transfers themselves were quite pale, I decided to make the pattern more prominent by stitching over it, using a free machine embroidery foot. This made me appreciate quite how intricate and involved the patterns must have been to weave, print and stitch. _Russian Folk_ , mixed media textile by Anne Kelly. detail from _Russian Folk_ , mixed media textile by Anne Kelly. #### Prague I was invited to exhibit at the Prague Patchwork Meeting and to teach a workshop there. 
I drew on the theme of 'windows' and using a mixture of vintage and new fabric, produced a demonstration piece for the group. I was trying to capture the feeling of the old city with its distinctive architecture. Starting with a handkerchief as a background, we added drawings and collaged elements of views from the city. These were stitched onto the base and then overstitched with free-motion embroidery and some hand stitching. #### Canada A small piece in the form of a postcard, where images from both the UK and Canada link the two locations, along with remnants of cross stitch. _Canada Card_ by Anne Kelly. _Prague Window_ , mixed-media textile on vintage fabric by Anne Kelly. _Greek Dress Apron_ , mixed-media textile by Anne Kelly. _Aprons_ series by Anne Kelly in the Corridor Gallery, World of Threads Festival, Oakville, Canada. #### _Aprons_ series My _Aprons_ series was designed as an autobiographical series of work, describing different environments that I have been a part of, and each one referencing travel and the natural world. Earlier pieces from the series were exhibited at my solo exhibition at the Trinity Town and Country Gallery in Tunbridge Wells, Kent. Carolyn Forster, author and quilter, wrote: _'The materials were everyday and accessible and also evocative; it inspired me to want to get stitching... even though you could create your own work you still wondered if it had as much to say or as much depth as Anne's work'._ Five pieces of work from the series were exhibited at the International World of Threads Festival in Canada. Hand-embroidered belt from a Guatemalan market. #### North and South America In November 2014, I was able to visit an amazing collection of First People's work from Canada at the McCord Museum in Montreal. It was a poignant experience as I grew up in the city and remember visiting the museum as a small child. 
The museum describes the collection: _'The exhibition is a must to discover ancient traditions where the creation of original garments proved a rich heritage of identities and cultures. Because dress is not only utilitarian, it is used to quickly distinguish allies from enemies, indicate the power of spiritual leaders such as shamans, or, by wearing finely decorated clothing, show a hunter's respect towards animals on which his family depends for survival... Dress participates in the development, preservation and communication of social, cultural, political and spiritual identities of First Nations, Inuit and Métis.'_

The hand-embroidered belt (above) from Guatemala was made by a local craft cooperative. It features flower and bird motifs, heavily embroidered in satin stitch on a woven cotton belt.

Chinese embroideries from the author's collection.

#### China

I was fortunate enough to be able to visit China twice in the last few years, and naturally was keen to see a collection of textiles while there. I went to the Chinese Museum of Women and Children in Beijing, where I saw an amazing collection of textiles from the 53 ethnic regions of China. I also found some samples at the antique market in Beijing from old pillow-end covers. The embroidery is intricate, colourful and very inspirational. The colours, although faded, remain rich and sensual.

Chinese silk embroidered pillow ends from the author's collection.

Travel sketchbooks by Anne Kelly.

#### Travel sketchbooks and studio

Whenever I travel, I take small sketchbooks with me and make notes and drawings. These are invaluable when back in the studio and often become pieces in their own right. On a recent visit to America I used some folk-art inspired fabric from a thrift shop there and used a room in the flat where I was staying as an impromptu studio. I have worked in hotel rooms, on aeroplanes, wherever I am.

_Red Tree_, work in progress by Anne Kelly.
#### Needlecase

I like to make practical items as well as creating pieces for framing or hanging. A needlecase is a particularly useful item for textile artists and it offers an opportunity to sample a new technique or display a small finished piece. I don't like to get too hung up on the dimensions – I just work with what I've got – but there is plenty of free instructional information on the Internet if you like to work to a prescription. My method is laid out below:

1. Choose a remnant of strong fabric and/or vintage fabric for the cover of the case.
2. Add an appliquéd image to the cover at this stage if you wish – I've chosen a bird. Work stitching over the fabric as desired and add any other adornments.
3. Use felt pieces for the inside, to hold the needles. Make the felt pieces slightly smaller than the cover.
4. Pin the felt pieces into place and sew along the spine of the case to join the pieces together.

Needlecase by Anne Kelly.

### Conclusion

'The art of writing is the art of discovering what you believe.' _Gustave Flaubert_

When I started working on this book three years ago, I soon realised what a huge area of study the bond between textiles and nature could be. I have tried to select areas that will interest and encourage artists, students and educators of textile art. By providing some contextual links and resources, I hope that readers will be inspired to take the topic further. _Textile Nature_ started with some simple connections between seeing and making, taking influences from the natural world. My aim has been to illuminate our attachment to nature and to see how we can use this to reflect our surroundings and location in the wider community. By exploring different aspects of making, through drawing, stitch, print, construction and weave, the book presents ideas and starting points. I am very grateful to all the artists, individuals and institutions that have generously allowed the reproduction of their images and words.
The book is much richer as a result. I have also been fortunate to work with some wonderful tutors and teachers who have linked their practice with the natural world, and helped me to locate my niche in it. It is no coincidence that this book starts and ends with quotes by two French writer/philosophers. My husband trained as a philosopher and has been my strongest supporter. I would also like to thank my children and their partners and this book is dedicated to all of them.

_House Sparrow_ folding books by Anne Kelly.

Mixed-media collages by Anne Kelly.

### Featured artists

My links:
www.annekellytextiles.com
www.annekellytextiles.blogspot.co.uk
www.annekellytextiles.wordpress.com
www.craftscouncil.org.uk/directory/maker/anne-kelly-textiles

### Artist websites

Melanie Bowles www.melaniebowles.co.uk
Jane Churchill www.janechurchillartist.com
Jennifer Collier www.jennifercollier.co.uk
Anna Dickerson www.annadickerson.com
Melvyn Evans www.melvynevans.com
Hillary Fayle www.hillaryfayle.wordpress.com
Carolyn Forster www.carolynforster.co.uk
Alice Fox www.alicefox.co.uk
Catherine Frere-Smith www.catherinefreresmith.com
Caren Garfen www.carengarfen.com
Cas Holmes www.casholmestextiles.co.uk
Val Holmes www.textile-art-centre.com.fr/val-holmes
Nicola Jarvis www.nicolajarvisstudio.com
Rosie MacCurrach www.rosiemaccurrach.com
Gaby Mett www.gabi-mett.de
Ellen Montelius www.ellenmontelius.com
Alison Milner www.alisonmilner.co.uk
Judith Mundwiler www.judithmundwiler.ch
Carol Naylor www.carolnaylor.co.uk
Jane Nicholas www.janenicholas.com
Nancy Nicholson www.nancynicholson.co.uk
Emily Notman www.emilynotman.co.uk
Emma Nishimura www.emmanishimura.com
Lesley Patterson-Marx www.lesleypattersonmarx.com
Louise Pettifer www.louisepettifer.co.uk
Leisa Rich www.monaleisa.com
Lynn Setterington www.lynnsetterington.co.uk
Suzette Smart www.suzettesmart.wordpress.com
Maxine Sutton www.maxinesutton.com
Karen Suzuki www.namelesswonders.jimdo.com
Lindsay Taylor www.lindsay-taylor.co.uk
Kim Thittichai www.kimthittichai.com
Pauline Verrinder www.paulineverrinder.com
Jane Will www.flowersanddaughters.co.uk
Meredith Woolnough www.meredithwoolnough.com

_Small Flower_, mixed media embroidery by Anne Kelly.

### Further information

All-Russian Decorative, Applied and Folk Art Museum, Moscow www.russianmuseums.info/M276
The Beaney House of Art and Knowledge, Canterbury, UK www.canterbury.co.uk/beaney/
Booth Museum, Brighton, UK www.brightonmuseums.org.uk/booth
Chelsea Fringe www.chelseafringe.com
The Chinese Museum of Women and Children, Beijing ccwm.china.com.cn
Ditchling Museum, East Sussex, UK www.ditchlingmuseumartcraft.org.uk
Embroiderers' Guild www.embroiderersguild.com
The Fibreworks, Oxfordshire www.thefibreworks.co.uk
Goldsmiths Textile Collection www.gold.ac.uk/textile-collection
Great Dixter House and Gardens, East Sussex, UK www.greatdixter.co.uk
The Harbour Gallery, Jersey www.theharbourgalleryjersey.com
The Hospice in the Weald, Kent, UK www.hospiceintheweald.org.uk
The Knitting and Stitching Show www.theknittingandstitchingshow.com
McCord Museum, Montreal, Canada www.mccord-museum.qc.ca
Mississippi Valley Textile Museum, Canada www.mvtm.ca/mvtm
Narodni Museum, Prague, Czech Republic www.nm.cz
Prague Patchwork Meeting www.praguepatchworkmeeting.com
The Quilters' Guild www.quiltersguild.org.uk
Royal School of Needlework www.royal-needlwork.org.uk
Royal Society for the Protection of Birds (UK) www.rspb.org.uk
Sissinghurst Castle Garden, Kent, UK www.nationaltrust.org.uk/sissinghurst-castle-garden
Sussex Prairies Garden, West Sussex, UK www.sussexprairies.co.uk
World of Threads Festival, Canada www.worldofthreadsfestival.com
The Women's Library Reading Room, at the London School of Economics Library, London www.lse.ac.uk/library/collections

Fabric collage books by Anne Kelly.
### Further reading

Bourgeois, Louise, _Stitches in Time_ (August Projects/MOCA, 2003)
Bowles, Melanie, _Digital Textile Design_ (Lawrence King, 2012)
Brodie, Antonia, _V&A Pattern: Garden Florals_ (V&A Publishing, 2010)
Cleeves, Tim and Holden, Peter, _RSPB Handbook of British Birds_ (Bloomsbury Natural History, 2014)
Flint, India, _Eco Colour: Botanical dyes for beautiful textiles_ (Murdoch, 2008)
Haxell, Kate, _The Stitch Bible_ (David & Charles, 2012)
Holmes, Cas and Kelly, Anne, _Connected Cloth_ (Batsford, 2013)
Howard, Constance, _The Constance Howard Book of Stitches_ (Batsford, 1979)
Scott, Rebecca, _Samplers_ (Shire, 2009)
Tellier-Loumagne, Françoise, _The Art of Embroidery_ (Thames & Hudson, 2006)

### Suppliers

### UK

George Weil, Old Portsmouth Road, Peasmarsh, Guildford, Surrey GU3 1LZ, 01483 565800, www.georgeweil.com
Seawhite, Avalon Court, Star Road Trading Estate, Partridge Green, Horsham, West Sussex RH13 8RY, 01403 711633, www.seawhite.co.uk
Colourcraft (C&A) Ltd, Unit 6, Carlisle Court, 555 Carlisle Street East, Sheffield S4 8DT, 0114 242 1431, www.colourcraftltd.com
Art Van Go, The Studios, 1 Stevenage Road, Knebworth, Herts SG3 6AN, 01483 814946, www.artvango.co.uk
Bernina UK, 91 Goswell Road, London EC1V 7EX, 020 7549 7849, info@bernina.co.uk

### Canada and USA

Textile Museum of Canada Shop, 55 Centre Avenue, Toronto, ON, Canada M5G 2H5, (416) 599-5321, www.textilemuseum.ca/shop/tmc-shop
PRO Chemical and Dye, 126 Shove St, Fall River, MA 02724, 1-800-228-9393, http://www.prochemicalanddye.com

### Australia

The Thread Studio, 6 Smith Street, Perth, Western Australia 6000, (61) 8 9227 1561, www.thethreadstudio.com

_Jersey_, folding sketchbook by Anne Kelly.
### Picture credits Photography by Rachel Whiting, with the exception of the following: Melanie Bowles page 42 (top), page 42 (bottom); Jane Churchill page 101; Jennifer Collier page 100; Melvyn Evans page 21; Hillary Fayle page 31; Carolyn Forster page 51; Catherine Frere-Smith page 72, page 73; Caren Garfen page 105; Cas Holmes page 20 (top); Val Holmes page 38, page 39; Nicola Jarvis page 69; Anne Kelly page 6, page 8, page 10, page 11, page 14, page 16, page 18, page 19, page 33, page 46, page 48 (top), page 52, page 53, page 54, page 55 (top), page 58 (bottom), page 62 (centre), page 65; page 67; page 68 (top), page 70, page 77 (bottom), page 82, page 83 (top), page 84, page 85, page 92, page 97, page 112, page 114; Gabi Mett page 61; Alison Milner page 43; Judith Mundwiler page 56; Carol Naylor page 20 (bottom); Jane Nicholas page 63; Nancy Nicholson page 45; Emma Nishimura page 13; Emily Notman page 41; Lesley Patterson-Marx page 81; Louise Pettifer page 37 (top), page 37 (bottom), page 94; D. Ramkalswon page 110; Royal School of Needlework page 96; Suzette Smart page 71; Maxine Sutton page 44; Karen Suzuki page 74; Lindsay Taylor page 83 (bottom); Kim Thittichai page 25; M. West page 80; Michael Wicks page 30; Meredith Woolnough page 15. Small mixed-media canvasses by Anne Kelly. 
### Index

Bags; Barker, Deborah; Birds (as image; three-dimensional); Bourgeois, Louise; Bowles, Melanie; Chelsea Fringe; Churchill, Jane; Coates, Lesley; Collage; Collier, Jennifer; Dickerson, Anna; Dyeing (chemical; natural); Embroiderers' Guild; Embroidery (appliqué; Chinese; free motion; hand; Irish; machine; stump work); Evans, Melvyn; Fayle, Hillary; Fibreworks, the; Flowers (as image); Folding books; Forster, Carolyn; Folk art; Fox, Alice; Frere-Smith, Catherine; Gardens (Great Dixter; Sissinghurst Castle; Sussex Prairie); Garfen, Caren; George, Lizzie; Healthcare groups, work with (Hospice in the Weald; MTW Hospital Trust); Head, Justine; Holmes, Cas; Holmes, Val; Insects (as image; bees); Jarvis, Nicola; Kent, Angela; Leaves (as image; embroidered); Li, Carmen; MacCurrach, Rosie; Mairet, Ethel; Mett, Gaby; Milner, Alison; Mixed media; Mundwiler, Judith; Museums (All-Russian Decorative, Applied and Folk Art; Booth Museum of Natural History; Ditchling Museum of Art and Craft; McCord Museum, Montreal); Nature blocks; Nature table; Naylor, Carol; Needlecase; Newson, Jenifer; Nicholas, Jane; Nicholson, Nancy; Nishimura, Emma; Notman, Emily; O, Coei; Organic shapes; Ott, Helen; Paper cut; Patterson-Marx, Lesley; Pettifer, Louise; Prague Patchwork Meeting; Printing (block; digital; etching; flocked; lino; Gelli®; linocut; mono print; silkscreen; stencil); Quilting; Research collections (Goldsmiths Textile Collection; LSE Women's Library; Royal School of Needlework); Rich, Leisa; RSPB, work for; Setterington, Lynn; Shaw, Lucy; Shed studio; Sketchbooks; Smart, Suzette; Sutton, Maxine; Suzuki, Karen; Taylor, Lindsay; Thittichai, Kim; Trees; Verrinder, Pauline; Vintage (books; embroidery); Weaving; Woolnough, Meredith; World of Threads Festival

First published in the United Kingdom in 2016 by Batsford, 1 Gower Street, London WC1E 6HD, an imprint of Pavilion Books Company Ltd

Copyright © Batsford, 2016
Text © Anne Kelly, 2016

The moral rights of the author have been asserted. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the copyright owner.

eISBN: 978-1-84994-406-9

This book can be ordered direct from the publisher at the website: www.pavilionbooks.com, or try your local bookshop. Distributed in the United States and Canada by Sterling Publishing Co., Inc., 1166 Avenue of the Americas, 17th Floor, New York, NY 10036
Q: Get value from input box added by script

It will be easier if you can see it. On my website, Loloid, there is an input box where the user writes his username. In the menu the user chooses the region where he plays, and when he does, a hidden input holding that region is added to the form. I have a PHP file where I save the username with $_POST, but it doesn't work for the region. Here is my code; I have no idea how to save the region with $_POST:

    // .Value are classes for regions
    $('.Value').click(function(e){
        var value = $(e.target).text();
        $('.hiddenRegion').html(value);
    });

Here is the HTML:

    <form method="post" action="lol.php" id="form">
      <ul>
        <li class="field">
          <input class="input" type="text" autocomplete="off" placeholder="Write a summoner name" name="summonerName" id="summonerName"/>
          <a id="formSubmit"><i class="icon-search"></i></a>
          <input type="hidden" name="hiddenInput" class="hiddenRegion"></input>
        </li>
      </ul>
    </form>

And the PHP:

    $region = $_POST['hiddenInput'];

When I print $region with echo, it returns just a blank string.

A: Input tags never have </input> at the end, and you get and set an input's value in JavaScript using the .value property, or in jQuery with the .val method. So this should fix your problem:

    $('.Value').click(function(e){
        var value = $(e.target).text();
        $('.hiddenRegion').val(value);
    });

A: You are setting the input control's value, not its HTML. In order to set or get input control values, use val(). It should be

    $('.hiddenRegion').val(value);

and not

    $('.hiddenRegion').html(value);

To read the value back, use $('.hiddenRegion').val().

A: You also have another problem: a new hidden input is being generated every single time you select a region before clicking search.
    <li class="field">
      <input class="input" type="text" autocomplete="off" placeholder="Write a summoner name" name="summonerName" id="summonerName">
      <a onclick="$('#form').submit();"><i class="icon-search" style="top: -9%;"></i></a>
    </li>
    <input type="hidden" class="hiddenRegion" name="hiddenRegion" style="display: none;"></input>
    <input type="hidden" class="hiddenRegion" name="hiddenRegion" style="display: none;"></input>
    <input type="hidden" class="hiddenRegion" name="hiddenRegion" style="display: none;"></input>
### ScarletS's blog

https://codeforces.com/blog/entry/109438

By ScarletS, 2 months ago,

Thank you for participating in our contest! We hope you enjoyed it. Implementations will be added soon (when Codeforces lets authors submit solutions!).

Please let us know what you thought of the problems by voting!

1758A - SSeeeeiinngg DDoouubbllee

Hint
Solution
Implementation (C++)
Implementation (Java)
Implementation (Python)
Feedback

1758B - XOR = Average

Hint
Solution
Implementation (C++)
Implementation (Java)
Implementation (Python)
Video Editorial
Feedback

1758C - Almost All Multiples

Hint
Solution
Implementation (C++)
Implementation (Java)
Implementation (Python)
Video Editorial
Feedback

1758D - Range = √Sum

Hint
Solution
Implementation (C++)
Implementation (Java)
Implementation (Python)
Video Editorial
Feedback

1758E - Tick, Tock

Hint
Solution
Implementation (C++)
Implementation (Java)
Implementation (Python)
Feedback

1758F - Decent Division

Hint
Solution
Implementation (C++)
Implementation (Python)
Feedback

• +66

» 2 months ago, # | -93 stupid constructive problems
• » » 2 months ago, # ^ | +5 Stupid?? How?
• » » 2 months ago, # ^ | ← Rev. 2 → +82 we did inform you in the announcement, didn't we?
• » » » 2 months ago, # ^ | +45 omg saarang comment
• » » » » 2 months ago, # ^ | 0 omg @saarang comment
• » » » » 2 months ago, # ^ | 0 omg saarang comment
• » » » 2 months ago, # ^ | +1 I think you should still try to vary the problems a bit. Making a round where more than half of the problems are constructive doesn't make any sense.
Maybe you could as well make a math contest.
• » » » 2 months ago, # ^ | -49 Since when did we start taking announcements seriously? you also hinted that there may be an interactive problem, but I can't find any. Maybe next time keep your constructive problems for your mom.
• » » » » 2 months ago, # ^ | -6 do you know what atmost 1 means just asking
• » » » » 2 months ago, # ^ | +12 skill issue
• » » 2 months ago, # ^ | ← Rev. 2 → +8 constructive algorithms and greedy are the part of problem-solving, They are not stupid.
» 2 months ago, # | +8 wow thanks for the quick editorial
» 2 months ago, # | +10 Very good round (even though I lost rating :cri:)
» 2 months ago, # | ← Rev. 3 → +12 I am proud to First Solve B today, it's my first time having such great experience. On D, I did not divide cases; instead I thought about two pointers. I set $L=\min$ and $R=\max$, and the total sum as $\frac{L+R}{2} \cdot n$. And then, I advanced $R$ and $L$ until I could find an answer. For the exact method, please see my accepted submission 182519051. I am yet not sure how this method really works, but it did. Can anyone provide a formal proof on why this works?
• » » 2 months ago, # ^ | -52 I C how your solution for B matches the same solution on YouTube and more over your template and Language keep on changing with every other problem. Do you feel guilt? Shit anyway.
• » » » 2 months ago, # ^ | ← Rev. 2 → +48 I C how your solution for B matches the same solution, but I was literally first solve.
I can't be copying anyone else if I'm the first one to solve LOL
UPD: proof
• » » » » 2 months ago, # ^ | 0 That me on that pic??!?!?!
• » » » » » 2 months ago, # ^ | +13 Yep, looks like you're there, though I didn't really intend on targeting anyone specifically in the screenshot.
• » » » » » » 2 months ago, # ^ | 0 I just was too excited that I'm one of the first who solved B
• » » » » 2 months ago, # ^ | +3 Cool I was 5th apparently
» 2 months ago, # | ← Rev. 2 → +21 Here's a simple solution for D. For example, if $n = 5$, we use $[2, 3, 4, 5/6/7/8/9, 10]$. We can add 1 to all numbers to change the sum without changing the LHS. To change the value $\mod n$, we can choose the correct integer in the 4th location. Using this we can get all integers above some value. And the square of max - min is provably larger than the current sum. So it works.
» 2 months ago, # | -10 Nice problemset!! got +ve delta :)..stuck on D though!
» 2 months ago, # | 0 I hate my life
» 2 months ago, # | +5 C was amazing. Took me 40 minutes but made that..
• » » 2 months ago, # ^ | ← Rev. 4 → 0 Can any one point out the mistake I have done: as x was missing from its original position, I am trying to place the next multiple (i.e. x2) at that place, and repeating the same for the next place.
Code:

    void chal() { ll n,x; cin>>n>>x; vector aa(n+1,0); Fo(i,1,n+1){ aa[i]=i; } aa[1]=x; aa[n]=1; ll j=x; while(2*j<=n){ aa[j]=2*j; j=2*j; } if(n!=j){ cout<<-1<

• » » » 2 months ago, # ^ | 0 You have to check for the factors of n and then iterate over the factors to replace the xth element with the next factor of n. Check my code and you will understand. Code
• » » » 2 months ago, # ^ | +3 Video Solution for Problem C. Hint: the answer is impossible if n is not divisible by x. Start with the identity permutation 1,2,3,...,n. Now we know p[1] = x, p[n] = 1, so p[x] = n. So we get x,2,3,...,x-1,n,x+1,...,n-1,1. The only task left is to make this lexicographically smaller. Now elements from 2,3,...,x-1 are fixed, so to make this lexicographically smaller, which element can you swap?
• » » » 2 months ago, # ^ | ← Rev. 2 → 0 hey i am also doing the same (and getting WA)... what did you find wrong in this logic? 182912947
• » » » » 2 months ago, # ^ | 0 You should see the example given in the editorial. You will get it. By my logic, there is no answer, but there is an answer. But as per the editorial, place n at x and then increase the size, which also seems to be logical. Spoiler:

    void chal() { ll n,x; cin>>n>>x; if(n%x!=0){ cout<<-1< aa(n+1); aa[1]=x; aa[n]=1; Fo(i,2,n){ aa[i]=i; } if(x!=n){ aa[x]=n; } debug(aa); ll y=x; for(ll j=x+1;j

• » » » » » 2 months ago, # ^ | 0 thanks.. i got the tc the above logic is failing on:
Input: 1
12 4
Output: 4 2 3 8 5 6 7 16 9 10 11 1
Answer: 4 2 3 12 5 6 7 8 9 10 11 1
• » » » 2 months ago, # ^ | ← Rev. 2 → 0 Hey hydra_cody, can you tell me what is wrong with this logic? I also use the same logic you described above (i.e., x2).
Here is my code - 184297340
• » » » » 7 weeks ago, # ^ |   0 Take a look at Ticket 16556 from CF Stress for a counter example.
• » » » » » 7 weeks ago, # ^ |   0 Thanks, I found my mistake and submitted successfully.
 » 2 months ago, # | ← Rev. 2 →   +46 My solution for D: 182515472. Let's assume that the minimum element is $x$ and the maximum element is $y$. Then we have a lot of freedom to change the elements in the middle without changing the range. The smallest sum is when the array is $[x, x+1, x+2,\ldots, x+n-2, y]$ and the largest sum is when the array is $[x, y-n+2, \ldots, y-2, y-1, y]$. We can achieve any sum between these extremes using a greedy algorithm. Start with the array with the smallest sum, visualize the elements as points on a number line, and move elements to the right one by one, where you move each as far as possible without making the sum exceed the target. So if we have values for $x$ and $y$ such that it is possible, we can construct one possible array with the above approach. Now, how do we choose values of $x$ and $y$ for each possible $n$? Let's assume we know $x$. Then we can iterate $y=x+n-1, y=x+n, y=x+n+1,\ldots$ until one of them is valid.
We just test validity by making sure the range squared $(y-x)^2$ is between the minimum possible sum $(n-1)x+y+(n-2)(n-1)/2$ and the maximum possible sum $(n-1)y+x-(n-2)(n-1)/2$. I assumed that it will always work when $x=2n$, and I assumed that $y(n) \le y(n+1)$ so that I can use two pointers in $O(n)$ time instead of iterating every possible $y$ for every possible $x$, which I believe would be $O(n^2)$.
• » » 2 months ago, # ^ |   0 I have iterated over every possible $x$ and every possible $y$ here. https://codeforces.com/contest/1758/submission/182516630
 » 2 months ago, # |   +8 Auto comment: topic has been updated by manish.17 (previous revision, new revision, compare).
 » 2 months ago, # |   0 This is my first time I get a standing in the top 500. As a pupil, I think this contest is easier than the previous div 2 contests. Anyone think so?
• » » 2 months ago, # ^ |   0 Probably because the problems were constructive, which comes easier for some people
 » 2 months ago, # |   0 Putting 69 in B's 1st test case answer was intentional.
 » 2 months ago, # |   +11 E is a really nice problem
 » 2 months ago, # |   0 In problem B, for n=4, why can the answer not be 1 1 3 2? ScarletS
• » » 2 months ago, # ^ | ← Rev. 2 →   +12 The average of 1, 1, 3, 2 is 7/4. The XOR of 1, 1, 3, 2 is 1 != 7/4
• » » » 2 months ago, # ^ |   0 XOR of 1,1,3,2 = 1
• » » » » 2 months ago, # ^ |   +6 I fixed it about 4 minutes before you replied...
• » » » 2 months ago, # ^ |   0 Got it Bro, but in the query clarification section you guys said that we have to take real number calculation, acc. to which 7/4==1; that's the only confusion I had.
Btw, Thanks again for clarifying here.
• » » » » 2 months ago, # ^ |   +3 7/4 is a real number :)
• » » » » » 2 months ago, # ^ |   0 Oh shit :(, I just overthink, Btw Thanks once again.
 » 2 months ago, # |   +15 My solution for D https://codeforces.com/contest/1758/submission/182529333 Let's say you want to make the sum of all the numbers (2*n)^2; then on average all the numbers should be 4*n. Now you want max - min as 2*n: take max as 5*n and min as 3*n. Now if you see, all the remaining numbers still average out to 4*n. If n is even, distribute the numbers like 4*n-i, 4*n+i, and the same for odd, just 1 number would be 4*n
• » » 2 months ago, # ^ |   0 i did the same approach.
• » » 2 months ago, # ^ |   0 If there's any ans/ approach if I want to take the sum of the nums as n^2?
 » 2 months ago, # |   +1 My solution for C: we can think of a permutation as follows X,2,3..__....,n-1,1 ('__' denotes there is no element on the xth index). Here at the first index there is x and at the last index there is 1, and all other indices except the xth index are filled with permutation[i] = i, i != x. Here only the x index has no element in it, and we have not placed n in our list, so we have to place n as far right as possible. So we can just start iterating from the (x+1)th index to the (n-1)th index and see if n can be placed at that index and this index can be moved to the xth index; if yes then we change x to j. For example n = 12 and k = 2, list = {2,empty,3,4,5,6,7,8,9,10,11,1}. Now we can see that for the 4th index, you can move 4 to the empty place, list = {2,4,3,empty,5,6,7,8,9,10,11,1}, and at the empty place you can put n = 12; now you cannot shift the empty position to the right, so just place n over there: list = {2,4,3,12,5,6,7,8,9,10,11,1}. Note: You will have to take care of other edge cases as well!
My submission : 182552063
 » 2 months ago, # |   0 what's wrong with my solution for C here?? I can't figure out
• » » 2 months ago, # ^ |   0 failed test-case: 1 20 2 your output: 2 4 3 10 5 6 7 8 9 20 11 12 13 14 15 16 17 18 19 1 correct output: 2 4 3 20 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 1
 » 2 months ago, # |   -23 Four constructive tasks... If you can't come up with normal tasks, why are you doing a contest?
 » 2 months ago, # |   0 All the submissions I made for problem C in the contest got WA. After the contest ended I submitted one and got accepted :(
 » 2 months ago, # |   0 My solution for C #include #define int long long int void solve(){ int n, x; std::cin >> n >> x; std::vector v(n+1); std::iota(v.begin(), v.end(), 0ll); v[x] = n; v[1] = x; v[n] = 1; int i = x+1, j = x; while(i < n){ if(v[j]%i == 0 && i%j == 0){ std::swap(v[i], v[j]); j = i; } i += 1; } bool ok = true; for(int i=1; i> t; while(t--){ solve(); } } If p[1] != n and p[1] = k, p[k] = n, then try to place n as far as possible. If p[1] = n then no need
 » 2 months ago, # |   0 thanks for the fast editorial and for putting in hints.
 » 2 months ago, # | ← Rev. 4 →   0 My approach for task D: Assume we construct our initial sequence as M, M+2, M+4, .., M+2*(N-1). Difference between endpoints = d = 2*(N-1). Now let's define the Deficit as: sum - d*d. deficit = M*N + N*(N-1) - 4(N-1)^2. We can observe that the deficit increases with increase in M, and it increases in multiples of N. So we can binary search for the point where the deficit first becomes negative. At this point, the absolute value of the deficit is <= N. So we can cover the deficit by shifting the in-between elements by 1.
• » » 2 months ago, # ^ |   0 Your name justifies you
 » 2 months ago, # |   +3 The bound in the editorial for F is indeed quite loose.
A simple way to improve it: To remove something we must have added it first, so operations are at most 2*(number of added intervals). Case 1 adds one interval, case 2 adds two intervals; however, case 2 operations can be at most half of the operations (since they pair with a corresponding case 1 operation). Therefore the described solution will do $3n$ operations at most.
 » 2 months ago, # |   0 another opportunity to feel dumb
 » 2 months ago, # |   0 Can anyone please point out the mistake I have done? 182566417
• » » 7 weeks ago, # ^ |   0 Take a look at Ticket 16557 from CF Stress for a counter example.
 » 2 months ago, # |   0 Can someone please explain why this solution 182516763 works for E?
 » 2 months ago, # |   -17 Welcome to ConstructForces. I love it.
 » 2 months ago, # |   0 another solution for problem D. For even n: same as the official solution, the answer is [n-n/2, n-n/2+1, ... ,n-1, n+1, ... ,n+n/2-1, n, n/2+1]. For odd n: construct y = n*4; the answer is [y-n, y-((n-1)/2)-1, ... , y-2, y-1, y, y+1, y+2, ... , y+(n-1)/2+1, y+n]. Notice the first and the last are special. For instance, 7: y = 28, answer is [21, 26, 27, 28, 29, 30, 35], the max - min = 14, sum = 196 = 14 * 14.
 » 2 months ago, # |   0 I really liked problem d. Although it took me every single cell in my brain to solve it, it was worth the effort
 » 2 months ago, # |   0 Why is the complexity O(nlogn) instead of O(n) in C?
• » » 2 months ago, # ^ |   0 Using a sieve (if used) .. complexity is O(nloglogn) in avg cases..
in worst cases it will be O(nlogn).. Feel free to correct!!
• » » » 2 months ago, # ^ |   0 You only need to factorize one number, which can be done in O(sqrt(n))
• » » » » 2 months ago, # ^ |   0 maybe the author's solution uses precomputation of the smallest prime factor of all numbers... just like in my soln 182512680
 » 2 months ago, # |   0 great round thanks for quick editorial.
 » 2 months ago, # | ← Rev. 2 →   +3 There's another quite interesting solution for problem B: $a_i=\begin{cases} 1, & i=1 \\ n+1, & i\in[2,n] \end{cases}$. In this situation, $XOR = Average = n$.
• » » 2 months ago, # ^ |   +8 Not true for n = 3: 1 4 4. Average is 3, XOR is 1.
• » » » 2 months ago, # ^ |   0 Oh sorry, to be more clear, that's the solution for even numbers. As for odd numbers, let $a_i=1$.
• » » » » 2 months ago, # ^ |   +16 No it's my bad, that should have been obvious. Maybe I shouldn't reply to CF comments while sleep deprived :)
• » » 2 months ago, # ^ |   0 Can you explain how?
my average is coming --> n+1 and XOR = 0, maybe im missing something
• » » » 2 months ago, # ^ |   0 that's the solution for even numbers.
• » » 2 months ago, # ^ |   0 How did you get the intuition for this?
• » » » 2 months ago, # ^ |   0 Well, Maybe $(a-1)(a+1)=a^2-1$ inspired me.
 » 2 months ago, # |   0 Very good problems for me, thanks a lot!
 » 2 months ago, # |   +6 For $D$, here is another solution. If $n$ is odd, we can let $[a_1,a_2,\dots, a_n]=[3n,4n-\frac{n-3}{2},\dots,4n-2,4n-1,4n,4n+1,4n+2,\dots,4n+\frac{n-3}{2},5n]$. If $n$ is even, we can let $[a_1,a_2,\dots, a_n]=[3n,4n-\frac{n-2}{2},\dots,4n-2,4n-1,4n+1,4n+2,\dots,4n+\frac{n-2}{2},5n]$. Then $\sum_{i=1}^n a_i=4n^2,\ \max-\min=5n-3n=2n$. 182509063
• » » 2 months ago, # ^ |   +14 wow thx an elegant and nice solution :D
 » 2 months ago, # |   0 Alternative Solution for D: Let req = (right - left)^2. For a section of n elements spread over a range of length len (len = max - min) and beginning from 1, we can find the min and max possible values that can be obtained by shifting values in this range. Eg: len = 5, n = 3. min = [1, 2, 5] = 8, max = [1, 4, 5] = 10. We run an infinite loop for satisfying the condition min <= req <= max. It can be easily proved that if this condition is satisfied, we'll always be able to find an appropriate solution for req. Now, if max < req, then we'll need to 'boost' (add an offset to all elements), as that is the only way of satisfying the equation. My Solution
• » » 2 months ago, # ^ |   +3 PS: The infinite loop is only for avoiding extra case work. The equation should be satisfied in only a couple of iterations.
 » 2 months ago, # |   0 what's wrong with my solution for C 182586576 ..
i am not able to figure out
• » » 2 months ago, # ^ |   0 Hey buddy, if you try this test case: 1 16 2 your code will produce this result: 2 4 3 16 5 6 7 8 9 10 11 12 13 14 15 1 which is not optimal, as you still have to swap between 8 and 16 to get the minimal permutation. It is not enough to put some number m in place of x and put n in place of m; you may have to do the same process multiple times. In the test case above, for example, you need to put 4 in place of 2, 8 in place of 4 and 16 in place of 8 to get the right answer: 2 4 3 8 5 6 7 16 9 10 11 12 13 14 15 1 I hope I was helpful :D
 » 2 months ago, # |   0 182497978 Can anyone explain what i did wrong on Question C Almost All Multiples
• » » 7 weeks ago, # ^ |   0 Take a look at Ticket 16558 from CF Stress for a counter example.
 » 2 months ago, # |   0 This contest was amazing! Thanks to everyone who contributed to its preparation :D
 » 2 months ago, # | ← Rev. 2 →   0 Guys, you need to improve a lot in editorial language. All I read in the C editorial was "Ohh, it must be something high level; I'm not up to this problem." The solution to C is very simple and intuitive. The basic intuition is just to check if a number is already taken, and if taken, check for multiples which are divisible by n, and if yes, take that multiple and move on. It's just that simple. I am not good at writing answers, but man, as a newbie, reading your answers makes me feel like maybe this problem is too tough for me. Take it as constructive and positive criticism. I love the work the Codeforces Team is doing and am grateful for it.
 » 2 months ago, # | ← Rev. 4 →   0 Here is my approach for D - https://codeforces.com/contest/1758/submission/182611450 The explanation is commented out in the code.
 » 2 months ago, # |   0 What's wrong in my solution for C ?
here anyone can help??
• » » 7 weeks ago, # ^ |   0 Take a look at Ticket 16559 from CF Stress for a counter example.
 » 2 months ago, # |   0 Short greedy solution for Problem C: https://codeforces.com/contest/1758/submission/182634632
 » 2 months ago, # | ← Rev. 2 →   0 For Problem D, I am doing binary search on the range, and for a fixed range we have multiple options [1,range], [2,range+1], [3,range+2] ... 1) For every option we need to take only n-2 elements (excluding min and max, because min and max are fixed as the range is fixed) 2) Now for picking n-2 numbers, I am taking the min sum possible using n-2 numbers and the max sum possible using n-2 numbers 3) min sum is picking the first n-2 numbers (excluding the first, which is reserved for min), and similarly max sum is picking the last n-2 elements (excluding the last) 4) Now we get a range of possible sums [MIN,MAX]: MIN -> min sum, MAX -> max sum for each option ([1,range], [2,range+1]) etc.. 5) For any option, if range*2 falls into [MIN,MAX] then we found a solution 6) But we can't check every option linearly; can we do binary search again on the array of options which I described above? 7) can someone who followed similar lines share your code, or is the approach not feasible?
 » 2 months ago, # |   0 Another solution for D: Start the process from the 1 2 3 4 5 ... array. At each step, we increase either all elements by 1, or the last element by 1, depending on which part of the equality is smaller. After we have increased the maximum at least once, we get the space to adjust the sum by value = max_cnt_up * (n-2) (by increasing the elements in the middle).
If this value is enough, we stop the algorithm. There can be problems only with small n; for example, if n=2 we do not have the opportunity to adjust the sum at all, but fortunately the algorithm converges.
 » 2 months ago, # |   0 1758C - Almost All Multiples 182522546 Please can anyone help me to figure out why my solution is giving the wrong answer? Thank you in advance.
• » » 7 weeks ago, # ^ |   0 Take a look at Ticket 16560 from CF Stress for a counter example.
 » 2 months ago, # | ← Rev. 3 →   0 for question D, the sqrt(sum) question. My solution for the even case is the same, but for the odd case I did this: my sequence will be in the form of n/2+1, n+3, n+3, n+2, n+2, n+2, n+2, ..., 3*n/2+2; its sum is equal to n^2 + 2n + 1. proof: i did some maths and made sure that 3*n/2+2 >= n+3 for all n (>=2 stated in the question). max-min = n+1. 2*(n+3) + (n/2+1 + 3n/2+2) + (n-4)(n+2) = 4n+9 + n^2-2n-8 = n^2+2n+1. someone pls tell me why am i wrong xd
 » 2 months ago, # |   0 I get the solution for B but can someone please explain the thinking process that led them to this solution?
 » 2 months ago, # |   0 can someone help me find the mistake in my logic for question C. Logic: I am storing all the numbers from 2 to n except x in an unordered map, and then I am writing a for loop and checking if the number equal to idx is present in the map or not.
if it is not present then I am checking for the presence of (idx*2) and also checking if it divides n or not. If it does not divide, and if idx divides n, then just put n at position idx.
• » » 7 weeks ago, # ^ |   0 Take a look at Ticket 16561 from CF Stress for a counter example.
 » 2 months ago, # |   0 1758C - Almost All Multiples - implementation(C++) and implementation(Python) are equal, i mean the links are the same, can u correct it pls, thanks for the blog btw!!
• » » 2 months ago, # ^ |   0 Fixed.
• » » » 2 months ago, # ^ |   0 ty!!
 » 2 months ago, # |   0 my submission D: 182975484 case n=2 (1 3 or 6 10...) case n>2: create an array of n odd numbers (1, 3,...,2n-1), add 2 to the last element so that max-min=2n; the array will become (1,3,...,2n+1). (1+3+...+(2n-1)) = n^2, so the sum of the above array is n^2+2. (max-min)^2=4n^2, so the missing part of the above array is 3n^2-2: add to the first and last elements of the array 3*n-1, and to the elements between, 3*n, and we get the required array
 » 2 months ago, # | ← Rev. 2 →   0 For problem E, I have doubts related to the sample input. In the question, for the sample row=2 col=3 hour=4 1 0 - 1 -1 -1 2 it's mentioned that the following is the configuration. Can anyone tell me how this configuration is reached? For the first sample, this is a possible configuration for the clocks: 1 0 3 0 3 2 Any help will be appreciated. Thanks.
• » » 2 months ago, # ^ |   0 Perhaps you are misunderstanding the problem. The problem is about counting the number of ways to replace each of the $-1$s with integers in the range $[0, h - 1]$ such that the configuration is solvable.
 » 2 months ago, # | ← Rev. 2 →   0 In D just write the numbers from 1 to n, then increase n by (n-1), then see how much more is required to get (max-min)^2; let it be K. So first add K/n to all numbers, but there is still K%n left. That's not a problem: just add it to the second-last number and it won't create a collision or remove distinctness, as at the beginning we freed up to n-1 spaces by increasing the last number (n) by n-1 XbirCode of My Solution
 » 7 weeks ago, # |   0 Easy Solution for problem D https://codeforces.com/blog/entry/109416?#comment-975682
 » 7 weeks ago, # |   +5 Nice B question. Nicely explained, good approach.
 » 7 weeks ago, # | ← Rev. 4 →   0 Alternative approach to problem E: Set up an n x m bipartite graph, G. There is an edge from the ith LHS vertex to the jth RHS vertex of weight w iff there is a clock at (i, j) with value w. An edge also exists in the opposite direction with weight -w. Set up a UFDS for G as well. Perform dfs on G using the same logic of finding contradicting modulos.
If so, just output 0. Now iterate through all pairs in G; if the ith LHS vertex isn't already in the same UFDS set as the jth RHS vertex, there are exactly h possible clocks you can now insert at cell (i, j), then union these two sets.
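Several comments in the thread give closed-form constructions for problem D (n distinct positive integers whose sum equals the square of max - min). The 3n..5n construction quoted in the thread can be sanity-checked with a short script. Python is used here purely for illustration (the actual submissions linked above are C++), and `construct` is a hypothetical helper name:

```python
def construct(n):
    """Sketch of the 3n..5n construction quoted in the comments:
    n distinct positive integers with sum == (max - min)^2 == 4n^2.
    Assumes n >= 2, as in the problem statement."""
    if n % 2 == 1:
        k = (n - 3) // 2
        # n - 2 middle values centred on 4n, including 4n itself
        middle = [4 * n + d for d in range(-k, k + 1)]
    else:
        k = (n - 2) // 2
        # n - 2 middle values around 4n, skipping 4n itself
        middle = [4 * n + d for d in range(-k, k + 1) if d != 0]
    return [3 * n] + middle + [5 * n]

# Check the claimed invariants for a range of n.
for n in range(2, 300):
    a = construct(n)
    assert len(a) == n == len(set(a))         # n distinct elements
    assert sum(a) == (max(a) - min(a)) ** 2   # sum == (2n)^2
print("ok")
```

For n = 7 this produces [21, 26, 27, 28, 29, 30, 35], matching the worked example in the "another solution for problem D" comment (sum 196 = 14 * 14).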
Q: tinymce does not load if upon loading the page I immediately click on the textfield

I'm using the tinymce-rails gem in my Rails 4.2 app.

Problem: When loading a page with a tinymce textfield, if I immediately click on that textfield (before the page has had a chance to load tinymce) then tinymce won't load. The textfield will then just be a regular textfield. So there seems to be a timing issue of some sort going on.

The problem particularly exists on my (somewhat slow) development server and if the textfield has a lot of text in it. But also on my faster production server, and when there's less text in the textfield, the problem is replicable (I just have to click faster). Is there a solution around this? It is not possible to replicate the problem with the demo on the tinymce website, so it must have something to do with my implementation.

My view/form:

<%= tinymce_assets %>
...Some text...
<%= form_for(@helptext, html: {class: "form-horizontal"}) do |f| %>
  <%= render 'shared/error_messages', object: f.object %>
  <%= f.text_area :description, maxlength: 250, class: 'form-control input-md', rows: 2 %>
  <%= f.text_area :content, required: true, class: "form-control input-md tinymce", rows: 10, cols: 120 %>
  <%= f.submit "Save changes" %>
<% end %>
<%= tinymce %>

In addition, I of course have gem 'tinymce-rails' in my Gemfile, and I've customized config/tinymce.yml to load with the menu I'd like.
Q: How to insert values into a junction/linking table in SQL Server?

I am piggybacking off this question regarding creating a junction/linking table. It is clear how to create a junction table, but I am concerned about how to fill the junction table with data. What is the simplest and/or best method for filling out the junction table (movie_writer_junction) with data between two other tables (movie, writer)?

CREATE TABLE movie (
    movie_id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
    movie_name NVARCHAR(100),
    title_date DATE
);

CREATE TABLE writer (
    writer_id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
    writer_name NVARCHAR(100),
    birth_date DATE
);

INSERT INTO movie VALUES
    ('Batman', '2015-12-12'),
    ('Robin', '2016-12-12'),
    ('Charzard, the movie', '2018-12-12')

INSERT INTO writer VALUES
    ('Christopher', '1978-12-12'),
    ('Craig', '1989-12-12'),
    ('Ash', '1934-12-12')

CREATE TABLE movie_writer_junction (
    movie_id INT,
    writer_id INT,
    CONSTRAINT movie_writer_pk PRIMARY KEY(movie_id, writer_id),
    CONSTRAINT movie_id_fk FOREIGN KEY(movie_id) REFERENCES movie(movie_id),
    CONSTRAINT writer_fk FOREIGN KEY(writer_id) REFERENCES writer(writer_id)
);

The final junction table is currently empty. This is a simple example, and you can manually fill the data into the junction table, but if I have two tables with millions of rows, how is something like this completed?

A: Hi, I'm guessing this relates to the fact that you can't rely on the identity columns being the same in different regions. You can write your inserts as a cross join from the 2 src tables:

Insert junc_table (writer_id, movie_id)
Select writer_id, movie_id
from writer CROSS Join movie
where writer_name = 'Tolkien'
and movie_name = 'Lord of the Ring'

This way you always get the correct surrogate key (the identity) from both tables.
It's pretty easy to generate a SQL statement for all your existing junction combinations using a bit of dynamic SQL.

Another approach is to use SET IDENTITY_INSERT ON - but this needs to be done when loading the 2 other tables, and that ship may already have sailed!
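The cross-join pattern in the answer can be exercised end to end in any SQL engine. Below is a minimal, purely illustrative sketch using Python's built-in sqlite3 (SQLite has no IDENTITY, so INTEGER PRIMARY KEY plays the surrogate-key role here); table and column names follow the question, and the data is made up:

```python
import sqlite3

# In-memory database with the question's schema, simplified for SQLite.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE movie (movie_id INTEGER PRIMARY KEY, movie_name TEXT);
CREATE TABLE writer (writer_id INTEGER PRIMARY KEY, writer_name TEXT);
CREATE TABLE movie_writer_junction (
    movie_id  INTEGER REFERENCES movie(movie_id),
    writer_id INTEGER REFERENCES writer(writer_id),
    PRIMARY KEY (movie_id, writer_id)
);
INSERT INTO movie (movie_name) VALUES ('Batman'), ('Robin');
INSERT INTO writer (writer_name) VALUES ('Christopher'), ('Craig');
""")

# Resolve surrogate keys through the natural keys, never by assuming
# the generated identity values line up across tables or environments.
con.execute("""
INSERT INTO movie_writer_junction (movie_id, writer_id)
SELECT m.movie_id, w.writer_id
FROM movie m CROSS JOIN writer w
WHERE m.movie_name = 'Batman' AND w.writer_name = 'Christopher'
""")

pair = con.execute("""
SELECT m.movie_name, w.writer_name
FROM movie_writer_junction j
JOIN movie m USING (movie_id)
JOIN writer w USING (writer_id)
""").fetchone()
print(pair)  # ('Batman', 'Christopher')
```

For millions of rows the same INSERT ... SELECT shape scales: drive it from a staging table of natural-key pairs joined to both base tables, rather than issuing one statement per pair.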
package com.mendeley.api.exceptions;

/**
 * Exception that is thrown when a paged request has already returned the final page.
 */
public class NoMorePagesException extends MendeleyException {
    public NoMorePagesException() {
        super("No more pages available");
    }
}
package org.apache.camel.rx.support;

import java.util.concurrent.ExecutorService;

import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;

import rx.Observable;
import rx.Subscriber;
import rx.functions.Func1;

public class EndpointSubscribeFunc<T> implements Observable.OnSubscribe<T> {
    private final ExecutorService workerPool;
    private final Endpoint endpoint;
    private final Func1<Exchange, T> converter;

    public EndpointSubscribeFunc(ExecutorService workerPool, Endpoint endpoint,
                                 Func1<Exchange, T> converter) {
        this.workerPool = workerPool;
        this.endpoint = endpoint;
        this.converter = converter;
    }

    @Override
    public void call(Subscriber<? super T> subscriber) {
        subscriber.add(new EndpointSubscription<>(workerPool, endpoint, subscriber, converter));
    }
}
# Some data I/O benchmarks in R

Author: Dr Chibisi Chima-Okereke Created: January 15, 2013 18:50:00 GMT Published: May 22, 2013 05:38:00 GMT

This blog post is an attempt to provide a look at some benchmarks for read/write times using data formats that can be read and written with basic R. In this exercise I have used "native R" and have not attempted to optimize it in any way regarding parallel processing or specialized packages that optimize data read/write speeds.

The file formats covered here are CSV, RData, dBase, and bin (binary) files. The interested R programmer should see the data i/o manual for more details. I attempted to use MySQL through RODBC, but the data set I used contained too many columns to be written, so I swiftly abandoned this idea. The thought of having to create a routine that splices the table down to hundreds of parts and recompiles them, only to tell me what I already know - the database will be the slowest - was not worth the time investment.

Machine specifications:

OS: Windows 7,
Architecture: 64 bit,
RAM: 16GB @1600 MHz,
CPU: i7-3612QM,
Hard disk type: Solid state

## The data set

The KDD cup is an annual data mining competition, where a data mining problem is set for anyone interested to take part. Kind of like an Olympic event for geeks! The reason I mention this is that the data set I have used comes from KDD 2009, where the competition was on churn modelling. Incidentally, Allan Englehardt has written an interesting article on churn modelling. 2 out of 5 chunks of the large training data set were used. This amounts to a table with 19,999 rows and 15,000 columns, almost 300 million items (the first row of the raw chunk1 contains the column names). As is, the raw 2 data chunks total 642 MB in size.
The first 14,740 columns are numeric variables, and the remaining 260 columns are categorical.

This is clearly a large data set, and worse still, it is very wide. I chose a hard data set on purpose: it is of a size where R starts to struggle with data manipulation and read/write operations, and its width makes it very unwieldy. Once you start trying to throw around data of this size and shape, it is apparent that you need to think differently about these operations.

One thing that R gets criticized for is its memory hungriness; this is quite true. Everything is stored in memory, and once the memory is allocated, it's pretty difficult to unallocate it, regardless of whether you rm() the object and gc() any number of times. This may be to do with the way operations are carried out and where data is left over. Part of the problem also is that adding gc() inside an iterative read function will slow down the process and defeat the purpose of a benchmark. I tried this but did not notice any change in the memory usage. But memory in R is an issue for another day.

## The Benchmarks

### Introduction

Note that in these benchmarks, before each read operation, R was restarted.

### CSV file I/O

The benchmarks start with reading CSV files using the read.csv() function in R. Amongst the methods that worked, this took the longest. Some people may or may not be surprised to hear this.
I wrote and read the data in a naïve way, not bothering to use any explicit chunking of the files, leaving the operation to the read.csv() function.

# Reading in the CSV file
system.time(mainTable <- read.csv(file = paste(path, "mainTable.csv", sep = "")))
# user system elapsed
# 191.63 2.14 193.80
# Writing the CSV file
system.time(write.csv(x = mainTable, file = paste(path, "mainTable.csv", sep = ""), row.names = FALSE))
# user system elapsed
# 424.43 5.53 430.45

The size on disk of the CSV file was 658 MB.

### dBase file I/O

Using this format required the foreign package. The size on disk of the file was 4 GB and the writing (and therefore the reading) could not be done to completion.

system.time(write.dbf(mainTable, file = paste(path, "mainTable.dbf", sep = "/")))
# user system elapsed
# 581.03 4.74 621.29
# There were 50 or more warnings (use warnings() to see the first 50)
warnings()
# Warning messages:
# 1: In min(x) : no non-missing arguments to min; returning Inf, ...
system.time(mainTable <- read.dbf(file = paste(path, "mainTable.dbf", sep = "/")))
# user system elapsed
# 5.84 0.78 8.76
# Does not return all the data!
dim(mainTable)
# [1] 19999 664

### RData/RDS format

The RData and RDS file formats are essentially the same, and no difference in read/write speeds was shown. The size of the file on disk is 70.6 MB.

# Reading RData format
system.time(load(file = paste(path, "mainTable.RData", sep = "/")))
# user system elapsed
# 10.63 0.44 11.06
# Writing RData format
system.time(save(mainTable, file = paste(path, "mainTable.RData", sep = "/")))
# user system elapsed
# 38.11 0.42 38.58

### Binary file format

The file was made up of categorical and numerical data, which both amount to being numerical. The binary data format can therefore be purely numerical.
Size of the data on the disk was 2.21 GB.

We first take the table of factors, and extract all the factor levels.

# This is a table of the factors
tempTable <- mainTable[,14741:15000]
# First create the mapping layer for the factors, and save factors as integers
factorMap <- lapply(1:260, function(x){
  factLevels <- levels(tempTable[,x])
  # Note the global assign
  tempTable[,x] <<- as.integer(tempTable[,x])
  if(x %% 10 == 0)print(x)
  return(factLevels)
})
mainTable[,14741:15000] <- tempTable

Now we create a file for each of the 15,000 columns.

# Writing binary files to folder
itemNames <- names(mainTable)
system.time(for(i in seq(along = itemNames))writeBin(mainTable[,i],
  con = paste(path, "nativeBin\\", itemNames[i], ".bin", sep = ""),
  endian = "little"))
# user system elapsed
# 2.90 6.13 12.67
# Saving the mapping layer
binLayer <- list("itemNames" = itemNames, "factorMap" = factorMap)
system.time(save(binLayer, file = paste(path, "binLayer.RData", sep = "")))
# user system elapsed
# 0.07 0.00 0.06

Now we can read the binary files back (after restarting R).
We read back the bin files into a list object, and bind it back into a data frame.

# The mapping layer
system.time(load(file = paste(path, "binLayer.RData", sep = "")))
# user system elapsed
# 0.03 0.00 0.03
# mainTable <- NULL
itemNames <- binLayer[[1]]
factorMap <- binLayer[[2]]
# Reading back bin files into a list
system.time(mainTable <- lapply(seq(along = itemNames), function(x){
  out <- readBin(con = paste(path, "nativeBin\\", itemNames[x], ".bin", sep = ""),
    endian = "little", what = ifelse(x < 14741, "numeric", "integer"),
    n = 19999)
  if(x > 14740){
    # Restore the factor levels using the mapping layer
    out <- factor(out, labels = factorMap[[x - 14740]])
  }
  return(out)
}))
# user system elapsed
# 6.11 2.71 9.77
# renaming the list
system.time(names(mainTable) <- itemNames)
# user system elapsed
# 1.06 0.59 1.65
# Binding the columns together
system.time(temp1 <- as.data.frame(mainTable[1:14740]))
# user system elapsed
# 4.23 1.04 5.27
system.time(temp2 <- as.data.frame(mainTable[14741:15000]))
# user system elapsed
# 0.03 0.00 0.04
system.time(mainTable <- cbind(temp1, temp2))
# user system elapsed
# 1.17 0.03 1.21
# 17.94s all together

## Summary

One of the main surprises from this exercise was that the RData format performed very well indeed. On more standard table dimensions, even up to 10 million rows but with fewer than 40 columns, I have seen the binary format read in a fraction of the time of RData. In this exercise I tried to write the binary file as a single string of numbers and reformat it back into a table of numbers and factors once back in R; however, R died every time I attempted the write operation. Even if the read/write time might be faster, the time required for formatting the data back into a table with the factors etc. would be prohibitive, if at all possible on my system, since extensive data manipulation such as folding a vector into a data frame and including the factor levels etc.
takes up rather a lot of memory.

There are many R packages that cater to large file formats; the High Performance Computing task view would be a good place to start investigating for those interested.
Q: mysql-cli and mysql-workbench sync not working I have a simple question. I created a new user in the mysql command line with the commands

create user user_name@localhost identified by 'password';
grant all privileges on DB_name.* to user_name@localhost;
flush privileges;

but when I open MySQL Workbench, there is only the root user in that application, even though the new user has been created in the mysql CLI. What's wrong?
Q: SwiftUI - How to size a view relative to its parent when the parent is inside a scrollview? I'm trying to dynamically size some views which end up being placed inside of a scrollview. Here is the simplest sample code I can think of:

struct RootView: View {
    var body: some View {
        ScrollView {
            VStack(alignment: .leading) {
                // More views above
                HStack(spacing: 16) {
                    MyView()
                    MyView()
                }
                .padding([.leading, .trailing], 16)
                // More views below
            }
        }
    }
}

struct MyView: View {
    var body: some View {
        VStack(alignment: .leading, spacing: 24) {
            Image("myImage")
                .resizable()
                .scaledToFill()
            VStack(alignment: .leading, spacing: 0) {
                Text("Text")
                OtherView()
            }
        }
    }
}

EDIT: I think really the main issue I'm having is regarding how to dynamically size each MyView inside of the HStack. If I wanted the Image in MyView to be sized to fill its width and grow vertically to maintain its aspect ratio, and then also size each MyView in RootView to be 40% of RootView's width, what is the best way to accomplish this? I've tried using GeometryReader but when it's nested inside the ScrollView, it causes the view it's used in to collapse in on itself. If I use it outside of the ScrollView, I'm effectively always going to be getting the screen width (in this application) which isn't always what I need. On top of that, imagine that MyView is nested deeper in the view hierarchy and not called directly from RootView, but rather one of its child views. Or better yet, imagine that RootView doesn't know it's rendering a MyView because the view is determined at runtime. To give a little context to anyone who is interested in some backstory, the app I'm trying to build is very modular in nature. The idea is that we really only have one "container view" struct that determines which views to render at runtime. We basically have a ScrollView in this container view and then any number of subviews.
I'm really struggling with why it seems so difficult to set a view's content dimensions relative to its parent; any assistance would be hugely appreciated.

A: The best way I can think of is using a GeometryReader view. Here is an example.

GeometryReader { geometry in
    RoundedRectangle(cornerRadius: 5)
        .frame(width: geometry.size.width * 0.8, height: geometry.size.height * 0.8)
}

Typically I use the GeometryReader as a "root" view and scale everything off of it, however you can place it inside of another view or even as an overlay to get the parent view size. For example:

VStack {
    GeometryReader { geometry in
        // Do something with geometry here.
    }
}

Check it out here.

A: If I understood your goal correctly, you just need to make the images resizable (which makes them fill the available space while taking the aspect ratio into account), like

VStack(alignment: .leading, spacing: 24) {
    Image("myImage")
        .resizable() // << here !!
        .aspectRatio(contentMode: .fill)
}
Melanomyza femoralis is a species of fly in the family Lauxaniidae.

References

Lauxaniidae
Articles created by Qbugbot
Taxa named by Hermann Loew
Insects described in 1861
System C: The Convergence of Food and Health We're here to talk about food system innovation. When we first started this conference, we talked about agtech, and as investors we also focus on healthcare and the convergence of healthcare and agriculture. Innovation is now being... iSelect Invests for Impact and Performance: 3 Case Studies At iSelect, we invest for impact AND performance. We are a research-driven organization dedicated to deep diligence and careful execution. We invest in companies that are addressing critical global issues, in large markets, and with a projected... WOULDA, COULDA, SHOULDA Missing Out is Always a Big Theil It's 2004. You're the first investor in a start-up social media company with 1M users and, you believe, a fair valuation. You garner a board seat and watch the rest... How iSelect is Investing with Impact Without Sacrificing Performance Impact investing has grown up. Originally associated with environmental and sustainability causes, the concept is now expanding to include diversity issues, social impact, healthcare and food access and more. Today, it's about investing in companies that are... Why the Fly-Over States are Worth More than a Downward Glance Intuitively we all know innovation is deflationary. Today, you probably wouldn't spend more than $100 on a terabyte of digital storage. Not that long ago, some companies were fitting entire floors of office buildings and spending a... COVID-19: A FOCUS ON FUNDAMENTALS COVID-19 has us questioning what a "new normal" will look like in terms of how we work, travel and invest. It has amplified pre-existing challenges and created new ones. One key lesson it has taught us across... Food is Health: How to Access Uncorrelated Growth in 2020 "Food is Health" has driven the iSelect investment process since our start in 2014.
Americans spend more than $1.5T each year on food and almost $2T on diet-related illnesses such as cardiovascular disease, obesity and type 2... Where Does Health Start? In the Ground Healthy soil is literally the foundation for a healthy food system and is a leading indicator for many key metrics around sustainability and ecological health. As a result, soil health is inextricably linked to plant, animal, human... What Will COVID-19 Mean for the Future of Healthcare? While there is very little good to be said about the ongoing COVID-19 situation, we can only hope that we recover from this pandemic while also making the healthcare and food systems better and stronger than they... Here's What's Coming Next in Cancer Diagnostics In 2019 over 600,000 people died from cancer in the United States alone. One of the primary drivers behind these high mortality rates for cancer is late diagnosis. For this reason, we're seeing increasing demand for noninvasive... Therapeutics and Inflammation: What's Coming Next? Many of the major medical conditions that we face in our lives share an underlying cause: chronic inflammation. Investing at the Nexus of Food and Health Almost three years ago, iSelect strategically refocused its investment thesis on identifying the synergies between food and healthcare. At the time, food and healthcare were two completely independent, siloed verticals. But, the fact is, the more that... Propagate Ventures: Planting New Revenue Streams for Farmers As a company dedicated to regenerative agroforestry solutions, Propagate Ventures works with farmers and land managers to design and install tree-crop systems that work in tandem with existing farm operations.... What the Borden Bankruptcy Tells Us About the Future of Food As a sailor, I learned a long time ago, it is always darkest before the dawn. Borden is bankrupt. And the dawn is coming. Innovation, Inflation and the New Digital Economy A decade ago, the U.S. economy was in a bad place. 
Still reeling from the financial crisis, 2009 saw ultra-low interest rates and the expectation that years of recovery would call for rates beyond 4% by 2020. What Researchers are Learning About the Biome Might Just Save Your Life… or Kill You Over Thanksgiving I overheard some parents talking about their children and dating. Apparently some young adults are now using 23andMe to determine compatibility before a date becomes a serious relationship. The parents mentioned an attempt to sneak... How Blood-Based Diagnostics are Changing Cancer Care iSelect hosts a Deep Dive webinar on a novel innovation topic on the first and third Wednesdays of each month at 9 a.m. central. Our most recent session focused on early cancer diagnosis using blood-based tools. Testing... Creating a Food System That Works (for Everyone) Innovation is what drives prices down. That's why we have TVs and computers that are both incredibly powerful and affordable, and why we have healthcare that is incredibly unaffordable. If food prices had risen at the... Why Bioavailability is Becoming the Next Buzzword in Agriculture iSelect hosts a Deep Dive webinar on a novel innovation topic on the first and third Wednesdays of each month at 9 a.m. central. Our most recent session focused on bioavailability. By improving the bioavailability... Why We Created iSelect The purpose behind what we do at iSelect is simple: we help investors and create great companies. When we started this organization, it was in reaction to a market opportunity that we saw -- the disconnect between... Is Nutrient Reuse the Future of Sanitation? What if the waste products we throw away every day could do more for us? What if it were possible to take organic waste, such as food scraps, and recycle it into fuel... Nutrition, Inflammation and How Our Bodies Stand up to Disease iSelect hosts a Deep Dive webinar on a novel innovation topic on the first and third Wednesdays of each month at 9 a.m. central. 
Our most recent session focused on nutrition's role in inflammation. Research... Filter Content: B2B Technology iSelect Approach 1401 S. Brentwood Blvd. ©2020 iSelect Fund | Your use of this website signifies that you accept iSelect's website terms and conditions of use and privacy policy.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,960
New Forest is a local government district in the county of Hampshire, England. In 2011 the district had a population of 176,462.

Towns
Fordingbridge
Lymington
New Milton
Ringwood
Totton and Eling

Other settlements
Ashurst, Bagnum, Bank, Bartley, Barton on Sea, Bashley, Beaulieu, Bisterne, Blackfield, Blashford, Blissford, Boldre, Bramshaw, Bransgore, Breamore, Brockenhurst, Brook, Brookheath, Broxhill, Bucklers Hard, Bull Hill, Burgate, Burley, Cadnam, Calshot, Copythorne, Crendell, Cripplestyle, Crow, Damerham, Dibden, Dibden Purlieu, East Boldre, East End, East Hill, East Martin, East Mills, Ellingham, Emery Down, Exbury, Fawley, Fritham, Frogham, Furze Hill, Godshill, Gorley Lynch, Hale Park, Hale Purlieu, Hale, Hangersley, Harbridge, Hardley, Hightown, Highwood, Hinton, Hordle, Hungerford, Hyde, Hythe, Keyhaven, Lepe, Linbrook, Linwood, Lopshill, Lower Daggons, Lyndhurst, Marchwood, Martin, Milford on Sea, Minstead, Netley Marsh, Ogdens, Pennington, Rockbourne, Sandleheath, Sopley, Sway, Turmer, Whitsbury, Winsor, Woodgreen.

References

Districts of Hampshire
Two people shot, 1 killed in Central City on New Year's Eve

Updated: 1:59 PM CST Dec 31, 2021

The New Orleans Police Department is investigating a triple shooting Friday afternoon that injured two people and killed another. NOPD said the shooting happened at the intersection of Martin Luther King Jr. Boulevard at South Prieur. A man arrived at an area hospital with a gunshot wound to the leg. Another victim was found at the scene suffering from a gunshot wound, according to NOPD. Police say a third victim suffered a graze wound in the incident. One of the victims was declared dead shortly after the shooting, according to NOPD.
\section{Introduction} It is generally accepted that molecular clouds are the birth places of stars \citep[and references therein]{MO2007}. In the classic scenario \citep{Shu1987}, pre-star-forming molecular clouds are spherically layered structures with the molecular, atomic and ionized gas phases assumed to be dominant from the inside to the outside. Various molecular tracers have been used to trace $\rm{H}_{2}$, the main content of molecular clouds, e.g.\ [C I] \citep{Pin2017,Val2018,Oka2019}, [C II] \citep{Tang2016,Zan2018,Ryb2019}, OH \citep{DWM2019,EA2019,Tang2017}, with CO being one of the most widely used \citep[e.g.][]{HD2015,Gen2015,MWISP2019}. By assuming a fixed dust-to-gas ratio, FIR and millimeter continuum observations can be used to indicate the total gas content, including atomic and molecular components \citep[e.g.][]{Bot2007,GR2014,Lenz2017}, although there could be a bias from an inaccurate assumption of the dust-to-gas ratio. The 21cm line is generally used to trace the atomic hydrogen (HI) component and is considered to be optically thin in most situations \citep{GH1988}. Recombination lines as well as centimeter continuum are often used to trace the ionized gas component \citep{Ind2009}. While these tracers are able to depict the general picture of the different phases of gas, they have obvious weaknesses. CO and other molecules only trace $\rm{H}_{2}$\ above certain densities and extinctions, and their abundance can be easily biased by local metallicity \citep{Mad2016}. The excitation temperature and optical depth cannot be simultaneously determined from a single line, so the column density can easily be underestimated, even for the HI 21cm line \citep{Ber2008,Hei2003b,Dic2003,ST2004,Dic2009}. Deriving the total amount of cold HI gas by analyzing self-absorption features of the 21cm line is feasible, but is complicated by confusion due to multiple components \citep{RC1972,Gib2000,Kav2003,MCG2006,Den2018}.
More sophisticated approaches to the analysis of HI self-absorption have been developed during the past decade: \citet{LG2003} proposed the concept of HI Narrow Self-Absorption (HINSA) to refer to the HI self-absorption features associated with cold HI gas mixed in molecular cores, following the discovery of narrow HI absorption features coinciding with OH emission lines in a number of Galactic clouds. These authors derived the column density of cold HI gas indicated by HINSA features. By constructing a time-dependent molecular cloud formation model in which the rate of transformation of HI to $\rm{H}_{2}$\ by dust surface chemistry balances the $\rm{H}_{2}$\ destruction rate due to cosmic rays, \citet{GL2005} utilized the cold-HI/$\rm{H}_{2}$\ ratio using HINSA features as a chemical clock to probe the formation of molecular clouds. This established the HINSA technique as a new tool to study the early state of molecular cloud formation. \citet{Tian2010} have also shown that the HINSA technique can be adopted as an indicator of the spatial relationship between features. \citet{LG2003} reported a HINSA detection rate of 77\% for the clouds in the Taurus/Perseus region. \citet{Krco2010} found a detection rate of over 80\% over a wide range of environments in the Galaxy. The prevalence of HINSA features suggests that cold HI gas is always associated with molecular cores, at least in our Galaxy. It is therefore of interest to explore a different environment to test for the presence of HINSA features, and study the properties and evolution of molecular clouds using this technique. The Large Magellanic Cloud (LMC) is an ideal target for a similar study. As the nearest gas-rich galaxy to the Milky Way, it is located at a distance of 50 kpc \citep{W1997,Pie2013,deG2014}. Its prominent disk has a low inclination angle of 33$^{\circ}$\ \citep{W1997}, i.e.\ it is close to face-on.
This permits spatially resolved studies of the galaxy's stellar and ISM content, making the study of the LMC more similar to ``galactic'' than ``extragalactic'' environments. With a smaller stellar mass of a few $10^9$ $M_{\odot}$\ \citep{Fei1980,Kim1998,AN2000}, the LMC is in a more primitive evolutionary state than the Milky Way and other large disk galaxies: its ISM metallicity is 0.2 dex lower than the local value \citep{RD1992,W1997,RD2019}, consistent with the trend of lower-mass galaxies having lower metallicity \citep[e.g.][]{Tre2004,Kew2008,Asa2009,Man2010,Sch2015}. Thus studies of the LMC have the potential to reveal the `gastrophysics' (gas astrophysics) and star formation laws of similar low-metallicity irregular galaxies in the high-redshift Universe \citep{Wil2009}. Several studies of the cool phase HI in the LMC have been conducted in the past two decades. \citet{Dic1994} and \citet{Dic1995} suggested that the cool gas in the LMC is either more abundant or colder than that of the Milky Way by analyzing the absorption spectrum of background compact continuum sources. \citet{Meb1997} and \citet{MZ2000} confirmed this trend and reported typical temperatures of the diffuse cool gas in the LMC of 30-40 K, compared with the typical value of 60 K in the solar neighborhood \citep{Kal1985}. \citet{Bra2012} used a different approach of Gaussian component fitting and found a low temperature for the LMC cool gas consistent with previous studies. This study also created an opacity-corrected HI column density map of the LMC, finding a global correction factor of 1.33. Infrared \citep{Ber2008,Gal2011,Meix2013} and ultraviolet \citep{Tum2002,Wel2012,RD2019} studies have also provided important information on cool phase atomic gas in the LMC. Different techniques have been applied in previous HI absorption studies of the LMC. However, the HINSA technique has never been utilized beyond the Milky Way. With the advent of a recent LMC CO survey, i.e.
the MAGMA survey \citep{Wong2011} using the ATNF 22 m Mopra telescope, it is now possible to probe cold HI gas associated with molecular cores using the HINSA technique applied to the MAGMA CO cloud catalog. We have therefore conducted a joint analysis of the MAGMA CO data cube and the ATCA+Parkes HI survey data \citep{Kim2003} to study the properties of the HINSA cold HI gas in the LMC. Section 2 of the paper describes the data; Section 3 explains the data reduction process using different HINSA techniques; Section 4 shows the main results and Section 5 discusses the applicability of different HI absorption techniques and the implications for the LMC. Finally, we summarize our results in Section 6. \section{Data} In this section we introduce the data used in this study. \begin{figure*} \centering \includegraphics[width=7in]{data_used.png} \caption{Data used in this work. Grayscale image: HI column density from the ATCA+Parkes LMC HI Survey \citep{Kim2003}; red contours: the MAGMA CO Survey DR3 moment 0 map, with a contour level at 1.0 K$\cdot$km/s; white rectangular regions: boundaries of the MAGMA $^{13}$CO\ map for selected regions; white ellipses: radial rings as described in Section 4.3; white circle markers and green labels: the location and ID of the sources listed in Table 1.} \label{fig:data_map} \end{figure*} \subsection{HI} An HI 21cm survey with resolution of 1\arcmin.0 ($\approx$15 pc assuming a distance of 50 kpc) was conducted during the late 1990s with the Australia Telescope Compact Array (ATCA) \citep{Kim1998}. Due to the missing flux problem for interferometers, this survey was not sensitive to structures larger than 500 pc. To complement these data, \citet{Kim2003} combined ATCA interferometer and Parkes single-dish observations \citep{SS2003} to give the most complete HI survey of the LMC in terms of sky and spatial frequency coverage. Their data cube contains a complete sampling of spatial structures from 15 pc to 10 kpc.
The velocity resolution is 1.649 km\,s$^{-1}$ and brightness temperature sensitivity 2.4 K. \subsection{CO} The most complete CO survey in terms of sky coverage in the past decade has been the second LMC CO survey conducted by the NANTEN telescope \citep{Fuk2008}. It is a spatially continuous survey which identified 272 molecular clouds. The Magellanic Mopra Assessment (MAGMA) is a follow-up CO survey to target detected regions, with better sensitivity by a factor of 2, and was conducted with the ATNF 22m Mopra telescope \citep{Hug2010}. \citet{Wong2011} cataloged 450 molecular clouds based on the CO $J$=1-0 map. We employ the third data release of MAGMA for this study \citep{Wong2011,Wong2017}. It contains the CO $J$=1-0 cube described in \citet{Wong2011}. The cube has an angular resolution of 45\arcsec, and a pixel spacing of 15\arcsec. The velocity resolution is 0.526 km\,s$^{-1}$. The rms noise of the cube is typically 300 mK. Compared to the published paper \citep{Wong2011}, the released data cube has been processed with a constant 10 mK offset to bring the baseline back to $\sim$ 0 K. As described in Sections 4.1 and 5.1, we also utilized the unreleased MAGMA $^{13}$CO\ data for optical depth determination. $^{13}$CO\ observations were obtained simultaneously with the $^{12}$CO\ observations for data obtained in 2006 June to 2013 September, and will be described fully in a separate paper (Wong et al., in preparation). A merged cube was generated from 1244 individual 5\arcmin\ $\times$ 5\arcmin\ square maps spanning a heliocentric velocity range of 200--325 km s$^{-1}$. The CO spectra were placed on a main-beam brightness temperature scale ($T_{\rm mb}$) assuming an ``extended beam'' efficiency of 0.43 based on daily observations of Orion KL referenced to the measurements of \citet{Ladd2005}. Our $T_{\rm mb}$ scale has recently been confirmed by comparison with ALMA total power mapping (R. Indebetouw, private communication). 
The resulting maps possess a Gaussian beam of 45\arcsec\ FWHM which is oversampled with a pixel scale of 15\arcsec. The typical RMS map noise is $\sigma(T_{\rm mb}) \approx 0.19$ K per 0.55 km s$^{-1}$ channel. The spatial coverage of the CO and $^{13}$CO\ data used in this study is shown in Figure~\ref{fig:data_map}, on top of the HI column density map for the LMC. \section{Methods} \subsection{HINSA techniques} One challenge to applying the HINSA concept to analysis of HI absorption features is how to reconstruct the background emission or the ``original" spectrum before absorption. An accurately recovered ``original" spectrum leads to an accurately defined absorption line profile, and vice versa. Previous studies have used several different approaches. \citet{LG2003} adopted an intuitive method by masking the absorption feature and fitting the rest of the HI profile with a polynomial. This is common practice in absorption analysis, but suffers from the subjectivity in judging the shape of the original spectrum. As they reported, the fitted result can vary as much as 1 K using different orders of polynomial. \citet{Per2011} made the assumption of a smooth and gradual variation of the background emission, and take the average spectrum of several reference points around the center of the core as the ``original'' spectrum. But as stated by many authors, the HI gas is intrinsically filamentary \citep[e.g.][]{Elm2011}, thus considering it as ``smooth and gradual'' can cause unpredictable biases. \citet{Krc2008} presented a new technique to improve the quality of HINSA feature fitting procedure. Considering the narrow nature of HINSA features, they proposed that the narrow dip in the HI profile would generate a feature in the 2nd-derivative of the observed line profile since the slowly changing ``original'' profile is largely suppressed while the fast changing absorption dip is highlighted. This was used to locate the HINSA-like absorption features in the HI profile. 
By constraining the regions searched by such a method with molecular tracers, finding the possible HI self-absorption features associated with molecular clouds is possible. This provides a more convenient way to extract the HINSA profile with more confidence than the previous methods. \subsection{HINSA techniques applied in this work} In this work, we basically adopt the \citet{Krc2008} technique, although some modifications were made to cope with the fact that the MAGMA program had only released $^{12}$CO\ data at the time of our analysis. \subsubsection{Radiative transfer analysis} Assuming the cold HI gas responsible for a HINSA feature has optical depth $\tau \left(v\right)$, then: \begin{equation} {T}_{{A}}\left({v}\right)={T}_{{b}}\left({v}\right){e }^{{-\tau \left({v}\right)}}+{T}_{{H}}\left[{1-{e }^{{-\tau \left({v}\right)}}}\right], \end{equation} where $v$\ is the velocity, ${T}_{{\rm A}}\left(v\right)$\ is the observed HI spectrum, ${T}_{{\rm b}}\left(v\right)$\ is the background HI emission or so-called ``original'' spectrum, including the emission from background HI clouds as well as other background sources such as the CMB. ${T}_{{\rm H}}$\ is the temperature of the HINSA-generating cold HI associated with molecular material. In writing this function, we have neglected the foreground warm HI which is actually not affected by the absorbing cold HI gas. The same approximation was adopted by \citet{Krc2008} for the nearby sources in the Galaxy. For the sources in the LMC that could be embedded anywhere in the HI disk, this could be a poorer assumption. The impact of this will be discussed later. 
We make the simple assumption that $\tau \left(v\right)$\ has a Gaussian shape and can be expressed as \begin{equation} \tau \left({v}\right)={\tau }_{{0}}\exp\left({-\frac {{{\left({v-{v}_{{H}}}\right)}}^{{2}}}{2{\sigma }_{{H}}^{{2}}}}\right), \end{equation} where ${\tau }_{{0}}$\ represents the peak optical depth of the cold HI gas, ${v}_{{\rm H}}$\ is the velocity of the peak optical depth, and ${\sigma}_{{H}}$\ is the width of the optical depth profile. In our study, we use a single Gaussian fit to the CO spectrum, and take the fitted central velocity of the CO peak as the value of ${v}_{{\rm H}}$. The line width of the gas component ${\sigma}_{{H}}$, consists of two components, thermal and non-thermal according to: \begin{equation} {\sigma}_{{H}} ={{\left({{\sigma }_{{H_{th}}}^{{2}}+{\sigma }_{{H_{nt}}}^{{2}}}\right)}}^{{\frac {1}{2}}}, \end{equation} where the subscripts \textit{th}\ and \textit{nt}\ represent thermal and non-thermal, respectively. Similarly, for the CO gas: \begin{equation} {\sigma}_{{CO}} ={{\left({{\sigma }_{{CO_{th}}}^{{2}}+{\sigma }_{{CO_{nt}}}^{{2}}}\right)}}^{{\frac {1}{2}}}. \end{equation} For well-mixed gas, the non-thermal line width would be similar for different components \citep{LG2003}. Combining formulas (3) and (4), we obtain: \begin{equation} {\sigma}_{{H}} =\left[{{\sigma}_{{CO}}^{{2}}+{\left({{\sigma }_{{H_{th}}}^{{2}}-{\sigma }_{{CO_{th}}}^{{2}}}\right)}}\right]^{{\frac {1}{2}}}, \end{equation} where the thermal linewidth for both HI and CO gas satisfy \begin{equation} {\sigma }_{{th}}={{\left({\frac {2kT}{m}}\right)}}^{{\frac {1}{2}}}, \end{equation} where $m$ represents the mass of a hydrogen atom or CO molecule, when ${\sigma }_{{th}}$\ is replaced by ${\sigma }_{H_{th}}$\ or ${\sigma }_{CO_{th}}$, respectively. 
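As a concrete illustration of equations (5) and (6), the linewidth conversion can be sketched in a few lines of Python. This is only an illustrative snippet, not part of our reduction pipeline, and the input values ($T_k = 20$ K, $\sigma_{CO} = 1$ km s$^{-1}$) are arbitrary examples:

```python
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant [J/K]
M_H = 1.6735e-27            # mass of a hydrogen atom [kg]
M_CO = 28.0 * 1.66054e-27   # mass of a CO molecule [kg]

def sigma_thermal(T_k, m):
    """Thermal velocity dispersion, Eq. (6): sigma = sqrt(2 k T / m), in km/s."""
    return np.sqrt(2.0 * K_B * T_k / m) / 1.0e3

def sigma_hi(sigma_co, T_k):
    """Eq. (5): HI width = CO width plus the HI/CO thermal-width difference."""
    return np.sqrt(sigma_co**2
                   + sigma_thermal(T_k, M_H)**2
                   - sigma_thermal(T_k, M_CO)**2)

# Example inputs (made-up values): T_k = 20 K, fitted CO dispersion of 1.0 km/s
print(sigma_thermal(20.0, M_H))   # thermal HI width, ~0.57 km/s
print(sigma_hi(1.0, 20.0))        # expected HINSA width, slightly above 1.0 km/s
```

Because the hydrogen atom is much lighter than the CO molecule, the predicted HINSA linewidth is always somewhat broader than the CO linewidth at the same kinetic temperature.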
Assuming that the different gas components inside the molecular cloud are in thermodynamic equilibrium then, for either HI or CO, the temperature $T$ in equation (6) can be replaced with the same CO kinetic temperature ${T}_{{\rm k}}$. Under the assumption of LTE, we take ${T}_{{\rm k}}$\ to be equal to ${T}_{{\rm ex}}$, the excitation temperature of CO. We therefore have \begin{equation} f\left({{T}_{{ex}}}\right)=\frac {{T}_{{{B}_{{0}}}}}{{T}_{{1-0}}}+f\left({{T}_{{bg}}}\right), \end{equation} where $f\left(T\right)$ is defined as \begin{equation} f\left({T}\right)=\frac {1}{{\exp{\left({\frac {{T}_{{1-0}}}{T}}\right)}}-1}. \end{equation} ${T}_{{{\rm B}_{{0}}}}$\ is the brightness temperature at the CO line center, here adopted as the peak temperature of the fitted Gaussian profile. ${T}_{{1-0}}$\ is the equivalent temperature of the $^{12}$CO $J=1-0$\ transition and has the value 5.53~K. ${T}_{{\rm bg}}$\ is the background field temperature, for which we use the CMB temperature of 2.73 K. With these assumptions and relations, we can recover the ``original'' HI spectrum as function of a single variable ${\tau }_{{0}}$. As demonstrated in \citet{Krc2008}, a narrow dip in a smooth line would generate a prominent feature in the 2nd-derivative profile. Ideally, we expect that such a feature can be minimized if we adjust the value of ${\tau }_{{0}}$\ until the narrow dip in spectrum vanishes. We integrate the square of the 2nd-derivative of the recovered ``original'' spectrum, and we stop adjusting the value of ${\tau }_{{0}}$\ when the change falls below a precision criterion of $10^{-4}$ K km s$^{-1}$. We are left with the peak optical depth ${\tau }_{{0}}$\ and the ``original'' HI spectrum before absorption, ${T}_{{\rm b}}\left(v\right)$. The amount of HINSA absorption as a function of frequency, i.e. the HINSA profile, is the difference between the ``original'' HI spectrum and the observed HI spectrum. 
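The recovery of ${\tau }_{{0}}$\ by minimizing the integrated square of the 2nd-derivative can be illustrated with a toy Python sketch. A synthetic spectrum is built from a known smooth background and a narrow absorption profile, and a simple grid scan stands in for our actual iterative scheme with its $10^{-4}$ precision criterion; all numerical values below are synthetic:

```python
import numpy as np

def recover_background(T_A, tau, T_ex):
    """Invert Eq. (1): T_b = (T_A - T_ex * (1 - exp(-tau))) * exp(tau)."""
    return (T_A - T_ex * (1.0 - np.exp(-tau))) * np.exp(tau)

def roughness(T_b, dv):
    """Integrated square of the 2nd derivative of the recovered spectrum."""
    d2 = np.gradient(np.gradient(T_b, dv), dv)
    return np.sum(d2**2) * dv

# --- synthetic test case ---
v = np.arange(220.0, 320.0, 0.5)                            # velocity axis [km/s]
T_b_true = 80.0 * np.exp(-(v - 270.0)**2 / (2 * 15.0**2))   # smooth background
T_ex = 20.0                                                 # cold-HI temperature [K]
tau_true = 1.0 * np.exp(-(v - 270.0)**2 / (2 * 1.5**2))     # narrow absorption
T_A = T_b_true * np.exp(-tau_true) + T_ex * (1.0 - np.exp(-tau_true))

# Scan tau_0 and keep the value that makes the recovered spectrum smoothest;
# the Gaussian shape of tau(v) would come from the CO fit in practice.
shape = np.exp(-(v - 270.0)**2 / (2 * 1.5**2))
tau0_grid = np.arange(0.0, 2.0, 0.01)
best = min(tau0_grid,
           key=lambda t0: roughness(recover_background(T_A, t0 * shape, T_ex), 0.5))
print(best)   # close to the injected value of 1.0
```

At the injected optical depth the recovered spectrum reduces to the smooth background, so the roughness measure reaches its minimum there; under- or over-correcting leaves a narrow residual dip or bump that raises the 2nd-derivative power.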
Then we can derive the HINSA brightness temperature profile as \begin{equation} \begin{aligned} {T}_{{\rm HINSA}}\left({v}\right)&={T}_{{b}}\left({v}\right)-{T}_{{A}}\left({v}\right)\\ &=\left({{T}_{{A}}\left({v}\right)}-{{T}_{{ex}}}\right)\left({e }^{{\tau \left({v}\right)}}-{1}\right). \end{aligned} \end{equation} In summary, we derive the HINSA profile using the following steps: \begin{itemize} \item Calculate the so-called ``original'' HI spectrum, which does not show the absorption and thus appears smoother (the smoothness of the ``original'' spectrum is judged by its 2nd-derivative). \item Subtract the real HI spectrum from the calculated ``original'' spectrum to derive the HINSA profile. \end{itemize} \subsection{Deriving physical parameters} It can be seen from Equation (9) that when the central velocities of $\tau \left(v\right)$\ and the observed HI spectrum ${T}_{{\rm A}}\left(v\right)$\ differ, an asymmetric ${T}_{{\rm HINSA}}\left(v\right)$\ profile will result. To parameterize the ${T}_{{\rm HINSA}}\left(v\right)$\ profile, a Gaussian fit is performed, and the peak temperature, central velocity and width ${\sigma}_{{\rm HINSA}}$\ are derived. We then calculate the column density of the HINSA-associated cold HI based on formula (13) of \citet{LG2003}: \begin{equation} \begin{aligned} \frac{N\left({\rm HINSA}\right)}{{\rm cm}^{-2}}=1.95\times {{10}}^{{18}}{\tau }_{{0}}\frac {{\sigma }_{{\rm HINSA}}}{\rm km\;s^{-1}}\left(\frac {T_k}{\rm K}\right) . \end{aligned} \end{equation} CO is almost always optically thick in molecular cores, so the estimation of the $\rm{H}_{2}$\ column density $N\left({{H}_{{2}}}\right)$\ based solely on CO can be unreliable. However, the CO luminosity-$\rm{H}_{2}$\ column density conversion factor, or X-factor, is often the only way to estimate the $\rm{H}_{2}$\ column density in external galaxies \citep[and references therein]{Bol2013}.
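As a quick numerical check, equations (9) and (10) translate directly into code (the input values in the example below are hypothetical):

```python
import numpy as np

def hinsa_profile(T_A, tau, T_ex):
    """Equation (9): HINSA brightness temperature profile [K]."""
    return (T_A - T_ex) * np.expm1(tau)

def n_hinsa(tau0, sigma_kms, T_k):
    """Equation (10): HINSA-associated cold HI column density [cm^-2]."""
    return 1.95e18 * tau0 * sigma_kms * T_k
```

A typical detection with ${\tau }_{{0}}=0.3$, ${\sigma }_{{\rm HINSA}}=1.5$ km~s$^{-1}$ and $T_{\rm k}=5$ K gives $N({\rm HINSA})\approx 4.4\times 10^{18}$ cm$^{-2}$, comparable to the values in Table \ref{tbl:selectedsightlines}.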
Similarly for the LMC, there is currently no other molecular tracer available that has such completeness in coverage. We therefore use the latest estimate for the LMC X-factor, $4\times {{10}}^{{20}}{\rm cm}^{{-2}}\left({\rm K\;km\;s^{-1}}\right)^{-1}$ \citep{Bol2013}, which is a direct result of the MAGMA Project \citep{Hug2010,Wong2011,Pin2010}. The integrated flux of the CO profile is calculated from the Gaussian fit to avoid the effect of component blending. The HI-to-$\rm{H}_{2}$\ ratio, defined as the ratio of the HINSA-associated cold HI to the $\rm{H}_{2}$\ content, is calculated by comparing $N\left({\rm HINSA}\right)$\ and $N\left({{H}_{{2}}}\right)$. It is the major parameter we derive to characterize the abundance of the HINSA-associated HI. \subsection{Optical depth correction} The molecular clouds where HINSA features are detected are embedded in the LMC's HI gas disk. The presence of foreground HI gas diminishes the strength of the HINSA absorption dip that we are searching for. Unlike in \citet{LG2003}, where the proportion of foreground gas was estimated using the Galactic rotation curve, the location of the molecular clouds within the LMC disk is unknown. Here we evaluate the effect of the foreground gas on the observed HINSA features. Using the same variable $p$ as \citet{LG2003} to describe the position of a given molecular cloud in a uniform disk, $(1-p)$ is the fraction of foreground HI gas relative to the total amount of HI gas in the line of sight.
The \emph{real} optical depth of the HINSA HI (equation 12 of \citet{LG2003}) is: \begin{equation} \begin{aligned} {\tau}_{0}^{\prime}=\ln \left[\frac{{p}{T}_{b}+\left({T}_{c}-{T}_{H}\right)\left(1-{\tau}_{f}\right)}{{p}{T}_{b}+\left({T}_{c}-{T}_{H}\right)\left(1-{\tau}_{f}\right)-{T}_{\rm HINSA}}\right] \end{aligned} \end{equation} where ${T}_{{\rm b}}$, ${T}_{{\rm H}}$\ and ${T}_{{\rm HINSA}}$\ are as defined in section 3.2.1, ${T}_{{\rm c}}$\ is the continuum temperature, ${\tau }_{{f}}$\ is the foreground HI optical depth, and ${\tau }_{{f}}=\left({1}-{p}\right){\tau }_{{HI}}$, where ${\tau }_{{HI}}$\ is the total HI optical depth along the line of sight through the LMC's disk. When the foreground HI is ignored as it was done in section 3.2.1, $p=1$ and ${\tau }_{{0}}^{\prime}$=${\tau }_{{0}}$\ as defined in section 3.2.1. The optical depth correction factor, given by $C$, is defined by: \begin{equation} \begin{aligned} {C}=\frac{{\tau}_{0}^{\prime}}{{\tau}_{0}}. \end{aligned} \end{equation} Using a typical set of parameters ${T}_{{\rm b}}$$={80}$ K (Galactic value, \citet{LG2003}), ${T}_{{\rm c}}$$={3.8}$ K, ${T}_{{\rm H}}$$={10}$ K and ${\tau }_{{HI}}$$=0.7$ , a typical $C(p,\ $${T}_{{\rm HINSA}}$$)$ relation is shown in Figure~\ref{fig:correctionfactor}. As shown in Figure \ref{fig:A1}, the adopted value ${T}_{{\rm b}}$$={80}$ K is also a typical value in the LMC for HINSA regions. \citet{LG2003} adopted ${T}_{{\rm c}}$$={3.5}$ K for Milky Way studies, whereas the value used for the LMC (3.8 K) is derived from the 20-cm continuum map of \citet{Hugh2007}. The flux for Region 3 of \citet{Hugh2007}, where the continuum flux at 3.75 GHz \citep{Hayn1991} is higher than 40 mJy beam$^{-1}$, i.e. the brighter part of the LMC, is considered for the derivation of ${T}_{{\rm c}}$. The value adopted for ${\tau }_{{HI}}$\ is the average value of ${\tau }_{{max}}$\ measured by \citet{Dic1994}, \citet{Meb1997} and \citet{MZ2000} towards 87 radio sources behind the LMC. 
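With the typical parameter values quoted above as defaults, equations (11) and (12) can be evaluated as follows (a sketch; argument names are ours):

```python
import numpy as np

def tau0_corrected(p, T_hinsa, T_b=80.0, T_c=3.8, T_h=10.0, tau_hi=0.7):
    """Equation (11): HINSA optical depth for a cloud at embedding depth p."""
    tau_f = (1.0 - p) * tau_hi                   # foreground HI optical depth
    num = p * T_b + (T_c - T_h) * (1.0 - tau_f)
    return np.log(num / (num - T_hinsa))

def correction_factor(p, T_hinsa, **kw):
    """Equation (12): C = tau0'(p) / tau0'(p=1)."""
    return tau0_corrected(p, T_hinsa, **kw) / tau0_corrected(1.0, T_hinsa, **kw)
```

For $p=0.5$ and ${T}_{{\rm HINSA}}=1$ K this gives $C\approx 2.1$, consistent with Figure \ref{fig:correctionfactor}.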
The value of $C$ is large when $p$ is small, and is very sensitive to ${T}_{{\rm HINSA}}$. However, for $p>0.3$, the scatter becomes smaller and $C$ approaches unity. Although the exact value of $p$ is unknown, assuming that the scale height of the molecular disk of the LMC is smaller than that of the HI disk, we can adopt $p={0.5}$. For $p\sim0.5$, the value of $C$ varies in a narrow range around $\sim2$ for different values of ${T}_{{\rm HINSA}}$: the difference is less than 14\% for values between 0.1 K and 10 K. In the following calculation, ${T}_{{\rm c}}$\ is fixed to ${3.8}$ K, while the values of ${T}_{{\rm b}}$, ${T}_{{\rm H}}$\ and ${T}_{{\rm HINSA}}$\ are used on a pixel-by-pixel basis. \begin{figure} \centering \includegraphics[width=3.3in]{calccorrection.png} \caption{The optical depth correction factor $C$ for different values of the embedding depth $p$ and $T_{\rm HINSA}$.} \label{fig:correctionfactor} \end{figure} \section{Results} Of all the $1997\times 2230$ pixels in the HI data cube, HINSA features were detected in 1446 pixels, i.e. an angular filling factor of $\sim 3\times 10^{-4}$. The details of these detections are given below. \subsection{HINSA-HI abundance} Figure~\ref{fig:HIH2ratio} shows the histogram of the HINSA-HI abundance, i.e.\ the ratio of the HINSA-HI column density to that of $\rm{H}_{2}$, which is important for comparison with other studies. The value after optical depth correction varies from 0.5\e{-3} to 3.4\e{-3} (68\% interval), with a mean value of $(1.31 \pm 0.03)$\e{-3}; the value before correction varies from 0.3\e{-3} to 1.6\e{-3} (68\% interval), with a mean value of $(0.64 \pm 0.02)$\e{-3}. We also show for comparison the results from \citet{LG2003}, a HINSA survey of the Taurus Molecular Cloud, and \citet{Krco2010}, a HINSA survey of other regions in the Milky Way.
The 68\% interval value range is 0.2\e{-3} to 4.4\e{-3} for \citet{LG2003}, 0.5\e{-3} to 2.5\e{-3} for \citet{Krco2010}, and 0.4\e{-3} to 3.0\e{-3} for both Milky Way samples combined. The mean value for both Milky Way samples is $(1.0 \pm 0.2)$\e{-3}. Our result shows that the LMC's HINSA-HI/$\rm{H}_{2}$\ abundance ratio is slightly higher than, but not significantly different from, the Milky Way value, which means the LMC has a cold gas fraction similar to that of the Milky Way. \begin{figure*} \centering \includegraphics[width=6in]{DR3calcHItoH2.png} \caption{A histogram of the HINSA-HI to $\rm{H}_{2}$\ ratio $\log_{10} (N_{\rm HINSA}/N_{\rm H_2})$. The bold black histogram shows the LMC results from the present work. The red histogram shows the results for the Taurus/Perseus region from \citet{LG2003}. The green histogram shows the results for Milky Way regions outside Taurus from \citet{Krco2010} (the values for each velocity component, instead of the mean value for each line of sight, are used). The blue histogram shows the sum of the previous two studies. To improve the visibility of the diagram, the $y$-axis is scaled up by a factor of 5 for the Milky Way results.} \label{fig:HIH2ratio} \end{figure*} \subsection{Catalog} The HINSA detections were inspected manually. Consecutive pixels with detections were catalogued into the same ``group''. There are 37 groups of HINSA detections in the LMC where the peak optical depth of HINSA-HI is higher than 0.2. Table \ref{tbl:selectedsightlines} is a catalog of the physical parameters at the peak optical depth positions of these groups. \begin{table*} \centering \begin{tabular}{lccccccccr} \hline \hline No.
& $\alpha$ (2000) & $\delta$ (2000) & $\tau_{0}$ & $T_{H}$ & $\sigma_{H}$ & $N_{\rm HINSA}$ & $N_{\rm H_2}$ & $N_{\rm HINSA} /N_{\rm H_2}$ & Cloud ID \\ & (h:m:s) & (\arcdeg:\arcmin:\arcsec) & & (K) & (km\,s$^{-1}$) & (cm$^{-2}$) & (cm$^{-2}$) & & \\\hline 1 & 04:47:21.90 & -67:11:42.3 & 0.29 & 4.0 & 2.0 & 7.6E+18 & 2.5E+21 & 7.0E-03 & 9 \\ 2 & 04:47:34.98 & -67:12:16.0 & 0.31 & 4.5 & 1.9 & 8.3E+18 & 2.3E+21 & 8.8E-03 & 10 \\ 3 & 04:49:01.79 & -68:36:17.2 & 0.36 & 5.2 & 1.8 & 1.2E+19 & 3.9E+21 & 7.3E-03 & 19 \\ 4 & 04:49:11.07 & -68:35:03.9 & 0.30 & 4.9 & 0.9 & 4.0E+18 & 1.6E+21 & 5.8E-03 & 24 \\ 5 & 04:49:29.52 & -68:30:14.5 & 0.21 & 4.3 & 1.9 & 5.7E+18 & 2.5E+21 & 5.1E-03 & 30 \\ 6 & 04:50:23.76 & -69:30:15.9 & 0.31 & 4.7 & 0.7 & 3.5E+18 & 1.1E+21 & 7.5E-03 & 36 \\ 7 & 04:51:50.21 & -69:21:18.0 & 0.31 & 4.3 & 1.4 & 6.2E+18 & 2.4E+21 & 6.0E-03 & 44 \\ 8 & 04:52:16.74 & -66:53:40.6 & 0.24 & 4.0 & 1.5 & 4.7E+18 & 1.8E+21 & 6.0E-03 & 50 \\ 9 & 04:52:51.04 & -68:03:51.5 & 0.31 & 4.7 & 1.7 & 8.2E+18 & 3.1E+21 & 6.1E-03 & 58 \\ 10 & 04:54:05.70 & -69:11:33.1 & 0.29 & 5.1 & 2.6 & 1.3E+19 & 5.0E+21 & 5.9E-03 & 65 \\ 11 & 04:55:33.86 & -66:28:16.9 & 0.33 & 4.8 & 2.3 & 1.2E+19 & 4.6E+21 & 6.3E-03 & 78 \\ 12 & 04:56:17.62 & -66:37:26.5 & 0.26 & 5.0 & 1.4 & 6.2E+18 & 2.8E+21 & 5.1E-03 & 80 \\ 13 & 04:58:42.28 & -66:07:59.2 & 0.20 & 5.3 & 1.7 & 5.9E+18 & 3.8E+21 & 3.5E-03 & 110 \\ 14 & 05:03:47.65 & -67:18:35.1 & 0.20 & 5.5 & 1.8 & 6.3E+18 & 4.5E+21 & 3.1E-03 & 137 \\ 15 & 05:05:26.14 & -66:53:54.0 & 0.24 & 4.6 & 1.4 & 5.1E+18 & 2.2E+21 & 5.4E-03 & 146 \\ 16 & 05:09:55.96 & -68:53:33.3 & 0.22 & 4.5 & 2.7 & 8.7E+18 & 4.1E+21 & 4.7E-03 & 165 \\ 17 & 05:13:21.03 & -69:23:03.4 & 0.24 & 7.4 & 1.8 & 9.0E+18 & 7.7E+21 & 2.6E-03 & 207 \\ 18 & 05:13:25.50 & -67:28:17.6 & 0.31 & 4.4 & 1.4 & 6.2E+18 & 1.8E+21 & 8.5E-03 & 206 \\ 19 & 05:13:51.33 & -67:07:42.8 & 0.27 & 3.4 & 2.1 & 5.6E+18 & 1.1E+21 & 1.2E-02 & - \\ 20 & 05:14:33.31 & -68:46:09.2 & 0.36 & 4.8 & 1.6 & 8.6E+18 & 2.8E+21 & 
7.3E-03 & 213 \\ 21 & 05:22:12.97 & -67:57:42.9 & 0.57 & 4.3 & 2.9 & 2.4E+19 & 3.6E+21 & 1.9E-02 & 291 \\ 22 & 05:24:21.84 & -68:25:41.2 & 0.38 & 6.2 & 2.3 & 1.8E+19 & 7.0E+21 & 6.3E-03 & 350 \\ 23 & 05:24:51.46 & -69:40:20.8 & 0.36 & 4.7 & 2.8 & 1.5E+19 & 4.6E+21 & 8.4E-03 & 355 \\ 24 & 05:25:10.68 & -69:40:40.1 & 0.23 & 7.6 & 1.5 & 7.6E+18 & 6.5E+21 & 2.6E-03 & 358 \\ 25 & 05:25:53.67 & -66:14:07.3 & 0.22 & 4.1 & 2.6 & 7.6E+18 & 3.0E+21 & 5.6E-03 & 374 \\ 26 & 05:35:24.75 & -67:34:48.0 & 0.96 & 4.1 & 3.6 & 4.8E+19 & 3.6E+21 & 4.1E-02 & 451 \\ 27 & 05:35:47.86 & -69:13:08.0 & 0.30 & 4.5 & 1.3 & 6.0E+18 & 3.4E+21 & 4.0E-03 & 459 \\ 28 & 05:35:53.06 & -69:02:22.9 & 0.27 & 5.2 & 2.3 & 1.1E+19 & 5.3E+21 & 4.7E-03 & 462 \\ 29 & 05:38:29.73 & -69:02:09.6 & 0.28 & 4.9 & 2.2 & 9.7E+18 & 4.1E+21 & 5.5E-03 & 508 \\ 30 & 05:39:35.66 & -69:46:16.4 & 0.22 & 6.4 & 3.2 & 1.4E+19 & 1.0E+22 & 3.0E-03 & 531 \\ 31 & 05:39:44.42 & -69:37:31.6 & 0.34 & 4.4 & 1.7 & 8.2E+18 & 3.1E+21 & 6.5E-03 & 548 \\ 32 & 05:40:02.89 & -69:51:26.4 & 0.48 & 6.9 & 3.1 & 3.2E+19 & 1.2E+22 & 7.5E-03 & - \\ 33 & 05:41:16.56 & -70:55:31.9 & 0.37 & 4.8 & 2.0 & 1.3E+19 & 3.4E+21 & 9.2E-03 & 606 \\ 34 & 05:43:23.42 & -69:25:12.2 & 0.20 & 5.1 & 2.0 & 6.4E+18 & 4.1E+21 & 3.5E-03 & 635 \\ 35 & 05:44:42.57 & -69:28:13.6 & 0.36 & 6.0 & 2.7 & 1.9E+19 & 8.3E+21 & 5.7E-03 & 650 \\ 36 & 05:46:17.45 & -69:38:26.6 & 0.27 & 3.9 & 2.3 & 7.9E+18 & 2.2E+21 & 8.1E-03 & 666 \\ 37 & 05:47:01.88 & -70:46:11.0 & 0.23 & 4.1 & 3.1 & 9.0E+18 & 3.4E+21 & 5.8E-03 & 672 \\ \hline\hline \end{tabular} \caption{Physical parameters for the sightlines where HINSA optical depth is greater than 0.2. The Cloud ID is from Catalog C in \citet{Wong2011}. The spectra for each of these sightlines are displayed in the Appendix A.} \label{tbl:selectedsightlines} \end{table*} \subsection{Spatial distribution} \citet{Cho2016} derive a photometric metallicity map of the LMC using MCPS and OGLE III data. 
It shows a shallow metallicity gradient, with the central bar having the highest metallicity and the outer parts having the lowest metallicity. To confirm whether there is a corresponding trend in the spatial distribution of the HINSA-HI abundance ratio, we divide the LMC into 6 concentric elliptical rings. The radii of the rings start at 0.5 kpc, and are spaced by 0.5 kpc, doubling the bin width used by \citet{Cho2016}. The position angle (PA) of these rings is extracted from the PA measurements of \citet{Kim1998}. An HI morphologically-derived inclination angle of 22 degrees is adopted, according to the measurements of \citet{Kim1998}. The kinematic centre of the LMC HI disk is used here \citep[$05^{h}17.6^{m}, -69^{d}02^{m}$ as given by][]{Kim1998}, which deviates from the optical centre used by \citet{Cho2016} by 27 arcmin. The pixels with HINSA detections are divided into radial bins containing 189, 153, 343, 442, 248, 51, 26 and 51 pixels, from small to large radius respectively. We derive the mean and standard deviation for each bin by fitting the $\log_{10}$ histogram with a Gaussian. The result, shown in Figure~\ref{fig:DR3HItoH2_rings}, shows no radial gradient of the HINSA-HI abundance in the LMC. To examine whether there is any radial trend in the HINSA-HI abundance ratio in the Milky Way, we have looked into the distances of the molecular clouds that were studied in the previous Milky Way HINSA studies \citep{LG2003,Krco2010}. We find that the existing HINSA measurements in the Milky Way are focused either on nearby molecular clouds (less than 1 kpc from the Sun) or clouds at unknown distances. It is not yet possible to make a definitive statement on the radial distribution of HINSA-HI abundance in the Milky Way. \begin{figure} \centering \includegraphics[width=3in]{DR3HItoH2_rings.png} \caption{HINSA-HI abundance as a function of the radius from the center of the LMC.
The error bars show the standard deviation within each bin.} \label{fig:DR3HItoH2_rings} \end{figure} \subsection{Highlighted regions} The detection of HINSA signatures is prevalent along the sightlines towards molecular clouds in the LMC. Of the regions with strong and concentrated HINSA signatures, six have $^{13}$CO\ data: N11, N44, NAN17, NAN216, NAN223 and the Ridge southward of 30 Dor (the `Ridge'). The $^{13}$CO\ data are used to help determine the optical depth of CO, as discussed in Section 5.1. These regions are highlighted here: maps of the HINSA-HI optical depth are shown in Appendix B. The highlighted regions are mostly distributed along the two spiral features of the LMC, with one (N44) located to the north of the optical bar. The distribution of these selected regions is similar to the distribution of star formation activity in the LMC: 30 Dor and the southern ridge have the most violent star-formation activity; the western spiral feature and the region north of the optical bar are also quite active, while the region south of the optical bar lacks major star formation activity and molecular clouds. These maps show that the distribution of HI emission, CO and HINSA-HI roughly follows an onion shell structure, with HI emission in an outer shell and HINSA-HI in the inner core. But it also seems that the spatial peak in the HINSA-HI optical depth is often mismatched with the peak of the CO cloud. This may reflect the inadequacy of CO as an $\rm{H}_{2}$\ cloud tracer, or it may reflect an evolutionary sequence. \citet{Zuo2018} have reported the discovery of a shell structure of HINSA-HI around a molecular cloud in the Milky Way, which indicates the depletion of atomic hydrogen in the center of the molecular cloud. The mismatch of the HINSA-HI peak and the CO cloud peak in LMC clouds could be due to a similar reason, although our lower spatial resolution makes this harder to judge.
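For reference, the ring binning of Section 4.3 can be sketched as below. The centre, inclination and distance are the values quoted in the text; the position angle is a placeholder, since the text cites the PA measurements of \citet{Kim1998} without quoting a single number, and the flat-sky deprojection is our simplification.

```python
import numpy as np

RA0 = 15.0 * (5.0 + 17.6 / 60.0)   # kinematic centre RA, degrees
DEC0 = -(69.0 + 2.0 / 60.0)        # kinematic centre Dec, degrees
INCL = np.radians(22.0)            # HI morphologically-derived inclination
PA = np.radians(170.0)             # placeholder position angle (assumed)

def deprojected_radius_kpc(ra, dec, dist_kpc=50.0):
    """Galactocentric radius of a sky position, deprojecting the inclined
    disk (flat-sky approximation, valid for small angular offsets)."""
    dx = (ra - RA0) * np.cos(np.radians(DEC0))   # east-west offset, degrees
    dy = dec - DEC0
    x = dx * np.sin(PA) + dy * np.cos(PA)        # along the major axis
    y = (dy * np.sin(PA) - dx * np.cos(PA)) / np.cos(INCL)
    return np.radians(np.hypot(x, y)) * dist_kpc

def ring_index(r_kpc, r0=0.5, dr=0.5):
    """Bin 0 is r < r0; bin k covers r0 + (k-1)*dr < r <= r0 + k*dr."""
    return np.maximum(0, np.ceil((np.asarray(r_kpc) - r0) / dr)).astype(int)
```

`ring_index` then assigns each HINSA-detected pixel to a radial bin for the histogram fits.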
\section{Discussion} \subsection{Optical depth of CO} In Section 3, we assumed optically thick CO emission. This assumption affects the estimate of the temperature of the HINSA gas. If the optically thick assumption breaks down, the excitation temperature of CO will be underestimated. The LTE assumption will also be incorrect, so the kinetic temperature of the gas will be further underestimated. The $^{13}$CO\ data for the above selected regions were therefore used to calculate the optical depth of CO to test this assumption. We selected all the pixels where the peak S/N ratio of the $^{13}$CO\ spectrum was larger than 3. The excitation temperature of $^{12}$CO\ for these sightlines is first calculated by assuming the $^{12}$CO\ emission is optically thick \citep{WRH2013}: \begin{equation} \begin{aligned} T_{ex}(^{12}CO)=5.5/\ln\left(1+\frac{5.5}{T_{B}(^{12}CO)+0.82}\right) \end{aligned} \label{con:Tex12CO0} \end{equation} The optical depth of $^{13}$CO\ is then derived from \begin{equation} \begin{aligned} \tau(^{13}CO)=-\ln\Bigg\{1-\frac{T_{B}(^{13}CO)}{5.3}\bigg\{\\\Big[\exp\Big(\frac{5.3}{T_{ex}(^{12}CO)}\Big)-1\Big]^{-1}-0.16\bigg\}^{-1}\Bigg\} \end{aligned} \label{con:tau13CO} \end{equation} The optical depth of $^{12}$CO\ is then obtained by multiplying the $^{13}$CO\ optical depth by the $^{12}$CO/$^{13}$CO\ ratio: \begin{equation} \begin{aligned} \tau(^{12}CO)=X(^{12}CO/^{13}CO)\tau(^{13}CO) \end{aligned} \label{con:tau12CO} \end{equation} A corrected excitation temperature of CO is then derived using the updated CO optical depth $\tau(^{12}CO)$: \begin{equation} \begin{aligned} T_{ex}'(^{12}CO)=5.5/\ln\Bigg[1+\\\Big(\frac{T_{B}(^{12}CO)}{5.5\Big[1-\exp\big(-\tau(^{12}CO)\big)\Big]}+0.15\Big)^{-1}\Bigg] \end{aligned} \label{con:Tex12CO1} \end{equation} We iterated the above process (from Equation \ref{con:tau13CO} to \ref{con:Tex12CO1}) 100 times, resulting in an improved estimate of the excitation temperature and optical depth of CO.
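The iteration described above (Equations \ref{con:Tex12CO0}--\ref{con:Tex12CO1}) can be sketched as follows; the brightness temperatures in the example are hypothetical, and the $^{12}$CO/$^{13}$CO\ ratio is an input:

```python
import numpy as np

def tex_12co(T_B12):
    """Equation (13): 12CO excitation temperature, optically thick limit [K]."""
    return 5.5 / np.log(1.0 + 5.5 / (T_B12 + 0.82))

def iterate_co_tau(T_B12, T_B13, ratio=50.0, n_iter=100):
    """Iterate Equations (14)-(16): jointly solve for the 13CO optical
    depth and the opacity-corrected 12CO excitation temperature."""
    T_ex = tex_12co(T_B12)
    tau12 = 0.0
    for _ in range(n_iter):
        # Equation (14): 13CO optical depth at the current T_ex
        tau13 = -np.log(1.0 - (T_B13 / 5.3)
                        / (1.0 / np.expm1(5.3 / T_ex) - 0.16))
        tau12 = ratio * tau13                    # Equation (15)
        # Equation (16): corrected 12CO excitation temperature
        T_ex = 5.5 / np.log(1.0 + 1.0 / (T_B12 / (5.5 * (1.0 - np.exp(-tau12)))
                                         + 0.15))
    return T_ex, tau12
```

For example, peak brightness temperatures of 2 K ($^{12}$CO) and 0.3 K ($^{13}$CO) converge within a few iterations to $\tau(^{12}{\rm CO})\approx 8$, i.e. firmly optically thick.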
The result of the above is dependent on the value of the $^{12}$CO/$^{13}$CO\ abundance ratio adopted. Previous observations have shown that the $^{12}$CO/$^{13}$CO\ ratio for the molecular clouds in the LMC may or may not be different from the Milky Way value of $\sim100$. For example, \citet{Joh1994} suggested a ratio of $50^{+25}_{-20}$ for N 159 in the LMC, while \citet{Isr2003} suggest a ``similar'' intrinsic isotopic ratio to the Milky Way. By adopting the conservative estimate of 50, we derived the optical depth distribution for our selected sightlines, as shown in Figure~\ref{fig:1213ratio}. The histogram peaks at $\sim5$, indicating predominantly optically thick emission. The optically thick assumption will therefore remain approximately valid for $^{12}$CO/$^{13}$CO\ $> 20$. \begin{figure} \centering \includegraphics[width=3.3in]{calcCOtau.png} \caption{CO optical depth distribution.} \label{fig:1213ratio} \end{figure} It is worth noting that, although the assumption that the CO is optically thick makes it straightforward to estimate ${T}_{{\rm H}}$, it will also result in an overestimate of the CO linewidth, and hence of ${\sigma}_{{H}}$. By adopting a typical set of parameters (${\tau }_{{0}}$=0.5, ${\sigma}_{{H}}$=0.5 km~s$^{-1}$, ${T}_{{\rm H}}$=5 K) to generate an artificial HI spectrum with HINSA absorption, and fitting the absorption using our method, we investigate the impact of an overestimate of ${\sigma}_{{H}}$. The simulation result suggests that when ${\sigma}_{{H}}$\ is overestimated by less than $100\%$, the corresponding peak HINSA optical depth will be underestimated by less than 50\%, but the HINSA-HI abundance will be overestimated by less than 20\%. \subsection{Comparison with previous work} \subsubsection{Background continuum source observations} \citet{Dic1994}, \citet{Dic1995}, \citet{Meb1997} and \citet{MZ2000} measured 21~cm absorption lines toward 27 sources in the background of the LMC.
The sparse sampling of these measurements, as well as the low spatial filling factor of CO clouds, means that coincidences between the two data sets are rare: only six of the 27 sightlines are coincident with MAGMA CO detections: 0526-678, 0536-693, 0539-696, 0540-697, 0539-697, 0521-699. Among these 6 sources, 0539-696, 0540-697 and 0539-697 are located in the northern part of the Ridge region, where HINSA signatures are clearly detected, whilst only 0540-697 lies behind a HINSA-detected pixel. The optical depth of HINSA-HI is 0.017 at this position, whereas \citet{Dic1994} reported the optical depths of 0540-697's four absorption components as 1.39, 0.54, 0.57 and 0.64. The HINSA feature in the sightline of 0540-697 peaks at 251 km s$^{-1}$, which overlaps with one of the 3 subcomponents of the 237 km s$^{-1}$ component reported by \citet{Dic1994}. The clear difference between the HINSA optical depth reported here and the optical depth reported by Dickey et al. arises because the two methods rely on different assumptions and thus trace different gaseous components of the CNM. HINSA traces the colder, and thus less abundant, part of the CNM. It should also be noted that, for this particular sightline, the small HINSA absorption may have a large uncertainty due to noise. It would be more meaningful if we were able to compare HINSA results from a statistical perspective. \subsubsection{HI line modeling} \citet{Bra2012} calculated the HI optical depth of the LMC by fitting the flatness of the HI spectrum, as explained in \citet{Bra2009}. We have compared the optical depth of HINSA-HI derived in our work with the HI optical depth result of \citet{Bra2012}, as shown in Figure~\ref{fig:Tau-TauRB}. No correlation is apparent. This is not surprising, because the two methods trace different gaseous components. \cite{Bra2012} assumed the atomic clouds are isothermal on scales of 100 pc and neglected multiple velocity components, which are prevalent in the LMC.
Our work, on the contrary, focuses on true optical depth effects arising from the temperature differences between molecular clouds and the surrounding HI gas. \begin{figure} \centering \includegraphics[width=3.3in]{CalcBraunTau.png} \caption{Pixel-by-pixel comparison of the derived optical depth values between this work and \cite{Bra2012}.}\label{fig:Tau-TauRB} \end{figure} \subsubsection{Milky Way HINSA measurements} In Section 4.1 we over-plotted the Milky Way HINSA-HI abundance results of \citet{LG2003} and \citet{Krco2010} on the histogram of the HINSA-HI abundance of the LMC. Although our result is of the same magnitude as the Milky Way results, the differences between the data sets should be noted: the Milky Way measurements are based on data of much better spatial and velocity resolution \citep[e.g. 0.13 pc and 0.16 km s$^{-1}$ for ][]{LG2003}, compared to 15 pc and 1.649 km s$^{-1}$ for the LMC. As in Section 5.1, we performed a simulation to investigate the impact of the HI resolution. It shows that when ${T}_{{\rm H}}$\ and ${\sigma}_{{H}}$\ are estimated correctly, the relatively low velocity resolution of 1.6 km s$^{-1}$ does not affect the measurement of the HINSA optical depth, but it will cause a $\sim10\%$ underestimate of the HINSA-HI abundance. Since our measurements also cover a much larger volume than Galactic studies, we may also underestimate the HINSA optical depth and abundance due to the low spatial filling factor of molecular clouds. There are also differences in methodology, in that the Milky Way studies use a Galactic rotation model to derive the dynamical distance for clouds, which can provide a relatively accurate estimate of the foreground gas content. For the LMC, we can only assume the clouds are located in the middle of the warm HI disk. These differences may affect in detail the comparison of our HINSA-HI abundance results with those of the Milky Way.
It is also worth mentioning that the latest HINSA measurement, by \citet{Zuo2018}, reported a relatively high HINSA-HI/$\rm{H}_{2}$\ ratio of 0.2\% to 2\% in a single very young molecular cloud that is considered to be still in the formation process. One of the initial assumptions of this study was that the metallicity difference between the LMC and the Milky Way might produce a measurable effect on the HINSA-HI/$\rm{H}_{2}$\ ratio of the two galaxies. However, the insignificant difference reported in Section 4.1 does not support such a scenario. The low metallicity that results in relatively low CO abundance does not appear to significantly affect the HINSA-HI/$\rm{H}_{2}$\ ratio. Similarly, the low metallicity that reduces the dust surface area on which $\rm{H}_{2}$\ can form does not affect HINSA-HI. This implies that molecular cloud cooling can still proceed despite lower dust and diffuse molecule abundances. $\rm{H}_{2}$\ self-shielding is likely fundamental in this process. \section{Summary} We have used ATCA+Parkes LMC HI survey data \citep{Kim2003} and MAGMA LMC CO Survey data (DR3) \citep{Wong2011} to locate and measure HI Narrow Self-Absorption (HINSA) features towards the molecular clouds in the LMC. This is the first confirmed detection of HINSA in an external galaxy. The HINSA-HI/$\rm{H}_{2}$\ ratio in the LMC varies from 0.5\e{-3} to 3.4\e{-3} (68\% interval), with a mean value of $(1.31 \pm 0.03)$\e{-3}, after correcting for the effect of foreground HI gas. This is slightly higher than, but not significantly different from, the Milky Way value from the combined results of \citet{LG2003} and \citet{Krco2010}, namely a 68\% interval range of 0.4\e{-3} to 3.0\e{-3} and a mean value of $(1.0 \pm 0.2)$\e{-3}. This result indicates that a similar amount of cold gas exists in the LMC as in the Milky Way. Unlike the case for stellar metallicity, the ratio does not show a radial gradient.
However, a key assumption is the accuracy of the CO X-factors that we have adopted for the Milky Way and the LMC. The small HINSA-HI/$\rm{H}_{2}$\ ratio shows that the molecular clouds in the LMC are more than 99 percent molecular, confirming the relatively short formation time scale of molecular clouds. We find that HINSA features are prevalent in the surveyed sightlines: a catalog of 37 sightlines where the peak HINSA-HI optical depth is higher than 0.2 is presented. Six typical regions where HINSA detections are concentrated (N11, N44, NAN17, NAN216, NAN223 and the LMC Ridge south of 30 Dor) are examined in detail, and the $^{13}$CO\ data for these regions are used to confirm the optically-thick assumption adopted in the calculations. We find no correlation between our results and those based on previously developed techniques, such as background continuum sources \citep[e.g.][]{Dic1994} or HI line profile shape \citep{Bra2012}. \section*{Acknowledgements} We thank the anonymous referee for useful and detailed comments. This work is supported by National Natural Science Foundation of China (NSFC) programs, No. 11988101, 11725313, 11690024, 11833008, and the CAS International Partnership Program No.114-A11KYSB20160008. The support provided by China Scholarship Council (CSC) during a visit of Boyang Liu to ICRAR/UWA is acknowledged. This work was carried out in part at the Jet Propulsion Laboratory which is operated for NASA by the California Institute of Technology. Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. \section{Introduction} It is generally accepted that molecular clouds are the birth places of stars \citep[and references therein]{MO2007}.
In the classic scenario \citep{Shu1987}, pre-star-forming molecular clouds are spherically layered structures, with the molecular, atomic and ionized gas phases assumed to be dominant from the inside to the outside. Various molecular tracers have been used to trace $\rm{H}_{2}$, the main constituent of molecular clouds, e.g. [C I] \citep{Pin2017,Val2018,Oka2019}, [C II] \citep{Tang2016,Zan2018,Ryb2019} and OH \citep{DWM2019,EA2019,Tang2017}, with CO being one of the most widely used \citep[e.g.][]{HD2015,Gen2015,MWISP2019}. By assuming a fixed dust-to-gas ratio, FIR and millimeter continuum observations can be used to indicate the total gas content, including atomic and molecular components \citep[e.g.][]{Bot2007,GR2014,Lenz2017}, although an inaccurate assumed dust-to-gas ratio can bias the result. The 21 cm line is generally used to trace the atomic hydrogen (HI) component and is considered to be optically thin in most situations \citep{GH1988}. Recombination lines as well as centimeter continuum are often used to trace the ionized gas component \citep{Ind2009}. While these tracers are able to depict the general picture of the different phases of gas, they have obvious weaknesses. CO and other molecules only trace $\rm{H}_{2}$\ above certain densities and extinctions, and their abundance can be easily biased by local metallicity \citep{Mad2016}. The excitation temperature and optical depth cannot be simultaneously determined from a single line, so the column density can easily be underestimated, even for the HI 21 cm line \citep{Ber2008,Hei2003b,Dic2003,ST2004,Dic2009}. Deriving the total amount of cold HI gas by analyzing self-absorption features of the 21 cm line is feasible, but is complicated by confusion due to multiple components \citep{RC1972,Gib2000,Kav2003,MCG2006,Den2018}.
More sophisticated approaches to the analysis of HI self-absorption have been made during the past decade: \citet{LG2003} proposed the concept of HI Narrow Self-Absorption (HINSA) to refer to the HI self-absorption features associated with cold HI gas mixed into molecular cores, following the discovery of narrow HI absorption features coinciding with OH emission lines in a number of Galactic clouds. These authors derived the column density of the cold HI gas indicated by HINSA features. By constructing a time-dependent molecular cloud formation model in which the rate of transformation of HI to $\rm{H}_{2}$\ by dust surface chemistry balances the $\rm{H}_{2}$\ destruction rate due to cosmic rays, \citet{GL2005} utilized the cold-HI/$\rm{H}_{2}$\ ratio derived from HINSA features as a chemical clock to probe the formation of molecular clouds. This established the HINSA technique as a new tool to study the early stages of molecular cloud formation. \citet{Tian2010} have also shown that the HINSA technique can be adopted as an indicator of the spatial relationship between gas features. \citet{LG2003} reported a HINSA detection rate of 77\% for the clouds in the Taurus/Perseus region. \citet{Krco2010} found a detection rate of over 80\% over a wide range of environments in the Galaxy. The prevalence of HINSA features suggests that cold HI gas is always associated with molecular cores, at least in our Galaxy. It is therefore of interest to explore a different environment to test for the presence of HINSA features, and to study the properties and evolution of molecular clouds using this technique. The Large Magellanic Cloud (LMC) is an ideal target for such a study. As the nearest gas-rich galaxy to the Milky Way, it is located at a distance of 50 kpc \citep{W1997,Pie2013,deG2014}. Its prominent disk has a low inclination angle of 33$^{\circ}$\ \citep{W1997}, i.e.\ it is close to face-on.
This permits spatially resolved studies of the galaxy's stellar and ISM content, making the study of the LMC more similar to ``galactic'' than ``extragalactic'' environments. With a smaller stellar mass of a few $10^9$ $M_{\odot}$\ \citep{Fei1980,Kim1998,AN2000}, the LMC is in a more primitive evolutionary state than the Milky Way and other large disk galaxies: its ISM metallicity is 0.2 dex lower than the local value \citep{RD1992,W1997,RD2019}, consistent with the trend of lower-mass galaxies having lower metallicity \citep[e.g.][]{Tre2004,Kew2008,Asa2009,Man2010,Sch2015}. Thus studies of the LMC have the potential to reveal the `gastrophysics' (gas astrophysics) and star formation laws of similar low-metallicity irregular galaxies in the high-redshift Universe \citep{Wil2009}. Several studies of the cool-phase HI in the LMC have been conducted in the past two decades. \citet{Dic1994} and \citet{Dic1995} suggested that the cool gas in the LMC is either more abundant or colder than that of the Milky Way by analyzing the absorption spectra of background compact continuum sources. \citet{Meb1997} and \citet{MZ2000} confirmed this trend and reported typical temperatures of the diffuse cool gas in the LMC of 30--40 K, compared with the typical value of 60 K in the solar neighborhood \citep{Kal1985}. \citet{Bra2012} used a different approach of Gaussian component fitting and found a low temperature for the LMC cool gas consistent with previous studies. This study also created an opacity-corrected HI column density map of the LMC, finding a global correction factor of 1.33. Infrared \citep{Ber2008,Gal2011,Meix2013} and ultraviolet \citep{Tum2002,Wel2012,RD2019} studies have also provided important information on cool-phase atomic gas in the LMC. Different techniques have been applied in previous HI absorption studies of the LMC. However, the HINSA technique has never been utilized beyond the Milky Way. With the advent of a recent LMC CO survey, i.e.
the MAGMA survey \citep{Wong2011} using the ATNF 22 m Mopra telescope, it is now possible to probe cold HI gas associated with molecular cores by applying the HINSA technique to the MAGMA CO cloud catalog. We have therefore conducted a joint analysis of the MAGMA CO data cube and the ATCA+Parkes HI survey data \citep{Kim2003} to study the properties of the HINSA cold HI gas in the LMC. Section 2 of the paper describes the data; Section 3 explains the data reduction process using different HINSA techniques; Section 4 presents the main results; and Section 5 discusses the applicability of different HI absorption techniques and the implications for the LMC. Finally, we summarize our results in Section 6. \section{Data} In this section we introduce the data used in this study. \begin{figure*} \centering \includegraphics[width=7in]{data_used.png} \caption{Data used in this work. Grayscale image: HI column density from the ATCA+Parkes LMC HI Survey \citep{Kim2003}; red contours: the MAGMA CO Survey DR3 moment 0 map, with a contour level at 1.0 K$\cdot$km/s; white rectangles: boundaries of the MAGMA $^{13}$CO\ maps for selected regions; white ellipses: radial rings as described in Section 4.3; white circle markers and green labels: the locations and IDs of the sources listed in Table 1.} \label{fig:data_map} \end{figure*} \subsection{HI} An HI 21 cm survey with a resolution of 1\arcmin\ ($\approx$15 pc assuming a distance of 50 kpc) was conducted during the late 1990s with the Australia Telescope Compact Array (ATCA) \citep{Kim1998}. Due to the missing-flux problem for interferometers, this survey was not sensitive to structures larger than 500 pc. To complement these data, \citet{Kim2003} combined the ATCA interferometer and Parkes single-dish observations \citep{SS2003} to give the most complete HI survey of the LMC in terms of sky and spatial frequency coverage. Their data cube contains a complete sampling of spatial structures from 15 pc to 10 kpc.
The velocity resolution is 1.649 km\,s$^{-1}$ and the brightness temperature sensitivity is 2.4 K. \subsection{CO} The most complete CO survey in terms of sky coverage in the past decade has been the second LMC CO survey conducted with the NANTEN telescope \citep{Fuk2008}. It is a spatially continuous survey which identified 272 molecular clouds. The Magellanic Mopra Assessment (MAGMA) is a follow-up CO survey targeting the detected regions, with better sensitivity by a factor of 2, conducted with the ATNF 22 m Mopra telescope \citep{Hug2010}. \citet{Wong2011} cataloged 450 molecular clouds based on the CO $J$=1--0 map. We employ the third data release of MAGMA for this study \citep{Wong2011,Wong2017}. It contains the CO $J$=1--0 cube described in \citet{Wong2011}. The cube has an angular resolution of 45\arcsec\ and a pixel spacing of 15\arcsec. The velocity resolution is 0.526 km\,s$^{-1}$. The rms noise of the cube is typically 300 mK. Compared to the version presented in the published paper \citep{Wong2011}, the released data cube has had a constant 10 mK offset applied to bring the baseline back to $\sim$0 K. As described in Sections 4.1 and 5.1, we also utilized the unreleased MAGMA $^{13}$CO\ data for optical depth determination. $^{13}$CO\ observations were obtained simultaneously with the $^{12}$CO\ observations for data taken between 2006 June and 2013 September, and will be described fully in a separate paper (Wong et al., in preparation). A merged cube was generated from 1244 individual 5\arcmin\ $\times$ 5\arcmin\ square maps spanning a heliocentric velocity range of 200--325 km s$^{-1}$. The CO spectra were placed on a main-beam brightness temperature scale ($T_{\rm mb}$) assuming an ``extended beam'' efficiency of 0.43, based on daily observations of Orion KL referenced to the measurements of \citet{Ladd2005}. Our $T_{\rm mb}$ scale has recently been confirmed by comparison with ALMA total power mapping (R. Indebetouw, private communication).
The resulting maps have a Gaussian beam of 45\arcsec\ FWHM, which is oversampled with a pixel scale of 15\arcsec. The typical RMS map noise is $\sigma(T_{\rm mb}) \approx 0.19$ K per 0.55 km s$^{-1}$ channel. The spatial coverage of the CO and $^{13}$CO\ data used in this study is shown in Figure~\ref{fig:data_map}, on top of the HI column density map of the LMC. \section{Methods} \subsection{HINSA techniques} One challenge in applying the HINSA concept to the analysis of HI absorption features is how to reconstruct the background emission, i.e.\ the ``original'' spectrum before absorption. An accurately recovered ``original'' spectrum leads to an accurately defined absorption line profile, and vice versa. Previous studies have used several different approaches. \citet{LG2003} adopted an intuitive method: masking the absorption feature and fitting the rest of the HI profile with a polynomial. This is common practice in absorption analysis, but suffers from subjectivity in judging the shape of the original spectrum. As they reported, the fitted result can vary by as much as 1 K for different orders of polynomial. \citet{Per2011} assumed a smooth and gradual variation of the background emission, and took the average spectrum of several reference points around the center of the core as the ``original'' spectrum. But, as stated by many authors, HI gas is intrinsically filamentary \citep[e.g.][]{Elm2011}, so treating it as ``smooth and gradual'' can introduce unpredictable biases. \citet{Krc2008} presented a new technique to improve the quality of the HINSA feature fitting procedure. Given the narrow nature of HINSA features, they proposed that the narrow dip in the HI profile would generate a feature in the 2nd derivative of the observed line profile, since the slowly varying ``original'' profile is largely suppressed while the rapidly varying absorption dip is highlighted. This was used to locate the HINSA-like absorption features in the HI profile.
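To illustrate how a narrow dip stands out in the second derivative, the following minimal sketch uses synthetic, illustrative line parameters (not survey data or the actual pipeline):

```python
import numpy as np

# Synthetic HI spectrum: a broad emission line with a narrow absorption dip
v = np.arange(-50.0, 50.0, 0.5)                      # velocity axis, km/s
broad = 60.0 * np.exp(-v**2 / (2 * 10.0**2))         # smooth "original" emission, K
dip = -8.0 * np.exp(-(v - 5.0)**2 / (2 * 1.5**2))    # narrow HINSA-like dip, K
spectrum = broad + dip

# Second difference: the slowly varying emission is strongly suppressed,
# while the narrow dip produces a localized positive feature
d2 = np.diff(spectrum, 2)
peak_channel = np.argmax(d2) + 1                     # +1 re-centers after diff

# The curvature peak falls at the dip's center velocity (here 5 km/s)
print(v[peak_channel])
```

Because the second derivative of a Gaussian scales as its amplitude divided by the square of its width, the narrow dip dominates the curvature even though it is much shallower than the broad line.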
By constraining the regions searched by such a method with molecular tracers, it is possible to find the HI self-absorption features associated with molecular clouds. This provides a more convenient way to extract the HINSA profile, with more confidence than the previous methods. \subsection{HINSA techniques applied in this work} In this work we adopt the \citet{Krc2008} technique, with some modifications to cope with the fact that the MAGMA program had only released $^{12}$CO\ data at the time of our analysis. \subsubsection{Radiative transfer analysis} Assuming the cold HI gas responsible for a HINSA feature has optical depth $\tau\left(v\right)$, then: \begin{equation} T_{A}\left(v\right)=T_{b}\left(v\right)e^{-\tau\left(v\right)}+T_{H}\left[1-e^{-\tau\left(v\right)}\right], \end{equation} where $v$\ is the velocity, $T_{\rm A}\left(v\right)$\ is the observed HI spectrum, and $T_{\rm b}\left(v\right)$\ is the background HI emission, or so-called ``original'' spectrum, including the emission from background HI clouds as well as other background sources such as the CMB. $T_{\rm H}$\ is the temperature of the HINSA-generating cold HI associated with molecular material. In writing this expression, we have neglected the foreground warm HI, which is not affected by the absorbing cold HI gas. The same approximation was adopted by \citet{Krc2008} for nearby sources in the Galaxy. For sources in the LMC, which could be embedded anywhere in the HI disk, this could be a poorer assumption. The impact of this will be discussed later.
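Equation (1) can be forward-modeled directly. The sketch below uses illustrative synthetic values (a 60 K background line, a 5 K cold-HI component, and a Gaussian opacity), not actual survey spectra:

```python
import numpy as np

v = np.arange(-50.0, 50.0, 0.5)                  # km/s
T_b = 60.0 * np.exp(-v**2 / (2 * 10.0**2))       # background ("original") emission, K
T_H = 5.0                                        # cold HI temperature, K
tau = 0.5 * np.exp(-v**2 / (2 * 1.5**2))         # Gaussian opacity of the cold HI

# Eq. (1): absorbed background plus emission from the cold HI itself
T_A = T_b * np.exp(-tau) + T_H * (1 - np.exp(-tau))

# The absorption depth T_b - T_A = (T_b - T_H)(1 - e^-tau) is deepest
# at line center, where both the background and the opacity peak
depth = T_b - T_A
```

Note that the dip depth depends on the contrast $T_{\rm b}-T_{\rm H}$: cold HI in front of a bright background produces a strong feature, while $T_{\rm b}\approx T_{\rm H}$ produces none.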
We make the simple assumption that $\tau\left(v\right)$\ has a Gaussian shape, which can be expressed as \begin{equation} \tau\left(v\right)=\tau_{0}\exp\left(-\frac{\left(v-v_{H}\right)^{2}}{2\sigma_{H}^{2}}\right), \end{equation} where $\tau_{0}$\ is the peak optical depth of the cold HI gas, $v_{\rm H}$\ is the velocity of the peak optical depth, and $\sigma_{H}$\ is the width of the optical depth profile. In our study, we perform a single Gaussian fit to the CO spectrum, and take the fitted central velocity of the CO peak as the value of $v_{\rm H}$. The line width of the gas component, $\sigma_{H}$, consists of thermal and non-thermal components according to: \begin{equation} \sigma_{H}=\left(\sigma_{H_{th}}^{2}+\sigma_{H_{nt}}^{2}\right)^{\frac{1}{2}}, \end{equation} where the subscripts \textit{th}\ and \textit{nt}\ represent thermal and non-thermal, respectively. Similarly, for the CO gas: \begin{equation} \sigma_{CO}=\left(\sigma_{CO_{th}}^{2}+\sigma_{CO_{nt}}^{2}\right)^{\frac{1}{2}}. \end{equation} For well-mixed gas, the non-thermal line width should be similar for the different components \citep{LG2003}. Combining equations (3) and (4), we obtain: \begin{equation} \sigma_{H}=\left[\sigma_{CO}^{2}+\left(\sigma_{H_{th}}^{2}-\sigma_{CO_{th}}^{2}\right)\right]^{\frac{1}{2}}, \end{equation} where the thermal linewidths of both the HI and CO gas satisfy \begin{equation} \sigma_{th}=\left(\frac{2kT}{m}\right)^{\frac{1}{2}}, \end{equation} where $m$ represents the mass of a hydrogen atom or a CO molecule when $\sigma_{th}$\ is replaced by $\sigma_{H_{th}}$\ or $\sigma_{CO_{th}}$, respectively.
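Equations (5) and (6) can be evaluated as in the following small sketch; the 20 K temperature and 1.0 km s$^{-1}$ CO width are illustrative inputs, not fitted values:

```python
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
M_H = 1.6735575e-27         # mass of a hydrogen atom, kg
M_CO = 28.0 * 1.66054e-27   # mass of a CO molecule, kg

def sigma_thermal(T_k, m):
    """Eq. (6): 1-D thermal velocity dispersion, km/s."""
    return np.sqrt(2 * K_B * T_k / m) / 1e3

def sigma_hinsa(sigma_co, T_k):
    """Eq. (5): HINSA width inferred from the CO width, assuming the
    non-thermal (turbulent) widths of the two tracers are equal."""
    return np.sqrt(sigma_co**2
                   + sigma_thermal(T_k, M_H)**2
                   - sigma_thermal(T_k, M_CO)**2)

# Example: a CO line of sigma = 1.0 km/s in gas at T_k = 20 K.
# HI, being 28x lighter than CO, has the larger thermal width,
# so the inferred HINSA width always exceeds the CO width.
s = sigma_hinsa(1.0, 20.0)
```

At 20 K the HI thermal width is about 0.57 km s$^{-1}$, so the thermal correction matters most for narrow CO lines.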
Assuming that the different gas components inside the molecular cloud are in thermodynamic equilibrium, the temperature $T$ in equation (6) can be replaced, for either HI or CO, with the same CO kinetic temperature $T_{\rm k}$. Under the assumption of LTE, we take $T_{\rm k}$\ to be equal to $T_{\rm ex}$, the excitation temperature of CO. We therefore have \begin{equation} f\left(T_{ex}\right)=\frac{T_{B_{0}}}{T_{1-0}}+f\left(T_{bg}\right), \end{equation} where $f\left(T\right)$ is defined as \begin{equation} f\left(T\right)=\frac{1}{\exp\left(\frac{T_{1-0}}{T}\right)-1}. \end{equation} $T_{{\rm B}_{0}}$\ is the brightness temperature at the CO line center, here adopted as the peak temperature of the fitted Gaussian profile. $T_{1-0}$\ is the equivalent temperature of the $^{12}$CO $J=1-0$\ transition and has the value 5.53~K. $T_{\rm bg}$\ is the background field temperature, for which we use the CMB temperature of 2.73 K. With these assumptions and relations, we can recover the ``original'' HI spectrum as a function of a single variable, $\tau_{0}$. As demonstrated in \citet{Krc2008}, a narrow dip in a smooth line generates a prominent feature in the 2nd-derivative profile. Ideally, such a feature is minimized when $\tau_{0}$\ is adjusted so that the narrow dip in the recovered spectrum vanishes. We therefore integrate the square of the 2nd derivative of the recovered ``original'' spectrum, and adjust $\tau_{0}$\ to minimize this integral, stopping when the change between successive iterations falls below a precision criterion of $10^{-4}$ K km s$^{-1}$. This yields the peak optical depth $\tau_{0}$\ and the ``original'' HI spectrum before absorption, $T_{\rm b}\left(v\right)$. The amount of HINSA absorption as a function of velocity, i.e.\ the HINSA profile, is the difference between the ``original'' HI spectrum and the observed HI spectrum.
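The $\tau_{0}$ optimization can be sketched on synthetic data as follows; a simple grid search stands in for the iterative adjustment described above, and all spectral parameters are illustrative:

```python
import numpy as np

v = np.arange(-50.0, 50.0, 0.5)                    # km/s
T_b_true = 60.0 * np.exp(-v**2 / (2 * 10.0**2))    # smooth background, K
T_H, sigma_H, tau0_true = 5.0, 1.5, 0.5            # assumed cold-HI parameters

def tau_profile(tau0):
    return tau0 * np.exp(-v**2 / (2 * sigma_H**2))

# "Observed" spectrum per Eq. (1)
T_A = (T_b_true * np.exp(-tau_profile(tau0_true))
       + T_H * (1 - np.exp(-tau_profile(tau0_true))))

def curvature(tau0):
    # Invert Eq. (1) for a trial tau0 to recover the "original" spectrum,
    # then sum the square of its second derivative
    tau = tau_profile(tau0)
    T_b = (T_A - T_H * (1 - np.exp(-tau))) * np.exp(tau)
    return np.sum(np.diff(T_b, 2)**2)

grid = np.linspace(0.0, 1.0, 201)
tau0_best = grid[np.argmin([curvature(t) for t in grid])]
# The residual narrow dip (or bump) in the trial spectrum vanishes only
# at the true optical depth, so tau0_best recovers the input value
```

For any wrong trial $\tau_{0}$, the recovered spectrum retains a narrow residual whose curvature dominates the integral, so the minimum identifies the true opacity.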
Then we can derive the HINSA brightness temperature profile as \begin{equation} \begin{aligned} T_{\rm HINSA}\left(v\right)&=T_{b}\left(v\right)-T_{A}\left(v\right)\\ &=\left(T_{A}\left(v\right)-T_{ex}\right)\left(e^{\tau\left(v\right)}-1\right). \end{aligned} \end{equation} In summary, we derive the HINSA profile using the following steps: \begin{itemize} \item Calculate the so-called ``original'' HI spectrum, which does not show the absorption and thus appears smoother (the smoothness of the ``original'' spectrum is judged by its 2nd derivative). \item Subtract the real HI spectrum from the calculated ``original'' spectrum to derive the HINSA profile. \end{itemize} \subsection{Deriving physical parameters} It can be seen from Equation 9 that when the central velocities of $\tau\left(v\right)$\ and the observed HI spectrum $T_{\rm A}\left(v\right)$\ are different, an asymmetric $T_{\rm HINSA}\left(v\right)$\ profile results. To parameterize the $T_{\rm HINSA}\left(v\right)$\ profile, a Gaussian fit is performed, from which the peak temperature, central velocity and width $\sigma_{\rm HINSA}$\ are derived. We then calculate the column density of the HINSA-associated cold HI based on formula (13) of \citet{LG2003}: \begin{equation} \frac{N\left({\rm HINSA}\right)}{{\rm cm}^{-2}}=1.95\times {10}^{18}\,\tau_{0}\,\frac{\sigma_{\rm HINSA}}{\rm km\;s^{-1}}\left(\frac{T_k}{\rm K}\right). \end{equation} CO is almost always optically thick in molecular cores, so the estimation of the $\rm{H}_{2}$\ column density $N\left({H}_{2}\right)$\ based solely on CO can be unreliable. However, the CO luminosity-to-$\rm{H}_{2}$\ column density conversion factor, or X-factor, is often the only way to estimate the $\rm{H}_{2}$\ column density in external galaxies \citep[and references therein]{Bol2013}.
Similarly for the LMC, there is currently no other molecular tracer available with such complete coverage. We therefore use the latest estimate of the LMC X-factor, $4\times {10}^{20}\,{\rm cm}^{-2}\left({\rm K\;km\;s^{-1}}\right)^{-1}$ \citep{Bol2013}, which is a direct result of the MAGMA Project \citep{Hug2010,Wong2011,Pin2010}. The integrated flux of the CO profile is calculated from the Gaussian fit to avoid the effect of component blending. The HI-to-H$_2$ ratio, defined as the ratio of the HINSA-associated cold HI column density to the $\rm{H}_{2}$\ column density, is calculated by comparing $N\left({\rm HINSA}\right)$\ and $N\left({H}_{2}\right)$. It is the principal derived parameter characterizing the abundance of the HINSA-associated HI. \subsection{Optical depth correction} The molecular clouds in which HINSA features are detected are embedded in the LMC's HI gas disk. The presence of foreground HI gas diminishes the strength of the HINSA absorption dip that we seek. Unlike \citet{LG2003}, who estimated the proportion of foreground gas using the Galactic rotation curve, we do not know the locations of the molecular clouds within the LMC disk. Here we evaluate the effect of the foreground gas on the observed HINSA features. Using the same variable $p$ as \citet{LG2003} to describe the position of a given molecular cloud in a uniform disk, $(1-p)$ is the fraction of foreground HI gas relative to the total amount of HI gas along the line of sight.
The \emph{real} optical depth of the HINSA HI (equation 12 of \citet{LG2003}) is: \begin{equation} \tau_{0}^{\prime}=\ln \left[\frac{pT_{b}+\left(T_{c}-T_{H}\right)\left(1-\tau_{f}\right)}{pT_{b}+\left(T_{c}-T_{H}\right)\left(1-\tau_{f}\right)-T_{\rm HINSA}}\right], \end{equation} where $T_{\rm b}$, $T_{\rm H}$\ and $T_{\rm HINSA}$\ are as defined in Section 3.2.1, $T_{\rm c}$\ is the continuum temperature, and $\tau_{f}$\ is the foreground HI optical depth, with $\tau_{f}=\left(1-p\right)\tau_{HI}$, where $\tau_{HI}$\ is the total HI optical depth along the line of sight through the LMC's disk. When the foreground HI is ignored, as was done in Section 3.2.1, $p=1$ and $\tau_{0}^{\prime}=\tau_{0}$\ as defined there. The optical depth correction factor $C$ is defined by: \begin{equation} C=\frac{\tau_{0}^{\prime}}{\tau_{0}}. \end{equation} Using a typical set of parameters, $T_{\rm b}=80$ K (Galactic value, \citet{LG2003}), $T_{\rm c}=3.8$ K, $T_{\rm H}=10$ K and $\tau_{HI}=0.7$, the resulting $C(p,\,T_{\rm HINSA})$ relation is shown in Figure~\ref{fig:correctionfactor}. As shown in Figure \ref{fig:A1}, the adopted value $T_{\rm b}=80$ K is also typical of HINSA regions in the LMC. \citet{LG2003} adopted $T_{\rm c}=3.5$ K for Milky Way studies, whereas the value used for the LMC (3.8 K) is derived from the 20-cm continuum map of \citet{Hugh2007}. The flux for Region 3 of \citet{Hugh2007}, where the continuum flux at 3.75 GHz \citep{Hayn1991} is higher than 40 mJy beam$^{-1}$, i.e.\ the brighter part of the LMC, is used in the derivation of $T_{\rm c}$. The value adopted for $\tau_{HI}$\ is the average of the $\tau_{max}$\ values measured by \citet{Dic1994}, \citet{Meb1997} and \citet{MZ2000} towards 87 radio sources behind the LMC.
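As a numerical check of Equations (11) and (12), using the typical parameter values quoted above:

```python
import numpy as np

def tau0_corrected(p, T_HINSA, T_b=80.0, T_c=3.8, T_H=10.0, tau_HI=0.7):
    """Eq. (11): true HINSA optical depth for a cloud a fraction p deep
    into a uniform HI disk (defaults are the typical values in the text)."""
    tau_f = (1 - p) * tau_HI                       # foreground HI optical depth
    num = p * T_b + (T_c - T_H) * (1 - tau_f)
    return np.log(num / (num - T_HINSA))

def correction_factor(p, T_HINSA):
    """Eq. (12): C = tau0' / tau0, where tau0 is the p = 1 (no-foreground) value."""
    return tau0_corrected(p, T_HINSA) / tau0_corrected(1.0, T_HINSA)

# For a cloud at the disk midplane (p = 0.5) with T_HINSA = 1 K,
# the correction is roughly a factor of 2
C = correction_factor(0.5, 1.0)
```

By construction $C=1$ at $p=1$, and for $p\sim0.5$ the value stays close to 2 over the relevant range of $T_{\rm HINSA}$, as discussed below.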
The value of $C$ is large when $p$ is small, and is very sensitive to $T_{\rm HINSA}$. For $p>0.3$, however, the scatter becomes smaller and $C$ approaches unity. Although the exact value of $p$ is unknown, assuming that the scale height of the molecular disk of the LMC is smaller than that of the HI disk, we can adopt $p={0.5}$. For $p\sim0.5$, the value of $C$ remains close to $\sim$2 for different values of $T_{\rm HINSA}$: the difference is less than 14\% for values between 0.1 K and 10 K. In the following calculation, $T_{\rm c}$\ is fixed at ${3.8}$ K, while the values of $T_{\rm b}$, $T_{\rm H}$\ and $T_{\rm HINSA}$\ are taken on a pixel-by-pixel basis. \begin{figure} \centering \includegraphics[width=3.3in]{calccorrection.png} \caption{The optical depth correction factor $C$ for different values of the embedding depth $p$ and $T_{\rm HINSA}$.} \label{fig:correctionfactor} \end{figure} \section{Results} Of the $1997\times 2230$ pixels in the HI data cube, 1446 pixels show HINSA features, i.e.\ an angular filling factor of $\sim 3\times 10^{-4}$. The details of these detections are given below. \subsection{HINSA-HI abundance} Figure~\ref{fig:HIH2ratio} shows the histogram of the HINSA-HI abundance, i.e.\ the ratio of the HINSA-HI column density to that of $\rm{H}_{2}$, which is important for comparison with other studies. The value after optical depth correction varies from 0.5\e{-3} to 3.4\e{-3} (68\% interval), with a mean value of $(1.31 \pm 0.03)$\e{-3}; the value before correction varies from 0.3\e{-3} to 1.6\e{-3} (68\% interval), with a mean value of $(0.64 \pm 0.02)$\e{-3}. For comparison, we also show the results of \citet{LG2003}, a HINSA survey of the Taurus Molecular Cloud, and of \citet{Krco2010}, a HINSA survey of other regions in the Milky Way.
The 68\% intervals are 0.2\e{-3} to 4.4\e{-3} for \citet{LG2003}, 0.5\e{-3} to 2.5\e{-3} for \citet{Krco2010}, and 0.4\e{-3} to 3.0\e{-3} for the two Milky Way samples combined. The mean value for the combined Milky Way samples is $(1.0 \pm 0.2)$\e{-3}. Our result shows that the LMC's HINSA-HI/$\rm{H}_{2}$\ abundance ratio is slightly higher than, but not significantly different from, the Milky Way value, implying that the LMC has a cold gas fraction similar to that of the Milky Way. \begin{figure*} \centering \includegraphics[width=6in]{DR3calcHItoH2.png} \caption{A histogram of the HINSA-HI to $\rm{H}_{2}$\ ratio $\log_{10} (N_{\rm HINSA}/N_{\rm H_2})$. The bold black histogram shows the LMC results from the present work. The red histogram shows the result for the Taurus/Perseus region from \citet{LG2003}. The green histogram shows the result for Milky Way regions outside Taurus from \citet{Krco2010} (the values for each velocity component, rather than the mean value for each line of sight, are used). The blue histogram shows the sum of the previous two studies. To improve the visibility of the diagram, the $y$-axis is scaled up by a factor of 5 for the Milky Way results.} \label{fig:HIH2ratio} \end{figure*} \subsection{Catalog} The HINSA detections were inspected manually. Consecutive pixels with detections were catalogued into the same ``group''. There are 37 groups of HINSA detections in the LMC in which the peak optical depth of HINSA-HI is higher than 0.2. Table \ref{tbl:selectedsightlines} is a catalog of the physical parameters at the peak optical depth positions of these groups. \begin{table*} \centering \begin{tabular}{lccccccccr} \hline \hline No.
& $\alpha$ (2000) & $\delta$ (2000) & $\tau_{0}$ & $T_{H}$ & $\sigma_{H}$ & $N_{\rm HINSA}$ & $N_{\rm H_2}$ & $N_{\rm HINSA} /N_{\rm H_2}$ & Cloud ID \\ & (h:m:s) & (\arcdeg:\arcmin:\arcsec) & & (K) & (km\,s$^{-1}$) & (cm$^{-2}$) & (cm$^{-2}$) & & \\\hline 1 & 04:47:21.90 & -67:11:42.3 & 0.29 & 4.0 & 2.0 & 7.6E+18 & 2.5E+21 & 7.0E-03 & 9 \\ 2 & 04:47:34.98 & -67:12:16.0 & 0.31 & 4.5 & 1.9 & 8.3E+18 & 2.3E+21 & 8.8E-03 & 10 \\ 3 & 04:49:01.79 & -68:36:17.2 & 0.36 & 5.2 & 1.8 & 1.2E+19 & 3.9E+21 & 7.3E-03 & 19 \\ 4 & 04:49:11.07 & -68:35:03.9 & 0.30 & 4.9 & 0.9 & 4.0E+18 & 1.6E+21 & 5.8E-03 & 24 \\ 5 & 04:49:29.52 & -68:30:14.5 & 0.21 & 4.3 & 1.9 & 5.7E+18 & 2.5E+21 & 5.1E-03 & 30 \\ 6 & 04:50:23.76 & -69:30:15.9 & 0.31 & 4.7 & 0.7 & 3.5E+18 & 1.1E+21 & 7.5E-03 & 36 \\ 7 & 04:51:50.21 & -69:21:18.0 & 0.31 & 4.3 & 1.4 & 6.2E+18 & 2.4E+21 & 6.0E-03 & 44 \\ 8 & 04:52:16.74 & -66:53:40.6 & 0.24 & 4.0 & 1.5 & 4.7E+18 & 1.8E+21 & 6.0E-03 & 50 \\ 9 & 04:52:51.04 & -68:03:51.5 & 0.31 & 4.7 & 1.7 & 8.2E+18 & 3.1E+21 & 6.1E-03 & 58 \\ 10 & 04:54:05.70 & -69:11:33.1 & 0.29 & 5.1 & 2.6 & 1.3E+19 & 5.0E+21 & 5.9E-03 & 65 \\ 11 & 04:55:33.86 & -66:28:16.9 & 0.33 & 4.8 & 2.3 & 1.2E+19 & 4.6E+21 & 6.3E-03 & 78 \\ 12 & 04:56:17.62 & -66:37:26.5 & 0.26 & 5.0 & 1.4 & 6.2E+18 & 2.8E+21 & 5.1E-03 & 80 \\ 13 & 04:58:42.28 & -66:07:59.2 & 0.20 & 5.3 & 1.7 & 5.9E+18 & 3.8E+21 & 3.5E-03 & 110 \\ 14 & 05:03:47.65 & -67:18:35.1 & 0.20 & 5.5 & 1.8 & 6.3E+18 & 4.5E+21 & 3.1E-03 & 137 \\ 15 & 05:05:26.14 & -66:53:54.0 & 0.24 & 4.6 & 1.4 & 5.1E+18 & 2.2E+21 & 5.4E-03 & 146 \\ 16 & 05:09:55.96 & -68:53:33.3 & 0.22 & 4.5 & 2.7 & 8.7E+18 & 4.1E+21 & 4.7E-03 & 165 \\ 17 & 05:13:21.03 & -69:23:03.4 & 0.24 & 7.4 & 1.8 & 9.0E+18 & 7.7E+21 & 2.6E-03 & 207 \\ 18 & 05:13:25.50 & -67:28:17.6 & 0.31 & 4.4 & 1.4 & 6.2E+18 & 1.8E+21 & 8.5E-03 & 206 \\ 19 & 05:13:51.33 & -67:07:42.8 & 0.27 & 3.4 & 2.1 & 5.6E+18 & 1.1E+21 & 1.2E-02 & - \\ 20 & 05:14:33.31 & -68:46:09.2 & 0.36 & 4.8 & 1.6 & 8.6E+18 & 2.8E+21 & 
7.3E-03 & 213 \\ 21 & 05:22:12.97 & -67:57:42.9 & 0.57 & 4.3 & 2.9 & 2.4E+19 & 3.6E+21 & 1.9E-02 & 291 \\ 22 & 05:24:21.84 & -68:25:41.2 & 0.38 & 6.2 & 2.3 & 1.8E+19 & 7.0E+21 & 6.3E-03 & 350 \\ 23 & 05:24:51.46 & -69:40:20.8 & 0.36 & 4.7 & 2.8 & 1.5E+19 & 4.6E+21 & 8.4E-03 & 355 \\ 24 & 05:25:10.68 & -69:40:40.1 & 0.23 & 7.6 & 1.5 & 7.6E+18 & 6.5E+21 & 2.6E-03 & 358 \\ 25 & 05:25:53.67 & -66:14:07.3 & 0.22 & 4.1 & 2.6 & 7.6E+18 & 3.0E+21 & 5.6E-03 & 374 \\ 26 & 05:35:24.75 & -67:34:48.0 & 0.96 & 4.1 & 3.6 & 4.8E+19 & 3.6E+21 & 4.1E-02 & 451 \\ 27 & 05:35:47.86 & -69:13:08.0 & 0.30 & 4.5 & 1.3 & 6.0E+18 & 3.4E+21 & 4.0E-03 & 459 \\ 28 & 05:35:53.06 & -69:02:22.9 & 0.27 & 5.2 & 2.3 & 1.1E+19 & 5.3E+21 & 4.7E-03 & 462 \\ 29 & 05:38:29.73 & -69:02:09.6 & 0.28 & 4.9 & 2.2 & 9.7E+18 & 4.1E+21 & 5.5E-03 & 508 \\ 30 & 05:39:35.66 & -69:46:16.4 & 0.22 & 6.4 & 3.2 & 1.4E+19 & 1.0E+22 & 3.0E-03 & 531 \\ 31 & 05:39:44.42 & -69:37:31.6 & 0.34 & 4.4 & 1.7 & 8.2E+18 & 3.1E+21 & 6.5E-03 & 548 \\ 32 & 05:40:02.89 & -69:51:26.4 & 0.48 & 6.9 & 3.1 & 3.2E+19 & 1.2E+22 & 7.5E-03 & - \\ 33 & 05:41:16.56 & -70:55:31.9 & 0.37 & 4.8 & 2.0 & 1.3E+19 & 3.4E+21 & 9.2E-03 & 606 \\ 34 & 05:43:23.42 & -69:25:12.2 & 0.20 & 5.1 & 2.0 & 6.4E+18 & 4.1E+21 & 3.5E-03 & 635 \\ 35 & 05:44:42.57 & -69:28:13.6 & 0.36 & 6.0 & 2.7 & 1.9E+19 & 8.3E+21 & 5.7E-03 & 650 \\ 36 & 05:46:17.45 & -69:38:26.6 & 0.27 & 3.9 & 2.3 & 7.9E+18 & 2.2E+21 & 8.1E-03 & 666 \\ 37 & 05:47:01.88 & -70:46:11.0 & 0.23 & 4.1 & 3.1 & 9.0E+18 & 3.4E+21 & 5.8E-03 & 672 \\ \hline\hline \end{tabular} \caption{Physical parameters for the sightlines where HINSA optical depth is greater than 0.2. The Cloud ID is from Catalog C in \citet{Wong2011}. The spectra for each of these sightlines are displayed in the Appendix A.} \label{tbl:selectedsightlines} \end{table*} \subsection{Spatial distribution} \citet{Cho2016} derive a photometric metallicity map of the LMC using MCPS and OGLE III data. 
It shows a shallow metallicity gradient, with the central bar having the highest metallicity and the outer parts the lowest. To test whether there is a corresponding trend in the spatial distribution of the HINSA-HI abundance ratio, we divide the LMC into concentric elliptical rings. The ring boundaries start at a radius of 0.5 kpc and are spaced by 0.5 kpc, doubling the bin width used by \citet{Cho2016}. The position angle (PA) of these rings is taken from the PA measurements of \citet{Kim1998}. An HI morphologically derived inclination angle of 22 degrees is adopted, following \citet{Kim1998}. The kinematic centre of the LMC HI disk is used here \citep[$05^{\rm h}17.6^{\rm m}$, $-69^{\circ}02^{\prime}$, as given by][]{Kim1998}, which deviates from the optical centre used by \citet{Cho2016} by 27 arcmin. The pixels with HINSA detections are divided into radial bins containing 189, 153, 343, 442, 248, 51, 26 and 51 pixels, from small to large radius respectively. We derive the mean and standard deviation for each bin by fitting the $\log_{10}$ histogram with a Gaussian. The result, shown in Figure~\ref{fig:DR3HItoH2_rings}, reveals no radial gradient in the HINSA-HI abundance in the LMC. To examine whether there is any radial trend in the HINSA-HI abundance ratio in the Milky Way, we have looked into the distances of the molecular clouds studied in the previous Milky Way HINSA surveys \citep{LG2003,Krco2010}. We find that the existing HINSA measurements in the Milky Way are focused either on nearby molecular clouds (less than 1 kpc from the Sun) or on clouds at unknown distances. It is therefore not yet possible to make a definitive statement on the radial distribution of the HINSA-HI abundance in the Milky Way. \begin{figure} \centering \includegraphics[width=3in]{DR3HItoH2_rings.png} \caption{HINSA-HI abundance as a function of the radius from the center of the LMC.
The error bars show the standard deviation within each bin.} \label{fig:DR3HItoH2_rings} \end{figure} \subsection{Highlighted regions} The detection of HINSA signatures is prevalent along sightlines towards molecular clouds in the LMC. Of the regions with strong and concentrated HINSA signatures, six have $^{13}$CO\ data: N11, N44, NAN17, NAN216, NAN223 and the Ridge southward of 30 Dor (the `Ridge'). The $^{13}$CO\ data are used to help determine the optical depth of CO, as discussed in Section 5.1. These regions are highlighted here: maps of the HINSA-HI optical depth are shown in Appendix B. The highlighted regions are mostly distributed along the two spiral features of the LMC, with one (N44) located to the north of the optical bar. The distribution of these selected regions is similar to the distribution of star formation activity in the LMC: 30 Dor and the southern ridge have the most violent star formation activity; the western spiral feature and the region north of the optical bar are also quite active, while the region south of the optical bar lacks major star formation activity and molecular clouds. These maps show that the distributions of HI emission, CO and HINSA-HI roughly follow an onion-shell structure, with HI emission in the outer shell and HINSA-HI in the inner core. However, the spatial peak of the HINSA-HI optical depth is often mismatched with the peak of the CO cloud. This may reflect the inadequacy of CO as an $\rm{H}_{2}$\ cloud tracer, or it may reflect an evolutionary sequence. \citet{Zuo2018} reported the discovery of a shell structure of HINSA-HI around a molecular cloud in the Milky Way, indicating the depletion of atomic hydrogen in the center of the molecular cloud. The mismatch between the HINSA-HI peak and the CO cloud peak in LMC clouds could have a similar cause, although our lower spatial resolution makes this harder to judge.
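For reference, the elliptical-ring binning of Section 4.3 can be sketched as follows. This is a flat-sky approximation, and the position angle value is an illustrative stand-in (the adopted PA of Kim et al. 1998 is not restated in the text); the centre, inclination and distance follow the values quoted above:

```python
import numpy as np

def deprojected_radius_kpc(ra, dec, ra0=79.40, dec0=-69.03,
                           pa_deg=170.0, incl_deg=22.0, dist_kpc=50.0):
    """Deprojected galactocentric radius under a flat-sky approximation.
    ra0/dec0: kinematic centre (05h17.6m, -69d02'); incl_deg = 22;
    pa_deg is a hypothetical illustrative value. All angles in degrees."""
    dx = (ra - ra0) * np.cos(np.radians(dec0))   # RA offset on the sky, deg
    dy = dec - dec0
    pa = np.radians(pa_deg)
    # Rotate into the disk frame: x' along the major axis
    xp = dx * np.sin(pa) + dy * np.cos(pa)
    yp = -dx * np.cos(pa) + dy * np.sin(pa)
    # Stretch the apparent minor axis to undo the inclination
    r_deg = np.hypot(xp, yp / np.cos(np.radians(incl_deg)))
    return dist_kpc * np.radians(r_deg)

# Pixels are then assigned to 0.5 kpc wide elliptical rings
ring = int(deprojected_radius_kpc(81.0, -69.5) // 0.5)
```

At the LMC's low inclination the deprojection stretch is modest ($1/\cos 22^{\circ}\approx1.08$), so the rings are close to circular on the sky.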
\section{Discussion} \subsection{Optical depth of CO} In Section 3, we assumed optically thick CO emission. This assumption affects the estimate of the temperature of the HINSA gas: if the optically thick assumption breaks down, the excitation temperature of CO will be underestimated; the LTE assumption will also be incorrect, so the kinetic temperature of the gas will be further underestimated. The $^{13}$CO\ data for the selected regions above were therefore used to calculate the optical depth of CO to test this assumption. We selected all the pixels where the peak S/N ratio of the $^{13}$CO\ spectrum was larger than 3. The excitation temperature for these sightlines was first calculated by assuming CO is optically thick \citep{WRH2013}: \begin{equation} T_{ex}(^{12}CO)=5.5\Big/\ln\left(1+\frac{5.5}{T_{B}(^{12}CO)+0.82}\right). \label{con:Tex12CO0} \end{equation} The optical depth of $^{13}$CO\ is then derived from \begin{equation} \begin{aligned} \tau(^{13}CO)=-\ln\Bigg\{1-\frac{T_{B}(^{13}CO)}{5.3}\bigg\{\\\Big[\exp\Big(\frac{5.3}{T_{ex}(^{12}CO)}\Big)-1\Big]^{-1}-0.16\bigg\}^{-1}\Bigg\}, \end{aligned} \label{con:tau13CO} \end{equation} and the optical depth of $^{12}$CO\ is obtained by multiplying the $^{13}$CO\ optical depth by the $^{12}$CO/$^{13}$CO\ abundance ratio: \begin{equation} \tau(^{12}CO)=X(^{12}CO/^{13}CO)\,\tau(^{13}CO). \label{con:tau12CO} \end{equation} A corrected excitation temperature of CO is then derived using the updated CO optical depth $\tau(^{12}CO)$: \begin{equation} \begin{aligned} T_{ex}'(^{12}CO)=5.5\Big/\ln\Bigg[1+\\\Big(\frac{T_{B}(^{12}CO)}{5.5\Big[1-\exp\big(-\tau(^{12}CO)\big)\Big]}+0.15\Big)^{-1}\Bigg]. \end{aligned} \label{con:Tex12CO1} \end{equation} We iterated this process (Equations \ref{con:tau13CO} to \ref{con:Tex12CO1}) 100 times, resulting in an improved estimate of the excitation temperature and optical depth of CO.
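Equations (\ref{con:Tex12CO0})--(\ref{con:Tex12CO1}) can be iterated as in the following sketch; the brightness temperatures are illustrative inputs, not MAGMA pixel values:

```python
import numpy as np

def tex_thick(tb12):
    """Eq. (12): 12CO excitation temperature assuming tau >> 1."""
    return 5.5 / np.log(1 + 5.5 / (tb12 + 0.82))

def iterate_opacity(tb12, tb13, ratio=50.0, n_iter=100):
    """Iterate Eqs. (13)-(15) to a self-consistent T_ex and tau(12CO).
    tb12, tb13: peak brightness temperatures in K;
    ratio: assumed 12CO/13CO abundance ratio."""
    tex = tex_thick(tb12)
    for _ in range(n_iter):
        # Eq. (13): 13CO optical depth at the current T_ex
        f = 1.0 / (np.exp(5.3 / tex) - 1.0)
        tau13 = -np.log(1.0 - (tb13 / 5.3) / (f - 0.16))
        # Eq. (14): scale up by the isotopic abundance ratio
        tau12 = ratio * tau13
        # Eq. (15): T_ex corrected for the finite 12CO opacity
        tex = 5.5 / np.log(1.0 + 1.0 /
                           (tb12 / (5.5 * (1.0 - np.exp(-tau12))) + 0.15))
    return tex, tau12

# For a 5 K 12CO peak and a 1 K 13CO peak, the iteration converges
# within a few steps to an optically thick solution
tex, tau12 = iterate_opacity(5.0, 1.0)
```

The constants 0.82, 0.16 and 0.15 are the background (CMB) terms $T_{1-0}f(T_{\rm bg})$ and $f(T_{\rm bg})$ evaluated at 2.73 K for the respective transitions.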
The result of the above depends on the value of the $^{12}$CO/$^{13}$CO\ abundance ratio adopted. Previous observations disagree on whether the $^{12}$CO/$^{13}$CO\ ratio for molecular clouds in the LMC differs from the Milky Way value of $\sim100$: \citet{Joh1994} suggested a ratio of $50^{+25}_{-20}$ for N 159 in the LMC, while \citet{Isr2003} suggest a ``similar'' intrinsic isotopic ratio to that of the Milky Way. Adopting the conservative estimate of 50, we derived the optical depth distribution for our selected sightlines, as shown in Figure~\ref{fig:1213ratio}. The histogram peaks at $\sim5$, indicating predominantly optically thick emission. The optically thick assumption therefore remains approximately valid for $^{12}$CO/$^{13}$CO\ $> 20$. \begin{figure} \centering \includegraphics[width=3.3in]{calcCOtau.png} \caption{CO optical depth distribution for the selected sightlines.} \label{fig:1213ratio} \end{figure} It is worth noting that, although the assumption that the CO is optically thick makes it straightforward to estimate $T_{\rm H}$, it also results in an overestimate of the CO linewidth, and hence of $\sigma_{H}$. By adopting a typical set of parameters ($\tau_{0}$=0.5, $\sigma_{H}$=0.5 km~s$^{-1}$, $T_{\rm H}$=5 K) to generate an artificial HI spectrum with HINSA absorption, and fitting the absorption using our method, we investigated the impact of an overestimate of $\sigma_{H}$. The simulation suggests that when $\sigma_{H}$\ is overestimated by less than $100\%$, the corresponding peak HINSA optical depth is underestimated by less than 50\%, while the HINSA-HI abundance is overestimated by less than 20\%. \subsection{Comparison with previous work} \subsubsection{Background continuum source observations} \citet{Dic1994}, \citet{Dic1995}, \citet{Meb1997} and \citet{MZ2000} measured 21~cm absorption lines toward 27 sources behind the LMC.
The sparse sampling of these measurements, together with the low spatial filling factor of CO clouds, leaves little overlap between the two data sets: only six of the 27 sightlines coincide with MAGMA CO detections: 0526-678, 0536-693, 0539-696, 0540-697, 0539-697, 0521-699. Among these six sources, 0539-696, 0540-697 and 0539-697 are located in the northern part of the Ridge region, where HINSA signatures are clearly detected, but only 0540-697 lies behind a HINSA-detected pixel. The optical depth of HINSA-HI is 0.017 at this position, whereas \citet{Dic1994} reported optical depths for 0540-697's four absorption components of 1.39, 0.54, 0.57 and 0.64. The HINSA feature along the sightline of 0540-697 peaks at 251 km s$^{-1}$, overlapping one of the 3 subcomponents of the 237 km s$^{-1}$ component reported by \citet{Dic1994}. The clear difference between the HINSA optical depth reported here and the optical depths reported by \citet{Dic1994} arises because the two approaches rest on different assumptions and thus trace different gaseous components of the CNM: HINSA traces the colder, and thus less abundant, part of the CNM. It should also be noted that, for this particular sightline, the small HINSA absorption may carry a large uncertainty due to noise. A comparison of HINSA results from a statistical perspective would be more meaningful. \subsubsection{HI line modeling} \citet{Bra2012} calculated the HI optical depth of the LMC by fitting the flat-topped shapes of the HI spectra, as explained in \citet{Bra2009}. We have compared the optical depth of HINSA-HI derived in our work with the HI optical depth of \citet{Bra2012}, as shown in Figure~\ref{fig:Tau-TauRB}. No correlation is apparent. This is unsurprising, because the two methods trace different gaseous components. \citet{Bra2012} assumed the atomic clouds are isothermal on scales of 100 pc and neglected multiple velocity components, which are prevalent in the LMC.
Our work, by contrast, focuses on true optical depth effects arising from the temperature difference between molecular clouds and the surrounding HI gas.
\begin{figure}
\centering
\includegraphics[width=3.3in]{CalcBraunTau.png}
\caption{Pixel-by-pixel comparison of the optical depths derived in this work and in \cite{Bra2012}.}\label{fig:Tau-TauRB}
\end{figure}
\subsubsection{Milky Way HINSA measurements}
In Section 4.1 we over-plotted the Milky Way HINSA-HI abundance results of \citet{LG2003} and \citet{Krco2010} on the histogram of the HINSA-HI abundance of the LMC. Although our result is of the same order of magnitude as the Milky Way results, the differences between the data sets should be noted: the Milky Way measurements are based on data with much better spatial and velocity resolution \citep[e.g. 0.13 pc and 0.16 km s$^{-1}$ for ][]{LG2003}, compared to 15 pc and 1.649 km s$^{-1}$ for the LMC. As in Section 5.1, we performed a simulation to investigate the impact of the HI resolution. It shows that when ${T}_{{\rm H}}$\ and ${\sigma}_{{H}}$\ are estimated correctly, the relatively low velocity resolution of 1.6 km s$^{-1}$ does not affect the measurement of the HINSA optical depth, but it does cause a $\sim10\%$ underestimate of the HINSA-HI abundance. Since our measurements also cover a much larger volume than the Galactic studies, we may further underestimate the HINSA optical depth and abundance because of the low spatial filling factor of molecular clouds. There are also methodological differences: the Milky Way studies use a Galactic rotation model to derive kinematic distances for the clouds, which provides a relatively accurate estimate of the foreground gas content, whereas for the LMC we can only assume that the clouds are located in the middle of the warm HI disk. These differences may affect the detailed comparison of our HINSA-HI abundance results with those for the Milky Way.
It is also worth mentioning that the latest HINSA measurement by \citet{Zuo2018} reported a relatively high HINSA-HI/$\rm{H}_{2}$\ ratio, from 0.2\% to 2\%, in a single very young molecular cloud that is considered to be still in the process of formation. One of the initial assumptions of this study was that the metallicity difference between the LMC and the Milky Way might produce a measurable effect on the HINSA-HI/$\rm{H}_{2}$\ ratio of the two galaxies. However, the insignificant difference reported in Section 4.1 does not support such a scenario. The low metallicity, which results in a relatively low CO abundance, does not appear to significantly affect the HINSA-HI/$\rm{H}_{2}$\ ratio. Similarly, although the low metallicity reduces the dust surface area on which $\rm{H}_{2}$\ can form, it does not affect HINSA-HI. This implies that molecular cloud cooling can still proceed despite the lower dust and diffuse molecule abundances; $\rm{H}_{2}$\ self-shielding is likely fundamental in this process.
\section{Summary}
We have used ATCA+Parkes LMC HI survey data \citep{Kim2003} and MAGMA LMC CO Survey data (DR3) \citep{Wong2011} to locate and measure HI Narrow Self-Absorption (HINSA) features toward the molecular clouds in the LMC. This is the first confirmed detection of HINSA in an external galaxy. The HINSA-HI/$\rm{H}_{2}$\ ratio in the LMC varies from 0.5\e{-3} to 3.4\e{-3} (68\% interval), with a mean value of $(1.31 \pm 0.03)$\e{-3}, after correcting for the effect of foreground HI gas. This is slightly higher than, but not significantly different from, the Milky Way value obtained from the combined results of \citet{LG2003} and \citet{Krco2010}, namely a 68\% interval of 0.4\e{-3} to 3.0\e{-3} and a mean value of $(1.0 \pm 0.2)$\e{-3}. This result indicates that a similar amount of cold gas exists in the LMC as in the Milky Way. Unlike the case for stellar metallicity, the ratio does not show a radial gradient.
However, a key assumption is the accuracy of the CO X-factors that we have adopted for the Milky Way and the LMC. The small HINSA-HI/$\rm{H}_{2}$\ ratio shows that the molecular clouds in the LMC are more than 99 percent molecular, confirming the relatively short formation time scale of molecular clouds. We find that HINSA features are prevalent in the surveyed sightlines: a catalog of 37 sightlines where the peak HINSA-HI optical depth is higher than 0.2 is presented. Six typical regions where HINSA detections are concentrated (N11, N44, NAN17, NAN216, NAN223 and the LMC Ridge south of 30 Dor) are examined in detail, and the $^{13}$CO\ data for these regions are used to confirm the optically thick assumption adopted in the calculations. We find no correlation between our results and those based on previously developed techniques, such as background continuum sources \citep[e.g.][]{Dic1994} or HI line profile shapes \citep{Bra2012}.
\section*{Acknowledgements}
We thank the anonymous referee for useful and detailed comments. This work is supported by National Natural Science Foundation of China (NSFC) programs No. 11988101, 11725313, 11690024 and 11833008, and by the CAS International Partnership Program No. 114-A11KYSB20160008. The support provided by the China Scholarship Council (CSC) during a visit of Boyang Liu to ICRAR/UWA is acknowledged. This work was carried out in part at the Jet Propulsion Laboratory, which is operated for NASA by the California Institute of Technology. Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.
Britain and Iran's Fight Over Oil Tankers Is Getting Serious

British Marines stormed an Iranian oil tanker. Now, an Iranian military commander has threatened to seize a British ship in retaliation.

by Tim Hume
July 5, 2019, 2:13pm

On Thursday, a deployment of British Marines stormed an Iranian oil tanker accused of carrying crude oil to Syria in breach of sanctions against the Assad regime. Now, a senior Iranian military commander has threatened to seize a British ship in retaliation.

British Marines helped Gibraltarian police seize the 330-meter Grace-1 early Thursday after it was suspected of carrying oil from Iran to Syria, in breach of European Union sanctions against the Assad regime. The seizure set off a furious response from Tehran, which called it "piracy" and accused the British government of doing the bidding of the U.S.

On Friday, Maj. Gen. Mohsen Rezaei, a commander in Iran's Islamic Revolutionary Guard Corps, tweeted that if the Grace isn't released, Tehran should respond with its own seizure. "If Britain does not release the Iranian oil tanker, it is the authorities duty to seize a British oil tanker," he wrote. "Islamic Iran in its 40-year history has never initiated hostilities in any battles, but has also never hesitated in responding to bullies."

Iranian Foreign Ministry spokesman Abbas Mousavi said that Britain's ambassador to Tehran, Rob Macaire, had been formally summoned for a complaint about the seizure of the tanker, which he said could further inflame tensions in the Persian Gulf.

The standoff comes at a particularly volatile moment in the relationship between Iran and the West, after Tehran announced it had deliberately exceeded uranium enrichment limits set down in the ailing 2015 nuclear accord, which the U.S. abandoned last year. The U.S., which has since pursued a policy of "maximum pressure" via sanctions on Tehran, accused Iran of carrying out explosive attacks on two oil tankers in the Strait of Hormuz last month.
Tehran denied the allegations, accusing the U.S. of "warmongering."

Fabian Picardo, the chief minister of Gibraltar, a British overseas territory on Spain's southern tip, said his officials stopped the tanker because it was believed to be destined for Syria's Banyas refinery, which is owned by an entity that's subject to EU sanctions against the Syrian regime. The bloc imposed a number of sanctions against the Assad regime for its brutal crackdown on civilians at the outset of the Syrian conflict in 2011.

U.S. national security adviser John Bolton applauded the seizure as "excellent news," vowing that "America and our allies will continue to prevent regimes in Tehran & Damascus from profiting off this illicit trade."

Cover: A view of the Grace 1 super tanker in the British territory of Gibraltar, Thursday, July 4, 2019. Spain's acting foreign minister says a tanker stopped off Gibraltar and suspected of taking oil to Syria was intercepted by British authorities after a request from the United States. (AP Photo/Marcos Moreno)
Nicole Herschmann (born 27 October 1975 in Rudolstadt) is a German bobsledder, a bronze medallist at the Olympic Games and the World Championships.

Career

She achieved her first success in 2002, when, together with Susi Erdmann, she won the bronze medal in the two-woman event at the Olympic Games in Salt Lake City. She also won a bronze medal in the same event at the 2008 World Championships in Altenberg, where her partner was Claudia Schramm. Herschmann and Erdmann also finished fifth at the 2006 Olympic Games in Turin.

External links

German bobsledders
German Olympic medallists
Medallists of the 2002 Winter Olympics
Competitors of the 2006 Winter Olympics
Born in 1975
Source: https://www.wyzant.com/resources/answers?page=11545

04/26/13

#### Subtract the polynomials

(x^3 y^2 + 6xy) - (4x^3 y^2 - 5xy) = ? What is the difference of the polynomials? (Simplify your answer. Do not factor.)

04/26/13

#### Determine whether the expression is a polynomial

If it is, state how many terms and variables the polynomial contains, then state its degree. 5x^2 - 3x - 5

04/26/13

#### Determine whether the expression is a polynomial

If it is, state how many terms and variables the polynomial contains, then state its degree.

04/26/13

#### Write a story using those words

04/25/13

#### Find the curvature of the curve of intersection

Find the curvature of the curve of intersection of the cylinder x^2 + y^2 = 16 and the plane x + z = 5 at (4, 0, 1).

04/25/13

#### 2,570,000,000,000 expressed in engineering notation

Is 2.570*10^9 the correct way to express 2,570,000,000,000 in engineering notation?

04/25/13

#### Find the Cartesian equation for the curve

Find the Cartesian equation for the curve described by the polar equation r = 1/(1 - sin(theta)).

04/25/13

#### What is the y-component of r(pi/2)?

If r'(t) = <sin(t), -cos(t), 2t> and r(0) = <1, 1, 2>, what is the y-component of r(pi/2)? r(t) = <-cos(t), -sin(t), t^2>, so r(pi/2) = -1. So -1 is the answer?

04/25/13

#### 9k^2 + 45k = 0

How do you solve this since there isn't a c? I need the vertex points (ie: (2,5)). I'm in the 8th grade (Algebra 1) and our teacher neglected to tell us how to do these.

04/25/13

#### If you add a negative and a positive, will it be a positive or a negative?

04/25/13

#### How would you graph y = 2x - 3?

I need help with this problem, I have been stuck on it for about 2 hours.

04/25/13

#### Solve 2 + 2 = y algebraically

04/25/13

#### This is a math Q

What is 1/3 times pi times 25 times 12? I'm doing my math homework and I don't really get it.

04/25/13

#### 4.15 times 6.3

04/25/13

#### How do you change x^(7/5) into a radical expression?

04/25/13

#### Scientific form

Place 2 x 10 to the 3rd power x 923 in scientific form with correct significant digits.

04/25/13

#### Write x^(7/5) as a radical

04/25/13

#### Find the 7th element

If the initial value of a sequence is 6 and the common difference is 6, what is the 7th element?

04/25/13

#### Question is written in description because it says 160 characters only, please help

…in the second week of April: can you tell us how many watermelons he managed over the two weeks of April?

04/25/13

#### Finding the center of a circle given the equation

What is the center of the circle with the equation x^2 + y^2 - 6x + 2y - 6 = 0?
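One of the questions above, converting the polar equation r = 1/(1 - sin(theta)) to Cartesian form, has a quick closed-form answer: multiplying through by (1 - sin(theta)) gives r - y = 1, so sqrt(x^2 + y^2) = 1 + y and hence x^2 = 1 + 2y, a parabola. The identity can be checked numerically with a short script (an illustrative sketch, not part of the original page):

```python
# Check numerically that the polar curve r = 1/(1 - sin(theta)) satisfies
# the Cartesian equation x^2 = 1 + 2y, using x = r*cos(theta), y = r*sin(theta).
import math

for theta in [0.0, 0.5, 1.0, -0.8, 2.0]:   # avoid theta = pi/2, where r blows up
    r = 1.0 / (1.0 - math.sin(theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    assert abs(x * x - (1.0 + 2.0 * y)) < 1e-9, (theta, x, y)
print("r = 1/(1 - sin(theta)) is the parabola x^2 = 1 + 2y")
```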
Our Place - Stories of Woolloomooloo - Cheryl Lindo
Biography: Cheryl was born in 1949 and her parents owned the sandwich shop, opposite the Frisco…

Our Place - Stories of Woolloomooloo - Jackie Gratton-Wilson
Biography: Jackie, who was born in 1962, is the daughter of Brenda Humble, who was also…

Our Place - Stories of Woolloomooloo - Sam Donato
Biography: Sam was born in 1939 into an Italian family; his father was Australian-born. They lived…

Our Place - Stories of Woolloomooloo - Joyce Higgins
Biography: Joyce, who was born in 1912, remembers World War I and the departure of the…

Our Place - Stories of Woolloomooloo - Judy Chambers
Biography: Judy was born in 1935 and her family lived in Cathedral Street, where there were…

Our Place - Stories of Woolloomooloo - Nell Leonard
Biography: Nell was born in 1920 and lived in Woolloomooloo all her life. She loved the…

Our Place - Stories of Woolloomooloo - Jean Jurd
Biography: Jean was born in 1925; growing up, she, her mother and her sister drifted from…

Our Place - Stories of Woolloomooloo - Billy Pascoe
Biography: Billy was born in 1911. Billy vividly recalls the return of troops from WWI and…

Our Place - Stories of Woolloomooloo - Beth Thorpe
Biography: Beth thought "Woolloomooloo was absolutely wonderful – only a walk away from the heart of the…
\section{A residue formula for the Chern character current}
Our result in \cite{Ger} resembles the Local Index Formula (\cite{CM, Hig}) in a differential-geometric setting. Let $M$ be a smooth $n$-manifold with no boundary and let $E$ be a smooth ${{\mathbb Z}}_2$-graded vector bundle. Let $\pi:T^*M\to M$ be the cotangent bundle and let $L$ be an odd skew-adjoint endomorphism of $\pi^*E$ which is invertible everywhere except on the zero section of $T^*M$. Finally, we assume that the coefficients of $L$ are first-order homogeneous polynomials of the ``vertical'' (fiberwise) coordinates of $T^*M.$ For example, $L$ could be the symbol of an odd self-adjoint elliptic operator on $E$. Now, let $\nabla$ be a connection on $\pi^*E$ which is a pullback of some connection on $E$. Suppose that both connections respect the grading of $E$ and $\pi^*E$. According to Quillen \cite{Q}, the Chern character corresponding to $L$ may be written as $\operatorname{exp}(\nabla+L)^2$. Here, $\nabla+L$ is a differential operator on the sections of $\Lambda^* T^*M\otimes\pi^* E$ which shares many properties of an ordinary connection, such as the ${{\mathbb Z}}_2$-graded Leibniz rule and the fact that $(\nabla+L)^2$ is an endomorphism of $\Lambda^* T^*M\otimes\pi^* E$, i.e. a $0$-th order differential operator, which is still called the {\it curvature}. In addition, the graded trace (the {\it supertrace}) $\operatorname{tr_s}(\nabla+L)^{2k}$ for any $k$ is a differential form whose cohomology class is independent of the concrete choice of $\nabla$, a topological characteristic of $E$. We denote $\nabla+L$ by $\nabla_L$ and call it a {\it superconnection}. The big advantage is that if $L$ is as above, then the Chern character form $\operatorname{tr_s}\exp\nabla^2_L$ decays exponentially fast along the fibers of $T^*M$, so that the dual Chern character current can be defined on $\Omega^*M$: $$\eta\mapsto\int_{T^*M}\pi^*(\eta)\operatorname{tr_s}\exp\nabla^2_L.$$ WARNING: for notational convenience, we omit the supertrace from our formulas, though it is tacitly assumed everywhere.
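For orientation, here is the simplest nontrivial instance of this setup; it is standard (going back to \cite{Q}) rather than taken from \cite{Ger}. Take $n=2$, $E$ the trivial ${{\mathbb Z}}_2$-graded bundle ${{\mathbb C}}\oplus{{\mathbb C}}$, $\nabla=d$, and, writing $w=\xi_1+i\xi_2$ for the vertical coordinates,
$$L=\begin{pmatrix}0&-\bar w\\ w&0\end{pmatrix},$$
which is odd, skew-adjoint, linear in $\xi$ and invertible off the zero section. Since $L^2=-|\xi|^2$ is scalar and $(dL)^3=0$,
$$\operatorname{tr_s}\exp\nabla_L^2 = e^{-|\xi|^2}\operatorname{tr_s}\Big(1+dL+\tfrac12(dL)^2\Big) = -2i\,e^{-|\xi|^2}\,d\xi_1\wedge d\xi_2,$$
which exhibits the exponential decay along the fibers; the fiber integral $\int_{{{\mathbb R}}^2}e^{-|\xi|^2}\,d\xi_1 d\xi_2=\pi$ then normalizes the resulting current.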
In \cite{Ger}, we have proved the following theorem about this current: \begin{theorem}\label{theorem:oldthm} Let $Y_R$ be the open $R$-tubular neighbourhood of the zero section in $T^*M$ and let $X_R$ be its complement. Under the hypotheses outlined above, for any $\eta\in\Omega^*(M)$ \begin{multline}\label{equation:oldthmeq} \int_{T^*M}\!\!\!\!\operatorname{tr_s}\pi^*(\eta)\exp{\nabla_L^{2}} = \lim_{R\to 0}\sum_{z\in{{\mathbb C}}}Res|_z\Gamma(z) \int_{X_R}\operatorname{tr_s}\pi^*(\eta)({-\nabla_L^2})^{-z}, \end{multline} where the right-hand side integral is understood to be the meromorphic extension from the region $Re(z)\gg 0,$ on which it converges. Further, all but finitely many residues on the right-hand side vanish as $R\to 0$. \end{theorem} The proof of the equality \ref{equation:oldthmeq} is based on the Mellin Transform. Section 6 of \cite{Ger}, contains the argument and a concise outline preceding it. In the present note, we concentrate on the right-hand side of the equality. We state the following slightly stronger result. \begin{theorem}\label{theorem:newthm} Under the hypotheses outlined above, for any $\eta\in\Omega^\kappa(M)$ and any positive $R,$ \begin{align}\label{equation:newthmeq} \int_{T^*M}\!\!\operatorname{tr_s}\pi^*(\eta)\exp{\nabla_L^{2}}\!\! &= Res_{z={\frac \kappa 2}-n}\Gamma(z)\int_{X_R}\operatorname{tr_s}\pi^*(\eta)\big[({-\nabla_L^2})^{-z}\big]_{2n-\kappa}, \end{align} where the right-hand side integral is understood to be the meromorphic extension from the region $Re(z)\gg 0,$ on which it converges. In particular, it does not depend on $R.$ Further, both sides vanish if $\kappa$ is odd. \end{theorem} Here, by $[\omega]_\kappa$ we denote the $\kappa$-degree part of the mixed differential form $\omega.$ We proceed to: \begin{itemize} \item[1)] Review the notion of complex powers $(-\nabla_L^2)^{-z}$ via holomorphic functional calculus. \item[2)] Review the geometric series expansion of $(-\nabla_L^2)^{-z}$ used in \cite{Ger}. 
\item[3)] Prove theorem \ref{theorem:newthm} based on that expansion and on theorem \ref{theorem:oldthm}. \end{itemize} 1) Complex powers of the curvature $\nabla_L^2$ are defined via the following integrals: $$(-\nabla_L^2)^{-z}={\frac 1 {2\pi i}}\int_\gamma \lambda^{-z} (\lambda+\nabla^2)^{-1}d\lambda,$$ where $\gamma$ is a counter-clockwise oriented contour which surrounds the pointwise spectrum of $\nabla_L^2$. We prove in \cite{Ger}, section 5, that $\gamma$, in fact, may be taken as a vertical which is oriented downward and separates $sp(\nabla_L^2)$ from the imaginary axis. Such $\gamma$ exists as long as the underlying point of $T^*M$ does not lie in the zero section. In \cite{Ger}, we have also dealt with the fact that $\gamma$ depends on that point in the first place. (Briefly, we have shown that if we integrate over $X_R$, then $\gamma$ can be chosen uniformly. But then we take the limit as $R\to 0$.) We have also shown that $\int_{X_R}\pi^*(\eta)(-\nabla_L^2)^{-z}$ converges for $Re(z)\gg 0$ and has a meromorphic extension to all of ${{\mathbb C}}$ with at most simple poles. 2)In order to see why the meromorphic extension exists, we write out the expression for $(-\nabla_L^2)^{-z}$ via geometric series. First, \begin{align*} \nabla_L^2&=(\nabla+L)^2\\ &=\nabla^2+[\nabla,L]+L^2\\ &=\nabla^2+dL+[\theta,L]+L^2, \end{align*} where $\nabla=d+\theta$ and $\theta$ is a locally defined odd endomorphism of $\pi^*E$. Such $\theta$ exists, for any connection or superconnection and is comprised of Christoffel symbols. "Locally" means that we restrict our attention to a coordinate chart $V$ on $T^*M$ over which $\pi^*E$ is trivial and which is itself a local trivialization of $T^*M$ over $M$. Thus, there are two sets of coordinates. "Horizontal", i.e. coordinates of $M$ $x^1\ldots x^n$; and "vertical" coordinates of a fiber. We use polar coordinates here, $\rho$ being the radial one and $\Xi^1,\ldots\,\Xi^{n-1}$ being the coordinates of a unit sphere. 
We then write: \begin{align} \label{equation:weget1}{2\pi i}(-\nabla_L^2)^{-z}\!\!&=\!\!\int_\gamma\!\!\scriptstyle \lambda^{-z}\big(\lambda+L^2 +d_x L+d_\Xi L +[\theta,L]+d_\rho L+\nabla^2\big)^{-1}\!\!d\lambda\\ \nonumber \!\!&=\!\!\int_\gamma\scriptstyle \!\!\lambda^{-z}(\lambda+L^2)^{-1} \sum_{k=0}^{2n}\big(-(\lambda+L^2)^{-1} (d_x L+d_\Xi L +[\theta,L]+d_\rho L+\nabla^2)\big)^k \!\!d\lambda. \end{align} Observe that $\nabla^2+d_\rho L$ is not a multiple of $\rho$, while from $d_x L+d_\Xi L +[\theta,L]$ one power of $\rho$ may be pulled out. We then expand the $k$-th term of the series as a non-commutative polynomial in $\nabla^2$, $d_\rho L$, $d_\Xi L$, and $d_x L+[\theta,L]$ and separate the powers of $\rho$. For illustration, we treat just one typical "interesting" term. It must contain one copy of $d_\rho L$ and $n-1$ copies of $d_\Xi L$ in order to produce a volume form on $T^*M$. (Observe that $\theta$ and $\nabla^2$ cannot contain any vertical differentials, since $\nabla$ has been pulled back from $E$.) One such term has the form: \begin{align} \label{equation:weget2}\nonumber\int_\gamma\lambda^{-z}&(\lambda+L^2)^{-1} \!\big[(\lambda+L^2)^{-1} (d_x L+\![\theta,L])\big]^l\,\times \\ \nonumber&\qquad \big[(\lambda+L^2)^{-1} d_\rho L\big] \big[(\lambda+L^2)^{-1} d_\Xi L\big]^{n-1} \big[(\lambda+L^2)^{-1}\nabla^2)\big]^{k-n-l} \!d\lambda \\ \nonumber& =\rho^{-2(z+k)+n+l-1} \int_\gamma\sigma^{-z}(\sigma+{\L}^2)^{-1} \big[(\sigma+{\L}^2)^{-1} (d_x {\L}+[\theta,{\L}])\big]^l\,\times \\ &\qquad \big[(\sigma+{\L}^2)^{-1} d_\rho L\big] \big[(\sigma+L^2)^{-1} d_\Xi {\L}\big]^{n-1} \big[(\sigma+{\L}^2)^{-1}\nabla^2)\big]^{k-n-l} d\sigma, \end{align} where ${\L}={\frac L \rho}$ and $\sigma={\frac \lambda {\rho^2}}$. Strictly speaking $\gamma$ should be replaced by ${\frac 1 {\rho^2}} \gamma$ but we leave that detail to \cite{Ger}. 3) In order to prove theorem \ref{theorem:newthm}, we need to compute $\int_{X_R\bigcap V}\pi^*(\eta)(-\nabla^2_L)^{-z}$. 
Rather than using $(-\nabla_L^2)^{-z}$, we perform the computation just for the sample term shown in (\ref{equation:weget2}). Assuming $Re(z)\gg 0$, we get: \begin{align}\label{equation:weget3} \nonumber \int_{X_R\bigcap V}&\!\!\!\pi^*(\eta)\!\int_\gamma\!\lambda^{-z}(\lambda+L^2)^{-1} \big[(\lambda+L^2)^{-1} (d_x L+[\theta,L])\big]^l\,\times \\ &\nonumber \,\, \big[(\lambda+L^2)^{-1} d_\rho L\big] \big[(\lambda+L^2)^{-1} d_\Xi L\big]^{n-1} \big[(\lambda+L^2)^{-1}\nabla^2\big]^{k-n-l} d\lambda \\ & \nonumber\\ \nonumber& =\int_R^\infty \pi^*(\eta)\rho^{-2(z+k)+n+l-1}d\rho \int_{\pi(V)}dx \int_{S^{n-1}} d\Xi \,\times \\ \nonumber & \qquad\qquad\int_\gamma\sigma^{-z}(\sigma+{\L}^2)^{-1} \big[(\sigma+{\L}^2)^{-1} (d_x {\L}+[\theta,{\L}])\big]^l\times \\ \nonumber &\qquad\qquad\qquad \big[(\sigma+{\L}^2)^{-1} d_\rho L\big] \big[(\sigma+{\L}^2)^{-1} d_\Xi {\L}\big]^{n-1}\times \\ \nonumber &\qquad\qquad\qquad \big[(\sigma+{\L}^2)^{-1}\nabla^2\big]^{k-n-l} d\sigma \\&\nonumber\\& = {\frac {R^{-2(z+k)+n+l}}{2(z+k)-n-l}}\phi_V(z), \end{align} where $\phi_V$ is defined by the above equation. It is entire in $z$. (This holds after $\sigma$ is integrated out, and remains true after $\Xi$ and $x$ are integrated out over the compact manifold $S^*M$ \cite{Ger}.) Also, $\phi_V$ is independent of $R$. Thus, the integral in (\ref{equation:weget2}) has a meromorphic extension to ${{\mathbb C}}$ with at most simple poles. Compactness of $M$, together with the fact that there are only finitely many terms such as the one above, implies that $\int_{X_R}\pi^*(\eta)(-\nabla_L^2)^{-z}$ also has a meromorphic extension to ${{\mathbb C}}$. We now proceed to prove theorem \ref{theorem:newthm}.
For the case of even $\kappa=deg(\eta),$ we consider the residues of: $$\Gamma(z){\frac {R^{-2(z+k)+n+l}}{2(z+k)-n-l}}\phi_V(z).$$ We have the following lemma: \begin{lemma}\label{theorem:l1} $\phi_V(-m)=0 $ for all $m < {\frac {3n-\kappa}2}.$ \end{lemma} {\bf Proof:} Observe that for $z=0,-1,\ldots -(k+1)$ $\phi_V(z)=0$, since then $(-\nabla_L^2)^{-z}$ is just a positive integer power. To see this, suppose for a moment that $L^2$ is a scalar and thus: $$(-\nabla_L^2)^{-z}=\sum_{k=0}^\infty \big(^{-z}_k\big)(L^2)^{-z-k} ([L,\nabla]+\nabla^2)^k.$$ If $z=-m$, the series terminates for $k>m$. For general $L$, the phenomenon is similar. Next, we count the differential form degrees in (\ref{equation:weget3}). Unless $$\kappa+2(k-n-l)+l+n=2n=dim(T^*M),$$ $\phi_V$ vanishes identically. Thus, $\kappa+2k-l=3n$, so that $${\frac {3n-\kappa}2}\le k.$$\qed Now, one can easily compute the location of the residue due to ${\frac {R^{-2(z+k)+n+l}}{2(z+k)-n-l}}$ in terms of $\kappa$: $${\frac {n+l} 2}-k={\frac \kappa 2}- n.$$ In summary, all the residues coming from $\Gamma(z)$ between $0$ and ${\frac {\kappa-3n}2}$ are killed by the zeroes of $\phi_V$ and the only residue in that range is supplied by $${\frac {R^{-2(z+k)+n+l}}{2(z+k)-n-l}}= {\frac {R^{-2(z+n)-\kappa}}{2(z+n)-\kappa}}.$$ Finally, in \cite{Ger} we have shown that \begin{multline}\label{equation:finally} \int_{X_R}\!\!\!\!\operatorname{tr_s}\pi^*(\eta)\exp{\nabla_L^{2}} = \sum_{z\in{{\mathbb C}}}Res|_z\Gamma(z) \int_{X_R}\operatorname{tr_s}\pi^*(\eta)({-\nabla_L^2})^{-z}, \end{multline} The argument uses the fact that $\int_0^\infty e^{-\lambda t}t^{z-1}dt= \lambda ^{-z}\Gamma(z)$ and Mellin transforms to translate the exponential expression on the left of (\ref{equation:finally}) into the one on the right. Taking limits as $R\to 0$, we see that the residues to the left of ${\frac {\kappa-3n}2}$ are multiples of a positive power of $R$ and thus vanish, whereas the one at ${\frac \kappa 2}-n$ is independent on $R$. 
This proves theorem \ref{theorem:newthm} (equation \ref{equation:newthmeq}) in the case when $\kappa$ is even. If $\kappa$ is odd, then, due to $\kappa+2k-l=3n$, the number $l+n$ must be odd; this is the total power of $d_\rho L$, $d_\Xi L$ and $d_x L+[\theta,L]$. Now, since all the connections preserve the ${{\mathbb Z}}_2$-gradings, the local expression for $\theta$ must be a block-diagonal matrix of 1-forms, and the same is true of $\nabla^2$. However, $L$ is block-off-diagonal, since it is an odd endomorphism. Thus, if $n+l$ is odd, the supertrace of the corresponding term vanishes, which proves theorem \ref{theorem:newthm} for odd $\kappa$.
\begin{remark}
{\rm It is clear that the right-hand side of theorem \ref{theorem:newthm} may be written as an integral over the unit sphere bundle. It would be curious to obtain this through the Wodzicki Residue.}
\end{remark}
Source: http://georgehernandez.com/h/xComputers/XML/Traverse.asp

Once the XML data is loaded into an XML DOM object or an XML DSO, the objects can be used to traverse through the data nodes of the XML document. The data nodes need to be traversed before the data can be prepped for presentation and placed into HTML.

## XML DOM Object

Each XML document and each XML DOM Document object consists of a single root element and its descendants (other elements). If it is an XHTML document, then the root element must be <html>. An XML DOM object is traversed primarily by accessing its root element.

    XMLDOMObject.documentElement
    'The root element or node.

The childNodes collection of the root element has all the usual characteristics of 0-based collections. EGs:

    XMLDOMObject.documentElement.childNodes.item(n)
    'The child node.
    XMLDOMObject.documentElement.childNodes.item(n).text
    'The value in the child node.
    XMLDOMObject.documentElement.childNodes.length
    'The number of child nodes.
    XMLDOMObject.documentElement.childNodes.item(n).childNodes.item(n)
    'The grandchild node.

Note that an object variable can be set to any node. EGs:

    nodeRoot = XMLDOMObject.documentElement
    nodeChild = nodeRoot.childNodes.item(n)
    nodeGrandChild = nodeChild.childNodes.item(n)

You can navigate to any node in this fashion. A child node of a node can also be accessed by its tag name. This gets the value of the first node with that tag name if it exists, but note that there may be multiple child nodes with that tag name.

    ValueOfChildOfAnyNode = nodeAny.getElementsByTagName("tagName").item(0).text

Once the value of a particular node is captured and set to some variable, it can be placed into HTML or used in additional scripting.

### Prep for HTML

Assume that we've captured a value:

    <head>
    ...
    <script ...>
    ...
    strValue = XMLDOMObject.documentElement.childNodes.item(n).text
    </script>
    ...

That value may be placed anywhere in the HTML body with this bit of script:

    <script>document.write(strValue)</script>

Here are bits of script that cycle through an XML DOM object that is structured like a typical table (see XML DSO below). This one basically goes through the items by their indexes and then looks for field/data node names.

    rstNodeRoot = XMLDOMObject.documentElement.childNodes;
    for (rstCount = 0; rstCount < rstNodeRoot.length; rstCount++){
      recNodeChild = rstNodeRoot.item(rstCount).childNodes;
      for (fldCount = 0; fldCount < recNodeChild.length; fldCount++){
        switch (recNodeChild.item(fldCount).tagName){
          case "FieldName1":
            strFieldValue1 = recNodeChild.item(fldCount).text;
            break;
          case "FieldName2":
            strFieldValue2 = recNodeChild.item(fldCount).text;
            break;
          ...
          case "FieldNameN":
            strFieldValueN = recNodeChild.item(fldCount).text;
            break;
        }
      }
      document.writeln("<ul>");
      document.writeln("<li>" + strFieldValue1 + "</li>");
      document.writeln("<li>" + strFieldValue2 + "</li>");
      ...
      document.writeln("<li>" + strFieldValueN + "</li>");
      document.writeln("</ul><br>");
    }

### Prep for DHTML

Some event can call this function:

    <script ...>
    function CallMe(){
      span1.innerText = XMLDOMObject.documentElement.childNodes.item(n).text
    }
    </script>

That function will place the value into this span, which could be anywhere in the body:

    <span id="span1"></span>

## XML DSO

An XML DSO can be navigated like an XML DOM object, but then why bother to make an XML DSO?

XML DSOs (Data Source Objects) can be used like an XML DOM object, but they also have two major advantages:

* The XML DSO can be accessed in a fashion similar to the ADO Recordset object.
* The XML DSO can have HTML elements bound to it in a fashion similar to a VB Data Control.

However, the XML DSO also has two major drawbacks:

* XML data must be structured in the typical table format with fields and records. That is, the XML data must have the following structure:
  * A root node corresponding to a table/recordset.
  * Multiple occurrences of only one kind of child node, corresponding to records.
  * Grandchild nodes corresponding to fields in each record/child node.
* The XML DSO is Microsoft specific.

Here is an example of XML data structured like a typical table/recordset that has 2 records/rows and 3 fields/columns:

    <rstNodeRoot>
      <recNodeChild>
        <fldNodeGrandChild1>fldValue1InRec1</fldNodeGrandChild1>
        <fldNodeGrandChild2>fldValue2InRec1</fldNodeGrandChild2>
        <fldNodeGrandChild3>fldValue3InRec1</fldNodeGrandChild3>
      </recNodeChild>
      <recNodeChild>
        <fldNodeGrandChild1>fldValue1InRec2</fldNodeGrandChild1>
        <fldNodeGrandChild2>fldValue2InRec2</fldNodeGrandChild2>
        <fldNodeGrandChild3>fldValue3InRec2</fldNodeGrandChild3>
      </recNodeChild>
    </rstNodeRoot>

A node, i.e. a row, can be made the current node with script similar to either of the following:

    recCurrent = XMLDSO.XMLDocument.documentElement.childNodes.item(n)
    recCurrent = XMLDSO.documentElement.childNodes.item(n)

Script can get the grandchild node value, i.e. the value of a field in the current row, with this syntax:

    fldValue = recCurrent.childNodes.item(n).text
    fldValue = XMLDSO.recordset("FieldName")

The rows can be navigated by calling upon script similar to the following:

    function FMovePrevious(){
      if (XMLDSO.recordset.bof){
        // already at the beginning of the recordset
      }else{
        XMLDSO.recordset.movePrevious();
        if (XMLDSO.recordset.bof){
          XMLDSO.recordset.moveFirst();
        }
      }
    }

### Binding HTML to the XML DSO

HTML elements can be bound to the XML DSO.

This example shows the value of the specified field for the current row:

    <span datasrc="#XMLDSO" datafld="FieldName"></span>

This example will show an image if the value of the specified field contains the path and file name of an image:

    <img datasrc="#XMLDSO" datafld="FieldName">

An HTML table can be bound to the XML DSO and display all of its data. All rows that are not part of a <thead> or <tfoot> row group are repeated once for each record/child node:

    <table datasrc="#XMLDSO">
      <tr>
        <th>FieldNameTitle1</th>
        <th>FieldNameTitle2</th>
        ...
        <th>FieldNameTitleN</th>
      </tr>
    </table>
Can electrical brain stimulation enhance memory? Stimulating specific parts of the brain with electromagnetic pulses could improve our memory of certain facts, a recent study suggests.

The area of the brain where memory is processed is called the hippocampus. It is involved in the formation, storage and organisation of memories, and it is an important structure in helping us associate certain sounds and smells with memories.

Researchers at Northwestern University in the US set out to discover whether non-invasive electrical stimulation of the hippocampus would have an effect on memory. With the help of detailed scans, the scientists located the hippocampus inside the brains of 16 participants. The volunteers were then presented with a series of pictures along with a set of unrelated words. For example, a picture of a man would be paired with the word 'cat'. They were asked to remember each pairing.

A device delivering short bursts of electromagnetic pulses was then applied to the participants' heads directly above the hippocampus. This was done for a period of 20 minutes each day for five consecutive days. After the sessions, participants were given similar memory tests to complete. It was discovered that they scored significantly better following stimulation, with their memory still enhanced even 24 hours after they had completed the procedure. It was also found that volunteers made 30 per cent fewer errors compared with their scores on the test taken before the electrical stimulation was applied.

Moving forward, researchers have begun to investigate the relationship between memory and older age. They aim to find breakthroughs in the study of early signs of dementia.
{ "redpajama_set_name": "RedPajamaC4" }
9,538
{"url":"https:\/\/www.gradesaver.com\/textbooks\/math\/algebra\/elementary-algebra\/chapter-9-roots-and-radicals-chapter-9-review-problem-set-page-430\/30","text":"# Chapter 9 - Roots and Radicals - Chapter 9 Review Problem Set - Page 430: 30\n\n$\\dfrac{3x\\sqrt[3]{3}}{2}$\n\n#### Work Step by Step\n\nUsing the properties of radicals, the given expression, $\\dfrac{3}{4}\\sqrt[3]{24x^3} ,$ simplifies to \\begin{array}{l}\\require{cancel} \\dfrac{3}{4}\\sqrt[3]{8x^3\\cdot3} \\\\\\\\= \\dfrac{3}{4}\\sqrt[3]{(2x)^3\\cdot3} \\\\\\\\= \\dfrac{3}{4}\\cdot2x\\sqrt[3]{3} \\\\\\\\= \\dfrac{3}{\\cancel{2}(2)}\\cdot\\cancel{2}x\\sqrt[3]{3} \\\\\\\\= \\dfrac{3x\\sqrt[3]{3}}{2} .\\end{array} Note that all variables are assumed to have positive values.\n\nAfter you claim an answer you\u2019ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide\u00a0feedback.","date":"2018-09-20 21:55:21","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9810791611671448, \"perplexity\": 4346.127582454768}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, 
\"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-39\/segments\/1537267156622.36\/warc\/CC-MAIN-20180920214659-20180920235059-00346.warc.gz\"}"}
{"url":"https:\/\/mathematica.stackexchange.com\/questions\/194240\/mesh-cell-count-for-voronoi-mesh-too-low","text":"# Mesh cell count for Voronoi mesh too low\n\nI was trying to solve this question out of interest and thought perhaps creating a Voronoi mesh, cropping it to a circle, and colouring the mesh cells might work. However, if I ask VoronoiMesh to create cells for too many points MeshCellCount[mesh, 2] (or equivalently Length@MeshCells[mesh]) it returns a number that is smaller than the number of points provided initially.\n\nI've tried using different functions to generate the points around which the cells should be built, used both exact and real numbers, and checked out the documentation for VoronoiMesh and MeshRegion, but I'm still not sure what's causing this. Are my points simply too close together for VoronoiMesh to uniquely determine a cell for some of them?\n\nThe simplest code that reproduces this is:\n\nMeshCellCount[\nVoronoiMesh[\n],\n2]\n\n\nwhich should return 36,000 since it is 100 radial points and 360 azimuthal points, but instead returns 35,985. For this code it seems to start when there's around 32,000 elements. If the radial points inside Range are set to 87, I get the expected result. If the radial points are set to 88 (with the same 360 azimuthal points) I get an unexpected result. For all smaller numbers it seems to work as expected.\n\nFor some reason, if I use the following code to determine the number of cells, this discrepancy shows up at even smaller numbers of cells.\n\ngenerate[i_] :=\nTable[\n{r Sin[\u03b8], r Cos[\u03b8]},\n{\u03b8, 0, 359 \u03c0\/180, \u03c0\/180},\n{r, 1\/2, (i - 1) + 1\/2}\n]\n66*360 - MeshCellCount[VoronoiMesh[Flatten[generate[66], 1]], 2]\n\n\nThe result of this code is 2 where I would expect it to be zero for all values passed to generate.\n\nDoes anyone know what I'm doing wrong or if there is a workaround? 
Or am I simply asking too much of VoronoiMesh?\n\nLet's look at the centroids of the faces to see if we can figure where the issue lies.\n\npts = Flatten[Quiet[Thread[CirclePoints[Range[100], 360]]], 1];\n\nvor = VoronoiMesh[pts];\n\ncentroids = PropertyValue[{vor, 2}, MeshCellCentroid];\nnorms = Sort[Norm \/@ centroids];\n\nKeySelect[Counts[Round[norms]], LessThan[100]]\n\n<|1 -> 345, 2 -> 360, 3 -> 360, 4 -> 360, 5 -> 360, 6 -> 360, ...|>\n\n\nSo it looks like there are 15 missing inner most faces. Let's take a look:\n\nMeshRegion[\nMeshCoordinates[vor],\nPick[MeshCells[vor, 2], RegionMember[Disk[{0, 0}, 1], centroids]]\n]\n\n\nI don't know what went wrong, nor do I know how to fix the builtin behavior. But we can find a workaround by adapting the answer here:\n\nNeeds[\"IGraphM\"];\n\nVoronoi2D[pts_] :=\nBlock[{minmax, padding, vpts, dm, prims, vnodes, conn, adj, vlines, mr1d, g, faces, lens},\nminmax = MinMax \/@ Transpose[pts];\n\ndm = DelaunayMesh[vpts];\nprims = MeshPrimitives[dm, 2, \"Multicells\" -> True][[1, 1]];\nvnodes = circumCenter2D[prims];\n\nconn = dm[\"ConnectivityMatrix\"[2, 1]];\n\nmr1d = Quiet @ MeshRegion[vnodes, Line[vlines]];\n\ng = IGMeshGraph[mr1d];\nfaces = IGFaces[g];\n\n(* delete outer face *)\nlens = Length \/@ faces;\nfaces = Pick[faces, UnitStep[lens - Max[lens]], 0];\n\nMeshRegion[MeshCoordinates[mr1d], Polygon[faces]]\n]\n\n(* speed up from calling Circumsphere... 
but some rounding error could be introduced *)\nWith[{\na = Det[{{x1, y1, 1}, {x2, y2, 1}, {x3, y3, 1}}],\nbx = Det[{{x1^2+y1^2, y1, 1}, {x2^2+y2^2, y2, 1}, {x3^2+y3^2, y3, 1}}],\nby = Det[{{x1^2+y1^2, x1, 1}, {x2^2+y2^2, x2, 1}, {x3^2+y3^2, x3, 1}}],\n\u03b5 = 2^22 * \\$MachineEpsilon\n},\ncircumCenter2D = Compile[{{pts, _Real, 2}},\nBlock[{x1, y1, x2, y2, x3, y3},\nx1 = pts[[1, 1]];\ny1 = pts[[1, 2]];\nx2 = pts[[2, 1]];\ny2 = pts[[2, 2]];\nx3 = pts[[3, 1]];\ny3 = pts[[3, 2]];\n\nRound[.5Divide[{bx, -by}, a], \u03b5]\n],\nCompilationTarget -> \"C\",\nParallelization -> True,\nRuntimeOptions -> \"Speed\",\nRuntimeAttributes -> {Listable}\n]\n];\n\n\nVoronoi2D[pts] \/\/ MeshCellCount\n\n{36691, 72752, 36000}\n\n\u2022 This works great, thanks! Any idea how to get rid of the outermost cells so that we're left with just a circular region? The usual methods I've been using for VoronoiMesh don't seem to work on this mesh. \u2013\u00a0MassDefect Mar 30 '19 at 19:27\n\u2022 @MassDefect I think you could use the same Pick` idiom in my answer with a different disk radius. 
\u2013\u00a0Chip Hurst Mar 30 '19 at 20:25","date":"2020-06-05 04:21:52","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.23784825205802917, \"perplexity\": 3283.7254840500004}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-24\/segments\/1590348492427.71\/warc\/CC-MAIN-20200605014501-20200605044501-00059.warc.gz\"}"}
Development aid is financial assistance directed by states and other bodies toward supporting the social and economic development of developing countries. It differs from humanitarian aid in that it focuses on eradicating poverty over the long term, not just the short term. The term development cooperation, used for example by the World Health Organization, expresses the idea of a partnership between donor and recipient, in place of the traditional approach in which the richer, more developed party dominates the relationship. A large share of aid comes from developed Western states, but less wealthy states donate aid as well. Aid can be bilateral, where one state donates money directly to another, or multilateral, where the donor supports an international organization, such as the World Bank or the various organizations within the United Nations, which then distributes the aid to developing countries. The current ratio between these types of aid is 70% bilateral to 30% multilateral.
{ "redpajama_set_name": "RedPajamaWikipedia" }
381
Pools are party central, basically by nature. They're a great opportunity to win over your grumpy neighbors or spend quality time with your extended family. And, bonus, they offer a reprieve for awkward guests like me, who answer three questions about writing or work and then hide behind the refreshments table for ten minutes. Here at River Pools, we've learned that pools bring people together like nothing else. We specialize in fiberglass pools, but we're here to help you regardless of whether or not you buy a pool from us. Whether you have 3 guests or 30 (which seems like a lot to me, but still), we want to help you make the most of this bonding experience. And it's so satisfying to pull off a successful party! It all starts with the idea...and with the idea, the details to make it happen. Schedule a day when it won't be too hot and won't rain. Storms and heat exhaustion kill the party mood. On the same note, host the get-together either in the evening or on the weekend. People who work Monday through Friday want to party in the pool, too. Write out the guest list ahead of time. This keeps the number of guests manageable. And if you're cool, send actual invitations—ideally, with water/beach puns. Provide shaded areas for partygoers and for the food/drink station. Everyone needs a break from the sun at some point, plus shade will keep your sustenance in better condition. Odds are good that at least one person will need to use the restroom, and they'll probably be soaking wet from the pool. You may want to drape towels or mats on that pool-to-potty path so that no one drips all over your house. Test your pool water chemistry and all the accessories (filter, lights, water features, etc.) about a week beforehand so that you have time to fix any potential issues. Knock on wood, everything will be fine the day of the party, but you don't want to be caught off guard if a storm rolls in. 
Have a backup plan—another location, a tent, indoor or non-pool activities, or simply a rain date. Remind your guests to bring the basics—and pick them up for yourself if you don't already have them. I used to complain about the public-pool rules against swimming in street clothes, but it isn't just hotels being picky. Regular-people clothes can actually be bad for the pool. Dye can leach into the water. Threads of fabric can clog the drain and filter. The bacteria in street clothes can be a health hazard. Plus, the chlorine can bleach and wear out your clothes, so just wear your swimsuit. It was made for pool water. Pro tip: Hosts and hostesses should run on the assumption that their guests will forget something important. So stock up! If someone does have that issue, you don't have to scramble for supplies. If everyone comes prepared, then so much the better! You should 100% always provide drinks and snacks to your party guests. You don't want them to waste away to nothing in the summer heat. Depending on the time of day, you might want to serve an actual meal too. Alongside the sweets, serve snacks with nutritional value. Swimming is exercise, and pure sugar only goes so far. Pools are awesome by themselves, but it never hurts to include some extras. Toys and accessories make the pool itself more fun and interactive, which is super helpful if kids will attend. And hey, there's no rule saying that adults can't join in. Water balloons are a fun way to get back at anyone who splashes you, but remember that the shredded balloon leftovers will clog up your filter if left in the water. Skimping on decor is lame. But good news: you don't have to spend a billion dollars to spruce up that party. A few key pieces count for a lot! Browns and blacks have no place at a pool party. Any decor should be white or brightly colored to follow through on the summer vibes. Outdoor lights look lovely, plus they extend your party time past dusk. Party hearty, my friends! 
You can adjust your aesthetic with the type of lights you pick: Tiki torches, sky lanterns, and twinkling lights all have different looks. Themed centerpieces create simple, small accents that bring everything together. And you can customize them to fit your theme! Flowers add to the oasis feeling, especially if you can get hibiscus. Seashells evoke, well, the sea. Obviously you'll need regular-size tables for the drinks and food—but consider setting out small tables around the chairs and benches. This will give your guests somewhere to set their food, drinks, and whatever else they might be carting around. Bonus: more opportunities for cute li'l decorative centerpieces! You may already have oversized umbrellas set up with your patio furniture. They're iconic, and they help provide the shade you need. Mason jars give a Pinterest-y, beach-casual look to your pool party. You can use them to hold decorations or to drink from—probably not at the same time. Caution: these jars (and anything similar) should be plastic, not glass, so they won't shatter if dropped. Say hello to the Beach Boys and Hey Ocean! A playlist with fun, summery music will add to the ambience. If you don't have one already, you can find them pre-made on sites like YouTube, Spotify, and 8tracks. Pool parties are fun. You know what else is fun? Safety! Broken glass is no good in any situation. We know this. But if it falls into the pool water, it can be almost impossible to find. Patios can be slippery, so better safe than sorry. Keep an eye on the young'ns, especially those small enough that they have to stay in the shallow end. Hire your local teenager as a temporary lifeguard if you don't want to do this yourself. You should already have a backup plan in case a storm approaches. Take a rain check; you don't want anyone to get hurt. Pool water attracts lightning like they were the only young people at a speed-dating event. 
It's hard to imagine anyone turning down a pool party invitation because the pool was "beneath them." How bad would that pool have to be? We're there to enjoy the sun and swim. Still, fiberglass pools lend themselves especially well to pool parties. Most models include bench seats, and some have a tanning ledge as well. (Fun fact: these features are included in the pool price, not extra, because they're built into the mold design.) They're awesome for the partygoers who want to lounge and socialize partially in the water rather than play in the water. Fiberglass pools also have a smooth surface, unlike concrete pools. You don't have to worry about party guests scraping their skin. Yay for guests leaving your place in one piece! If I can be a little superficial for a moment, fiberglass pools also look good. Vinyl liner pools tend to look a little cheap, and that's because they are. By contrast, fiberglass pools come in many colors and can be designed to match any aesthetic. One common concern is that fiberglass pools aren't big enough for a party, since they only go up to about 40 feet long. But in reality, that's not an issue! Any pool over 35 feet long accommodates a lot of people—and given the patio and the grill, it's rare that more than half of the group is in the pool at any given time. River Pools is a fiberglass pool manufacturer and installer near Richmond, Virginia, with certified dealers across the country. We're enthusiastic about fiberglass pools because we've found that they give pool owners the best experience possible—and customer satisfaction is our priority in everything we do, even if it means you don't buy from us. If you'd like to learn more about pools, take a look at our unbiased comparison of the three pool types, and feel free to get in touch with us if you have any questions. We'd love to help you on your pool journey! Editor's note: This blog article was updated on November 7, 2018.
{ "redpajama_set_name": "RedPajamaC4" }
9,168
\section{\label{sec:level1}Introduction} High-frequency ultrasound (above $\mathcal{O}(10^{6})\text{ Hz}$) is useful in a variety of applications in microfluidic devices, such as droplet manipulation, fluid mixing, and atomization, because its wavelength is compatible with microfluidic length scales. Moreover, the setup for ultrasonic devices is usually simple, and is therefore suitable for constructing small, compact experiments. Nozzle-free atomization methods with high-frequency ultrasonic devices were reported by Ang \textit{et al.}~\cite{ang2015nozzleless} and Collignon \textit{et al.}~\cite{collignon2018improving}. The devices reported in these works are energy efficient and are able to atomize fluid at high flow rates. A nozzle-free implementation is also drastically simpler: the fluid is atomized into small droplets directly from the transducer surface. Due to the complexity of the nonlinear interactions between the fluid and the acoustic wave, many of the physical phenomena related to high-frequency acoustically induced atomization are not well understood. At the relatively low forcing frequencies encountered in everyday settings, atomization is successfully explained by the classical Faraday instability theory. However, forcing frequencies typically employed in modern acoustofluidics severely violate a fundamental assumption of Faraday wave theory: the difference between the excitation frequency and the natural resonant frequency should be much smaller than the excitation frequency~\cite{perlin2000capillary,binks1997nonlinear}. In systems that violate the Faraday conditions, there is a complete absence of any classically predicted originating mechanism for resonant capillary wave generation, yet such waves are nonetheless found at scales that are visible to the eye. Acoustically induced capillary waves were first described by Rayleigh in the nineteenth century~\cite{rayleigh1879capillary}.
Over the last several decades, researchers have begun employing ultrasound to force the dynamics of droplets and investigating the effects. With a power input that is high enough, capillary waves on the liquid-gas interface lose their stability and small droplets are atomized from the capillary wave crests~\cite{lang1962ultrasonic}. Compared to traditional jet nebulizers, ultrasonic nebulizers are more portable, efficient, and easy to use. Ultrasonic nebulizers are widely used in pulmonary drug delivery~\cite{taylor1997ultrasonic}, surface coating~\cite{majumder2010insights}, and many other fields. However, most of these studies focus on the behavior of droplets excited either with low-frequency vibration of a plate~\cite{whitehill2010droplet} or with modulated acoustic waves using ultrasonic devices. Trinh and Wang modulated the ultrasound to excite the vibration of droplets in a liquid-liquid system~\cite{trinh1982experimental}. Baudoin~\cite{baudoin2012low} modulated a 20~MHz surface acoustic wave at frequencies lower than 150~Hz; less energy is required in this case to vibrate and move the droplet relative to the unmodulated signal. In these studies, although ultrasonic waves are used, it is low-frequency resonant interaction with the high-frequency vibration that generates the oscillation of the droplets, while the original high-frequency ultrasound is regarded as a static radiation stress source. The excitation frequencies used in these studies are near the resonant frequencies of the droplets. Blamey~\textit{et al.}~\cite{Blamey:uq} studied capillary waves induced with an ultrasonic forcing frequency of $\mathcal{O}[10^{7}\text{ Hz}]$. Remarkably, they observed capillary waves at the droplet's natural frequency ($\mathcal{O}[10^{2}\text{ Hz}]$) and a \emph{complete absence} of evidence of Faraday instability. The mechanism of energy transfer across these vastly disparate scales was left unresolved.
As for the theoretical study of droplet vibration excited by acoustic waves, Murray \textit{et al.}~\cite{murray1999droplet} first applied the boundary element method (BEM) to simulate a droplet's response to acoustic excitation. Lyubimov theoretically studied the oscillatory behavior of a hemispherical droplet on an oscillating substrate~\cite{lyubimov2006behavior}. The model in that work explicitly considers the effect of the pinned contact line. These numerical studies are limited to forcing scenarios where the ratio of the excitation frequency to the droplet resonant frequency is less than ten. Models and observations limited to this regime are insufficient for characterizing droplet vibration in response to ultrasonic acoustic forcing, since the forcing frequency typically exceeds the droplet resonant frequency by many orders of magnitude. An important assumption made in theoretical studies of droplet oscillation is that the perturbation is infinitesimal~\cite{strani1984free}. For low-frequency ultrasound, since the wavelength is much larger than the droplet's characteristic length, one can assume that the shape of the interface is only infinitesimally distorted when the input power of the ultrasound is small. In this case, a droplet's shape can be expressed as a sum of Legendre polynomials. This approach is infeasible for droplets excited by high-frequency ultrasonic waves. When the wavelength of the acoustic wave is comparable to or even smaller than the radius of the droplet, induced pressure nodes will cause the droplet surface to elevate and statically deform. This static abnormal curvature is observed in Manor's study of a 2~$\mu\ell$ droplet atop a lead zirconate titanate (PZT) thickness-polarized disk transducer operating at 2~MHz~\cite{manor2011substrate}. Suryanarayana~\cite{suryanarayana1991effect} studied the effect of the shape change caused by the acoustic radiation stress for levitated droplets.
However, in that study the shape change is regarded as a static deformation, and the interaction between the fluid shape and the pressure distribution is ignored. In this letter, we describe a physical model that explains the energy transfer from high-frequency ultrasonic forcing (MHz and beyond) to low-frequency capillary waves on the droplet surface, based on the interaction between the acoustic radiation force and surface tension. \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{CW_Figure_exp_updated_2.eps} \caption{(a) Experimental setup with the DHM system and thickness-mode device; laser light comes from the laser condenser at the bottom, passes through the droplet sample placed on the top surface of the acoustic device, and is then collected by the lens above. (b) Image containing information on the phase difference at the center of the droplet's surface from the DHM system. (c) A droplet placed on the surface of the acoustic device in the transparent window area. (d) Flow chart of the algorithm simulating the shape of the droplet and the acoustic pressure distribution.} \label{experiment} \end{figure*} \section{Experimental Methods} The ultrasonic devices were fabricated from $128^{\circ}$ Y-rotated, X-propagating lithium niobate wafers with 500~$\mu$m thickness and mirror-finish polishing on both sides (Roditi, London, UK). On each side of the wafer, a sputter deposition method (Denton Discovery 18, New Jersey, USA) was used to deposit a 400~nm layer of chromium and a layer of gold. These provide electrodes for the thickness-mode vibration of the substrate. One 0.5~cm $\times$ 0.5~cm transparent area was left without gold deposition at the center of each transducer for the digital holographic microscope (DHM) laser to pass through the media during experiments (Fig.~\ref{experiment}(a)).
Thickness-mode vibrations were induced by applying an amplified voltage potential at a frequency matched to the thickness resonance of the device (6.6~MHz for the 500~$\mu$m thick wafer). A 5-$\mu\ell$ droplet of deionized water was dispensed onto the center of the transparent window using a measuring pipette (2--20~$\mu\ell$, Thermo Fisher Scientific, USA). This droplet volume is used so that the radius of the droplet is smaller than the capillary length, $l = \sqrt{\gamma/(\rho g)}$, where $\gamma$ is the surface tension and $\rho$ is the liquid density. For the media used in this study, capillary forces dominate at the droplet surface and the effect of gravity is inconsequential. The resonant frequency and vibration amplitude per unit volt input for the transducer were characterized with laser Doppler vibrometry (LDV; UHF-120, Polytec, Waldbronn, Germany). Measuring microscale vibrations on the surface of droplets is challenging due to the size and speed of the dynamics under consideration (nanometer amplitudes at timescales as small as a few milliseconds). While an LDV is suitable for single-point and scanning measurements of a surface with well-defined periodic vibrations, the DHM (transmission, Lyncee-tec, Zurich, Switzerland) utilizes cutting-edge metrology to characterize interfacial dynamics across an entire region of interest on the liquid-air boundary. The transmission DHM system used in this study generated three-dimensional holographic data by interpreting comparative phase delays between a laser that passed through the dynamic medium and a reference laser traversing an unobstructed path. Although the phase is only unique up to a factor of $2\pi$, continuous changes in space and time allow for phase unwrapping of the two-dimensional images to reliably overcome this constraint. When combined with the refractive index of the medium, the unwrapped images provide high-accuracy measurements of time-dependent interfacial displacements.
This made the DHM system particularly well-suited for measuring capillary waves on an air-fluid interface. We employed a high-speed camera (FASTCAM NOVA S12, Photron, San Diego, CA, USA) integrated with the DHM. The system offers a recording rate of up to 116,000~fps and provides real-time three-dimensional surface structure data with high spatial resolution (nanometer-scale accuracy in the normal vertical $y$-direction and micron-scale accuracy in the transverse $xz$-plane). \section{Physical model} The acoustic impedance can be calculated from the density of the medium and the speed of sound, $Z = \rho c$. The reflection coefficient, which describes what fraction of the acoustic power is reflected at the interface, can be expressed as $r = \left(\frac{Z_l-Z_g}{Z_l+Z_g}\right)^2$, where $Z_l$ and $Z_g$ are the acoustic impedances of the liquid and gas. With water and air, as used in this example, more than 99\% of the acoustic power carried in the water is reflected at the air-water interface, implying that most of the vibrational energy will be retained within the droplet. The attenuation length of the acoustic wave can be estimated by Stokes' law~\cite{lighthill1978acoustic}, $\alpha_l^{-1}=\frac{\rho v^3}{4 \pi^2 f^2 (\frac{4}{3}\mu+\mu_b)}$, where $\rho$ is the density of the liquid, $v$ is the sound velocity, $f$ is the frequency, and $\mu$ and $\mu_b$ are the dynamic and bulk viscosity, respectively; the attenuation length of 1-MHz-order acoustic waves is therefore more than a meter. The acoustic waves are expected to be reflected multiple times and to form compressed and rarefied regions. The acoustic pressure pattern formed in the droplet thus affects the fluid surface in a complicated way.
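As a worked estimate with nominal room-temperature values (assumed here for illustration, not taken from measurements: water $\rho \approx 10^{3}$~kg/m$^3$, $c \approx 1480$~m/s; air $\rho \approx 1.2$~kg/m$^3$, $c \approx 343$~m/s),
\begin{equation*}
r = \left(\frac{Z_l-Z_g}{Z_l+Z_g}\right)^2 \approx \left(\frac{1.48\times10^{6}-4.1\times10^{2}}{1.48\times10^{6}+4.1\times10^{2}}\right)^2 \approx 0.999,
\end{equation*}
so roughly 0.1\% of the incident acoustic power crosses the water-air interface. With $\mu \approx 1.0\times10^{-3}$~Pa\,s and $\mu_b \approx 2.9\times10^{-3}$~Pa\,s for water, Stokes' law gives $\alpha_l^{-1} \approx 19$~m at $f = 1$~MHz, consistent with the meter-scale attenuation length quoted above.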
Mass and momentum conservation equations are used in the analysis~\cite{nyborg1965acoustic,riaud2017influence} to account for this complexity: \begin{subequations} \begin{align} \begin{split} \frac{\partial\rho}{\partial t}+\nabla\cdot(\rho u)=0 \end{split}\\ \begin{split} \rho\frac{\partial u}{\partial t}+\rho(u\cdot\nabla)u=-\nabla p+\mu\nabla^2u+(\mu_B+\frac{\mu}{3})\nabla(\nabla\cdot u) \end{split} \end{align} \label{NS} \end{subequations} where $\rho$ is the fluid density, $u$ is the fluid velocity, $p$ stands for the fluid pressure, and $\mu$ and $\mu_B$ represent the shear and bulk viscosity, respectively. Since the vibrational velocity from the ultrasonic device required to initiate the capillary wave in our study is small, the so-called 'slow streaming' assumption \cite{Friend:2011ss} can be used in the analysis, and the physical quantities in eqns.~\ref{NS}(a) and \ref{NS}(b) can be decomposed into three contributions~\cite{hunt1955notes,nyborg1965acoustic} as: \begin{subequations} \begin{numcases}{} u = u_0 +\epsilon u_1 +\epsilon^2u_2 +\mathcal{O}[\epsilon^3]\\ p = p_0 +\epsilon p_1 +\epsilon^2p_2 +\mathcal{O}[\epsilon^3]\\ \rho = \rho_0 +\epsilon \rho_1 +\epsilon^2\rho_2 +\mathcal{O}[\epsilon^3] \end{numcases} \label{decompose} \end{subequations} Here $u_0$, $p_0$ and $\rho_0$ are hydrostatic terms, and the quantities with subscripts 1 and 2 refer to the first- and second-order perturbations. $\epsilon$ is the Mach number, defined as the ratio of the fluid velocity to the speed of sound ($\epsilon = u_1/c_0$); because the fluid velocity is small, $\epsilon\ll 1$. Substituting the expansions \ref{decompose}(a-c) into eqns.~\ref{NS}(a) and \ref{NS}(b) and grouping terms in powers of $\epsilon$, the equations can be separated into three parts: the zeroth-, first-, and second-order components of the acoustic perturbation. The expression of the first-order acoustic perturbation represents the behavior of the linear acoustic waves in the fluid.
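That the first-order system describes linear acoustic waves can be illustrated numerically. The sketch below integrates the 1D first-order continuity and momentum equations (with the linear equation of state $p_1 = c_0^2\rho_1$) on a staggered grid and checks that a small right-going density pulse travels at the speed of sound; the grid, pulse shape, and water-like values of $c_0$ and $\rho_0$ are arbitrary assumptions chosen only for the demonstration:

```python
import math

c0, rho0 = 1480.0, 1000.0          # speed of sound and base density (water-like, assumed)
L, N = 1.0, 2000                   # domain length (m) and number of cells
dx = L / N
dt = 0.4 * dx / c0                 # CFL-stable time step

# Right-going simple wave: rho1 a small Gaussian, u1 = c0 * rho1 / rho0
rho1 = [1e-6 * math.exp(-(((i + 0.5) * dx - 0.25) / 0.02) ** 2) for i in range(N)]
u1 = [c0 / rho0 * 1e-6 * math.exp(-((i * dx - 0.25) / 0.02) ** 2) for i in range(N + 1)]

n_steps = int(0.3 / (c0 * dt))     # let the pulse travel roughly 0.3 m
for _ in range(n_steps):
    # first-order momentum: rho0 du1/dt = -dp1/dx, with p1 = c0^2 rho1
    for i in range(1, N):
        u1[i] -= dt * c0**2 * (rho1[i] - rho1[i - 1]) / (rho0 * dx)
    # first-order continuity: drho1/dt = -rho0 du1/dx
    for i in range(N):
        rho1[i] -= dt * rho0 * (u1[i + 1] - u1[i]) / dx

peak_x = (max(range(N), key=lambda i: rho1[i]) + 0.5) * dx
expected_x = 0.25 + c0 * n_steps * dt
print(f"peak at {peak_x:.3f} m, expected {expected_x:.3f} m")
```

The pulse amplitude is chosen so small (Mach number $\sim 10^{-9}$) that the neglected higher-order terms are negligible, in keeping with the slow-streaming assumption above.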
Since the size of the droplet is small, the hydrodynamic Reynolds number in this case is small and the equations can be further simplified~\cite{zarembo1971acoustic} as: \begin{subequations} \begin{align} \begin{split} \frac{\partial \rho_1}{\partial t}+\rho_0(\nabla\cdot u_1) &= 0 \end{split}\\ \begin{split} \rho_0\frac{\partial u_1}{\partial t} &= -\nabla p_1 \end{split} \end{align} \label{NS2} \end{subequations} Together with the linear equation of state $p_1 = c_0^2\rho_1$, these equations can be used to describe the acoustic wave in a fluid with small Mach and Reynolds numbers. We solve for the radiation pressure using the linear pressure wave equations above. The acoustics are modified to include reflection at the interfaces and attenuation along the path of propagation. We use the finite element method in the frequency domain to obtain the pressure distribution (COMSOL Multiphysics, COMSOL Inc., Burlington, MA USA). The impedance boundary condition is used to simulate the reflection of the acoustic wave at the fluid-air interface. The acoustic wave pressure is assumed to decay exponentially with distance when travelling in the fluid. The attenuation factors are calculated based on the properties of the fluid and the acoustic waves \cite{takamura2012physical}. Manor \textit{et al.}~\cite{manor2011substrate} have reported that the acoustic radiation pressure on an air-water interface generated with an ultrasonic actuator operating at high frequency can cause the droplet to deform. The pressure jump at the interface and the surface curvature (\emph{i.e.}, shape) are related according to the Young-Laplace boundary condition. We use an axisymmetric droplet shape analysis method~\cite{del1997axisymmetric} to numerically minimize the droplet shape error subject to a constant volume constraint $V_0$ and a fixed contact length constraint $l_0$.
Thus, the classical Laplace equation can be expressed as a function of the arc length $s$ on the interface and the tangential angle $\theta$: \begin{equation} \frac{d\theta}{ds} = 2b+cz-\frac{\sin\theta}{x}+\frac{P_a}{\gamma}, \label{laplace} \end{equation} with \begin{subequations} \begin{align} {dx}/{ds} &= \cos\theta\\ {dy}/{ds} &= \sin\theta\\ {dV}/{ds} &= \pi x^2\sin\theta \end{align} \label{three} \end{subequations} The problem is simplified to a two-dimensional case based on the axisymmetric assumption; $x$, $y$ and $V$ represent the position and the differential volume at the corresponding position. The acoustic pressure $P_a$ on the interface, obtained from the preceding acoustic simulation, is extracted to calculate the surface shape of the fluid. We name this simulation process the \emph{pressure-interface feedback model} since it mimics the feedback interplay between the acoustic pressure distribution and the shape of the droplet's interface. In eqn.~(\ref{laplace}), $b$ is the curvature at the apex, which is treated as an additional variable in order to solve the equation with a Neumann boundary condition (${d\theta}/{ds}=b$ at $s=0$). It can be seen from eqn.~(\ref{laplace}) that only the change of the angle over different locations $s$ is updated with this numerical method; this means that though the drop's surface shape can be calculated, a step size $ds$ is required to determine the length of the curve. In this paper, a fixed step size ($10^{-7}$~m), tiny compared to the drop's dimensions, is used to calculate the drop shape and increase accuracy. A small step size requires a finer grid and a larger number of points when extracting pressure data from the simulation result. With an artificially defined step size, it remains difficult to obtain an exact solution of the drop shape, though the overall shape should be correct. To overcome this problem, the shape of the droplet is rescaled so that the contact length matches $l_0$.
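The stepwise integration of eqns.~\eqref{laplace} and \eqref{three} can be sketched in a simple, testable limit: zero acoustic pressure ($P_a = 0$), no gravity ($c = 0$), and an assumed apex curvature $b$. In this limit the solution is a spherical cap, so integrating to a $90^\circ$ tangent angle should recover the volume of a hemisphere of radius $1/b$. The step size and the value of $b$ below are illustrative assumptions:

```python
import math

def integrate_shape(b, ds=1e-6, theta_end=math.pi / 2):
    """Forward-Euler march of d(theta)/ds = 2b - sin(theta)/x, dx/ds = cos(theta),
    dy/ds = sin(theta), dV/ds = pi x^2 sin(theta), with P_a = 0 and c = 0.
    Returns the accumulated volume when the tangent angle reaches theta_end."""
    theta, x, y, V = 0.0, 0.0, 0.0, 0.0
    while theta < theta_end:
        # Neumann condition at the apex: d(theta)/ds = b where x = 0
        dtheta = b if x == 0.0 else 2.0 * b - math.sin(theta) / x
        V += math.pi * x**2 * math.sin(theta) * ds
        x += math.cos(theta) * ds
        y += math.sin(theta) * ds
        theta += dtheta * ds
    return V

b = 750.0                                # assumed apex curvature, 1/m (radius 1/b ~ 1.33 mm)
V = integrate_shape(b)
V_exact = 2.0 * math.pi / (3.0 * b**3)   # hemisphere volume of radius 1/b
print(f"integrated volume {V:.3e} m^3, exact hemisphere volume {V_exact:.3e} m^3")
```

In the full model, the extracted acoustic pressure $P_a(s)$ would enter $d\theta/ds$ at each step, and the apex curvature $b$ is not known in advance — which is what the shooting method described next resolves.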
Another challenge is that the curvature $b$ is unknown without direct measurement. A shooting method is therefore applied to generate results from eqns.~\eqref{laplace} and \eqref{three}. At each iteration, a different value of $b$ is chosen and used to solve the differential equation \eqref{laplace} until the volume conservation condition is satisfied ($\sum_i dV_i = V_0$). The pressure-interface feedback model is implemented with two nested loops, as shown in Fig.~\ref{experiment}(b): the outer loop seeks the value of the curvature at the droplet's apex via the shooting method, and the inner loop solves the differential equation in a stepwise fashion, terminating when the last pressure data point is reached. After calculating the surface shape of the droplet, the interface is updated and imported back into the finite element analysis to determine the acoustic pressure distribution for the next quasi-static state calculation. \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{CW_Figure1_updated.eps} \caption{Vibration pattern of the droplet collected with the DHM system before and after the power is applied to the acoustic devices; the amplitudes of the acoustic waves are 1.1~nm (a), 1.6~nm (b), and 2.3~nm (c), respectively. (d)-(f) are the FFTs of the results in (a)-(c), respectively, after the power is applied to the acoustic devices.} \label{f2} \end{figure*} We next seek to correlate the interframe time scale $\Delta t$ to an approximate expression derived from nondimensional analysis. The equation $\Delta x = \frac{1}{2}a\Delta t^2$ is used since, initially, the vibrational velocity on the droplet surface is zero. The expression $\Delta x$ represents the displacement of the droplet's interface at its apex, and $a$ is the acceleration. Thus the inter-frame time can be estimated as $\Delta t = \sqrt{2\Delta x/a}$.
Acceleration of the interface is caused by the acoustic pressure $P_a$; if we treat the points on the interface as differential surface areas $A$, the relationship between the acoustic pressure and the acceleration can be written as $P_a A = ma$. Thus $a = \frac{P_aA}{\rho V}=\frac{P_a}{\rho \Delta x}$. Substituting this acceleration into the expression for the inter-frame time, we obtain $\Delta t = \sqrt{\frac{2\rho}{P_a}}\Delta x$. Based on the simulation results, the inter-frame time ranges from $10^{-4}$ to $10^{-3}$~s, which is much larger than the period of the excitation signal ($10^{-7}$~s). \section{Results and Discussion} Experiments revealed three essential dynamical regimes: static shape change, steady vibration, and nonlinear vibration, as shown in Fig.~\ref{f2}~(a)-(c), respectively. Care was taken to isolate the system from ambient perturbations such as vibration and localized air currents. The residual vibration with an amplitude of around 60~nm on the droplet surface is due to the high-speed camera fan vibrating the observation system (the high-speed camera is rigidly fixed to the observation tray). We then studied the effect of different on-source vibrational amplitudes on the oscillation of the droplet surface; we controlled the amplitude by tuning the input power to the ultrasonic devices. The amplitudes of the thickness-mode vibrations on the transducer surface were detected with the LDV. The noted dynamical regimes correspond to the on-source vibration amplitude. When the on-source vibrational amplitude is small ($\leq 1.5$~nm), a sudden change of the droplet height is observed at the instant acoustic excitation is applied (Fig.~\ref{f2} (a)). This occurs due to a sudden change in the pressure at the interface resulting from acoustic radiation forces.
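As a brief aside, the inter-frame time estimate $\Delta t = \sqrt{2\rho/P_a}\,\Delta x$ derived above is easy to evaluate. The sketch below uses representative (assumed) values of the radiation pressure and interface displacement, chosen only to show that $\Delta t$ lands in the quoted $10^{-4}$ to $10^{-3}$~s range, far above the $\sim 10^{-7}$~s period of the ultrasonic drive:

```python
import math

rho = 1000.0    # water density, kg/m^3
P_a = 10.0      # representative acoustic radiation pressure, Pa (assumed)
dx = 1e-5       # representative apex displacement, m (assumed)

dt = math.sqrt(2.0 * rho / P_a) * dx
drive_period = 1.0 / 6.6e6
print(f"inter-frame time {dt:.2e} s vs drive period {drive_period:.2e} s")
```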
In the static shape change mode, an increase in input power to the transducer corresponds to an increase in deformation, and the amplitude of the droplet surface oscillation is unchanged before and after the droplet's shape change. Evidently the high-frequency acoustic wave is not directly interacting with the low-frequency oscillation of the droplet surface. This suggests the existence of another mechanism facilitating energy transfer from the ultrasonic wavenumbers to the capillary wave's wavenumbers. The natural oscillation of the droplet surface can be predicted with Rayleigh's equation, \begin{equation} f = \frac{1}{2\pi}\sqrt{\frac{l(l+1)(l+3)\gamma}{\rho R^3}}, \end{equation} where $R$ is the radius of the droplet, $\gamma$ is the surface tension, and $l=1,2,3,...$ is the mode number. In this case, the first natural frequency is 80~Hz, which agrees with the stationary frequency transform in Fig.~\ref{f2} (d). The results in Fig.~\ref{f2}~(b) are generated with a 1.5~nm input. The oscillation amplitude on the droplet surface is distinct from the ambient fan perturbation in both amplitude and frequency. A sudden shape change can still be observed when the input signal is initiated. Following the shape change, the droplet exhibits linear, stable vibrations. Frequency peaks associated with these vibrations are plotted in Fig.~\ref{f2} (e); these peaks are located at 178, 357, and 537~Hz. No super-harmonic modes of the natural frequency are observed. To clarify how the energy is transferred from the ultrasonic device's vibrations to capillary waves, we conducted particle image velocimetry (PIV) experiments with a high-speed camera (FASTCAM MINI, Photron, Japan). Acoustic streaming can be generated by the nonlinear interaction between an acoustic wave and a fluid \cite{lighthill1978acoustic}. This phenomenon is commonly seen when the frequency of the ultrasound is high, since a higher proportion of the acoustic energy is attenuated in the fluid to drive the flow \cite{dentry2014frequency}.
However, no flow is observed at the source vibration amplitudes that drive steady vibration on the droplet surface. Thus the observed capillary waves are not a result of any flow behavior. As the on-source vibration amplitude is increased, nonlinearity plays a larger role in the capillary wave dynamics. Evidence of nonlinearity is observed in Fig.~\ref{f2} (c) and (f). In Fig.~\ref{f2} (c), the wave pattern is nonuniform and no obvious period of oscillation can be directly observed. The peaks in the frequency space are broadened due to non-resonant interactions between waves with different frequencies (Fig.~\ref{f2} (f)). These interactions generate waves with new wavelengths and frequencies. With a low level of nonlinearity in a finite domain, a wave that is not congruous with the resonance conditions of the droplet will vanish, while newly generated waves that lie within a spectral range determined by the nonlinear broadening of the dispersion relation will remain~\cite{berhanu2018turbulence,kartashova2010nonlinear}. We then confirmed the normal-axis DHM measurements of the rapid, transient initial shape change with direct high-speed transverse profile imaging. Since the initial shape change of the droplet is in the submicron to micron scale, we used a high-speed camera with a 5X objective lens (M Plan Apo 5x objective, Mitutoyo, Japan) to observe the droplet from the side and capture the height difference before and after the power was applied to the ultrasonic device. We then binarized the camera images to calculate the height change at the droplet apex. \begin{figure}[!b] \includegraphics[width=0.5\textwidth]{CW_Figure_particle_updated.eps} \caption{Two images from the particle tracking experiments (a) before and (b) after the acoustic device was turned on.
A ring-shaped pattern was formed by acoustic streaming-driven recirculation, with nodes corresponding to the locations of the high-pressure regions in the droplet; this is comparable to (c) the two-dimensional (side view) acoustic pressure distribution in the water droplet, taking advantage of the axisymmetric nature of the droplet. The particles' positions from the particle tracking experiments (d, blue line) compare favorably to the peaks in pressure from the simulation (d, red line).} \label{fp} \end{figure} In order to resolve the pressure distribution, we tracked the migration of a homogeneous dispersion of fluorescent polystyrene particles (3~$\mu$m Fluoresbrite YG Microspheres at a concentration of $4.99 \times 10^5$~particles/m$\ell$; excitation and emission maximum wavelengths at 441 and 485~nm, respectively, Polysciences, Warrington, PA, USA) using high-speed imaging. The particle size was selected to be much smaller than the wavelength of the progressive acoustic wave in order to mitigate the influence of direct acoustic radiation forcing. This ensures that the particles migrate only due to local acoustic streaming, which delivers the particles to the high-pressure regions — the quiescent nodes amid the recirculating flow adjacent to the substrate surface. We illuminated the particles with a blue laser sheet generator (488~nm wavelength). To decrease the background light intensity, a longpass optical filter (450~nm cut-on; FEL0450, ThorLabs, Newton, NJ USA) was placed in the camera light path to block the excitation light, leaving only the fluorescence signal to pass to the camera. The thickness of the laser sheet is 200~$\mu$m; only the particles within this height range are illuminated. We set up the laser to pass through the bottom of the droplet so that more particles could be visualized. Figure~\ref{fp} (a), (b) shows the distribution of the particles before and after acoustic excitation.
The particles are uniformly distributed in the droplet before the acoustic wave is generated and migrate to well-defined positions forming a ring-like pattern during excitation. \begin{figure*}[!t] \centering \includegraphics[width=0.8\textwidth]{CW_Figure2_updated.eps} \caption{The simulation results of the motion of the apex of a droplet's fluid interface excited by thickness-mode vibrations: (a) 1.1~nm amplitude vibrations are insufficient to induce vibration after the initial, nearly static height change. (b) A 1.5~nm vibration was required in the simulation for the droplet's interface to exhibit steady vibrations. Increasing the vibration to (c) 1.9~nm on the droplet surface amplifies the motion and higher-order motions become apparent. These vibration amplitude-driven thresholds from a static shape to steady vibration and nonlinear vibration are consistent with the experimental results in Fig.~\ref{f2}.} \label{fs} \end{figure*} The results of the acoustic pressure simulations are shown in Fig.~\ref{fp} (c). The simulations take into account wave reflection and attenuation. The complex distribution of positive and negative pressure nodes is caused by the interaction of acoustic waves with the interface as they are reflected multiple times within the droplet. The pressure wave interactions lead to local pressure nodes. Within a stable oscillating pressure distribution, particles are driven from positive pressure nodes to the closest positions with negative acoustic pressure. The results of our simulation are confirmed by the experimental particle migration measurements. The number of particles at each radial distance is counted, and the normalized histogram gives the blue curve in Fig.~\ref{fp} (d). Each point on the curve represents the probability that a particle will be located at a specific distance from the droplet center after migration.
To compare the experimental data to the simulated positions of the negative pressure nodes, we take the average of the pressure simulated in different layers along the $y$ axis at the bottom of the droplet (blue area shown in Fig.~\ref{fp}(c)). Since the particles migrate toward the closest negative nodes, the probability associated with a particle migrating to a given position is proportional to: (i) the pressure, and (ii) the number of particles in the region. We divide the illuminated area into several regions according to the midpoints between any two neighboring negative pressure nodes (these regions are delineated by the black lines through the midpoints in Fig.~\ref{fp}(c)). We calculate the ratio of particle counts in the different regions by comparing the areas of the annular regions. Particles within a given region are assumed to migrate to the local negative pressure node. The red curve in Fig.~\ref{fp}(d) represents the normalized probability corresponding to the migrated particle positions based on the simulated pressure results. The data collected from the particle tracking experiments find good agreement with the magnitude, number, and location of the pressure nodes predicted by the pressure-interface feedback model. Since there is no significant net acoustic streaming flow within the droplet, these results provide strong evidence for the existence of the spatially localized, stable pressure distribution. With high-frequency ultrasound, the acoustic waves' wavelength is on the order of, or smaller than, the size of the droplet. When properly accounted for, the effects of reflection and attenuation of the acoustic waves and their interactions serve to redistribute pressure within the droplet in a manner that is highly consistent with our observations. This demonstrates a clear, intuitive mechanism for the noted energy transfer across wavenumbers spanning many orders of magnitude.
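The annular-area bookkeeping described above — regions bounded by the midpoints between neighboring negative-pressure nodes, with expected particle counts proportional to region area — can be sketched as follows. The node radii and contact radius below are placeholders (assumed), not the simulated values:

```python
import math

node_radii = [0.2e-3, 0.5e-3, 0.9e-3]   # radial positions of negative-pressure nodes, m (assumed)
contact_radius = 1.2e-3                 # outer edge of the illuminated region, m (assumed)

# Region boundaries: 0, the midpoints between neighboring nodes, and the contact radius
edges = [0.0]
edges += [0.5 * (a + b) for a, b in zip(node_radii, node_radii[1:])]
edges.append(contact_radius)

# A uniformly dispersed particle migrates to the node of the region it starts in, so the
# expected fraction collected at each node is that region's share of the annular area
areas = [math.pi * (r2**2 - r1**2) for r1, r2 in zip(edges, edges[1:])]
total = sum(areas)
probs = [a / total for a in areas]

for r_node, p in zip(node_radii, probs):
    print(f"node at {r_node*1e3:.2f} mm: expected particle fraction {p:.3f}")
```

Because annular area grows with radius, outer nodes are expected to collect a larger share of the particles even for evenly spaced nodes, which is why the raw counts must be normalized by region area before comparison with the simulated pressure.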
This is clearly different from the mechanism(s) proposed by classical theory. In order to analyze the dynamic droplet shape change induced by acoustic pressure feedback, we constructed a pressure-interface model by extracting the simulated pressure data from the surface of the droplet and utilizing the data within a modified Young-Laplace boundary condition (eqn.~\eqref{laplace}). Here, the surface tension balances the acoustically-driven dynamic pressure jump by inducing local curvature. The direction of the change is determined by the sign of the local pressure change. At each step in the simulation, the shape deduced by minimizing the curvature against the Young-Laplace boundary condition is used to compute an updated pressure distribution. This update is then used with the Young-Laplace condition to update the droplet shape. Iterating accordingly, we obtain a time series of states of the droplet shape and pressure distribution. Figure \ref{fs} (a) shows the simulated case for a small on-source vibrational amplitude: the transducer amplitude is 1.1~nm. The droplet experiences a nearly instantaneous height change when the input is switched on and stabilizes with the new shape; no further changes appear. This corresponds to experimental observations of the static mode, where capillary waves are not generated. However, with a slightly greater input amplitude of 1.5~nm, capillary waves are generated on the droplet surface. In the simulation results shown in Fig.~\ref{fs} (b), one observes that at this amplitude, the droplet apex vibrations correspond to a capillary wave with an amplitude of 200~nm. This prediction is also consistent with the experimental results. As the forcing amplitude is further increased to 1.9~nm, the amplitude of the resulting capillary waves increases to 1~$\mu$m and nonlinear vibration patterns emerge, as shown in Fig.~\ref{fs} (c).
\begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{CW_Figure5_updated.eps} \caption{Experimental results of a 90\%-10\% glycerol-water solution droplet excited with acoustic waves in the static mode (a) and the steady vibration mode (b) (3.9~nm amplitude). (c) shows the laminar pattern of the pressure distribution in the 90\%-10\% glycerol-water solution droplet. (d) The simulation results of the vibration generated with 3.9~nm acoustic waves at the apex of the droplet.} \label{f5} \end{figure} The complexity of the pressure distribution is due not only to wave reflections; weak attenuation is also an important factor. The attenuation factor for acoustic waves is~\cite{morse1986theoretical} $2(\frac{\alpha V}{\omega})^2=\frac{1}{\sqrt{1+\omega^2\tau^2}}-\frac{1}{1+\omega^2\tau^2}$, where $\alpha$ is the attenuation coefficient, $V$ is the sound velocity, $\omega$ is the angular frequency, and $\tau$ is the relaxation time. Here the relaxation time is given by $\tau = \frac{4\mu/3+\mu_b}{\rho v^2}$. For water droplets, the dynamic viscosity $\mu$ and bulk viscosity $\mu_b$ are $0.89$~mPa$\cdot$s and $0.2$~mPa$\cdot$s, respectively. For an acoustic wave at 6.6~MHz, the attenuation distance ($1/\alpha$) is therefore 0.034~m, so the acoustic waves are reflected multiple times until fully attenuated within a millimeter-sized water droplet like those considered here. To study attenuation effects on capillary wave formation, we conducted experiments and simulations for a 90\%-10\% glycerol-water solution. Glycerol is used since it has a density (1260~kg/m$^3$) and surface tension (63.4~mN/m) similar to water, but a substantially higher attenuation (attenuation distance $1/\alpha = 2.8\times10^{-4}$~m). This allows us to isolate the effect of attenuation on capillary wave formation. The results for the solution are similar to those for water. With a small vibrational amplitude, only sudden height jumps are observed (Fig.~\ref{f5} (a)) and no capillary waves form.
The vibration amplitudes on the interface before and after the droplet height jump are the same, which means that the droplet only changes from one steady state to another in the process. Figure \ref{f5} (b) shows the vibration of the droplet excited by an input with 3.9~nm amplitude. The details of the vibrations can be seen in the inset. The vibration at the droplet apex itself is of the linear sine wave form: with a 3.9~nm amplitude input, capillary waves are generated on the surface and the droplet reaches a steady vibration mode. Compared to the vibration of the water droplet, the solution droplet took significantly longer to reach the steady vibration state after the sudden droplet shape change. Simulations were conducted with the same parameters used in the experiments, and the results for the 3.9~nm input amplitude are shown in Fig.~\ref{f5} (c), (d). Figure \ref{f5} (c) shows the acoustic pressure distribution in the droplet. A laminar pressure distribution is observed with nodal formation near the top portion of the droplet. The input amplitude threshold for capillary wave generation is confirmed by experiment, as shown in Fig.~\ref{f5} (b), further validating the model. A rapid capillary wave amplitude decay is observed in both the simulation and the experiment. The ratio of the decay time to the period of the steady vibration is roughly five in both cases, as shown in Fig.~\ref{f5} (b) and (d). \section{Conclusions} A new method to observe the onset and growth of capillary wave motion on fluid interfaces driven by high-frequency acoustic waves has been provided using high-speed digital holographic microscopy. The results produced from this method are compared to a new approach to the solution of capillary wave dynamics through the use of a hybrid solution method. This method employs a two-step process, first producing the pressure distribution on the fluid interface from the relatively fast acoustic standing wave distribution in the acoustic cavity formed by the droplet.
This step is followed by a computation of the new shape of the fluid interface that would arise as a consequence of the new pressure distribution, taking into account the acoustic pressure variation at the interface. The necessary increment in time between steps was determined from a simple nondimensional analysis. Remarkably, the correlation between the computational results produced using this method and the experimental observations was good. Further refinements of this method are likely to produce additional insight into the complex phenomena of capillary wave generation. \section{Acknowledgments} The work presented here was generously supported by a research grant from the W.M.\ Keck Foundation to J.\ Friend. The authors are also grateful for the substantial technical support by Yves Emery and Tristan Coloumb at Lyncee-tec, and Eric Lawrence, Mario Pineda, Michael Frech, and Jochen Schell among Polytec's staff in Irvine, CA and Waldbronn, Germany. Fabrication was performed in part at the San Diego Nanotechnology Infrastructure (SDNI) of UCSD, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (Grant ECCS--1542148).
I recently heard a story on our local NPR station about the work of Suzanne Simard. Simard is a Canadian ecologist who has spent the last four decades studying forest floors and tree survival. Thirty years ago, based on lab experiments, Simard decided to go out into the field and test how trees were connected by placing a bag over fir and birch seedlings individually. Then she injected a bag with radioactive carbon-14 gas and waited for the tree to soak up the carbon, turning it into sugars, and sending it to its root system. She used different carbon gases on different plants. What she discovered was shocking. Over and over again, the tree which had received a large dose of carbon (a good thing) passed it on underground to its neighboring tree. The results could be noted on the neighbor tree with a geiger counter! Trees shared their resources, and it was discovered in the coming months that they would even send carbon back and forth to each other depending on whichever tree was lacking it more during that season. This led to years of new discoveries about the interconnectedness of the forest. Trees are not nearly as isolated and competitive as we once thought. In fact, we now know through isotope tracing that larger, established "mother trees" can send carbon and other nutrients through an underground pathway of fungi to new seedlings far away that are cut off from the sun and need additional help to survive. Astoundingly, there was even information from older trees passed to younger trees that allowed them to be more resilient in the face of future stresses. Simard learned that trees are not primarily competitors… in fact, they are communicators and collaborators. Amazing. See where we're going here? Trees talk. Trees can even care for each other. It's kind of embarrassing that we often don't, huh? I mean, we have brains and opposable thumbs, after all. We are all too quick to embrace a competition-over-cooperation mentality.
We look at what other people have, what they think, what they believe, and how they look — and our first instinct is often comparison and critique. We are convinced we are in competition. Maybe we need to keep learning from God's good world around us. Jesus offers us a different way: one that reflects our truest identity as beings created in the image of God. We are called to care for one another, to see ourselves as connected, and to work for each other's good. We are not isolated individuals, even though we may try hard to convince ourselves of it. Jesus constantly gave a vision of people that, rather than being in competition with each other, were moving toward radical cooperation instead. And today, as every day, we are presented with a choice. We can choose to see ourselves in light of Christ's mercy, or we can try to go it alone. If we choose the former, then we are given the ability to see others through that lens of grace as well. We look for the best in each other. We call out the good. We seek understanding in misunderstanding. We seek forgiveness when we wrong another. And we look for opportunities to serve. We are, after all, cooperative beings, who can only survive if we choose cooperation. Competition will kill us all — body, spirit, and soul. But Jesus brings life. What resources do I have that could benefit and encourage those around me? Am I more prone to look critically at other people, or look cooperatively at them? Do I find joy in knowing that I belong to God's transnational and transhistorical family? Jesus, don't let me believe the lie of isolation. Soooo inspirational, I love the sharing of resources. Will look in a different way at our coffee trees today. May give me a better idea of what they are saying to me as I work with them.
04:28 PM, June 03, 2019 / LAST MODIFIED: 04:33 PM, June 03, 2019 Trump calls London mayor "stone cold loser" as he lands in Britain US President Donald Trump and First Lady Melania Trump arrive for their state visit to Britain, at Stansted Airport near London, Britain, June 3, 2019. Photo: Reuters Reuters, England Donald Trump lashed out at London Mayor Sadiq Khan on Monday, calling him a "stone-cold loser" after the mayor criticised the British government for inviting the US president for a state visit. On Monday, Trump arrived in Britain with his wife Melania for a three-day visit, and he had already blasted Khan before his plane touched down. "@SadiqKhan, who by all accounts has done a terrible job as Mayor of London, has been foolishly 'nasty' to the visiting President of the United States, by far the most important ally of the United Kingdom," Trump said on Twitter shortly before Air Force One landed at Stansted Airport near London. "He is a stone cold loser who should focus on crime in London, not me." On Sunday, Labour's Khan said it was important to have good relations with the United States but that Britain should not be "rolling out the red carpet" for Trump. He has also compared Trump to 20th century fascists. "This is much more serious than childish insults which should be beneath the President of the United States," a spokesman for the mayor said. "Sadiq is representing the progressive values of London and our country warning that Donald Trump is the most egregious example of a growing far-right threat around the globe." Trump will be treated to a display of British royal pageantry during the June 3-5 visit: lunch and a formal dinner with Queen Elizabeth, tea with heir Prince Charles, and a tour of Westminster Abbey, coronation church of English monarchs for 1,000 years. 
He will also commemorate the 75th anniversary of the World War Two D-Day landings, and foreign minister and Conservative leadership candidate Jeremy Hunt said the trip should be above party politics. Hunt, who greeted Trump at Stansted Airport, said that Trump had mentioned the mayor to him on arrival. "He wasn't exactly saying that he's going to be inviting Sadiq Khan for royal treatment at the White House any time soon," Hunt told the BBC, declining to give further details of the conversation.
# Using Babai's naive rounding algorithm to decode, under the conditions of the SIS problem — is it secure?

Given the SIS problem: given an integer $q$, a uniformly random matrix $A \in \mathbb{Z}_q^{n \times m}$, a real $\beta$, and a syndrome $u \in \mathbb{Z}_q^n$, find a nonzero integer vector $e \in \mathbb{Z}^m$ such that $Ae = u \bmod q$ and $\|e\|_2 \leq \beta$.

Sample a uniform $A \in \mathbb{Z}_q^{n \times m}$ together with a relatively short, full-rank "trapdoor" set of vectors $S \subset \Lambda^{\perp}(A)$, as in Ajtai's 1999 paper (Generating Hard Instances of the Short Basis Problem). Choose $t \in \mathbb{Z}^m$ via linear algebra such that $At = u \bmod q$, and use Babai's naive rounding algorithm with basis $S$ to decode $-t \in \mathbb{Z}^m$ to a point $v \in \Lambda^{\perp}(A)$; then $e = t + v$ solves the SIS instance. Therefore it should be difficult to obtain $e$, right?

But on the other hand, doesn't the Nguyen–Regev attack on the GGH scheme (Learning a Parallelepiped: Cryptanalysis of GGH and NTRU Signatures) work in this case? Is that a contradiction of SIS hardness? I must be wrong somewhere!

• The Nguyen–Regev attack on GGH and NTRU works because those schemes were "leaking" the trapdoor $S$ while signing. So after observing a certain number of signatures, their algorithm could recover $S$. Their attack had nothing to do with solving SIS, but rather with how to use the leaked information to recover a trapdoor (which can then be used to solve SIS). In the regular SIS problem, $S$ is unknown to the adversary and there is of course no "leakage" coming from anywhere. – Vadim L., Mar 25 '17 at 22:36
• @VadimL. As the user indicated it's considered an answer? Could you write it up as one? – Maarten Bodewes, Mar 25 '17 at 23:45

Source: https://crypto.stackexchange.com/questions/45032/using-the-babais-naive-rounding-algorithm-to-decode-in-the-conditions-of-the-s
Published: Feb 07, 2018 at 4:44 p.m. Updated: Feb 07, 2018 at 5:09 p.m. ANNAPOLIS VALLEY, NS - A medical doctor heavily immersed in local physician recruitment efforts says the demand for care cannot be met by doctors alone. "We need more than family physicians - and I say that simply because I don't think it would be possible to recruit enough family physicians to cover the need that is there and will continue to be there," said Dr. Crystal Todd, Chief of Family Medicine for the Nova Scotia Health Authority's western zone. As of Feb. 4, Todd said nine of the 28 doctor vacancies in the zone were in Kings or Annapolis counties. "There are seven community family physicians that have left their practices through retirement, illness or relocation and have yet to be replaced. We are also recruiting for two full-time hospitalists for Valley Regional Hospital to bring the numbers to full complement," she said. Todd sees the impact of the doctor shortage from the lens of a recruiter and a local family physician. "I can say as a family doctor in the area, yes, there is an increase in requests coming through my office. I'm now getting requests almost on a daily basis from neighbours and family members and friends of people that are losing, or have lost, their family doctor," said Todd. A Need a Family Practice Registry report prepared by the Nova Scotia Health Authority for Jan. 1 shows 11,680 of the 41,877 Nova Scotians awaiting placement are based in the western zone, which spans from Kings County to Yarmouth and includes the South Shore. Of that, 6,248 are in Annapolis and Kings counties. "For whatever reason… we have some areas where we have had groups of physicians that have left in close proximity of each other and that makes it feel like something enormous is suddenly happening," said Todd. 
Several vacancies have been filled in recent years, but Todd said it is not always possible for an incoming physician to take on all of the patients attached to a retiring doctor with a large practice. "We encourage practicing physicians to talk to us as soon as they are thinking about retirement and we ask them frequently about retirement plans. The sooner we know, the sooner we can start looking for a replacement," said Todd, noting that it is ideal when the new doctor can come in while the retiring physician is still working to allow for a gradual transition. Collaborative clinics like the new one in Kingston, family medicine residency programs within the Annapolis Valley, cooperation between the various healthcare providers and inviting communities are all key to physician recruitment and retention, said Todd. With many new doctors seeking collaborative working environments, Todd sees family physicians, nurse practitioners, family practice nurses, pharmacists, social workers, dietitians and other healthcare professionals teaming up more and more to meet the needs of patients. Kevin Chapman, director of finance and partnerships for Doctors Nova Scotia, is encouraged by the steps taken to promote the use of collaborative care models and implement residency programs in rural communities. "We're concerned when we see over half of the physicians in the western zone are over 50 years old," he said, noting that more physicians in the city can solely focus on office work while doctors in rural areas often take on added responsibilities like surgical assists, emergency department shifts, hospital work and nursing home visits. Healthcare providers and community members alike all have a role to play in ensuring medical residents and new physicians are welcomed by a network of support upon their arrival, said Chapman. 
Doctors Nova Scotia spokesperson Barbara Johnson stressed that retention is crucial at a time when more than half of the practicing family physicians and specialists in Nova Scotia are older than 50.
Next Generation Films leasing building in Galion

The city has reached an agreement with Next Generation Films for the lease of a 280,000-square-foot building.

Mark Caudill, Reporter. Published 5:34 p.m. ET May 13, 2016. Next Generation Films Main Office. (Photo: Jason J. Molyet/News Journal)

GALION - The City of Galion has reached an agreement with Next Generation Films for the lease of a 280,000-square-foot building, a deal that will bring in $35,000 a month. The lease is for eight months, at which time both sides will re-evaluate the situation. "They'll fill that building for the rest of the year," Galion communications director Matt Echelberry said. "What Next Generation is trying to do is have some warehousing operations." Next Generation Films was founded in 1994 in Lexington. The company is on the forefront of innovation and technology in the flexible packaging industry. "Next Generation is a pretty strong company," Galion City Councilman Jim Hedges said. "The sky's the limit to what they can do with that building." Neither Echelberry nor Hedges knew how many potential jobs would be involved. Representatives of Next Generation could not be reached for comment Friday. The Lexington company will lease space in the South Street Commerce Center, a former crane plant that was donated to the city at the end of 2015 when Hydraulic Technologies closed. Preparation costs for leasing and ongoing carrying costs will be more than covered by initial lease payments. Echelberry said the city already has received a check for $102,000 from Next Generation. Hedges, who knows a number of Next Gen employees, reached out to the Lexington company about a possible deal. Mayor Tom O'Leary spearheaded the recruitment from there.
City council authorized the lease and fixed certain terms. "Any business that comes into Galion is a big plus," Hedges said. "I think it works well for what they want to do. It's a huge building. It's a win-win." Hedges said this is a good time for Galion. This lease agreement has come at a time when the city's law department has been involved in establishing a tax increment financing district to facilitate the construction of a $4 million hotel structure, the creation of a citywide community reinvestment area and in working with developers to bring new fast-food and retail operations to the city. "Galion's growing pretty good," Hedges said. "I want to see it continue."
The Ghana Revenue Authority (GRA) is to collect GH₵27.56 billion in revenue for the national kitty for 2016. Domestic Direct Revenue is expected to bring in GH₵11.513 billion, Domestic Indirect Revenue bringing GH₵ 5.916 billion, while GH₵10.159 billion is expected in Customs Revenue. Mr George Blankson, the Commissioner-General of GRA, speaking at a Press Soiree, in Accra, said the Authority exceeded its tax mobilisation target of GH₵21.57 billion for 2015. The event was to interact with the media and also deepen the relationship, which exists between the two institutions. He said GRA considered the media as a true partner in the job of revenue mobilisation and the development of the country. "You have been instrumental in ensuring that the activities of the Authority are heard throughout the country," he explained. He said the 2015 revenue mobilisation performance showed a growth of GH₵5.014 billion, representing 29.3 per cent over the previous year's. The Commissioner-General said the main reasons accounting for the achievement of the target were the strategies adopted by the Management. He said to further improve the Administration of Excise Tax, the GRA would in the course of the year begin the implementation of the affixing of the Excise Tax Stamp on excisable products for both locally manufactured and imported goods. On the Common External Tariff, the Commissioner-General said, it would become operational in Ghana from this month and it was expected to bring the country in harmony with other ECOWAS countries in the imposition of tariff. He said as part of the reforms in the Tax Administration, the GRA had been engaged over the years to review the various tax laws to conform to the international best practices, making them less complex, easy to understand and user friendly. 
"As a result of this, the Value Added Tax Act and the Excise Tax Act had already been passed and last year the Customs Act 2015 (Act 891) and the Income Tax Act 2015 (Act 896) were also passed," he added.
"Alabama Chief Justice Screwed 66 Judges": Side With Roy Moore Or Side With The Law

Defying history, the law, and common sense, Alabama Chief Justice Roy Moore has issued an order prohibiting Alabama probate judges from issuing marriage licenses to same-sex couples. Those judges now face a choice between disobeying the law of the land and disobeying their boss. Moore issued his order not as chief justice, but in his administrative role as head of the Alabama court system. This is not Justice Moore's first Hail Mary in the lost cause against gay marriage—and he's not alone. All over the country, activists and law professors are wasting paper on fatuous proclamations that Obergefell v. Hodges is not really the law of the land, or is illegitimate because it's so horrible, or is somehow, some way not as binding as the Supreme Court said it was (PDF). Roy Moore is just the only one who's a state supreme court justice. As with Moore's past efforts to delay the inevitable, today's order was a mélange of the sensible and the risible. On the sensible side, Justice Moore does have some law on his side—in fact, three extremely narrow, technical threads on which he hangs his order. First, technically speaking, Obergefell only bound the five states that were a party to it. Since Alabama was not one of those states, technically its law is caught in limbo. Second, the Alabama Supreme Court upheld its same-sex marriage ban on March 3, 2015. And third, injunctions stemming from two federal cases challenging the ban are, as Moore opined last February (PDF), only binding on the executive branch, not the judicial branch—which includes probate judges. This appears to have been an oversight, the result of a pleading error by one of the parties. But rather than extend them in a common-sense way, Moore chose to restrict them in a nonsensical one. So, as three hyper-technical matters of law, Obergefell doesn't govern, the Alabama case stands, and the federal injunction doesn't apply.
But that's where it all becomes laughable—if not outright dishonest. It is completely obvious that the Obergefell decision does, indeed, govern all 50 states. The logic it applied to Michigan is equally applicable to Alabama. That's why LGBT activists broke out the champagne last June. It's also why judges and clerks around the country, with only a handful of exceptions like Kim Davis, have applied the law and granted same-sex marriage licenses for months now. Even the cases upon which Moore relies, in fact contradict him. For example, Moore cites an Eighth Circuit case decided on Aug. 11 that said "The [Obergefell] Court invalidated laws in Michigan, Kentucky, Ohio, and Tennessee—not Nebraska." But that case affirmed, not rejected, the right to same-sex marriage in Nebraska, and forbade Nebraska from blocking it while the court case wound down to its inevitable conclusion. This happens all the time. When the Supreme Court rules on an issue, it does not automatically end all the cases that deal with it. But it does make their outcomes obvious. So, while the legal matters are formally resolved, lower courts issue or stay injunctions in light of the Supreme Court ruling. For example, when the Supreme Court outlawed miscegenation bans in 1967, those bans technically remained on the books in 16 states, and many were not repealed until quite recently. But courts immediately issued injunctions forbidding the enforcement of those laws. To take another example, many of the sodomy laws at issue in Lawrence v. Texas are technically still on the books. But courts everywhere have prohibited their enforcement. Obergefell, obviously—laughably obviously—is similar. As the Supreme Court wrote, "the right to marry is a fundamental right inherent in the liberty of the person, and under the Due Process and Equal Protection Clauses of the Fourteenth Amendment, couples of the same-sex may not be deprived of that right and that liberty. 
The Court now holds that same-sex couples may exercise the fundamental right to marry. No longer may this liberty be denied to them… The State laws challenged by Petitioners in these cases are now held invalid." Yes, as Justice Moore italicizes in his order, only "the State laws challenged… in these cases" were invalidated last June. But the rest of that paragraph obviously applies to all same-sex couples everywhere. There is no distinction between those in Alabama and those in Michigan, and so the legal outcome of the Alabama cases is a foregone conclusion. To cherry-pick one clause from the entire paragraph is, at best, facetious. And it's not unlike the way Moore cites that Nebraska case: snipping out two words that support his position, and ignoring all of the context. Where the laughter stops, though, is in Alabama's 66 probate court offices. These judges and their clerks are, with only a handful of exceptions, loyal public servants who are trying to do their jobs. Many of them personally oppose gay marriage, but recognize that they've sworn oaths to enforce the Constitution, not the Bible. What the hell are they supposed to do now? Perhaps the worst part of Moore's odious order is when he cites the "confusion" among Alabama judges, as if that confusion simply arose on its own somehow. In fact, he sowed it himself, with his court- and common-sense-defying orders last February, and he has watered those seeds with his absurd hair-splitting today. Of course, Moore's order will be rendered null and void, hopefully expeditiously, by a federal court in Alabama formally closing the same-sex marriage cases still pending, or extending the injunctions in them to judicial as well as executive employees. The tide of history will not be turned. But in the meantime, not only has Moore demeaned every married couple in Alabama, straight and gay, he has also thrown his own employees under the bus.
If I were a probate judge in Birmingham, I'm not sure what I would do tomorrow morning. Roy Moore's symbolic snatch of demagoguery may play well at the polls someday. But in the meantime, he has disrespected Alabama's LGBT citizens, disrespected the rule of law, and disrespected all those doing their best to enforce it. By: Jay Michaelson, The Daily Beast, January 7, 2015

"An Analogy Offered With A Nudge And A Wink": Is Bernie Sanders A Nazi? On Our Epidemic Of Bad Analogies

The internet rewards hyperbole. Maybe that's why bad — incendiary, wildly inaccurate — analogies seem to be spreading throughout the media landscape, and especially on the right. Analogies are an indispensable tool of reasoning and rhetoric, highlighting similarities between two or more things, people, or events. But deploying analogies can be complicated, since the things, people, or events being compared are invariably dissimilar in a multitude of ways. The trick in deploying an analogy effectively is to highlight a similarity that reveals something important and underappreciated about the main thing, person, or event. The key to making a mess of an analogy is drawing a comparison in which the dissimilarities are so vast that they overshadow and even undermine the comparison altogether. Consider Kevin Williamson's much-discussed article from National Review calling Democratic presidential candidate Bernie Sanders a Nazi. Now, Williamson doesn't actually use the term Nazi. But he does say that Sanders "is, in fact, leading a national-socialist movement."
Just in case readers failed to make the link to the National Socialist movement led by Adolf Hitler, Williamson immediately concedes that it's "uncomfortable" to draw such a comparison about "a man who is the son of Jewish immigrants from Poland and whose family was murdered in the Holocaust." Still, Williamson insists, "there is no other way to describe his view and his politics." It turns out, though, that what Williamson really means is not that Sanders dreams of world military conquest and the extermination of Jews and other inferior races in the name of Aryan purity — you know, like an actual National Socialist. What Williamson really means is that Sanders is both a socialist and a nationalist. Which makes him "a national socialist in the mode of Hugo Chávez." Oh, that kind of national socialist. By the time we come to this big reveal toward the end of Williamson's article, it's impossible not to feel manipulated, even duped, by the "national socialist" analogy that forms the backbone of the story — because the author utterly failed, and never even really intended, to demonstrate a relevant similarity between Sanders' campaign and the fascist political movement that swept Germany in the 1930s and went by the name of National Socialism. The Williamson article is somewhat unusual in that its core analogy is offered with a nudge and a wink. Other conservatives draw their inflammatory comparisons with complete sincerity. Perhaps no recent event has inspired more spurious analogies than the Supreme Court's defense of a constitutional right to same-sex marriage in Obergefell v. Hodges. The decision has inspired some defenders of traditional marriage to call Obergefell the Dred Scott decision of our time (because, like Dred Scott, Obergefell was supposedly an act of lawless judicial usurpation that subverted the democratic will of the people). Others have likened Obergefell to Roe v. 
Wade, the 1973 decision that declared a constitutional right to abortion and ended up conjuring the national pro-life movement into existence. Still others have described a future in which the "Gestapo" will begin knocking on the doors of those who oppose same-sex marriage, or compared life for conservative Christians post-Obergefell to life under "the lie" of communist totalitarianism. Let's take these one at a time: Unlike Dred Scott, Obergefell and same-sex marriage enslave no one. Moreover, whereas upholding the rights of slave owners led to immediate and total loss of liberty for large numbers of human beings, opponents of same-sex marriage have had a difficult time demonstrating to courts that granting the right to marry to the nation's tiny population of homosexuals, in itself, does any measurable harm at all to those who define a marriage in traditional terms. (As for the harms to the exercise of religious freedom that may well follow from Obergefell, they are not a direct consequence of same-sex marriage itself but are rather a product of an anticipated expansion of the nation's anti-discrimination laws to cover gay marriage. This complication is obviously something obscured by the Dred Scott analogy, as is the likely prospect of legislating carve-outs from anti-discrimination laws for religious organizations.) Unlike with the consequences of Roe, no one can plausibly claim that a person is killed as a result of exercising the right proclaimed by Obergefell. That would seem to render the comparison somewhat lacking in cogency. (It also points to why the constitutional triumph of same-sex marriage is exceedingly unlikely to spark powerful, enduring grassroots opposition like the pro-life movement.) The Gestapo? You've got to be kidding. Let me know when the secret police begins pounding on your door, and I will pledge my life, fortune, and sacred honor to prevent you from being sent to a concentration camp for your traditionalist Christian beliefs. 
But until that time, please get a grip. Outbursts like that only make you look paranoid, self-pitying, and bizarrely out of touch with both present American reality and the bloody history of real political oppression. As for the analogy to communism, the same admonition applies. Even in the realistically worst-case scenario predicted by opponents of same-sex marriage — the forced compliance of religious schools and other church-affiliated institutions with anti-discrimination laws protecting gay marriage; the loss of tax-exempt status for churches — the United States would resemble contemporary France far more than the Soviet Union. The advent of French-style ideological secularism (laïcité) in the U.S. would mark a significant (and in my view unwelcome) change, including a significant constriction of religious freedom from historic American norms. But that's a far cry from totalitarianism. (Last time I checked, France was a liberal democracy, albeit one with a somewhat different understanding of the proper relation between church and state.) I could go on, pointing to other false comparisons deployed by the right. (Keeping up with neoconservative invocations of Munich, 1938 could be a full-time job all on its own.) But it would be a mistake to think that liberals never make unconvincing analogies. As far as many conservative Christians are concerned, the entire effort to portray opposition to same-sex marriage as equivalent to opposing interracial marriage is profoundly misleading. And they have a point. (Allowing people of the same sex to marry is a much more radical change to the institution than opening marriage to men and women of different races — and the sexual morality wrapped up with male-female marriage is far more deeply intertwined with the theological traditions of Western Christianity than racialized theories of matrimony ever were.) 
The point is that politicians and commentators on both sides of the aisle do themselves no favors by drawing false analogies. It's a form of hype — sloganeering used in place of reason. Sometimes, as with the purported parallel between interracial and same-sex marriage, a weak analogy succeeds as propaganda. But more often, the analogy persuades no one who wasn't already convinced. In such cases, argument and evidence will always have a greater likelihood of prevailing. Accept no substitutes. By: Damon Linker, The Week, July 23, 2015

"Blowing With The Winds": Conservatives Love Scott Walker's Anti-Gay Transition

Scott Walker has his groove back with social conservatives and he has the Supreme Court to thank. After the court ruled that the Constitution guarantees same-sex couples the right to marry, Walker released a statement calling for a constitutional amendment to let states define marriage as between one man and one woman. Social conservatives loved it, and it came at a moment when he needed all the love he could get. Back in May, the Wisconsin governor traveled to Washington to meet with a bevy of leaders from the party's more conservative wing. And in that meeting, there were lots of Walker skeptics. Penny Nance—the president of the influential conservative group Concerned Women for America—emailed to The Daily Beast after that meeting to say she still wasn't convinced Walker was a strong enough opponent of same-sex marriage. "I think people are still trying to discern" his position, she wrote. His list of confusing comments about the issue over the years made it a little tricky for some on the right to ascertain his position.
In 2014, for instance, after a district court judge declared that the Badger State's ban on same-sex marriage wasn't constitutional, he gave an oddly obtuse answer on the topic at a press conference. "It doesn't really matter what I think," Walker told reporters, per the Milwaukee Journal Sentinel. "It's in the Constitution." Then he refused to clarify his position on the marriage question. "No," he said. "I'm just not stating one at all." For gay marriage foes, that little exchange didn't exactly make him a profile in courage. And it wasn't the only time he telegraphed a position on the question that was a bit more nuanced than you might expect from, well, a Republican presidential candidate. In a 2013 interview with Bloomberg, the likely 2016 contender indicated that he could be comfortable with federal legislation protecting LGBT people from workplace discrimination. Walker noted that Wisconsin didn't let same-sex couples marry, but still afforded them those employment protections. "There's a healthy balance there," he said. Opponents of same-sex marriage are not interested in finding "a healthy balance," and they weren't thrilled with Walker's comments. But all this changed on Friday after the Supreme Court ruled that same-sex couples have a constitutional right to wed. In response, Walker released a statement saying he favored amending the Constitution to let individual states decide whether or not to allow those unions. As The Daily Beast noted at the time, this distinguished him from other top-tier Republican contenders who refused to back changes to the Constitution. And people noticed. When the Beast asked Nance if Walker's full-throated support of a constitutional amendment gave her more confidence that he would side with her on the marriage question, she emailed, "Boy has it!" 
"In calling for a federal marriage amendment that would allow states to determine their own laws on marriage Walker has put to final rest any questions social conservatives had on his willingness to lead on the matter," she wrote. And though Nance—like most activists—doesn't have a 2016 favorite yet, she said taking a Walker-esque position on marriage is a must. "Just as Roe made the issue of life central to support for a presidential candidate, the Obergefell decision has hardened our resolve on marriage," she wrote. "The courts have made them issues that candidates for federal office can no longer duck." Brian Brown, the president of the National Organization for Marriage, is in the same boat. He said he was "distraught" with the comments Walker made last year about the overturn of Wisconsin's constitutional amendment. "I thought it was a huge mistake," Brown said. "But ever since then, he has been working very hard to be a leader on the marriage issue." He also said that, in his view, Walker has changed his position on marriage, and for the better. "If we ask people to sign pledges and stand for principles, then when they do it, we can't second-guess them," he said. "So I'm ecstatic he's doing this." And Bob Vander Plaats, the president of the Iowa-based conservative group The Family Leader, said he was also delighted with Walker's endorsement of an amendment. He said his group was "openly concerned" with some of Walker's previous comments on marriage, and that the governor's stance has assuaged those fears. Asked if he thought Walker had changed his position on how to handle marriage issues, Vander Plaats said, "Yea, without question." "I was thrilled to be able to see his response to this opinion," he said. Walker aides emailed to say that the governor's position on the issue hasn't actually changed, noting that in 1997 as a state legislator, he voted to ban same-sex marriage in the Badger State. 
But while Walker's single-minded opposition to same-sex marriage has won him favor with anti-same-sex-marriage activists, it's already alienated some big Republican donors. The Washington Post reported last week that Walker lost the support of one hedge-fund billionaire after having a long argument with him about the issue. And an insider close with the New York Republican donor community expressed disappointment with Walker's change of tone on the issue and support for a constitutional amendment, and suggested it could make it harder for him to secure New York Republican donors. Mary Cheney, an openly gay political consultant who is also Dick Cheney's daughter, expressed bafflement at Walker's move. "From a political perspective, I don't understand why you would do that," she said. By: Betsy Woodruff, The Daily Beast, June 30, 2015

"You're Not Worthy Of Respect": Clarence Thomas's Disgraceful Definition Of Human Dignity

During a break on my reporting trip to Ferguson, Missouri this spring, I visited the museum inside the Old Courthouse, a magnificent, green-domed federal-style building that sits in the shadow of the St. Louis Arch. It houses artifacts and displays relating to the Dred Scott case, tried there in 1847; ten years later, in 1857, the United States Supreme Court would hand Scott—an enslaved man suing for freedom for himself and his family—his final judicial defeat.
In arguably the worst decision ever handed down by any American court, in words that are displayed today inside that museum in large, bold, white letters, Chief Justice Roger Taney wrote that African Americans were "beings of an inferior order," so much so that they had "no rights which the white man was bound to respect." Taney's statement is anathema to the very idea of equality. But he asserted that the Founding Fathers, as indicated in the Constitution itself, would have thought the same of people who looked like Scott, or me. In historical terms, Taney wasn't far off. The Constitution needed correcting, and it wasn't until the Fourteenth Amendment, ratified in 1868, eleven years after the Scott decision, that this got cleared up. But I wondered again this morning, as marriage equality became the law of the land, what Constitution Clarence Thomas is reading, and in what America he lives. On Friday, Thomas—a black man who grew up in the Jim Crow South, a man who should know precisely the meaning of equal protection under the law—issued one of four individual written dissents in the case, Obergefell v. Hodges. It begins in the strict constitutionalist vein that Thomas is known for, but broadens to cover not only the Constitution but also the nation as a whole. For Thomas, the decision isn't so much about laws as it is about principle: The Court's decision today is at odds not only with the Constitution, but with the principles upon which our Nation was built. Since well before 1787, liberty has been understood as freedom from government action, not entitlement to government benefits. The Framers created our Constitution to preserve that understanding of liberty. Yet the majority invokes our Constitution in the name of a "liberty" that the Framers would not have recognized, to the detriment of the liberty they sought to protect. 
Along the way, it rejects the idea—captured in our Declaration of Independence—that human dignity is innate and suggests instead that it comes from the Government.

Let's consider this passage literally, and let's consider the kind of liberty that the "Framers" recognized. The Constitution was ratified in 1787, in a new nation in which the enslavement of kidnapped Africans and their descendants—to say nothing of the abuse, murder, and rape they suffered—was already a national institution. Their notion of liberty didn't include folks who looked like Dred Scott, me, or Thomas himself; Thomas's "liberty" wasn't open to gay or lesbian Americans in that day and age, either. In a paper written in time for the nation's bicentennial 39 years ago, Louis Crompton noted that homosexuality was punishable by death when this country began. Its abolition plodded through the states over the next few decades. (In 1792, Thomas Jefferson, Crompton notes, called for the castration of those found guilty of sodomy in a Virginia bill.) Penalties were reduced to imprisonment in most cases; South Carolina, perennially the last state to act in the name of its most vulnerable citizens, was slowest to change, repealing its death penalty only eight years after the Civil War. To use Thomas's words, I'd argue, strongly, that all of this constitutes the government stripping away the dignity of those suffering legal punishments for being who they are. Thomas, however, appears to define dignity more strictly, as the quality of being worthy of respect. That's strange to hear coming from a man who, while the head of the Equal Employment Opportunity Commission, sexually harassed Anita Hill and likened criticism of his reprehensible behavior to a "high-tech lynching."
But I'll allow that the idea of preserving dignity and therefore proving oneself as worthy of respect is an idea Thomas, a high-achieving student who nonetheless chose to study English literature in college to help him shed the burden of his Gullah dialect, is quite familiar with. What I can't stomach, however, is Thomas's tendency to ignore the systemic effects of prejudice, and in the process serve as an agent to foster them. By not recognizing what plagues so many, he allows hatred and ignorance to swell. Thomas clearly wants marginalized people to pull themselves up by the bootstraps, all while he's committed to taking those same bootstraps away. This is his legacy, a disgraceful sequel to the term of the man he succeeded, Thurgood Marshall. Granted, Thomas sometimes interprets symbols—such as burning crosses or Confederate flags—as offensive. But the actual, institutional bias those symbols promote escapes him. Thomas frequently infuses respectability politics into his rulings, which demonstrates his continued obliviousness to reality: It is not the responsibility of a vulnerable people to convince the powerful they are worth protecting. It is not the duty of the marginalized to prove they have dignity and therefore become worthy of being treated as equals; that task lies squarely across the shoulders of the rulers. And, in this regard, Thomas's blindness shows. This is a person who, during the demonization of black people in the Reagan era, thought we were the main problem. He returns to the notion of dignity later in the dissent in a passage that is even more shocking and incorrect. Citing the Declaration of Independence's "all men are created equal"—a phrase that in an increasingly gender-aware nation, should already raise alarms about a lack of inclusion—he writes: …human dignity cannot be taken away by the government. Slaves did not lose their dignity (any more than they lost their humanity) because the government allowed them to be enslaved. 
Those held in internment camps did not lose their dignity because the government confined them. And those denied governmental benefits certainly do not lose their dignity because the government denies them those benefits. The government cannot bestow dignity, and it cannot take it away. We live in a nation whose industries, cities, and towns grew out of fertile soil wet with the blood and sweat of slaves. The United States has long been full of unmarked geysers of prejudice, blasting their ignorance on continuously marginalized people—including the LGBTQI Americans who in many ways continue to live, despite this ruling, as second-class citizens. Marriage equality does not close the housing, employment, and healthcare disparities that exist between us cisgender straight folks and those who are not. It is only the beginning of another long march. We live in a nation where a young white man with a racist manifesto can study the Bible with a group of African Americans and then murder them, and in the aftermath the chattering class will engage in debates about whether a racist act has occurred. We live in a place where Matthew Shepard can be slain for being gay in 1998, and Wyoming, the state where he died, can remain one of five without a hate-crime law nearly two decades later. This is a place where, since its founding, the government has had a strong say over just how much dignity a person is allowed. The right of same-sex couples to marry was one that many straight men were not bound to respect, depending upon their state. There are still many of these men, but they cannot remove the dignity the government has today bestowed. Dignity may be innate, but that doesn't mean it can't be taken away from you. It can become a two-way street. You can consider yourself worthy of honor or respect, as Oxford defines it, all you wish. 
But if institutional discrimination deprives you of such basic human rights as health care, education, and the right to marry whomever you love, honor and respect are not afforded you. Sometimes, in the course of history, states and people need to be bound by law to respect you. Relying upon human nature or the Founders' supposed intentions is ridiculous when you consider yesteryear.

Thomas, having lost the argument over marriage equality, chose to offer a pernicious, unsympathetic dissent that gives short shrift to the forces of discrimination and subjugation legalized by government while further emboldening his self-mythology, this legendary story he keeps feeding us. Thomas would have you believe that because he himself could survive the indignities forced upon him by Jim Crow—a system of legal discrimination that eventually came to be made illegal, after a variety of Supreme Court decisions very much like today's ruling—somehow, others should be able to endure something similar without the benefit of the very legal recourse that he can deliver from his perch. Using himself as the basis for a legal argument is asinine. Doing so in the service of discrimination is inexcusable.

By: Jamil Smith, Senior Editor, The New Republic, June 26, 2015

June 27, 2015 Posted by raemd95 | Clarence Thomas, Marriage Equality, U. S. Constitution | 14th Amendment, African Americans, Liberty, Matthew Shepard, Obergefell v Hodges, Racial Inequality, Slavery, Thurgood Marshall
Salary: approx. 23,000 pounds sterling inc.
Qualifications/Experience: BSc and research experience. State registration.
Clinical: Leading subjects through gait analysis sessions. Data preparation.
Salary: approx. 24,000 pounds sterling inc.
Term: 1-year appointment with likely extension to two years.
Research: Please contact to discuss (adam.shortland@gstt.sthames.nhs.uk) for more detail and a chat.
Krepî (in ) is a village in the commune of Teple in Stanîcino-Luhanske raion, Luhansk region, Ukraine.

Demographics

According to the 2001 census, the majority of the population of Krepî were Russian speakers (%), with a minority of speakers of other languages.

Notes

Krepî, Teple, Stanîcino-Luhanske, Luhansk